where to find downloaded models? #20

Open

zoldaten opened this issue Dec 17, 2024 · 5 comments

@zoldaten

Does inference put the models in a temp folder? I can't find them in user/cache.

@xenova
Collaborator

xenova commented Dec 21, 2024

Hi there 👋 Generally, the models are added to the user's browser cache using the Web Cache API. You can find this in dev tools as follows:
Application -> Cache storage -> transformers-cache
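
For a quick programmatic check, here's a small console sketch (assuming the default cache name, transformers-cache):

// List every file stored in the Transformers.js browser cache
const cache = await caches.open('transformers-cache');
console.log((await cache.keys()).map((request) => request.url));

// To force a re-download, the whole cache can be dropped:
// await caches.delete('transformers-cache');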

@Nithur-M

Hi @xenova, is there any way to store this in IndexedDB so that I don't need to download the models for each session? Or is there any other workaround? Thank you.

@hpssjellis

> Hi @xenova, is there any way to store this in IndexedDB so that I don't need to download the models for each session? Or is there any other workaround? Thank you.

Typically with Web-LLM, models should auto-load from the cache. I will do some testing.

@hannes-sistemica

Hello, I was also trying to find a way not only to cache the models, but also to make multiple models available at once and switch between them without re-downloading. You can list the cached models like this (in the browser console):

async function listCachedModels() {
    // Bail out early if Transformers.js hasn't created its cache yet
    // (caches.open would otherwise create an empty one as a side effect)
    const cacheNames = await caches.keys();
    if (!cacheNames.includes('transformers-cache')) {
        return { models: [], modelDetails: {} };
    }

    // Open the transformers cache
    const transformersCache = await caches.open('transformers-cache');

    // Get all cached requests
    const requests = await transformersCache.keys();

    // Group cached files by model name
    const modelFiles = {};
    for (const request of requests) {
        const path = request.url;
        const modelName = path.split('/').slice(-2)[0];  // second-to-last path segment

        if (!modelFiles[modelName]) {
            modelFiles[modelName] = [];
        }
        modelFiles[modelName].push(path);
    }

    return {
        models: Object.keys(modelFiles),
        modelDetails: modelFiles
    };
}

// Use it with async/await in console
const result = await listCachedModels();
console.log('Cached Models:', result.models);
console.log('Model Details:', result.modelDetails);

It would be great to store the models in IndexedDB. I am already thinking about building a model loader that moves models between IndexedDB and the cache, but that approach would always keep the active model in both the cache and IndexedDB in parallel.
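
If I'm reading the Transformers.js env settings correctly, you may not need a separate loader: env.useCustomCache / env.customCache let you plug in any object with Web-Cache-style match and put methods, so the library could read and write IndexedDB directly. A rough, untested sketch (the class name and DB/store names are mine):

import { env } from '@huggingface/transformers';

// Hypothetical IndexedDB-backed cache exposing the match/put subset of the
// Web Cache API that env.customCache expects.
class IndexedDBCache {
    constructor(dbName = 'transformers-models', storeName = 'files') {
        this.dbName = dbName;
        this.storeName = storeName;
    }

    // Open (and lazily create) the database and object store
    openDB() {
        return new Promise((resolve, reject) => {
            const req = indexedDB.open(this.dbName, 1);
            req.onupgradeneeded = () => req.result.createObjectStore(this.storeName);
            req.onsuccess = () => resolve(req.result);
            req.onerror = () => reject(req.error);
        });
    }

    // Return a Response for a cached URL, or undefined on a cache miss
    async match(request) {
        const url = typeof request === 'string' ? request : request.url;
        const db = await this.openDB();
        return new Promise((resolve, reject) => {
            const get = db.transaction(this.storeName, 'readonly')
                .objectStore(this.storeName).get(url);
            get.onsuccess = () => resolve(get.result ? new Response(get.result) : undefined);
            get.onerror = () => reject(get.error);
        });
    }

    // Store the response body as a Blob, keyed by URL
    async put(request, response) {
        const url = typeof request === 'string' ? request : request.url;
        const blob = await response.blob();
        const db = await this.openDB();
        return new Promise((resolve, reject) => {
            const tx = db.transaction(this.storeName, 'readwrite');
            tx.objectStore(this.storeName).put(blob, url);
            tx.oncomplete = () => resolve();
            tx.onerror = () => reject(tx.error);
        });
    }
}

env.useBrowserCache = false;  // skip the default transformers-cache
env.useCustomCache = true;
env.customCache = new IndexedDBCache();

Pipelines would then be created as usual; every model fetch goes through the custom cache first, so nothing has to live in the Cache API and IndexedDB in parallel.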

@hannes-sistemica

Actually, I found out that multiple models can be downloaded and cached at the same time. So maybe there's no need for IndexedDB?
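
A quick way to verify this (the model IDs are just examples from the Hub):

import { pipeline } from '@huggingface/transformers';

// Load two different models; each one's files land side-by-side under
// transformers-cache, so later sessions skip both downloads.
const classifier = await pipeline('sentiment-analysis', 'Xenova/distilbert-base-uncased-finetuned-sst-2-english');
const extractor = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');

console.log(await classifier('Cached models load fast!'));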
