Where to find downloaded models? #20
Comments
Hi there 👋 Generally, the models are added to the user's browser cache using the Web Cache API. You can find them in dev tools under Application → Cache Storage.
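Not from the original comment (which included a screenshot), but a minimal console sketch of inspecting that Cache Storage; the cache name used for the models depends on the library and version, so check the output rather than assuming a specific name.

```js
// Run in the browser console: list every Cache API cache and the request
// URLs stored inside it. The cache name that holds the models varies by
// library version, so inspect the output instead of hard-coding it.
const cacheNames = await caches.keys();
console.log('Cache names:', cacheNames);

for (const name of cacheNames) {
  const cache = await caches.open(name);
  const entries = await cache.keys();
  console.log(name, entries.map((req) => req.url));
}
```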
Hi @xenova, is there any way to store this in IndexedDB so that I don't need to download the models for each session? Or is there some other workaround? Thank you.
Typically with Web-LLM the models should auto-load from the cache. I will do some testing.
Hello, I was also trying to find a way not only to cache the models, but also to make several of them available at once and switch between them without re-downloading each time. You can see the cached model in the browser console.

It would be great to store the models in IndexedDB. I am already thinking of building a model loader from/to IndexedDB/cache, but that would always keep the active model in both the cache and IndexedDB in parallel.
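A rough sketch of that "model loader" idea, assuming the models sit in the Web Cache API: copy a cached model file into IndexedDB so it can be restored later. The cache name (`transformers-cache`) and the database/store names below are illustrative assumptions, not something the library provides.

```js
// Sketch: copy a model file that the Cache API already holds into IndexedDB.
// 'transformers-cache', 'model-store' and 'models' are assumed names.
async function copyCachedFileToIndexedDB(modelUrl, cacheName = 'transformers-cache') {
  const cache = await caches.open(cacheName);
  const response = await cache.match(modelUrl);
  if (!response) throw new Error(`Not cached: ${modelUrl}`);
  const blob = await response.blob();

  // Open (or create) a simple key-value object store for model files.
  const db = await new Promise((resolve, reject) => {
    const req = indexedDB.open('model-store', 1);
    req.onupgradeneeded = () => req.result.createObjectStore('models');
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });

  // Write the file blob under its URL so it can be looked up later.
  await new Promise((resolve, reject) => {
    const tx = db.transaction('models', 'readwrite');
    tx.objectStore('models').put(blob, modelUrl);
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
  db.close();
}
```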
Actually, I found out that multiple models can be downloaded and cached at the same time, so maybe there is no need for IndexedDB?
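A quick console check of that observation: list the cached entries and group them by model. The cache name and the URL layout (model id in the first path segments) are assumptions; adjust to whatever `caches.keys()` reports in your setup.

```js
// Sanity check (browser console) that files from several models coexist in one cache.
const cache = await caches.open('transformers-cache');
const urls = (await cache.keys()).map((req) => req.url);

// Group entries by the first two path segments (typically the model id).
const byModel = {};
for (const url of urls) {
  const modelId = new URL(url).pathname.split('/').slice(1, 3).join('/');
  (byModel[modelId] ??= []).push(url);
}
console.table(Object.keys(byModel).map((m) => ({ model: m, files: byModel[m].length })));
```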
Does inference put the models in a temp folder? I can't find them in user/cache.