Support for RTX4060 with 8GB VRAM? #5
Any chance of support for smaller local models?
Comments
👋 Welcome! Thanks for opening your first issue. If you'd like to take a crack at fixing it, feel free to open a pull request — otherwise, we'll take a look as soon as we can!
You can try running the …
@nigelp I have integrated ollama, so you should be able to use this with it. You can check the instructions here: https://github.com/NanoNets/docext?tab=readme-ov-file#models-with-ollama-linux-and-macos. Let me know if this works. I have tested on a Quadro M4000 (8 GB), so it should work fine on your 4060.
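For context, docext reaches ollama models through litellm (the litellm help links show up in the traceback later in this thread). Below is a minimal sketch of what such a call looks like, not docext's actual code; the model name matches the command used further down, and the prompt is just a placeholder.

```python
# Minimal sketch, not docext's actual code: calling a locally served ollama
# vision model through litellm. The api_base must be a host the client can
# actually reach (localhost), which is what the rest of this thread is about.
from litellm import completion

response = completion(
    model="ollama/llama3.2-vision",      # same model used in the command below
    api_base="http://localhost:11434",   # ollama's default local endpoint
    messages=[{"role": "user", "content": "Extract the fields from this document."}],
)
print(response.choices[0].message.content)
```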
Thanks very much. Just tried it. Feedback: there was a bit of a problem with the requirements.txt file not being found, so I had to install the dependencies manually using my AI, and it still didn't work once installed. Gradio comes up fine, but when I try to scan a file I get the console error below:
css.ts:32 Unable to preload CSS for https://gradio.s3-us-west-2.amazonaws.com/assets/index-Bu6H1l3u.css
@nigelp Can you share the installed package version and the command you used to start the app?
The latest version, I assume? The command:
python -m docext.app.app --model_name ollama/llama3.2-vision --max_img_size 1024
Another error:
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
Traceback (most recent call last):
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
Fixed. According to my AI: "The problem was fixed by modifying the code so that if Ollama is detected running on localhost:11434, the app explicitly overrides the host to localhost, instead of keeping the default 0.0.0.0. Before, the app would only override the port but not the host, resulting in invalid API calls to 0.0.0.0:11434. Now, it correctly uses localhost:11434 for Ollama requests, allowing successful communication."
Yeah, nice catch. The error is in this line; it needs to change to localhost as the default: https://github.com/NanoNets/docext/blob/b3093f4a71dc2a895645b85d847dcc85ddf7f87a/docext/app/args.py#L17C10-L17C27 For now you can make that change locally.
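For reference, here is a minimal sketch of that kind of change, assuming the host comes from an argparse argument in docext/app/args.py; the flag name below is illustrative, not necessarily the one the project actually uses.

```python
# Sketch only; the argument name is illustrative, not necessarily the one
# defined in docext/app/args.py. The point is that 0.0.0.0 is a server bind
# address, not an address a client can call, so requests to a local ollama
# server should default to localhost instead.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--vlm_server_host",
    default="localhost",  # previously defaulted to "0.0.0.0"
    help="Host where the local ollama/vLLM server can be reached",
)
args, _ = parser.parse_known_args()

# Requests then go to a reachable endpoint:
ollama_base_url = f"http://{args.vlm_server_host}:11434"
print(ollama_base_url)  # http://localhost:11434
```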
@nigelp I have now fixed this in the default settings as well. Let me know if you are still facing any issues.
@nigelp Closing this. Feel free to reopen if you face any issues.