Where can I find your fine-tuned `Llama-8B1M-MoBA` and `Llama-8B-1M-Full` models mentioned in your paper? Thanks!
Greetings. Currently, we do not have plans to open-source either of these models.
Could you also share the size of your fine-tuning dataset, how many GPUs you used, and how long the fine-tuning process took?