Okay, after some research, Apertus-8B looks like a good fit! I'll check whether it actually works and hopefully remember to tell you guys and gals over here. :)
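In case it helps anyone, this is roughly the smoke test I plan to run first. A minimal sketch, assuming the Hugging Face id `swiss-ai/Apertus-8B-2509` is right; I don't know yet whether Unsloth handles Apertus's custom architecture, hence the check:

```python
# First smoke test: can Unsloth load Apertus-8B in 4 bits at all?
# The model id below is an assumption (swiss-ai/Apertus-8B-2509); Apertus
# uses a custom architecture, so this needs a recent transformers version
# and may still fail outright.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="swiss-ai/Apertus-8B-2509",  # assumed Hugging Face id
    max_seq_length=2048,
    load_in_4bit=True,  # quantize on the fly with bitsandbytes
)
print(model.config.model_type)  # quick sanity check that it loaded
```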
Hey y'all,
the title is pretty much my question. I am constrained (due to policy requirements for a project, please don't question why 😮💨) to using European, open-weights LLMs for fine-tuning with Unsloth. The model should be no bigger than 9B parameters, ideally smaller, and reasonably recent.
I've found Ministral 8B, which looks awesome, but Unsloth hasn't released 4-bit quants for its base model, because as far as I can tell the base model isn't open-weights in the first place.
Is Mistral 7B v0.3 @ 4 bits still a good choice in 2025? For reference, I'd load it roughly as in the sketch below.
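A minimal sketch of the setup I mean: a pre-quantized 4-bit base model loaded through Unsloth, with LoRA adapters attached for fine-tuning. The repo name `unsloth/mistral-7b-v0.3-bnb-4bit` and the LoRA hyperparameters are placeholders, not a recommendation:

```python
# Minimal sketch: pre-quantized 4-bit base model + LoRA adapters via Unsloth.
# The repo name below is an assumption; swap in whichever 4-bit quant exists.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-v0.3-bnb-4bit",  # assumed repo name
    max_seq_length=2048,
    load_in_4bit=True,  # bitsandbytes 4-bit (QLoRA-style) loading
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # placeholder rank, not a tuned value
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0,
)
```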
Any help is greatly appreciated!
Best regards.