We are using the Hugging Face Pro plan, but all of a sudden we are getting the error below: "The model meta-llama/Llama-3.2-11B-Vision-Instruct is too large to be loaded automatically (21GB > 10GB)." Need help, please.
I need to know whether my Pro access would help, because it's not working with hf-inference anymore. Is this a provider issue or a model issue?
Do I need to change providers?
I'm trying this with a Pro subscription, so it's not a Pro subscription issue. As you say, it does work if you use a different inference provider.
However, only HF staff can say whether the current behavior is a bug or intended.
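As a workaround, you can pin a specific inference provider instead of relying on the default hf-inference backend, which refuses to auto-load models over 10 GB. A minimal stdlib sketch, assuming the OpenAI-compatible router endpoint and the `model:provider` suffix convention; the provider name `together` here is just an example, so check the model page for providers that actually serve this model:

```python
import json
import urllib.request

API_URL = "https://router.huggingface.co/v1/chat/completions"

def build_request(token: str, provider: str = "together") -> urllib.request.Request:
    """Build a chat-completion request pinned to one provider.

    Appending ":provider" to the model id tells the router which
    provider to use instead of the default hf-inference backend.
    """
    payload = {
        "model": f"meta-llama/Llama-3.2-11B-Vision-Instruct:{provider}",
        "messages": [{"role": "user", "content": "Describe this image."}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

# To send it (needs a valid HF token):
# with urllib.request.urlopen(build_request("hf_...")) as resp:
#     print(json.load(resp))
```

The same provider pinning is available through `huggingface_hub.InferenceClient` via its `provider` argument, if you prefer the official client over raw HTTP.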
https://github.com/huggingface/hub-docs/issues
website@huggingface.co