Improve model card for openbmb/MiniCPM-o-2_6-gguf: Add metadata, links, and usage examples
#8
by nielsr (HF Staff) - opened
This PR significantly enhances the model card for openbmb/MiniCPM-o-2_6-gguf by:
- Adding `license: apache-2.0` and `library_name: transformers` to the metadata. The `library_name` is justified by the presence of `transformers`-based code examples in the official GitHub repository, enabling automated "How to use" snippets on the Hub.
- Updating the model description with key highlights and a clear introduction of MiniCPM-o 2.6, linking it to its foundational paper MiniCPM-V 4.5: Cooking Efficient MLLMs via Architecture, Data, and Training Recipe.
- Including direct links to the main GitHub repository (https://github.com/OpenBMB/MiniCPM-V) and the project homepage (https://minicpm-omni-webdemo-us.modelbest.cn/).
- Adding a "Quickstart" section with a Python code snippet demonstrating how to use the base MiniCPM-o 2.6 model with the Hugging Face `transformers` library, as found in the original project's GitHub README (a hedged sketch of such a snippet follows this list).
- Retaining and clearly separating the existing, valuable instructions for converting to and using the GGUF format with `llama.cpp` (see the GGUF-loading sketch after the summary below).
- Adding a citation section for proper attribution.
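For reference, here is a minimal sketch of what such a Quickstart snippet could look like, assuming the custom `chat` method exposed by the model's remote code (as with other MiniCPM-V/MiniCPM-o checkpoints loaded with `trust_remote_code=True`); the exact arguments in the official README may differ.

```python
# Hedged sketch: single-image chat with the base MiniCPM-o 2.6 model via transformers.
# Assumes the custom `chat` method provided by the model's remote code
# (trust_remote_code=True); argument names may differ from the official README.
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

model_id = "openbmb/MiniCPM-o-2_6"
model = AutoModel.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("example.jpg").convert("RGB")  # hypothetical local image
msgs = [{"role": "user", "content": [image, "What is in this image?"]}]

answer = model.chat(msgs=msgs, tokenizer=tokenizer)
print(answer)
```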
These changes aim to make the model card more informative, discoverable, and user-friendly for a wider audience, catering to both `transformers` users and those using the GGUF format.
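Separately, for readers who only want the GGUF weights, the retained instructions in the card use the `llama.cpp` CLI. As a rough, text-only alternative, the sketch below swaps in the `llama-cpp-python` bindings, assumes a locally downloaded language-model GGUF file (hypothetical path), and does not cover the vision/audio projector files, so treat it as illustrative only.

```python
# Hedged sketch: text-only use of a downloaded MiniCPM-o 2.6 GGUF file via
# llama-cpp-python. This is NOT the llama.cpp CLI flow the model card documents,
# and the multimodal (vision/audio) parts are not covered here.
from llama_cpp import Llama

llm = Llama(
    model_path="./MiniCPM-o-2_6/model-Q4_K_M.gguf",  # hypothetical local path
    n_ctx=4096,
)
out = llm(
    "Question: What is MiniCPM-o 2.6? Answer:",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```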
tc-mb changed pull request status to closed