nielsr (HF Staff) committed (verified)
Commit d8fa728 · Parent: 9ec4c65

Improve model card for openbmb/MiniCPM-o-2_6-gguf: Add metadata, links, and usage examples


This PR significantly enhances the model card for `openbmb/MiniCPM-o-2_6-gguf` by:

- Adding `license: apache-2.0` and `library_name: transformers` to the metadata. The `library_name` is justified by the `transformers`-based code examples in the official GitHub repository, and it enables automated "How to use" snippets on the Hub (a sketch for reading back this metadata follows this list).
- Updating the model description with key highlights and a clear introduction of MiniCPM-o 2.6, linking it to its foundational paper [MiniCPM-V 4.5: Cooking Efficient MLLMs via Architecture, Data, and Training Recipe](https://huggingface.co/papers/2509.18154).
- Including direct links to the main GitHub repository (`https://github.com/OpenBMB/MiniCPM-V`) and the project homepage (`https://minicpm-omni-webdemo-us.modelbest.cn/`).
- Adding a "Quickstart" section with a Python code snippet demonstrating how to use the *base* MiniCPM-o 2.6 model with the Hugging Face `transformers` library, as found in the original project's GitHub README.
- Retaining and clearly separating the existing, valuable instructions for converting to and using the GGUF format with `llama.cpp`.
- Adding a citation section for proper attribution.

These changes aim to make the model card more informative, discoverable, and user-friendly for a wider audience, catering both to `transformers` users and to those using the GGUF format.
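
Once this PR lands, the new front matter can be read back programmatically. A minimal sketch using `huggingface_hub` (assumes the package is installed; the printed values reflect the fields added in this PR):

```python
from huggingface_hub import ModelCard

# Fetch the model card for this repo from the Hub
card = ModelCard.load("openbmb/MiniCPM-o-2_6-gguf")

# The YAML front matter added by this PR is exposed as structured data
print(card.data.license)       # expected: apache-2.0
print(card.data.library_name)  # expected: transformers
print(card.data.pipeline_tag)  # expected: any-to-any
```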

Files changed (1): README.md (+106 −9)

README.md CHANGED
@@ -1,16 +1,100 @@
  ---
- tags:
- - minicpm-o
- pipeline_tag: any-to-any
  base_model:
  - openbmb/MiniCPM-o-2_6
  ---

- ## MiniCPM-o 2.6

- This repository contains the [MiniCPM-o 2.6](https://huggingface.co/openbmb/MiniCPM-o-2_6) model weights in GGUF format, used for llama.cpp.

- Currently, this readme only supports minicpm-omni's vision capabilities, and we will update the full-mode support as soon as possible.

  ### Prepare models and code

@@ -23,7 +107,7 @@ cd llama.cpp
  git checkout minicpm-omni
  ```

- ### Usage of MiniCPM-o 2.6

  Convert PyTorch model to gguf files (You can also download the converted [gguf](https://huggingface.co/openbmb/MiniCPM-o-2_6-gguf) by us)

@@ -36,8 +120,8 @@ python ./convert_hf_to_gguf.py ../MiniCPM-o-2_6/model
  ./llama-quantize ../MiniCPM-o-2_6/model/ggml-model-f16.gguf ../MiniCPM-o-2_6/model/ggml-model-Q4_K_M.gguf Q4_K_M
  ```

- Build llama.cpp using `CMake`:
- https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md

  ```bash
  cmake -B build
@@ -54,4 +138,17 @@ Inference on Linux or Mac

  # or run in interactive mode
  ./llama-minicpmv-cli -m ../MiniCPM-o-2_6/model/ggml-model-Q4_K_M.gguf --mmproj ../MiniCPM-o-2_6/mmproj-model-f16.gguf -c 4096 --temp 0.7 --top-p 0.8 --top-k 100 --repeat-penalty 1.05 --image xx.jpg -i
  ```
 
1
  ---
 
 
 
2
  base_model:
3
  - openbmb/MiniCPM-o-2_6
4
+ pipeline_tag: any-to-any
5
+ tags:
6
+ - minicpm-o
7
+ license: apache-2.0
8
+ library_name: transformers
9
  ---
10
 
11
+ <div align="center">
12
+ <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/minicpm_v_and_minicpm_o_title.png" width="500em" ></img>
13
+
14
+ **A GPT-4o Level MLLM for Single Image, Multi Image and High-FPS Video Understanding on Your Phone**
15
+ </div>
16
+
17
+ This repository provides the GGUF model weights for **MiniCPM-o 2.6**, an 8B parameter end-to-end multimodal large language model (MLLM). MiniCPM-o 2.6 is part of the MiniCPM-V family, with its foundational advancements described in the paper [MiniCPM-V 4.5: Cooking Efficient MLLMs via Architecture, Data, and Training Recipe](https://huggingface.co/papers/2509.18154).
18
+
19
+ MiniCPM-o 2.6 achieves comparable performance to GPT-4o-202405 in vision, speech, and multimodal live streaming, making it one of the most versatile and performant models in the open-source community. It supports various modalities including images, videos, text, and audio inputs, producing high-quality text and speech outputs.
20
+
21
+ - **Project Homepage:** [https://minicpm-omni-webdemo-us.modelbest.cn/](https://minicpm-omni-webdemo-us.modelbest.cn/)
22
+ - **Main GitHub Repository:** [https://github.com/OpenBMB/MiniCPM-V](https://github.com/OpenBMB/MiniCPM-V)
23
+
24
+ <br>
25
+
26
+ ### ✨ Key Highlights
27
+ - **Leading Visual Capability:** Achieves state-of-the-art performance on various image, multi-image, and video understanding benchmarks.
28
+ - **State-of-the-art Speech Capability:** Supports bilingual real-time speech conversation with configurable voices, outperforming GPT-4o-realtime on audio understanding tasks.
29
+ - **Strong Multimodal Live Streaming Capability:** Can accept continuous video and audio streams independent of user queries, supporting real-time speech interaction.
30
+ - **Superior Efficiency:** Features state-of-the-art token density, improving inference speed, first-token latency, memory usage, and power consumption.
31
+
32
+ <br>
33
+
34
+ ## 🚀 Quickstart: Using the Base Model with Hugging Face Transformers
35
+
36
+ The underlying PyTorch model, `openbmb/MiniCPM-o-2_6`, is compatible with the Hugging Face `transformers` library using `trust_remote_code=True`. This allows for flexible multi-turn conversations with images, videos, and audio.
37
 
38
+ ```python
39
+ import torch
40
+ from PIL import Image
41
+ from transformers import AutoModel, AutoTokenizer
42
 
43
+ torch.manual_seed(100)
44
+
45
+ # Load the base MiniCPM-o 2.6 model (PyTorch version)
46
+ # Ensure you have 'openbmb/MiniCPM-o-2_6' in your Hugging Face cache or local path
47
+ model_name = 'openbmb/MiniCPM-o-2_6'
48
+ model = AutoModel.from_pretrained(
49
+ model_name,
50
+ trust_remote_code=True,
51
+ attn_implementation='sdpa', # or 'flash_attention_2', no 'eager'
52
+ torch_dtype=torch.bfloat16
53
+ )
54
+ model = model.eval().cuda()
55
+ tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
56
+
57
+ # Example: Multi-turn chat with an image
58
+ # For local execution, you might need to provide a local image path
59
+ # For example, you can use an image from the model's assets on GitHub:
60
+ # image = Image.open('https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/minicpmo2_6/show_demo.jpg').convert('RGB')
61
+ # Placeholder for an actual image if you run it locally:
62
+ image = Image.new('RGB', (224, 224), color = 'red') # Replace with your actual image loading logic
63
+
64
+ enable_thinking = False # If `enable_thinking=True`, the long-thinking mode is enabled.
65
+
66
+ # First round chat
67
+ question_round1 = "What is the landform in the picture?"
68
+ msgs_round1 = [{'role': 'user', 'content': [image, question_round1]}]
69
+
70
+ answer_round1 = model.chat(
71
+ msgs=msgs_round1,
72
+ tokenizer=tokenizer,
73
+ enable_thinking=enable_thinking
74
+ )
75
+ print(f"User: {question_round1}
76
+ Assistant: {answer_round1}
77
+ ")
78
+
79
+ # Second round chat, pass history context of multi-turn conversation
80
+ msgs_round2 = msgs_round1 + [{"role": "assistant", "content": [answer_round1]}]
81
+ question_round2 = "What should I pay attention to when traveling here?"
82
+ msgs_round2.append({"role": "user", "content": [question_round2]})
83
+
84
+ answer_round2 = model.chat(
85
+ msgs=msgs_round2,
86
+ tokenizer=tokenizer
87
+ )
88
+ print(f"User: {question_round2}
89
+ Assistant: {answer_round2}")
90
+
91
+ # For more detailed usage, including multi-image, video, and audio conversations,
92
+ # please refer to the main GitHub repository: https://github.com/OpenBMB/MiniCPM-V
93
+ ```
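
The in-code comment above notes that `PIL.Image.open` cannot read a URL directly. A minimal sketch for fetching the demo image over HTTP, assuming the `requests` package is available (the URL is the GitHub asset referenced in the comment):

```python
import io

import requests
from PIL import Image

# Download the demo image referenced in the Quickstart comment, then decode it with PIL
url = 'https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/minicpmo2_6/show_demo.jpg'
resp = requests.get(url, timeout=30)
resp.raise_for_status()  # fail loudly on HTTP errors
image = Image.open(io.BytesIO(resp.content)).convert('RGB')
```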
+
+ ## MiniCPM-o 2.6 GGUF Version for llama.cpp
+
+ This repository specifically contains the MiniCPM-o 2.6 model weights in GGUF format, for use with `llama.cpp`. This allows efficient CPU inference on local devices.
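
Rather than converting the PyTorch checkpoint locally (see the steps below), the pre-converted GGUF files can be pulled straight from this repo with `huggingface_hub`. A minimal sketch; the filenames are assumptions based on the conversion steps below, so verify them against the repo's file list:

```python
from huggingface_hub import hf_hub_download

repo_id = "openbmb/MiniCPM-o-2_6-gguf"

# Quantized language-model weights and the vision projector.
# NOTE: filenames are assumed from the conversion steps; check the Hub file list.
model_path = hf_hub_download(repo_id, "ggml-model-Q4_K_M.gguf")
mmproj_path = hf_hub_download(repo_id, "mmproj-model-f16.gguf")

print(model_path)
print(mmproj_path)
```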

  ### Prepare models and code

  git checkout minicpm-omni
  ```

+ ### Usage of MiniCPM-o 2.6 (GGUF)

  Convert PyTorch model to gguf files (You can also download the converted [gguf](https://huggingface.co/openbmb/MiniCPM-o-2_6-gguf) by us)

  ./llama-quantize ../MiniCPM-o-2_6/model/ggml-model-f16.gguf ../MiniCPM-o-2_6/model/ggml-model-Q4_K_M.gguf Q4_K_M
  ```

+ Build `llama.cpp` using `CMake`:
+ [https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md](https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md)

  ```bash
  cmake -B build

  # or run in interactive mode
  ./llama-minicpmv-cli -m ../MiniCPM-o-2_6/model/ggml-model-Q4_K_M.gguf --mmproj ../MiniCPM-o-2_6/mmproj-model-f16.gguf -c 4096 --temp 0.7 --top-p 0.8 --top-k 100 --repeat-penalty 1.05 --image xx.jpg -i
+ ```
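
For scripted, non-interactive runs, the same CLI call can be driven from Python. A minimal sketch using only the flags shown above (paths follow the conversion steps; the `-p` prompt flag is an assumption, so drop it and use `-i` if your build differs):

```python
import subprocess

# Invoke llama-minicpmv-cli with the sampling parameters from this README.
# Adjust the model, mmproj, and image paths to your local setup.
cmd = [
    "./llama-minicpmv-cli",
    "-m", "../MiniCPM-o-2_6/model/ggml-model-Q4_K_M.gguf",
    "--mmproj", "../MiniCPM-o-2_6/mmproj-model-f16.gguf",
    "-c", "4096",
    "--temp", "0.7",
    "--top-p", "0.8",
    "--top-k", "100",
    "--repeat-penalty", "1.05",
    "--image", "xx.jpg",
    "-p", "What is in the image?",  # assumed prompt flag for one-shot runs
]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
print(result.stdout)
```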
+
+ ## 📝 Citation
+
+ If you find our model, code, or paper helpful, please consider citing our paper:
+
+ ```bib
+ @article{yao2024minicpm,
+   title={MiniCPM-V: A GPT-4V Level MLLM on Your Phone},
+   author={Yao, Yuan and Yu, Tianyu and Zhang, Ao and Wang, Chongyi and Cui, Junbo and Zhu, Hongji and Cai, Tianchi and Li, Haoyu and Zhao, Weilin and He, Zhihui and others},
+   journal={arXiv preprint arXiv:2408.01800},
+   year={2024}
+ }
+ ```