Dataset: HuggingFaceFW/fineweb-edu
How to use cmpatino/nanowhale-100m with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="cmpatino/nanowhale-100m", trust_remote_code=True)
messages = [
{"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("cmpatino/nanowhale-100m", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("cmpatino/nanowhale-100m", trust_remote_code=True)
messages = [
{"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))

How to use cmpatino/nanowhale-100m with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "cmpatino/nanowhale-100m"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "cmpatino/nanowhale-100m",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
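Because vLLM exposes an OpenAI-compatible API, you can also call the server from Python with the official openai client. A minimal sketch, assuming the server started above is running on localhost:8000 (the api_key value is a placeholder; vLLM ignores it unless you configure one):

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="cmpatino/nanowhale-100m",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)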
How to use cmpatino/nanowhale-100m with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "cmpatino/nanowhale-100m" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "cmpatino/nanowhale-100m",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'

# Or run the SGLang server with Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "cmpatino/nanowhale-100m" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl exactly as shown above.

How to use cmpatino/nanowhale-100m with Docker Model Runner:
docker model run hf.co/cmpatino/nanowhale-100m
A small ~110M parameter language model implementing the DeepSeek-V4 architecture, fine-tuned for chat/instruction following. Trained from scratch — no weights from DeepSeek-V4 were used.
This model implements key DeepSeek-V4 innovations at a miniature scale:
| Component | Details |
|---|---|
| Parameters | ~110M total (41M embeddings, 69M non-embedding) |
| Hidden size | 320 |
| Layers | 8 |
| Attention heads | 8 (1 KV head — MQA-style) |
| MLA | Multi-head Latent Attention with q_lora_rank=160 |
| MoE | 4 routed experts + 1 shared, top-2 routing (sketched below the table) |
| Hyper-Connections | hc_mult=4, Sinkhorn routing (replacing residual connections) |
| MTP | 1 next-token prediction layer |
| Vocab | 129,280 (DeepSeek-V4 tokenizer) |
| Context | 2,048 tokens |
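Of the components above, the MoE layer is the most self-contained, so here is a minimal sketch of top-2 routing over 4 routed experts plus one always-on shared expert. This is illustrative only, not the model's actual module: the hidden size (320) comes from the card, while the FFN width, activation, and class names are assumptions.

import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    def __init__(self, hidden=320, ffn=640, n_routed=4, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(hidden, n_routed, bias=False)
        def make_expert():
            return nn.Sequential(nn.Linear(hidden, ffn), nn.SiLU(), nn.Linear(ffn, hidden))
        self.experts = nn.ModuleList(make_expert() for _ in range(n_routed))
        self.shared = make_expert()  # the shared expert processes every token

    def forward(self, x):  # x: (batch, seq, hidden)
        scores = self.router(x).softmax(dim=-1)                 # routing probabilities
        weights, idx = scores.topk(self.top_k, dim=-1)          # top-2 experts per token
        weights = weights / weights.sum(dim=-1, keepdim=True)   # renormalize the pair
        out = self.shared(x)                                    # shared path always on
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = (idx[..., k] == e).unsqueeze(-1)         # tokens routed to expert e
                out = out + mask * weights[..., k:k+1] * expert(x)
        return out

print(TinyMoE()(torch.randn(2, 8, 320)).shape)  # torch.Size([2, 8, 320])

The dense loop evaluates every expert on every token, which is wasteful but keeps the sketch short; real implementations gather only the tokens routed to each expert.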
Evaluation results:
| Metric | Pretrained | SFT |
|---|---|---|
| Eval loss | — | 2.607 |
| Perplexity (held-out) | 13.62 | 12.90 |
| Token accuracy | 33.8% | 48.5% |
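For context, perplexity is the exponential of the mean cross-entropy loss. A quick check against the table, assuming the losses are natural-log cross-entropies (the held-out split presumably differs from the eval set, hence the gap between exp(2.607) and 12.90):

import math

# exp(cross-entropy) recovers a perplexity from the SFT eval loss above
print(math.exp(2.607))  # ~13.56, close to the reported held-out perplexities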
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# The custom architecture code requires trust_remote_code=True
model = AutoModelForCausalLM.from_pretrained(
    "cmpatino/nanowhale-100m", trust_remote_code=True, dtype=torch.float32
).cuda().eval()
tokenizer = AutoTokenizer.from_pretrained("cmpatino/nanowhale-100m")
messages = [{"role": "user", "content": "What are 3 benefits of exercise?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
input_ids = tokenizer.encode(prompt, return_tensors="pt").cuda()
# temperature/top_p only take effect with do_sample=True; otherwise decoding is greedy
output = model.generate(input_ids, max_new_tokens=200, do_sample=True,
                        temperature=0.7, top_p=0.9,
                        pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
Load the model with trust_remote_code=True and dtype=torch.float32, as in the snippet above. Trained on 1× NVIDIA H100 80GB.
License: Apache-2.0
Base model: cmpatino/nanowhale-100m-base