Instructions for using Xwin-LM/XwinCoder-34B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Xwin-LM/XwinCoder-34B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Xwin-LM/XwinCoder-34B")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Xwin-LM/XwinCoder-34B")
model = AutoModelForCausalLM.from_pretrained("Xwin-LM/XwinCoder-34B")
```
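To go from the loaded model to an actual generation, the pieces above can be combined as follows. This is a minimal sketch under our own assumptions: the plain-string prompt, sampling settings, and device placement are illustrative, and XwinCoder's official prompt format is documented in the GitHub repository.

```python
# Minimal generation sketch (assumed plain prompt; XwinCoder's own chat
# format may differ -- see the GitHub repository for the official format).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Xwin-LM/XwinCoder-34B")
model = AutoModelForCausalLM.from_pretrained(
    "Xwin-LM/XwinCoder-34B",
    torch_dtype=torch.float16,  # 34B weights; half precision to reduce memory
    device_map="auto",          # spread layers across available GPUs
)

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```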
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Xwin-LM/XwinCoder-34B with vLLM:
Install from pip and serve the model:
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Xwin-LM/XwinCoder-34B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Xwin-LM/XwinCoder-34B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
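Since the server exposes an OpenAI-compatible API, it can also be queried with the official openai Python client instead of curl. A minimal sketch, assuming the default address from the `vllm serve` command above and a placeholder API key (the local server does not check it); the same pattern works for the SGLang server below by changing the port to 30000:

```python
# Query the running vLLM server through the OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # vLLM's default address from above
    api_key="EMPTY",                      # placeholder; no auth is configured
)

completion = client.completions.create(
    model="Xwin-LM/XwinCoder-34B",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```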
Use Docker

```bash
docker run --runtime nvidia --gpus all \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HUGGING_FACE_HUB_TOKEN=<secret>" \
  -p 8000:8000 \
  --ipc=host \
  vllm/vllm-openai:latest \
  --model "Xwin-LM/XwinCoder-34B"
```
- SGLang
How to use Xwin-LM/XwinCoder-34B with SGLang:
Install from pip and serve the model:
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Xwin-LM/XwinCoder-34B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Xwin-LM/XwinCoder-34B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Xwin-LM/XwinCoder-34B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Xwin-LM/XwinCoder-34B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
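The curl call above translates directly to Python. Here is a minimal sketch using the requests library with the same endpoint and payload (the timeout value is our own assumption):

```python
# Python equivalent of the curl call against the SGLang server above.
import requests

resp = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "Xwin-LM/XwinCoder-34B",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
    timeout=120,  # assumed; generation on a 34B model can be slow
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```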
- Docker Model Runner
How to use Xwin-LM/XwinCoder-34B with Docker Model Runner:
```bash
docker model run hf.co/Xwin-LM/XwinCoder-34B
```
XwinCoder
We are glad to introduce XwinCoder, our instruction-finetuned code generation models based on CodeLLaMA. We release model weights and evaluation code.
Repository: https://github.com/Xwin-LM/Xwin-LM/tree/main/Xwin-Coder
Models:
| Model | 🤗 HF link | HumanEval pass@1 | MBPP pass@1 | APPS-intro pass@5 |
|---|---|---|---|---|
| XwinCoder-7B | link | 63.8 | 57.4 | 31.5 |
| XwinCoder-13B | link | 68.8 | 60.1 | 35.4 |
| XwinCoder-34B | link | 74.2 | 64.8 | 43.0 |
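For context on the metrics: pass@k on HumanEval-style benchmarks is conventionally computed with the unbiased estimator from the HumanEval paper (Chen et al., 2021). A minimal sketch of that estimator, assuming that convention is what is used here:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: total samples generated per problem
    c: number of those samples that pass all unit tests
    k: evaluation budget (e.g. 1 or 5)
    """
    if n - c < k:
        # Every size-k subset contains at least one passing sample.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 200 samples, 30 correct -> probability a random single sample passes.
print(pass_at_k(n=200, c=30, k=1))  # 0.15
```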
Updates
🔥 We released XwinCoder-7B, XwinCoder-13B, and XwinCoder-34B. Our XwinCoder-34B reaches 74.2 pass@1 on HumanEval and achieves performance comparable to GPT-3.5-turbo across 6 benchmarks.
We support evaluating instruction-finetuned models on HumanEval, MBPP, APPS, DS-1000, and MT-Bench; see our GitHub repository.
Overview
- To fully demonstrate the model's coding capability in real-world usage scenarios, we conducted thorough evaluations on several existing mainstream coding leaderboards, rather than only on the currently most popular HumanEval.
- As shown in the radar chart, our 34B model achieves coding performance comparable to GPT-3.5-turbo.
- It is worth mentioning that, to ensure accurate visualization, the radar chart is not scaled (only translated), except that the MT-Bench score is multiplied by 10 to bring it into the same range as the other benchmarks.
- MultiPL-E-avg6 refers to the average over the 6 languages used in the CodeLLaMA paper. The GPT-4 and GPT-3.5-turbo results were obtained by us; more details will be released later.
Demo
We provide a chat demo in our GitHub repository; example conversations are shown there.