# Model Card for Jese/Qwen-2.5-SFT-QLoRA-4bit-v251027
This model is a QLoRA adapter for [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct), fine-tuned on the OpenR1-Math-220k dataset. It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel
import torch

base_model_id = "Qwen/Qwen2.5-7B-Instruct"
adapter_id = "Jese/Qwen-2.5-SFT-QLoRA-4bit-v251027"

# Use bfloat16 on GPUs with compute capability >= 8 (Ampere or newer), float16 otherwise
compute_dtype = (
    torch.bfloat16
    if torch.cuda.is_available() and torch.cuda.get_device_capability(0)[0] >= 8
    else torch.float16
)

# 4-bit NF4 quantization with double quantization, matching the QLoRA training setup
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=compute_dtype,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
)

base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=quantization_config,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id, trust_remote_code=True)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# Attach the LoRA adapter to the quantized base model
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# Align the pad token id to avoid warnings and edge cases
model.config.pad_token_id = tokenizer.pad_token_id
model.generation_config.pad_token_id = tokenizer.pad_token_id
model.config.use_cache = True

# --- Inference ---
messages = [{"role": "user", "content": "Solve: 2 + 2 = ?"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt", add_special_tokens=False).to(model.device)

with torch.inference_mode():
    outputs = model.generate(
        **inputs,
        max_new_tokens=100,
        do_sample=False,  # set to True (with temperature/top_p) for more varied output
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.pad_token_id,
    )

# Decode only the newly generated tokens, not the prompt
gen_ids = outputs[:, inputs["input_ids"].shape[-1]:]
print(tokenizer.decode(gen_ids[0], skip_special_tokens=True))
```
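
If you prefer to serve the model without the bitsandbytes dependency, the adapter can optionally be merged into a full-precision copy of the base model. The snippet below is a minimal sketch, assuming you have enough memory to load the 7B base model in bfloat16; the output directory name is just an example, not part of the released artifacts.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "Qwen/Qwen2.5-7B-Instruct"
adapter_id = "Jese/Qwen-2.5-SFT-QLoRA-4bit-v251027"

# Load the base model in bf16 (not 4-bit) so the LoRA weights can be folded in
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base_model, adapter_id)

merged = model.merge_and_unload()          # fold the adapter weights into the base weights
merged.save_pretrained("qwen2.5-sft-merged")  # hypothetical output directory

tokenizer = AutoTokenizer.from_pretrained(base_model_id, trust_remote_code=True)
tokenizer.save_pretrained("qwen2.5-sft-merged")
```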
## Training procedure

This model was trained with SFT, using QLoRA (LoRA adapters on a 4-bit NF4-quantized base model).
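
The exact hyperparameters and data preprocessing for this run are not listed in this card. As a rough sketch of how a comparable SFT + QLoRA run can be set up with TRL's `SFTTrainer` (the dataset id `open-r1/OpenR1-Math-220k`, the LoRA rank/alpha, learning rate, batch size, and output directory below are illustrative assumptions, not the released recipe):

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import SFTConfig, SFTTrainer

base_model_id = "Qwen/Qwen2.5-7B-Instruct"

# Dataset id and split are assumptions; adjust to the actual data preparation used.
dataset = load_dataset("open-r1/OpenR1-Math-220k", split="train")

# 4-bit NF4 quantization, mirroring the inference setup above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# LoRA hyperparameters here are illustrative, not the released values
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules="all-linear",
    task_type="CAUSAL_LM",
)

training_args = SFTConfig(
    output_dir="qwen2.5-sft-qlora-4bit",  # hypothetical output directory
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=2e-4,
    num_train_epochs=1,
    logging_steps=10,
    bf16=True,
)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,
    peft_config=peft_config,  # SFTTrainer wraps the quantized model with LoRA adapters
)
trainer.train()
trainer.save_model()  # saves the adapter weights
```

TRL applies the tokenizer's chat template when the dataset provides a conversational `messages` column; if your copy of the dataset uses a different schema, map it to `messages` or a plain `text` column first.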
### Framework versions
- TRL: 0.18.0
- Transformers: 4.52.3
- PyTorch: 2.7.0
- Datasets: 4.3.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```