How to use vignesha7/DeepSeek-R1-Distill-Llama-8B-Medical-Expert with Transformers:

# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="vignesha7/DeepSeek-R1-Distill-Llama-8B-Medical-Expert")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("vignesha7/DeepSeek-R1-Distill-Llama-8B-Medical-Expert")
model = AutoModelForCausalLM.from_pretrained("vignesha7/DeepSeek-R1-Distill-Llama-8B-Medical-Expert")
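Below is a minimal generation sketch (not part of the original card), assuming the fine-tuned tokenizer keeps the base DeepSeek-R1 chat template; the question and max_new_tokens value are illustrative:

# Format a user message with the chat template and generate an answer
messages = [{"role": "user", "content": "What are the first-line treatments for type 2 diabetes?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated tokens
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))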
How to use vignesha7/DeepSeek-R1-Distill-Llama-8B-Medical-Expert with Unsloth Studio:

# Linux / macOS: install Unsloth Studio
curl -fsSL https://unsloth.ai/install.sh | sh
# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for vignesha7/DeepSeek-R1-Distill-Llama-8B-Medical-Expert to start chatting
# Windows (PowerShell): install Unsloth Studio
irm https://unsloth.ai/install.ps1 | iex
# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for vignesha7/DeepSeek-R1-Distill-Llama-8B-Medical-Expert to start chatting
# In the browser: no setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for vignesha7/DeepSeek-R1-Distill-Llama-8B-Medical-Expert to start chatting
How to use vignesha7/DeepSeek-R1-Distill-Llama-8B-Medical-Expert with Unsloth:

pip install unsloth
from unsloth import FastModel
model, tokenizer = FastModel.from_pretrained(
model_name="vignesha7/DeepSeek-R1-Distill-Llama-8B-Medical-Expert",
max_seq_length=2048,
)
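A hedged usage sketch (not from the card): the Unsloth-loaded model generates like a plain Transformers model; the prompt and token budget are illustrative:

# Generate with the model and tokenizer returned by FastModel.from_pretrained
inputs = tokenizer("A 54-year-old man presents with acute chest pain. Outline a differential diagnosis.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))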
DeepSeek-R1-Distill-Llama-8B medical CoT model.
A DeepSeek-R1-Distill-Llama-8B model fine-tuned on the FreedomIntelligence/medical-o1-reasoning-SFT dataset.
Find the quantized models here: https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Llama-8B-Medical-Expert-GGUF
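A hedged sketch of running one of those GGUF quantizations with llama-cpp-python; the Q4_K_M filename pattern is an assumption, so check the repo's file list for the quant you want:

pip install llama-cpp-python

from llama_cpp import Llama

# Download an assumed Q4_K_M quant from the GGUF repo and load it
llm = Llama.from_pretrained(
    repo_id="mradermacher/DeepSeek-R1-Distill-Llama-8B-Medical-Expert-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant; adjust to the repo's files
    n_ctx=2048,
)
out = llm("What are common causes of microcytic anemia?", max_tokens=256)
print(out["choices"][0]["text"])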
Dataset used to fine-tune: FreedomIntelligence/medical-o1-reasoning-SFT
Base model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B