# mix_semantic_10

LoRA adapter uploaded automatically.
## Overview

- Type: LoRA adapter (PEFT)
- Task type: CAUSAL_LM
- Base model: /home/praveen/coreset/outputs/unified_llama
- LoRA r: 8
- LoRA alpha: 16
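To make the `r` and `alpha` values above concrete, here is a minimal NumPy sketch of the standard LoRA update rule, `W' = W + (alpha/r) * B @ A`. The matrix shapes are illustrative assumptions; only `r=8` and `alpha=16` come from this adapter's config.

```python
import numpy as np

# LoRA hyperparameters from this adapter's config.
r, alpha = 8, 16
d_out, d_in = 32, 64  # illustrative layer shape, not from the config

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))     # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01  # low-rank "A" factor (trained)
B = np.zeros((d_out, r))                   # low-rank "B" factor (zero-initialized)

scaling = alpha / r                        # = 2.0 for this adapter
W_effective = W + scaling * (B @ A)        # weight the merged model would use

# With B initialized to zero, the adapter starts out as a no-op on the base weight.
assert np.allclose(W_effective, W)
```

The adapter stores only `A` and `B` (plus config), which is why the base model must be loaded separately.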
## Usage

```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

peft_model_id = "coreset-selection/mix_semantic_10"

# Resolve the base model path recorded in the adapter config.
cfg = PeftConfig.from_pretrained(peft_model_id)
base = cfg.base_model_name_or_path

tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto")

# Attach the LoRA adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(model, peft_model_id)
```
## Files

- adapter_config.json
- adapter_model.bin or adapter_model.safetensors
## Notes

- This repo contains only the LoRA adapter weights.
- Load with the matching base model specified above.
- Uploaded from: /home/praveen/coreset/outputs/unified/gd_semantic_10_model