# Model Card

Qwen0.6B fine-tuned on the MetaMathQA dataset with Unsloth, used to test ExecuTorch's LoRA capabilities.
## Training Data

Dataset: https://huggingface.co/datasets/meta-math/MetaMathQA
## Training Configuration

```python
OUTPUT_DIR = "./outputs"
BATCH_SIZE = 2                   # Smaller batch for longer sequences
GRADIENT_ACCUMULATION_STEPS = 8  # Effective batch = 16
LEARNING_RATE = 2e-4
NUM_EPOCHS = 1                   # MetaMathQA is large; 1 epoch is often enough
WARMUP_RATIO = 0.03
LOGGING_STEPS = 25
SAVE_STEPS = 500
MAX_SAMPLES = 50000              # Limit samples for faster training (set None for full dataset)
```
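The constants above imply a few derived training quantities; a quick sanity check (assuming the values as listed, and that each optimizer step sees `BATCH_SIZE * GRADIENT_ACCUMULATION_STEPS` samples):

```python
import math

BATCH_SIZE = 2
GRADIENT_ACCUMULATION_STEPS = 8
NUM_EPOCHS = 1
WARMUP_RATIO = 0.03
MAX_SAMPLES = 50000

# Effective batch size per optimizer step
effective_batch = BATCH_SIZE * GRADIENT_ACCUMULATION_STEPS   # 16

# Optimizer steps over the 50k-sample subset
steps_per_epoch = math.ceil(MAX_SAMPLES / effective_batch)   # 3125
total_steps = steps_per_epoch * NUM_EPOCHS

# Linear warmup length implied by WARMUP_RATIO
warmup_steps = int(WARMUP_RATIO * total_steps)               # 93

print(effective_batch, total_steps, warmup_steps)
```

So the run takes 3125 optimizer steps, with the learning rate warmed up over the first ~93.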
## Training Hyperparameters

Trained in bf16, the same precision as the original Qwen0.6B checkpoint.
## Framework versions

- PEFT 0.18.0