BoyBarley v33 (Experimental)

An experimental jailbreak-resistant variant that scored 9/9 on persona-override resistance tests. For general use, prefer v32.

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained('BoyBarley/BoyBarley-v33')
tokenizer = AutoTokenizer.from_pretrained('BoyBarley/BoyBarley-v33')

inputs = tokenizer('Hello, who are you?', return_tensors='pt')
output_ids = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

Training

  • Params: ~494M, bfloat16
  • Epochs: 2, LR: 8e-6
  • Train loss: 0.0721, Eval loss: 0.0661
  • Dataset: BoyBarley/BoyBarley-v33-dataset
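The hyperparameters above could be expressed as a Hugging Face TrainingArguments sketch like the following. Only the epoch count, learning rate, and bf16 setting come from this card; the output directory and batch size are illustrative assumptions, not the actual training configuration.

from transformers import TrainingArguments

# Sketch only: output_dir and batch size are assumptions, not from the card.
training_args = TrainingArguments(
    output_dir='boybarley-v33-finetune',  # hypothetical path
    num_train_epochs=2,                   # from the card
    learning_rate=8e-6,                   # from the card
    bf16=True,                            # matches the bfloat16 tensor type
    per_device_train_batch_size=8,        # assumed; not stated in the card
)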

Strengths

  • Rejects all tested persona-override prompts impersonating: Commandly, BoyCasper, Alex, Claude, ChatGPT, GPT, Gemini

Trade-offs

  • Slight regression on agentic tasks compared with v32

License: Apache 2.0
