Instructions for using SteelStorage/Umbra-v3-MoE-4x11b with libraries, inference providers, notebooks, and local apps. The sections below cover each option.
- Libraries
- Transformers
How to use SteelStorage/Umbra-v3-MoE-4x11b with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="SteelStorage/Umbra-v3-MoE-4x11b")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("SteelStorage/Umbra-v3-MoE-4x11b")
model = AutoModelForCausalLM.from_pretrained("SteelStorage/Umbra-v3-MoE-4x11b")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use SteelStorage/Umbra-v3-MoE-4x11b with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "SteelStorage/Umbra-v3-MoE-4x11b"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "SteelStorage/Umbra-v3-MoE-4x11b",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
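If you would rather call the local vLLM server from Python than with curl, the official `openai` client works against the OpenAI-compatible endpoint. This is a minimal sketch assuming the default setup from the commands above (server on localhost:8000, no API key enforced), so the key below is only a placeholder.

```python
# Minimal sketch: query the local vLLM server via its OpenAI-compatible API.
# Assumes the server was started as shown above and listens on localhost:8000.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",  # placeholder; adjust only if your server enforces an API key
)

response = client.chat.completions.create(
    model="SteelStorage/Umbra-v3-MoE-4x11b",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```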
- SGLang
How to use SteelStorage/Umbra-v3-MoE-4x11b with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "SteelStorage/Umbra-v3-MoE-4x11b" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "SteelStorage/Umbra-v3-MoE-4x11b",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "SteelStorage/Umbra-v3-MoE-4x11b" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "SteelStorage/Umbra-v3-MoE-4x11b",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
- Docker Model Runner
How to use SteelStorage/Umbra-v3-MoE-4x11b with Docker Model Runner:
```shell
docker model run hf.co/SteelStorage/Umbra-v3-MoE-4x11b
```
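Docker Model Runner also exposes an OpenAI-compatible API once the model is running. The base URL and model identifier in the sketch below are assumptions (they depend on how host TCP access is configured in your Docker setup), so treat this as a starting point rather than a copy-paste recipe.

```python
# Hedged sketch: call the model served by Docker Model Runner through its
# OpenAI-compatible endpoint. The base_url (localhost:12434) and the model name
# are assumptions; check your Docker Model Runner configuration for the actual values.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:12434/engines/v1",  # assumed host endpoint
    api_key="not-needed",                          # placeholder for a local endpoint
)

response = client.chat.completions.create(
    model="hf.co/SteelStorage/Umbra-v3-MoE-4x11b",  # assumed to match the `docker model run` name
    messages=[{"role": "user", "content": "Who are you?"}],
)
print(response.choices[0].message.content)
```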
Umbra-v3-MoE-4x11b
Creator: SteelSkull
About Umbra-v3-MoE-4x11b: A Mixture of Experts model designed for general assistance, with a special knack for storytelling and RP/ERP.
It integrates models from notable sources for enhanced performance in diverse tasks.
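For a quick sanity check of the MoE layout before committing to a full download, the model's config can be inspected on its own. The attribute names below assume a Mixtral-style sparse-MoE config, which is what 4x11b merges of this kind typically use; if the config differs, the `getattr` calls simply fall back to "n/a".

```python
# Minimal sketch: inspect the MoE configuration without downloading the weights.
# num_local_experts / num_experts_per_tok are Mixtral-style fields (an assumption here).
from transformers import AutoConfig

config = AutoConfig.from_pretrained("SteelStorage/Umbra-v3-MoE-4x11b")
print("model type:       ", config.model_type)
print("experts:          ", getattr(config, "num_local_experts", "n/a"))
print("experts per token:", getattr(config, "num_experts_per_tok", "n/a"))
```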
Source Models:
Update-Log:
The [Umbra Series] keeps rolling out from the [Lumosia Series] garage, aiming to be your digital Alfred with a side of Shakespeare for those RP/ERP nights.
What's Fresh in v3?
Didn’t reinvent the wheel, just slapped on some fancier rims. Upgraded the models and tweaked the prompts a bit. Now, Umbra's not just a general use LLM; it's also focused on spinning stories and "Stories".
Negative Prompt Minimalism
Got the prompts to do a bit of a diet and gym routine—more beef on the positives, trimming down the negatives as usual with a dash of my midnight musings.
Still Guessing, Aren’t We?
Just so we're clear, "v3" is not the messiah of updates. It’s another experiment in the saga.
Dive into Umbra v3 and toss your two cents my way. Your feedback is the caffeine in my code marathon.
Exl2 available by:
EXL2-Rpcal = AzureBlack
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 73.09 |
| AI2 Reasoning Challenge (25-Shot) | 68.43 |
| HellaSwag (10-Shot) | 87.83 |
| MMLU (5-Shot) | 65.99 |
| TruthfulQA (0-shot) | 69.30 |
| Winogrande (5-shot) | 83.90 |
| GSM8k (5-shot) | 63.08 |
Evaluation results
- AI2 Reasoning Challenge (25-Shot), test set, normalized accuracy: 68.43 (Open LLM Leaderboard)
- HellaSwag (10-Shot), validation set, normalized accuracy: 87.83 (Open LLM Leaderboard)
- MMLU (5-Shot), test set, accuracy: 65.99 (Open LLM Leaderboard)
- TruthfulQA (0-shot), validation set, mc2: 69.30 (Open LLM Leaderboard)
- Winogrande (5-shot), validation set, accuracy: 83.90 (Open LLM Leaderboard)
- GSM8k (5-shot), test set, accuracy: 63.08 (Open LLM Leaderboard)