How to use liminerity/Phigments12 with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="liminerity/Phigments12")
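Once the pipeline is created, you can call it directly; a minimal sketch (the prompt and max_new_tokens value are illustrative):

result = pipe("Once upon a time,", max_new_tokens=64)
print(result[0]["generated_text"])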
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("liminerity/Phigments12")
model = AutoModelForCausalLM.from_pretrained("liminerity/Phigments12")
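To generate text with the directly loaded model, a minimal sketch (it assumes the tokenizer and model from the snippet above; the prompt and sampling settings are illustrative):

# Tokenize a prompt, sample a short completion, and decode it
inputs = tokenizer("Once upon a time,", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))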
How to use liminerity/Phigments12 with vLLM:
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "liminerity/Phigments12"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "liminerity/Phigments12",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
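Because the server exposes an OpenAI-compatible API, you can also call it from Python with the openai client instead of curl; a minimal sketch (the api_key value is a placeholder, since vLLM does not require one by default):

from openai import OpenAI

# Point the client at the local vLLM server
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.completions.create(
    model="liminerity/Phigments12",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)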
How to use liminerity/Phigments12 with SGLang:
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "liminerity/Phigments12" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "liminerity/Phigments12",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
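The same request can be made from Python; a minimal sketch using requests (the payload mirrors the curl call above):

import requests

# POST to the SGLang server's OpenAI-compatible completions endpoint
response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "liminerity/Phigments12",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
print(response.json()["choices"][0]["text"])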
# Alternatively, run the SGLang server in Docker:
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "liminerity/Phigments12" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "liminerity/Phigments12",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
How to use liminerity/Phigments12 with Docker Model Runner:
docker model run hf.co/liminerity/Phigments12
It failed to function normally after being imported into Ollama; it responded to everything with Python code. Looking forward to the .gguf.
Someone made one. It works best in ChatML format; see the sketch below.
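ChatML wraps each turn in <|im_start|> / <|im_end|> markers; a minimal sketch of that prompt layout in Python (the system message and user turn are illustrative):

# Build a ChatML-formatted prompt by hand
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Hello!<|im_end|>\n"
    "<|im_start|>assistant\n"
)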