Active filters: marlin
caiovicentino1/Qwen3.6-35B-A3B-HLWQ-CT-INT4
Text Generation • 35B • 1.59k downloads • 7 likes

caiovicentino1/Huihui-Qwopus3.5-27B-v3-abliterated-HLWQ-Q5
Text Generation • 26B • 2.48k downloads • 14 likes

caiovicentino1/Qwopus-MoE-35B-A3B-HLWQ-Q5
Text Generation • 35B • 2.01k downloads • 7 likes

caiovicentino1/Qwopus3.5-9B-v3-HLWQ-v7-GPTQ
Text Generation • 9B • 438 downloads • 5 likes

demon-zombie/MiniMax-M2.7-AWQ-4bit
Text Generation • 229B • 2.64k downloads • 3 likes

caiovicentino1/Qwen3.5-27B-HLWQ-Q5
Text Generation • 27B • 2.16k downloads • 11 likes

caiovicentino1/Qwopus3.5-27B-v3-HLWQ-v7-GPTQ
Text Generation • 27B • 732 downloads • 5 likes

caiovicentino1/Qwopus3.5-9B-v3-HLWQ-Q5
Text Generation • 9B • 2.78k downloads • 9 likes
RedHatAI/llama-2-7b-chat-marlin
Text Generation • 1B • 8 downloads

robertgshaw2/llama-2-7b-chat-marlin
Text Generation • 1B • 5 downloads • 1 like

robertgshaw2/llama-2-13b-chat-marlin
Text Generation • 2B • 5 downloads

RedHatAI/zephyr-7b-beta-marlin
Text Generation • 1B • 35 downloads

RedHatAI/TinyLlama-1.1B-Chat-v1.0-marlin
Text Generation • 0.3B • 1.08k downloads • 2 likes

RedHatAI/OpenHermes-2.5-Mistral-7B-marlin
Text Generation • 1B • 15 downloads • 2 likes

RedHatAI/Nous-Hermes-2-Yi-34B-marlin
Text Generation • 5B • 16 downloads • 5 likes

vilsonrodrigues/OpenHermes-2.5-Mistral-7B-Pruned50-GPTQ-Marlin
Text Generation • 1B • 3 downloads

softmax/Llama-2-70b-chat-hf-marlin
Text Generation • 10B • 4 downloads

softmax/falcon-180B-chat-marlin
Text Generation • 26B • 14 downloads

mgoin/Meta-Llama-3-8B-Instruct-Marlin
Text Generation • 6 downloads

tachyphylaxis/opus-v1.2-70b-marlin
Text Generation • 10B • 2 downloads
ControlNet/marlin_vit_small_ytf
Feature Extraction • 22.5M • 51 downloads

ControlNet/marlin_vit_base_ytf
Feature Extraction • 87.4M • 188 downloads

ControlNet/marlin_vit_large_ytf
Feature Extraction • 0.3B • 53 downloads
Teaspoon-AI/Voxtral-Mini-4B-INT4-Jetson

thehighnotes/vllm-jetson-orin
Text Generation

tacodevs/Cydonia-24B-v4.3-W4A16-GPTQ
Text Generation • 4B • 178 downloads

groxaxo/lukey03-Qwen3.5-9B-abliterated-gptq-pro-w4g128
Text Generation • 9B • 3.9k downloads
caiovicentino1/Qwen3.5-9B-HLWQ-Q5
Text Generation • 9B • 1.84k downloads • 3 likes

caiovicentino1/Qwen3.5-9B-Claude-Opus-HLWQ-Q5
Text Generation • 9B • 1.79k downloads • 3 likes

caiovicentino1/Qwen3.5-27B-Claude-Opus-HLWQ-Q5
Text Generation • 27B • 2k downloads