nm-testing models (pipeline • parameter count • downloads • likes):

nm-testing/Meta-Llama-3-8B-Instruct-W4A16-compressed-tensors-test • Text Generation • 2B • 3 downloads
nm-testing/Meta-Llama-3-8B-FP8-compressed-tensors-test-bos • Text Generation • 8B • 3 downloads
nm-testing/Meta-Llama-3-8B-FP8-compressed-tensors-test • Text Generation • 8B • 2.89k downloads
nm-testing/Meta-Llama-3-8B-Instruct-W4-Group128-A16-Test • Text Generation • 2B • 1 download
nm-testing/Meta-Llama-3-8B-Instruct-W8-Channel-A8-Dynamic-Per-Token-Test • Text Generation • 8B • 18 downloads
nm-testing/tinyllama-oneshot-w8a16-per-channel • Text Generation • 0.4B • 607 downloads
nm-testing/Meta-Llama-3-8B-Instruct-W8A8-Dyn-Per-Token-2048-Samples • Text Generation • 8B • 2 downloads
nm-testing/Meta-Llama-3-8B-Instruct-W8A8-Dyn-Per-Token • Text Generation • 8B • 1 download
nm-testing/llama-3-instruct-w8a8-dyn-per-token-test • Text Generation • 8B • 2 downloads
nm-testing/tinyllama-oneshot-w8-channel-a8-tensor • Text Generation • 1B • 616 downloads
nm-testing/tinyllama-oneshot-w8a8-channel-dynamic-token-v2 • Text Generation • 1B • 3.46k downloads
nm-testing/tinyllama-oneshot-w8w8-test-static-shape-change • Text Generation • 1B • 16.4k downloads
nm-testing/tinyllama-oneshot-w4a16-channel-v2 • Text Generation • 0.3B • 8.7k downloads • 1 like
nm-testing/tinyllama-oneshot-w4a16-group128-v2 • Text Generation • 0.3B • 588 downloads
nm-testing/tinyllama-oneshot-w8a8-static-v2 • Text Generation • 1B • 6 downloads
nm-testing/tinyllama-oneshot-w8a8-dynamic-token-v2 • Text Generation • 1B • 2.88k downloads
nm-testing/tinyllama-marlin24-w4a16-group128 • Text Generation • 0.3B • 3 downloads
nm-testing/llama7b-one-shot-2_4-w4a16-marlin24-t-alt • Text Generation • 0.9B • 4 downloads
nm-testing/llama7b-one-shot-2_4-w4a16-marlin24-t • Text Generation • 1B • 1.84k downloads • 1 like
nm-testing/llama3-8b-w8_channel-a8_tensor-compressed • Text Generation • 8B • 2 downloads
nm-testing/tinyllama-one-shot-w4a16-group-compressed • Text Generation • 1B • 2 downloads
nm-testing/Meta-Llama-3-8B-Instruct-W8-Channel-A8-Dynamic-Asym-Per-Token-Test • 8B • 14 downloads • 1 like
nm-testing/Meta-Llama-3.1-8B-Instruct-FP8-hf • Text Generation • 8B • 2 downloads
nm-testing/deepseek-v2-instruct-autoawq • 236B • 1 download
nm-testing/deepseekv2-lite-awq • 16B • 1 download
nm-testing/Meta-Llama-3-8B-Instruct-fp8-hf_compat • 8B • 38 downloads
nm-testing/Meta-Llama-3-70B-Instruct-FBGEMM-nonuniform • Text Generation • 71B • 3.89k downloads • 1 like
nm-testing/Meta-Llama-3-8B-Instruct-FBGEMM-nonuniform • Text Generation • 8B • 4 downloads
nm-testing/llama-3-8b-instruct-fbgemm-test-model • Text Generation • 8B • 3 downloads