AI & ML interests: None yet

Organizations

Models
horheynm/TinyLlama-1.1B-Chat-v1.0-FP8_DYNAMIC-e2e
horheynm/Llama-3.2-1B-Instruct-FP8-input_activation_channel • 1B • Updated • 9
horheynm/Phi-3-mini-4k-instruct-kv_cache • 4B • Updated • 6
horheynm/TinyLlama_1.1B_Chat_v1.0_W8A8_Dynamic_Per_Token_uncompressed • 1B • Updated • 8
horheynm/TinyLlama_1.1B_Chat_v1.0_W8A8_Dynamic_Per_Token_compressed • 1B • Updated • 8
horheynm/TinyLlama_1.1B_Chat_v1.0_W8A16_G128_uncompressed • 1B • Updated • 9
horheynm/TinyLlama_1.1B_Chat_v1.0_W8A16_G128_compressed • 0.4B • Updated • 9
horheynm/TinyLlama_1.1B_Chat_v1.0_W4A16_G128_uncompressed • 1B • Updated • 6
horheynm/TinyLlama_1.1B_Chat_v1.0_W4A16_G128_compressed • 0.3B • Updated • 9
horheynm/TinyLlama_1.1B_Chat_v1.0_FP8_Dynamic_uncompressed • 1B • Updated • 8
horheynm/TinyLlama_1.1B_Chat_v1.0_FP8_Dynamic_compressed • 1B • Updated • 8
horheynm/llama2.c_stories15M_pruned_50.2of4_uncompressed • 15.2M • Updated • 8
horheynm/llama2.c_stories15M_pruned_50.2of4_compressed • 24.6M • Updated • 9
horheynm/TinyLlama-1.1B-Chat-v1.0-W4A16-G128 • 0.3B • Updated • 7
horheynm/TinyLlama_1.1B_Chat_v1.0_W4A16_G128 • Updated
horheynm/Llama-3.2-11B-Vision-Instruct-FP8-Dynamic • 11B • Updated • 5
horheynm/TinyLlama-1.1B-Chat-v1.0-QuantizedCache • 1B • Updated • 6
horheynm/TinyLlama-1.1B-Chat-v1.0-QuantizedCache-dipika2 • Updated
horheynm/TinyLlama-1.1B-Chat-v1.0-QuantizedCache-dipika • 1B • Updated • 7
horheynm/TinyLlama-1.1B-Chat-v1.0-GPTQ-w4a16-KV-QuantizedCache • 1B • Updated • 7
horheynm/TinyLlama-1.1B-Chat-v1.0-FP8 • 1B • Updated • 7
horheynm/Qwen2-1.5B-FP8-KV • 2B • Updated • 6
horheynm/Qwen2-1.5B-FP8-KV-QuantizedCache • 2B • Updated • 7
horheynm/Llama-2-70b-chat-hf-FP8-KV-QuantizedCache • 69B • Updated • 5
horheynm/Phi-3-mini-4k-instruct-FP8-KV-QuantizedCache • 4B • Updated • 5
horheynm/Meta-Llama-3-8B-Instruct-FP8-KV-QuantizedCache • 8B • Updated • 6
horheynm/TinyLlama-1.1B-Chat-v1.0-FP8-KV-QuantizedCache • 1B • Updated • 8
horheynm/actoder_20241712_154349 • Text Generation • 0.3B • Updated • 12