Active filters: quantllm
codewithdark/Llama-3.2-3B-4bit • 3B • Updated • 9
codewithdark/Llama-3.2-3B-GGUF-4bit • 3B • Updated • 2
codewithdark/Llama-3.2-3B-4bit-mlx • Text Generation • 3B • Updated • 53
QuantLLM/Llama-3.2-3B-4bit-mlx • Text Generation • 3B • Updated • 10
QuantLLM/Llama-3.2-3B-2bit-mlx • Text Generation • 3B • Updated • 18
QuantLLM/Llama-3.2-3B-8bit-mlx • Text Generation • 3B • Updated • 27
QuantLLM/Llama-3.2-3B-5bit-mlx • Text Generation • 3B • Updated • 9
QuantLLM/Llama-3.2-3B-5bit-gguf • 3B • Updated • 37
QuantLLM/Llama-3.2-3B-2bit-gguf • 3B • Updated • 9
QuantLLM/functiongemma-270m-it-8bit-gguf • 0.3B • Updated • 9 • 1
QuantLLM/functiongemma-270m-it-4bit-gguf • 0.3B • Updated • 9
QuantLLM/functiongemma-270m-it-4bit-mlx • Text Generation • 0.3B • Updated • 15