Active filters: code
bartowski/Qwen2.5-Coder-3B-Instruct-GGUF • Text Generation • 3B params • 22k downloads • 22 likes
Qwen/Qwen2.5-Coder-32B-Instruct-GGUF • Text Generation • 33B params • 108k downloads • 197 likes
unsloth/Phi-4-mini-instruct-GGUF • Text Generation • 4B params • 27.8k downloads • 98 likes
ricdomolm/mini-coder-1.7b • Text Generation • 2B params • 1.17M downloads • 4 likes
DavidAU/OpenAi-GPT-oss-20b-HERETIC-uncensored-NEO-Imatrix-gguf • Text Generation • 21B params • 16.1k downloads • 131 likes
sweepai/sweep-next-edit-1.5B • 1B params • 1.35k downloads • 318 likes
danielcherubini/Qwen3.5-DeltaCoder-9B • Text Generation • 7 downloads
saricles/MiniMax-M2.7-NVFP4-GB10-AC • Text Generation • 119B params • 2.34k downloads • 9 likes
bekoozkan/godot-gemma-4-e4b-it-GGUF • Any-to-Any • 8B params • 5.85k downloads • 3 likes
teknium/Replit-v1-CodeInstruct-3B • Text Generation • 23 downloads • 37 likes
mvasiliniuc/iva-codeint-swift-small • Text Generation • 17 downloads • 2 likes
TheBloke/CodeLlama-7B-GGUF • Text Generation • 7B params • 7.18k downloads • 133 likes
mlx-community/CodeLlama-7b-Instruct-hf-4bit-MLX • Text Generation • 78 downloads • 3 likes
stabilityai/stable-code-instruct-3b • Text Generation • 3B params • 2.14k downloads • 186 likes
microsoft/Phi-3-mini-128k-instruct • Text Generation • 243k downloads • 1.7k likes
microsoft/Phi-3-mini-4k-instruct-gguf • Text Generation • 4B params • 59k downloads • 582 likes
bartowski/Phi-3-mini-4k-instruct-GGUF • Text Generation • 4B params • 4.92k downloads • 12 likes
microsoft/Phi-3-medium-4k-instruct • Text Generation • 14B params • 43.7k downloads • 227 likes
bartowski/Phi-3-medium-128k-instruct-GGUF • Text Generation • 14B params • 3.7k downloads • 61 likes
microsoft/Phi-3.5-vision-instruct • Image-Text-to-Text • 1.73M downloads • 734 likes
Qwen/Qwen2.5-Coder-7B-Instruct-AWQ • Text Generation • 8B params • 402k downloads • 23 likes
unsloth/Qwen2.5-Coder-7B-Instruct • Text Generation • 8B params • 5.69k downloads • 9 likes
WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B • Text Generation • 8B params • 1.62k downloads • 67 likes
Qwen/Qwen2.5-Coder-0.5B-Instruct • Text Generation • 0.5B params • 139k downloads • 67 likes
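The counts in this listing use abbreviated suffixes (k = thousand, M = million), which don't sort correctly as strings. A minimal sketch of parsing such counts and ranking entries by downloads, using a hand-copied subset of the rows above (the helper names here are illustrative, not part of any library):

```python
# Parse abbreviated counts ("22k", "1.17M") into integers and rank
# a few of the listed models by downloads.

SUFFIXES = {"k": 1_000, "M": 1_000_000, "B": 1_000_000_000}

def parse_count(text: str) -> int:
    """Convert an abbreviated count like '22k' or '1.17M' to an integer."""
    if text[-1] in SUFFIXES:
        # round() avoids float truncation errors (e.g. 1.17 * 1e6 -> 1169999.99...)
        return round(float(text[:-1]) * SUFFIXES[text[-1]])
    return int(text)

# A few rows copied from the listing: (model id, downloads, likes)
models = [
    ("ricdomolm/mini-coder-1.7b", "1.17M", "4"),
    ("Qwen/Qwen2.5-Coder-32B-Instruct-GGUF", "108k", "197"),
    ("stabilityai/stable-code-instruct-3b", "2.14k", "186"),
]

ranked = sorted(models, key=lambda m: parse_count(m[1]), reverse=True)
for name, downloads, _likes in ranked:
    print(f"{parse_count(downloads):>9,}  {name}")
```

Note that download counts and like counts measure different things: mini-coder-1.7b leads this subset on downloads by an order of magnitude while having the fewest likes.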