Active filters: int4
ISTA-DASLab/gemma-3-27b-it-GPTQ-4b-128g • Image-Text-to-Text • 5B • Updated • 11k • 43
Advantech-EIOT/intel_llama-2-chat-7b • Text Generation • Updated • 10
RedHatAI/zephyr-7b-beta-marlin • Text Generation • 1B • Updated • 25
RedHatAI/TinyLlama-1.1B-Chat-v1.0-marlin • Text Generation • 0.3B • Updated • 1.7k • 2
RedHatAI/OpenHermes-2.5-Mistral-7B-marlin • Text Generation • 1B • Updated • 94 • 2
RedHatAI/Nous-Hermes-2-Yi-34B-marlin • Text Generation • 5B • Updated • 16 • 5
ecastera/ecastera-eva-westlake-7b-spanish-int4-gguf • 7B • Updated • 22 • 2
softmax/Llama-2-70b-chat-hf-marlin • Text Generation • 10B • Updated • 11
softmax/falcon-180B-chat-marlin • Text Generation • 26B • Updated • 15
study-hjt/Meta-Llama-3-8B-Instruct-GPTQ-Int4 • Text Generation • 8B • Updated • 7
study-hjt/Meta-Llama-3-70B-Instruct-GPTQ-Int4 • Text Generation • 71B • Updated • 9 • 6
study-hjt/Meta-Llama-3-70B-Instruct-AWQ • Text Generation • 71B • Updated • 11
study-hjt/Qwen1.5-110B-Chat-GPTQ-Int4 • Text Generation • 111B • Updated • 18 • 2
study-hjt/CodeQwen1.5-7B-Chat-GPTQ-Int4 • Text Generation • 7B • Updated • 8
study-hjt/Qwen1.5-110B-Chat-AWQ • Text Generation • 111B • Updated • 8
modelscope/Yi-1.5-34B-Chat-AWQ • Text Generation • 34B • Updated • 31 • 1
modelscope/Yi-1.5-6B-Chat-GPTQ • Text Generation • 6B • Updated • 9
modelscope/Yi-1.5-6B-Chat-AWQ • Text Generation • 6B • Updated • 17
modelscope/Yi-1.5-9B-Chat-GPTQ • Text Generation • 9B • Updated • 10 • 1
modelscope/Yi-1.5-9B-Chat-AWQ • Text Generation • 9B • Updated • 13
modelscope/Yi-1.5-34B-Chat-GPTQ • Text Generation • 34B • Updated • 11 • 1
jojo1899/Phi-3-mini-128k-instruct-ov-int4 • Text Generation • Updated • 28
jojo1899/Llama-2-13b-chat-hf-ov-int4 • Text Generation • Updated • 22
jojo1899/Mistral-7B-Instruct-v0.2-ov-int4 • Text Generation • Updated • 19
model-scope/glm-4-9b-chat-GPTQ-Int4 • Text Generation • 9B • Updated • 41 • 6
ModelCloud/Mistral-Nemo-Instruct-2407-gptq-4bit • Text Generation • 12B • Updated • 47 • 5
ModelCloud/Meta-Llama-3.1-8B-Instruct-gptq-4bit • Text Generation • 8B • Updated • 69 • 4
ModelCloud/Meta-Llama-3.1-8B-gptq-4bit • Text Generation • 8B • Updated • 40
ModelCloud/Meta-Llama-3.1-70B-Instruct-gptq-4bit • Text Generation • 71B • Updated • 46 • 4