Farid Saud
fsaudm
AI & ML interests: none yet

Recent Activity
- New activity 1 day ago in deepseek-ai/DeepSeek-V4-Flash: "Is 158B or 284B params?"
- Liked a model 6 days ago: cyankiwi/MiniMax-M2.7-AWQ-4bit
- New activity 12 days ago in Jackrong/Gemopus-4-26B-A4B-it: "What about a gemopus-4-31B?"
Discussions

- "Is 158B or 284B params?" · 6 replies · #17, opened 2 days ago by celsowm
- "What about a gemopus-4-31B?" · #1, opened 12 days ago by fsaudm
- "unsloth for the non-GGUF crowd?" · 👍 1 · 4 replies · #1, opened about 2 months ago by deleted
- "4-bit quantization: MXFP4_MOE vs Q4_K_XL?" · 4 replies · #3, opened 2 months ago by fsaudm
- "GGUF quants?" · 3 replies · #17, opened 4 months ago by fsaudm
- "Consulta" (Inquiry) · 1 reply · #14, opened 5 months ago by velicomen58
- "File size mismatch vs https://huggingface.co/unsloth/Qwen3-VL-235B-A22B-Thinking-GGUF??" · 3 replies · #1, opened 5 months ago by fsaudm
- "Empty model card???" · #2, opened 8 months ago by fsaudm
- "Is this model open-source?" · 2 replies · #1, opened 9 months ago by melodyinray
- "How do I serve a model in the original folder as bf16 in vLLM?" · 4 replies · #60, opened 9 months ago by bakch92
- "Model issue with 64 GB RAM" · 5 replies · #4, opened about 1 year ago by llama-anon
- "Something is wrong with the 4-bit uploads, 57.9B params???" · 2 replies · #2, opened about 1 year ago by fsaudm
- "OOM on 2x H100" · 7 replies · #3, opened about 1 year ago by Maverick17
- "assert self.quant_method is not None" · 4 replies · #5, opened about 1 year ago by Seri0usLee
- "F*** China!" · 14 replies · #10, opened about 1 year ago by Opm84736929
- "Are the Q4 and Q5 models R1 or R1-Zero?" · 18 replies · #2, opened over 1 year ago by gng2info
- "Is this an MoE?" · 2 replies · #5, opened over 1 year ago by AlgorithmicKing
- "Encountering Unknown quantization type, got fp8 - supported types are: XXXXX" · 🔥 1 · 3 replies · #1, opened over 1 year ago by ivanmanu
- "vLLM help pls :(" · 4 replies · #6, opened over 1 year ago by fsaudm
- "vLLM on A100s" · 6 replies · #41, opened over 1 year ago by fsaudm