Qwen3.5-9B-f32-GGUF
Qwen3.5-9B, from Alibaba's Qwen team, is the flagship of the Qwen3.5 small series (0.8B-9B): a 9B-parameter dense multimodal causal language model with an integrated vision encoder. It uses a hybrid Gated DeltaNet architecture (8×[3×(Gated DeltaNet → FFN) → 1×(Gated Attention → FFN)], 32 layers in total, hidden dimension 4096, a 248K-token vocabulary covering 201 languages), along with multi-token prediction and a 262K native context window, extensible to 1M+ via YaRN. It outperforms Qwen3-30B (3x its size) on GPQA, IFEval, and long-context tasks, and leads GPT-5-Nano and Gemini-2.5-Flash-Lite on vision benchmarks (MMMU-Pro 70.1, +13; MathVision 78.9, +17; OmniDocBench +32; VideoMME 84.5). It runs efficiently on a single RTX 4090 (~18 GB in BF16, ~5 GB at 4-bit) for text, image, and video processing, using DeepStack ViT features to capture temporal dynamics. Released under Apache 2.0 with a base model and GGUF quants, it supports vLLM, SGLang, and llama.cpp inference, making it a scalable native multimodal agent foundation for edge-to-server deployments in OCR, video QA, coding, math, and multilingual reasoning.
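The layer arithmetic in the architecture description can be checked directly: 8 repeated blocks, each containing three (Gated DeltaNet → FFN) sublayer pairs followed by one (Gated Attention → FFN) pair, give the stated 32 layers. A minimal sketch:

```python
# Hybrid block layout from the description above:
# 8 x [3 x (Gated DeltaNet -> FFN) -> 1 x (Gated Attention -> FFN)]
BLOCKS = 8
DELTANET_PER_BLOCK = 3   # linear-attention sublayers per block
ATTENTION_PER_BLOCK = 1  # full-attention sublayers per block

total_layers = BLOCKS * (DELTANET_PER_BLOCK + ATTENTION_PER_BLOCK)
print(total_layers)  # 32, matching the stated layer count
```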
Model Files
| File Name | Quant Type | File Size | File Link |
|---|---|---|---|
| Qwen3.5-9B.BF16.gguf | BF16 | 17.9 GB | Download |
| Qwen3.5-9B.F16.gguf | F16 | 17.9 GB | Download |
| Qwen3.5-9B.F32.gguf | F32 | 35.8 GB | Download |
| Qwen3.5-9B.Q8_0.gguf | Q8_0 | 9.53 GB | Download |
| Qwen3.5-9B.mmproj-bf16.gguf | mmproj-bf16 | 922 MB | Download |
| Qwen3.5-9B.mmproj-f16.gguf | mmproj-f16 | 922 MB | Download |
| Qwen3.5-9B.mmproj-f32.gguf | mmproj-f32 | 1.82 GB | Download |
| Qwen3.5-9B.mmproj-q8_0.gguf | mmproj-q8_0 | 624 MB | Download |
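The listed file sizes follow from a back-of-envelope rule: file size ≈ parameters × bits per weight / 8. A sketch of that check, assuming a parameter count of roughly 8.95B (marketed as "9B") and Q8_0's ~8.5 effective bits per weight (8-bit weights plus one fp16 scale per 32-weight block); both figures are assumptions, not taken from this card:

```python
# Rough size check: parameters x bits-per-weight, in decimal gigabytes.
PARAMS = 8.95e9  # assumed parameter count for the "9B" model

BITS_PER_WEIGHT = {
    "F32": 32.0,
    "F16": 16.0,
    "BF16": 16.0,
    "Q8_0": 8.5,  # 8 bits per weight + fp16 scale per 32-weight block
}

def est_size_gb(quant: str, params: float = PARAMS) -> float:
    """Estimated GGUF file size in decimal GB (ignores metadata overhead)."""
    return params * BITS_PER_WEIGHT[quant] / 8 / 1e9

for quant, listed in [("F32", 35.8), ("BF16", 17.9), ("Q8_0", 9.53)]:
    print(f"{quant}: est {est_size_gb(quant):.1f} GB, listed {listed} GB")
```

The estimates land within about 1% of the table, which is why the F32 file is roughly twice the BF16 file and the Q8_0 file is just over half of it.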
Quants Usage
(Sorted by size, not necessarily by quality; IQ-quants are often preferable to non-IQ quants of similar size.)
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):
