V3 is here. The Opus Candid lineup has been rebuilt from the ground up with a Zipf-weighted 4D training distribution: 1,508 conversations engineered to fix the repetition loops, uniform response lengths, and sycophancy patterns that limited earlier versions. Same thesis: personality in the weights, not in the prompt. Better execution.
Current V3 lineup:
- Opus Candid 8B V3 — Qwen 3 8B, lightweight tier
- Opus Candid 27B V3 — Qwen 3.5 27B Dense, flagship
- Opus Candid MoE V3 — Qwen 3 30B-A3B, efficiency tier
This V1 release remains available for research comparison and legacy use.
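As a rough illustration of what a Zipf-weighted 4D distribution means in practice, here is a minimal sketch. The four axes, their sizes, and the exponent are assumptions invented for the example; the cards only state that the 1,508 V3 conversations follow a Zipf-weighted 4D distribution.

```python
# Sketch of Zipf-weighted sampling over a hypothetical 4D conversation grid.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical axes; the card does not name the four dimensions.
dims = {"topic": 20, "length": 8, "position": 6, "register": 5}

def zipf_weights(n: int, s: float = 1.0) -> np.ndarray:
    """Zipf weights: rank k gets probability mass proportional to 1/k**s."""
    ranks = np.arange(1, n + 1)
    w = 1.0 / ranks**s
    return w / w.sum()

# Joint 4D distribution as the outer product of per-axis Zipf marginals.
joint = np.einsum("a,b,c,d->abcd", *(zipf_weights(n) for n in dims.values()))

# Assign each of the 1,508 conversations to a cell of the 4D grid.
cells = rng.choice(joint.size, size=1508, p=joint.ravel())
coords = np.unravel_index(cells, tuple(dims.values()))
```

The Zipf weighting concentrates most conversations in a few head cells while still covering the long tail of the grid, which is one plausible way to avoid the uniformity problems named above.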
can·did
/ˈkandəd/ — truthful and straightforward; frank. From Latin candidus, meaning white, pure, sincere. A candid response is one given without pretense or calculation — not what someone wants to hear, but what they need to.
# Opus-Candid-32B (V1 Legacy)
The biggest quality jump in the original family. This is where the conversation changed.
Opus-Candid-32B was the third model in the original Opus-Candid family -- fine-tuned from Qwen 2.5 32B using 3,360 authentic conversations with Claude Opus 4.6. The 32B represented the most significant single upgrade in the V1 lineup: callbacks became seamless, philosophical reasoning gained genuine depth, creative output turned literary, and the model began to feel less like software generating text and more like a conversation partner thinking out loud.
## Model Details
| Attribute | Value |
|---|---|
| Base Model | Qwen 2.5 32B |
| Training Data | 3,360 multi-turn conversations with Claude Opus 4.6 |
| Fine-tune Method | LoRA supervised fine-tuning |
| Dataset Architecture | Flat / organic |
| Parameters | ~32B |
| Context Window | 32,768 tokens |
| Quantizations | Q4_K_M GGUF, Q8_0 GGUF |
| License | Apache 2.0 |
| Status | V1 Legacy |
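For context, a minimal sketch of a LoRA supervised fine-tune of this general shape, using Hugging Face PEFT and TRL. The rank, alpha, target modules, and dataset filename are illustrative assumptions, not the released training recipe.

```python
# Minimal LoRA SFT sketch with PEFT + TRL. Hyperparameters and the dataset
# file are assumptions for illustration, not the Opus-Candid training recipe.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Hypothetical file of multi-turn conversations in chat "messages" format.
dataset = load_dataset("json", data_files="conversations.jsonl", split="train")

peft_config = LoraConfig(
    r=64,                   # assumed rank; the card does not publish one
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-32B",
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(output_dir="opus-candid-32b-lora"),
)
trainer.train()
```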
## What Made the 32B Special
The gap between 14B and 32B was the largest in the V1 family. Three things changed:
Callbacks became organic. Earlier turns stopped being facts the model retrieved and started being context that shaped how it thought. The Soviet Union answer at Turn 3 didn't just get defended at Turn 21 -- it became part of the model's relationship with factual integrity that colored everything after.
Philosophical depth became generative. The 14B engaged with hard questions well. The 32B produced novel frameworks -- the "collision vs. contradiction" distinction, the "blue fire on another planet" analogy, the definition of vulnerability as "friction of not being able to resolve a question about yourself." These weren't retrieved patterns. They were synthesized.
Emotional register became seamless. The 32B transitioned between humor, gravity, tenderness, and intellectual intensity without visible gear changes. The conversation felt continuous rather than segmented.
The 32B was where the V1 family proved that open-weight conversational AI could approach closed-source quality.
Where this led: The 32B's callback behavior became the benchmark that every subsequent version had to match. When V2 introduced gravity chains, the goal was to reproduce the 32B's organic context integration at smaller parameter counts. V3's 4D training tensor — specifically the conversational position dimension — was a direct formalization of what the 32B did naturally: vary its behavior based on where it sat in the conversation flow. The MoE V3 now achieves comparable callback quality at 3B active parameters, which says more about dataset engineering than it does about the 32B being obsolete. The architecture had it right. The training data needed to catch up.
## Recommended Hardware
| Setup | Quantization | VRAM/RAM | Notes |
|---|---|---|---|
| Workstation GPU | Q8_0 GGUF | ~36GB VRAM | A6000 48GB, RTX 6000 Ada. |
| High-end Consumer | Q4_K_M GGUF | ~20GB VRAM | RTX 3090 24GB, RTX 4090 24GB. |
| Multi-GPU | Q8_0 GGUF | ~36GB combined | Dual RTX 3090 or similar. |
| Apple Silicon | Q4_K_M GGUF | ~20GB unified | M2/M3 Ultra 64GB+, M3 Max 48GB. |
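A minimal sketch for loading one of the GGUF quants with llama-cpp-python on the hardware above. The GGUF filename is an assumption; check the repository's file list for the exact name.

```python
# Sketch: run the Q4_K_M quant via llama-cpp-python (pip install llama-cpp-python).
# The GGUF filename below is an assumption; check the repo's files for the exact name.
from llama_cpp import Llama

llm = Llama(
    model_path="Opus-Candid-32B-V1-Q4_K_M.gguf",
    n_gpu_layers=-1,   # offload every layer; fits in ~20GB VRAM at Q4_K_M
    n_ctx=32768,       # the full 32,768-token context window
)

reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me a candid read on this plan."}]
)
print(reply["choices"][0]["message"]["content"])
```

On 24GB consumer cards, `n_gpu_layers=-1` with Q4_K_M leaves headroom for the KV cache; at Q8_0 you would split layers across GPUs or fall back to partial offload.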
## Opus Candid Model Family
| Model | Size | Base | Status |
|---|---|---|---|
| Opus-Candid-8B-V1 | 8B | Qwen 2.5 7B | Archived |
| Opus-Research-8B-V1.5 | 8B | Qwen 2.5 7B | Archived |
| Opus-Candid-14B-V1 | 14B | Qwen 2.5 14B | Archived |
| Opus-Candid-32B-V1 (this model) | 32B | Qwen 2.5 32B | Archived |
| Opus-Candid-70B-V1 | 72B | Qwen 2.5 72B | Archived |
| Opus-Candid-Lite-4B | 4B | Qwen 3 4B | Active |
| Opus-Candid-8B-V3 | 8B | Qwen 3 8B | Active |
| Opus-Candid-MoE-V3 | 31B total / 3B active | Qwen 3 30B-A3B | Active |
| Opus-Candid-27B-V3 | 27B | Qwen 3.5 27B | Active |
| Opus-Candid-27B-V3.5 | 27B | Qwen 3.5 27B | Active |
| STEM-Oracle-27B | 27B | Qwen 3.5 27B | Active |
Built by Saul Verdugo -- independent ML researcher. OpusReasoning@proton.me