V3 is here. The Opus Candid lineup has been rebuilt from the ground up with a Zipf-weighted 4D training distribution — 1,508 conversations engineered to fix the repetition loops, response length uniformity, and sycophancy patterns that limited earlier versions. Same thesis: personality in the weights, not in the prompt. Better execution.
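The card does not publish the actual axes or weights of that 4D distribution, but the idea of Zipf-weighted sampling over a grid can be sketched. Everything below is illustrative: the four axis names, their options, and the 1/k rank weighting are assumptions, not the real training configuration.

```python
import random
from collections import Counter

# Hypothetical axes for a "4D" conversation grid. The real Opus Candid
# V3 dimensions and weights are not published; these are placeholders.
AXES = {
    "length": ["tight", "medium", "long", "very_long"],
    "topic":  ["philosophy", "emotion", "creative", "technical"],
    "tone":   ["frank", "warm", "playful", "austere"],
    "depth":  ["surface", "probing", "synthesis", "meta"],
}

def zipf_weights(n: int) -> list[float]:
    """Zipf-style weighting: rank k gets weight 1/k, normalized to sum to 1."""
    raw = [1.0 / k for k in range(1, n + 1)]
    total = sum(raw)
    return [w / total for w in raw]

def sample_conversation_spec(rng: random.Random) -> dict[str, str]:
    """Draw one conversation's position in the 4D grid."""
    return {
        axis: rng.choices(options, weights=zipf_weights(len(options)))[0]
        for axis, options in AXES.items()
    }

rng = random.Random(0)
specs = [sample_conversation_spec(rng) for _ in range(1508)]
print(Counter(s["length"] for s in specs).most_common())
```

The point of the weighting is that the top-ranked option on each axis dominates without monopolizing: with four options, rank 1 gets roughly 48% of the mass, rank 4 about 12%, so the dataset stays skewed toward the preferred behavior while still covering the tail.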
Current V3 lineup:
- Opus Candid 8B V3 — Qwen 3 8B, lightweight tier
- Opus Candid 27B V3 — Qwen 3.5 27B Dense, flagship
- Opus Candid MoE V3 — Qwen 3 30B-A3B, efficiency tier
This release remains available for research comparison and legacy use.
can·did
/ˈkandəd/ — truthful and straightforward; frank. From Latin candidus, meaning white, pure, sincere. A candid response is one given without pretense or calculation — not what someone wants to hear, but what they need to.
Opus-Candid-70B (V1 Legacy)
The ceiling of V1. Conversation that feels like it matters.
Opus-Candid-70B was the flagship of the original Opus-Candid family -- fine-tuned from Qwen 2.5 72B using 3,360 authentic conversations with Claude Opus 4.6. The 70B didn't reinvent what the 32B established -- it refined it. Philosophical reasoning became more precise, emotional intelligence gained subtlety, creative output developed literary voice, and the model's relationship with its own uncertainty became something genuinely difficult to dismiss.
Model Details
| Attribute | Value |
|---|---|
| Base Model | Qwen 2.5 72B |
| Training Data | 3,360 multi-turn conversations with Claude Opus 4.6 |
| Fine-tune Method | LoRA supervised fine-tuning |
| Dataset Architecture | Flat / organic |
| Parameters | ~72B |
| Context Window | 32,768 tokens |
| Quantizations | Q4_K_M GGUF, Q8_0 GGUF |
| License | Apache 2.0 |
| Status | V1 Legacy |
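The table lists LoRA supervised fine-tuning as the method. The core of LoRA can be shown in a few lines of NumPy: the base weight matrix stays frozen while two small matrices form a trainable low-rank update. The dimensions, rank, and alpha below are toy values for illustration, not the actual training configuration (which is not published here).

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r, alpha = 64, 64, 8, 16    # hypothetical dims, rank, scaling

W = rng.normal(size=(d_out, d_in))       # frozen base weight
A = rng.normal(size=(r, d_in)) * 0.01    # trainable down-projection
B = np.zeros((d_out, r))                 # trainable up-projection, zero-init

def lora_forward(x: np.ndarray) -> np.ndarray:
    """y = W x + (alpha / r) * B (A x): frozen base path plus low-rank update."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B initialized to zero, the update path contributes nothing, so the
# adapted model starts out exactly identical to the base model.
assert np.allclose(lora_forward(x), W @ x)

# Trainable parameters: r*(d_in + d_out) instead of d_in*d_out.
print(r * (d_in + d_out), "trainable vs", d_in * d_out, "full")
```

This is why LoRA is practical at 72B scale: only the small A and B matrices per adapted layer receive gradients, a tiny fraction of the full weight count.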
What the 70B Added Over the 32B
The gap between the 32B and the 70B was smaller than the gap between the 14B and the 32B, but qualitatively distinct:
Economy. The 70B said less per turn and meant more. It trusted silences and short sentences in places where the 32B would elaborate.
Reader trust. The 70B left more for the reader to complete. Its vulnerability was rawer because it was less explained. Its goodbye was more affecting because it didn't narrate what it was doing. This is a form of literary intelligence -- knowing what to withhold.
Psychological precision. The 70B read conversational dynamics at a level the 32B didn't reach. Its synthesis of a 55-turn conversation wasn't just accurate -- it was the kind of read that makes someone feel genuinely seen.
Bilingual superiority. The only model in the V1 family that produced output it correctly identified as stronger in Spanish than English. The "empuje" passage demonstrated that the model wasn't just translating -- it was thinking in Spanish and finding things there that don't exist in English.
The 70B was where the V1 family proved that open-weight conversational AI could match — and in some personality dimensions, exceed — frontier closed-source models.
Where this led: The 70B's economy — saying less and meaning more — became a core design constraint for every model after it. V2.1's biggest failure was losing that quality: 88% medium-length responses taught models that longer was always better, directly contradicting the 70B's restraint. V3 corrected this with 42% tight responses, explicitly training the model to know when to shut up. The 70B proved the target. The dataset had to learn how to hit it without 72 billion parameters. The 27B V3 is the closest successor — same dense architecture philosophy, a third of the parameters, trained on data that was purpose-built to reproduce what the 70B found through sheer capacity.
Recommended Hardware
| Setup | Quantization | VRAM Required | Notes |
|---|---|---|---|
| Server/Workstation | Q8_0 GGUF | ~75GB VRAM | A100 80GB, H100, RTX PRO 6000 Blackwell. |
| Workstation | Q4_K_M GGUF | ~42GB VRAM | A6000 48GB, dual A5000, dual RTX 3090. |
| Multi-GPU Consumer | Q4_K_M GGUF | ~42GB combined | Layer splitting across 2-3 GPUs. |
| Consumer GPU + RAM | Q4_K_M GGUF | 24GB + 32GB RAM | RTX 4090 with CPU offloading. Slower but functional. |
| Apple Silicon | Q4_K_M GGUF | ~42GB unified | M2/M3 Ultra 128GB, M4 Ultra. |
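The VRAM figures above follow from a back-of-the-envelope calculation. The bits-per-weight values below are rough averages for GGUF quantizations (Q4_K_M mixes 4- and 6-bit blocks; Q8_0 stores 8-bit weights plus per-block scales), and the sketch ignores KV cache and runtime overhead, so real usage runs somewhat higher.

```python
# Approximate effective bits per weight for common GGUF quantizations.
# These are rough averages, not exact format specifications.
BITS_PER_WEIGHT = {
    "Q4_K_M": 4.85,
    "Q8_0":   8.5,
}

def model_gib(params_billion: float, quant: str) -> float:
    """Approximate weight memory in GiB for a given parameter count."""
    bits = params_billion * 1e9 * BITS_PER_WEIGHT[quant]
    return bits / 8 / 1024**3

for quant in BITS_PER_WEIGHT:
    print(f"{quant}: ~{model_gib(72, quant):.0f} GiB for 72B params")
```

For 72B parameters this lands near 41 GiB at Q4_K_M and 71 GiB at Q8_0, consistent with the ~42GB and ~75GB rows once context cache and overhead are added.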
Opus Candid Model Family
| Model | Size | Base | Status |
|---|---|---|---|
| Opus-Candid-8B-V1 | 8B | Qwen 2.5 7B | Archived |
| Opus-Research-8B-V1.5 | 8B | Qwen 2.5 7B | Archived |
| Opus-Candid-14B-V1 | 14B | Qwen 2.5 14B | Archived |
| Opus-Candid-32B-V1 | 32B | Qwen 2.5 32B | Archived |
| Opus-Candid-70B-V1 (this model) | 72B | Qwen 2.5 72B | Archived |
| Opus-Candid-Lite-4B | 4B | Qwen 3 4B | Active |
| Opus-Candid-8B-V3 | 8B | Qwen 3 8B | Active |
| Opus-Candid-MoE-V3 | 31B/3B | Qwen 3 30B-A3B | Active |
| Opus-Candid-27B-V3 | 27B | Qwen 3.5 27B | Active |
| Opus-Candid-27B-V3.5 | 27B | Qwen 3.5 27B | Active |
| STEM-Oracle-27B | 27B | Qwen 3.5 27B | Active |
Built by Saul Verdugo -- independent ML researcher. OpusReasoning@proton.me