arxiv:2512.24165

DiffThinker: Towards Generative Multimodal Reasoning with Diffusion Models

Published on Dec 30, 2025 · Submitted by Tong Zhu on Jan 2
#3 Paper of the day

Abstract

While recent Multimodal Large Language Models (MLLMs) have made significant strides in multimodal reasoning, their reasoning processes remain predominantly text-centric, leading to suboptimal performance on complex long-horizon, vision-centric tasks. In this paper, we establish a novel Generative Multimodal Reasoning paradigm and introduce DiffThinker, a diffusion-based reasoning framework. Conceptually, DiffThinker reformulates multimodal reasoning as a native generative image-to-image task, achieving superior logical consistency and spatial precision in vision-centric tasks. We perform a systematic comparison between DiffThinker and MLLMs, providing the first in-depth investigation into the intrinsic characteristics of this paradigm and revealing four core properties: efficiency, controllability, native parallelism, and collaboration. Extensive experiments across four domains (sequential planning, combinatorial optimization, constraint satisfaction, and spatial configuration) demonstrate that DiffThinker significantly outperforms leading closed-source models, including GPT-5 (+314.2%) and Gemini-3-Flash (+111.6%), as well as the fine-tuned Qwen3-VL-32B baseline (+39.0%), highlighting generative multimodal reasoning as a promising approach for vision-centric reasoning.
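
To make the paradigm concrete, here is a minimal sketch of what "reasoning as a generative image-to-image task" can look like in code: a conditional diffusion-style loop that denoises random noise into a "solution" image while being conditioned on a rendered "problem" image. This is not the DiffThinker implementation; the `TinyDenoiser` network, the DDPM-style noise schedule, and the image shapes are all illustrative assumptions.

```python
# Illustrative sketch of "reasoning as conditional image-to-image generation".
# NOT the DiffThinker implementation: the denoiser, noise schedule, and image
# shapes below are placeholder assumptions for exposition only.
import torch
import torch.nn as nn


class TinyDenoiser(nn.Module):
    """Toy conditional denoiser: predicts noise from the noisy solution
    image concatenated with the (clean) problem image."""

    def __init__(self, channels=3, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, noisy_solution, problem_image, t):
        # A real model would also embed the timestep t; omitted for brevity.
        x = torch.cat([noisy_solution, problem_image], dim=1)
        return self.net(x)


@torch.no_grad()
def generate_solution(problem_image, denoiser, steps=50):
    """Iteratively denoise pure noise into a 'solution' image,
    conditioned on the problem image (plain DDPM-style reverse loop)."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn_like(problem_image)  # start from pure noise
    for t in reversed(range(steps)):
        eps = denoiser(x, problem_image, t)  # predicted noise
        a_t, ab_t = alphas[t], alpha_bars[t]
        # DDPM mean update: remove the predicted noise component.
        x = (x - (1 - a_t) / torch.sqrt(1 - ab_t) * eps) / torch.sqrt(a_t)
        if t > 0:
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
    return x


if __name__ == "__main__":
    problem = torch.randn(1, 3, 64, 64)  # stand-in for a rendered puzzle image
    model = TinyDenoiser()
    solution = generate_solution(problem, model)
    print(solution.shape)  # torch.Size([1, 3, 64, 64])
```

In a trained setup of this kind, the conditioning image would encode the task state (e.g., a maze, board, or layout) and the generated image would encode the answer (e.g., a drawn path or placement), so the "reasoning" happens in pixel space rather than as a textual chain of thought.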

Community

Paper author · Paper submitter

TLDR: A new paradigm for multimodal reasoning via image-to-image generation. Diffusion can think too!

Paper author

arXiv lens breakdown of this paper 👉 https://arxivlens.com/PaperView/Details/diffthinker-towards-generative-multimodal-reasoning-with-diffusion-models-8277-5a4d5999

  • Executive Summary
  • Detailed Breakdown
  • Practical Applications

Models citing this paper 1

Datasets citing this paper 0

Spaces citing this paper 0

Collections including this paper 1