arxiv:2512.06776

From Next-Token to Next-Block: A Principled Adaptation Path for Diffusion LLMs

Published on Dec 7 · Submitted by YuchuanTian on Dec 10

Abstract

AI-generated summary: Adapting autoregressive models to block-wise diffusion enables parallel generation and retains pretrained knowledge, achieving state-of-the-art performance among 7B-class diffusion language models.

Large language models (LLMs) excel at generation, but the dominant autoregressive (AR) decoding is inherently sequential, creating a throughput bottleneck. Diffusion Language Models (DLMs)--especially block-wise variants--enable parallel generation and intra-block bidirectional reasoning, yet training large DLMs from scratch is costly and wastes the knowledge stored in mature AR checkpoints. Prior "adaptation" attempts either modify logits, randomly grow attention masks toward full-sequence diffusion, or simply transplant AR weights into a block-diffusion recipe, leaving the fundamental mismatch between AR causality and block-wise bidirectionality unaddressed. We reframe adaptation as an intra-paradigm path from AR to Block-Diffusion by viewing AR as Block-Diffusion with block size 1. Concretely, the adaptation pathway combines a context-causal attention mask (causal over the context, bidirectional only within the active block), an efficient parallel adaptation procedure, an auxiliary AR loss that maximizes data utilization and retains pretrained knowledge, and a gradual increase of the generation block size. The recipe integrates cleanly with masked block-diffusion and maintains train-inference consistency. Built on these components, NBDiff-7B (Base and Instruct) inherits the long-context modeling and reasoning capabilities of its AR initialization and achieves state-of-the-art performance among 7B-class DLMs, delivering strong gains over strong baselines on general-knowledge, math, and code benchmarks. These results demonstrate that principled AR-to-block-diffusion adaptation is an effective and compute-efficient alternative to training DLMs from scratch. Code: https://github.com/YuchuanTian/NBDiff
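
The central ingredient of the recipe is the context-causal attention mask. Below is a minimal PyTorch sketch (not the authors' implementation; the function name context_causal_mask and the boolean-mask convention are assumptions for illustration) of such a mask: attention is causal across blocks and bidirectional only within a block, and with block size 1 it collapses to the ordinary AR causal mask, which is exactly the observation the adaptation path exploits. The gradual block-size increase described in the abstract would then correspond to raising block_size from 1 toward the target generation block size over the course of adaptation.

import torch

def context_causal_mask(seq_len: int, block_size: int) -> torch.Tensor:
    """Boolean (seq_len, seq_len) mask; True means attention is allowed."""
    # Block index of each position, e.g. block_size=4 -> [0, 0, 0, 0, 1, 1, 1, 1, ...]
    block_id = torch.arange(seq_len) // block_size
    # Position i may attend to position j iff j's block does not come after i's block:
    # all earlier blocks are visible (causal context), and the token's own block is
    # fully visible (bidirectional within the active block).
    return block_id.unsqueeze(1) >= block_id.unsqueeze(0)

# With block_size=1 the mask is exactly the lower-triangular AR causal mask.
assert torch.equal(context_causal_mask(6, 1),
                   torch.tril(torch.ones(6, 6, dtype=torch.bool)))
# With block_size=3, tokens inside each 3-token block see each other bidirectionally
# while still attending causally to all earlier blocks.
print(context_causal_mask(6, 3).int())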

Community

Paper submitter


NBDiff: A principled path from AR to Diffusion LLMs

Excellent work on showing that adding an AR loss to a pure diffusion objective provides benefits!

Would recommend checking out two relevant works that also demonstrate this:

  1. TiDAR: https://tidarlm.github.io (a sequence-hybrid architecture with parallel diffusion drafting and AR verification in a single forward pass -> 4.71x to 5.91x higher tokens-per-second throughput)
  2. Set Block Decoding: https://arxiv.org/abs/2509.04185 (a 3x-5x reduction in forward passes compared to the NTP training objective)

Looking forward to the code release!
