MHLA: Restoring Expressivity of Linear Attention via Token-Level Multi-Head
Abstract
Multi-Head Linear Attention addresses the performance degradation of linear attention by preserving representational diversity through head-wise computation along the token dimension, maintaining linear complexity while recovering much of softmax attention's expressive power across multiple domains.
While the Transformer architecture dominates many fields, its quadratic self-attention complexity hinders its use in large-scale applications. Linear attention offers an efficient alternative, but its direct application often degrades performance, with existing fixes typically re-introducing computational overhead through extra modules (e.g., depthwise separable convolution) that defeat the original purpose. In this work, we identify a key failure mode in these methods: global context collapse, where the model loses representational diversity. To address this, we propose Multi-Head Linear Attention (MHLA), which preserves this diversity by computing attention within divided heads along the token dimension. We prove that MHLA maintains linear complexity while recovering much of the expressive power of softmax attention, and verify its effectiveness across multiple domains, achieving a 3.6% improvement on ImageNet classification, a 6.3% gain on NLP, a 12.6% improvement on image generation, and a 41% enhancement on video generation under the same time complexity.
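The abstract does not include code, so the following is a minimal PyTorch sketch of one plausible reading of the idea: the sequence is partitioned into groups along the token dimension and kernelized linear attention is computed independently within each group, so several group-wise context summaries are kept instead of a single global one. The feature map (elu + 1), the helper names `linear_attention` and `mhla`, and the contiguous grouping scheme are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def linear_attention(q, k, v, eps=1e-6):
    """Kernelized linear attention: softmax is replaced by a positive
    feature map so the K^T V summary is computed once per sequence.
    phi(x) = elu(x) + 1 is a common choice (an assumption here)."""
    q = F.elu(q) + 1.0                                    # (B, N, D)
    k = F.elu(k) + 1.0                                    # (B, N, D)
    kv = torch.einsum("bnd,bne->bde", k, v)               # (B, D, Dv)
    norm = torch.einsum("bnd,bd->bn", q, k.sum(dim=1))    # (B, N)
    return torch.einsum("bnd,bde->bne", q, kv) / (norm.unsqueeze(-1) + eps)


def mhla(q, k, v, num_token_heads=4):
    """Hypothetical sketch of token-level multi-head linear attention:
    split the sequence into `num_token_heads` contiguous groups along the
    token dimension and run linear attention within each group, keeping
    one context summary per group rather than a single global one."""
    B, N, _ = q.shape
    assert N % num_token_heads == 0, "pad the sequence so the heads divide N"
    g = N // num_token_heads
    split = lambda x: x.reshape(B * num_token_heads, g, x.shape[-1])
    out = linear_attention(split(q), split(k), split(v))  # (B*H, g, Dv)
    return out.reshape(B, N, -1)                          # (B, N, Dv)


if __name__ == "__main__":
    B, N, D = 2, 16, 32
    q, k, v = (torch.randn(B, N, D) for _ in range(3))
    print(mhla(q, k, v).shape)  # torch.Size([2, 16, 32])
```

Under these assumptions the cost stays linear: each group costs O(g·D²), and summing over all groups gives O(N·D²) rather than the O(N²·D) of softmax attention; the grouping only changes which tokens share a context summary.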
Community
This is an automated message from the Librarian Bot. The following papers, similar to this one, were recommended by the Semantic Scholar API:
- LinMU: Multimodal Understanding Made Linear (2026)
- Nexus: Higher-Order Attention Mechanisms in Transformers (2025)
- Training-free Context-adaptive Attention for Efficient Long Context Modeling (2025)
- Rectified SpaAttn: Revisiting Attention Sparsity for Efficient Video Generation (2025)
- Trainable Log-linear Sparse Attention for Efficient Diffusion Transformers (2025)
- ReHyAt: Recurrent Hybrid Attention for Video Diffusion Transformers (2026)
- A Unified Sparse Attention via Multi-Granularity Compression (2025)