arxiv:2512.19535

CASA: Cross-Attention via Self-Attention for Efficient Vision-Language Fusion

Published on Dec 22
Submitted by Niels Rogge on Dec 23

AI-generated summary

CASA, a cross-attention method enhanced with self-attention, improves vision-language models' performance on detailed visual tasks while maintaining scalability for long-context multimodal applications.

Abstract

Vision-language models (VLMs) are commonly trained by inserting image tokens from a pretrained vision encoder into the textual stream of a language model. This allows text and image information to fully attend to one another within the model, but becomes extremely costly for high-resolution images, long conversations, or streaming videos, both in memory and compute. VLMs leveraging cross-attention are an efficient alternative to token insertion but exhibit a clear performance gap, in particular on tasks involving fine-grained visual details. We find that a key to improving such models is to also enable local text-to-text interaction in the dedicated cross-attention layers. Building on this, we propose CASA, Cross-Attention via Self-Attention, a simple and efficient paradigm which substantially reduces the gap with full token insertion on common image understanding benchmarks, while enjoying the same scalability as cross-attention models when applied to long-context multimodal tasks such as streaming video captioning. For samples and code, please see our project page at https://kyutai.org/casa .
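
The abstract's key idea, letting the dedicated cross-attention layers also carry local text-to-text interaction by treating them as self-attention over a mix of image and text tokens, can be pictured with a short sketch. The block below is a minimal PyTorch illustration under assumptions, not the authors' implementation: the class name CASABlock, the local_window size, and the exact masking and normalization choices are made up here for clarity.

```python
# Minimal sketch of the idea described above: a fusion layer in which text queries
# attend over image tokens AND a local causal window of text tokens, so the
# "cross-attention" layer also carries text-to-text interaction.
#
# Illustration only, not the paper's implementation: CASABlock, local_window, and
# the masking/normalization details are assumptions made for clarity.
import torch
import torch.nn as nn


class CASABlock(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8, local_window: int = 64):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.local_window = local_window  # how far back a text token may look (assumption)

    def forward(self, text: torch.Tensor, image: torch.Tensor) -> torch.Tensor:
        # text:  (B, T, D) hidden states from the language model
        # image: (B, I, D) projected vision-encoder tokens
        T, I = text.shape[1], image.shape[1]

        # Queries come from text only; keys/values are image tokens followed by text tokens.
        kv = torch.cat([image, text], dim=1)  # (B, I + T, D)

        # Attention mask (True = blocked): every text query sees all image tokens,
        # plus text tokens inside a causal local window -- the text-to-text
        # interaction the abstract identifies as the key ingredient.
        q = torch.arange(T).unsqueeze(1)  # (T, 1)
        k = torch.arange(T).unsqueeze(0)  # (1, T)
        text_blocked = ~((k <= q) & (k > q - self.local_window))  # (T, T)
        image_blocked = torch.zeros(T, I, dtype=torch.bool)       # (T, I)
        attn_mask = torch.cat([image_blocked, text_blocked], dim=1).to(text.device)

        out, _ = self.attn(self.norm(text), kv, kv, attn_mask=attn_mask)
        return text + out  # residual connection back into the language model stream


if __name__ == "__main__":
    block = CASABlock(dim=256)
    txt = torch.randn(2, 32, 256)   # 32 text tokens
    img = torch.randn(2, 196, 256)  # 196 image tokens
    print(block(txt, img).shape)    # torch.Size([2, 32, 256])
```

The essential part is the mask: image keys stay fully visible to every text query, as in ordinary cross-attention, while the appended text keys give each query a windowed, causal view of its neighbours, which is the local text-to-text interaction the abstract points to. How CASA actually realizes and trains this is described in the paper itself.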


Models citing this paper: 4

Datasets citing this paper: 0

Spaces citing this paper: 1

Collections including this paper: 1