arxiv:2512.09363

StereoWorld: Geometry-Aware Monocular-to-Stereo Video Generation

Published on Dec 10 · Submitted by taesiri on Dec 11
#1 Paper of the day
Abstract

AI-generated summary

StereoWorld generates high-quality stereo video from monocular input using a pretrained video generator with geometry-aware regularization and spatio-temporal tiling.

The growing adoption of XR devices has fueled strong demand for high-quality stereo video, yet its production remains costly and artifact-prone. To address this challenge, we present StereoWorld, an end-to-end framework that repurposes a pretrained video generator for high-fidelity monocular-to-stereo video generation. Our framework jointly conditions the model on the monocular video input while explicitly supervising the generation with a geometry-aware regularization to ensure 3D structural fidelity. A spatio-temporal tiling scheme is further integrated to enable efficient, high-resolution synthesis. To enable large-scale training and evaluation, we curate a high-definition stereo video dataset containing over 11M frames aligned to natural human interpupillary distance (IPD). Extensive experiments demonstrate that StereoWorld substantially outperforms prior methods, generating stereo videos with superior visual fidelity and geometric consistency. The project webpage is available at https://ke-xing.github.io/StereoWorld/.
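
The abstract mentions a spatio-temporal tiling scheme for efficient, high-resolution synthesis but does not include implementation details on this page. The sketch below is a generic PyTorch illustration of that kind of tiling, not the authors' code: the `generate_tile` stub, the tile sizes, and the overlap widths are hypothetical placeholders, and the actual method presumably operates inside the pretrained video generator rather than on raw frames.

```python
# Minimal sketch of spatio-temporal tiled generation with feathered blending.
# Assumptions: video is a [T, C, H, W] tensor; `generate_tile` is a hypothetical
# per-tile generator returning a tensor of the same shape and dtype as its input.
import torch

def feather_1d(size: int, overlap: int, device=None, dtype=None) -> torch.Tensor:
    """1D blending weights: strictly positive linear ramps over the overlap region."""
    w = torch.ones(size, device=device, dtype=dtype)
    ov = min(overlap, size // 2)
    if ov > 0:
        ramp = torch.linspace(1.0 / (ov + 1), 1.0, ov, device=device, dtype=dtype)
        w[:ov] = ramp
        w[-ov:] = ramp.flip(0)
    return w

def tiled_generate(video, generate_tile,
                   tile_hw=256, tile_t=16, overlap_hw=32, overlap_t=4):
    """Run `generate_tile` on overlapping spatio-temporal tiles of `video`
    and blend the per-tile outputs, normalizing by the accumulated weights."""
    T, C, H, W = video.shape
    out = torch.zeros_like(video)
    acc = torch.zeros(T, 1, H, W, device=video.device, dtype=video.dtype)
    t_step = max(tile_t - overlap_t, 1)
    s_step = max(tile_hw - overlap_hw, 1)

    for t0 in range(0, T, t_step):
        t1 = min(t0 + tile_t, T)
        for y0 in range(0, H, s_step):
            y1 = min(y0 + tile_hw, H)
            for x0 in range(0, W, s_step):
                x1 = min(x0 + tile_hw, W)
                pred = generate_tile(video[t0:t1, :, y0:y1, x0:x1])  # hypothetical call
                # Separable feathering weights over time, height, and width.
                wt = feather_1d(t1 - t0, overlap_t, video.device, video.dtype).view(-1, 1, 1, 1)
                wy = feather_1d(y1 - y0, overlap_hw, video.device, video.dtype).view(1, 1, -1, 1)
                wx = feather_1d(x1 - x0, overlap_hw, video.device, video.dtype).view(1, 1, 1, -1)
                w = wt * wy * wx
                out[t0:t1, :, y0:y1, x0:x1] += pred * w
                acc[t0:t1, :, y0:y1, x0:x1] += w
    return out / acc.clamp_min(1e-6)
```

Overlapping tiles with feathered, weight-normalized blending are a common way to suppress seams between independently generated tiles; the paper's actual tiling scheme may differ in where it tiles (e.g., in latent space) and how it blends.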

Community

Paper submitter

StereoWorld presents geometry-aware monocular-to-stereo video generation using a pretrained video generator with geometry regularization and tiling for high-resolution, consistent stereo videos.

