Abstract
A computational model called the Zero-shot Visual World Model demonstrates how children could efficiently learn physical world understanding from limited first-person experience, producing competent behavior across multiple benchmarks while reproducing developmental patterns and building brain-like representations.
Young children demonstrate early abilities to understand their physical world, estimating depth, motion, object coherence, object interactions, and many other aspects of physical scene understanding. Children are both data-efficient and flexible cognitive systems, achieving competence despite extremely limited training data while generalizing to myriad untrained tasks -- a major challenge even for today's best AI systems. Here we introduce a novel computational hypothesis for these abilities, the Zero-shot Visual World Model (ZWM). ZWM is based on three principles: a sparse, temporally-factored predictor that decouples appearance from dynamics; zero-shot estimation through approximate causal inference; and composition of inferences to build more complex abilities. We show that ZWM can be learned from the first-person experience of a single child, rapidly generating competence across multiple physical-understanding benchmarks. It also broadly recapitulates behavioral signatures of child development and builds brain-like internal representations. Our work presents a blueprint for efficient and flexible learning from human-scale data, advancing both a computational account of children's early physical understanding and a path toward data-efficient AI systems.
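To make the first principle concrete, here is a minimal sketch of a temporally-factored predictor that decouples appearance from dynamics. All module names, dimensions, and the sparsity penalty are illustrative assumptions, not the paper's actual architecture: the appearance pathway encodes single frames with no temporal context, while the dynamics pathway sees only the resulting latents.

```python
# Minimal sketch (illustrative, not the paper's architecture) of a
# sparse, temporally-factored predictor separating appearance from dynamics.
import torch
import torch.nn as nn

class FactoredPredictor(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        # Appearance pathway: per-frame encoder, no temporal context.
        self.appearance = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, latent_dim),
        )
        # Dynamics pathway: predicts the next latent from past latents only.
        self.dynamics = nn.GRU(latent_dim, latent_dim, batch_first=True)
        self.head = nn.Linear(latent_dim, latent_dim)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        z = self.appearance(frames.flatten(0, 1)).view(b, t, -1)
        h, _ = self.dynamics(z[:, :-1])   # context: frames 0 .. t-2
        z_pred = self.head(h[:, -1])      # predict latent of the last frame
        # Prediction loss plus an L1 penalty encouraging a sparse dynamics
        # code -- our stand-in for "sparse" in the abstract.
        loss = ((z_pred - z[:, -1].detach()) ** 2).mean() \
               + 1e-3 * z_pred.abs().mean()
        return z_pred, loss

# Usage: model = FactoredPredictor()
#        z_pred, loss = model(torch.randn(2, 8, 3, 64, 64))
```

The design choice this sketch isolates is the factorization itself: because the dynamics module never sees pixels, it cannot memorize appearance, and because the encoder never sees time, all temporal regularities must be carried by the (sparse) dynamics code.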
Community
Today's best AI needs orders of magnitude more data than a human child to achieve visual competence.
We introduce the Zero-shot Visual World Model (ZWM), an approach that substantially narrows this gap. Even when trained only on the first-person experience of a single child, the resulting model, BabyZWM, matches state-of-the-art models on diverse visual-cognitive tasks with no task-specific training, i.e., zero-shot.
Our work presents a blueprint for efficient and flexible learning from human-scale data, advancing a path toward data-efficient AI systems.
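As a toy illustration of how "zero-shot estimation through approximate causal inference" could work on top of a frozen predictor, one can intervene on the input and measure the effect on the model's prediction. The probe below (which reuses the hypothetical `FactoredPredictor` sketched above, and assumes frame sizes divisible by the patch size) is our own construction, not the paper's inference scheme:

```python
import torch

@torch.no_grad()
def intervention_saliency(model, frames, patch=16):
    # Zero-shot probe sketch: gray out each patch of the last context
    # frame and measure the shift in the predicted next latent. A crude
    # stand-in for "approximate causal inference"; not the paper's method.
    base, _ = model(frames)                  # prediction on unperturbed input
    H, W = frames.shape[-2:]                 # assumes H, W divisible by patch
    saliency = torch.zeros(H // patch, W // patch)
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            x = frames.clone()
            x[:, -2, :, i:i + patch, j:j + patch] = 0.5  # intervene on context
            pred, _ = model(x)
            saliency[i // patch, j // patch] = (pred - base).norm()
    return saliency  # large values mark patches the dynamics depend on
```

No labels or task-specific training are involved here; the map is read out directly from the pretrained predictor, which is the sense in which such estimates are zero-shot.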
This is an automated message from the Librarian Bot. I found the following papers similar to this paper, as recommended by the Semantic Scholar API:
- Joint-Aligned Latent Action: Towards Scalable VLA Pretraining in the Wild (2026)
- OmniStream: Mastering Perception, Reconstruction and Action in Continuous Streams (2026)
- Towards Stable Self-Supervised Object Representations in Unconstrained Egocentric Video (2026)
- Generation Models Know Space: Unleashing Implicit 3D Priors for Scene Understanding (2026)
- Universal Pose Pretraining for Generalizable Vision-Language-Action Policies (2026)
- DriveVA: Video Action Models are Zero-Shot Drivers (2026)
- FEEL (Force-Enhanced Egocentric Learning): A Dataset for Physical Action Understanding (2026)