SurgWorld: Learning Surgical Robot Policies from Videos via World Modeling
Abstract
Data scarcity remains a fundamental barrier to achieving fully autonomous surgical robots. While large-scale vision-language-action (VLA) models have shown impressive generalization in household and industrial manipulation by leveraging paired video-action data from diverse domains, surgical robotics suffers from a paucity of datasets that include both visual observations and accurate robot kinematics. In contrast, vast corpora of surgical videos exist, but they lack corresponding action labels, preventing direct application of imitation learning or VLA training. In this work, we aim to alleviate this problem by learning policy models from SurgWorld, a world model designed for surgical physical AI. We curated the Surgical Action Text Alignment (SATA) dataset, which provides detailed action descriptions tailored to surgical robots. We then built SurgWorld on top of a state-of-the-art physical AI world model using SATA; it generates diverse, generalizable, and realistic surgery videos. We are also the first to use an inverse dynamics model to infer pseudo-kinematics from synthetic surgical videos, producing synthetic paired video-action data. We demonstrate that a surgical VLA policy trained with this augmented data significantly outperforms models trained only on real demonstrations on a real surgical robot platform. Our approach offers a scalable path toward autonomous surgical skill acquisition by leveraging the abundance of unlabeled surgical videos and generative world modeling, thus opening the door to generalizable and data-efficient surgical robot policies.
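The pipeline described in the abstract reduces to three steps: generate synthetic surgery clips with the world model, label each clip with pseudo-kinematics via the inverse dynamics model, and mix the resulting paired data with real demonstrations for VLA policy training. The minimal Python sketch below illustrates that flow; all class names, tensor shapes, and the 7-DoF action format are illustrative assumptions, not the authors' released interface.

```python
# Hypothetical sketch of the video -> pseudo-action labeling pipeline.
# Class and function names are placeholders, not the paper's API.
import numpy as np


class WorldModel:
    """Stand-in for SurgWorld: generates a synthetic surgery clip
    conditioned on a text prompt (random frames here)."""

    def generate(self, prompt: str, num_frames: int = 16) -> np.ndarray:
        return np.random.rand(num_frames, 224, 224, 3).astype(np.float32)


class InverseDynamicsModel:
    """Stand-in for the inverse dynamics model: maps consecutive frame
    pairs to pseudo-kinematics (assumed 7-DoF deltas per transition)."""

    def predict(self, video: np.ndarray) -> np.ndarray:
        num_transitions = len(video) - 1
        return np.zeros((num_transitions, 7), dtype=np.float32)


def build_pseudo_labeled_dataset(prompts, world_model, idm):
    """Pair each generated clip with inferred pseudo-actions so it can be
    mixed with real demonstrations for VLA policy training."""
    dataset = []
    for prompt in prompts:
        video = world_model.generate(prompt)
        pseudo_actions = idm.predict(video)
        dataset.append({"prompt": prompt, "video": video, "actions": pseudo_actions})
    return dataset


if __name__ == "__main__":
    prompts = ["grasp the needle with the left arm",
               "retract tissue with the right arm"]
    synthetic = build_pseudo_labeled_dataset(prompts, WorldModel(), InverseDynamicsModel())
    print(f"Generated {len(synthetic)} pseudo-labeled clips")
```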
Community
Proposes the SurgWorld world model to learn surgical robot policies from unlabeled videos via synthetic pseudo-kinematics, enabling data-efficient VLA policies trained on the SATA dataset.
Good work! Do you have any plans to open-source your models and datasets?
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- ViPRA: Video Prediction for Robot Actions (2025)
- mimic-video: Video-Action Models for Generalizable Robot Control Beyond VLAs (2025)
- LatBot: Distilling Universal Latent Actions for Vision-Language-Action Models (2025)
- See Once, Then Act: Vision-Language-Action Model with Task Learning from One-Shot Video Demonstrations (2025)
- Scalable Policy Evaluation with Video World Models (2025)
- Large Video Planner Enables Generalizable Robot Control (2025)
- Image Generation as a Visual Planner for Robotic Manipulation (2025)