---
license: mit
task_categories:
- video-classification
- reinforcement-learning
- robotics
language:
- en
tags:
- Chain-of-Frames
- Video-Reasoning
- Visual-Planning
- Maze
- Wan
size_categories:
- 10K<n<100K
base_model:
- Wan-AI/Wan2.2-TI2V-5B
pipeline_tag: image-to-video
---

# Wan-R1: A Reasoning-via-Video Maze-Solving Model
Fine-tuned on VR-Bench to evaluate and enhance video-based reasoning ability across structured maze environments.
## News

- 2026-01-04: Released Wan_R1_General_5B, a general-purpose model fine-tuned on the entire VR-Bench suite (all sub-tasks combined).
- 2025-12: In progress: preparing the fine-tuning and evaluation codebase for release.
- 2025-11-20: Released 5 fine-tuned Wan-R1 models (3D, Regular, Irregular, Sokoban, Trapfield) trained on VR-Bench.
## Future Work

- Release LoRA fine-tuning scripts based on VR-Bench.
- Open-source an evaluation toolkit for reasoning via video.
- Provide training logs and hyperparameters for full reproducibility.
## Models
| Model | Download | Description |
|---|---|---|
| Wan_R1_General_5B | 🤗 HuggingFace | New! Fine-tuned LoRA for all VR-Bench tasks combined, from the base model Wan2.2-TI2V-5B. |
| Wan_R1_3d_maze_5B | 🤗 HuggingFace | Fine-tuned LoRA for Maze3D tasks (easy, medium, and hard), from the base model Wan2.2-TI2V-5B. |
| Wan_R1_irregular_maze_5B | 🤗 HuggingFace | Fine-tuned LoRA for PathFinder tasks (easy, medium, and hard), from the base model Wan2.2-TI2V-5B. |
| Wan_R1_regular_maze_5B | 🤗 HuggingFace | Fine-tuned LoRA for Maze tasks (easy, medium, and hard), from the base model Wan2.2-TI2V-5B. |
| Wan_R1_sokoban_5B | 🤗 HuggingFace | Fine-tuned LoRA for Sokoban tasks (easy, medium, and hard), from the base model Wan2.2-TI2V-5B. |
| Wan_R1_trapfield_5B | 🤗 HuggingFace | Fine-tuned LoRA for TrapField tasks (easy, medium, and hard), from the base model Wan2.2-TI2V-5B. |
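
Since each checkpoint above is a LoRA adapter on top of Wan2.2-TI2V-5B, inference amounts to loading the base image-to-video pipeline and attaching one adapter. The snippet below is a minimal sketch, not the project's released inference script: it assumes the diffusers layout of the base checkpoint (`Wan-AI/Wan2.2-TI2V-5B-Diffusers`) and a diffusers-compatible LoRA file, and the adapter path, prompt, frame count, and input image are illustrative placeholders.

```python
# Minimal usage sketch (assumptions: diffusers-format base checkpoint,
# diffusers-compatible LoRA weights; adapter path and prompt are illustrative).
import torch
from diffusers import WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

# Load the base Wan2.2-TI2V-5B image-to-video pipeline.
pipe = WanImageToVideoPipeline.from_pretrained(
    "Wan-AI/Wan2.2-TI2V-5B-Diffusers",  # assumed diffusers layout of the base model
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

# Attach one Wan-R1 LoRA adapter, e.g. the regular-maze checkpoint (path assumed).
pipe.load_lora_weights("path/to/Wan_R1_regular_maze_5B")

# Condition on the initial maze frame; the model "reasons" by rendering the
# solution trajectory frame by frame (chain-of-frames).
start_frame = load_image("maze_start.png")
frames = pipe(
    image=start_frame,
    prompt="Move the agent from the start cell to the goal without crossing walls.",
    num_frames=49,  # illustrative; match the setting used during fine-tuning
).frames[0]

export_to_video(frames, "maze_solution.mp4", fps=16)
```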
## Citation
If you use this model or the VR-Bench dataset in your work, please cite:
```bibtex
@misc{yang2025reasoningvideoevaluationvideo,
  title={Reasoning via Video: The First Evaluation of Video Models' Reasoning Abilities through Maze-Solving Tasks},
  author={Cheng Yang and Haiyuan Wan and Yiran Peng and Xin Cheng and Zhaoyang Yu and Jiayi Zhang and Junchi Yu and Xinlei Yu and Xiawu Zheng and Dongzhan Zhou and Chenglin Wu},
  year={2025},
  eprint={2511.15065},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2511.15065},
}
```