Visionary: A World Model Carrier Built on a WebGPU-Powered Gaussian Splatting Platform
Abstract
Visionary is an open, web-native platform enabling real-time rendering of 3D Gaussian Splatting and meshes with efficient GPU-based inference, supporting dynamic content and generative models.
Neural rendering, particularly 3D Gaussian Splatting (3DGS), has evolved rapidly and become a key component for building world models. However, existing viewer solutions remain fragmented, heavy, or constrained by legacy pipelines, resulting in high deployment friction and limited support for dynamic content and generative models. In this work, we present Visionary, an open, web-native platform for real-time rendering of diverse Gaussian Splatting variants and meshes. Built on an efficient WebGPU renderer with per-frame ONNX inference, Visionary enables dynamic neural processing while maintaining a lightweight, "click-to-run" browser experience. It introduces a standardized Gaussian Generator contract, which not only supports standard 3DGS rendering but also allows plug-and-play algorithms to generate or update Gaussians each frame. This per-frame inference also enables feed-forward generative post-processing. The platform further offers a plug-in three.js library with a concise TypeScript API for seamless integration into existing web applications. Experiments show that, under identical 3DGS assets, Visionary achieves superior rendering efficiency compared to current web viewers due to GPU-based primitive sorting. It already supports multiple variants, including MLP-based 3DGS, 4DGS, neural avatars, and style-transfer or enhancement networks. By unifying inference and rendering directly in the browser, Visionary significantly lowers the barrier to reproduction, comparison, and deployment of 3DGS-family methods, serving as a unified World Model Carrier for both reconstructive and generative paradigms.
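The Gaussian Generator contract described above can be pictured as a single per-frame interface that any algorithm (static 3DGS, MLP-based variants, 4DGS, avatars) implements to produce the splat buffer for the current frame. The sketch below is illustrative only: the interface names, method signature, and 14-float packing are assumptions for this example, not Visionary's actual API.

```typescript
// Hypothetical sketch of a "Gaussian Generator" contract. All names and the
// attribute packing are illustrative assumptions, not the real Visionary API.

// Assumed packed per-Gaussian attributes: position (3), scale (3),
// rotation quaternion (4), opacity (1), RGB color (3) = 14 floats per splat.
const FLOATS_PER_GAUSSIAN = 14;

interface GaussianFrame {
  count: number;      // number of splats this frame
  data: Float32Array; // count * FLOATS_PER_GAUSSIAN packed floats
}

interface GaussianGenerator {
  // Called once per frame; the time argument lets dynamic variants
  // (e.g. 4DGS) regenerate or update their Gaussians each frame.
  generate(timeSec: number): GaussianFrame;
}

// Trivial static generator: a fixed splat buffer returned every frame,
// mimicking plain 3DGS, where no per-frame inference is needed.
class StaticGenerator implements GaussianGenerator {
  private frame: GaussianFrame;
  constructor(count: number) {
    this.frame = { count, data: new Float32Array(count * FLOATS_PER_GAUSSIAN) };
  }
  generate(_timeSec: number): GaussianFrame {
    return this.frame;
  }
}

const gen: GaussianGenerator = new StaticGenerator(1000);
const frame = gen.generate(0);
console.log(frame.count, frame.data.length); // 1000 14000
```

In the real platform, a dynamic variant would run an ONNX model inside `generate` to emit an updated buffer each frame, while the renderer consumes the same contract unchanged.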
Community
TL;DR: Visionary is an open, web-native platform built on WebGPU and ONNX Runtime, enabling real-time rendering of diverse Gaussian Splatting variants (3DGS, MLP-based 3DGS, 4DGS, neural avatars, and ✨any future algorithms✨) as well as traditional 3D meshes, directly in the browser. It also supports post-processing with feed-forward networks.
• 💻 GitHub: https://github.com/Visionary-Laboratory/visionary
• 🌍 Project page: https://visionary-laboratory.github.io/visionary/
• 🎬 Video: https://www.youtube.com/watch?v=-K8EjMfk09c
• 📝 Technical report: https://arxiv.org/abs/2512.08478
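The per-frame flow the TL;DR describes (generate Gaussians, rasterize them, then optionally run a feed-forward enhancement network) can be sketched as a small stage pipeline. This is a minimal stand-in, assuming hypothetical `Stage`, `SplatRenderer`, and `PostProcess` names; the real renderer is a WebGPU rasterizer with GPU-based sorting, and the real post-process is an ONNX network, neither of which is modeled here.

```typescript
// Hypothetical per-frame pipeline sketch: generator output -> splat renderer
// -> feed-forward post-process. All names here are illustrative stand-ins,
// not Visionary's real three.js API.

type FrameBuffer = Float32Array; // stand-in for a GPU render target

interface Stage {
  run(input: FrameBuffer): FrameBuffer;
}

// Stub "renderer": in the real platform this would be the WebGPU splat
// rasterizer with GPU-based primitive sorting.
class SplatRenderer implements Stage {
  run(input: FrameBuffer): FrameBuffer {
    return input;
  }
}

// Stub "post-process": stands in for a feed-forward ONNX network
// (style transfer / enhancement) applied to the rendered frame.
class PostProcess implements Stage {
  run(input: FrameBuffer): FrameBuffer {
    // Toy enhancement: brighten by 10% and clamp to [0, 1].
    return input.map((v) => Math.min(1, v * 1.1));
  }
}

// Chain the stages for one frame.
function renderFrame(stages: Stage[], frame: FrameBuffer): FrameBuffer {
  return stages.reduce((buf, stage) => stage.run(buf), frame);
}

const out = renderFrame(
  [new SplatRenderer(), new PostProcess()],
  new Float32Array([0.5, 0.9, 1.0]),
);
console.log(Array.from(out)); // brightened values, each clamped to at most 1
```

Because every stage shares one input/output type, swapping the toy post-process for a real network (or disabling it) does not change the loop, which mirrors the plug-and-play design the abstract claims.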
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- 4D Neural Voxel Splatting: Dynamic Scene Rendering with Voxelized Gaussian Splatting (2025)
- HGC-Avatar: Hierarchical Gaussian Compression for Streamable Dynamic 3D Avatars (2025)
- The Impact and Outlook of 3D Gaussian Splatting (2025)
- SUCCESS-GS: Survey of Compactness and Compression for Efficient Static and Dynamic Gaussian Splatting (2025)
- Vorion: A RISC-V GPU with Hardware-Accelerated 3D Gaussian Rendering and Training (2025)
- Neo: Real-Time On-Device 3D Gaussian Splatting with Reuse-and-Update Sorting Acceleration (2025)
- NeAR: Coupled Neural Asset-Renderer Stack (2025)