Bai Jiao, reporting from Aofei Temple
QbitAI | WeChat official account QbitAI
The Vision Pro is too expensive to play with for now, so when will we be able to fast-forward to immersive 3D games?
Now, with a new study, real-time manipulation of objects generated by 3D Gaussian Splatting in VR is a big step closer to reality.
For example, handle this iron horse carelessly and things quickly turn into glitchy meme material......
Or take taming this fierce dog.
Led by Chenfanfu Jiang of UCLA's Artificial Intelligence and Visual Computing Lab, an all-Chinese team of researchers from the University of Hong Kong, Zhejiang University, Style3D, CMU, the University of Utah, and Amazon proposed a VR system called VR-GS.
Among the authors is a familiar big name in graphics: Huamin Wang.
Let's take a look at how it works.
Playing with 3D Gaussian Splatting in VR
In this study, the team made three main contributions:
- A high-fidelity immersive VR system that was developed and extensively evaluated.
- Real-time 3D content interaction: the system is engineered with a human-centered focus.
- Full system integration combining 3D Gaussian Splatting, scene segmentation, inpainting, a real-time physics-based solver, and a novel rendering-geometry embedding algorithm.
At its core, of course, is the proposed VR-GS itself: a physics-aware, interactive VR system.
As the name suggests, it integrates 3D Gaussian Splatting (GS) with eXtended Position-Based Dynamics (XPBD), the latter being a highly adaptable and stable physics simulator for real-time deformation.
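To make the XPBD side concrete, here is a minimal sketch of one XPBD substep over distance constraints, the compliance-based constraint projection that makes this family of solvers stable at real-time rates. This is an illustrative toy, not the paper's solver (which also handles volume, strain, and collision constraints); all function and parameter names here are my own.

```python
import numpy as np

def xpbd_step(x, v, inv_mass, edges, rest_len, compliance, dt, iters=10):
    """One XPBD substep: predict positions, iteratively project
    distance constraints with compliance, then update velocities."""
    x_pred = x + dt * v                 # explicit position prediction
    lam = np.zeros(len(edges))          # per-constraint Lagrange multipliers
    alpha = compliance / dt**2          # XPBD compliance term (0 = rigid)
    for _ in range(iters):
        for k, (i, j) in enumerate(edges):
            d = x_pred[i] - x_pred[j]
            dist = np.linalg.norm(d)
            if dist < 1e-9:
                continue
            c = dist - rest_len[k]      # constraint violation
            n = d / dist                # constraint gradient direction
            w = inv_mass[i] + inv_mass[j]
            dlam = (-c - alpha * lam[k]) / (w + alpha)
            lam[k] += dlam
            x_pred[i] += inv_mass[i] * dlam * n
            x_pred[j] -= inv_mass[j] * dlam * n
    v_new = (x_pred - x) / dt           # velocities from position change
    return x_pred, v_new
```

With `compliance = 0` the constraints behave rigidly; a small positive compliance makes them soft springs, which is what gives XPBD its adaptability across material stiffnesses.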
Because the simulation and rendering processes use different geometric representations, the simulator cannot be applied directly to the 3D Gaussian kernels.
To resolve this, the researchers constructed a tetrahedral cage, embedding each segmented set of Gaussian kernels into the corresponding mesh. The mesh deformation, driven by XPBD, then drives the deformation of the GS kernels.
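The standard way to couple an embedded point to a deforming tetrahedral cage is barycentric interpolation: precompute the point's weights in its rest-pose tetrahedron, then reconstruct it from the deformed vertices each frame. A minimal sketch (function names are my own; the paper's embedding also transforms the Gaussians' covariances, not just their centers):

```python
import numpy as np

def tet_barycentric(p, tet_verts):
    """Barycentric coordinates of point p w.r.t. a tetrahedron
    whose 4 vertices are the rows of tet_verts (rest pose)."""
    # Edge matrix from vertex 0; solve for the last three weights.
    T = np.column_stack([tet_verts[1] - tet_verts[0],
                         tet_verts[2] - tet_verts[0],
                         tet_verts[3] - tet_verts[0]])
    b123 = np.linalg.solve(T, p - tet_verts[0])
    return np.concatenate([[1.0 - b123.sum()], b123])

def deform_point(bary, deformed_tet_verts):
    """Reconstruct the embedded point from the deformed cage."""
    return bary @ deformed_tet_verts
```

Because the weights sum to one, rigid motions of the cage carry the embedded point along exactly, and smooth cage deformations interpolate it smoothly.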
Starting from multi-view images, the pipeline cleverly combines Gaussian-kernel scene reconstruction, segmentation, and inpainting, and further incorporates collision detection and shadow casting.
They also observed that naive embedding produces spike artifacts in the Gaussian kernels.
They therefore propose a two-stage embedding: each Gaussian kernel is embedded into a local tetrahedron, and the vertices of that local tetrahedron are in turn independently embedded into the global mesh.
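The two-stage idea chains two barycentric interpolations: a Gaussian center is interpolated from its local tetrahedron, whose four vertices are themselves interpolated from (possibly different) global-mesh tetrahedra, so a single badly deformed global element no longer pins the kernel directly. A hypothetical sketch under that reading, with all precomputed weights passed in (names are my own, not the paper's):

```python
import numpy as np

def two_stage_deform(local_bary, vert_global_bary, vert_global_tets,
                     global_verts_deformed):
    """local_bary: (4,) weights of the Gaussian center in its local tet.
    vert_global_bary: (4, 4) weights of each local-tet vertex in its
    own global tet; vert_global_tets: four index arrays into the
    global vertex list; global_verts_deformed: (N, 3) deformed mesh."""
    # Stage 1: recover each local-tet vertex from the deformed global mesh.
    local_verts = np.array([
        b @ global_verts_deformed[tet]
        for b, tet in zip(vert_global_bary, vert_global_tets)
    ])
    # Stage 2: recover the Gaussian center from the deformed local tet.
    return local_bary @ local_verts
```

Since both stages are affine with weights summing to one, the composition stays smooth, which is consistent with the reported reduction in spike artifacts.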
The difference is quite noticeable.
In end-user feedback, the system received positive ratings for ease of use, latency, functionality, and overall satisfaction.
Research team
The study was conducted by researchers from UCLA, HKU, Zhejiang University, Style3D, the University of Utah, CMU, and Amazon.
Chenfanfu Jiang of UCLA's Artificial Intelligence and Visual Computing Lab led the team, with Ying Jiang, Chang Yu, Tianyi Xie, and Xuan Li contributing equally.
If you're interested, check out the link below~
https://yingjiang96.github.io/VR-GS/