

D3GA - Drivable 3D Gaussian Avatars [3DV2025]
Meta Reality Labs Research1, Technical University of Darmstadt2
Max Planck Institute for Intelligent Systems, Tübingen, Germany3
Actor 1 | Test | 360°. From left to right: joint angles, predicted body cage, predicted upper cage, predicted lower cage, 3D Gaussians, garment parts, and the final image.

We present Drivable 3D Gaussian Avatars (D3GA), the first 3D controllable model for human bodies rendered with Gaussian splats. Current photorealistic drivable avatars require either accurate 3D registrations during training, dense input images during testing, or both. The ones based on neural radiance fields also tend to be prohibitively slow for telepresence applications.

This work uses the recently presented 3D Gaussian Splatting (3DGS) technique to render realistic humans at real-time framerates, using dense calibrated multi-view videos as input. To deform those primitives, we depart from the commonly used point deformation method of linear blend skinning (LBS) and use a classic volumetric deformation method: cage deformations. Given their smaller size, we drive these deformations with joint angles and keypoints, which are more suitable for communication applications. Our experiments on nine subjects with varied body shapes, clothes, and motions obtain higher-quality results than state-of-the-art methods when using the same training and test data.
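The cage deformations mentioned above move each point by interpolating the displacement of its enclosing cage vertices, rather than blending per-bone transforms as LBS does. A minimal sketch of this idea for a single tetrahedral cell, written in numpy; this is an illustration of barycentric cage deformation under stated assumptions, not the authors' implementation, and the function names are hypothetical:

```python
import numpy as np

def tet_barycentric(p, verts):
    """Barycentric coordinates of point p w.r.t. a tetrahedron (verts: 4x3)."""
    # Solve the 3x3 system given by the edge vectors for the last three
    # coordinates; the first follows from the partition-of-unity constraint.
    T = (verts[1:] - verts[0]).T                 # 3x3 edge matrix
    w123 = np.linalg.solve(T, p - verts[0])
    return np.concatenate(([1.0 - w123.sum()], w123))

def deform(p, rest_verts, posed_verts):
    """Carry p from the rest-pose cage to the posed cage."""
    w = tet_barycentric(p, rest_verts)           # weights in the rest cage
    return w @ posed_verts                       # same weights, posed vertices
```

Because barycentric interpolation reproduces affine maps exactly, rigid motion of the cage moves embedded points rigidly, while non-rigid cage edits deform them smoothly.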


Actor 2 | Test | 360°
Given a multi-view video, D3GA learns drivable photo-realistic 3D human avatars, represented as a composition of 3D Gaussians embedded in tetrahedral cages. The Gaussians are transformed by those cages, colorized with an MLP, and rasterized as splats. We represent the drivable human as a layered set of 3D Gaussians, allowing us to decompose the avatar into its different cloth layers.
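Transforming a 3D Gaussian by its cage affects both its mean and its anisotropic covariance: under an affine map with Jacobian J, the covariance becomes J Σ Jᵀ. A small numpy sketch of this step for one tetrahedral cell, again as an assumed illustration rather than the paper's actual pipeline:

```python
import numpy as np

def tet_deformation_gradient(rest_verts, posed_verts):
    """Affine deformation gradient J of a single tetrahedron (verts: 4x3)."""
    Dr = (rest_verts[1:] - rest_verts[0]).T      # rest edge matrix, 3x3
    Dp = (posed_verts[1:] - posed_verts[0]).T    # posed edge matrix, 3x3
    return Dp @ np.linalg.inv(Dr)

def transform_gaussian(mean, cov, rest_verts, posed_verts):
    """Carry a 3D Gaussian along with its deforming tetrahedral cell."""
    J = tet_deformation_gradient(rest_verts, posed_verts)
    new_mean = posed_verts[0] + J @ (mean - rest_verts[0])
    new_cov = J @ cov @ J.T                      # covariance under affine map
    return new_mean, new_cov
```

If the cage undergoes a pure rotation R, J equals R and the covariance is rotated in place, which is the expected behavior for a rigidly moving splat.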

Video


Actor 3 | Test | 360°
Actor 4 | Test | 360°
Actor 5 | Test | 360°
Actor 6 | Test | 360°
Actor 7 | Test | 360°
Actor 8 | Test | 360°
Actor 9 | Test | 360°

BibTeX

@inproceedings{zielonka25dega,
  title     = {Drivable 3D Gaussian Avatars},
  author    = {Wojciech Zielonka and Timur Bagautdinov and Shunsuke Saito and
               Michael Zollhöfer and Justus Thies and Javier Romero},
  booktitle = {International Conference on 3D Vision (3DV)},
  month     = {March},
  year      = {2025}
}
*Work done while Wojciech Zielonka was an intern at Reality Labs Research, Pittsburgh, PA, USA.