Relightable and Animatable Neural Avatars from Videos

AAAI 2024
Tsinghua University

Our method reconstructs a relightable avatar from a monocular video.

Abstract

Lightweight creation of 3D digital avatars is a highly desirable but challenging task. Given only sparse videos of a person under unknown illumination, we propose a method to create relightable and animatable neural avatars, which can be used to synthesize photorealistic images of humans under novel viewpoints, body poses, and lighting. The key challenge is to disentangle the geometry and material of the clothed body from the lighting, which becomes even harder due to the complex geometry and shadow changes caused by body motions. To solve this ill-posed problem, we propose novel techniques to better model the geometry and shadow changes. For geometry change modeling, we propose an invertible deformation field, which helps solve the inverse skinning problem and leads to better geometry quality. To model the spatially and temporally varying shading cues, we propose a pose-aware, part-wise light visibility network to estimate light occlusion. Extensive experiments on synthetic and real datasets show that our approach reconstructs high-quality geometry and generates realistic shadows under different body poses.
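The abstract does not spell out implementation details, so as a rough illustration of what a pose-aware, part-wise light visibility network could look like, the PyTorch sketch below conditions a small per-part MLP on a query point, a light direction, and a per-part pose code, and multiplies the per-part visibilities into a single occlusion estimate. All module names, input encodings, layer sizes, and the aggregation rule here are assumptions for illustration, not the authors' released code.

```python
# Hypothetical sketch (not the authors' implementation): a pose-aware,
# part-wise light visibility network. Each body part owns a small MLP that
# maps (query point, light direction, per-part pose) to a visibility value;
# visibilities are combined multiplicatively, so any part can cast a shadow.
import torch
import torch.nn as nn


class PartVisibilityMLP(nn.Module):
    def __init__(self, pose_dim: int, hidden: int = 128):
        super().__init__()
        # Input: 3D query point + 3D light direction + per-part pose vector.
        self.net = nn.Sequential(
            nn.Linear(3 + 3 + pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, light_dir, pose):
        h = torch.cat([x, light_dir, pose], dim=-1)
        # Sigmoid keeps the per-part visibility in [0, 1].
        return torch.sigmoid(self.net(h))


class PartWiseVisibility(nn.Module):
    def __init__(self, num_parts: int, pose_dim: int):
        super().__init__()
        self.parts = nn.ModuleList(
            PartVisibilityMLP(pose_dim) for _ in range(num_parts)
        )

    def forward(self, x, light_dir, part_poses):
        # part_poses: (num_parts, B, pose_dim). A point is lit along a light
        # direction only if no part occludes it, hence the product.
        vis = torch.ones(x.shape[0], 1, device=x.device)
        for mlp, pose in zip(self.parts, part_poses):
            vis = vis * mlp(x, light_dir, pose)
        return vis


if __name__ == "__main__":
    # Toy usage: 1024 query points, 24 SMPL-like parts, 9-D per-part pose code.
    model = PartWiseVisibility(num_parts=24, pose_dim=9)
    x = torch.randn(1024, 3)
    d = torch.nn.functional.normalize(torch.randn(1024, 3), dim=-1)
    poses = torch.randn(24, 1024, 9)
    print(model(x, d, poses).shape)  # torch.Size([1024, 1])
```

The multiplicative aggregation above is only one plausible way to combine per-part occlusion; the paper's actual pose encoding and combination scheme may differ.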

Pipeline

Video

More Results

BibTeX

@inproceedings{lin2024relightable,
  title={Relightable and Animatable Neural Avatars from Videos},
  author={Lin, Wenbin and Zheng, Chengwei and Yong, Jun-Hai and Xu, Feng},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={38},
  number={4},
  pages={3486--3494},
  year={2024}
}