3D-Aware Semantic-Guided Generative Model for Human Synthesis
This asset proposes a 3D-aware Semantic-Guided Generative Model (3D-SGAN) for human image synthesis, which combines a Generative Neural Radiance Field (GNeRF) with a texture generator. Without requiring additional 3D supervision, the model learns a 3D human representation that supports photo-realistic, controllable generation.
Main Characteristics
The 3D-aware Semantic-Guided Generative Model (3D-SGAN) for human image synthesis combines:
- a Generative Neural Radiance Field (GNeRF) that learns an implicit 3D representation of the human body and renders a set of 2D semantic segmentation masks.
- a texture generator that translates these semantic masks into a realistic image, adding photo-realistic texture to the human appearance.
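The two-stage pipeline above can be sketched as follows. This is a minimal, hypothetical NumPy stand-in, not the actual 3D-SGAN implementation: `gnerf_semantic_masks` and `texture_generator` are illustrative stubs with random linear layers, and the latent sizes, camera parametrization, and class count are assumptions chosen only to show the data flow (geometry latent + camera → semantic masks → RGB image).

```python
import numpy as np

def gnerf_semantic_masks(z_geo, camera, H=8, W=8, n_classes=4, seed=0):
    # Hypothetical stand-in for the GNeRF stage: maps a geometry latent
    # and a camera pose to per-pixel semantic class probabilities.
    rng = np.random.default_rng(seed)
    feat = np.concatenate([z_geo, camera])
    Wm = rng.standard_normal((feat.size, H * W * n_classes)) * 0.1
    logits = (feat @ Wm).reshape(H, W, n_classes)
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)  # softmax over classes

def texture_generator(masks, z_tex, seed=1):
    # Hypothetical stand-in for the texture stage: maps semantic masks
    # plus a texture latent to an RGB image in [0, 1].
    rng = np.random.default_rng(seed)
    H, W, C = masks.shape
    Wc = rng.standard_normal((C + z_tex.size, 3)) * 0.1
    tex = np.broadcast_to(z_tex, (H, W, z_tex.size))
    inp = np.concatenate([masks, tex], axis=-1)
    return 1.0 / (1.0 + np.exp(-(inp @ Wc)))  # sigmoid -> RGB

# Toy latents and camera pose (sizes are illustrative assumptions).
z_geo, z_tex = np.zeros(8), np.zeros(4)
camera = np.zeros(6)
masks = gnerf_semantic_masks(z_geo, camera)   # shape (8, 8, 4)
image = texture_generator(masks, z_tex)       # shape (8, 8, 3)
```

Because the two stages are separated, the geometry latent `z_geo` and camera pose control pose and viewpoint through the masks, while `z_tex` varies appearance without changing the underlying body shape.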
Research areas
Integrative AI
Technical Categories
Computer vision
Last updated
26.01.2024 - 10:38