CVPR 2023 Tutorial on

Skull Restoration, Facial Reconstruction and Expression

9:30 AM - 11:50 AM, June 18, 2023


Tutorial Lecturers

Prof. Xin Li, Dr. Lan Xu, Dr. Yu Ding

Overview

This tutorial focuses on the problems of reconstructing a 3D model of a human face, possibly from a fragmented skull, and then generating facial expressions. Faces are a specific category of objects with distinctive patterns of identity and expression, and techniques developed for them can inform general problems of reconstruction and modeling. The tutorial is composed of three parts: facial reconstruction from skeletal remains, 4D dynamic facial performance capture with high-quality physically-based textures, and audio-driven talking face generation. We describe these three parts in more detail below.
  • Face modeling (generation and editing) is a fundamental technique with broad applications in animation, vision, games, and VR. While recent data-driven face modeling techniques have achieved great success, facial geometry is fundamentally governed by the underlying skull and tissue structures. This session covers the forensic task of facial reconstruction from skeletal remains, in which the skull, tissue, and face are studied together. We will discuss how to model anthropological features, restore fragmented skulls, and reconstruct human faces upon them. The feature modeling, correspondence, and constrained face generation and editing techniques involved are general and can benefit many other computer vision tasks.
  • In the second part, we will detail state-of-the-art systems and methods for capturing 4D dynamic facial performance, which provides the supervised ground-truth data that underpins face modeling and rendering. We will consider the hardware design choices for cameras, sensors, and lighting, and describe all the steps needed to obtain dynamic facial geometry along with high-quality physically-based textures, including pore-level diffuse albedo, specular intensity, and normal maps. We will discuss the two traditional and complementary workhorses for recovering facial performances: multi-view stereo and photometric stereo. We will also show how to combine conventional capture pipelines with advances in neural rendering to construct neural facial assets. This session also covers recent trends in incorporating medical imaging to capture physically plausible and biologically correct facial performance.
  • In the third part, we will present talking face generation and its industrial applications. Talking face generation aims to create a video of a speaker with authentic facial expressions, synchronized with an input speech signal. Talking face video has been employed in many applications, such as computer games, intelligent assistants, and virtual reality. The face identity may come from a predefined 3D virtual character, a single image, or a few minutes of video of a specific speaker. This session will discuss the generation of 3D animation parameters and 2D photo-realistic video, as well as the relationship between them. It covers research progress in the academic community as well as industrial applications in online education and computer games, aiming to provide fundamental knowledge, crucial technologies, and robust solutions.
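As a primer for the capture session above, the photometric-stereo workhorse can be illustrated with a minimal sketch (not from the tutorial materials; the function name and toy data are hypothetical). Under a Lambertian model, each pixel's intensity under m known directional lights satisfies a linear system whose least-squares solution yields the albedo and surface normal:

```python
import numpy as np

def photometric_stereo(intensities, light_dirs):
    """Classical Lambertian photometric stereo (illustrative sketch).

    intensities: (m, n) stack of m observations for n pixels
    light_dirs:  (m, 3) unit light directions
    Solves light_dirs @ g = intensities per pixel; g = albedo * normal.
    """
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)  # (3, n)
    albedo = np.linalg.norm(g, axis=0)
    normals = g / np.clip(albedo, 1e-12, None)  # unit surface normals
    return albedo, normals

# Synthetic check: one pixel with a known normal and albedo, no noise.
n_true = np.array([0.0, 0.0, 1.0])
rho = 0.8
L = np.array([[0.0, 0.0, 1.0],
              [0.6, 0.0, 0.8],
              [0.0, 0.6, 0.8]])
I = rho * (L @ n_true)            # Lambertian shading
albedo, normals = photometric_stereo(I[:, None], L)
```

In practice, multi-view stereo recovers the coarse geometry while photometric cues of this kind refine per-pixel normals and reflectance, which is why the two methods are complementary.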
Schedule

9:30-10:15 AM   Skull Restoration       Prof. Xin Li
10:15-10:20 AM  Break
10:20-11:00 AM  Facial Reconstruction   Dr. Lan Xu
11:00-11:05 AM  Break
11:05-11:50 AM  Facial Expression       Dr. Yu Ding