Yu Yin’s PhD Proposal Review
February 27, 2023 @ 1:00 pm - 2:00 pm
Committee:
Prof. Yun Fu (Advisor)
Prof. Sarah Ostadabbas
Prof. Ming Shao
Abstract:
The community has long enjoyed the benefits of synthesized data, which provides a reliable and controllable source for training machine learning models while reducing the need for real-world data collection. Human face and body synthesis is especially appealing to research communities, where model fairness and ethical deployment are critical concerns. However, generating digital humans that are convincing, realistic-looking, identity-preserving, and high-quality remains challenging in both 2D and 3D image synthesis.
This dissertation investigates the potential for understanding human behavior by recreating it, and can be broadly divided into three sections.

(1) In Section One, we explore 2D image generation models and their interaction with face applications (e.g., landmark localization and face recognition). Specifically, super-resolution (SR) and landmark localization of tiny faces are highly correlated tasks. To this end, we propose joint frameworks that enable face alignment and SR to benefit from one another, enhancing the performance of both tasks (see the first sketch below). Moreover, we demonstrate that face frontalization provides an effective and efficient means of face data augmentation and further improves face recognition performance in extreme pose scenarios.

(2) In Section Two, we explore 3D parametric generative models and how they support human body pose and shape estimation. Advancing technology to monitor our bodies and behavior while sleeping and resting is essential for healthcare, yet key challenges arise from our tendency to rest under blankets. To mitigate the negative effects of blanket occlusion, we use an attention-based restoration module to explicitly reduce the uncertainty of occluded parts by generating uncovered modalities, which further update the current estimate in a cyclic fashion (see the second sketch below).

(3) In Section Three, we explore 3D NeRF-based generative models for producing high-quality images with consistent 3D geometry. We propose a universal method to surgically fine-tune these NeRF-GAN models to achieve high-fidelity animation of real subjects from only a single image.
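To make the mutual-benefit idea in Section One concrete, below is a minimal PyTorch sketch of a joint SR/landmark loop: predicted landmark heatmaps guide the SR branch, and the sharper SR output in turn yields better heatmaps. All module names, layer sizes, and the two-pass schedule are illustrative assumptions, not the proposal's actual architecture.

```python
# Illustrative sketch only: a toy joint face SR + landmark localization loop.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRBranch(nn.Module):
    """Upsamples a low-res face, conditioned on landmark heatmaps (assumed design)."""
    def __init__(self, n_landmarks=68):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3 + n_landmarks, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3 * 4, 3, padding=1),
            nn.PixelShuffle(2),  # 2x spatial upsampling
        )

    def forward(self, lr, heatmaps):
        return self.body(torch.cat([lr, heatmaps], dim=1))

class LandmarkBranch(nn.Module):
    """Predicts landmark heatmaps from the current SR estimate."""
    def __init__(self, n_landmarks=68):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, n_landmarks, 3, padding=1),
        )

    def forward(self, img):
        return torch.sigmoid(self.body(img))

# One mutual-benefit cycle: heatmaps guide SR; the SR result refines the heatmaps.
lr = torch.randn(1, 3, 16, 16)         # tiny low-res face
heatmaps = torch.zeros(1, 68, 16, 16)  # uninformative initial guess
sr_net, lm_net = SRBranch(), LandmarkBranch()
for _ in range(2):
    sr = sr_net(lr, heatmaps)                            # (1, 3, 32, 32)
    heatmaps = F.interpolate(lm_net(sr), size=(16, 16))  # feed back at LR scale
print(sr.shape, heatmaps.shape)
```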
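Similarly, the cyclic restore-then-re-estimate idea in Section Two can be sketched as follows. Only the control flow mirrors the described approach; the attention module internals, the pose parameterization, and the fixed two-iteration cycle are placeholder assumptions.

```python
# Illustrative sketch only: cyclic occlusion restoration + pose re-estimation.
import torch
import torch.nn as nn

class AttnRestorer(nn.Module):
    """Attention-based module that generates an 'uncovered' view of a covered body."""
    def __init__(self, ch=3):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=ch, num_heads=1, batch_first=True)
        self.out = nn.Linear(ch, ch)

    def forward(self, covered):
        b, c, h, w = covered.shape
        tokens = covered.flatten(2).transpose(1, 2)       # (B, H*W, C)
        restored, _ = self.attn(tokens, tokens, tokens)   # self-attention over pixels
        return self.out(restored).transpose(1, 2).reshape(b, c, h, w)

class PoseEstimator(nn.Module):
    """Regresses body pose/shape parameters (e.g., an SMPL-like vector)."""
    def __init__(self, n_params=82):
        super().__init__()
        self.net = nn.Sequential(
            nn.AdaptiveAvgPool2d(8), nn.Flatten(), nn.Linear(3 * 8 * 8, n_params)
        )

    def forward(self, img):
        return self.net(img)

covered = torch.randn(1, 3, 64, 64)       # blanket-occluded observation
restorer, estimator = AttnRestorer(), PoseEstimator()
pose = estimator(covered)                 # initial estimate from the covered input
for _ in range(2):                        # cyclic refinement
    uncovered = restorer(covered)         # reduce uncertainty of occluded parts
    pose = estimator(uncovered)           # update the estimate from the restored view
print(pose.shape)
```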