Towards Photorealistic Visual Reconstruction and Generation
Ziyu Wan
■ Abstract
Throughout history, there have been various methods for acquiring photorealistic visual content. In today's rapidly evolving landscape of artificial intelligence, one of the most convenient approaches to generating novel samples is arguably to use learning-based reconstruction and generation frameworks. In this presentation, I will introduce some of our recent efforts on rendering photorealistic images through implicit representations and generative models. Furthermore, we will uncover the correlations between these methodologies, showcasing how they can synergistically enhance each other's capabilities. Ultimately, we aim to demonstrate how these advancements are paving the way towards the creation of high-quality, photorealistic, and continuous visual representations.
■ Bio
Ziyu Wan is a Senior Researcher at Microsoft GenAI, Redmond. He received his PhD from City University of Hong Kong and was a visiting PhD student in the Geometric Computation group at Stanford University, advised by Prof. Leonidas J. Guibas. During his PhD studies, Ziyu did research internships at Google DeepMind, Meta Reality Labs, Tencent AI Lab, and Microsoft Research. Ziyu's research lies at the intersection of computational photography, neural rendering, and generative AI.