Talk by George Drettakis - Geometry-Guided Neural Rendering for Real and Synthetic Scenes

On Tuesday, December 5, at 11:00, George Drettakis, Senior Researcher at the INRIA Sophia-Antipolis research center and head of the GRAPHDECO research group, will join us for a talk. His research covers a wide range of topics in computer graphics, most recently Neural Rendering. Dr. Drettakis has received the Outstanding Technical Contributions award from the Eurographics Association, and his work is funded by an ERC Advanced Grant. His presentation, titled "Geometry-Guided Neural Rendering for Real and Synthetic Scenes", will take place in room Τ203 of the Troias building. A short biographical note and an abstract of the presentation follow.

Short biographical information:

George Drettakis graduated in Computer Science from the University of Crete, Greece, and obtained an M.Sc. and a Ph.D. (1994) at the University of Toronto, under E. Fiume. After an ERCIM postdoc in Grenoble, Barcelona and Bonn, he obtained an INRIA researcher position in Grenoble in 1995 and his "Habilitation" at the University of Grenoble (1999). He then founded the REVES research group at INRIA Sophia-Antipolis, now heads the follow-up group GRAPHDECO, and is an INRIA Senior Researcher (full professor equivalent). He received the Eurographics (EG) Outstanding Technical Contributions award in 2007 and the prestigious ERC Advanced Grant in 2019, and is an EG fellow. He was associate editor of ACM Trans. on Graphics, technical papers chair of SIGGRAPH Asia 2010, and co-chair of the Eurographics IPC in 2002 & 2008, and chairs the EG working group on Rendering (EGSR). He has worked on many different topics in computer graphics, with an emphasis on rendering. He initially concentrated on lighting and shadow computation, and subsequently worked on 3D audio, perceptually-driven algorithms, virtual reality and 3D interaction. He has also worked on textures, weathering and perception for graphics, and in recent years on image-based and neural rendering/relighting as well as deep material acquisition.

Presentation abstract:

Neural rendering has advanced at an outstanding pace in recent years, with the advent of Neural Radiance Fields, typically based on volumetric ray-marching, as well as (3D) generative models. In this talk, we investigate alternative approaches that build on the explicit use of geometry to guide neural renderers, for both captured and synthetic scenes. We will start with a short historical perspective on our work on image-based and neural rendering over the last 20+ years, outlining the guiding principles that led to our recent work.

The majority of neural rendering methods focus on captured scenes. We first present a sequence of three point-based rasterization methods for novel view synthesis. We briefly discuss differentiable point splatting and how we extended it in our first approach, which enhances points with neural features and optimizes geometry to correct reconstruction errors. We next overview our second method, which handles scenes with highly reflective objects using two multi-layer perceptrons (MLPs): one learns the motion of reflections and the other performs the final rendering of captured scenes. We then discuss our recent third method, which provides the first high-quality real-time rendering for novel view synthesis using a novel 3D scene representation based on 3D Gaussians and fast GPU rasterization.

The power of neural rendering illustrated by these methods can also be applied in interesting ways to the rendering of synthetic scenes, by overfitting to the (variable) illumination of a given scene and by using generative models to create 3D textures. We discuss our method to learn complex global illumination effects for scenes with variations (moving lights, geometry, materials, viewpoint). The focus of this approach is how to perform Active Exploration of the space of variable scene configurations, significantly accelerating and improving training using on-the-fly ground-truth data generation. We then briefly review our recent work that applies generative models to meso-scale texture generation and control. We will conclude the talk with perspectives on neural rendering for both synthetic and captured content.
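To make the 3D Gaussian representation mentioned above concrete, here is a minimal NumPy sketch of its two core forward steps: projecting an anisotropic 3D Gaussian to a 2D screen-space Gaussian (the EWA-style linearization of the perspective projection), and front-to-back alpha compositing of depth-sorted splats at a pixel. This is only an illustration under simplified assumptions, not the paper's differentiable tile-based CUDA rasterizer; all function and parameter names are ours, not the authors'.

```python
import numpy as np

def project_gaussian(mean3d, cov3d, world_to_cam, focal):
    """Project a 3D Gaussian to screen space (EWA-style approximation).

    `world_to_cam` is a 3x4 rigid transform, `focal` a pinhole focal length
    (illustrative names). Returns the 2D mean, 2x2 covariance, and depth.
    """
    p = world_to_cam[:, :3] @ mean3d + world_to_cam[:, 3]
    x, y, z = p
    # Jacobian of the perspective projection, evaluated at the Gaussian mean.
    J = np.array([[focal / z, 0.0, -focal * x / z**2],
                  [0.0, focal / z, -focal * y / z**2]])
    W = world_to_cam[:, :3]
    cov2d = J @ W @ cov3d @ W.T @ J.T        # 2x2 screen-space covariance
    return np.array([focal * x / z, focal * y / z]), cov2d, z

def render_pixel(pixel, gaussians):
    """Front-to-back alpha compositing of projected Gaussians at one pixel.

    Each entry is (mean2d, cov2d, depth, rgb, opacity), opacity in [0, 1).
    """
    color, transmittance = np.zeros(3), 1.0
    for mean2d, cov2d, depth, rgb, opacity in sorted(gaussians, key=lambda g: g[2]):
        d = pixel - mean2d
        # Gaussian falloff of opacity away from the projected center.
        alpha = opacity * np.exp(-0.5 * d @ np.linalg.solve(cov2d, d))
        color += transmittance * alpha * rgb
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:  # early termination once nearly opaque
            break
    return color

# Illustrative usage: one red Gaussian one metre in front of the camera.
g = project_gaussian(np.array([0.0, 0.0, 1.0]), 0.01 * np.eye(3),
                     np.hstack([np.eye(3), np.zeros((3, 1))]), focal=500.0)
print(render_pixel(np.array([0.0, 0.0]), [(*g, np.array([1.0, 0.0, 0.0]), 0.8)]))
```

In the actual method this forward model runs per screen tile on the GPU, and gradients flow back through the compositing to the Gaussian means, covariances, opacities and colors; the sketch only shows the rendering model itself.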

References:

S. Diolatzis et al. 2023, "MesoGAN: Generative Neural Reflectance Shells", Computer Graphics Forum (EGSR).

B. Kerbl, G. Kopanas et al. 2023, "3D Gaussian Splatting for Real-Time Radiance Field Rendering", ACM Trans. on Graphics (SIGGRAPH).

S. Diolatzis et al. 2022, "Active Exploration for Neural Global Illumination of Variable Scenes", ACM Trans. on Graphics.

G. Kopanas et al. 2022, "Neural Point Catacaustics for Novel-View Synthesis of Reflections", ACM Trans. on Graphics (SIGGRAPH Asia).

G. Kopanas et al. 2021, "Point-Based Neural Rendering with Per-View Optimization", Computer Graphics Forum (EGSR).