
Ruofei Du
Ruofei Du is the Interactive Perception & Graphics Lead at Google XR, where he drives AI + XR innovations. An established expert in his field, he serves as an Associate Chair on the program committees for both ACM CHI (2021-2026) and UIST (2022-2025), and as an Associate Editor for IEEE Transactions on Circuits and Systems for Video Technology. Dr. Du's extensive contributions include 6 US patents and over 45 peer-reviewed publications in top venues across HCI, Computer Graphics, and Computer Vision (including CHI, SIGGRAPH, UIST, TVCG, CVPR, and ICCV). His research has garnered significant recognition, including a Distinguished Paper Award in ACM IMWUT, Best Paper Awards at SIGGRAPH Web3D 2016 and SIGGRAPH I3D 2024, and multiple Honorable Mention Awards at CHI and TVCG. He holds a Ph.D. in Computer Science from the University of Maryland, College Park. Website: https://duruofei.com
Alternative Bio for Academic Invited Talk
Ruofei Du is a Staff Research Scientist and Manager at Google XR Labs, where he leads AI + XR innovations. His research focuses on interactive perception, computer graphics, and human-computer interaction. He serves on the program committees of both ACM CHI and UIST, and is an Associate Editor for IEEE Transactions on Circuits and Systems for Video Technology. Dr. Du is a prolific researcher and inventor, holding 6 US patents and authoring over 45 peer-reviewed publications in leading venues across HCI, Computer Graphics, and Computer Vision (including CHI, SIGGRAPH, UIST, TVCG, CVPR, and ICCV). His research has consistently been recognized for its excellence, securing a Distinguished Paper Award (ACM IMWUT), Best Paper Awards (SIGGRAPH Web3D 2016, SIGGRAPH I3D 2024), and several Honorable Mentions (CHI, TVCG). He earned his Ph.D. in Computer Science from the University of Maryland, College Park, and a B.S. from the ACM Honored Class at Shanghai Jiao Tong University. Website: https://duruofei.com
Augmented Communication for a Universally Accessible XR
The emerging revolution of generative AI and spatial computing will fundamentally change the way we work and live. However, it remains a challenge to make information universally accessible and, further, to make generative AI and spatial computing useful in our daily lives. In this talk, we delve into a series of innovations in augmented programming, augmented interaction, and augmented communication that aim to make both extended reality and the physical world universally accessible.
With Visual Blocks and InstructPipe, we empower novice users to unleash their inner creativity by rapidly building machine learning pipelines with visual programming and generative AI. With DepthLab, Ad hoc UI, and Finger Switches, we present real-time 3D interactions with depth maps, objects, and micro-gestures. Finally, with CollaboVR, GazeChat, Visual Captions, ThingShare, and ChatDirector, we enrich communication with mid-air sketches, gaze-aware 3D photos, LLM-informed visuals, object-focused views, and co-presented avatars.
We conclude the talk with highlights from the Google I/O Keynote, offering a visionary glimpse into the future of a universally accessible XR.