Peter Hedman

- Senior Research Scientist, Google DeepMind (London) · Email: [firstname].j.[lastname] · Twitter · Google Scholar

Hi! I'm Peter Hedman, a researcher at Google DeepMind in London.

My focus is on creating immersive 3D experiences from easy-to-capture footage of real places. For example, you may want to scan and digitally revisit your childhood home, or capture a VR-ready 3D panorama of the places you visit during a vacation.

Formally, my research interests are view synthesis, image-based rendering, neural rendering and real-time graphics.


Publications

Binary Opacity Grids: Capturing Fine Geometric Detail for Mesh-Based View Synthesis
Christian Reiser, Stephan Garbin, Pratul Srinivasan, Dor Verbin, Richard Szeliski, Ben Mildenhall, Jonathan T. Barron, Peter Hedman*, Andreas Geiger*
SIGGRAPH 2024
SMERF: Streamable Memory Efficient Radiance Fields for Real-Time Large-Scene Exploration
Daniel Duckworth*, Peter Hedman*, Christian Reiser, Peter Zhizhin, Jean-François Thibert, Mario Lučić, Richard Szeliski, Jonathan T. Barron
SIGGRAPH 2024
Eclipse: Disambiguating Illumination and Materials using Unintended Shadows
Dor Verbin, Ben Mildenhall, Peter Hedman, Jonathan T. Barron, Todd Zickler, Pratul Srinivasan
CVPR 2024 (Oral)
Inpaint3D: 3D Scene Content Generation using 2D Inpainting Diffusion
Kira Prabhu*, Jane Wu*, Lynn Tsai*, Peter Hedman, Dan B Goldman, Ben Poole, Michael Broxton
arXiv 2023
Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields
Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul Srinivasan, Peter Hedman
ICCV 2023 (Oral, Best Paper Finalist)
Vox-E: Text-guided Voxel Editing of 3D Objects
Etai Sella, Gal Fiebelman, Peter Hedman, Hadar Averbuch-Elor
ICCV 2023
MERF: Memory-Efficient Radiance Fields for Real-time View Synthesis in Unbounded Scenes
Christian Reiser, Richard Szeliski, Dor Verbin, Pratul Srinivasan, Ben Mildenhall, Andreas Geiger, Jonathan T. Barron, Peter Hedman
SIGGRAPH 2023
BakedSDF: Meshing Neural SDFs for Real-Time View Synthesis
Lior Yariv*, Peter Hedman*, Christian Reiser, Dor Verbin, Pratul Srinivasan, Richard Szeliski, Jonathan T. Barron, Ben Mildenhall
SIGGRAPH 2023
MobileNeRF: Exploiting the Polygon Rasterization Pipeline for Efficient Neural Field Rendering on Mobile Architectures
Zhiqin Chen, Thomas Funkhouser, Peter Hedman, Andrea Tagliasacchi
CVPR 2023 (Best Paper Award Candidate)
AligNeRF: High-Fidelity Neural Radiance Fields via Alignment-Aware Training
Yifan Jiang, Peter Hedman, Ben Mildenhall, Dejia Xu, Jonathan T. Barron, Zhangyang Wang, Tianfan Xue
CVPR 2023
NeRF in the Dark: High Dynamic Range View Synthesis from Noisy Raw Images
Ben Mildenhall, Peter Hedman, Ricardo Martin-Brualla, Pratul Srinivasan, Jonathan T. Barron
CVPR 2022 (Oral)
Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields
Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan and Peter Hedman
CVPR 2022 (Oral)
Ref-NeRF: Structured View-Dependent Appearance for Neural Radiance Fields
Dor Verbin, Peter Hedman, Ben Mildenhall, Todd Zickler, Jonathan T. Barron and Pratul P. Srinivasan
CVPR 2022 (Oral, Best Student Paper Honorable Mention), TPAMI 2024
HyperNeRF: A Higher-Dimensional Representation for Topologically Varying Neural Radiance Fields
Keunhong Park, Utkarsh Sinha, Peter Hedman, Jonathan T. Barron, Sofien Bouaziz, Dan B. Goldman, Ricardo Martin-Brualla and Steven M. Seitz
SIGGRAPH Asia 2021
Baking Neural Radiance Fields for Real-Time View Synthesis
Peter Hedman, Pratul P. Srinivasan, Ben Mildenhall, Jonathan T. Barron and Paul Debevec
ICCV 2021 (Oral), TPAMI 2024
Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields
Jonathan T. Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla and Pratul P. Srinivasan
ICCV 2021 (Oral, Best Paper Honorable Mention)
Immersive Light Field Video with a Layered Mesh Representation
Michael Broxton, John Flynn, Ryan Overbeck, Daniel Erickson, Peter Hedman, Matthew DuVall, Jason Dourgarian, Jay Busch, Matt Whalen and Paul Debevec
SIGGRAPH 2020
Image-Based Rendering of Cars using Semantic Labels and Approximate Reflection Flow
Simon Rodriguez, Siddhant Prakash, Peter Hedman and George Drettakis
I3D 2020
Deep Blending for Free-Viewpoint Image-Based Rendering
Peter Hedman, Julien Philip, True Price, Jan-Michael Frahm, George Drettakis and Gabriel Brostow
SIGGRAPH Asia 2018
Instant 3D Photography
Peter Hedman and Johannes Kopf
SIGGRAPH 2018
Casual 3D Photography
Peter Hedman, Suhib Alsisan, Richard Szeliski and Johannes Kopf
SIGGRAPH Asia 2017
Scalable Inside-Out Image-Based Rendering
Peter Hedman, Tobias Ritschel, George Drettakis and Gabriel Brostow
SIGGRAPH Asia 2016
Sequential Monte Carlo Instant Radiosity
Peter Hedman, Tero Karras and Jaakko Lehtinen
I3D 2016, TVCG 2017
Multi-view Reconstruction of Highly Specular Surfaces in Uncontrolled Environments
Clément Godard*, Peter Hedman*, Wenbin Li and Gabriel J. Brostow
3DV 2015 (Oral) *Joint first authors.


Experience

May 2024 - present Google DeepMind, Senior Research Scientist, London U.K.
Working on Neural Radiance Fields.
May 2022 - May 2024 Google, Senior Research Scientist, London U.K.
Worked on Neural Radiance Fields.
Nov 2020 - May 2022 Google, Research Scientist, London U.K.
Worked on Neural Radiance Fields.
Nov 2019 - Nov 2020 Google, Research Scientist, Los Angeles USA.
Worked on Immersive Light Field Video with a Layered Mesh Representation (SIGGRAPH 2020).
Jun 2017 - Aug 2018 Pro Unlimited, Contingent worker @ Facebook (Computational photography group), London U.K.
Developed Instant 3D Photography (SIGGRAPH 2018).
Jun 2016 - Jun 2017 Facebook, PhD research intern (Computational photography group), Seattle USA.
Developed Casual 3D Photography (SIGGRAPH Asia 2017).
Dec 2013 - May 2014 NVIDIA, research intern (NVIDIA research), Helsinki Finland.
Developed Sequential Monte Carlo Instant Radiosity (I3D 2016, TVCG 2017).
Jan 2012 - Nov 2013 NVIDIA, Systems software engineer (mobile browser team), Helsinki Finland.
Worked on a heavily multithreaded C++ codebase involving HTML5 Canvas, WebGL, WebKit, Skia and OpenGL ES2.
May 2011 - Jan 2012 NVIDIA, software engineering intern (Flash3D team), Helsinki Finland.
Optimized Flash3D for NVIDIA Tegra.


Awards & Press

Dec 2021 - Jan 2022 Publicity for HyperNeRF (Two Minute Papers, Gigazine, 80 Level).
Mar 2021 - Apr 2021 Publicity for Baking Neural Radiance Fields for Real-Time View Synthesis (Marktechpost, Synced).
Jun 2020 - Jan 2021 Publicity for Immersive Light Field Video with a Layered Mesh Representation (TechCrunch, Two Minute Papers, Photonics, UploadVR, VRTimes, TechXplore, Hackaday, SIGGRAPH Blog, Immersive Pavilion BEST IN SHOW).
May 2018 - Jun 2018 Publicity for Instant 3D Photography (TechCrunch, Business Wire, F8 Conference keynote - 1:08:45).
Apr 2017 Publicity for Casual 3D Photography (Mark Zuckerberg keynote at F8 - 11:45, F8 day 2 keynote - 46:00).
Nov 2016 The Finnish Academic Association for Mathematics and Natural Sciences (MAL) prize for the most distinguished Master's thesis.
Apr 2016 Rabin Ezra Scholarship for doctoral students in computer graphics, imaging and vision.


Service

Area Chair / Papers Committee: CVPR (2024), SIGGRAPH (2023, 2022), EGSR (2022, 2021, session chair), HPG (2024, 2021)
Reviewer: SIGGRAPH, SIGGRAPH Asia, CVPR, TOG, TVCG, 3DV, Computers & Graphics, Eurographics, Pacific Graphics


Education

Aug 2014 - Jul 2019 PhD in Computer Science (University College London)
Thesis: Viewpoint-Free Photography for Virtual Reality.
Supervisors: Gabriel Brostow and Tobias Ritschel.
Jan 2012 - Jun 2015 Master's degree in Computer Science (University of Helsinki)
Thesis: Sequential Monte Carlo Instant Radiosity.
Sep 2009 - Jan 2012 Bachelor's degree in Computer Science (University of Helsinki)
Thesis: Triangle-based and voxel-based rendering in real-time graphics (in Swedish).


Teaching

2014, 2015, 2017 Teaching assistant, Computer Graphics (University College London)