Peter Hedman Profile
Peter Hedman

@PeterHedman3

Followers: 1K · Following: 320 · Media: 2 · Statuses: 104

Researcher at Google DeepMind, working on radiance fields and view synthesis. Pixels in, pixels out!

Joined March 2021
@jon_barron
Jon Barron
1 month
The generative 3D/video corner of Google DeepMind that I run in is now hiring research scientists. If you're on the market for full-time roles in that space, email us! barron@google.com
5
30
260
@joshwoodward
Josh Woodward
4 months
Project Starline is going from research to reality. It will now be called Google Beam! Our advanced AI doesn't just connect you—it feels like it teleports you in stunning, lifelike 3D. The first devices from @HP launch later this year for early customers!
9
18
227
@philipphenzler
Philipp Henzler
4 months
We cooked something exciting up for you! 🧑‍🍳 Your vision, brought to life: Transform any reference image(s) into videos exactly as you envision them and even star in them yourself. This has been so much fun to work on with an amazing team: @tkipf, @sserenazz, @YuliaRubanova,
5
8
31
@threejs
Three.js
5 months
Say goodbye to installs.
249
215
4K
@StanSzymanowicz
Stan Szymanowicz
6 months
⚡️ Introducing Bolt3D ⚡️ Bolt3D generates interactive 3D scenes in less than 7 seconds on a single GPU from one or more images. It features a latent diffusion model that *directly* generates 3D Gaussians of seen and unseen regions, without any test time optimization. 🧵👇 (1/9)
28
93
530
@MohammedAmr1
Mohamed Sayed
6 months
Splats are great, but wouldn’t it be wonderful if we could transform them into the stuff of dreams? Introducing our #CVPR2025 paper ✨Morpheus✨! Morpheus lets you change the shape and appearance of 3D Gaussian splats with a few words! https://t.co/16O5v18IRd (1/6)
2
56
241
@GalFiebelman
Gal Fiebelman
6 months
Excited to announce that "4-LEGS: 4D Language Embedded Gaussian Splatting" has been accepted to #Eurographics2025! 🎉 We connect language with a 4D Gaussian Splatting representation to enable spatiotemporal localization using just text prompts! https://t.co/2e074a27di [1/7]
6
11
49
@alexandertmai
Alexander Mai
7 months
The code for EVER: Exact Volumetric Ellipsoid Rendering has finally been released! To recap our paper, EVER is an exact, real-time volume rendering method, capable of achieving the highest quality results on large scenes! Code:
github.com
Original reference implementation of "EVER: Exact Volumetric Ellipsoid Rendering for Real-time View Synthesis" - half-potato/ever_training
3
16
103
@poolio
Ben Poole
7 months
Brush🖌️ is now a competitive 3D Gaussian Splatting engine for real-world data and supports dynamic scenes too! Check out the release notes here: https://t.co/aKErBGrCC6
12
63
385
@QianqianWang5
Qianqian Wang
8 months
Introducing CUT3R! An online 3D reasoning framework for many 3D tasks directly from just RGB. For static or dynamic scenes. Video or image collections, all in one!
7
113
635
@kiaran_ritchie
Kiaran Ritchie
9 months
New "Meshtron" paper from Nvidia produces "artist-like" meshes from point clouds. This is huge for producing usable assets from scan data or text-to-model diffusion systems. Could even be a powerful retopo tool for artist modeling workflows. https://t.co/Vg7jV2ap8c
15
157
1K
@jin_linyi
Linyi Jin
9 months
Introducing 👀Stereo4D👀 A method for mining 4D from internet stereo videos. It enables large-scale, high-quality, dynamic, *metric* 3D reconstructions, with camera poses and long-term 3D motion trajectories. We used Stereo4D to make a dataset of over 100k real-world 4D scenes.
16
105
534
@broxtronix
Michael Broxton
9 months
Today we are pleased to present Quark, our new method for real-time, high-resolution, and generalized neural view synthesis! We will be giving our talk at #SIGGRAPHAsia2024 in Hall B5 (2) at 10:45am. 🚀 Quark renders 1080p @ 30fps on a single A100 GPU 🌎 Generalizes across
0
3
11
@dorverbin
Dor Verbin
10 months
We’ll be presenting NeRF-Casting at SIGGRAPH Asia next week! NeRF-Casting enables photorealistic rendering of scenes with highly reflective surfaces—something that was previously impossible with models like Zip-NeRF and 3DGS. (1/6)
9
53
381
@PeterHedman3
Peter Hedman
10 months
This is a great opportunity if you're interested in view synthesis and light. I've had lots of fun working with Julien and Paul!
@JulienPhilip2
Julien Philip
10 months
Our research group at Eyeline Studios - Powered by Netflix is hiring research interns for summer 2025. We create transformative tools for visual storytelling, read more about the positions and apply here: https://t.co/SnRSt0x3B7 (1/2)
0
0
1
@XuanLuo14
Xuan Luo
10 months
Excited to share our latest paper: Quark – Real-time, high-resolution, and generalized neural view synthesis! 🚀 🎥 1080p @ 30fps 🌍 Generalizes across diverse scenes & datasets 🏆 Achieves real-time quality rivaling even offline methods 🔗 Project page: https://t.co/no48RZMWcV
4
32
165
@PeterHedman3
Peter Hedman
10 months
This project blew my mind! It looks just as good as 3DGS but it has no per-scene optimization or reconstruction. Every frame is generated *from scratch* and it nevertheless runs in real-time.
3
17
144
@PeterHedman3
Peter Hedman
10 months
Come work with us on view synthesis and inverse rendering!
@jon_barron
Jon Barron
10 months
Our group at Google DeepMind is now accepting intern applications for summer 2025. Attached is the official "call for interns" email; the links and email aliases that got lost in the screenshot are below.
0
0
9
@PeterHedman3
Peter Hedman
11 months
This feels like a glimpse of the future. You can train 3DGS scenes from scratch directly in your browser! I had to pinch myself to check if I was dreaming when I tried this.
0
0
8
@alexandertmai
Alexander Mai
11 months
Our new paper performs exact volume rendering at 30FPS@720p, giving us the highest detail 3D-consistent NeRF! Paper: https://t.co/CRtvXC69s1 Website: https://t.co/EbVmedJp0U
14
75
494