Daniel Duckworth Profile
Daniel Duckworth

@duck

Followers
3K
Following
614
Media
27
Statuses
333

Research Scientist at Google DeepMind, Berlin. https://t.co/FNRtZRR38w

Berlin, DE
Joined September 2010
@duck
Daniel Duckworth
2 years
Introducing SMERF: a streamable, memory-efficient method for real-time exploration of large, multi-room scenes on everyday devices. Our method brings the realism of Zip-NeRF to your phone or laptop! Project page: ArXiv: (1/n)
23
191
857
@duck
Daniel Duckworth
7 days
I had the pleasure last month of giving a slightly provocative talk at Bliss AI, a wonderful student-run organization here in Berlin. The best part: the talk is online for everyone to enjoy! Behold: "NeRF is dead".
www.youtube.com
We are excited to feature Daniel Duckworth, Senior Research Software Engineer at Google DeepMind, who will discuss "Radiance Fields are Dead (and why that’s ...
0
1
4
@duck
Daniel Duckworth
1 year
RT @ChrisJReiser: Are you at #SIGGRAPH2024 and want to learn how to reconstruct meshes from multi-view images that contain details like in….
0
31
0
@duck
Daniel Duckworth
1 year
RT @zhenjun_zhao: InterNeRF: Scaling Radiance Fields via Parameter Interpolation. @clintonjwang, @PeterHedman3, Polina Golland, @jon_barron….
0
9
0
@duck
Daniel Duckworth
2 years
RT @MartinNebelong: Seeing the amazing new SMERF technology immediately made me imagine a time when we can walk around in environments like….
0
46
0
@duck
Daniel Duckworth
2 years
RT @bilawalsidhu: Thought NeRFs were dead? Google DeepMind just dropped SMERF — streamable, multi-room NeRFs with cm-level detail. Oh and….
0
92
0
@duck
Daniel Duckworth
2 years
RT @RadianceFields: SMERF from Google Research (again) achieves Zip-NeRF quality, operating at a remarkable 60fps on everyday devices like….
0
67
0
@duck
Daniel Duckworth
2 years
This has been joint work with my amazing collaborators: @PeterHedman3, @chrisjreiser, @PeterZhizhin, @jfthibert, @mariolucic_, @RSzeliski, and @jon_barron. Learn more and try SMERF out yourself at (8/n)
smerf-3d.github.io
Project page for SMERF: Streamable Memory Efficient Radiance Fields for Real-Time Large-Scene Exploration
2
4
26
@duck
Daniel Duckworth
2 years
The result: a set of compact, streaming-ready submodels ready to run at up to 60 fps in your browser. The best part: you can try it out yourself: (7/n)
1
1
16
@duck
Daniel Duckworth
2 years
Only a single submodel needs to be in memory at a time, and while the user explores the space, we swap out old submodels and stream in new ones. We train submodels to be mutually consistent, making transitions barely noticeable. (6/n)
1
0
20
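A minimal sketch of the streaming idea from the tweet above: keep only the submodel covering the camera's current region in memory, and swap when the camera crosses into a new region. The loader callback, the axis-aligned grid of regions, and all names here are illustrative assumptions, not the SMERF viewer code.

```python
# Sketch of single-submodel streaming: only the submodel covering the camera's
# current region stays resident; crossing a region boundary triggers a swap.
# `load_fn` and the grid layout are hypothetical stand-ins.
from typing import Any, Callable, Optional, Tuple

class SubmodelStreamer:
    def __init__(self, load_fn: Callable[[Tuple[int, ...]], Any], region_size: float):
        self.load_fn = load_fn          # fetches and decodes one submodel's assets
        self.region_size = region_size  # side length of each submodel's region
        self.active_key: Optional[Tuple[int, ...]] = None
        self.active_submodel: Any = None

    def _region_of(self, position: Tuple[float, ...]) -> Tuple[int, ...]:
        # Map a world-space position to the index of the submodel covering it.
        return tuple(int(c // self.region_size) for c in position)

    def update(self, camera_position: Tuple[float, ...]) -> Any:
        key = self._region_of(camera_position)
        if key != self.active_key:
            # Drop the old submodel and stream in the one for the new region.
            self.active_submodel = self.load_fn(key)
            self.active_key = key
        return self.active_submodel

# Usage: swaps happen only when the camera walks into a new region.
streamer = SubmodelStreamer(load_fn=lambda key: f"submodel{key}", region_size=4.0)
for x in (0.5, 3.9, 4.1, 9.0):
    print(streamer.update((x, 0.0, 1.5)))
```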
@duck
Daniel Duckworth
2 years
We also modify MERF to significantly improve visual fidelity on small-to-medium size scenes. Our submodels capture thin geometry, high-resolution textures, and specular highlights better than ever before. (5/n)
1
1
17
@duck
Daniel Duckworth
2 years
How do we achieve this? We distill a teacher model into a family of MERF-like student submodels, each of which specializes to a different part of the scene. Each submodel captures the entire scene, so rendering stays fast and GPU memory consumption stays low. (4/n)
1
0
22
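A toy sketch of the distillation step described in the tweet above: a tiny student model (a single sigmoid layer standing in for a MERF-like submodel) is fit to colors produced by a frozen teacher over samples in one submodel's region. The architecture, data, and training loop are illustrative assumptions, not the paper's implementation.

```python
# Toy photometric distillation: fit a small "student" so its colors match a
# frozen "teacher" at sampled 3D points. The teacher here is a random stand-in
# for Zip-NeRF renders.
import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform(-1.0, 1.0, size=(1024, 3))       # samples in this submodel's region
teacher_rgb = rng.uniform(0.0, 1.0, size=(1024, 3))   # colors from the frozen teacher

# Student: rgb = sigmoid(points @ W + b), trained with an L2 distillation loss.
W = np.zeros((3, 3))
b = np.zeros(3)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(200):
    pred = sigmoid(points @ W + b)
    err = pred - teacher_rgb                 # gradient of the L2 loss w.r.t. pred (up to a constant)
    grad_logits = err * pred * (1.0 - pred)  # chain rule through the sigmoid
    W -= 0.5 * points.T @ grad_logits / len(points)
    b -= 0.5 * grad_logits.mean(axis=0)

print("distillation loss:", float(np.mean((sigmoid(points @ W + b) - teacher_rgb) ** 2)))
```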
@duck
Daniel Duckworth
2 years
SMERF has the best of both worlds: we produce renders nearly indistinguishable from Zip-NeRF while rendering at 60 fps or more on desktops, laptops, and even recent smartphones, all while scaling to scenes as big as a house! (3/n)
2
5
48
@duck
Daniel Duckworth
2 years
Existing approaches for view-synthesis are torn between two conflicting goals: high quality and fast rendering. Most methods only achieve one or the other. (2/n)
1
1
14
@duck
Daniel Duckworth
2 years
While this blog post may only have two authors, the project itself is the hard work of a number of amazing teammates. Take a peek at the "Acknowledgments" section -- you may spot a few familiar names :).
0
0
2
@duck
Daniel Duckworth
2 years
From the moment NeRF was first published, the research community knew it would be something game-changing. I'm proud to be part of the team turning this amazing line of work into a real product experience!
@GoogleAI
Google AI
2 years
Immersive View gives users a virtual, close-up look at indoor spaces in 3D! Learn how it uses neural radiance fields to seamlessly fuse photos to produce realistic, multidimensional reconstructions of your favorite businesses and public spaces →
2
0
1
@duck
Daniel Duckworth
3 years
RT @BenMildenhall: Code finally released for our CVPR 2022 papers (mip-NeRF 360/Ref-NeRF/RawNeRF)! You can also find links for each paper's….
0
96
0
@duck
Daniel Duckworth
3 years
I'm stoked to be a contributor on Object SRT, a new method for unsupervised, posed-images-to-3D-scene representation and segmentation! It's crazy fast and, while far from perfect, is leaps and bounds better than anything I've seen yet :).
@tkipf
Thomas Kipf
3 years
So excited to share Object Scene Representation Transformer (OSRT): OSRT learns about complex 3D scenes & decomposes them into objects w/o supervision, while rendering novel views up to 3000x faster than prior methods! 🖥️ 📜 1/7
0
1
7
@duck
Daniel Duckworth
5 years
None of this would be possible without my amazing collaborators! @negative_result, @sschoenholz, @ethansdyer, and @jaschasd.
0
0
3