
Daniel Duckworth
@duck
Followers: 3K · Following: 611 · Media: 27 · Statuses: 333
Research Scientist at Google DeepMind, Berlin. https://t.co/FNRtZRR38w
Berlin, DE
Joined September 2010
Introducing SMERF: a streamable, memory-efficient method for real-time exploration of large, multi-room scenes on everyday devices. Our method brings the realism of Zip-NeRF to your phone or laptop! Project page: ArXiv: (1/n)
RT @ChrisJReiser: Are you at #SIGGRAPH2024 and want to learn how to reconstruct meshes from multi-view images that contain details like in….
RT @zhenjun_zhao: InterNeRF: Scaling Radiance Fields via Parameter Interpolation. @clintonjwang, @PeterHedman3, Polina Golland, @jon_barron….
RT @gigazine: Google announces "SMERF", a new technique achieving high-precision rendering in real time.
gigazine.net
A Google development team has announced "SMERF", a technique that further develops NeRF (Neural Radiance Fields), which generates 3D imagery from photos and video, and MERF (Memory-Efficient Radiance Fields), achieving centimeter-level rendering accuracy while keeping hardware requirements at MERF's level.
RT @MartinNebelong: Seeing the amazing new SMERF technology immediately made me imagine a time when we can walk around in environments like….
RT @bilawalsidhu: Thought NeRFs were dead? Google DeepMind just dropped SMERF — streamable, multi-room NeRFs with cm-level detail. Oh and….
RT @RadianceFields: SMERF from Google Research (again) achieves Zip-NeRF quality, operating at a remarkable 60fps on everyday devices like….
This was joint work with my amazing collaborators: @PeterHedman3, @chrisjreiser, @PeterZhizhin, @jfthibert, @mariolucic_, @RSzeliski, and @jon_barron. Learn more and try SMERF out for yourself at (8/n)
smerf-3d.github.io
Project page for SMERF: Streamable Memory Efficient Radiance Fields for Real-Time Large-Scene Exploration
From the moment NeRF was first published, the research community knew it would be something game-changing. I'm proud to be part of the team turning this amazing line of work into a real product experience!
Immersive View gives users a virtual, close-up look at indoor spaces in 3D! Learn how it uses neural radiance fields to seamlessly fuse photos to produce realistic, multidimensional reconstructions of your favorite businesses and public spaces →
RT @BenMildenhall: Code finally released for our CVPR 2022 papers (mip-NeRF 360/Ref-NeRF/RawNeRF)! You can also find links for each paper's….
I'm stoked to be a contributor on Object SRT, a new method for unsupervised, posed-images-to-3D-scene representation and segmentation! It's crazy fast and, while far from perfect, is leaps and bounds better than anything I've seen yet :)
So excited to share Object Scene Representation Transformer (OSRT): OSRT learns about complex 3D scenes & decomposes them into objects w/o supervision, while rendering novel views up to 3000x faster than prior methods! 🖥️ 📜 1/7
None of this would be possible without my amazing collaborators! @negative_result, @sschoenholz, @ethansdyer, and @jaschasd.