Felix Taubner
@taubnerfelix
Followers: 452 · Following: 373 · Media: 8 · Statuses: 88
Intern @ Meta | PhD Student at University of Toronto, working on generative 3D animation and digital humans
Toronto
Joined October 2022
Introducing MVP4D! MVP4D turns a single reference image into animatable 360-degree avatars!
Project: https://t.co/6oTPEGqI5B
Paper: https://t.co/lOSSjjUp8h
Code releases Nov 15th
At @SpAItial_AI, we're building world models that transform generative AI! We are hiring across several roles! Reach out and check out https://t.co/pnma7QJuXm (of course also if you were impacted by the recent Meta layoffs)
‼️500h of 3D motion data released‼️ Our team at the Codec Avatars Lab just released a large-scale dataset of 3D tracked human motion, including audio and text annotations. Check it out here: https://t.co/ZnRxNjtJx1
today we're open-sourcing Krea Realtime. this 14B autoregressive model is 10x larger than any open-source equivalent, and it can generate long-form videos at 11 fps on a single B200. weights and technical report below 👇
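As a rough illustration of what "autoregressive video model" means in the announcement above, here is a toy sketch of chunk-by-chunk generation. Everything here (the tiny stand-in denoiser, the chunk and context sizes) is a hypothetical illustration, not Krea's actual model or API:

```python
import numpy as np

CHUNK = 4        # frames produced per autoregressive step
CONTEXT = 2      # trailing frames fed back in as conditioning
H, W = 8, 8      # tiny spatial size for the toy model

def fake_denoiser(context: np.ndarray, n_frames: int) -> np.ndarray:
    """Stand-in for the real diffusion model: returns n_frames
    frames that drift smoothly from the mean of the context."""
    base = context.mean(axis=0) if len(context) else np.zeros((H, W))
    return np.stack([base + 0.01 * (i + 1) for i in range(n_frames)])

def generate(total_frames: int) -> np.ndarray:
    """Grow the video one chunk at a time, each chunk conditioned
    on the tail of the frames generated so far."""
    frames = np.zeros((0, H, W))
    while len(frames) < total_frames:
        ctx = frames[-CONTEXT:]            # feedback window
        chunk = fake_denoiser(ctx, CHUNK)  # one autoregressive step
        frames = np.concatenate([frames, chunk])
    return frames[:total_frames]

video = generate(10)
print(video.shape)  # (10, 8, 8)
```

The loop structure is the point: because each step only conditions on a short trailing window, the video can be extended indefinitely at a fixed per-frame cost, which is what makes long-form generation at a steady frame rate possible.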
📢 Bolt3D will be presented @ICCVConference! Catch us at the poster sessions:
Main conference: Thu 23rd Oct 14.45–16.45
Invited workshops:
FOUND workshop: Sun 19th Oct 11.30–12.30
AI3DCC workshop: Mon 20th Oct 10.10–11.00
⚡️ Introducing Bolt3D ⚡️ Bolt3D generates interactive 3D scenes in less than 7 seconds on a single GPU from one or more images. It features a latent diffusion model that *directly* generates 3D Gaussians of seen and unseen regions, without any test-time optimization. 🧵👇 (1/9)
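To make "directly generates 3D Gaussians, without test-time optimization" concrete, here is a minimal sketch of the *output representation* such a feed-forward model produces: one forward pass maps an image to per-pixel Gaussian parameters. The layout and names below are assumptions for illustration, not Bolt3D's actual code:

```python
import numpy as np

H, W = 4, 4
N = H * W  # one Gaussian per pixel in this toy layout

rng = np.random.default_rng(0)
image = rng.random((H, W, 3))

def predict_gaussians(img: np.ndarray) -> dict:
    """Stand-in for the latent diffusion model's decoder head:
    a single forward pass, no per-scene optimization loop."""
    n = img.shape[0] * img.shape[1]
    return {
        "means": rng.standard_normal((n, 3)),      # xyz centers
        "scales": np.full((n, 3), 0.05),           # per-axis extent
        "quats": np.tile([1.0, 0, 0, 0], (n, 1)),  # identity rotations
        "opacity": np.full((n, 1), 0.8),
        "colors": img.reshape(n, 3),               # splat color from pixel
    }

splats = predict_gaussians(image)
print(splats["means"].shape, splats["colors"].shape)  # (16, 3) (16, 3)
```

Contrast this with standard 3DGS, which fits these same parameters by gradient descent per scene; predicting them in one pass is what brings the runtime under a few seconds.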
Check out Multispectral Demosaicing via Dual Cameras, an #ICCV2025 Spotlight💡💡! In the future, cameras won't just see color: they'll read health, understand materials, and recognize life. Multispectral sensors are coming to your phone! Our work helps pave the way.
I've been sitting on this for a while – thrilled to share! Huge shoutout to the team @RuihangZhang, @TuliMathieu, @SherwinBahmani and @DaveLindell
Step 1: Multi-view video diffusion – We generate clips of the subject from many viewpoints with controllable pose & expressions, and show how to train a model with limited 360° data.
Step 2: 4D avatar fitting – We distill those multi-view videos into a real-time, 360° avatar.
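The two-step recipe above can be sketched as a pipeline. The function names and data shapes here are placeholders to show the flow of data, not the released MVP4D API:

```python
def multiview_video_diffusion(ref_image, pose_seq, expr_seq, n_views=8):
    """Step 1: generate clips of the subject from many viewpoints,
    conditioned on driving pose and expression sequences."""
    return [
        {"view": v,
         "frames": [(ref_image, p, e) for p, e in zip(pose_seq, expr_seq)]}
        for v in range(n_views)
    ]

def fit_4d_avatar(multiview_clips):
    """Step 2: distill the generated multi-view videos into a single
    real-time, 360-degree avatar representation."""
    return {"n_views_used": len(multiview_clips),
            "n_frames": len(multiview_clips[0]["frames"])}

clips = multiview_video_diffusion("ref.png",
                                  pose_seq=[0, 1, 2],
                                  expr_seq=["a", "b", "c"])
avatar = fit_4d_avatar(clips)
print(avatar)  # {'n_views_used': 8, 'n_frames': 3}
```

The design point is the split itself: the diffusion model handles generalization (novel views from one image), while the fitting stage converts slow generative samples into a representation fast enough to render in real time.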
📢📢📢 Triangle Splatting+: Differentiable Rendering with Opaque Triangles.
✅ Project: https://t.co/3i8JWLLiIf
✅ ArXiv: https://t.co/Rl2RylGHH7
⚠️ Code released on October 8th
Big thanks to @SGiebenhain et al. for their help and for providing the code for their Pixel3DMM face tracker.
github.com: [Official Code] Pixel3DMM: Versatile Screen-Space Priors for Single-Image 3D Face Reconstruction – SimonGiebenhain/pixel3dmm
✨ Pro tip: Use @NanoBanana 🍌 to turn your photo into a cartoon 🎨, then feed it into CAP4D to animate it in 3D! 📺
Excited to release the full inference code of 🧢CAP4D🧢! Generate animatable 4D avatars from any image(s) + driving video. 🤩Also works on stylized photos!
Code: https://t.co/DFJmZHKCtB
Project page: https://t.co/l6hRa5jYko
📢 Lyra: Generative 3D Scene Reconstruction via Video Diffusion Model Self-Distillation. Got only one or a few images and wondering if recovering the 3D environment is a reconstruction or a generation problem? Why not do it with a generative reconstruction model! We show that a…
Every lens leaves a blur signature: a hidden fingerprint in every photo. In our new #TPAMI paper, we show how to learn it fast (5 mins of capture!) with Lens Blur Fields ✨ With it, we can tell apart "identical" phones by their optics, deblur images, and render realistic blurs.
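The core idea of a "blur field" is a point-spread function (PSF) that varies with position on the image plane. Below, a hand-written analytic field stands in for the learned network (the paper fits one from real captures); the specific falloff is invented purely for illustration:

```python
import numpy as np

def psf(x: float, y: float, size: int = 5) -> np.ndarray:
    """Gaussian blur kernel whose width grows toward the image
    corners, mimicking a lens that is sharpest on-axis.
    (x, y) are normalized image coordinates in [0, 1]."""
    r = np.hypot(x - 0.5, y - 0.5)   # distance from image center
    sigma = 0.5 + 2.0 * r            # blur grows off-axis
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax[None, :] ** 2 + ax[:, None] ** 2) / (2 * sigma ** 2))
    return k / k.sum()               # normalize to unit energy

center = psf(0.5, 0.5)  # sharp: tightly peaked kernel
corner = psf(0.0, 0.0)  # soft: energy spread over the kernel
print(center.max() > corner.max())  # True
```

Because two "identical" phone models end up with measurably different fields, the learned PSFs act as a fingerprint of the individual lens, and the same field can be inverted for deblurring or applied forward to render realistic blur.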
📢📢Want to build 3D Foundation Models?📢📢 ⚡️We're looking for Diffusion/3D/ML/Infra engineers and scientists in Munich & London. Get in touch and apply: https://t.co/atwcWtTyV5
#GenAI #foundationmodels #worldmodels #diffusion #transformers
(1/6) We are thrilled to announce that "AAA-Gaussians: Anti-Aliased and Artifact-Free 3D Gaussian Rendering" was accepted as a highlighted poster to #ICCV2025 TLDR: Enabling efficient training and rendering of 3DGS scenes without popping, distortion, and aliasing artifacts.
We are hiring a robotics lab technician at @UofT to help us build one of the world's best places for robotics research and education! 1/n
📢Thrilled to share that I'll be joining Harvard and the Kempner Institute as an Assistant Professor starting Fall 2026! I'll be recruiting students this year for the Fall 2026 admissions cycle. Hope you apply!
We are thrilled to share the appointment of @QianqianWang5 as a #KempnerInstitute Investigator! She will bring her expertise in computer vision to @Harvard. Read the announcement: https://t.co/Aoh6A5gp9B
@hseas #AI #ComputerVision
Currently, only a few preprocessed subjects are available for inference. I am planning to make it work for any image or video very soon, so stay tuned!