
Marcel Bühler
@mc_buehler
Followers 275 · Following 245 · Media 23 · Statuses 91
PhD candidate @ait_eth, @eth_en
Zurich, Switzerland
Joined September 2020
A purely synthetic face prior can generalize to casual in-the-wild captures and stylized faces 🤯. I'm happy to share our @SIGGRAPHAsia paper “Cafca: High-quality Novel View Synthesis of Expressive Faces from Casual Few-shot Captures.”
Reconstructing scenes in the wild with dynamic appearances. Check out the videos on the project page. Great work by @jonaskulhanek and colleagues!
I will be presenting the WildGaussians paper this afternoon at @NeurIPSConf. To learn more about our paper, visit me at poster stand #1307 in East Exhibit Hall, 16:30–19:30.
RT @Msadat97: Excited to present our recent work "LiteVAE: Lightweight and Efficient Variational Autoencoders for Latent Diffusion Models"….
One of the first works to use text inputs for hand-object grasp generation. Check it out at @SIGGRAPHAsia!
A model that generates hand-object interactions based on text prompts! I'm happy to share our @SIGGRAPHAsia paper “DiffH2O: Diffusion-Based Synthesis of Hand-Object Interactions from Textual Descriptions”.
@SIGGRAPHAsia Project Page: · Dataset: · Paper:
github.com: syntec-research/Cafca
Yesterday, we presented Cafca, our latest work on casual few-shot face captures, at @SIGGRAPHAsia in Tokyo. Check out our talk. We release a dataset of 1.7 million multi-view, multi-environment, multi-expression face images for research.
Our talk starts very soon. Join us in Hall B5 (2) @SIGGRAPHAsia.
We will present Cafca this week @SIGGRAPHAsia.
Dec 3rd: i) Fast-forward at 9:36, #56 Hall C; ii) Interactive Discussion Session, 11:55–12:15, B5 Lobby, B Block, Level 5, Table 3.
Dec 4th: iii) 10-min talk between 10:45 and 11:55, Hall B5 (2), B Block, Level 5.
Interested in synthesizing realistic hand-object interaction data? Here is the SOTA method for multi-hand-object interaction from text prompts. Check it out at @SIGGRAPHAsia.
Great collaboration between @GoogleARVR and @ait_eth at @ETH_en with Gengyan Li, @errollw, @lhlmgr, @XuChen71058062, Tanmay Shah, Daoye Wang, Stephan Garbin, Sergio Orts-Escolano, @OHilliges, @DmitryLagun, Jérémy Riviere, Paulo Gotardo, Thabo Beeler, Abhimitra Meka, and @KripasindhuSar7.
Our synthetic dataset with over 1.7 million images is available for research.
The model generalizes to strong facial expressions, difficult lighting, and even stylized characters.
The core idea is to pre-train a synthetic face prior and fine-tune it on real inputs at inference time. The only real inputs the model ever sees are three images of the captured person.
We reconstruct detailed faces from only three images. The only equipment required is a handheld camera.
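The pretrain-then-fine-tune idea from this thread can be sketched in a few lines. This is a toy illustration only, not the Cafca implementation: the linear "renderer", the loss, and the data are hypothetical stand-ins for the real model, chosen just to show the two stages (a prior fixed offline, then test-time adaptation on only three captures).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1 (stand-in): "prior" weights, as if pretrained on a large
# synthetic corpus. In the real system this would be a face model.
prior_weights = rng.normal(size=(8,))

def render(weights, views):
    # Toy differentiable "renderer": a linear map from view features.
    return views @ weights

def fine_tune(weights, views, targets, lr=0.05, steps=200):
    # Stage 2: at inference time, adapt the prior to the few real inputs
    # by gradient descent on a reconstruction (MSE) loss.
    w = weights.copy()
    for _ in range(steps):
        residual = render(w, views) - targets      # per-view error
        grad = views.T @ residual / len(targets)   # MSE gradient
        w -= lr * grad
    return w

# Only three "real" captures are ever seen, mirroring the paper's setting.
views = rng.normal(size=(3, 8))
targets = rng.normal(size=(3,))

tuned = fine_tune(prior_weights, views, targets)
err_before = np.mean((render(prior_weights, views) - targets) ** 2)
err_after = np.mean((render(tuned, views) - targets) ** 2)
```

After fine-tuning, the reconstruction error on the three inputs drops well below that of the unadapted prior, which is the whole point of test-time adaptation from a strong prior.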
RT @luminohope: I will be giving a Featured Sessions talk at SIGGRAPH ASIA in Tokyo on our efforts for building a foundation 3D digital hum….
RT @NaveenManwani17: 🚨Paper Alert 🚨. ➡️Paper Title: MagicMirror: Fast and High-Quality Avatar Generation with a Constrained Search Space. 🌟….
RT @sammy_j_c: Interested in reconstructing and generating digital humans? Check out our group's papers at #ECCV2024 this week!.
RT @zc_alexfan: Estimating poses for egocentric hand-(object) interaction is challenging due to occlusion, camera distortion and motion b….