
Sai Bi (@Sai__Bi)
Followers: 404 · Following: 197 · Media: 7 · Statuses: 105
Research Scientist @ Adobe Research
San Jose, CA
Joined October 2011
RT @percyliang: Wrapped up Stanford CS336 (Language Models from Scratch), taught with an amazing team @tatsu_hashimoto @marcelroed @neilbba…
RT @flycooler_zd: 🚀 Excited to announce our CVPR 2025 Workshop: 3D Digital Twin: Progress, Challenges, and Future Directions. 🗓 June 12,…
RT @tianyuanzhang99: Bored of linear recurrent memories (e.g., linear attention) and want a scalable, nonlinear alternative? Our new paper…
RT @Haian_Jin: Excited to attend #ICLR2025 in person this year! I’ll be presenting two papers: 1. LVSM: A Large View Synthesis Model with…
Check out the fantastic work by our intern @HanshengCh at Adobe Research. The code and model are publicly available!
Excited to share our work: Gaussian Mixture Flow Matching Models (GMFlow). GMFlow generalizes diffusion models by predicting Gaussian mixture denoising distributions, enabling precise few-step sampling and high-quality generation.
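The tweet above only describes the idea at a high level. As a rough illustration (not the authors' implementation), here is a minimal, hypothetical PyTorch-style sketch of a denoising head that predicts a K-component Gaussian mixture instead of a single denoised estimate and is trained with a mixture negative log-likelihood. The names (GMFlowHead, gmm_nll), the isotropic per-component variance, and all dimensions are assumptions for illustration only.

```python
# Hypothetical sketch of "predict a Gaussian mixture denoising distribution":
# the head outputs mixture weights, per-component means, and log-variances,
# and training minimizes the mixture negative log-likelihood of the target.
import math
import torch
import torch.nn as nn

class GMFlowHead(nn.Module):
    """Predicts a K-component Gaussian mixture (isotropic variances) over the target."""
    def __init__(self, feat_dim: int, data_dim: int, num_components: int = 8):
        super().__init__()
        self.K, self.D = num_components, data_dim
        self.logits = nn.Linear(feat_dim, self.K)           # mixture weights (logits)
        self.means = nn.Linear(feat_dim, self.K * self.D)   # per-component means
        self.log_var = nn.Linear(feat_dim, self.K)          # per-component log-variance

    def forward(self, h):
        B = h.shape[0]
        return (self.logits(h),
                self.means(h).view(B, self.K, self.D),
                self.log_var(h))

def gmm_nll(logits, means, log_var, target):
    """Negative log-likelihood of `target` under the predicted Gaussian mixture."""
    _, K, D = means.shape
    diff2 = ((target.unsqueeze(1) - means) ** 2).sum(-1)                    # (B, K)
    log_comp = -0.5 * (diff2 / log_var.exp() + D * log_var + D * math.log(2 * math.pi))
    log_mix = torch.logsumexp(torch.log_softmax(logits, dim=-1) + log_comp, dim=-1)
    return -log_mix.mean()

# Toy usage: features from some backbone -> mixture prediction -> NLL loss.
head = GMFlowHead(feat_dim=256, data_dim=64, num_components=8)
h = torch.randn(4, 256)      # stand-in for backbone features at a noise level
x0 = torch.randn(4, 64)      # stand-in for the clean denoising target
logits, means, log_var = head(h)
loss = gmm_nll(logits, means, log_var, x0)
loss.backward()
```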
RT @Haian_Jin: Our paper LVSM has been accepted as an oral presentation at #ICLR2025! See you in Singapore! We’ve just released the code…
The speaker was fully aware of the implications of her words and the damage they would cause. Yet, instead of preventing harm, she chose to inflict it first and then attempt to repair it with some 'nice' words. That’s not acceptable!
Mitigating racial bias from LLMs is a lot easier than removing it from humans! Can’t believe this happened at the best AI conference @NeurIPSConf. We have ethical reviews for authors, but missed it for invited speakers? 😡
Check out the latest work led by @hanzhe_hu. Turbo3D achieves high-quality text-to-3D generation within 0.35 seconds.
Text-to-image generation can already generate high-quality results in the blink of an eye, while text-to-3D still requires a much longer time. How do we bridge this gap? Introducing "Turbo3D: Ultra-fast Text-to-3D Generation" — ultra-fast high-quality text-to-3D generation in…
RT @gene_ch0u: We've released our paper "Generating 3D-Consistent Videos from Unposed Internet Photos"! Video models like Luma generate pre…
RT @Haian_Jin: Novel view synthesis has long been a core challenge in 3D vision. But how much 3D inductive bias is truly needed? —Surprisin…
RT @zhenjun_zhao: LVSM: A Large View Synthesis Model with Minimal 3D Inductive Bias. @Haian_Jin, @hanwenjiang1, @HaoTan5, @KaiZhang9546, @S…
RT @KaiZhang9546: We have formed a foundation team in Adobe to work on video foundation models with @jianming_zhang_ @Sai__Bi @fujun_luan. I’…
RT @chenziwee: Hate waiting 10 minutes for 3D GS to render your favorite indoor or outdoor scenes? ⏳ Our feed-forward solution, Long-LRM, c…
RT @YaoQin_UCSB: 🥰 Super excited to share this new work on benchmarking LLMs for carbohydrate estimation, which is a huge daily burden that…
RT @YaoQin_UCSB: Litian did this amazing work in designing this efficient and effective OOD detector based on decision boundary 👏👍 Come to…
Come join our CVPR workshop on 3D Foundation Models on June 18! Check the list of speakers at
Join us at our first workshop on 3D Foundation Models @CVPR2024, June 18 in Summit 434, starting at 8:50AM! We have fantastic speakers to discuss the progress and prospects in 3D foundation models. Check out more details at