Sai Bi
@Sai__Bi
Followers: 423 · Following: 215 · Media: 7 · Statuses: 108
Research Scientist @ Adobe Research
San Jose, CA
Joined October 2011
Check out the cool work by our intern @HanshengCh on policy-based distillation for few-step generation.
Excited to announce a new line of work on accelerating generative AI: pi-Flow: Policy-Based Few-Step Generation via Imitation Distillation https://t.co/6ro55E1XGP Distill 20B flow models using just an L2 loss via imitation learning, achieving SOTA diversity and teacher-aligned quality.
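The core recipe described above (a student imitating a teacher flow model under a plain L2 loss) can be illustrated with a toy sketch. This is my own illustration, not the pi-Flow code: the teacher's velocity field, the student parameterization, and all constants here are hypothetical.

```python
import numpy as np

# Toy imitation-distillation sketch (illustrative only, not pi-Flow):
# a "teacher" flow model exposes a velocity field, and a student is
# trained with a plain L2 loss to match the teacher's velocities on
# sampled states. Here the teacher field is v(x) = -x and the student
# is a single learnable scalar `a` predicting a * x.

rng = np.random.default_rng(0)

def teacher_velocity(x):
    return -x  # hypothetical teacher field pulling samples toward 0

a = 0.5   # student parameter; imitation should drive a toward -1
lr = 0.1
for _ in range(200):
    x = rng.normal(size=64)        # states sampled along trajectories
    v_teacher = teacher_velocity(x)
    v_student = a * x
    # L2 imitation loss: mean (v_student - v_teacher)^2; gradient w.r.t. a
    grad = np.mean(2 * (v_student - v_teacher) * x)
    a -= lr * grad

print(round(a, 2))  # converges close to -1.0
```

The point of the sketch is that the student never needs the teacher's training objective, only its outputs, which is what makes a simple L2 regression loss sufficient.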
We found that visual foundation encoders can be aligned to serve as tokenizers for latent diffusion models in image generation! Our new paper introduces a tokenizer training paradigm that produces a semantically rich latent space, improving diffusion model performance.
Wrapped up Stanford CS336 (Language Models from Scratch), taught with an amazing team @tatsu_hashimoto @marcelroed @neilbband @rckpudi. Researchers are becoming detached from the technical details of how LMs work. In CS336, we try to fix that by having students build everything:
I am giving a talk on scalable 3D reconstruction today at 10:55 AM at the 3D-LLM/VLA workshop at CVPR, Room 106A. All are welcome to attend!
3d-llm-vla.github.io
Bridging Language, Vision and Action in 3D Environments. Join us at CVPR 2025 in Nashville, TN, USA to explore the integration of language and 3D perception.
Excited to announce our CVPR 2025 Workshop: 3D Digital Twin: Progress, Challenges, and Future Directions. June 12, 2025 · 9:00 AM to 5:00 PM. Incredible lineup: @rapideRobot, Andrea Vedaldi @Oxford_VGG, @richardzhangsfu, @QianqianWang5, Dr. Xiaoshuai Zhang @Hillbot_AI,
Bored of linear recurrent memories (e.g., linear attention) and want a scalable, nonlinear alternative? Our new paper "Test-Time Training Done Right" proposes LaCT (Large Chunk Test-Time Training), a highly efficient, massively scalable nonlinear memory with: pure PyTorch
I will be attending ICLR in Singapore this week. Feel free to reach out and chat!
Check out the fantastic work by our intern @HanshengCh at Adobe Research. The code and model are publicly available!
Excited to share our work: Gaussian Mixture Flow Matching Models (GMFlow) https://t.co/XWAy2VCJlg GMFlow generalizes diffusion models by predicting Gaussian mixture denoising distributions, enabling precise few-step sampling and high-quality generation.
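The GMFlow idea as described above, predicting a Gaussian mixture over the denoising target rather than a single point, can be sketched in a few lines. This is a hedged toy illustration of the concept, not the released GMFlow code; the component count, logits, and means below are invented for the example.

```python
import numpy as np

# Toy sketch (not the GMFlow implementation): the network outputs
# parameters of a Gaussian mixture over the denoising target for each
# pixel; a point estimate is then the mixture mean, with the mixture
# itself retaining uncertainty that can guide few-step sampling.

def softmax(z):
    z = z - z.max()          # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

def mixture_mean(logits, means):
    """Expected value of a 1-D Gaussian mixture given component
    logits and means (component variances do not affect the mean)."""
    w = softmax(logits)
    return float(np.dot(w, means))

# Hypothetical network outputs for one pixel: 3 mixture components.
logits = np.array([2.0, 0.0, -2.0])
means = np.array([0.8, 0.0, -0.8])
print(round(mixture_mean(logits, means), 3))
```

A single-Gaussian (i.e., standard) denoiser is the special case of one component, which is why a mixture head can be seen as a generalization of the usual diffusion prediction.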
Our paper LVSM has been accepted as an oral presentation at #ICLR2025! See you in Singapore! We've just released the code and checkpoints; check them out here: https://t.co/07Px6Rt2Jn
github.com
[ICLR 2025 Oral] Official code for "LVSM: A Large View Synthesis Model with Minimal 3D Inductive Bias" - Haian-Jin/LVSM
Novel view synthesis has long been a core challenge in 3D vision. But how much 3D inductive bias is truly needed? Surprisingly, very little! Introducing "LVSM: A Large View Synthesis Model with Minimal 3D Inductive Bias", a fully transformer-based approach that enables scalable,
The speaker was fully aware of the implications of her words and the damage they would cause. Yet, instead of preventing harm, she chose to inflict it first and then attempt to repair it with some 'nice' words. That's not acceptable!
Mitigating racial bias in LLMs is a lot easier than removing it from humans! Can't believe this happened at the best AI conference @NeurIPSConf. We have ethics reviews for authors, but none for invited speakers?
Check out the latest work led by @hanzhe_hu. Turbo3D achieves high-quality text-to-3D generation within 0.35 seconds.
Text-to-image generation can already produce high-quality results in the blink of an eye, while text-to-3D still takes much longer. How do we bridge this gap? Introducing "Turbo3D: Ultra-fast Text-to-3D Generation": ultra-fast, high-quality text-to-3D generation in
We've released our paper "Generating 3D-Consistent Videos from Unposed Internet Photos"! Video models like Luma generate pretty videos, but sometimes struggle with 3D consistency. We can do better by scaling them with 3D-aware objectives. 1/N page: https://t.co/Hgu8uo3tvu
Novel view synthesis has long been a core challenge in 3D vision. But how much 3D inductive bias is truly needed? Surprisingly, very little! Introducing "LVSM: A Large View Synthesis Model with Minimal 3D Inductive Bias", a fully transformer-based approach that enables scalable,
LVSM: A Large View Synthesis Model with Minimal 3D Inductive Bias @Haian_Jin, @hanwenjiang1, @HaoTan5, @KaiZhang9546, @Sai__Bi, @tianyuanzhang99, @fujun_luan, @Jimantha, @zexiangxu tl;dr: purely transformer-based large view synthesis https://t.co/bMmqX4fbq1
We have formed a foundation team at Adobe to work on video foundation models with @jianming_zhang_ @Sai__Bi @fujun_luan. I'm excited to see the non-parametric side of 3D: an AI model with strong spatial-temporal capability, alongside the existing parametric 3D representations!
Hate waiting 10 minutes for 3D GS to render your favorite indoor or outdoor scenes? Our feed-forward solution, Long-LRM, cuts it down to just 1 second! With a straightforward mix of Mamba2 and transformer blocks, it scales up to 32 high-res input images. https://t.co/brAgawmtV3
I will be presenting our work on applying a large reconstruction model to Gaussian Splatting (https://t.co/RyXzojjMb0) from sparse images at #ECCV2024 in Milan. Come by our poster at stand 320 on Thursday morning!
Super excited to share this new work on benchmarking LLMs for carbohydrate estimation, a huge daily burden that every patient with diabetes must deal with multiple times a day. Proud of my students for starting to investigate the potential of LLMs in