Max Fu

@letian_fu

Followers 771 · Following 180 · Media 22 · Statuses 129

scaling robotics. Intern @NVIDIA. PhD student @UCBerkeley @berkeley_ai. Prev @Apple @autodesk

Berkeley, CA
Joined August 2012
@letian_fu
Max Fu
4 days
RT @xiuyu_l: Sparsity can make your LoRA fine-tuning go brrr 💨. Announcing SparseLoRA (ICML 2025): up to 1.6-1.9x faster LLM fine-tuning (2….
@letian_fu
Max Fu
6 days
RT @hanna_mao: We build Cosmos-Predict2 as a world foundation model for Physical AI builders — fully open and adaptable. Post-train it for….
@letian_fu
Max Fu
20 days
RT @SawyerMerritt: Waymo in a new blog post: "We conducted a comprehensive study using Waymo’s internal dataset. Spanning 500,000 hours of….
@letian_fu
Max Fu
1 month
RT @leggedrobotics: A legged mobile manipulator trained to play badminton with humans coordinates whole-body maneuvers and onboard percepti….
@letian_fu
Max Fu
1 month
RT @StellaLisy: 🤯 We cracked RLVR with… Random Rewards?! Training Qwen2.5-Math-7B with our Spurious Rewards improved MATH-500 by: – Rando….
@letian_fu
Max Fu
1 month
RT @HaonanChen_: We hope everyone had a great time at the ICRA 2025 Workshop on Learning Meets Model-Based Methods for Contact-Rich Manipul….
@letian_fu
Max Fu
2 months
Learning 🤝 Model-Based Methods. See you tomorrow at ICRA! GWCC Building A, Room 412, 1:30 PM - 6:00 PM.
@HaonanChen_
Haonan Chen
4 months
Excited to organize the Workshop on Learning Meets Model-Based Methods for Contact-Rich Manipulation @ ICRA 2025! We welcome submissions on a range of topics; check out our website for details. Join us for an incredible lineup of speakers! #ICRA2025
@letian_fu
Max Fu
2 months
RT @Papagina_Yi: 🚀 Struggling with the lack of high-quality data for AI-driven human-object interaction research? We've got you covered!….
@letian_fu
Max Fu
2 months
RT @ollama: Multimodal model support is here in 0.7! Ollama now supports multimodal models via its new engine. Cool vision models to t….
@letian_fu
Max Fu
2 months
RT @mzubairirshad: Interested in collecting robot training data without robots in the loop? 🦾 Check out this cool new approach that uses a….
@letian_fu
Max Fu
2 months
Large language models can do new tasks from a few text prompts. What if robots could do the same, with trajectories?
🤖 ICRT enables zero-shot imitation: prompt with a few teleop demos, and it acts, no fine-tuning.
Happy to chat more at ICRA!
📍 ICRA | Wed 21 May | 08:35 - 08:40
@letian_fu
Max Fu
10 months
Vision-language models perform diverse tasks via in-context learning. Time for robots to do the same! Introducing In-Context Robot Transformer (ICRT): a robot policy that learns new tasks by prompting with robot trajectories, without any fine-tuning. [1/N]
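The prompting interface this thread describes (condition a policy on a few demo trajectories, then act on new observations with no fine-tuning) can be caricatured in a few lines. This is a toy stand-in only: it uses a softmax-weighted nearest-neighbour rule over prompt (observation, action) pairs in place of ICRT's actual transformer, and every name in it is illustrative, not from the ICRT codebase.

```python
import numpy as np

def in_context_policy(prompt_obs, prompt_act, query_obs):
    """Toy trajectory-prompted policy: act on a new observation by
    attending to the most similar observations in the prompt demos.
    (Illustrative only; ICRT itself uses a causal transformer.)"""
    # similarity between the query and each prompt observation
    sims = prompt_obs @ query_obs
    # softmax over similarities (shifted for numerical stability)
    weights = np.exp(sims - sims.max())
    weights /= weights.sum()
    # predicted action = attention-weighted average of prompt actions
    return weights @ prompt_act
```

The point of the caricature is the interface, not the model: "training" a new task is just swapping in a different prompt of demo pairs.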
@letian_fu
Max Fu
2 months
RT @uynitsuj: Next challenge: scalable learning of robot manipulation skills from truly in-the-wild videos, such as YouTube!
@letian_fu
Max Fu
2 months
RT @RavenHuang4: Can we scale up robot data collection without a robot? We propose a pipeline to scale robot dataset from one human demonst….
@letian_fu
Max Fu
2 months
RT @fangchenliu_: Ppl are collecting large-scale teleoperation datasets, which are often just kinematics-level trajectories. Real2Render2Re….
@letian_fu
Max Fu
2 months
RT @LongTonyLian: As we all know, collecting data for robotics is very costly. This is why I’m very impressed by this work: it generates a….
@letian_fu
Max Fu
2 months
This would not have been possible without my awesome co-lead @uynitsuj and collaborators @UC @RavenHuang4, Karim El-Refai, Rares Andrei Ambrus, Richard Cheng, @mzubairirshad, @Ken_Goldberg.
Arxiv: We will release the code over the fall:
@letian_fu
Max Fu
2 months
Idea: Unlock scalable robot learning. Robotic manipulation is hard not just because of perception, but because it couples:
– Visual diversity
– Motion diversity
– And evolving robot embodiments
We need datasets that are:
🤖 Embodiment-agnostic
🎥 Rooted in human video
💻
@letian_fu
Max Fu
2 months
Can synthetic data match teleop? Yes!
This plot shows policies trained **only** on R2R2R-rendered data (no sim, no teleop) matching real-data-trained policies. Faster. Cheaper. And scales with compute.
@letian_fu
Max Fu
2 months
One demo, many trajectories. R2R2R doesn’t just copy a human demo, it diversifies it. From a single video, we generate:
– Diverse 6-DoF trajectories via interpolation
– Scene & viewpoint variation
– Analytical grasp sampling + IK for kinematically correct execution
– Compatible
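A minimal sketch of the 6-DoF trajectory interpolation step mentioned above, assuming waypoint poses are represented as a position plus a unit quaternion (w, x, y, z): linear interpolation for positions, spherical linear interpolation (slerp) for rotations. Function and variable names here are illustrative, not from the R2R2R codebase.

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions."""
    q0, q1 = q0 / np.linalg.norm(q0), q1 / np.linalg.norm(q1)
    dot = np.dot(q0, q1)
    if dot < 0.0:          # flip one quaternion to take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:       # nearly parallel: fall back to normalized lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def interpolate_pose(p0, q0, p1, q1, num_steps):
    """Generate intermediate 6-DoF poses (position, quaternion)
    between two waypoint poses."""
    return [(p0 + t * (p1 - p0), slerp(q0, q1, t))
            for t in np.linspace(0.0, 1.0, num_steps)]
```

In a pipeline like the one described, the interpolated poses would then be checked against grasp sampling and IK so that only kinematically feasible trajectories are rendered.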
@letian_fu
Max Fu
2 months
Render vs. Simulation: What’s the difference?
We started the project by trying sim2real, but we encountered some problems:
– Object interpenetration
– Violation of physics principles
– Unrealistic contact behavior
So we turned to rendering:
Rendering = Simulation – Dynamics
📹