
Max Fu
@letian_fu
Followers: 771 · Following: 180 · Media: 22 · Statuses: 129
scaling robotics. Intern @NVIDIA. PhD student @UCBerkeley @berkeley_ai. Prev @Apple @autodesk
Berkeley, CA
Joined August 2012
RT @xiuyu_l: Sparsity can make your LoRA fine-tuning go brrr 💨. Announcing SparseLoRA (ICML 2025): up to 1.6-1.9x faster LLM fine-tuning (2…
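For context on why such speedups are plausible: LoRA keeps the pretrained weight W frozen and trains only a low-rank update BA, so fine-tuning cost is dominated by the forward/backward passes through the frozen backbone rather than by the optimizer. The sketch below is a hypothetical illustration of pairing a LoRA layer with a channel-sparsity mask (skipping channels a cheap criterion marks as inactive); the class name, masking rule, and keep ratio are assumptions for illustration, not the SparseLoRA method itself.

```python
# Hypothetical sketch: a LoRA linear layer that only runs its low-rank update
# on a subset of input channels selected by a simple magnitude criterion.
# This illustrates the general "sparsity + LoRA" idea only; it is NOT the
# SparseLoRA algorithm announced in the post above.
import torch
import torch.nn as nn

class SparseMaskedLoRALinear(nn.Module):
    def __init__(self, in_features, out_features, rank=8, keep_ratio=0.5):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)        # frozen pretrained weight W
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.keep_ratio = keep_ratio                  # fraction of input channels kept

    def forward(self, x):
        y = self.base(x)                              # frozen path: x @ W^T
        # Placeholder sparsity rule: keep the highest-magnitude input channels.
        scores = x.abs().reshape(-1, x.shape[-1]).mean(dim=0)
        k = max(1, int(self.keep_ratio * x.shape[-1]))
        idx = scores.topk(k).indices
        # LoRA path computed only on the kept channels: y += (x_S @ A_S^T) @ B^T
        y = y + (x[..., idx] @ self.lora_A[:, idx].T) @ self.lora_B.T
        return y
```

Only `lora_A` and `lora_B` receive gradients here; the 1.6-1.9x speedup claimed in the post presumably comes from a far more careful choice of what to skip than this magnitude heuristic.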
RT @hanna_mao: We build Cosmos-Predict2 as a world foundation model for Physical AI builders: fully open and adaptable. Post-train it for…
RT @SawyerMerritt: Waymo in a new blog post: "We conducted a comprehensive study using Waymo’s internal dataset. Spanning 500,000 hours of…
RT @leggedrobotics: A legged mobile manipulator trained to play badminton with humans coordinates whole-body maneuvers and onboard percepti…
RT @StellaLisy: 🤯 We cracked RLVR with… Random Rewards?! Training Qwen2.5-Math-7B with our Spurious Rewards improved MATH-500 by: - Rando…
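To make the contrast concrete, here is a hypothetical sketch of the two kinds of reward function being compared: a verifiable reward that checks the final answer against ground truth, versus a spurious one that flips a coin and ignores the completion entirely. The function names and answer-matching rule are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of "verifiable" vs. "spurious/random" rewards for RLVR.
# Names and the exact-match rule are assumptions made for illustration.
import random

def verifiable_reward(predicted_answer: str, gold_answer: str) -> float:
    # Standard RLVR-style reward: 1.0 iff the model's final answer matches the reference.
    return float(predicted_answer.strip() == gold_answer.strip())

def random_reward(predicted_answer: str, gold_answer: str) -> float:
    # Spurious reward: 1.0 or 0.0 uniformly at random, independent of correctness.
    return float(random.random() < 0.5)
```

Either function can be dropped into the same policy-gradient loop; the surprising claim in the thread is that the random variant still improves MATH-500 for Qwen2.5-Math-7B.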
RT @HaonanChen_: We hope everyone had a great time at the ICRA 2025 Workshop on Learning Meets Model-Based Methods for Contact-Rich Manipul…
Learning 🤝 Model-Based Methods. See you tomorrow at ICRA! GWCC Building A, Room 412, 1:30 PM - 6:00 PM.
Excited to organize the Workshop on Learning Meets Model-Based Methods for Contact-Rich Manipulation @ ICRA 2025! We welcome submissions on a range of topics; check out our website for details. Join us for an incredible lineup of speakers! #ICRA2025
RT @Papagina_Yi: 🚀 Struggling with the lack of high-quality data for AI-driven human-object interaction research? We've got you covered!…
RT @ollama: Multimodal model support is here in 0.7! Ollama now supports multimodal models via its new engine. Cool vision models to t…
RT @mzubairirshad: Interested in collecting robot training data without robots in the loop? 🦾 Check out this cool new approach that uses a…
Large language models can do new tasks from a few text prompts. What if robots could do the same, with trajectories? 🤖 ICRT enables zero-shot imitation: prompt with a few teleop demos, and it acts with no fine-tuning. Happy to chat more at ICRA! 📍 ICRA | Wed 21 May | 08:35 - 08:40
Vision-language models perform diverse tasks via in-context learning. Time for robots to do the same! Introducing In-Context Robot Transformer (ICRT): a robot policy that learns new tasks by prompting with robot trajectories, without any fine-tuning. [1/N]
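A rough sketch of what trajectory prompting can look like, to make the two posts above concrete. Everything here (the policy class, the interleaved observation/action tokenization, and the zero-padding of not-yet-taken actions) is an assumption made for illustration and is not the released ICRT architecture or interface.

```python
# Hypothetical sketch of in-context imitation via trajectory prompting:
# a few teleop demos (observation/action sequences) serve as the prompt,
# and the policy predicts actions for the new episode without fine-tuning.
# The architecture and tensor layout below are illustrative assumptions only.
import torch
import torch.nn as nn

class TrajectoryPromptPolicy(nn.Module):
    def __init__(self, obs_dim, act_dim, d_model=256, n_layers=4, n_heads=4, max_tokens=2048):
        super().__init__()
        self.obs_in = nn.Linear(obs_dim, d_model)
        self.act_in = nn.Linear(act_dim, d_model)
        self.pos = nn.Parameter(torch.zeros(1, max_tokens, d_model))  # learned positions
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.act_out = nn.Linear(d_model, act_dim)

    def forward(self, obs_seq, act_seq):
        # Interleave (o_1, a_1, o_2, a_2, ...) into a single token sequence.
        o = self.obs_in(obs_seq)                                  # (B, T, d_model)
        a = self.act_in(act_seq)                                  # (B, T, d_model)
        tokens = torch.stack([o, a], dim=2).flatten(1, 2)         # (B, 2T, d_model)
        tokens = tokens + self.pos[:, : tokens.shape[1]]
        causal = nn.Transformer.generate_square_subsequent_mask(tokens.shape[1])
        h = self.backbone(tokens, mask=causal)
        # Predict each action from the hidden state of its observation token.
        return self.act_out(h[:, 0::2])                           # (B, T, act_dim)

@torch.no_grad()
def act_in_context(policy, demo_obs, demo_act, new_obs):
    """Prompt with demo trajectories, then read out an action for the latest observation."""
    obs = torch.cat([demo_obs, new_obs], dim=1)                   # (1, T_demo + T_new, obs_dim)
    pad = torch.zeros(1, new_obs.shape[1], demo_act.shape[-1])    # unknown future actions
    act = torch.cat([demo_act, pad], dim=1)
    return policy(obs, act)[:, -1]                                # action for the last observation
```

The point of the sketch is only that new behaviors come from what is placed in the prompt (the demo trajectories), not from any gradient update at test time.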
RT @uynitsuj: Next challenge: scalable learning of robot manipulation skills from truly in-the-wild videos, such as YouTube!
RT @RavenHuang4: Can we scale up robot data collection without a robot? We propose a pipeline to scale robot datasets from one human demonst…
RT @fangchenliu_: Ppl are collecting large-scale teleoperation datasets, which are often just kinematics-level trajectories. Real2Render2Re…
RT @LongTonyLian: As we all know, collecting data for robotics is very costly. This is why I’m very impressed by this work: it generates a…
This would not have been possible without my awesome co-lead @uynitsuj and collaborators @UC… @RavenHuang4, Karim El-Refai, Rares Andrei Ambrus, Richard Cheng, @mzubairirshad, @Ken_Goldberg. Arxiv: … We will release the code over the fall.