Soroush Nasiriany

@snasiriany

Followers 1K · Following 2K · Media 19 · Statuses 159

PhD student @UTAustin. Building foundation models for robots.

Austin / Bay Area
Joined September 2014
@snasiriany
Soroush Nasiriany
1 year
I'm excited to introduce RoboCasa, a large-scale simulation framework for everyday tasks. Scaling is the key driving force to unlocking generalist robots, and RoboCasa leverages simulation to take scaling to a whole new level. A short 🧵
10
54
266
@snasiriany
Soroush Nasiriany
8 days
RT @stepjamUK: I've heard this a lot recently: "We trained our robot on one object and it generalised to a novel object - these new VLA mod…
0
53
0
@snasiriany
Soroush Nasiriany
1 month
RT @saxenavaibhav11: Large robot datasets are crucial for training 🤖foundation models. Yet, we lack systematic understanding of what data m…
0
44
0
@snasiriany
Soroush Nasiriany
2 months
RT @huihan_liu: Meet Casper👻, a friendly robot sidekick who shadows your day, decodes your intents on the fly, and lends a hand while you s…
0
37
0
@snasiriany
Soroush Nasiriany
3 months
RT @hellorobotinc: 🚀Stretch MuJoCo v0.5 is released! It's a high-fidelity simulation of Stretch 3. Here's what's new: • ROS2 and Python li…
0
17
0
@snasiriany
Soroush Nasiriany
3 months
RT @RoboPapers: Ep#11 with @snasiriany @SteveTod1998 @abhirammaddukur @Lawrence_Y_Chen on Sim-and-Real Co-Training: A Simple Recipe for Vis…
0
9
0
@snasiriany
Soroush Nasiriany
3 months
It's not a matter of if, it's a matter of when: video models and world models are going to be a central tool for building robot foundation models.
@jang_yoel
Joel Jang
3 months
Introducing DreamGen! We got humanoid robots to perform totally new verbs in new environments through video world models. We believe video world models will solve the data problem in robotics, bringing the paradigm of scaling from human hours to GPU hours. Quick 🧵
0
1
11
@snasiriany
Soroush Nasiriany
5 months
Collecting real-world data is hard to scale. We show how co-training with large-scale simulation data from RoboCasa can significantly boost performance and robustness in real-world settings, even with notable gaps between real and sim. Check out the thread below for details 👇
@SteveTod1998
Zhenyu Jiang
5 months
How can we use simulation data for real-world robot manipulation? We present sim-and-real co-training, a simple recipe for manipulation. We demonstrate that sim data can significantly enhance real-world performance, even with notable differences between sim and real. (1/n)
0
0
26
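The co-training recipe described above comes down to mixing both data sources in every training batch. A minimal sketch of that idea, where the mixing ratio, dataset names, and batch size are illustrative assumptions, not values from the paper:

```python
import random

def cotrain_batch(sim_data, real_data, batch_size=32, sim_ratio=0.75):
    """Draw one co-training batch mixing simulation and real examples.

    sim_ratio is a hypothetical mixing weight; the actual ratio is a
    tuning choice, not something stated in the tweet.
    """
    n_sim = round(batch_size * sim_ratio)
    # Sample with replacement so the small real pool never runs dry.
    batch = random.choices(sim_data, k=n_sim)
    batch += random.choices(real_data, k=batch_size - n_sim)
    random.shuffle(batch)
    return batch

# Usage: a large sim pool co-trained with a small real pool.
sim = [("sim", i) for i in range(10_000)]
real = [("real", i) for i in range(100)]
batch = cotrain_batch(sim, real)
print(len(batch), sum(1 for src, _ in batch if src == "sim"))  # 32 24
```

The point of the fixed ratio is that real data stays present in every batch even though it is orders of magnitude scarcer than sim data.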
@snasiriany
Soroush Nasiriany
5 months
GR00T N1 is out! We show how simulation and video generation models help pave the way toward building generalist robots. Check out the white paper:
@yukez
Yuke Zhu
5 months
Thrilled to announce GR00T N1, our open foundation model for generalist humanoid robots! GR00T N1 adopts a dual-system design, leverages the entire data pyramid for model training, and supports various robot embodiments. GR00T N1 embodies years of fundamental research, spanning…
1
2
28
@snasiriany
Soroush Nasiriany
6 months
RT @texas_robotics: Exciting news! 🎉 Texas Robotics faculty members @FAlambeigi and @yukez have been awarded tenure! Their groundbreaking…
0
6
0
@snasiriany
Soroush Nasiriany
7 months
RT @DrJimFan: We are living in a timeline where a non-US company is keeping the original mission of OpenAI alive - truly open, frontier res…
0
2K
0
@snasiriany
Soroush Nasiriany
7 months
RT @kevin_zakka: The ultimate test of any physics simulator is its ability to deliver real-world results. With MuJoCo Playground, we've co…
0
185
0
@snasiriany
Soroush Nasiriany
8 months
Fun thought experiment: how much does it cost to collect 1B robot demonstrations? My guess is it's well north of $100M:
• 3,500 robots running 24/7 to collect over 1 year: $70M+
• Data collection outside the US: $50M+
Eye-watering but feasible. But is 1B demos all you need?
5
5
42
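The arithmetic behind that estimate checks out in a few lines (all figures below are the tweet's own assumptions, not real cost data). Notably, each robot would have to log a demo roughly every two minutes, nonstop, for a year:

```python
# Back-of-envelope check of the thought experiment's numbers.
ROBOTS = 3500
HOURS_PER_YEAR = 365 * 24          # robots running 24/7 for one year
TARGET_DEMOS = 1_000_000_000
FLEET_COST = 70_000_000            # tweet's estimate: fleet + operation
LABOR_COST = 50_000_000            # tweet's estimate: collection labor

demos_per_robot_hour = TARGET_DEMOS / (ROBOTS * HOURS_PER_YEAR)
cost_per_demo = (FLEET_COST + LABOR_COST) / TARGET_DEMOS

print(f"{demos_per_robot_hour:.1f} demos per robot-hour")  # 32.6
print(f"${cost_per_demo:.2f} per demonstration")           # $0.12
```

At roughly twelve cents per demo, the bottleneck is throughput per robot rather than unit cost, which is what makes the "is 1B demos all you need?" question bite.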
@snasiriany
Soroush Nasiriany
10 months
RT @smithlaura1028: Excited to share our work on STEERing robot behavior! With structured language annotation of offline data, STEER expose…
0
19
0
@snasiriany
Soroush Nasiriany
10 months
RT @JasonMa2020: Excited to finally share Generative Value Learning (GVL), my @GoogleDeepMind project on extracting universal value functi…
0
115
0
@snasiriany
Soroush Nasiriany
10 months
RT @xiao_ted: Can we improve VLA generalization abilities by bridging the gap between robot data and internet data? We find that *visual a…
0
11
0
@snasiriany
Soroush Nasiriany
10 months
Please see the paper for more details. This was my internship project at Google DeepMind. A huge thank you to my awesome mentor @xiao_ted for supporting me and all of my lovely collaborators @SeanKirmani @TianliDing @smithlaura1028 @yukez @DannyDriess @DorsaSadigh!
0
0
1
@snasiriany
Soroush Nasiriany
10 months
Here's the big kicker: we can adapt to new tasks and objects by just providing cheap-to-collect example images and annotating them with affordances. No additional costly robot demonstrations or teleoperation required!
1
1
3
@snasiriany
Soroush Nasiriany
10 months
Our hierarchical model first predicts an affordance plan and then conditions the policy on the affordance plan. We co-train the model on web datasets (largest data source), robot trajectories, and a modest number of cheap-to-collect images labeled with affordances.
1
0
1
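The two-stage structure described above separates plan prediction from plan-conditioned control. A toy sketch of that interface, where every name, the plan format (2D keypoints), and the action space are illustrative guesses, not RT-Affordance's actual design:

```python
from typing import List, Tuple

Keypoint = Tuple[int, int]  # pixel coordinates; an assumed plan format

def predict_affordance_plan(image, instruction: str) -> List[Keypoint]:
    """High-level model: observation + language -> affordance plan.
    Stub standing in for a learned model; returns a fixed plan here."""
    return [(120, 80), (200, 150)]  # e.g. grasp point, place point

def policy_step(image, plan: List[Keypoint]) -> List[float]:
    """Low-level policy conditioned on the plan rather than raw language.
    Stub: steer toward the first keypoint in a toy action space."""
    x, y = plan[0]
    return [x / 320.0, y / 240.0, 0.0]  # normalized dx, dy, gripper

# One control step: predict a plan, then condition the policy on it.
plan = predict_affordance_plan(image=None, instruction="open the drawer")
action = policy_step(image=None, plan=plan)
```

The design point is that only the high-level stage needs to understand new tasks, so cheap affordance-labeled images can teach it without new robot trajectories for the low-level policy.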
@snasiriany
Soroush Nasiriany
10 months
We want to make a robotโ€™s job easy by telling it not only what to do but how to do it. Conditioning on language, goal images, and trajectory sketches are helpful, but they present their own challenges. Visual affordance plans are expressive and easy to specify!
1
0
0
@snasiriany
Soroush Nasiriany
10 months
Excited to release RT-Affordance! We propose conditioning policies on visual affordance plans as an intermediate representation that allows us to learn new tasks without collecting any new robot trajectories. Website and paper: Here's a short 🧵
3
29
137