
Suning Huang
@suning_huang
Followers 340 · Following 552 · Media 13 · Statuses 45
PhD @Stanford | BEng @Tsinghua_Uni. Learning to teach robots to learn. Nice to meet you ;)
Palo Alto
Joined January 2024
RT @stepjamUK: I've heard this a lot recently: "We trained our robot on one object and it generalised to a novel object - these new VLA mod…
RT @marionlepert: Introducing Masquerade: We edit in-the-wild videos to look like robot demos, and find that co-training policies with th…
6/n Huge thanks to our amazing team! Grateful to my wonderful collaborators: @QianzhongChen, @XiaohanZhang220, @JiankaiSun. And to our incredible advisor @MacSchwager!
Excited to share our #CoRL2025 paper! See you in Korea! We present ParticleFormer, a Transformer-based 3D world model that learns from point cloud perception and captures complex dynamics across multiple objects and material types! Project website:
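The announcement above only names the architecture (a Transformer world model over point-cloud perception), not its details. As a rough illustration of the general idea, not the authors' code, here is a minimal sketch in which particle tokens attend to each other and the model predicts a per-particle displacement; all shapes, weight names, and the single-head formulation are assumptions for the example:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_step(particles, Wq, Wk, Wv, Wo):
    """One self-attention block over particle tokens.

    particles: (N, d) array, one feature row per point-cloud particle.
    Returns a (N, 3) predicted displacement; a rollout would add this
    to the particle positions to get the next state.
    """
    q, k, v = particles @ Wq, particles @ Wk, particles @ Wv
    # (N, N) scores: every particle attends to every other particle,
    # which is how pairwise/multi-object interactions are captured.
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return (scores @ v) @ Wo

rng = np.random.default_rng(0)
N, d = 32, 16                       # 32 particles, 16-dim features
state = rng.normal(size=(N, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
Wo = rng.normal(size=(d, 3)) * 0.1  # project to a 3D displacement
delta = attention_step(state, Wq, Wk, Wv, Wo)
print(delta.shape)  # (32, 3)
```

A trained model would stack several such blocks and learn the weights from observed point-cloud trajectories; the single random-weight step here just shows the data flow.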
Unfortunately I cannot attend the conference in person this year, but our co-author @Kevin_GuoweiXu will be presenting the paper and answering all your questions! Poster session: Wed 16 Jul, 11:00 a.m.–1:30 p.m. PDT, West Exhibition Hall B2-B3, #W-607.
Introducing MENTOR: Mixture-of-Experts Network with Task-Oriented Perturbation for Visual Reinforcement Learning! We propose a strong model-free visual RL algorithm that can learn robust visuomotor policies from scratch in the real world! Check out the project
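The tweet names the key component, a mixture-of-experts network, without further detail. A minimal sketch of the generic MoE idea, with all sizes, the dense gating scheme, and linear experts assumed for illustration rather than taken from the paper:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class MoELayer:
    """Tiny dense mixture-of-experts: every expert runs on the input
    and a learned gate mixes their outputs. (Sparse top-k routing is
    a common variant; experts here are plain linear maps for brevity.)"""

    def __init__(self, d_in, d_out, n_experts, rng):
        self.experts = [rng.normal(size=(d_in, d_out)) * 0.1
                        for _ in range(n_experts)]
        self.gate = rng.normal(size=(d_in, n_experts)) * 0.1

    def __call__(self, x):
        weights = softmax(x @ self.gate)                 # (n_experts,) mixing weights
        outs = np.stack([x @ W for W in self.experts])   # (n_experts, d_out)
        return weights @ outs                            # gated combination

rng = np.random.default_rng(0)
layer = MoELayer(d_in=8, d_out=4, n_experts=3, rng=rng)
y = layer(rng.normal(size=8))
print(y.shape)  # (4,)
```

In a visuomotor policy, the input would be an image embedding and the output an action; the gate lets different experts specialize on different regions of the observation space.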
RT @sizhe_lester_li: Now in Nature! Our method learns a controllable 3D model of any robot from vision, enabling single-camera closed-loo…
RT @agiachris: What makes data "good" for robot learning? We argue: it's the data that drives closed-loop policy success! Introducing CUPI…
RT @priyasun_: How can we move beyond static-arm lab setups and learn robot policies in our messy homes? We introduce HoMeR, an imitation l…
RT @yjy0625: Introducing Mobi-π: Mobilizing Your Robot Learning Policy. Our method enables flexible mobile skill chaining without re…
RT @tylerlum23: Introducing Human2Sim2Robot! Learn robust dexterous manipulation policies from just one human RGB-D video. Our Rea…
Excited to share that MENTOR has been accepted to #ICML2025! See you in Vancouver this July.
RT @ZhengrongX: Our paper has been accepted to #RSS2025! See you in LA this June.
RT @GuanyaShi: When I was a Ph.D. student at @Caltech, @lschmidt3 discussed the paper "Do ImageNet Classifiers Generalize to ImageNet?" in…
RT @ju_yuanchen: We present DenseMatcher! DenseMatcher enables robots to acquire generalizable skills across diverse object categories b…
RT @Kevin_GuoweiXu: Introducing LLaVA-o1: The first visual language model capable of spontaneous, systematic reasoning, similar to GPT-o1…