Simeon ADEBOLA Profile
Simeon ADEBOLA

@funmilore

Followers: 273 · Following: 5K · Media: 39 · Statuses: 2K

Knowledge Enthusiast, Book Lover. Current: PhD Student Doing Robotics @UCBerkeley @berkeley_ai @AUTOLab_Cal

Joined July 2010
@funmilore
Simeon ADEBOLA
22 days
How can a robot provide details of plant anatomy for plant phenotyping? Today at @IROS2025, we present Botany-Bot from @berkeley_ai @Siemens. Botany-Bot 1) creates segmented 3D models of plants using Gaussian splats and GarField, and 2) uses a robot arm to expose hidden details. (1/9)
2
4
24
@funmilore
Simeon ADEBOLA
19 days
Congratulations to @kevin_zakka. Well deserved!
@kevin_zakka
Kevin Zakka
19 days
Super happy and honored to be a 2025 Google PhD Fellow! Thank you @Googleorg for believing in my research. I'm looking forward to making humanoid robots more capable and trustworthy partners 🤗
1
0
1
@ZehanMa123
Zehan Ma
20 days
Can a robot inspect all views of an object? Today at @IROS, we present Omni-Scan from @berkeley_ai, a novel method for bimanual robotic 360° object scanning & reconstruction using 3D Gaussian Splats. (1/8) 🔗 https://t.co/8emyJfUNk4
5
12
122
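A rough idea of how such a bimanual 360° scan could collect posed views for 3D Gaussian Splatting, as a minimal sketch only; the `capture_rgb` and `wrist_pose` helpers below are hypothetical stand-ins, not the actual Omni-Scan code:

```python
import numpy as np

# Hypothetical stand-ins for the real hardware/reconstruction stack.
def capture_rgb(yaw_rad: float) -> np.ndarray:
    """Pretend camera: return a dummy HxWx3 image for a given wrist yaw."""
    return np.zeros((480, 640, 3), dtype=np.uint8)

def wrist_pose(yaw_rad: float) -> np.ndarray:
    """4x4 camera-to-world pose for a camera orbiting the grasped object."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    T = np.eye(4)
    T[:3, :3] = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    T[:3, 3] = np.array([0.4 * c, 0.4 * s, 0.3])  # orbit radius 0.4 m, height 0.3 m
    return T

def scan_object(n_views: int = 36, handover: bool = True):
    """Collect (image, pose) pairs around the object; a second grasp ('handover')
    exposes the faces hidden by the first gripper."""
    views = []
    grasps = 2 if handover else 1
    for g in range(grasps):
        offset = np.pi if g == 1 else 0.0   # re-grasp flips which side is occluded
        for i in range(n_views):
            yaw = offset + 2 * np.pi * i / n_views
            views.append((capture_rgb(yaw), wrist_pose(yaw)))
    return views  # hand these posed views off to a Gaussian Splatting trainer

if __name__ == "__main__":
    print(f"collected {len(scan_object())} posed views")
```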
@Ken_Goldberg
Ken Goldberg
23 days
Looking fwd to @AUTOLab students presenting 4 papers at @IROS2025 this week in Hangzhou, starting today with "A 'Botany-Bot' for Digital Twin Monitoring of Occluded and Underleaf Plant Structures," co-authored with @funmilore @shuangyuXxx & our collaborators at @Siemens, Eugen Solowjow.
3
2
22
@funmilore
Simeon ADEBOLA
22 days
Botany-Bot is in collaboration with @ChungMInKim, @justkerrding, Shuangyu Xie, Prithvi Akella, Jose Luis Susa Rincon, Eugen Solowjow, and @Ken_Goldberg. (9/9) See our website for our paper and more details:
berkeleyautomation.github.io
Create a segmented 3D reconstruction of plants with Gaussian Splatting and GarField, then use a robot's interaction with the plant to reveal even more information.
0
0
2
@funmilore
Simeon ADEBOLA
22 days
We also obtain the physical metrics of leaf height, leaf area, and number of leaves directly from the digital twin plant data. (8/9)
1
0
2
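A minimal sketch of how such metrics could be read off segmented leaf data, assuming each leaf arrives as a 3D point array extracted from the digital twin; this is an illustration, not the paper's measurement code:

```python
import numpy as np
from scipy.spatial import ConvexHull

def leaf_metrics(leaf_points: list[np.ndarray]) -> dict:
    """leaf_points: one (N, 3) array of 3D points per segmented leaf (z = up).
    Returns per-leaf height, a rough planar-projection area, and the leaf count."""
    heights, areas = [], []
    for pts in leaf_points:
        heights.append(float(pts[:, 2].max() - pts[:, 2].min()))  # vertical extent
        hull = ConvexHull(pts[:, :2])        # project the leaf onto the x-y plane
        areas.append(float(hull.volume))     # a 2D hull's "volume" is its area
    return {"num_leaves": len(leaf_points), "heights_m": heights, "areas_m2": areas}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_leaves = [rng.normal(scale=0.02, size=(200, 3)) + [0, 0, 0.1 * i]
                   for i in range(3)]
    print(leaf_metrics(fake_leaves))
```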
@funmilore
Simeon ADEBOLA
22 days
In experiments, we show Botany-Bot can segment leaves with 90.8% accuracy, detect leaves with 86.2% accuracy, lift/push leaves with 77.9% accuracy, and take detailed underside/overside images with 77.3% accuracy. (7/9)
1
0
1
@funmilore
Simeon ADEBOLA
22 days
We combine all of the images into a single, spatially indexable model. (6/9)
1
0
1
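One simple way to make a set of captured images spatially indexable is to key each image by the 3D location it inspected and look up neighbors with a k-d tree; the `SpatialImageIndex` class below is a toy illustration, not Botany-Bot's actual data structure:

```python
import numpy as np
from scipy.spatial import cKDTree

class SpatialImageIndex:
    """Toy spatial index: store each captured image under the 3D point it inspected,
    then retrieve the images nearest to any query location on the plant model."""

    def __init__(self, positions: np.ndarray, image_ids: list[str]):
        self.tree = cKDTree(positions)   # (N, 3) capture locations
        self.image_ids = image_ids

    def query(self, point: np.ndarray, k: int = 3) -> list[str]:
        _, idx = self.tree.query(point, k=k)
        return [self.image_ids[i] for i in np.atleast_1d(idx)]

if __name__ == "__main__":
    positions = np.array([[0.0, 0.0, 0.1], [0.05, 0.02, 0.12], [0.1, -0.03, 0.2]])
    index = SpatialImageIndex(
        positions, ["leaf_01_under.png", "leaf_01_over.png", "leaf_02_under.png"])
    print(index.query(np.array([0.04, 0.0, 0.11]), k=2))
```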
@funmilore
Simeon ADEBOLA
22 days
In this clip, the robot uses the inspection tool to lift the desired leaf. We also show images of example leaves before and after robot interaction. (5/9)
1
0
1
@funmilore
Simeon ADEBOLA
22 days
For inspection planning, Botany-Bot uses a custom ring-shaped inspection tool and a 7 DOF robot arm to lift up or push down detected leaves to take images of occluded details. (4/9)
1
0
1
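A toy version of the lift-or-push decision might look like the sketch below, where each leaf's estimated normal decides whether the ring tool lifts it (to image the underside) or pushes it down (to expose what it covers); the `Leaf` record and the 5 cm approach offset are assumptions for illustration, not the paper's planner:

```python
import numpy as np

class Leaf:
    """Hypothetical leaf record extracted from the segmented plant model."""
    def __init__(self, centroid: np.ndarray, normal: np.ndarray):
        self.centroid = centroid                      # 3D leaf center
        self.normal = normal / np.linalg.norm(normal)

def plan_inspection(leaves: list[Leaf]) -> list[tuple[str, np.ndarray]]:
    """Choose 'lift' for upward-facing leaves (expose the underside) and 'push'
    for downward-facing leaves (expose what they cover), with a simple approach point."""
    actions = []
    for leaf in leaves:
        action = "lift" if leaf.normal[2] >= 0.0 else "push"
        approach = leaf.centroid - 0.05 * leaf.normal  # approach 5 cm along -normal
        actions.append((action, approach))
    return actions

if __name__ == "__main__":
    leaves = [Leaf(np.array([0.1, 0.0, 0.2]), np.array([0.0, 0.1, 1.0])),
              Leaf(np.array([0.0, 0.1, 0.15]), np.array([0.0, 0.2, -1.0]))]
    for action, pose in plan_inspection(leaves):
        print(action, np.round(pose, 3))
```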
@funmilore
Simeon ADEBOLA
22 days
Botany-Bot's pipeline consists of plant modelling and plant inspection. For plant modelling, we scan a plant, carry out a 3D reconstruction using Gaussian splatting, and then generate 3D segments using GarField. (3/9)
1
0
1
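Structurally, the plant-modelling stage amounts to feeding the scan images through a reconstruction step and then a grouping step; the sketch below uses dummy `reconstruct_splat` and `segment_with_garfield` stand-ins rather than the real Gaussian Splatting and GarField implementations:

```python
import numpy as np

def reconstruct_splat(images: list[np.ndarray]) -> dict:
    """Stand-in for 3D Gaussian Splatting: returns a dummy 'splat' model."""
    return {"num_views": len(images), "gaussians": np.zeros((1000, 3))}

def segment_with_garfield(splat: dict) -> list[np.ndarray]:
    """Stand-in for GarField grouping: split the gaussians into per-leaf segments."""
    return np.array_split(splat["gaussians"], 10)

def model_plant(images: list[np.ndarray]):
    splat = reconstruct_splat(images)             # scan -> 3D reconstruction
    leaf_segments = segment_with_garfield(splat)  # reconstruction -> 3D leaf segments
    return splat, leaf_segments

if __name__ == "__main__":
    dummy_scan = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(60)]
    splat, segments = model_plant(dummy_scan)
    print(len(segments), "leaf segments from", splat["num_views"], "views")
```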
@funmilore
Simeon ADEBOLA
22 days
Existing plant phenotyping systems can produce high-quality images, yet commercial systems are costly and require expensive multi-camera hardware. In addition, due to leaf occlusion, they cannot perceive many plant details. (2/9)
1
0
1
@funmilore
Simeon ADEBOLA
2 months
A different way for robots to see, closer to how humans see! Interesting work led by @justkerrding!
@justkerrding
Justin Kerr
2 months
Should robots have eyeballs? Human eyes move constantly and use variable resolution to actively gather visual details. In EyeRobot (https://t.co/iSL7ZLZcHu) we train a robot eyeball entirely with RL: eye movements emerge from experience, driven by task-driven rewards.
0
0
0
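To make the "eye movements emerge from rewards" idea concrete, here is a deliberately tiny 1-D toy: a discretized gaze learns, via tabular Q-learning on a distance-based reward, to step toward a target. This is only an analogy for the concept; EyeRobot's actual training setup is far richer than this sketch:

```python
import numpy as np

# Toy analogy only: a 1-D "eyeball" learns from reward alone to move its gaze
# toward a target. State = discretized target offset from the fovea; action = gaze step.
rng = np.random.default_rng(0)
ACTIONS = (-1, 0, +1)                 # step gaze left / stay / right
Q = np.zeros((21, len(ACTIONS)))      # offsets -10 .. +10 map to states 0 .. 20

def step(offset: int, action: int):
    """Moving the gaze by `action` changes the target's offset; reward is higher
    the closer the target sits to the fovea (offset 0)."""
    new_offset = int(np.clip(offset - action, -10, 10))
    return new_offset, -abs(new_offset)

for episode in range(2000):
    offset = int(rng.integers(-10, 11))
    for _ in range(30):
        s = offset + 10
        a = int(rng.integers(3)) if rng.random() < 0.1 else int(Q[s].argmax())
        offset, reward = step(offset, ACTIONS[a])
        s2 = offset + 10
        Q[s, a] += 0.1 * (reward + 0.9 * Q[s2].max() - Q[s, a])

# The greedy policy should now step the gaze toward the target from either side.
print("offset -10 ->", ACTIONS[int(Q[0].argmax())],
      "| offset +10 ->", ACTIONS[int(Q[20].argmax())])
```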
@brenthyi
Brent Yi
4 months
Had so much fun working on this😊 PyTorch and JAX implementations are both out!
@ruilong_li
Ruilong Li
4 months
For everyone interested in precise 📷 camera control 📷 in transformers [e.g., video / world models, etc.]: Stop settling for Plücker raymaps -- use camera-aware relative PE in your attention layers, like RoPE (for LLMs) but for cameras! Paper & code: https://t.co/HPW7moJuvW
0
8
67
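A loose analogy for "RoPE, but for cameras": if each token's queries and keys are rotated blockwise by that token's camera rotation, the attention logits depend only on the relative rotation between the two cameras. The numpy sketch below (rotations only, translation handling omitted) illustrates that property; it is not the paper's actual formulation:

```python
import numpy as np

def rotate_blockwise(x: np.ndarray, R: np.ndarray) -> np.ndarray:
    """Apply a 3x3 camera rotation to each consecutive 3-dim block of a feature vector,
    the way RoPE applies 2-D rotations to channel pairs."""
    d = x.shape[-1] - x.shape[-1] % 3              # largest multiple of 3
    blocks = x[..., :d].reshape(*x.shape[:-1], d // 3, 3)
    out = x.copy()
    out[..., :d] = (blocks @ R.T).reshape(*x.shape[:-1], d)
    return out

def camera_relative_attention(q, k, v, R_cams):
    """q, k, v: (num_tokens, dim); R_cams: (num_tokens, 3, 3) camera rotations.
    Since q_i'·k_j' = q_i^T R_i^T R_j k_j, logits depend only on relative rotation."""
    qr = np.stack([rotate_blockwise(q[i], R_cams[i]) for i in range(len(q))])
    kr = np.stack([rotate_blockwise(k[i], R_cams[i]) for i in range(len(k))])
    logits = qr @ kr.T / np.sqrt(q.shape[-1])
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 4, 12
    q, k, v = rng.normal(size=(3, n, d))
    theta = np.pi / 6
    Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
    R_cams = np.stack([np.linalg.matrix_power(Rz, i) for i in range(n)])
    print(camera_relative_attention(q, k, v, R_cams).shape)
```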
@graceluo_
Grace Luo
4 months
I'm presenting a poster at #ICML2025 today! Stop by if you want to learn how VLMs encode different representations of the same task (spoiler: it's the same). 🌐 https://t.co/Dm5PqGefkW 🔗 https://t.co/TCsFZ21Npa cc @_amirbar @trevordarrell
2
15
128
@Chenfeng_X
Chenfeng_X
5 months
Angles Don't Lie! 👼 (And no, that's not a typo!) We're glad to introduce our latest work: "Angles Don't Lie: Unlocking Training-Efficient RL Through the Model's Own Signals." Kids don't want to answer the same simple questions a thousand times, and neither should your AI!
3
8
21
@ZhaoMandi
Mandi Zhao
5 months
How to learn dexterous manipulation for any robot hand from a single human demonstration? Check out DexMachina, our new RL algorithm that learns long-horizon, bimanual dexterous policies for a variety of dexterous hands, articulated objects, and complex motions.
19
108
622
@AdemiAdeniji
Ademi Adeniji
5 months
Everyday human data is robotics’ answer to internet-scale tokens. But how can robots learn to feel—just from videos?📹 Introducing FeelTheForce (FTF): force-sensitive manipulation policies learned from natural human interactions🖐️🤖 👉 https://t.co/CZcG87xYn5 1/n
11
39
220
@AdemiAdeniji
Ademi Adeniji
6 months
Closed-loop robot policies directly from human interactions. No teleop, no robot data co-training, no RL, and no sim. Just Aria smart glasses. Everyday human data is passively scalable and a massively underutilized resource in robotics... More to come here in the coming weeks.
@vincentjliu
Vincent Liu
6 months
The future of robotics isn't in the lab – it's in your hands. Can we teach robots to act in the real world without a single robot demonstration? Introducing EgoZero. Train real-world robot policies from human-first egocentric data. No robots. No teleop. Just Aria glasses and
4
14
71