ItalAI (@_italai)
121 Followers · 441 Following · 75 Media · 161 Statuses
Pioneering AI Innovation and Startup Acceleration in Italy, inspired by Silicon Valley's transformative spirit.
Rome · Joined May 2024
Not all classrooms have walls. For our interns, Silicon Valley became the ultimate school. In our latest Q&A, Luca and Matteo share what 3 months in the heart of tech taught them — and the mindset that changes how you see research forever 👇 https://t.co/077c4P1p9s
linkedin.com: After three months in California, Matteo Gioia and Luca Zhou are back from their internship with Panasonic North America R&D Labs, where they worked on major projects alongside researchers from...
For us, this project is more than a scientific breakthrough — it embodies what ItalAI stands for: empowering Italian talent to contribute to top-tier AI research. We’re proud of Matteo and the collaboration with @PanasonicNA AI Labs × @UCBerkeley × @SapienzaRoma. 🔗 Paper: huggingface.co
The team created ~250 randomly assembled toy objects from 4 shape primitives and 3D-printed them, collecting ~2K grasping demonstrations. Training on these “toys” enabled robust zero-shot generalization to real-world objects: 80% success in simulation and 67% in the real world.
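The thread doesn't include the generation code; as a rough illustration of the random-assembly idea, here is a minimal sketch assuming the trimesh library, where all dimensions, the part counts, and the random_toy helper are hypothetical choices rather than the paper's actual pipeline:

```python
import random
import numpy as np
import trimesh

# Four shape primitives (sizes here are made-up placeholders)
PRIMITIVES = {
    "sphere":   lambda: trimesh.creation.icosphere(radius=0.02),
    "cuboid":   lambda: trimesh.creation.box(extents=[0.04, 0.03, 0.02]),
    "cylinder": lambda: trimesh.creation.cylinder(radius=0.015, height=0.04),
    "ring":     lambda: trimesh.creation.annulus(r_min=0.012, r_max=0.02, height=0.01),
}

def random_toy(n_parts=(2, 5), seed=None):
    """Randomly glue a few primitives together into one printable 'toy' mesh."""
    rng = random.Random(seed)
    parts = []
    for _ in range(rng.randint(*n_parts)):
        mesh = PRIMITIVES[rng.choice(list(PRIMITIVES))]()
        # Small random offsets keep the parts overlapping in one connected blob
        mesh.apply_translation(np.array([rng.uniform(-0.02, 0.02) for _ in range(3)]))
        parts.append(mesh)
    return trimesh.util.concatenate(parts)

# ~250 toys, each exportable as an STL for 3D printing
toys = [random_toy(seed=i) for i in range(250)]
toys[0].export("toy_000.stl")
```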
During 3 months in Silicon Valley, our R&D intern Matteo Gioia worked on a major project with @PanasonicNA AI Labs & @berkeley_ai, tackling one of the key challenges in today's robotics — generalization. Inspired by how children learn through play, mastering a small set of…
Generalization is the biggest problem for robotics right now. This includes generalization to unseen objects, environments, tasks… Our recent work shows that generalization to novel objects might not be *that* hard. Specifically, we show that robots, trained on **randomly…
Learning from random toy shapes generalizes to out-of-distribution (OOD) object grasping, and boosts the performance of off-the-shelf VLA models like Pi0-Fast! Check out our latest work LEGO 👉
Children learn to manipulate the world by playing with toys — can robots do the same? 🧸🤖 We show that robots trained on 250 "toys" made of 4 shape primitives (🔵,🔶,🧱,💍) can generalize grasping to real objects. @JitendraMalikCV @trevordarrell Shankar Sastry @berkeley_ai😊
Results:
✅ ManiSkill: 80% zero-shot success, beating all finetuned baselines.
✅ Franka DROID: 67%, surpassing ShapeGrasp / OpenVLA / π₀-FAST (27% / 9% / 62%).
✅ H1-2 Hands: 51%, outperforming large VLAs (18–26%).
Simplicity ⇒ Generalization 🔵🔶🧱💍
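As a point of reference for how such zero-shot numbers are typically collected, here is a minimal evaluation-loop sketch against ManiSkill's gymnasium interface; the env ID ("PickCube-v1"), the episode count, the success_rate helper, and the policy callable are placeholder assumptions, not the paper's actual protocol:

```python
import gymnasium as gym
import mani_skill.envs  # noqa: F401 (importing registers the ManiSkill environments)

def success_rate(policy, env_id="PickCube-v1", episodes=100):
    """Run a trained policy for N episodes and report the zero-shot success rate."""
    env = gym.make(env_id, obs_mode="rgbd", control_mode="pd_ee_delta_pose")
    successes = 0
    for ep in range(episodes):
        obs, info = env.reset(seed=ep)
        done = False
        while not done:
            action = policy(obs)  # placeholder: the grasping policy under test
            obs, reward, terminated, truncated, info = env.step(action)
            done = bool(terminated) or bool(truncated)
        successes += bool(info.get("success", False))
    env.close()
    return successes / episodes
```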
Co-authors: @Dantong_Niu, Yuvan Sharma, @baifeng_shi, Rachel Ding, Matteo Gioia, @HaoruXue, Henry Tsai, Konstantinos Kallidromitis, Anirudh Pai, Shankar Sastry, @trevordarrell, @JitendraMalikCV, @roeiherzig 🦾 @berkeley_ai × @Panasonic × @_italai × @SapienzaRoma 📄
Results speak for themselves 👇
✔️ 67% real-world grasp success on YCB — surpassing state-of-the-art systems trained on much more data.
✔️ 80% zero-shot success in ManiSkill simulation.
✔️ 51–67% on real robot setups (H1-2 Hands, Franka DROID) — consistently outperforming…
Training uses only four basic shape primitives:
🔵 spheres
🔶 cuboids
🧱 cylinders
💍 rings
From these, robots learn generalizable grasping skills — achieving zero-shot transfer to unseen objects in the real world.
Can robots learn like children do? We trained robots on just 250 “toy” objects, and they can now generalize grasping to 64 real-world items — no fine-tuning needed. Inspired by how children learn through play, this new paper explores a new path to scalable, general-purpose…
Unbiasedness, interpretability, and trustworthiness are important aspects, especially for biomedical computer vision. Proud to co-organize this workshop at @ICCVConference on the morning of October 19th. #ICCV25 👇 Awesome line-up of speakers
🍪🌴 Join us at our BISCUIT Workshop at @ICCVConference in Honolulu, Hawaii, on Oct 19, 2025, from 9:00 AM to 12:30 PM.
🎤 @MariaVakalopou1 - University of Paris-Saclay
🎤 @stefanroth - TU Darmstadt
🎤 @kushalkafle - Adobe
🎤 @davidbau - Northeastern University
Presenting “LongCodeBench: Evaluating Coding LLMs at 1M Context Windows” at @COLM_conf right now 🚀 The only benchmark combining real-world coding tasks, non-synthetic data, million-token contexts, and granular multi-scale evaluation. Come find us: 📍 Room 710 | Poster #49 | …
Second keynote of Day 1 at @COLM_conf by Shirley Ho on building a Polymathic Foundation Model for science 💡 A deep dive into building versatile systems for numerical data and ML tasks that can learn across heterogeneous scientific fields, where no shared representation like text exists…
Luke Zettlemoyer (@LukeZettlemoyer) plenary talk on scalable architectures for multimodal language modeling #COLM2025
Chameleon: autoregressive multimodal language models
-- treats images as tokens
-- works but is harder to scale
-- the modality gap seems to be a big problem
It's #COLM2025 week! Presenting "LongCodeBench: Evaluating Coding LLMs at 1M Context Windows" this Thursday at @COLM_conf in Montreal.
🗓 Poster Session 5 | 11:00 AM–1:00 PM
📍 710 | Poster #49
Come meet us! 🖇️ https://t.co/BDuX5TEsdH
Can LLMs really handle 1M-token contexts? Our #COLM2025 paper shows performance collapses at scale - even for top models. Enter LongCodeBench: the first realistic 1M-token coding benchmark, built from real GitHub issues to evaluate comprehension and bug repair in long-context…
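The thread doesn't show the benchmark's exact format; as a rough sketch of the construction idea, here is how one might pack a repository plus one real GitHub issue into a single ~1M-token prompt, where the Python-only file filter, the 4-characters-per-token estimate, and the build_example helper are all hypothetical simplifications:

```python
from pathlib import Path

def build_example(repo_dir: str, issue_text: str, budget_tokens: int = 1_000_000):
    """Pack repo source files plus one real GitHub issue into a single long prompt."""
    chunks, used = [], 0
    for path in sorted(Path(repo_dir).rglob("*.py")):
        text = path.read_text(errors="ignore")
        est = len(text) // 4  # crude estimate: ~4 characters per token
        if used + est > budget_tokens:
            break  # stop once the context budget is exhausted
        chunks.append(f"### FILE: {path}\n{text}")
        used += est
    prompt = (
        "\n\n".join(chunks)
        + f"\n\n### ISSUE\n{issue_text}"
        + "\n\n### TASK\nIdentify the file(s) responsible and propose a fix."
    )
    return {"prompt": prompt, "approx_tokens": used}
```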
We’ll be presenting LongCodeBench at @COLM_conf next week in Montreal. Don’t hesitate to reach out if you’ll be there — we’d love to chat about the paper (and more)!
Our paper “LongCodeBench: Evaluating Coding LLMs at 1M Context Windows” has been accepted at #COLM2025 🚀 This is a huge milestone for our team and LLM research. LongCodeBench is the first 1M-token benchmark for code ability and marks the first paper in our collaboration with…