Iro Armeni
@ir0armeni
Followers: 2K · Following: 566 · Media: 70 · Statuses: 332
Assistant Prof @Stanford CEE | Gradient 𝚫 Spaces Group. Researching #VisualMachinePerception to transform #Design & #Construction | https://t.co/27cilI29Bt
Joined June 2018
Very excited to open the afternoon Oral session with SuperDec 🏝️🚀 ✨ Oral: 1PM @ Exhibit Hall III (Ground Floor) 🖼️ Poster: 2:30PM, #143 🌍: https://t.co/H1CmUxp8Wc
@ICCVConference @mapo1 @FrancisEngelman @BoyangBoysun
Are photorealistic representations all we need? In SuperDec, we turn millions of points into compact and modular abstractions made of just a few superquadrics!🧩 Try our code and get a compact representation of your favorite scene!🚀 👾: https://t.co/4MSfLuUlTl
Come check this afternoon: 🏡 HouseTour: A Virtual Real Estate A(I)gent Posed photos ➡️ 3D tours + summaries. Built on 1.6K+ real house tours with poses, reconstructions & descriptions. 📍@ICCVConference | Poster #275 🔗 https://t.co/aSP7Ipj8bE W/ Ata Celen, @majti89, @mapo1
Visit our Demo on Controllable 3D Object Generation @ICCVConference - this afternoon, Tue Oct 21, 3pm-5pm - Exhibit Hall 1. Come early, so we can catch some waves before sunset 🌊🏄🏝️ Project: https://t.co/S1NkJaeO6p
@mapo1 @efedele16 @IanHuang3D @orlitany @GuibasLeonidas
Finally, I will give a talk on our ICCV paper HouseTour: A Virtual Real Estate A(I)gent, at OpenSun3D, Room 306B, 5:20PM. https://t.co/pE7k74LL90 W/ Ata Celen, Marc Pollefeys, Daniel Barath @ICCVConference
Later, meet me at the End-to-End 3D Learning Workshop to learn more about our work Rectified Point Flow, a flow model for assembly and registration through pose prediction, 2:05PM, Room 304B https://t.co/rJvDIACKwR W/ Tao Sun, Liyuan Zhu, Shengyu Huang, Shuran Song
Join me today for three ICCVW talks on three different topics: Starting with our 3D appearance work Guideflow3D, at Computer Vision for Fashion, Art, and Design, 11:20AM, Room 316B https://t.co/TJKXnN7tV4 w/ S. D. Sarkar, S. Stekovic, V. Lepetit
Join us this afternoon @ICCVConference for a fantastic line-up of speakers!
Join us this afternoon at our @ICCVConference workshop on Open-Vocabulary 3D Scene Understanding✨🌺 #OpenSUN3D #iccv2025 ➡️ https://t.co/4nSXaJHlfp
@FrancisEngelman @aycatakmaz @AlexDelitzas @mapo1 @TrackingActions @sainingxie @jstraub6 @ir0armeni
Our paper Rectified Point Flow is a Spotlight @NeurIPSConf 2025 🎉 It can assemble objects from unposed 3D fragments 🧩➡️🪑 …and even create brand new objects from random parts 🤯 https://t.co/wjMnkPPFKd Tao Sun ( https://t.co/Ys4eHuoXZk),
@liyuan_zz, @ShengyHuang, @SongShuran
Rectified Point Flow has been accepted to NeurIPS 2025 as spotlight! Our code, data and weights have been released: https://t.co/LajSoc0jSC
#neurips25
The next OpenSUN3D workshop takes place at @ICCVConference -- we are excited to announce a fantastic line-up of speakers: 👩💼 @TrackingActions @sainingxie @jstraub6 @angelxchang @ir0armeni ⏱️Date/time: Oct. 19, 1:30pm--6pm (1st day, afternoon) 🏡Details: https://t.co/XqA2dyAp2Q
Just dropped: Rectified Point Flow. Can we automate 3D assembly from unposed point clouds without supervision? Yes: we use a generative model that learns symmetry & part interchangeability entirely from shape, enabling robotics, AR/VR, & reverse engineering 🌐 https://t.co/7MtBwstJtB
Point maps have become a powerful representation for image-based 3D reconstruction. What if we could push point maps even further to tackle 3D registration and assembly? Introducing Rectified Point Flow (RPF), a generic formulation for point cloud pose estimation.
🌟🏆 Excited to share that I’ve been selected as a recipient of the 2025 Google Research Scholar Program in Machine Perception! Grateful to Google Research for supporting early-career faculty. Details and this year’s cohort of exceptional researchers:
We’re announcing the 87 professors selected for the 2025 Google Research Scholar Program — join us in congratulating these exceptional recipients and learn more about their groundbreaking work at https://t.co/sIoedpv9pI.
#GoogleResearch #GoogleResearchScholar
🚨Only 1 day left to submit a full paper on (scene) graphs to the 3rd Workshop on #SG2RL! Can't make it? Extended abstracts due July 7th. Dangling carrot🥕? We are in 𝑯𝒂𝒘𝒂𝒊'𝒊, @ICCVConference. 🌐 https://t.co/WyoCoDaqGD 🤹w/ the driving force @azadef, & @eadeli, @fedassa
#SG2RL returns to @ICCVConference 2025 at Honolulu, Hawai'i 🌴☀️ We call for full papers and extended abstracts on all topics around graphs and scene graphs. Deadline: 26 June (23:59PT) Website: https://t.co/e0EJfmZjoh With our amazing co-organizers: @eadeli @ir0armeni @fedassa
Back home after an intense week running the @CVPR AI Art gallery in Nashville, where we displayed 16 individual projects, 57 videos on 10 screens and a total of 102 projects online 🤖👀 Thank you to all the artists who participated and showed up 🙏 #CVPR2025 #CVPRAIart
We are here today till 5:15PM and tomorrow from 10AM. Creators (alphabetically): @ir0armeni, @mnbucher, @debsarkar_sayan, Emily Steiner, Tao Sun, @jianhao75895505, @liyuan_zz
At @CVPR? Come to the AI Art exhibit to interact with our #GradientCanvas! It is a community-generated #AR painting. Users can claim a region and add their own contribution. No painting skills required, just words! https://t.co/Cm8R84NxnV Gradient Spaces, Stanford Univ.
📣 Happening in 30 mins! Come chat with us about our ✨Highlight ✨ #CVPR paper on 3D scene cross-modal alignment, CrossOver! 🗓️ 4:00 p.m. - 6:00 p.m. CDT 📍 Poster Session #2 Exhibit Hall D Poster #346 Work w/ Ondrej Miksik, @mapo1, @majti89 and @ir0armeni 😄
🎉 Excited to share our latest work, CrossOver: 3D Scene Cross-Modal Alignment, accepted to #CVPR2025 🌐✨ We learn a unified, modality-agnostic embedding space, enabling seamless scene-level alignment across multiple modalities — no semantic annotations needed!🚀
Can’t make it? Don’t worry, he will present it again on Saturday, Poster 12060, morning session. Also, on Sunday morning during the demo session.