Ayoung
@_ayoungk
Followers: 435 · Following: 317 · Media: 45 · Statuses: 171
Associate professor, SNU. SLAM, computer vision, lidar, underwater, unconventional sensors
Seoul, Republic of Korea
Joined April 2024
Scientists have developed a rollable robotic structure that is flexible enough to collapse into a compact hub like a tape measure, but also stiff enough to bear heavy loads like 3D printers when extended. Learn more in Science #Robotics: https://t.co/fgD4ZjcnPp
8 · 95 · 457
Tried this on two of my submitted manuscripts under review. We got very constructive feedback to enhance the papers before the official reviews arrive in a few months.
Releasing a new "Agentic Reviewer" for research papers. I started coding this as a weekend project, and @jyx_su made it much better. I was inspired by a student who had a paper rejected 6 times over 3 years. Their feedback loop -- waiting ~6 months for feedback each time -- was
0 · 0 · 4
Beautiful work on Depth Anything 3 by @HaotongLin, @bingyikang, and the team! Btw, I thought it would be named Depth Anything v3.
4 · 14 · 194
Check out our heterogeneous radar place recognition work. How can we localize an automotive radar on a map created by an imaging radar?
[RA-L] SHeRLoc: Synchronized Heterogeneous Radar Place Recognition for Cross-Modal Localization https://t.co/QhejbdZ6DB
0 · 0 · 2
Best paper candidate at ICCV: Back on Track: Bundle Adjustment for Dynamic Scene Reconstruction (Chen et al., 2025). Check it out here: https://t.co/3MNQeU8wXA For anyone working on 3D reconstruction, SLAM, or dynamic scene understanding, this is a must-see that shows how
0 · 2 · 2
Very unexpected meet-up
Who on earth could have predicted that Lee Jae-yong and his party would have chimaek (chicken and beer) at Kimbu Chicken? If @AlloraNetwork had predicted this with its signature collective-intelligence AI model, and the chicken had been pre-ordered through @SonicLabs's fast transactions, could we have eaten the chicken together on the spot?
0 · 0 · 2
Thu poster session 6: we will be presenting TRAN-D at poster #308.
Our new work TRAN-D is accepted to ICCV 2025! TRAN-D reconstructs transparent object geometry in dynamic scenes. https://t.co/PrHSDAZpPE 39% lower MAE than baselines, with fewer views. Scene updates in seconds with physics simulation, no rescan needed! More in the thread.
0 · 0 · 3
Tue poster session 1: we will be presenting "Registration beyond Points". See us at poster #350.
Our paper "Registration beyond Points" was selected as a highlight at #ICCV2025! Kudos to the amazing team: @joomeok98, Hyeonjae Gil, Junwoo Jang, and @GhaffariMaani. Read the thread to learn more. Paper: https://t.co/g30tcsvO1H Code:
0 · 0 · 5
#CoRL2025 will live-stream all the talks for the entire community:
corl.org
CoRL 2025 grants virtual participation in talk sessions for everyone. Talks will be live-streamed on YouTube. You can join each day's session through the links below: September 28: https://youtube....
1 · 11 · 63
The SLAM Handbook is here! From Localization & Mapping to Spatial Intelligence. A must-read for anyone in computer vision & robotics, packed with both classical and modern SLAM algorithms. https://t.co/ILJey7bcJf
#SLAM #Robotics #ComputerVision
0 · 7 · 13
We have finished editing the SLAM Handbook, "From Localization and Mapping to Spatial Intelligence", and released it. The three-part handbook will be published by Cambridge University Press. Enjoy reading it online for now!
6 · 23 · 211
#IROS2025 IROS 2025 EXPO. Awards: A Best Demo Award will recognize the most creative and impactful presentations. How to submit: submit your proposal via PaperPlaza: https://t.co/0yTje7WN4a
0 · 1 · 2
We also release some LaTeX sty and bib files used in the handbook. If you are writing an ICRA paper on SLAM, these should be useful. Visit our GitHub repo for details: https://t.co/HWffjGuz7B
github.com
Release repo for our SLAM Handbook (SLAM-Handbook-contributors/slam-handbook-public-release).
We have completed the SLAM Handbook "From Localization and Mapping to Spatial Intelligence" and released it online: https://t.co/AnKa398nyw . The handbook will be published by Cambridge University Press. [1/n]
0 · 6 · 35
DINOv3 looks *amazing*! Self-supervised training FTW.
Introducing DINOv3: a SotA-enabling vision foundation model, trained with pure self-supervised learning (SSL) at scale. High-quality dense features, combining unprecedented semantic and geometric scene understanding. Three reasons why this matters…
0 · 5 · 51
Glad to share our paper at CoRL! ImLPR shows how to use a vision foundation model (VFM) for lidar place recognition. tl;dr: choose the three channels wisely for the range image view (RIV). Kudos to our amazing Oxford-SNU team: Minwoo Jung, Frank Fu, and @MauriceFallon.
https://t.co/hagQpbwmTv
0 · 1 · 6
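The ImLPR announcement above hinges on packing a lidar scan into a 3-channel range image view (RIV) so a vision foundation model can consume it like an RGB image. A minimal toy projection is sketched below; the channel choice (range, intensity, height) and the `pointcloud_to_riv` function with its FOV parameters are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def pointcloud_to_riv(points, intensity, h=64, w=1024,
                      fov_up=np.deg2rad(22.5), fov_down=np.deg2rad(-22.5)):
    """Project a lidar point cloud (N, 3) into an (h, w, 3) range-image view.

    Channels here (range, intensity, height) are illustrative only; the
    actual ImLPR channel selection may differ.
    """
    pts = np.asarray(points, dtype=float)
    r = np.linalg.norm(pts, axis=1)
    keep = r > 1e-6                            # drop degenerate zero-range points
    x, y, z = pts[keep, 0], pts[keep, 1], pts[keep, 2]
    r, inten = r[keep], np.asarray(intensity, dtype=float)[keep]

    yaw = np.arctan2(y, x)                     # azimuth in [-pi, pi)
    pitch = np.arcsin(np.clip(z / r, -1, 1))   # elevation angle

    # spherical coordinates -> pixel grid
    u = (((yaw + np.pi) / (2 * np.pi)) * w).astype(int) % w
    v = np.clip((((fov_up - pitch) / (fov_up - fov_down)) * h).astype(int), 0, h - 1)

    riv = np.zeros((h, w, 3), dtype=np.float32)
    for i in np.argsort(-r):                   # write far first, so near returns win
        riv[v[i], u[i]] = (r[i], inten[i], z[i])
    return riv
```

The resulting (h, w, 3) array can then be normalized and fed to a VFM backbone exactly like an ordinary image.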
Today's paper: Kim et al., "ExploreGS: Explorable 3D Scene Reconstruction with Virtual Camera Samplings and Diffusion Priors". Create virtual trajectories that would improve novel view renderings, and use a specialized video model to help fill the gap.
1 · 10 · 41
Due to an overwhelming number of requests, #CoRL2025 registration is temporarily closed. One presenter per accepted paper will be guaranteed registration, and registration will reopen at a later date.
3 · 4 · 28
HIRING: New postdoc position at @oxfordrobots! I have a new collaborative project with @DrStephenMellon to further develop our vision-based ultrasound reconstruction system, called ScanLite. Deadline: 8 Sept 2025. Apply here: https://t.co/68w2vrR2jG
1 · 7 · 20
Very cool indoor localization startup story
Craziest DM I ever received, from a VP at a global retailer: "Our app is shit and we know it's shit". I met her for coffee and she asked me if I could solve the biggest unsolved problem in retail. This is a deep dive into why and how Hyper built a 1m-accurate indoor GPS. This
0 · 1 · 20