Taein Kwon
@taeinkwon1
Followers: 279
Following: 81
Media: 5
Statuses: 53
Postdoctoral fellow at VGG, Oxford. Working on Video Understanding, Augmented Reality, and Hand-Object Interaction.
Oxford, England
Joined November 2021
We are seeking a full-time Postdoctoral Research Assistant in Computer Vision to join the Visual Geometry Group (University of Oxford) to work on 3D and Spatial AI with Professor Andrea Vedaldi. The post is funded by ERC and is fixed-term for two years with a possible extension.
3
13
48
After two amazing years with @Oxford_VGG, I will be joining @NTUsg as a Nanyang Assistant Professor in Fall 2025! I'll be leading the Physical Vision Group (https://t.co/byLxP7FE4a), and we're hiring for next year! If you're passionate about vision or AI, get in touch!
24
29
243
EgoPressure at @CVPR 2025 (Highlight)! w/ @_yimzhao_, @taeinkwon1, @mapo1, @cholz. Project page: https://t.co/Ndm0Mo8Bvx · Paper: https://t.co/w8C669rnny · Come visit us during our poster session at ExHall D, Poster 149, 15 Jun, 16:00–18:00 CDT!
2
7
45
Thrilled and honored to receive the Best Paper Award at #CVPR2025! Huge thanks to my fantastic collaborators @MinghaoChen23, @n_karaev, Andrea Vedaldi, Christian Rupprecht, and @davnov134. Could not have been there without you!
5
4
156
Many congratulations to @jianyuan_wang, @MinghaoChen23, @n_karaev, Andrea Vedaldi, Christian Rupprecht and @davnov134 for winning the Best Paper Award @CVPR for "VGGT: Visual Geometry Grounded Transformer" #CVPR2025!
17
71
493
Congratulations to @taeinkwon1 and the team for presenting their work "A Dataset for Hand Pressure and Pose Estimation in Egocentric Vision" as a @CVPR Highlight #CVPR25. https://t.co/HsuGLEljx1
0
3
28
Join our @CVPR panel discussion on Vision-based Assistants with Mike Shou, Kate Saenko, Angela Yao, and Marc Pollefeys, moderated by Roland Memisevic, at 17:00 in Room 211. @MikeShou1, @kate_saenko_, @angelayao101, @mapo1, @RolandMemisevic #CVPR25
0
4
7
Introducing JEGAL! JEGAL can match hand gestures with words & phrases in speech/text. By only looking at hand gestures, JEGAL can perform tasks like determining who is speaking, or whether a keyword (e.g., "beautiful") is gestured. More about our latest research on co-speech gestures in the thread below.
2
15
35
Excited to announce that I'm joining Yale as an Assistant Prof in ECE! I'm building a research group and hiring passionate PhD students & postdocs. If you're interested in mobile & wireless sensing, wireless networking, & robotics, let's connect! Check out my TEDx Talk for an…
11
40
347
Thank you so much for our team's contributions: @xinw_ai, Mahdi Rad, Bowen Pan, Ishani Chakraborty, @seanandrist, @danbohus, @AshleyFeniello, Bugra Tekin, Felipe Vieira Frujeri, @neelsj, and @mapo1 !
0
0
1
I am deeply honored and grateful for this recognition from the community. We remain dedicated to advancing assistive AI. Stay tuned for more exciting developments and research in egocentric computer vision from us!
0
0
0
I am very excited to announce that our paper, HoloAssist (https://t.co/rn9Ef90HEt), has been selected for the EgoVis 2022/2023 Distinguished Paper Award @ CVPR'24.
2
0
4
0
3
10
Thanks to our amazing co-authors and challenge organizers, @xinw_ai, Mahdi Rad, Bowen Pan, Ishani Chakraborty, @seanandrist, @danbohus, @AshleyFeniello, Bugra Tekin, Felipe Vieira Frujeri, @neelsj, and @mapo1!
0
0
1
The dataset and associated challenges will be an excellent starting point for anyone interested in assistive AI, action recognition, procedural videos, and AR/VR/MR applications!
0
0
0
For detailed information about the challenges, please visit the EgoVis workshop site (https://t.co/YnOMwbXH6v) and explore the HoloAssist page (https://t.co/xEIzQDJUeQ).
0
0
0
In the workshop, our HoloAssist dataset will host three challenges, with a deadline of June 5th, 2024:
- Action Recognition
- Mistake Detection
- Intervention Type Prediction
The results of these challenges will be presented at the EgoVis workshop during CVPR 2024.
0
0
0
Our dataset covers:
- 166 hours of data
- 2,221 sessions
- 350 unique instructor-performer pairs
- 16 objects with different shapes and functions
- 20 object-centric manipulation tasks
- 8 modalities (RGB, hand pose, depth, head pose, eye gaze, audio, IMU, summary/action labels)
0
0
0