Taein Kwon (@taeinkwon1)
Followers 259 · Following 78 · Media 5 · Statuses 52

Postdoctoral fellow at VGG, Oxford, working on Video Understanding, Augmented Reality, and Hand-Object Interaction

Oxford, England · Joined November 2021
Taein Kwon (@taeinkwon1) · 17 days
RT @ChuanxiaZ: After two amazing years with @Oxford_VGG, I will be joining @NTUsg as a Nanyang Assistant Professor in Fall 2025! I’ll be…
Taein Kwon (@taeinkwon1) · 24 days
RT @paulstreli: EgoPressure at @CVPR 2025 (Highlight)! w/ @_yimzhao_, @taeinkwon1, @mapo1, @cholz. 🌍 Project page:
Taein Kwon (@taeinkwon1) · 27 days
RT @jianyuan_wang: Thrilled and honored to receive the Best Paper Award at #CVPR2025! Huge thanks to my fantastic collaborators @MinghaoChe…
Taein Kwon (@taeinkwon1) · 27 days
RT @Oxford_VGG: Many congratulations to @jianyuan_wang, @MinghaoChen23, @n_karaev, Andrea Vedaldi, Christian Rupprecht and @davnov134 for w…
Taein Kwon (@taeinkwon1) · 27 days
RT @Oxford_VGG: Congratulations to @taeinkwon1 and the team for presenting their work "A Dataset for Hand Pressure and Pose Estimation in E…
Taein Kwon (@taeinkwon1) · 29 days
RT @famesener: Join our @CVPR panel discussion on Vision-based Assistants with Mike Shou, Kate Saenko, Angela Yao and Marc Pollefeys modera…
Taein Kwon (@taeinkwon1) · 3 months
RT @SindhuBHegde: Introducing JEGAL 👐 JEGAL can match hand gestures with words & phrases in speech/text. By only looking at hand gestures, J…
Taein Kwon (@taeinkwon1) · 8 months
RT @Tara_Boroushaki: Excited to announce that I'm joining Yale as an Assistant Prof in ECE! I’m building a research group and hiring passio…
Taein Kwon (@taeinkwon1) · 9 months
RT @xiwang1212: What a great pleasure of having Taein @taeinkwon1 presenting our poster ❤️
Taein Kwon (@taeinkwon1) · 1 year
Thank you so much for our team's contributions: @xinw_ai, Mahdi Rad, Bowen Pan, Ishani Chakraborty, @seanandrist, @danbohus, @AshleyFeniello, Bugra Tekin, Felipe Vieira Frujeri, @neelsj, and @mapo1!
Taein Kwon (@taeinkwon1) · 1 year
I am deeply honored and grateful for this recognition from the community. We remain dedicated to advancing assistive AI. Stay tuned for more exciting developments and research in egocentric computer vision from us!
Taein Kwon (@taeinkwon1) · 1 year
🏆 I am very excited to announce that our paper, HoloAssist, has been selected for the EgoVis 2022/2023 Distinguished Paper Award at CVPR'24.
Taein Kwon (@taeinkwon1) · 1 year
RT @anfurnari: #EGOVIS @CVPR The first challenge session for the day is about to start! @taeinkwon1
Taein Kwon (@taeinkwon1) · 1 year
Thanks to our amazing co-authors and challenge organizers, @xinw_ai, Mahdi Rad, Bowen Pan, Ishani Chakraborty, @seanandrist, @danbohus, @AshleyFeniello, Bugra Tekin, Felipe Vieira Frujeri, @neelsj, and @mapo1!
Taein Kwon (@taeinkwon1) · 1 year
The dataset and associated challenges will be an excellent starting point for anyone interested in assistive AI, action recognition, procedural videos, and AR/VR/MR applications!
Taein Kwon (@taeinkwon1) · 1 year
For detailed information about the challenges, please visit the EgoVis workshop site and explore the HoloAssist page.
Taein Kwon (@taeinkwon1) · 1 year
In the workshop, our HoloAssist dataset will host three challenges, with a deadline of June 5th, 2024:
- Action Recognition
- Mistake Detection
- Intervention Type Prediction
The results of these challenges will be presented at the EgoVis workshop during CVPR 2024.
Taein Kwon (@taeinkwon1) · 1 year
Our dataset covers:
- 166 hours of data
- 2,221 sessions
- 350 unique instructor-performer pairs
- 16 objects with different shapes and functions
- 20 object-centric manipulation tasks
- 8 modalities (RGB, hand pose, depth, head pose, eye gaze, audio, IMU, summary/action labels)
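For a concrete sense of how these eight modalities could fit together per session, here is a minimal sketch in Python; the class name, field names, and array shapes are illustrative assumptions for this post, not HoloAssist's actual data format or loading API.

```python
# Illustrative sketch only: field names and array shapes are assumptions,
# not the actual HoloAssist data format or API.
from dataclasses import dataclass, field

import numpy as np


@dataclass
class HoloAssistSession:
    """One instructor-performer session with the eight modalities listed above."""
    rgb: np.ndarray        # (T, H, W, 3) egocentric video frames
    hand_pose: np.ndarray  # (T, 2, J, 3) per-frame joints for both hands (J assumed)
    depth: np.ndarray      # (T, H, W) depth maps, assumed aligned to RGB
    head_pose: np.ndarray  # (T, 4, 4) head/camera pose matrices
    eye_gaze: np.ndarray   # (T, 3) gaze direction vectors
    audio: np.ndarray      # (S,) audio waveform samples
    imu: np.ndarray        # (T, 6) accelerometer + gyroscope readings
    # (start_sec, end_sec, label) spans for the summary/action annotations
    action_labels: list[tuple[float, float, str]] = field(default_factory=list)

    def clips_for(self, label: str) -> list[tuple[float, float]]:
        """Return the time spans annotated with a given action label."""
        return [(s, e) for s, e, l in self.action_labels if l == label]
```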
Taein Kwon (@taeinkwon1) · 1 year
Through the challenges, we gain key insights into how human assistants correct mistakes, intervene in task completion procedures, and ground their instructions in the environment.