Gedas Bertasius Profile
Gedas Bertasius

@gberta227

Followers: 1K
Following: 3K
Media: 45
Statuses: 490

Assistant Professor at @unccs, previously a postdoc at @facebookai, PhD from @Penn, a basketball enthusiast.

Chapel Hill, NC
Joined June 2020
@gberta227
Gedas Bertasius
1 month
Excited to share our new video-language benchmark for expert-level action analysis! Most existing VLMs struggle significantly with our new benchmark, which requires a precise understanding of nuanced physical human skills. Try your VLMs and let us know how they do!
@Han_Yi_724
Han Yi
1 month
🚀 Introducing ExAct: A Video-Language Benchmark for Expert Action Analysis. 🎥 3,521 expert-curated video QA pairs in 6 domains (Sports, Bike Repair, Cooking, Health, Music & Dance). 🧠 GPT-4o scores 44.70% vs. human experts at 82.02%, a huge gap! 📄 Paper:
@gberta227
Gedas Bertasius
4 hours
RT @ZiyangW00: 🚨 Introducing Video-RTS: Resource-Efficient RL for Video Reasoning with Adaptive Video TTS! While RL-based video reasoning…
@gberta227
Gedas Bertasius
8 days
RT @mmiemon: 🚀 On the job market! Final-year PhD @ UNC Chapel Hill working on computer vision, video understanding, multimodal LLMs & AI ag…
@gberta227
Gedas Bertasius
20 days
RT @mmiemon: Great to see our paper ReVisionLLM featured by the MCML blog! @gberta227 #CVPR2025
@gberta227
Gedas Bertasius
25 days
RT @mmiemon: Come to our poster today at #CVPR2025! 🗓️ June 15 | 🕓 4–6PM 📍 Poster #282 | ExHall D 📝 Paper: 🌐 Proje…
@gberta227
Gedas Bertasius
25 days
RT @mmiemon: Great to see a lot of interest among the video understanding community about ReVisionLLM! If you missed it, check out https://t…
@gberta227
Gedas Bertasius
26 days
RT @mmiemon: Presenting ReVisionLLM at #CVPR2025 today! Recursive Vision-Language Model for Temporal Grounding in Hour-Long Videos. If you…
@gberta227
Gedas Bertasius
27 days
Another great accomplishment by Emon at this year's #CVPR2025. Interestingly, rather than using some complex ensemble model, Emon won the EgoSchema challenge by simply applying his latest BIMBA model, which he will also present at the poster session on Sunday, 4-6 PM. Be sure to stop by!
@mmiemon
Mohaiminul (Emon) Islam (on job market)
27 days
🚀 Excited to share that we won 1st place at the EgoSchema Challenge at EgoVis, #CVPR2025! Our method (81%) outperformed human accuracy (76.2%) for the first time on this challenging task 🎯. Stop by #CVPR: 📍 Poster #282 | June 15, 4–6PM | ExHall D 🔗
@gberta227
Gedas Bertasius
28 days
RT @mmiemon: Excited to share that our paper Video ReCap (#CVPR2024) won the EgoVis Distinguished Paper Award at #CVPR2025! Honored to see…
@gberta227
Gedas Bertasius
28 days
Very proud of this great accomplishment! Congrats @mmiemon! Well deserved!
@mmiemon
Mohaiminul (Emon) Islam (on job market)
28 days
Excited to share that our paper Video ReCap (#CVPR2024) won the EgoVis Distinguished Paper Award at #CVPR2025! Honored to see our work recognized and its impact on the video understanding community. Huge thanks to my co-authors and my advisor @gberta227. 🔗
@gberta227
Gedas Bertasius
28 days
RT @SenguptRoni: Had a fun time attending @unccs Computer Vision past and present meetup at #CVPR2025, missing a lot of folks though 😌 htt…
@gberta227
Gedas Bertasius
28 days
I will be presenting more details on SiLVR at the LOVE: Multimodal Video Agent workshop at 12:15pm CST in Room 105A!
@cezhhh
Ce Zhang
1 month
Recent advances in test-time optimization have led to remarkable reasoning capabilities in LLMs. However, the reasoning capabilities of MLLMs still significantly lag, especially for complex video-language tasks. We present SiLVR, a Simple Language-based Video Reasoning framework.
@gberta227
Gedas Bertasius
28 days
RT @cezhhh: Recent advances in test-time optimization have led to remarkable reasoning capabilities in LLMs. However, the reasoning capabil…
@gberta227
Gedas Bertasius
28 days
Happening today at 1:20pm CST in Rooms 209 A-C!
@CMHungSteven
Min-Hung (Steve) Chen
1 month
@CVPR is around the corner!! Join us at the Workshop on T4V at #CVPR2025 with a great speaker lineup (@MikeShou1, @jw2yang4ai, @WenhuChen, @roeiherzig, Yuheng Li, Kristen Grauman) covering diverse topics! Website: #CVPR #Transformer #Vision #T4V2025 #T4V
@gberta227
Gedas Bertasius
29 days
RT @ZiyangW00: Excited to present VideoTree🌲 at #CVPR2025 Fri at 10:30AM! VideoTree improves long-video QA via smart sampling: -Query-adap…
@gberta227
Gedas Bertasius
29 days
RT @CMHungSteven: @CVPR is around the corner!! Join us at the Workshop on T4V at #CVPR2025 with a great speaker lineup (@MikeShou1, @jw2yan…
@gberta227
Gedas Bertasius
30 days
RT @A_v_i__S: Big news! 🎉 I’m joining UNC-Chapel Hill as an Assistant Professor in Computer Science starting next year! Before that, I’ll…
@gberta227
Gedas Bertasius
1 month
Join us for the 4th iteration of the Transformers for Vision (T4V) workshop on Thursday!
@CMHungSteven
Min-Hung (Steve) Chen
1 month
@CVPR is around the corner!! Join us at the Workshop on T4V at #CVPR2025 with a great speaker lineup (@MikeShou1, @jw2yang4ai, @WenhuChen, @roeiherzig, Yuheng Li, Kristen Grauman) covering diverse topics! Website: #CVPR #Transformer #Vision #T4V2025 #T4V
@gberta227
Gedas Bertasius
1 month
RT @Han_Yi_724: 🚀 Introducing ExAct: A Video-Language Benchmark for Expert Action Analysis. 🎥 3,521 expert-curated video QA pairs in 6 domai…
@gberta227
Gedas Bertasius
1 month
RT @mmiemon: Had a great time presenting at the GenAI session @CiscoMeraki, thanks @nahidalam for the invite 🙏 Catch us at #CVPR2025: 📌 BIMBA…