Michael Wray
@mwray0
Followers
300
Following
662
Media
28
Statuses
141
Interested in (Egocentric) Video Understanding and Language. Lecturer/Assistant Professor at the University of Bristol.
Bristol
Joined November 2012
For more information and results, go to Sam's website: https://t.co/20DyTysDrZ We include code and test sets for reproducibility, so you can evaluate your own method's attribution scores across the different modalities! Work done by Sam Pollard as first author.
sjpollard.github.io
Sam Pollard, Michael Wray arXiv
1
0
0
We evaluate 6 methods (both new and old) across 4 common and new video question answering benchmarks. It's not all bad news, however: by including more than 5 answers, we find that the models pay more attention to both the video frames and the question (PFC_v/PFC_q below)!
1
0
0
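(PFC presumably abbreviates a per-feature/per-modality contribution score; the exact definition is in the paper. As a rough, hedged illustration only, here is a minimal sketch of how a contribution fraction per modality could be computed once per-feature attribution scores exist. The function name and input format are assumptions, not the paper's code.)

```python
def contribution_fractions(attributions):
    """Fraction of total absolute attribution mass carried by each modality.

    `attributions` is a hypothetical dict mapping a modality name ("video",
    "question", "answer") to a list of per-feature attribution scores,
    e.g. one score per video frame or per question token.
    """
    totals = {m: sum(abs(s) for s in scores) for m, scores in attributions.items()}
    grand_total = sum(totals.values()) or 1.0  # guard against an all-zero attribution
    return {m: t / grand_total for m, t in totals.items()}

# e.g. contribution_fractions({"video": [0.01, 0.02], "question": [0.2, 0.1], "answer": [0.7]})
# -> {"video": ~0.03, "question": ~0.29, "answer": ~0.68}
```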
We extend Shapley Values across the Video, Question, and Answer modalities to calculate new 'per modality' and 'per feature' metrics. Our results show that the answer/question modalities dominate over the video modality by orders of magnitude!
1
0
0
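(For context: the Shapley value of each modality weights its marginal contribution over every coalition of the other modalities. Below is a minimal sketch of the exact computation for three "players", assuming a hypothetical `score_fn` that runs the VLM with only the given modalities visible and returns, say, the probability of the correct answer. This illustrates the general technique, not the paper's implementation.)

```python
from itertools import chain, combinations
from math import factorial

PLAYERS = ("video", "question", "answer")  # the three modalities as Shapley "players"

def powerset(items):
    """All subsets of `items`, from the empty set to the full set."""
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def shapley_per_modality(score_fn):
    """Exact Shapley values over the three modalities.

    `score_fn(coalition)` is a hypothetical callable: it evaluates the VLM with
    only the modalities in `coalition` visible (the rest masked out) and
    returns a scalar, e.g. the probability assigned to the correct answer.
    """
    n = len(PLAYERS)
    values = {p: 0.0 for p in PLAYERS}
    for p in PLAYERS:
        others = [q for q in PLAYERS if q != p]
        for subset in powerset(others):
            s = len(subset)
            weight = factorial(s) * factorial(n - s - 1) / factorial(n)
            marginal = score_fn(frozenset(subset) | {p}) - score_fn(frozenset(subset))
            values[p] += weight * marginal
    return values
```

(Per-feature scores can be defined the same way by treating individual frames or tokens as players, though exhaustive enumeration then becomes infeasible and a sampling-based approximation is needed.)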
A Video is Not Worth a Thousand Words! Our new paper on arXiv explores how VLMs deal with the different modalities in Video Question Answering, using Shapley Values. Read the paper: https://t.co/HVJXQNtsyd Website: https://t.co/tl6VogIDL5
1
0
2
Fantastic two days with François visiting our MaVi group! Inspiring discussions, engaging talks, and great exchanges on the latest research. Big congrats to Dr. Kevin on a successful viva and to @mwray0 on a career milestone!
Many thanks to Francois Bremond (@inria_sophia) for a 2-day #Bristol visit to the @BristolUni #MaVi (Machine Learning & Computer Vision) research group, hosted by @WayHomeLi. Great presentation incl. #CVPR2025 & #ICCV2025 papers from Francois's group, insights and future directions. 1/2
0
1
3
Many thanks @PascalMettes @UvA_IvI for visiting us @BristolUni to examine (now Dr) Adriano Fragomeni (supervised by myself and @mwray0) and give a great talk on hyperbolic deep learning. Enjoyed your visit!
0
6
21
@SiyuTang3 on "Towards an egocentric multimodal foundation model" now at the #EGOVIS workshop at #CVPR2025!
0
3
7
Code can be found on GitHub: https://t.co/xVxRyb0nqv Work by Sam Pollard. Want more details? We'll both be at CVPR 2025 next week to present the work and answer Qs!
github.com
sjpollard/video-how-do-your-tokens-merge: Code for Video, How Do Your Tokens Merge?
0
0
1
By evaluating across Kinetics, Something-Something, and EPIC-Kitchens-100, we investigate how action granularity and first/third person viewpoints affect the token merging process and performance.
1
0
0
We evaluate different token merging strategies across four models (see the paper for everything!), showcasing how a 2x speedup can be achieved with little to no drop in accuracy!
1
0
0
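(For readers unfamiliar with the technique: training-free token merging, e.g. ToMe-style bipartite soft matching, pairs up similar tokens between transformer blocks and averages them, shrinking the sequence as it flows through the network. Below is a minimal PyTorch sketch of one such strategy; it is an illustration of the general idea, not necessarily any of the specific strategies compared in the paper.)

```python
import torch
import torch.nn.functional as F

def tome_merge(tokens: torch.Tensor, r: int) -> torch.Tensor:
    """ToMe-style bipartite soft matching: merge r tokens from set A into set B.

    tokens: (N, D) token embeddings for a single sample.
    Returns (N - r, D): the r most similar A tokens are averaged into their
    nearest B partners; everything else is kept unchanged.
    """
    a, b = tokens[0::2], tokens[1::2]                      # alternate tokens into two sets
    sim = F.normalize(a, dim=-1) @ F.normalize(b, dim=-1).T  # cosine similarity A x B
    best_sim, dst = sim.max(dim=-1)                        # nearest B partner for each A token
    order = best_sim.argsort(descending=True)
    src_idx = order[:r]                                    # A tokens to merge away
    keep_idx = order[r:]                                   # A tokens to keep

    # Average each merged A token into its destination B token.
    summed = b.clone()
    counts = torch.ones(b.shape[0], 1, device=tokens.device)
    summed.index_add_(0, dst[src_idx], a[src_idx])
    counts.index_add_(0, dst[src_idx], torch.ones(src_idx.shape[0], 1, device=tokens.device))
    merged_b = summed / counts

    return torch.cat([a[keep_idx], merged_b], dim=0)       # N - r tokens total
```

(Applying this between transformer blocks with a fixed r per layer progressively reduces the token count, which is where speedups of the order quoted above come from.)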
📢 Our paper "Video, How Do Your Tokens Merge?" is now on arXiv, to be presented at eLVM @CVPR 2025! We explore training-free token merging for video understanding models across datasets with differing granularities. https://t.co/ZW68MkFtmB
https://t.co/fQLLHGzKsw
1
0
8
⏳Only 1 day left to submit your reviews! ⚠️Late or careless reviews won’t be taken lightly — authors of such reviews risk having their own submissions desk rejected. Please be fair and responsible!🙂
2
5
26
Thanks @hubertshum for visiting us today at Bristol and giving a great talk as part of the #MaVi seminar series!
0
0
2
Attending @wacv_official #WACV2025 today? This morning's poster session 3 will include our paper "Moment of Untruth". Check out Poster 70 and talk to Kevin, who will be there to tell you more or answer any questions.
📢 Our @wacv_official #WACV2025 paper: Moment of Untruth - Dealing With Negative Queries in Video Moment Retrieval, is now on ArXiv. We explore false positives of Video Moment Retrieval models when given negative queries. https://t.co/3nE4gisGVb
https://t.co/YyJI8VIGM9
0
5
24
Code and dataset can be found on GitHub: https://t.co/fAnVDpHjwm Work by Kevin Flanagan @BristolUni @Bristol_AI_CDT with @dimadamen. Any Qs? Kevin will be at WACV to present the work!
github.com
keflanagan/MomentofUntruth
0
0
0
Finally, we propose a simple model that can be applied to any VMR method to successfully separate positive queries that exist in the video from negative queries that don't!
1
0
2
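(A hedged sketch of how such a plug-in rejection step might look in practice: score the query against the video and only run retrieval when the score clears a threshold. `vmr_model`, `similarity`, and the threshold below are illustrative placeholders, not the model proposed in the paper.)

```python
def retrieve_with_rejection(vmr_model, similarity, video, query, threshold=0.5):
    """Wrap any VMR model with a simple negative-query rejection step.

    vmr_model(video, query)  -> (start, end) proposed temporal moment
    similarity(video, query) -> scalar relevance score in [0, 1]
    Both callables and the threshold are hypothetical placeholders.
    """
    if similarity(video, query) < threshold:
        return None                     # query judged not to occur in the video
    return vmr_model(video, query)      # otherwise return the retrieved moment
```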
We show that this is an issue for current video moment retrieval models, providing two new benchmarks for negative-aware video moment retrieval with both In-Domain and Out-of-Domain negatives.
1
0
0
📢 Our @wacv_official #WACV2025 paper: Moment of Untruth - Dealing With Negative Queries in Video Moment Retrieval, is now on ArXiv. We explore false positives of Video Moment Retrieval models when given negative queries. https://t.co/3nE4gisGVb
https://t.co/YyJI8VIGM9
1
0
15