Thomas Hummel Profile
Thomas Hummel

@hummelth_

Followers
117
Following
66
Media
2
Statuses
10

PhD student @uni_tue within #IMPRSIS | Interested in multimodal learning and video understanding | Prev. research intern @SonyAI_global🇨🇭

Joined February 2018
@hummelth_
Thomas Hummel
3 months
RT @ExplainableML: 🎓 PhD Spotlight: Thomas Hummel. A spotlight on our one and only @hummelth_, who will defend his PhD on 23rd June! 🎉 Tho…
0
7
0
@hummelth_
Thomas Hummel
11 months
RT @eccvconf: The list of #ECCV2024 Outstanding Reviewers! Thank you for your service 🫡
Tweet media one
0
21
0
@hummelth_
Thomas Hummel
1 year
Juggling my PhD work and reviewing isn't always easy, but super happy to be recognized for this!
@CVPR
#CVPR2025
1 year
HUGE shoutout to our #CVPR2024 Outstanding Reviewers 🫡
Tweet media one
0
1
27
@hummelth_
Thomas Hummel
2 years
This was a joint work with Otniel-Bogdan Mercea (@MerceaOtniel), A. Sophia Koepke and Zeynep Akata (@zeynepakata). You can find paper, code and more on our project page! (4/4)
0
0
3
@hummelth_
Thomas Hummel
2 years
Our method ReGaDa uses a residual gating mechanism to explicitly exploit the compositionality of adverbs and actions when learning text representations. ReGaDa outperforms all prior works on the video-adverb retrieval tasks, setting the new state of the art! (3/4)
1
0
3
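The residual gating idea mentioned above can be illustrated with a minimal sketch: a sigmoid gate decides, per feature, how strongly the adverb embedding modifies the action embedding, with a residual connection keeping the action as the base. All function and weight names here are assumptions for illustration, not the actual ReGaDa implementation.

```python
import numpy as np

def residual_gating(action, adverb, W_gate, W_proj):
    """Hypothetical sketch of composing adverb and action text embeddings
    via residual gating (illustrative only, not the ReGaDa code)."""
    pair = np.concatenate([action, adverb], axis=-1)
    gate = 1.0 / (1.0 + np.exp(-pair @ W_gate))   # per-feature gate in (0, 1)
    residual = pair @ W_proj                       # adverb-conditioned modification
    # The action embedding stays the base; the gate controls how much
    # the adverb-dependent residual alters it.
    return action + gate * residual

# Toy usage with random embeddings and weights:
rng = np.random.default_rng(0)
d = 8
action = rng.normal(size=(2, d))
adverb = rng.normal(size=(2, d))
W_gate = rng.normal(size=(2 * d, d))
W_proj = rng.normal(size=(2 * d, d))
composed = residual_gating(action, adverb, W_gate, W_proj)
print(composed.shape)  # (2, 8)
```

In a retrieval setting, such composed adverb-action text embeddings would then be compared against video features, e.g. by cosine similarity.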
@hummelth_
Thomas Hummel
2 years
In our work, we aim to better understand how actions are being performed 🧐 In addition to recognising actions, it is also useful to understand details about their execution (e.g. slowly vs. quickly). We tackle this problem as a video-adverb retrieval task 🤖 (2/4)
1
0
3
@hummelth_
Thomas Hummel
2 years
Excited to share that I'll be presenting our work on "Video-adverb retrieval with compositional adverb-action embeddings" as an Oral today in beautiful Aberdeen at #BMVC2023 @BMVCconf! 🎉 (1/4)
1
7
22
@hummelth_
Thomas Hummel
2 years
Come talk to me at our poster today at #GCPR2023 to dive into the details 🚀 In this work, we propose a novel few-shot audio-visual classification benchmark and a text-to-feature diffusion framework to augment the training!
@ExplainableML
Explainable Machine Learning
2 years
At #GCPR23 today? Then come to our poster on "Text-to-feature diffusion for audio-visual few-shot learning" by @MerceaOtniel, @hummelth_, A. Sophia Koepke and @zeynepakata! Find out more about our work here:
Tweet media one
0
1
13
@hummelth_
Thomas Hummel
2 years
RT @ExplainableML: Interested in XAI? Do not miss the Explainability in ML workshop! It will take place March 28-29 in Tübingen (Germany)…
0
18
0
@hummelth_
Thomas Hummel
3 years
Come stop by our poster this afternoon (1.B-101) if you want to talk with us about audio-visual ZSL! We show that our temporal and cross-modal constrained attention mechanism outperforms previous work on three audio-visual GZSL benchmarks! #ECCV2022
@ExplainableML
Explainable Machine Learning
3 years
🗓️ Sess 6 | 101, 25/10 | afternoon. "Temporal and cross-modal attention for audio-visual zero-shot learning". @MerceaOtniel*, @hummelth_*, A. S. Koepke, @zeynepakata. We leverage temporal context for improved audio-visual zero-shot learning! Blog
Tweet media one
0
1
9