Pascal Mettes
@PascalMettes
Followers: 2K · Following: 836 · Media: 34 · Statuses: 403
Assistant Professor - University of Amsterdam | Hyperbolic deep learning
Joined March 2020
Our survey "Hyperbolic Deep Learning in Computer Vision: A Survey" has been accepted to #IJCV! The survey provides an organization of supervised and unsupervised hyperbolic literature. Online now: https://t.co/pHbOZKWqeq w/ @GhadimiAtigMina @mkellerressel @jeffhygu @syeung10
link.springer.com
International Journal of Computer Vision - Deep representation learning is a ubiquitous part of modern computer vision. While Euclidean space has been the de facto standard manifold for learning...
I am very happy to share that I received the NWO VIDI grant for my research on hyperbolic computer vision. Many more hyperbolic papers on the horizon over the next few years! https://t.co/Ib72OrcbeY
ivi.uva.nl
The Dutch Research Council (NWO) has awarded VIDI grants to four researchers of the Informatics Institute (IvI): Iris Groen, Sara Magliacane, Pascal Mettes, and Vítor Vasconcelos
We are getting ready for the second edition of the Beyond Euclidean Workshop at @ICCVConference. We hope to see you there on Sunday!
The 2nd workshop on hyperbolic and hyperspherical learning will commence soon!
Finally the next @iclr_conf location is revealed... https://t.co/f0Xmq5OxSR
#ICLR2026 will be in Rio de Janeiro from 23 to 27 April!
Thank you @dimadamen for inviting me to Adriano's committee and for all the discussions on the latest and greatest in egovision! I had a blast.
Many thanks @PascalMettes @UvA_IvI for visiting us @BristolUni to examine (now Dr) Adriano Fragomeni (supervised by myself and @mwray0) and give a great talk on hyperbolic deep learning. Enjoyed your visit
Starting now! Leveraging context in prototypes for learning representations. Come find out more! 📍East Exhibition Hall A-B #E-2406
SSL, and contrastive learning (CL) in particular, relies more and more on prototypes: learnable representations that hold some group meaning. But! Overclustering, enforced equipartitions, and underrepresentation are still problems that plague prototypes. We solve them with our method: SOP.
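For readers unfamiliar with the prototypes mentioned above, here is a minimal sketch of the generic building block such methods share: assigning a feature vector to its nearest learnable prototype by cosine similarity. This is an illustrative sketch, not the SOP method itself, and all names and vectors are hypothetical.

```python
import math

def assign_to_prototype(z, prototypes):
    """Return the index of the prototype most similar to feature z.

    Prototype-based SSL methods cluster features by comparing them to a
    small set of learnable vectors; here the comparison is plain cosine
    similarity. (Generic sketch, not the SOP method from the tweet.)
    """
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    sims = [cosine(z, p) for p in prototypes]
    return max(range(len(sims)), key=sims.__getitem__)

prototypes = [[1.0, 0.0], [0.0, 1.0]]  # two toy prototypes
print(assign_to_prototype([0.9, 0.1], prototypes))  # prints 0
```

The failure modes the tweet lists (overclustering, forced equipartitions) arise from how the prototype set is sized and how assignments are balanced during training, not from the assignment step itself.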
Want the best possible embeddings in hyperbolic space while still maintaining GPU compatibility? Check out our #ICML2025 poster! We show how to minimize distortion and get more precision with hyperbolic floating point expansions. #1 method for embedding any tree-like structure.
Excited to be in Vancouver for #ICML2025 this week! I’m here to talk about our latest work “Low-distortion and GPU-compatible tree embeddings in hyperbolic space”. If you're interested in graph embeddings and hyperbolic geometry, come and check it out! More details below 👇
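For context, the quantity being made GPU-friendly in this work is the hyperbolic distance in the Poincaré ball; a minimal reference implementation of the textbook closed form is sketched below. This is not the paper's floating-point-expansion method, just the standard formula it builds on.

```python
import math

def poincare_distance(x, y):
    """Geodesic distance in the Poincare ball (curvature -1):
    d(x, y) = arcosh(1 + 2 * |x - y|^2 / ((1 - |x|^2) * (1 - |y|^2)))
    """
    sq = lambda v: sum(c * c for c in v)
    diff = [a - b for a, b in zip(x, y)]
    return math.acosh(1.0 + 2.0 * sq(diff) / ((1.0 - sq(x)) * (1.0 - sq(y))))

# Distances grow rapidly toward the boundary: this exponential growth of
# volume is what lets tree-like structures embed with low distortion, and
# it is also why floating-point precision becomes a problem near the edge.
print(poincare_distance([0.0, 0.0], [0.99, 0.0]))  # ~5.29, vs. 0.99 Euclidean
```

The denominator terms (1 - |x|^2) vanish as points approach the boundary, which is exactly where naive float32 arithmetic loses the precision the paper's expansions recover.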
CVPR26: USA
CVPR27: USA
CVPR28: Change of location! Still USA
CVPR29: USA
CVPR30: USA
CVPR31: USA
Ugh.
Important update: the submission deadline for the hyperbolic and hyperspherical learning workshop #ICCV2025 has been extended to June 27th! If you have any new papers or recently accepted papers in this field, share them with us!
Paper submission deadline for our @ICCVConference workshop has been extended to 27th of June! Plenty of time to work on a paper submission off the back of @CVPRConf @CVPR @TheBMVA @eccvconf @PascalMettes @adn_twitts @IndroSpinelli
I’ll be at my first @CVPR this week presenting our #Highlight paper on (hyperbolic) safety-awareness in VLMs along with my co-author @tobiapoppi! Friends on twitter, DM me if you are going too and let’s catch up to discuss research and other banter 🥳 #CVPR2025
We are back at #ICCV2025 for the second workshop on Hyperbolic and Hyperspherical Learning for Computer Vision. We will have 2 tracks: one for new research (to be published in proceedings) and one for recently published works. Deadline: June 1st. Link:
sites.google.com
Important Dates: Submission Portal Opens: 29th of April 2025 · Submission Deadline: 27th of June 2025 (AOE) · Preliminary Author Notification Deadline: 10th of July 2025 · Camera-ready deadline (Proceedi...
Today I will give a keynote on "Hyperbolic Visual Understanding" at the Non-Euclidean Foundation Models and Geometric Learning (NEGEL) workshop at the Web Conference in Sydney. If you are at #WWW2025 and want to discuss hyperbolic learning, I'm around all week!
#ICLR2025 was a blast, great to talk with all of you about our works on hyperbolic vision-language models, a better way to do object detection, and brain-aligned image generation! Next destination: the Web Conference in Australia, happening now
After a very successful 1st edition at @eccvconf our Beyond Euclidean workshop is back, now at @ICCVConference in October! We will update our webpage ( https://t.co/iG9GlqLxAH) with information on our keynote speakers and open a call for full and short papers in the coming days!
sites.google.com
Bringing together researchers to uncover the principles of non-Euclidean representations. Within deep learning, Euclidean geometry is the default basis for deep neural networks, yet the naive...
We will be back at #CVPR2025 with another piece of evidence for "All Vision-Language Models should be Hyperbolic". This time, we show how hyperbolic CLIP makes safety awareness possible! Check out the original post below!
Want to improve content safety and NSFW detection in CLIP? Hyperbolic geometry can make this possible. Check out our #CVPR2025 paper, Hyperbolic Safety-Aware Vision-Language Models. With: @TobiaP93332, @PascalMettes, @lorenzo_baraldi, @ricucch #ELLISforEurope #Ellis_Amsterdam
Hyperbolic Safety-Aware Vision-Language Models https://t.co/oMgFriHG9K Important problem. Nice idea. Cool embedding visualizations.
arxiv.org
Addressing the retrieval of unsafe content from vision-language models such as CLIP is an important step towards real-world integration. Current efforts have relied on unlearning techniques that...
Hyperbolic embeddings not only allow for asymmetric modeling between vision and language, but also open the door to modeling all sorts of compositions. This paper shows how image-box compositions form cool new hierarchies and strong vision-language models!
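To make the "asymmetric modeling" point concrete: a common convention in hyperbolic embedding work places general concepts near the origin of the Poincaré ball and specific ones near the boundary, so the embedding norm itself induces a direction-dependent relation. The sketch below is illustrative only; the vectors are hypothetical and this is not the specific model from the paper above.

```python
import math

def is_more_general(x, y):
    """Illustrative asymmetric relation in the Poincare ball: a point
    nearer the origin is treated as the more general concept.
    (A common convention in hyperbolic embedding work, not the
    specific model from the paper above.)"""
    norm = lambda v: math.sqrt(sum(c * c for c in v))
    return norm(x) < norm(y)

animal = [0.1, 0.0]  # generic concept, near the origin (hypothetical)
dog = [0.7, 0.1]     # specific concept, near the boundary (hypothetical)
print(is_more_general(animal, dog))  # True
print(is_more_general(dog, animal))  # False: unlike a plain distance,
                                     # the relation is direction-dependent
```

A symmetric distance cannot express "dog is a kind of animal" without also asserting the reverse; norm ordering is one simple way hyperbolic models break that symmetry.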