
Valentino Maiorca (@ValeMaiorca)
395 Followers · 1K Following · 13 Media · 208 Statuses
@Apple MLR (Barcelona) intern | ELLIS Ph.D. student in representation learning @SapienzaRoma & @ISTAustria | Former NLP Engineer @babelscape
Rome, Lazio · Joined October 2020
✨ Meet #ResiDual, a novel perspective on the alignment of multimodal latent spaces! Think of it as a spectral "panning for gold" along the residual stream. It improves text-image alignment by simply amplifying task-related directions! 🌌🔍 [1/6]
RT @yingtian80536: 🧠 NEW PREPRINT: Many-Two-One: Diverse Representations Across Visual Pathways Emerge from a Single Objective. https://t.co…
biorxiv.org
How the human brain supports diverse behaviours has been debated for decades. The canonical view divides visual processing into distinct "what" and "where/how" streams – however, their origin and...
RT @RSTLessGroup: RSTLess people at #ACL2025 presenting 2 papers + meeting with friends! @FlorinCuconasu presenting “The Distracting Effect…
RT @ClementineDomi6: 🎓 Thrilled to share I’ve officially defended my PhD! 🥳 At @GatsbyUCL, my research explored how prior knowledge shapes n…
RT @YungSungChuang: Scaling CLIP on English-only data is outdated now… 🌍 We built a CLIP data curation pipeline for 300+ languages. 🇬🇧 We train…
RT @agostina_cal: At #ACL2025NLP and on the job market (NLP + AI Safety) 💼 It's great to see growing interest in safety/alignment, but we…
RT @Dingling_Yao: 🚀 Got fresh ideas in causal discovery, inference, or reasoning for scientific problems? Share them at the @CauScien works…
sites.google.com
Call for Papers
RT @HCasademunt: Problem: Train LLM on insecure code → it becomes broadly misaligned. Solution: Add safety data? What if you can't? Use int…
RT @teelinsan: Uncertainty quantification (UQ) is key for safe, reliable LLMs, but are we evaluating it correctly? 🚨 Our ACL2025 paper f…
RT @unireps: 🔥 Mark your calendars for the next session of the @ELLISforEurope x UniReps Speaker Series! 🗓️ When: 31st July – 16:00 CEST…
RT @mihirp98: 🚨 The era of infinite internet data is ending, so we ask: 👉 What’s the right generative modelling objective when data—not co…
RT @lavoiems: 🧵 Everyone is chasing new diffusion models—but what about the representations they model from? We introduce Discrete Latent C…
RT @OwainEvans_UK: New paper & surprising result: LLMs transmit traits to other models via hidden signals in data. Datasets consisting only…
RT @neur_reps: Are you studying how structure shapes computation in the brain and in AI systems? 🧠 Come share your work in San Diego at Ne…
neurreps.org
Call for Papers
RT @unireps: Ready to present your latest work? The Call for Papers for #UniReps2025 @NeurIPSConf is open! 👉 Check the CFP:
RT @CauScien: 🚨 Reviewer Call! 🚨 Are you passionate about causality + science? Join us as a reviewer for the CausCien Workshop @ #NeurIPS20…
docs.google.com
Thank you for your interest in the CausCien Workshop! This year, we plan to have a short paper submission track (~4 pages). See the topics at https://sites.google.com/view/causcien. Reviewers will be...
RT @mbelitsky1: Introducing cache steering – a new method for implicit behavior steering in LLMs. Cache steering is a lightweight method fo…
RT @HThasarathan: 🌌🛰️🔭 Want to explore universal visual features? Check out our interactive demo of concepts learned from our #ICML2025 pape…
RT @MustafaShukor1: We propose new scaling laws that predict the optimal data mixture for pretraining LLMs, native multimodal models, and l…
RT @danbusbridge: Excited to be heading to Vancouver for #ICML2025 next week! I'll be giving a deep dive on Distillation Scaling Laws at t…