Ivan Titov

@iatitov

Followers: 7K
Following: 2K
Media: 46
Statuses: 927

Professor of Natural Language Processing at Uni Edinburgh / Uni Amsterdam

Edinburgh, Scotland
Joined September 2016
@iatitov
Ivan Titov
11 days
More info & how to apply (deadline 7 Jan 2026): https://t.co/m4wMS7pGCg My colleagues and I at U Edinburgh will be accepting PhD students through this program; happy to answer questions if you’re considering applying.
0
1
4
@iatitov
Ivan Titov
11 days
We at @EdinburghUni are looking for new PhD students to join us through the Centre for Doctoral Training in Responsible NLP. Work with us on making AI systems more responsible, trustworthy and safe @EdinburghNLP
2
8
31
@nsaphra
Naomi Saphra
19 days
I’m recruiting PhD students for 2026! If you are interested in robustness, training dynamics, interpretability for scientific understanding, or the science of LLM analysis you should apply. BU is building a huge LLM analysis/interp group and you’ll be joining at the ground floor.
@nsaphra
Naomi Saphra
7 months
Life update: I'm starting as faculty at Boston University in 2026! BU has SCHEMES for LM interpretability & analysis, so I couldn't be more pumped to join a burgeoning supergroup w/ @najoungkim @amuuueller. Looking for my first students, so apply and reach out!
18
125
662
@BlackboxNLP
BlackboxNLP
16 days
📢 Only 20 days to go until BlackboxNLP 25! Excited to announce our two invited speakers: @QuanshiZhang and @vernadankers. Join us on Nov 9th at @emnlpmeeting to hear their talks!
0
9
29
@IVADO_Qc
IVADO
27 days
The @IVADO_Qc workshop, entitled “Assessing and Improving the #Capabilities and #Safety of #Agents”, has just come to a close, following on from our Bootcamp last August. Some twenty speakers from around the world gathered for four days at @HEC_Montreal.
1
3
5
@iatitov
Ivan Titov
1 month
What do you consider private? We’re creating a benchmark for privacy-aware human-AI collaboration - your 5-minute input will help shape it.
@Guillemram
Guillem Ramírez
1 month
🚨 Before Sam puts personalized ads in your AI chats… Take our 5 min survey & discover what LLMs actually know about you! 🤖💡 Your responses will help build better AI privacy safeguards.
0
1
5
@ospanbatyr
Osman Batur İnce
2 months
Multimodal models typically need millions of examples from each modality paired with text for training. With SEMI 🌓, we integrate new low-resource modalities into LLMs with as few as 32 samples — including satellite images, galaxies, sensors, and molecules. (1/6)
3
40
213
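As a rough illustration of the idea in the thread (a frozen modality encoder and a frozen LLM bridged by a small trainable adapter fit on a handful of paired samples), here is a minimal PyTorch sketch. The class name, dimensions, and training setup are illustrative assumptions, not SEMI's actual implementation.

```python
import torch
import torch.nn as nn

class ModalityProjector(nn.Module):
    """Maps features from a frozen modality encoder into the LLM's embedding space.

    Illustrative sketch only; SEMI's actual architecture and training recipe
    are described in the paper, not reproduced here.
    """
    def __init__(self, enc_dim: int, llm_dim: int, n_tokens: int = 8):
        super().__init__()
        self.n_tokens = n_tokens
        self.proj = nn.Linear(enc_dim, llm_dim * n_tokens)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, enc_dim) -> soft prompt of shape (batch, n_tokens, llm_dim)
        batch = features.shape[0]
        return self.proj(features).view(batch, self.n_tokens, -1)

# Only the projector would be trained on the few paired examples (e.g. ~32);
# the modality encoder and the LLM stay frozen.
projector = ModalityProjector(enc_dim=512, llm_dim=4096)
optimizer = torch.optim.AdamW(projector.parameters(), lr=1e-4)
```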
@vernadankers
Verna Dankers
3 months
Proud to accept a 5y outstanding paper award @IJCAIconf 🏆 from JAIR for the impact Compositionality Decomposed has had, on behalf of the team w/ @_dieuwke_, @eliabruni & Mathijs Mul! 🧡 Come to room 513 on Wed@11.30 to learn about rethinking compgen evaluation in the LLM era 🤖
@IJCAIconf
IJCAIconf
3 months
Congratulations to the winners of the 2025 IJCAI–JAIR Prize for their paper “Compositionality Decomposed: How Do Neural Networks Generalise?” — Dieuwke Hupkes, Verna Dankers, Mathijs Mul, and Elia Bruni! https://t.co/n9SHuRis17 #IJCAI2025
10
8
73
@iatitov
Ivan Titov
4 months
Many thanks to the @ActInterp organisers for highlighting our work - and congratulations to Pedro, Alex and the other awardees! Sad not to have been there in person; it looked like a fantastic workshop. @AmsterdamNLP @EdinburghNLP
@ActInterp
Actionable Interpretability Workshop ICML2025
4 months
Big congrats to Alex McKenzie, Pedro Ferreira, and their collaborators on receiving Outstanding Paper Awards!👏👏 and thanks for the fantastic oral presentations! Check out the papers here 👇
0
3
28
@ZeroyuHuang
Zeyu Huang
4 months
🚀 Introducing Prefix-RFT to blend SFT and RFT! SFT can learn more complex problems by mimicking demonstrations, but it can generalize poorly. RFT has better overall performance but is limited by the initial policy. Our method, Prefix-RFT, makes the best of both worlds!
6
45
184
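For readers skimming the thread, here is one plausible reading of what "blending SFT and RFT" could look like: condition a rollout on a prefix of an expert demonstration, let the policy complete it, and score the result with the task reward. The function below is a hypothetical sketch; `policy.generate` and `reward_fn` are assumed interfaces, not the paper's code.

```python
import random

def prefix_rft_rollout(policy, demo_tokens, prompt, reward_fn, max_ratio=0.5):
    """Hypothetical sketch of blending demonstration prefixes with reward fine-tuning.

    `policy.generate` and `reward_fn` are assumed interfaces for illustration;
    the actual Prefix-RFT algorithm is specified in the paper.
    """
    # Keep a random-length prefix of the expert demonstration (0 tokens = pure RFT).
    cut = random.randint(0, int(len(demo_tokens) * max_ratio))
    prefix = demo_tokens[:cut]

    # The policy only generates the continuation; the prefix acts as SFT-style guidance.
    completion = policy.generate(prompt + prefix)

    # The reward scores the full answer; only the generated continuation is optimised.
    reward = reward_fn(prompt, prefix + completion)
    return prefix, completion, reward
```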
@iatitov
Ivan Titov
4 months
Had a fantastic time hosting @Lavine_Lai at @EdinburghNLP! The visit led to an elegant, lightweight PEFT method: from just a few examples, it learns sparse, targeted interventions — simple, robust, and easy to use.
@Lavine_Lai
Wen Lai
4 months
Still fine-tuning LLMs 🔥? Forget LoRA, use JoLA! #icml2025 PEFT methods like LoRA often struggle in low-resource settings (100–1000 examples). Activation editing is lightweight, but what to edit, and how? @iatitov @AlexanderFraser @TU_Muenchen @EdinburghNLP
0
0
10
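To make "sparse, targeted interventions" concrete, here is a minimal sketch of gated activation editing: a tiny per-layer module that learns an additive/multiplicative edit to hidden states plus a gate that can switch the edit off. The parameterisation below is an illustrative assumption, not JoLA's actual formulation.

```python
import torch
import torch.nn as nn

class GatedActivationEdit(nn.Module):
    """Learns a gated affine edit of a layer's hidden states.

    Minimal sketch in the spirit of activation editing; JoLA's exact
    parameterisation and gating scheme are defined in the paper.
    """
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.offset = nn.Parameter(torch.zeros(hidden_dim))   # additive edit
        self.scale = nn.Parameter(torch.ones(hidden_dim))     # multiplicative edit
        self.gate_logit = nn.Parameter(torch.tensor(0.0))     # learn whether to intervene

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        gate = torch.sigmoid(self.gate_logit)
        edited = self.scale * hidden + self.offset
        # Interpolate between original and edited activations. With the base model
        # frozen, only these few parameters are trained, which is one reason a
        # handful of examples can suffice.
        return (1 - gate) * hidden + gate * edited
```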
@NeelRajani_
Neel Rajani
4 months
🚨New paper alert!🚨 "Scalpel vs. Hammer: GRPO Amplifies Existing Capabilities, SFT Replaces Them" @ActInterp ICML'25 @deepseek_ai popularised RLVR and distillation for 'reasoning training'! But how do they differ under the hood? Details in 🧵: (1/8)
2
22
45
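To ground the contrast the thread draws, here is a toy comparison (my own illustrative sketch, not the paper's code): SFT minimises cross-entropy against a fixed reference answer, while a GRPO-style objective normalises rewards over a group of the model's own samples, so it reweights behaviours the model can already produce.

```python
import torch
import torch.nn.functional as F

def sft_loss(logits: torch.Tensor, target_ids: torch.Tensor) -> torch.Tensor:
    """SFT: push the model toward a fixed reference answer (teacher forcing)."""
    return F.cross_entropy(logits.view(-1, logits.size(-1)), target_ids.view(-1))

def grpo_advantages(rewards: torch.Tensor) -> torch.Tensor:
    """GRPO-style advantages: rewards for a group of the model's own samples,
    normalised within the group. Correct samples the model already produces get
    pushed up - one intuition for 'amplifying existing capabilities'."""
    return (rewards - rewards.mean()) / (rewards.std() + 1e-8)

# Example: four sampled completions for one prompt, scored 1/0 by a verifier.
rewards = torch.tensor([1.0, 0.0, 1.0, 0.0])
print(grpo_advantages(rewards))  # positive for correct samples, negative otherwise
```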
@NeelRajani_
Neel Rajani
4 months
Finally made it to @icmlconf in gorgeous Vancouver! Presenting work at @ActInterp on Saturday (more on that soon 👀). If you're into interpretability/RL/AI Safety, I'd love to chat :)
0
3
52
@tallinzen
Tal Linzen
4 months
Congratulations Verna! This was one of the best theses I've ever read; I highly recommend checking out Verna's work on the tradeoffs between memorization and generalization in language models!
@vernadankers
Verna Dankers
4 months
I miss Edinburgh and its wonderful people already!! Thanks to @tallinzen and @PontiEdoardo for inspiring discussions during the viva! I'm now exchanging Arthur's Seat for Mont Royal to join @sivareddyg's wonderful lab @Mila_Quebec 🤩
2
3
33
@vernadankers
Verna Dankers
4 months
I miss Edinburgh and its wonderful people already!! Thanks to @tallinzen and @PontiEdoardo for inspiring discussions during the viva! I'm now exchanging Arthur's Seat for Mont Royal to join @sivareddyg's wonderful lab @Mila_Quebec 🤩
@agostina_cal
Agostina Calabrese 🦋
4 months
Huge congratulations to Dr. @vernadankers for passing her viva today! 🥳🎓 It's been an honour sharing the PhD journey with you. I wasn’t ready for the void your sudden departure left (in the office and in my life!). Your new colleagues are lucky to have you! 🥺🥰 @Edin_CDT_NLP
11
10
100
@KatiaShutova
Katia Shutova
7 months
Come and join us at @AmsterdamNLP! We have two open PhD positions in #NLProc with a focus on multilingual NLP and LLM alignment. Looking for students with an NLP/ML background and an interest in language and society.
1
12
34
@PontiEdoardo
Edoardo Ponti
11 months
Is sparsity the key to conditional computation, interpretability, long context/generation, and more in foundation models? Find out at my #NeurIPS2024 tutorial on Dynamic Sparsity in Machine Learning with @andre_t_martins! Followed by a panel with @sarahookr and @murefil 🧵
2
26
87