Ivan Titov
@iatitov
Professor of Natural Language Processing at Uni Edinburgh / Uni Amsterdam
Edinburgh, Scotland · Joined September 2016
Followers: 7K · Following: 2K · Media: 46 · Statuses: 927
            
More info & how to apply (deadline 7 Jan 2026): https://t.co/m4wMS7pGCg My colleagues and I at U Edinburgh will be accepting PhD students through this program; happy to answer questions if you’re considering applying.
          
                
We at @EdinburghUni are looking for new PhD students to join us through the Centre for Doctoral Training in Responsible NLP. Work with us on making AI systems more responsible, trustworthy, and safe. @EdinburghNLP
          
          
                
I’m recruiting PhD students for 2026! If you are interested in robustness, training dynamics, interpretability for scientific understanding, or the science of LLM analysis, you should apply. BU is building a huge LLM analysis/interp group and you’ll be joining on the ground floor.
           Life update: I'm starting as faculty at Boston University in 2026! BU has SCHEMES for LM interpretability & analysis, so I couldn't be more pumped to join a burgeoning supergroup w/ @najoungkim @amuuueller. Looking for my first students, so apply and reach out! 
            
                
             📢 Only 20 days to go until BlackboxNLP 25! Excited to announce our two invited speakers: @QuanshiZhang and @vernadankers. Join us on Nov 9th at @emnlpmeeting to hear their talks! 
          
                
             The @IVADO_Qc workshop, entitled “Assessing and Improving the #Capabilities and #Safety of #Agents” has just come to a close, following on from our Bootcamp last August. Some twenty speakers from around the world gathered for four days at @HEC_Montreal. 
          
                
             What do you consider private? We’re creating a benchmark for privacy-aware human-AI collaboration - your 5-minute input will help shape it. 
           🚨 Before Sam puts personalized ads in your AI chats… Take our 5 min survey & discover what LLMs actually know about you! 🤖💡 Your responses will help build better AI privacy safeguards. 
          
                
             Multimodal models typically need millions of examples from each modality paired with text for training. With SEMI 🌓, we integrate new low-resource modalities into LLMs with as few as 32 samples — including satellite images, galaxies, sensors, and molecules. (1/6) 
          
                
             Proud to accept a 5y outstanding paper award @IJCAIconf 🏆 from JAIR for the impact Compositionality Decomposed has had, on behalf of the team w/ @_dieuwke_, @eliabruni & Mathijs Mul! 🧡 Come to room 513 on Wed@11.30 to learn about rethinking compgen evaluation in the LLM era 🤖 
           Congratulations to the winners of the 2025 IJCAI–JAIR Prize for their paper “Compositionality Decomposed: How Do Neural Networks Generalise?” — Dieuwke Hupkes, Verna Dankers, Mathijs Mul, and Elia Bruni!     https://t.co/n9SHuRis17    #IJCAI2025
            
            
                
             Many thanks to the @ActInterp organisers for highlighting our work - and congratulations to Pedro, Alex and the other awardees! Sad not to have been there in person, it looked like a fantastic workshop. @AmsterdamNLP @EdinburghNLP
          
           Big congrats to Alex McKenzie, Pedro Ferreira, and their collaborators on receiving Outstanding Paper Awards!👏👏 and thanks for the fantastic oral presentations! Check out the papers here 👇 
            
                
             🚀 Introducing Prefix-RFT to blend SFT and RFT! SFT can learn more complex problems by mimicking, but can have poor generalization. RFT has better overall performance but is limited by the initial policy. Our method, Prefix-RFT, makes the best of both worlds! 
          
                
             Had a fantastic time hosting @Lavine_Lai at @EdinburghNLP! The visit led to an elegant light-PEFT method: from just a few examples, it learns sparse, targeted interventions — simple, robust, and easy to use. 
Still fine-tuning LLMs 🔥? Forget LoRA; use JoLA! #icml2025 PEFT methods like LoRA often struggle in low-resource settings (100–1000 examples). Activation editing is lightweight, but what to edit, and how? @iatitov @AlexanderFraser @TU_Muenchen @EdinburghNLP
            
            
                
             🚨New paper alert!🚨 "Scalpel vs. Hammer: GRPO Amplifies Existing Capabilities, SFT Replaces Them" @ActInterp ICML'25 @deepseek_ai popularised RLVR and distillation for 'reasoning training'! But how do they differ under the hood? Details in 🧵: (1/8) 
          
                
             Finally made it to @icmlconf in gorgeous Vancouver! Presenting work at @ActInterp on Saturday (more on that soon 👀). If you're into interpretability/RL/AI Safety, I'd love to chat :) 
          
                
             Check out the full paper here:  https://t.co/RMAuMNkPry  credit to @ZeroyuHuang, @iatitov & other authors. 
          
            
             Congratulations Verna! This was one of the best theses I've ever read, I highly recommend checking out Verna's work on the tradeoffs between memorization and generalization in language models! 
           I miss Edinburgh and its wonderful people already!! Thanks to @tallinzen and @PontiEdoardo for inspiring discussions during the viva! I'm now exchanging Arthur's Seat for Mont Royal to join @sivareddyg's wonderful lab @Mila_Quebec 🤩 
          
                
             I miss Edinburgh and its wonderful people already!! Thanks to @tallinzen and @PontiEdoardo for inspiring discussions during the viva! I'm now exchanging Arthur's Seat for Mont Royal to join @sivareddyg's wonderful lab @Mila_Quebec 🤩 
           Huge congratulations to Dr. @vernadankers for passing her viva today! 🥳🎓 It's been an honour sharing the PhD journey with you. I wasn’t ready for the void your sudden departure left (in the office and in my life!). Your new colleagues are lucky to have you! 🥺🥰 @Edin_CDT_NLP
            
            
                
             Come and join us at @AmsterdamNLP! We have two open PhD positions in #NLProc with a focus on multilingual NLP and LLM alignment. Looking for students with an NLP/ML background and an interest in language and society. 
          
                
Two papers got accepted at #ICLR2025 and one at #NAACL2025! One for calibrating the RM bias: https://t.co/XsdtITdmgW Two for MoE: https://t.co/LKvWsRn3GB; https://t.co/Vb5TMTbARr Thanks to my great supervisors @iatitov @PontiEdoardo and my excellent co-author @Qiuzihanhan!
          
          
            
             Is sparsity the key to conditional computation, interpretability, long context/generation, and more in foundation models? Find out at my #NeurIPS2024 tutorial on Dynamic Sparsity in Machine Learning with @andre_t_martins! Followed by a panel with @sarahookr and @murefil 🧵 
          
                