
MIT NLP
@nlp_mit
4K Followers · 53 Following · 3 Media · 64 Statuses
NLP Group at @MIT_CSAIL! PIs: @yoonrkim @jacobandreas @lateinteraction @pliang279 @david_sontag, Jim Glass, @roger_p_levy
Cambridge, MA
Joined March 2025
Hello everyone! We are quite a bit late to the Twitter party, but welcome to the MIT NLP Group account! Follow along for the latest research from our labs as we dive deep into language, learning, and logic 🤖📚🧠
RT @pliang279: Since my undergraduate days at CMU, I've been participating in puzzlehunts: involving complex, multi-step puzzles, lacking w….
RT @anku__rani: ✨New work on mathematical reasoning and attribution is now on arXiv! When given charts and questions, multimodal LLMs gener….
RT @pliang279: A bit late, but finally got around to posting the recorded and edited lecture videos for the **How to AI (Almost) Anything**….
RT @ishapuri101: It seems GPT‑OSS is very prone to hallucinations … check out our RLCR paper to see how we trained reasoning models to know….
RT @YungSungChuang: Scaling CLIP on English-only data is outdated now…. 🌍 We built a CLIP data curation pipeline for 300+ languages. 🇬🇧 We train….
RT @ishapuri101: fun new paper training LLMs to analyze their own uncertainty and be more calibrated in their confidence!
arxiv.org
When language models (LMs) are trained via reinforcement learning (RL) to generate natural language "reasoning chains", their performance improves on a variety of difficult question answering...
Check out this new paper training LLMs to analyze their own uncertainty and be more calibrated! From @MehulDamani2 @ishapuri101 @StewartSlocum1 @IdanShenfeld and co!
🚨New Paper!🚨 We trained reasoning LLMs to reason about what they don't know. o1-style reasoning training improves accuracy but produces overconfident models that hallucinate more. Meet RLCR: a simple RL method that trains LLMs to reason and reflect on their uncertainty --
RT @mmtjandrasuwita: I'm currently in Vancouver for #ICML2025 this week and will present our work, "Understanding the Emergence of Multimod….
RT @seungwookh: Presenting our ICML spotlight poster today at 11am @ E-606 w/ @jyo_pari! We need to fundamentally change how we train to a….
RT @MonicaNAgrawal: Excited to be here at #ICML2025 to present our paper on 'pragmatic misalignment' in (deployed!) RAG systems: narrowly "….
RT @belindazli: I'll be presenting "(How) Do Language Models Track State" at ICML! Come by our poster tomorrow, Tuesday July 15 from 4:30pm….
RT @seungwookh: How do task vectors emerge during pretraining—and can they predict ICL performance? Come see our ICML spotlight poster "Em….
RT @seungwookh: At #ICML 🇨🇦 this week. I'm convinced that the core computations are shared across modalities (vision, text, audio, etc.). T….
RT @YungSungChuang: I will be in Vancouver🇨🇦 for #ICML2025 this week and present #SelfCite on Tuesday morning. Happy to chat and connect. S….
selfcite.github.io
Sentence-level, verifiable citations with zero human labels.
RT @AdamZweiger: Come check out our ICML poster on combining Test-Time Training and In-Context Learning for on-the-fly adaptation to novel….
RT @ziqiao_ma: 📣 Excited to announce SpaVLE: #NeurIPS2025 Workshop on Space in Vision, Language, and Embodied AI! 👉