Greg Durrett
@gregd_nlp
Followers 8K · Following 4K · Media 89 · Statuses 1K
Associate professor at NYU (Courant CS + Center for Data Science) | advisor for @bespokelabsai | large language models and NLP | he/him
Joined December 2017
📣 Today we launched an overhauled NLP course to 600 students in the online MS programs at UT Austin. 98 YouTube videos 🎥 + readings 📖 open to all! https://t.co/y7sTe2Pb83 w/5 hours of new 🎥 on LLMs, RLHF, chain-of-thought, etc! Meme trailer 🎬 https://t.co/Okv5LPQEyE 🧵
Welcoming new faculty to CDS! This fall, we welcomed Greg Durrett (@gregd_nlp). In Fall 2026, we'll welcome Jaume Vives-i-Bastida (@jaumevivesb), Zongyi Li (@zongyili_nyu), Juan Carlos Perdomo Silva, and Pratyusha Sharma (@pratyusha_PS).
How can we make a better TerminalBench agent? Today, we are announcing the OpenThoughts-Agent project. OpenThoughts-Agent v1 is the first TerminalBench agent trained on fully open curated SFT and RL environments. OpenThinker-Agent-v1 is the strongest model of its size on
A couple years (!) in the making: we’re releasing a new corpus of embodied, collaborative problem-solving dialogues. We paid 36 people to play Portal 2’s co-op mode and collected their speech + game recordings. Paper: https://t.co/EHB4lbR7Ax Website: https://t.co/FK7tTFuQLt
Yejin describes this in her NeurIPS invited talk as "chemistry" between the base model and RL. SkillFactory lets you change that chemistry (like a catalyst, at the risk of stressing the metaphor...)
Check out SkillFactory! Priming LLMs with SFT before RL is pretty cheap and lets models learn cognitive skills from RL more effectively. And adding this inductive bias via SFT data is nicely compatible with the bitter lesson!
RL amplifies existing behaviors. Let’s prime models w/ good behaviors for better RL! Introducing SkillFactory: ✂️Rearrange model traces on a problem to demo verification + retry ⚙️SFT on those traces 🦾RL Result: Learn robust explicit verification + retry across domains 🧵
@LiyanTang4 @sebajoed ChartMuseum: https://t.co/e2LK74TsSB AstroVisBench: https://t.co/vCJgJQalEh Meet with me if you're interested!
I'm at NeurIPS until Friday! This morning, catch: @LiyanTang4 presenting ChartMuseum, testing if VLMs can do visual reasoning over charts @sebajoed presenting AstroVisBench, testing if coding LLMs can work with real astro data workflows & link in thread if you want to meet!
More details here: https://t.co/cbsO51Nad2 Interfolio link to apply coming soon! Feel free to email me in the meantime following the instructions there.
📢 Postdoc position 📢 I’m recruiting a postdoc for my lab at NYU! Topics include LM reasoning, creativity, limitations of scaling, AI for science, & more! Apply by Feb 1. (Different from NYU Faculty Fellows, which are also great but less connected to my lab.) Link in 🧵
Wanna do a postdoc at NYU? We have postings for Faculty Fellows at both @NYUDataScience and @NYU_Courant CS in the new School for Mathematics, Computing, and Data Science! Come work with the best in the biz!
Welcome to CDS, Associate Professor Greg Durrett (@gregd_nlp)! Prof Durrett joins us from UT Austin. His research focuses on how to train large language models to reason, reflect, and extrapolate — and how to get them to be more creative. https://t.co/kFRVlEUssk
✨ New course materials: Interpretability of LLMs✨ This semester I'm teaching an active-learning grad course at @TelAvivUni on LLM interpretability, co-developed with my student @dhgottesman. We're releasing the materials as we go, so they can serve as a resource for anyone
github.com
Course Materials for Interpretability of Large Language Models (0368.4264) at Tel Aviv University - mega002/llm-interp-tau
I'm recruiting my first group of PhD students at TTIC! If you're interested, please apply! If you know people who might be interested, please spread the word! Application deadline is Dec 9, 2025, and there is no application fee:
How might we guide AI to generate *interesting* mathematical theories? How do we capture the notion of "interestingness"? Happy to share our new work on learning interestingness in automated theory formation! 🧵
Responsible #AI in 2025 ( https://t.co/tcZ0p1PPPp): free 2-day online conference Nov 18 & 19 next week by @UTGoodSystems #TexasAI @UTAustin. Excited for panel below w/ @mariadearteaga, @gregd_nlp & @jessyjli. @OdenInstitute @CosmicAI_Inst @UTiSchool @TexasScience @UTexasResearch
I am recruiting PhD students at @NYU_Courant to conduct research in learning theory, algorithmic statistics, and trustworthy machine learning, starting Fall 2026. Please share widely! Deadline to apply is December 12, 2025.
look how happy they are. submit to COLM
COLM is going to San Francisco for 2026! 🗓️Dates: October 6-9, 2026 🏨Venue: Hilton San Francisco Union Square Website and CFPs for papers and workshops coming up soon!