Nicholas Tomlin
@NickATomlin
Followers: 1K · Following: 2K · Media: 68 · Statuses: 399
Incoming assistant professor at TTIC, current faculty fellow at NYU CDS, and previous PhD student at Berkeley. Natural language processing. He/him.
New York, NY
Joined November 2013
I'm incredibly excited to share that I'll be joining @TTIC_Connect as an assistant professor in Fall 2026! Until then, I'm wrapping up my PhD at Berkeley, and after that I'll be a faculty fellow at @NYUDataScience
Do AI agents ask good questions? We built “Collaborative Battleship” to find out—and discovered that weaker LMs + Bayesian inference can beat GPT-5 at 1% of the cost. Paper, code & demos: https://t.co/lV76HRKR3d Here's what we learned about building rational information-seeking
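The tweet's claim (weaker LMs plus Bayesian inference can out-question GPT-5) rests on scoring candidate questions by how much they shrink a posterior over hidden states. Below is a minimal sketch of that idea, not the paper's code: a uniform posterior over placements of one length-2 ship on a 1x5 board, with each "is this cell occupied?" question scored by expected information gain. The board size, ship length, and question space are all simplifying assumptions.

```python
import math

def binary_entropy(p: float) -> float:
    """Entropy in bits of a yes/no answer with P(yes) = p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def expected_info_gain(hypotheses, cell) -> float:
    """EIG of asking "is `cell` occupied?" under a uniform posterior.
    The answer is deterministic given a hypothesis, so expected
    information gain reduces to the entropy of the answer itself."""
    p_yes = sum(1 for h in hypotheses if cell in h) / len(hypotheses)
    return binary_entropy(p_yes)

# Toy setup: one length-2 ship hidden on a 1x5 board; each hypothesis
# is a set of occupied cells. A real Battleship posterior would also
# condition on past answers by filtering inconsistent hypotheses.
hypotheses = [frozenset({i, i + 1}) for i in range(4)]
scores = {cell: expected_info_gain(hypotheses, cell) for cell in range(5)}
best_cell = max(scores, key=scores.get)
print(best_cell, round(scores[best_cell], 3))  # → 1 1.0
```

Interior cells are occupied under half the hypotheses, so asking about them yields a full bit of information, while edge cells yield less; a weak LM only needs to propose candidate questions, since the Bayesian machinery does the ranking.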
NYU is recruiting faculty fellows! Happy to chat with anyone who is considering this as an option:
We're also located in the wonderful city of Chicago, which has a surprisingly low cost of living given all it offers
In many ways, I think TTIC is the ideal form of an academic research group. A more manageable balance of teaching and research, a tight-knit group of faculty who are actively engaged in their work, and excellent PhD students
TTIC is hiring for both tenure-track and research assistant professor positions:
TTIC is hiring Research Assistant Professors in #ML, #Robotics, #Algorithms & more! 3-year, fully funded role — no teaching required, full research freedom, strong mentorship & collaborations. Join a top #CS research community in Chicago. Apply by Dec 1: https://t.co/LIXnlkH597
The Applied AI group at UChicago Booth is hiring this year! This was the only non-CS faculty position I applied to during my search, and it turned out to be an incredible fit: more compute than most CS depts I interviewed with, tons of research freedom, and (best of all!) no
Prediction is central in human language comprehension. LLMs are trained to predict the next word. Match made in heaven? Turns out the better mainstream LLMs become, the less useful they are as cognitive models. @byungdoh and I wrote a position paper on why that is, and how to
I’m hiring PhD students for 2026 @TTIC_Connect. More details here:
One of my takeaways from #COLM2025 was that people are thinking a lot about user simulation (I've been thinking about this myself in the context of tutoring!). Really exciting to see this work on the topic 🤩
Simulating user–AI conversations helps us understand how LMs work in multi-turn settings. Prompting LMs like GPT-4o to simulate users is common, but their assistant nature makes it hard to replicate user behavior. We introduce User LMs - trained to be users, not assistants.
COLM is the best conference I have attended throughout the entirety of my PhD; really looking forward to future iterations
I'll be at COLM in Montreal next week! 🇨🇦 Currently thinking about: scalable environments for RL, user sims, more data-efficient learning, and recruiting PhD students for my new group at TTIC
We wrote a blogpost outlining some open questions for building more useful + human-like user simulators. Also: if you're applying to PhDs this cycle and interested in working on topics like these, or reasoning, interaction, and AI capabilities more broadly, please reach out!
What does it take to build a human-like user simulator? // To train collaborative agents, we need better user sims. In blog post pt 2, @NickATomlin and I sketch a framework for building user simulators + open questions for research: https://t.co/FD0dRt22lR
🔍 How do we teach an LLM to 𝘮𝘢𝘴𝘵𝘦𝘳 a body of knowledge? In new work with @AIatMeta, we propose Active Reading 📙: a way for models to teach themselves new things by self-studying their training data. Results: * 𝟔𝟔% on SimpleQA w/ an 8B model by studying the wikipedia
📢I'm joining NYU (Courant CS + Center for Data Science) starting this fall! I’m excited to connect with new NYU colleagues and keep working on LLM reasoning, reliability, coding, creativity, and more! I’m also looking to build connections in the NYC area more broadly. Please
For many interactive tasks, we might want to train LLMs in conjunction with simulated users. In this blogpost, we discuss some of the challenges with this approach:
User simulators bridge RL with real-world interaction // https://t.co/bsrYxVHuVo How do we get the RL paradigm to work on tasks beyond math & code? Instead of designing datasets, RL requires designing environments. Given that most non-trivial real-world tasks involve
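The point about RL requiring environment design rather than dataset design can be made concrete with a small sketch. The interface, names, and reward shape below are my assumptions, not anything from the blogpost: a Gym-style episode loop in which a user simulator produces each user turn and a judge scores the finished conversation.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

Turn = Tuple[str, str]  # (speaker, message)

@dataclass
class UserSimEnv:
    """Gym-style multi-turn environment in which an assistant policy is
    trained against a simulated user instead of a static dataset.
    `user_sim` and `judge` are stand-ins for learned components."""
    user_sim: Callable[[List[Turn]], str]  # history -> next user message
    judge: Callable[[List[Turn]], float]   # history -> task-success reward
    max_turns: int = 3                     # assistant turns per episode
    history: List[Turn] = field(default_factory=list)

    def reset(self, task: str) -> List[Turn]:
        self.history = [("user", self.user_sim([("task", task)]))]
        return self.history

    def step(self, assistant_msg: str):
        self.history.append(("assistant", assistant_msg))
        n_assistant = sum(1 for s, _ in self.history if s == "assistant")
        done = n_assistant >= self.max_turns
        reward = self.judge(self.history) if done else 0.0
        if not done:
            self.history.append(("user", self.user_sim(self.history)))
        return self.history, reward, done

# Toy stubs: a scripted "user" and a judge that checks the final reply.
env = UserSimEnv(
    user_sim=lambda h: "Can you book me a table for two?",
    judge=lambda h: 1.0 if "booked" in h[-1][1] else 0.0,
    max_turns=1,
)
env.reset("restaurant booking")
_, reward, done = env.step("Done, I've booked a table for two.")
print(reward, done)  # → 1.0 True
```

The hard part the blogpost gestures at is entirely inside `user_sim` and `judge`: a scripted user is trivial to exploit, so reward quality is bounded by how faithful the simulator is.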
We’re proud to announce three new tenure-track assistant professors joining TTIC in Fall 2026: Yossi Gandelsman (@YGandelsman), Will Merrill (@lambdaviking), and Nick Tomlin (@NickATomlin). Meet them here: https://t.co/8UzLmuytNe
New paper by Andre He: Rewarding the Unlikely: Lifting GRPO Beyond Distribution Sharpening https://t.co/R3oKGqWwOw Tired of sharpening the distribution? Try unlikeliness reward to learn new things from the roads less traveled
Revoking visas to Chinese PhD students is economically shortsighted and inhumane. Most Chinese PhD students stay in the U.S. after graduation (first image, stats from 2022). They're staying and building technology in the U.S., not taking it to China. Immigrant students create