Scott Niekum
@scottniekum
Followers
4K
Following
928
Media
7
Statuses
577
Associate professor at UMass Amherst CICS. Alignment, safety, reinforcement learning, imitation learning, and robotics.
Joined February 2019
We have been overlooking a key factor in LLM-as-a-judge evaluation: the feedback collection protocol. Our #COLM2025 paper presents a comprehensive study on how feedback protocols shape reliability and bias in LLM evaluations.
2
6
28
RLZero will be presented at @NeurIPSConf 2025. Learn more about the work in the thread below:
🤖 Introducing RL Zero 🤖: a new approach to transform language into behavior zero-shot for embodied agents without labeled datasets! RL Zero enables prompt-to-policy generation, and we believe this unlocks new capabilities in scaling up language-conditioned RL, providing an
4
7
55
Behavioral Foundation Models (BFMs) trained with RL are secretly more powerful than we think. BFMs directly output a policy believed to be near-optimal given any reward function. Our new work shows that they can actually do much better:
3
47
353
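For readers unfamiliar with the zero-shot setup the tweet above refers to: a minimal numpy sketch of the general "policy from a reward, with no further training" idea, using successor features. The feature map, the pretend pretrained representations, and the linear-reward regression are all illustrative assumptions for this sketch, not the paper's actual models or setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pretrained pieces: state features phi(s) and, for each action,
# successor features psi(s, a) ~ E[ sum_t gamma^t phi(s_t) ] under some policy family.
n_states, n_actions, d = 10, 4, 3
phi = rng.normal(size=(n_states, d))             # state features (stand-in)
psi = rng.normal(size=(n_states, n_actions, d))  # pretend successor features (stand-in)

# A new task is specified by rewards on a few labeled states; regress r ~ phi @ w.
labeled_states = np.array([0, 3, 7])
labeled_rewards = np.array([1.0, 0.0, -1.0])
w, *_ = np.linalg.lstsq(phi[labeled_states], labeled_rewards, rcond=None)

# "Zero-shot" policy for this reward: act greedily w.r.t. Q(s, a) = psi(s, a) . w,
# with no additional environment interaction or training.
Q = psi @ w               # shape (n_states, n_actions)
policy = Q.argmax(axis=1)
print(policy)
```

The point of the sketch is only the interface: once the representations are pretrained, a new reward function maps directly to a policy.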
Reminder that early registration for RLC closes on the 30th! Please register early to save yourself some money and help us get the word out.
1
3
18
I'm extremely proud of the work that Harshit has done and looking forward to seeing what he does next. Congratulations, Harshit!
Successfully defended my Ph.D. today 🎓🥳! @scottniekum and @yayitsamyzhang are the best advisors I could have ever asked for. A big thanks to my committee members @marcgbellemare @yukez @PeterStone_TX . The full presentation video will be uploaded soon... Excited about what's
1
0
34
1.5 yrs ago, we set out to answer a seemingly simple question: what are we *actually* getting out of RL in fine-tuning? I'm thrilled to share a pearl we found on the deepest dive of my PhD: the value of RL in RLHF seems to come from *generation-verification gaps*. Get ready to🤿!
24
235
2K
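As a toy illustration of the generation-verification gap idea mentioned above (my own stand-in example, not the paper's experiments): when verifying a candidate is much easier than generating a good one, even a weak generator plus a strong verifier can recover good outputs, e.g. via best-of-n selection.

```python
import random

random.seed(0)
TARGET = 42  # stand-in for "the correct answer"

def generate():
    # Weak generator: rarely correct on its own.
    return random.randint(0, 100)

def verify(candidate):
    # Strong verifier: scoring a candidate is much easier than producing one.
    return -abs(candidate - TARGET)

def best_of_n(n):
    # Exploit the gap: sample many generations, keep the verifier's favorite.
    return max((generate() for _ in range(n)), key=verify)

print("single sample:", generate())
print("best of 32:  ", best_of_n(32))
```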
It’s about time! 🎉🎉🎉🎉🎉🎉🎉
nytimes.com
Andrew Barto and Richard Sutton developed reinforcement learning, a technique vital to chatbots like ChatGPT.
0
2
16
Our new paper proposes a novel method for model alignment: designing user interfaces to guide humans to conform more to the assumptions made by the algorithms that learn from their feedback. And it works! #AI #MachineLearning #RLHF #Alignment (1/n)
1
1
6
Huge congrats to @prasann_singhal for being one of the 8 CRA Outstanding Undergraduate Researcher Award winners! It has been an absolute privilege to work with Prasann during his time at UT. (And he's applying for PhD programs this year...hint hint...) Prasann's work... 🧵
4
14
101
I'm quite excited about this and still a bit shocked that it works as well as it does. Imitation via distribution matching has always felt like a clunky, brittle way to teach agents. Language + zero-shot RL is natural and scales well, due to the unsupervised nature of RL Zero.
🤖 Introducing RL Zero 🤖: a new approach to transform language into behavior zero-shot for embodied agents without labeled datasets! RL Zero enables prompt-to-policy generation, and we believe this unlocks new capabilities in scaling up language-conditioned RL, providing an
0
3
29
The call for papers for RLC is now up! Abstract deadline of 2/14, submission deadline of 2/21! Please help us spread the word. https://t.co/sNld9g9RDO
0
29
90
We're hiring new #nlproc faculty this year! Asst or Assoc Professors in NLP at UMass CICS -- https://t.co/45Jch7T5yZ
0
3
9
Save the date! RLDM 2025, The Multi-disciplinary Conference on Reinforcement Learning and Decision Making, is just around the corner. Visit our website to keep an eye on our submission deadlines👀 https://t.co/8zbBy1uQQN
0
7
29
Come join our team at UMass Robotics!! We are hiring at the Associate/Full level for a joint appointment in engineering and computer science. Feel free to reach out if you have any questions. RTs appreciated :)
0
17
20
In multi-object environments, why do most Unsupervised Skill Discovery methods fail to learn complex skills like tool use? Because they simply maximize state coverage. Introducing our solution SkiLD: Skill Discovery Guided by Factor Interactions (NeurIPS24) https://t.co/buo3qSdI1O
1
12
64
In our new paper, we find that LLMs can efficiently do RLHF in-context! Our method, in-context preference learning (ICPL), iterates between having LLMs write reward functions, training agents on them, and putting preferences over the results back into context. We see a 30x boost in query efficiency over baseline RLHF!
2
25
225
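A rough sketch of the kind of loop the ICPL tweet above describes. All helpers here (propose_reward, train_agent, query_preference) are placeholder stubs standing in for an LLM call, an RL training run, and a human comparison; none of them are the paper's actual code or API.

```python
import random

def propose_reward(context):
    # Placeholder for an LLM prompted with the task and all prior feedback.
    return f"reward_v{len(context)}"

def train_agent(reward_code):
    # Placeholder for training a policy against the proposed reward function.
    return {"reward": reward_code, "score": random.random()}

def query_preference(candidate, incumbent):
    # Placeholder for a human preference between two rollouts.
    return "candidate preferred" if candidate["score"] > incumbent["score"] else "incumbent preferred"

def icpl(task, iterations=5):
    context = [f"Task: {task}"]
    best = {"reward": None, "score": float("-inf")}
    for _ in range(iterations):
        reward_code = propose_reward(context)         # LLM writes a reward function
        policy = train_agent(reward_code)             # an agent is trained on it
        feedback = query_preference(policy, best)     # preference over resulting behaviors
        context.append(f"{reward_code}: {feedback}")  # feedback goes back into context
        if policy["score"] > best["score"]:
            best = policy
    return best

print(icpl("make the humanoid walk forward"))
```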
For those interested, the keynotes of the @RL_Conference 2024 are now available online: https://t.co/PvFBsamvoI Unfortunately, Doina Precup's talk was not recorded, but we have: Andy Barto, @EmmaBrunskill, @FinaleDoshi, @svlevine, David Silver, and @PeterStone_TX.
youtube.com
Videos from RLC 2024 and onwards!
1
61
250
"Predicting Future Actions of Reinforcement Learning Agents" - Chung et al. We introduce the problem of predicting RL agents' behavior, which could have important safety implications. We find that RL agents that perform explicit (or implicit) planning can be more predictable.
1
1
5
Our cross-university collaborative work on "Scaling Laws for Reward Model Overoptimization in Direct Alignment Algorithms" is accepted at @NeurIPSConf!
After the LLaMa 3.1 release and ICML, I want to highlight our paper "Scaling Laws for Reward Model Overoptimization in Direct Alignment Algorithms". TL;DR we explore the dynamics of over-optimization in DPO/IPO/SLiC and find similar "reward hacking" issues as online RLHF.👇
0
5
21
This project started with us annoyed at papers evaluating CoT "reasoning" with only GSM8k & MATH. We didn't expect to find such strong evidence that these are the only types of problems where CoT helps! Credit to @juand_r_nlp & @kmahowald for driving the rigorous meta-analysis!
To CoT or not to CoT?🤔 300+ experiments with 14 LLMs & systematic meta-analysis of 100+ recent papers 🤯Direct answering is as good as CoT except for math and symbolic reasoning 🤯You don’t need CoT for 95% of MMLU! CoT mainly helps LLMs track and execute symbolic computation
6
32
164