Scott Niekum Profile
Scott Niekum

@scottniekum

Followers: 4K · Following: 928 · Media: 7 · Statuses: 577

Associate professor at UMass Amherst CICS. Alignment, safety, reinforcement learning, imitation learning, and robotics.

Joined February 2019
@tuhina_tripathi
Tuhina Tripathi
1 month
We have been overlooking a key factor in LLM-as-a-judge evaluation: the feedback collection protocol. Our #COLM2025 paper presents a comprehensive study on how feedback protocols shape reliability and bias in LLM evaluations.
2
6
28
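The protocols compared in such studies typically differ in how feedback is elicited from the judge. As a rough illustration only (the prompt wording and helper names below are hypothetical, not taken from the paper), here are two common feedback-collection protocols, absolute rating versus pairwise preference:

```python
# Illustrative only: two common feedback-collection protocols for an LLM judge.
# The prompt wording is a generic example, not the paper's actual templates.

def absolute_rating_prompt(question: str, answer: str) -> str:
    """Ask the judge to score a single answer on a fixed scale."""
    return (
        f"Question: {question}\n"
        f"Answer: {answer}\n"
        "Rate the answer's quality from 1 (poor) to 10 (excellent). "
        "Reply with a single integer."
    )

def pairwise_preference_prompt(question: str, answer_a: str, answer_b: str) -> str:
    """Ask the judge to choose between two candidate answers."""
    return (
        f"Question: {question}\n"
        f"Answer A: {answer_a}\n"
        f"Answer B: {answer_b}\n"
        "Which answer is better? Reply with exactly 'A' or 'B'."
    )
```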
@harshit_sikchi
Harshit Sikchi (will be at NeurIPS 25)
2 months
RLZero will be presented at @NeurIPSConf 2025. Learn more about the work in the thread below:
@harshit_sikchi
Harshit Sikchi (will be at NeurIPS 25)
11 months
🤖 Introducing RL Zero 🤖: a new approach to transform language into behavior zero-shot for embodied agents without labeled datasets! RL Zero enables prompt-to-policy generation, and we believe this unlocks new capabilities in scaling up language-conditioned RL, providing an
4
7
55
@harshit_sikchi
Harshit Sikchi (will be at NeurIPS 25)
5 months
Behavioral Foundation Models (BFMs) trained with RL are secretly more powerful than we think. BFMs directly output a policy believed to be near-optimal given any reward function. Our new work shows that they can actually do much better:
3
47
353
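For context, one common construction of such a BFM uses forward-backward (successor-feature style) representations: a reward specified at test time is projected to a task vector, and a policy is read off greedily from the predicted returns. The sketch below is only an illustration of that zero-shot inference step under assumed shapes and names (F, B, embed_dim); it is not the paper's code or API.

```python
import numpy as np

# Hedged sketch of zero-shot policy inference with a forward-backward /
# successor-feature style behavioral foundation model. All shapes and names
# are assumptions for exposition.

rng = np.random.default_rng(0)
num_states, num_actions, embed_dim = 50, 4, 16

F = rng.normal(size=(num_states, num_actions, embed_dim))  # forward embedding F(s, a)
B = rng.normal(size=(num_states, embed_dim))               # backward embedding B(s)

def infer_task_vector(reward: np.ndarray) -> np.ndarray:
    """Project a reward function onto the task-embedding space: z = E_s[B(s) r(s)]."""
    return (B * reward[:, None]).mean(axis=0)

def greedy_policy(z: np.ndarray) -> np.ndarray:
    """In each state, pick the action maximizing the predicted return F(s, a)·z."""
    q_values = F @ z                      # shape (num_states, num_actions)
    return q_values.argmax(axis=1)

reward = rng.normal(size=num_states)      # any reward specified at test time
policy = greedy_policy(infer_task_vector(reward))
print(policy[:10])
```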
@RL_Conference
RL_Conference
6 months
Reminder that early registration for RLC closes on the 30th! Please register early to save yourself some money and help us get the word out.
1
3
18
@scottniekum
Scott Niekum
7 months
I'm extremely proud of the work that Harshit has done and looking forward to seeing what he does next. Congratulations, Harshit!
@harshit_sikchi
Harshit Sikchi (will be at NeurIPS 25)
7 months
Successfully defended my Ph.D. today 🎓🥳! @scottniekum and @yayitsamyzhang are the best advisors I could have ever asked for. A big thanks to my committee members @marcgbellemare @yukez @PeterStone_TX . The full presentation video will be uploaded soon... Excited about what's
1
0
34
@g_k_swamy
Gokul Swamy
8 months
1.5 yrs ago, we set out to answer a seemingly simple question: what are we *actually* getting out of RL in fine-tuning? I'm thrilled to share a pearl we found on the deepest dive of my PhD: the value of RL in RLHF seems to come from *generation-verification gaps*. Get ready to 🤿!
24
235
2K
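The intuition behind a generation-verification gap can be shown with a toy best-of-N example: a model that rarely generates a correct answer on its own can still be pushed toward good behavior by a verifier that only checks candidates. This sketch is purely illustrative and not the paper's setup; the generator and verifier below are stand-ins.

```python
import random

# Toy illustration of a generation-verification gap: generating a correct
# answer is hard, but a cheap verifier lets us recover better behavior by
# sampling N candidates and keeping the best-scoring one (best-of-N).

random.seed(0)

def generator() -> int:
    """A weak 'policy' that is correct (returns 42) only ~10% of the time."""
    return 42 if random.random() < 0.1 else random.randint(0, 100)

def verifier(answer: int) -> float:
    """A cheap check that scores candidates; here it simply recognizes 42."""
    return 1.0 if answer == 42 else 0.0

def best_of_n(n: int) -> int:
    candidates = [generator() for _ in range(n)]
    return max(candidates, key=verifier)

hits = sum(best_of_n(8) == 42 for _ in range(1000)) / 1000
print(f"best-of-8 accuracy ≈ {hits:.2f} vs ~0.10 for a single sample")
```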
@scottniekum
Scott Niekum
8 months
It’s about time! 🎉🎉🎉🎉🎉🎉🎉
nytimes.com
Andrew Barto and Richard Sutton developed reinforcement learning, a technique vital to chatbots like ChatGPT.
0
2
16
@HatgisKessell
Stephane Hatgis-Kessell
10 months
Our new paper proposes a novel method for model alignment: designing user interfaces to guide humans to conform more to the assumptions made by the algorithms that learn from their feedback. And it works! #AI #MachineLearning #RLHF #Alignment (1/n)
1
1
6
@gregd_nlp
Greg Durrett
10 months
Huge congrats to @prasann_singhal for being one of the 8 CRA Outstanding Undergraduate Researcher Award winners! It has been an absolute privilege to work with Prasann during his time at UT. (And he's applying for PhD programs this year...hint hint...) Prasann's work... 🧵
4
14
101
@scottniekum
Scott Niekum
11 months
I'm quite excited about this and still a bit shocked that it works as well as it does. Imitation via distribution matching has always felt like a clunky, brittle way to teach agents. Language + zero-shot RL is natural and scales well, due to the unsupervised nature of RL Zero.
@harshit_sikchi
Harshit Sikchi (will be at NeurIPS 25)
11 months
🤖 Introducing RL Zero 🤖: a new approach to transform language into behavior zero-shot for embodied agents without labeled datasets! RL Zero enables prompt-to-policy generation, and we believe this unlocks new capabilities in scaling up language-conditioned RL, providing an
0
3
29
@RL_Conference
RL_Conference
11 months
The call for papers for RLC is now up! Abstract deadline of 2/14, submission deadline of 2/21! Please help us spread the word. https://t.co/sNld9g9RDO
0
29
90
@brendan642
brendan o'connor
1 year
We're hiring new #nlproc faculty this year! Asst or Assoc Professors in NLP at UMass CICS -- https://t.co/45Jch7T5yZ
0
3
9
@RLDMDublin2025
RLDM
1 year
Save the date! RLDM 2025, The Multi-disciplinary Conference on Reinforcement Learning and Decision Making, is just around the corner. Visit our website to keep an eye on our submission deadlines👀 https://t.co/8zbBy1uQQN
0
7
29
@meghanehuber
Meghan E. Huber
1 year
Come join our team at UMass Robotics!! We are hiring at the Associate/Full level for a joint appointment in engineering and computer science. Feel free to reach out if you have any questions. RTs appreciated :)
0
17
20
@duke_zzwang
Zizhao Wang
1 year
In multi-object environments, why do most unsupervised skill discovery methods fail to learn complex skills like tool use? Because they simply maximize state coverage. Introducing our solution, SkiLD: Skill Discovery Guided by Factor Interactions (NeurIPS24) https://t.co/buo3qSdI1O
1
12
64
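The "maximize state coverage" baseline the tweet refers to is commonly implemented as a DIAYN-style mutual-information objective, rewarding the agent for reaching states from which its current skill can be identified. The sketch below illustrates that baseline objective only, with a stand-in discriminator; it is not SkiLD's method.

```python
import numpy as np

# DIAYN-style "state coverage" intrinsic reward: r = log q(z | s) - log p(z),
# where q is a learned skill discriminator and p(z) is a uniform skill prior.
# The discriminator logits here are a toy stand-in.

num_skills = 8

def intrinsic_reward(disc_logits: np.ndarray, skill: int) -> float:
    """Reward the agent for reaching states that reveal which skill it is running."""
    log_q = disc_logits[skill] - np.log(np.exp(disc_logits).sum())  # log q(z|s)
    log_p = -np.log(num_skills)                                     # uniform prior log p(z)
    return float(log_q - log_p)

# Example: a discriminator that is confident the state came from skill 3.
logits = np.full(num_skills, -2.0)
logits[3] = 3.0
print(intrinsic_reward(logits, skill=3))   # high reward: state is skill-distinguishable
print(intrinsic_reward(logits, skill=0))   # low reward: state does not reveal skill 0
```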
@EugeneVinitsky
Eugene Vinitsky 🦋
1 year
In our new paper, we find that LLMs can efficiently do RLHF in-context! Our method, in-context preference learning (ICPL), iterates between having the LLM write reward functions, training agents on them, and putting the resulting preferences into context. We see a 30x boost in query efficiency over baseline RLHF!
2
25
225
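The loop described in the tweet above can be sketched roughly as follows. The helpers (ask_llm_for_reward_code, train_agent, human_prefers) are trivial placeholders I introduce for illustration, not the paper's code or API.

```python
import random

# Hedged sketch of an in-context preference learning (ICPL) style loop: the LLM
# writes candidate reward functions, an agent is trained under each, a human
# preference over the resulting behaviors is appended to the LLM's context,
# and the loop repeats.

def ask_llm_for_reward_code(context: list[str]) -> str:
    """Placeholder for an LLM call that returns reward-function source code."""
    return f"def reward(obs): return {random.random():.3f}  # candidate given {len(context)} notes"

def train_agent(reward_code: str) -> dict:
    """Placeholder for RL training under the candidate reward."""
    return {"reward_code": reward_code, "score": random.random()}

def human_prefers(agent: dict) -> float:
    """Placeholder for a human comparing rollouts; here, just a scalar score."""
    return agent["score"]

def icpl_loop(task: str, iterations: int = 5, candidates: int = 4) -> dict:
    context, best = [f"Task: {task}"], None
    for _ in range(iterations):
        reward_fns = [ask_llm_for_reward_code(context) for _ in range(candidates)]
        agents = [train_agent(code) for code in reward_fns]
        best = max(agents, key=human_prefers)                         # preference elicitation
        context.append("Preferred so far:\n" + best["reward_code"])   # in-context feedback
    return best

print(icpl_loop("make the humanoid do a backflip")["reward_code"])
```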
@MarlosCMachado
Marlos C. Machado
1 year
For those interested, the keynotes of the @RL_Conference 2024 are now available online: https://t.co/PvFBsamvoI Unfortunately, Doina Precup's talk was not recorded, but we have: Andy Barto, @EmmaBrunskill, @FinaleDoshi, @svlevine, David Silver, and @PeterStone_TX.
youtube.com
Videos from RLC 2024 and onwards!
1
61
250
@DavidSKrueger
David Krueger
1 year
"Predicting Future Actions of Reinforcement Learning Agents" - Chung et al. We introduce the problem of predicting RL agents' behavior, which could have important safety implications. We find that RL agents that perform explicit (or implicit) planning can be more predictable.
1
1
5
@harshit_sikchi
Harshit Sikchi (will be at NeurIPS 25)
1 year
Our cross-university collaborative work on "Scaling laws for Reward Model Overoptimization in Direct Alignment Algorithms" is accepted at @NeurIPSConf!
@rm_rafailov
Rafael Rafailov
1 year
After the LLaMa 3.1 release and ICML, I want to highlight our paper "Scaling Laws for Reward Model Overoptimization in Direct Alignment Algorithms". TL;DR we explore the dynamics of over-optimization in DPO/IPO/SLiC and find similar "reward hacking" issues as online RLHF.👇
0
5
21
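As background for the over-optimization question above: DPO optimizes an implicit reward given by the scaled log-ratio of the trained policy and a frozen reference model. The snippet below is the textbook DPO loss, included only for orientation; it is not the paper's experimental code.

```python
import torch
import torch.nn.functional as F

# Standard DPO objective. Inputs are per-sequence log-probabilities of the
# chosen and rejected responses under the trained policy and the frozen
# reference model.

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta: float = 0.1):
    # Implicit rewards: beta * log(pi_theta / pi_ref) for each response.
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    # Maximize the margin between chosen and rejected implicit rewards.
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()

loss = dpo_loss(torch.tensor([-12.0]), torch.tensor([-15.0]),
                torch.tensor([-13.0]), torch.tensor([-14.0]))
print(loss.item())
```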
@gregd_nlp
Greg Durrett
1 year
This project started with us annoyed at papers evaluating CoT "reasoning" with only GSM8k & MATH. We didn't expect to find such strong evidence that these are the only type of problem where CoT helps! Credit to @juand_r_nlp & @kmahowald for driving the rigorous meta-analysis!
@ZayneSprague
Zayne Sprague
1 year
To CoT or not to CoT?🤔 300+ experiments with 14 LLMs & systematic meta-analysis of 100+ recent papers 🤯Direct answering is as good as CoT except for math and symbolic reasoning 🤯You don’t need CoT for 95% of MMLU! CoT mainly helps LLMs track and execute symbolic computation
6
32
164
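For readers unfamiliar with the comparison, the two prompting modes contrasted in the meta-analysis look roughly like the templates below. These are generic examples; the exact prompts used across the surveyed papers vary.

```python
# Generic illustration of direct answering vs. zero-shot chain-of-thought
# prompting; not the specific templates from the meta-analysis.

def direct_prompt(question: str) -> str:
    return f"{question}\nAnswer with only the final answer."

def cot_prompt(question: str) -> str:
    return f"{question}\nLet's think step by step, then give the final answer."

q = "If a train travels 60 km in 45 minutes, what is its average speed in km/h?"
print(direct_prompt(q))
print(cot_prompt(q))
```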