
Daniel Fried
@dan_fried
3K Followers · 2K Following · 23 Media · 892 Statuses
Assistant prof. @LTIatCMU @SCSatCMU; Research scientist at @AIatMeta. Working on NLP: language interfaces, applied pragmatics, language-to-code, grounding.
Pittsburgh, PA
Joined August 2013
RT @realJessyLin: User simulators bridge RL with real-world interaction. How do we get the RL paradigm to work….
RT @AdtRaghunathan: I will be at #ICML2025 🇨🇦 from Wednesday through Saturday. My students have a lot of exciting papers - check them out….
RT @valeriechen_: Excited to be hanging out today at @WiMLworkshop 👩🏻‍💻. Come say hi during the poster session. 🕝 2:45–3:30pm 📍 West Meetin….
@ZhiruoW @maojiayuan @gneubig Paper: Work led by @YiqingXieNLP, with Alex Xie, @Divyanshu_Sheth, @stefan_fee, @dan_fried, and @carolynprose.
github.com
Code for "[COLM'25] RepoST: Scalable Repository-Level Coding Environment Construction with Sandbox Testing" - yiqingxyq/RepoST
@ZhiruoW @maojiayuan @gneubig 2) RepoST: We automatically create executable environments from real GitHub repos, allowing us to train and evaluate models for function generation in real-world contexts. Presenting at the CODEML workshop on Fri, Jul 18th. Also accepted to COLM!
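The linked yiqingxyq/RepoST repo has the actual pipeline; the sketch below only illustrates the core idea of sandboxed testing, using hypothetical names: write a model-generated function next to repo-derived tests and run them in an isolated subprocess.

```python
# Minimal sketch of sandboxed test execution, NOT RepoST's actual API.
# All names here are hypothetical; see the linked yiqingxyq/RepoST repo
# for the real environment-construction pipeline.
import pathlib
import subprocess
import tempfile

def run_sandboxed_test(candidate_code: str, test_code: str, timeout_s: int = 30) -> bool:
    """Run repo-derived tests against a model-generated function in a subprocess."""
    with tempfile.TemporaryDirectory() as tmp:
        script = pathlib.Path(tmp) / "candidate_with_tests.py"
        script.write_text(candidate_code + "\n\n" + test_code)
        try:
            proc = subprocess.run(
                ["python", str(script)],
                cwd=tmp,
                capture_output=True,
                timeout=timeout_s,
            )
        except subprocess.TimeoutExpired:
            return False  # treat hangs as failures
        return proc.returncode == 0  # tests passed iff the script exited cleanly
```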
Excited to be presenting two of our papers at #ICML2025 and its workshops, today through Saturday! The topics are memory for agents and constructing coding environments for training & evaluation. See links below:
RT @justintchiu: Are code agents good at software design, i.e. building general and reusable code? We present Librarian, a new refactoring me….
RT @evanthebouncy: I've recently started my job as an asst professor at NTU, Singapore. If you are ever in town come say hi :) https://t.co….
RT @yueqi_song: Humans can perform complex reasoning without relying on specific domain knowledge, but can multimodal models truly do that….
RT @evanthebouncy: Just look at these multi-modal refinement instructions! How would we ground them into reasonable executions?? joint wor….
RT @evanthebouncy: What would it take to build agents that can similarly follow refinement instructions? We hope that mrCAD can help, by g….
RT @evanthebouncy: Analyzing the instructions from successful rollouts reveals that:
- people used more drawings in generation (round 1) a….
In this multi-turn instruction following work, we found pretty interesting changes in the modalities people use to communicate from turn to turn -- and gaps in grounded LLM performance. I'm excited about this domain and dataset, and extensions to others (like code)!
New multi-turn instruction grounding dataset with @wp_mccarthy and @saujasv:
- multi-modal instruction: drawing + txt
- verifiable execution: 2D CAD gym env
- easy eval: API → score
- baselines: human vs VLMs
- large: 15,163 inst-exe rounds
[1/n]
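To make the "easy eval: API → score" bullet concrete, here is a hypothetical gym-style loop. None of these class or method names are the real mrCAD API; they only illustrate the shape of a multi-turn episode in which the model sees a text + drawing instruction, proposes a revised 2D CAD program, and the environment returns a score.

```python
# Hypothetical sketch of an "API → score" evaluation loop for a 2D CAD gym
# environment. Class, method, and field names are invented for illustration.
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Instruction:
    text: str       # natural-language refinement instruction for this round
    drawing: bytes  # rasterized sketch accompanying the text, if any

class CadEnv(Protocol):
    def reset(self, task_id: str) -> tuple[str, Instruction]: ...
    def step(self, cad_program: str) -> tuple[Instruction, float, bool]: ...

def evaluate(env: CadEnv, model, task_id: str) -> float:
    """Run one multi-turn episode and return the final similarity score."""
    current_cad, instruction = env.reset(task_id)
    score, done = 0.0, False
    while not done:
        # The model proposes a revised CAD program given the current program
        # and this round's (text + drawing) instruction.
        current_cad = model.refine(current_cad, instruction)
        instruction, score, done = env.step(current_cad)
    return score
```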
I'm excited about Andre's work, which analyzes GRPO and identifies that it's biased towards reinforcing solutions that are already highly probable. We found two easy-to-implement fixes; these improve pass@N and produced a strong theorem-proving model!
New paper by Andre He: Rewarding the Unlikely: Lifting GRPO Beyond Distribution Sharpening. Tired of sharpening the distribution? Try unlikeliness reward to learn new things from the roads less traveled.
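The paper's exact formulation isn't quoted in this thread, so the sketch below is only a guess at what an unlikeliness-style bonus could look like on top of GRPO's group-normalized advantages: among correct samples, give extra credit to the ones the current policy assigns low probability, so updates don't merely sharpen already-probable solutions.

```python
# Guessed sketch of a group-relative advantage with an "unlikeliness" bonus.
# This is NOT the paper's formulation; it only illustrates the idea of
# upweighting correct samples that the policy currently finds unlikely.
import torch

def grpo_advantages(rewards: torch.Tensor, logprobs: torch.Tensor,
                    unlikeliness_coef: float = 0.5) -> torch.Tensor:
    """rewards, logprobs: shape (group_size,) for one prompt's sampled completions."""
    # Standard GRPO: advantage = reward normalized within the sampled group.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-6)
    # Hypothetical unlikeliness bonus: rank completions by policy log-prob
    # (rank 0 = most likely) and give larger bonuses to correct completions
    # (reward > 0) that the policy assigns lower probability.
    ranks = (-logprobs).argsort().argsort().float()
    bonus = unlikeliness_coef * (ranks / max(len(ranks) - 1, 1)) * (rewards > 0).float()
    return adv + bonus
```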
RT @kayo_yin: Happy to announce the first workshop on Pragmatic Reasoning in Language Models — PragLM @ COLM 2025! 🧠🎉 How do LLMs engage i….
RT @jmin__cho: Sharing some personal updates 🥳:
- I've completed my PhD at @unccs! 🎓
- Starting Fall 2026, I'll be joining the Computer Sci….
RT @PhilippeLaban: 🆕paper: LLMs Get Lost in Multi-Turn Conversation. In real life, people don’t speak in perfect prompts. So we simulate mu….
RT @wzhao_nlp: Some personal news: I'll join @UMassAmherst CS as an assistant professor in fall 2026. Until then, I'll postdoc at @Meta nyc….