
ML@CMU
@mlcmublog
Followers: 2K · Following: 5 · Media: 36 · Statuses: 116
Official Twitter account for the ML@CMU blog @mldcmu @SCSatCMU
Pittsburgh, PA
Joined February 2020
Check out our latest post on CMU @ ICML 2025!
blog.ml.cmu.edu
CMU researchers are presenting 127 papers at the Forty-Second International Conference on Machine Learning (ICML 2025), held from July 13th-19th at the Vancouver Convention Center. Here is a quick...
In this in-depth coding tutorial, @GaoZhaolin and @g_k_swamy walk through the steps to train an LLM via RL from Human Feedback!
blog.ml.cmu.edu
Reinforcement Learning from Human Feedback (RLHF) is a popular technique used to align AI systems with human preferences by training them using feedback from people, rather than relying solely on...
Are your LLMs truly forgetting unwanted data? In this new blog post authored by @shengyuan_26734, Yiwei Fu, @zstevenwu, and @gingsmith, we discuss how benign relearning can jog an unlearned LLM's memory to recover knowledge that is supposed to be forgotten.
blog.ml.cmu.edu
Machine unlearning is a promising approach to mitigate undesirable memorization of training data in ML models. In this post, we will discuss our work (which appeared at ICLR 2025) demonstrating that...
Check out our latest blog post on CMU @ ICLR 2025!
blog.ml.cmu.edu
CMU researchers are presenting 143 papers at the Thirteenth International Conference on Learning Representations (ICLR 2025), held from April 24 - 28 at the Singapore EXPO. Here is a quick overview...
Check out our new blog post on ALLIE, a new chess AI that actually plays like a human! Unlike Stockfish or AlphaZero, which focus on winning at all costs, ALLIE uses a transformer model trained on human chess games to make moves, ponder, and resign like a human.
blog.ml.cmu.edu
Play against Allie on lichess! Introduction In 1948, Alan Turing designed what might be the first chess-playing AI, a paper program that Turing himself acted as the computer for. Since then, chess...
📈⚠️ Is your LLM unlearning benchmark measuring what you think it is? In a new blog post authored by @prthaker_, @shengyuan_26734, @neilkale, @yash_maurya01, @zstevenwu, and @gingsmith, we discuss why empirical benchmarks are necessary but not sufficient.
blog.ml.cmu.edu
TL;DR: "Machine unlearning" aims to remove data from models without retraining the model completely. Unfortunately, state-of-the-art benchmarks for evaluating unlearning in LLMs are flawed, especia...
How do real-world developer preferences compare to existing evaluations? A CMU and UC Berkeley team led by @iamwaynechi and @valeriechen_ created @CopilotArena to collect user preferences on in-the-wild workflows. This blog post overviews the design and
How can we train LLMs to solve complex challenges beyond just data scaling? In a new blog post, @setlur_amrith, @QuYuxiao, Matthew Yang, @LunjunZhang, @gingsmith, and @aviral_kumar2 demonstrate that meta-RL can help LLMs better optimize test-time compute.
blog.ml.cmu.edu
Figure 1: Training models to optimize test-time compute and learn "how to discover" correct responses, as opposed to the traditional learning paradigm of learning "what answer" to output. The major...
Why is our brain 🧠 modular with specialized areas? Recent research by Ruiyi Zhang @Xaqlab shows that artificial agents 🤖 with modular architectures—mirroring brain-like specialization—achieve better learning and generalization in naturalistic navigation.
blog.ml.cmu.edu
TL;DR: The brain may have evolved a modular architecture for daily tasks, with circuits featuring functionally specialized modules that match the task structure. We hypothesize that this architecture...
Have you had difficulty using a new machine for DIY or latte-making? Have you forgotten to add spice while cooking? @hciphdstudent, @hiromu1996, @mollyn_paan, Jill Fain Lehman, and @mynkgoel are leveraging multimodal sensing to improve the.
blog.ml.cmu.edu
TL;DR: At SmashLab, we're creating an intelligent assistant that uses the sensors in a smartwatch to support physical tasks such as cooking and DIY. This blog post explores how we use less intrusive...
A critical question arises when using large language models: should we fine-tune them or rely on prompting with in-context examples? Recent work led by @JunhongShen1 and collaborators demonstrates that we can develop state-of-the-art web agents by
Check out our latest blog post on CMU @ NeurIPS 2024!
blog.ml.cmu.edu
Carnegie Mellon University is proud to present 194 papers at the 38th Conference on Neural Information Processing Systems (NeurIPS 2024), held from December 10-15 at the Vancouver Convention Center....
AI-powered robots are alarmingly easy to jailbreak to perform dangerous tasks, including delivering bombs, surveilling humans, and ignoring traffic laws. What does the future hold for AI-powered robots? Learn more in our latest blog post, based on work.
blog.ml.cmu.edu
Summary. Recent research has shown that large language models (LLMs) such as ChatGPT are susceptible to jailbreaking attacks, wherein malicious users fool an LLM into generating toxic content (e.g.,...