ML@CMU

@mlcmublog

Followers 2K · Following 5 · Media 36 · Statuses 116

Official Twitter account for the ML@CMU blog @mldcmu @SCSatCMU

Pittsburgh, PA
Joined February 2020
@mlcmublog
ML@CMU
2 months
Are your LLMs truly forgetting unwanted data? In this new blog post authored by @shengyuan_26734, Yiwei Fu, @zstevenwu, and @gingsmith, we discuss how benign relearning can jog an unlearned LLM's memory to recover knowledge that is supposed to be forgotten.
blog.ml.cmu.edu
Machine unlearning is a promising approach to mitigate undesirable memorization of training data in ML models. In this post, we will discuss our work (which appeared at ICLR 2025) demonstrating that...
0
3
5
@mlcmublog
ML@CMU
3 months
Check out our new blog post on ALLIE, a new chess AI that actually plays like a human! Unlike Stockfish or AlphaZero, which focus on winning at all costs, ALLIE uses a transformer model trained on human chess games to make moves, ponder, and resign like a human.
blog.ml.cmu.edu
Play against Allie on lichess! Introduction In 1948, Alan Turing designed what might be the first chess-playing AI, a paper program that Turing himself acted as the computer for. Since then, chess...
0
2
1
@mlcmublog
ML@CMU
3 months
📈⚠️ Is your LLM unlearning benchmark measuring what you think it is? In a new blog post authored by @prthaker_, @shengyuan_26734, @neilkale, @yash_maurya01, @zstevenwu, and @gingsmith, we discuss why empirical benchmarks are necessary but not sufficient.
blog.ml.cmu.edu
TL;DR: "Machine unlearning" aims to remove data from models without retraining the model completely. Unfortunately, state-of-the-art benchmarks for evaluating unlearning in LLMs are flawed, especia...
0
12
13
@mlcmublog
ML@CMU
3 months
How do real-world developer preferences compare to existing evaluations? A CMU and UC Berkeley team led by @iamwaynechi and @valeriechen_ created @CopilotArena to collect user preferences on in-the-wild workflows. This blog post overviews the design and...
0
7
18
@mlcmublog
ML@CMU
6 months
How can we train LLMs to solve complex challenges beyond just data scaling? In a new blog post, @setlur_amrith, @QuYuxiao, Matthew Yang, @LunjunZhang, @gingsmith, and @aviral_kumar2 demonstrate that Meta RL can help LLMs better optimize test-time compute.
blog.ml.cmu.edu
Figure 1: Training models to optimize test-time compute and learn "how to discover" correct responses, as opposed to the traditional learning paradigm of learning "what answer" to output. The major...
3
22
91
@mlcmublog
ML@CMU
7 months
Why is our brain 🧠 modular with specialized areas? Recent research by Ruiyi Zhang @Xaqlab shows that artificial agents 🤖 with modular architectures—mirroring brain-like specialization—achieve better learning and generalization in naturalistic navigation.
blog.ml.cmu.edu
TL;DR: The brain may have evolved a modular architecture for daily tasks, with circuits featuring functionally specialized modules that match the task structure. We hypothesize that this architecture...
0
2
5
@mlcmublog
ML@CMU
7 months
Have you had difficulty using a new machine for DIY or latte-making? Have you forgotten to add spice during cooking? @hciphdstudent @hiromu1996 @mollyn_paan, Jill Fain Lehman, and @mynkgoel are leveraging multimodal sensing to improve the...
blog.ml.cmu.edu
TL;DR: At SmashLab, we're creating an intelligent assistant that uses the sensors in a smartwatch to support physical tasks such as cooking and DIY. This blog post explores how we use less intrusive...
0
5
14
@mlcmublog
ML@CMU
8 months
A critical question arises when using large language models: should we fine-tune them or rely on prompting with in-context examples? Recent work led by @JunhongShen1 and collaborators demonstrates that we can develop state-of-the-art web agents by...
0
3
14
@mlcmublog
ML@CMU
9 months
Demining 70+ war-affected countries could take 1,100 years at the current pace. This AI-powered tool, developed in close collaboration with the UN in work led by Mateo Dulce, halves false alarms and speeds up clearance. Now tested in Afghanistan &...
0
1
2
@mlcmublog
ML@CMU
9 months
AI-powered robots are alarmingly easy to jailbreak to perform dangerous tasks, including delivering bombs, surveilling humans, and ignoring traffic laws. What does the future hold for AI-powered robots? Learn more in our latest blog post, based on work...
blog.ml.cmu.edu
Summary. Recent research has shown that large language models (LLMs) such as ChatGPT are susceptible to jailbreaking attacks, wherein malicious users fool an LLM into generating toxic content (e.g.,...
0
6
15