Anqi (Angie) Liu
@anqi_liu33
2K Followers · 169 Following · 16 Media · 189 Statuses
Assistant Professor in the CS Department at Johns Hopkins University @JHUCompSci
Baltimore, MD
Joined August 2012
Really exciting work led by Nathan! We show that In-Context Learning transcends natural language and is a more general artefact of next-token predictors in pattern-rich sequences!
Honored to announce our recent work at @jhuclsp - "Genomic Next-Token Predictors are In-Context Learners". We show that genomic foundation models like Evo2 (by @arcinstitute) exhibit ICL on symbolic manipulation tasks outside their pre-training domain. 🧵 below w/ arXiv link.
0 · 3 · 6
For years since the GPT-2 paper, emergent in-context learning (ICL) from 'next-token' training has been treated as something deeply tied to human language. But … is it? Thrilled to share our latest result: Genomic 🧬 models trained ONLY on …
5 · 23 · 108
Thanks, Nathan, for the great work! We show that ICL exists not only in human-language models, but potentially in models trained on more general structured sequences. Interestingly, the patterns these models learn are potentially different, so they show different qualitative properties in ICL.
Honored to announce our recent work at @jhuclsp - "Genomic Next-Token Predictors are In-Context Learners". We show that genomic foundation models like Evo2 (by @arcinstitute) exhibit ICL on symbolic manipulation tasks outside their pre-training domain. 🧵 below w/ arXiv link.
0 · 2 · 4
This work started when I was a postdoc at Caltech. Analyzing social media data opened a door for me to think about society and how AI/ML can play a positive role there. Thanks, everyone, for making this happen.
NEW RESEARCH: @SaraKangaslahti, @HarvardUniv | @DannyCEbanks, @IQSS, Harvard | @JeanKossaifi, @nvidia | @anqi_liu33, @HopkinsEngineer, @JohnsHopkins | @rmichaelalvarez, @Caltech, @CaltechLCSSP | @AnimaAnandkumar, Caltech Forthcoming in Political Analysis: https://t.co/NNR2z0mdcO
0 · 2 · 4
NEW RESEARCH: @SaraKangaslahti, @HarvardUniv | @DannyCEbanks, @IQSS, Harvard | @JeanKossaifi, @nvidia | @anqi_liu33, @HopkinsEngineer, @JohnsHopkins | @rmichaelalvarez, @Caltech, @CaltechLCSSP | @AnimaAnandkumar, Caltech Forthcoming in Political Analysis: https://t.co/NNR2z0mdcO
3 · 10 · 13
Join @anqi_liu33 for "Beyond Empirical Risk Minimization: Performance Guarantees, Distribution Shifts, and Noise Robustness," an invited talk from @BCAMBilbao's Santiago Mazuelas on Monday, November 10:
cs.jhu.edu
Abstract: The empirical risk minimization (ERM) approach for supervised learning chooses prediction rules that fit training samples and are "simple" (generalize). This approach has been the workhorse...
0 · 2 · 1
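The talk abstract above centers on ERM: choose a rule that fits the training sample while staying "simple" enough to generalize. A minimal, hedged sketch of that idea (toy data and names, nothing from the talk itself): regularized least squares is exactly ERM for squared loss plus a simplicity penalty.

```python
import numpy as np

# Hypothetical toy data: 100 samples, 5 features, linear labels plus noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.1 * rng.normal(size=100)

# Regularized ERM for squared loss:
#   minimize_w  (1/n) * ||Xw - y||^2 + lam * ||w||^2
# The penalty encodes the preference for "simple" rules that generalize.
lam = 0.1
n, d = X.shape
w_hat = np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)
print("estimated weights:", np.round(w_hat, 2))
```

The talk's subject is what lies beyond this recipe (performance guarantees under distribution shift and noise), but this is the baseline it departs from.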
In @eLife, our OpenSpliceAI paper, led by @KuanHaoChao, is now 'official,' though it has been online since July. If you want to enjoy the reviewers' comments and our responses, check it out at: https://t.co/Jv7DfmClBa
elifesciences.org
OpenSpliceAI is an open, retrainable framework for splice site prediction that enables rapid, memory-efficient, cross-species analyses at scale with accuracy comparable to SpliceAI.
0 · 9 · 23
If you are at #COLM2025 and are interested in uncertainty estimation and probabilistic predictions of LLMs, talk to @LiaoyaqiW! Our work leverages comprehensive synthetic data and training signals from both decoder-based regression and ranking to achieve fine-grained estimation.
Thrilled to share our new work, "Always Tell Me The Odds," at COLM 2025! LLMs struggle with accurate probability predictions, often giving coarse answers. We train decoder-based models to provide fine-grained, calibrated probabilities, significantly outperforming strong baselines!
0 · 1 · 11
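The quoted work trains decoder-based models to output fine-grained, calibrated probabilities. This is not the paper's method, but as a hedged illustration of what "calibrated" means operationally, here is the standard expected calibration error (ECE) diagnostic on synthetic data:

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Bin predictions by confidence; compare each bin's mean predicted
    probability to its empirical accuracy, weighted by bin mass."""
    probs, labels = np.asarray(probs), np.asarray(labels)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (probs > lo) & (probs <= hi)
        if mask.any():
            ece += mask.mean() * abs(probs[mask].mean() - labels[mask].mean())
    return ece

# Hypothetical usage: 1000 predicted probabilities vs. binary outcomes,
# well calibrated by construction, so ECE should be near zero.
rng = np.random.default_rng(1)
p = rng.uniform(size=1000)
y = (rng.uniform(size=1000) < p).astype(int)
print(f"ECE: {expected_calibration_error(p, y):.3f}")
```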
Thank you for having me! I enjoyed every conversation with faculty and students in the ASSET center.
Thank you @anqi_liu33 for your presentation yesterday! If you missed her talk, you can view it here:
0 · 1 · 5
ICL and SFT are the two most studied ways to adapt LMs. We understand each in isolation, but far less about how they might complement one another. Our latest work asks two questions: 1️⃣ Do ICL and SFT operate differently? 2️⃣ And if so, can one …
huggingface.co
"Pre-training is our crappy evolution. It is one candidate solution to the cold start problem..." Exactly! When presented with information rich context, LLMs prepare how to respond using their pre-trained (evolved) brains. In our paper, we exploit this signal to improve SFT!
1 · 28 · 171
"Pre-training is our crappy evolution. It is one candidate solution to the cold start problem..." Exactly! When presented with information rich context, LLMs prepare how to respond using their pre-trained (evolved) brains. In our paper, we exploit this signal to improve SFT!
Finally had a chance to listen through this pod with Sutton, which was interesting and amusing. As background, Sutton's "The Bitter Lesson" has become a bit of a biblical text in frontier LLM circles. Researchers routinely talk about and ask whether this or that approach or idea …
4 · 21 · 136
#HopkinsDSAI welcomes 22 new faculty members, who join more than 150 DSAI faculty members across @JohnsHopkins in advancing the study of data science, machine learning, and #AI, and their translation to a range of critical and emerging fields. https://t.co/tAauSzRFWD
3 · 29 · 203
How do you estimate an ML model's performance in the wild? Our takeaway: if you can tell the difference between the target distribution and the source one, it helps! Sounds natural? It turns out no previous theory directly indicates this, and no previous methods exist. Check out Aayush's work!
In high-stakes applications, like medical diagnosis, practitioners often want an answer to the following question: "How would this ML model perform on new data?" In our UAI 2025 paper, we provide tight and practical upper bounds on target performance using domain overlap! 🧵
0 · 2 · 8
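The thread above hinges on being able to distinguish target from source data. The paper derives formal upper bounds from domain overlap; as a hedged sketch of the adjacent classical idea (not the paper's estimator, and every name below is illustrative), a domain classifier's outputs give density-ratio weights for estimating target accuracy from labeled source data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical setup: labeled source data, unlabeled (shifted) target data,
# and per-sample correctness of some already-trained model on the source.
rng = np.random.default_rng(2)
X_src = rng.normal(0.0, 1.0, size=(500, 3))
X_tgt = rng.normal(0.5, 1.0, size=(500, 3))
y_src = (X_src[:, 0] > 0).astype(int)
model_correct = ((X_src[:, 0] > 0.1) == y_src)  # stand-in for model correctness

# Domain classifier: label 0 = source, 1 = target.
X_dom = np.vstack([X_src, X_tgt])
d_dom = np.r_[np.zeros(len(X_src)), np.ones(len(X_tgt))]
clf = LogisticRegression().fit(X_dom, d_dom)

# Density ratio w(x) = p_tgt(x) / p_src(x) ≈ P(dom=1|x) / P(dom=0|x)
# (up to the class-balance constant, which is 1 here since |src| == |tgt|).
p = clf.predict_proba(X_src)[:, 1]
w = p / (1.0 - p)

# Importance-weighted source accuracy as an estimate of target accuracy.
print(f"estimated target accuracy: {np.sum(w * model_correct) / np.sum(w):.3f}")
```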
Our university's news site wrote a nice story about our exhibit at the Smithsonian National Museum of Natural History. It was such a great experience, with lots of interesting and insightful conversations with visitors from all over the world! @JHUCompSci
hub.jhu.edu
Johns Hopkins and Notre Dame researchers partnered with the National Museum of Natural History to help children hone their online privacy skills
0 · 5 · 19
JHU researchers including @ZiangXiao and @anqi_liu33 have created a system that could make social robots more effective at detecting and managing user interruptions in real time, a breakthrough for areas like health care and education where natural conversation is crucial.
Johns Hopkins computer scientists have created an interruption-handling system to facilitate more natural conversations with social robots.
0 · 4 · 13
Check out our new @eLife paper, led by @KuanHaoChao along with @alan_mayonnaise, @anqi_liu33, @elapertea: OpenSpliceAI: An efficient, modular implementation of SpliceAI
elifesciences.org
OpenSpliceAI is an open, retrainable framework for splice site prediction that enables rapid, memory-efficient, cross-species analyses at scale with accuracy comparable to SpliceAI.
1 · 12 · 24
How can the accuracy and accessibility of a leading splice-site prediction tool be improved for broader genomic research across different species? @eLife @johnhopkinsuni "OpenSpliceAI: An efficient, modular implementation of SpliceAI enabling easy retraining on non-human species"
1 · 7 · 19
I am not going to make it to ICML this time, but @DrewPrinster will be there! Check out our recent work on online AI monitoring under non-stationary environments.
& "WATCH: Adaptive Monitoring for AI Deployments via Weighted-Conformal Martingales" by @DrewPrinster, @xinghan0, @anqi_liu33, & @suchisaria proposes a weighted generalization of conformal test martingales: https://t.co/jdFmz7kFe0 (5/5)
0 · 3 · 10
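WATCH weights conformal test martingales to cope with non-stationarity. The sketch below is the unweighted baseline such monitors build on (a power martingale over smoothed conformal p-values), not the WATCH procedure itself; under exchangeability the p-values are uniform and the martingale stays small, while a distribution shift makes it grow:

```python
import numpy as np

def conformal_p_value(score, calib_scores, rng):
    """Smoothed conformal p-value of a new nonconformity score against
    a calibration set; uniform on [0, 1] under exchangeability."""
    calib_scores = np.asarray(calib_scores)
    greater = np.sum(calib_scores > score)
    ties = np.sum(calib_scores == score) + 1  # +1 counts the test point
    return (greater + rng.uniform() * ties) / (len(calib_scores) + 1)

def power_martingale(p_values, eps=0.5):
    """Test martingale M_n = prod_i eps * p_i**(eps - 1); large values
    are evidence against 'no shift' (loss of exchangeability)."""
    p = np.asarray(p_values)
    return np.cumprod(eps * p ** (eps - 1.0))

# Hypothetical monitoring run: in-distribution stream, then a mean shift.
rng = np.random.default_rng(3)
calib = rng.normal(0.0, 1.0, size=200)
stream = np.r_[rng.normal(0.0, 1.0, 100), rng.normal(2.0, 1.0, 100)]
pvals = [conformal_p_value(s, calib, rng) for s in stream]
M = power_martingale(pvals)
print(f"martingale before shift: {M[99]:.2f}, after: {M[-1]:.2e}")
```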
Welcome!
I'm thrilled to share that I will be joining Johns Hopkins University's Department of Computer Science (@JHUCompSci, @HopkinsDSAI) as an Assistant Professor this fall.
1 · 0 · 4
Welcome!
I'm so excited to join the CS department at Johns Hopkins University as an Assistant Professor! I'm looking for students interested in social computing, HCI, and AI, especially around designing better online systems in the age of LLMs. Come work with me! https://t.co/703Fxe3MdC
0 · 0 · 6