Anqi (Angie) Liu

@anqi_liu33

Followers
2K
Following
169
Media
16
Statuses
189

Assistant Professor in the CS Department at Johns Hopkins University @JHUCompSci

Baltimore, MD
Joined August 2012
@aamixsh
Aayush Mishra
19 days
Really exciting work led by Nathan! We show that In-Context Learning transcends natural language and is a more general artefact of next-token predictors on pattern-rich sequences!
@N8Programs
N8 Programs
19 days
Honored to announce our recent work at @jhuclsp - "Genomic Next-Token Predictors are In-Context Learners". We show that genomic foundation models like Evo2 (by @arcinstitute) exhibit ICL on symbolic manipulation tasks outside their pre-training domain. 🧵 below w/ arXiv link.
0
3
6
@DanielKhashabi
Daniel Khashabi 🕊️
19 days
For years since the GPT-2 paper, emergent in-context learning (ICL) from 'next-token' training has been treated as something deeply tied to human language. But … is it? Thrilled to share our latest result: Genomic 🧬 models trained only on
5
23
108
@anqi_liu33
Anqi (Angie) Liu
19 days
Thanks, Nathan, for the great work! We show that ICL exists not only in human language models but potentially also in models trained on more general structured sequences. Interestingly, their learned patterns are potentially different, so they show different qualitative properties in ICL.
@N8Programs
N8 Programs
19 days
Honored to announce our recent work at @jhuclsp - "Genomic Next-Token Predictors are In-Context Learners". We show that genomic foundation models like Evo2 (by @arcinstitute) exhibit ICL on symbolic manipulation tasks outside their pre-training domain. 🧵 below w/ arXiv link.
0
2
4
@anqi_liu33
Anqi (Angie) Liu
19 days
This work started when I was doing my postdoc at Caltech. Analyzing social media data opened a door for me to think about society and how AI/ML can play a positive role there. Thanks, everyone, for making this happen.
0
2
4
@CaltechLCSSP
Caltech LCSSP
19 days
3
10
13
@JHUCompSci
JHU Computer Science
1 month
Join @anqi_liu33 for “Beyond Empirical Risk Minimization: Performance Guarantees, Distribution Shifts, and Noise Robustness,” an invited talk from @BCAMBilbao’s Santiago Mazuelas on Monday, November 10:
cs.jhu.edu
Abstract: The empirical risk minimization (ERM) approach for supervised learning chooses prediction rules that fit training samples and are “simple” (generalize). This approach has been the workhorse...
0
2
1
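For context on the talk's starting point: empirical risk minimization can be sketched in a few lines as "pick the rule in a hypothesis class with the lowest training error." This is a generic illustration of ERM itself, not of Mazuelas's approach; the threshold hypothesis class and the data points are made up.

```python
# Generic ERM sketch: among a small class of threshold classifiers
# h_t(x) = 1[x > t], choose the one minimizing empirical (training) error.
def erm_threshold(xs, ys, thresholds):
    def emp_risk(t):
        # Fraction of training points the rule x > t misclassifies.
        return sum((x > t) != y for x, y in zip(xs, ys)) / len(xs)
    return min(thresholds, key=emp_risk)

xs = [0.1, 0.4, 0.6, 0.9]          # hypothetical 1-D features
ys = [0, 0, 1, 1]                  # hypothetical binary labels
print(erm_threshold(xs, ys, [0.2, 0.5, 0.8]))  # picks 0.5 (zero training error)
```

The talk's point, per the abstract, is what lies *beyond* this recipe: performance guarantees and robustness to distribution shift and noise, which plain ERM does not provide.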
@StevenSalzberg1
Steven Salzberg 💙💛
1 month
In @eLife, our OpenSpliceAI paper, led by @KuanHaoChao, is now 'official' though it's been online since July. If you want to enjoy the reviewers' comments and our responses, check it out at: https://t.co/Jv7DfmClBa
elifesciences.org
OpenSpliceAI is an open, retrainable framework for splice site prediction that enables rapid, memory-efficient, cross-species analyses at scale with accuracy comparable to SpliceAI.
0
9
23
@anqi_liu33
Anqi (Angie) Liu
2 months
If you are at #COLM2025 and are interested in uncertainty estimation and probabilistic predictions of LLMs, talk to @LiaoyaqiW! Our work leverages comprehensive synthetic data and training signals from both decoder-based regression and ranking to achieve fine-grained estimation.
@LiaoyaqiW
Liaoyaqi Wang
2 months
🚀 Thrilled to share our new work, "Always Tell Me The Odds," at COLM 2025. LLMs struggle with accurate probability predictions, often giving coarse answers. We train decoder-based models to provide fine-grained, calibrated probabilities, significantly outperforming strong baselines!
0
1
11
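For readers new to this area: a common way to quantify how "fine-grained and calibrated" a model's probability outputs are is expected calibration error (ECE), which compares average confidence to observed accuracy within confidence bins. This is a generic sketch of that standard metric, not the paper's training method; the binning scheme and example numbers are illustrative.

```python
# Generic ECE sketch: bin predictions by confidence, then average
# |accuracy - confidence| over bins, weighted by bin size.
def expected_calibration_error(probs, labels, n_bins=10):
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p = 1.0 into top bin
        bins[idx].append((p, y))
    total, ece = len(probs), 0.0
    for b in bins:
        if not b:
            continue
        conf = sum(p for p, _ in b) / len(b)   # mean predicted probability
        acc = sum(y for _, y in b) / len(b)    # empirical positive rate
        ece += (len(b) / total) * abs(acc - conf)
    return ece

preds = [0.9, 0.8, 0.7, 0.3, 0.2]  # hypothetical model probabilities
truth = [1, 1, 0, 0, 0]            # hypothetical outcomes
print(expected_calibration_error(preds, truth))  # 0.3
```

A perfectly calibrated predictor would have ECE near zero; coarse, overconfident answers of the kind the tweet describes inflate it.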
@anqi_liu33
Anqi (Angie) Liu
2 months
Thank you for having me! I enjoyed every conversation with faculty and students in the ASSET center.
@PennAsset
Center for Safe, Explainable, and Trustworthy AI
2 months
Thank you @anqi_liu33 for your presentation yesterday! If you missed her talk, you can view it here:
0
1
5
@DanielKhashabi
Daniel Khashabi 🕊️
2 months
ICL and SFT are the two most studied ways to adapt LMs. We understand each in isolation, but far less about how they might complement one another. Our latest work asks two questions: 1️⃣ Do ICL and SFT operate differently? 2️⃣ And if so, can one
huggingface.co
@aamixsh
Aayush Mishra
2 months
"Pre-training is our crappy evolution. It is one candidate solution to the cold start problem..." Exactly! When presented with information-rich context, LLMs prepare how to respond using their pre-trained (evolved) brains. In our paper, we exploit this signal to improve SFT!
1
28
171
@aamixsh
Aayush Mishra
2 months
"Pre-training is our crappy evolution. It is one candidate solution to the cold start problem..." Exactly! When presented with information-rich context, LLMs prepare how to respond using their pre-trained (evolved) brains. In our paper, we exploit this signal to improve SFT!
@karpathy
Andrej Karpathy
2 months
Finally had a chance to listen through this pod with Sutton, which was interesting and amusing. As background, Sutton's "The Bitter Lesson" has become a bit of a biblical text in frontier LLM circles. Researchers routinely talk about and ask whether this or that approach or idea
4
21
136
@HopkinsDSAI
Johns Hopkins Data Science and AI Institute
3 months
#HopkinsDSAI welcomes 22 new faculty members, who join more than 150 DSAI faculty members across @JohnsHopkins in advancing the study of data science, machine learning, and #AI and translation to a range of critical and emerging fields. https://t.co/tAauSzRFWD
3
29
203
@anqi_liu33
Anqi (Angie) Liu
4 months
How to estimate an ML model's performance in the wild? Our takeaway: if you can tell the difference between a target distribution and a source one, it will help! Sounds natural? It turns out no previous theory directly indicates this and no previous methods exist. Check out Aayush's work!
@aamixsh
Aayush Mishra
4 months
In high-stakes applications, like medical diagnoses, practitioners often want an answer to the following question: "How would this ML model perform on new data?" In our UAI 2025 paper, we provide tight and practical upper bounds on target performance using domain overlap! 🧵
0
2
8
@yaxingyao
Yaxing Yao
4 months
Our university news wrote a nice story about our exhibit at the Smithsonian National Museum of Natural History. It was such a great experience, with lots of interesting and insightful conversations with visitors from all over the world! @JHUCompSci
hub.jhu.edu
Johns Hopkins and Notre Dame researchers partnered with the National Museum of Natural History to help children hone their online privacy skills
0
5
19
@JHUCompSci
JHU Computer Science
4 months
JHU researchers including @ZiangXiao and @anqi_liu33 have created a system that could make social robots more effective at detecting and managing user interruptions in real time, a breakthrough for areas like health care and education where natural conversation is crucial.
@JohnsHopkins
Johns Hopkins University
4 months
Johns Hopkins computer scientists have created an interruption-handling system to facilitate more natural conversations with social robots.
0
4
13
@aipulserx
DailyHealthcareAI
5 months
How can the accuracy and accessibility of a leading splice-site prediction tool be improved for broader genomic research across different species? @eLife @johnhopkinsuni "OpenSpliceAI: An efficient, modular implementation of SpliceAI enabling easy retraining on non-human species"
1
7
19
@anqi_liu33
Anqi (Angie) Liu
5 months
I am not going to make it to ICML, but @DrewPrinster will be there! Check out our recent work on online AI monitoring in a non-stationary environment.
@JHUCompSci
JHU Computer Science
5 months
& “WATCH: Adaptive Monitoring for AI Deployments via Weighted-Conformal Martingales” by @DrewPrinster, @xinghan0, @anqi_liu33, & @suchisaria proposes a weighted generalization of conformal test martingales: https://t.co/jdFmz7kFe0 (5/5)
0
3
10
@anqi_liu33
Anqi (Angie) Liu
6 months
Welcome!
@anand_bhattad
Anand Bhattad
6 months
I’m thrilled to share that I will be joining Johns Hopkins University’s Department of Computer Science (@JHUCompSci, @HopkinsDSAI) as an Assistant Professor this fall.
1
0
4
@anqi_liu33
Anqi (Angie) Liu
6 months
Welcome!
@tizianopiccardi
Tiziano Piccardi
6 months
I'm so excited to join the CS department at Johns Hopkins University as an Assistant Professor! I'm looking for students interested in social computing, HCI, and AI, especially around designing better online systems in the age of LLMs. Come work with me! https://t.co/703Fxe3MdC
0
0
6