Ryan Chan

@ryanchankh

Followers
391
Following
943
Media
6
Statuses
51

Machine Learning PhD at @penn. Interested in the theory and practice of interpretable and interactive machine learning.

Philadelphia, PA
Joined September 2021
@ryanchankh
Ryan Chan
2 months
Come check out our work at NeurIPS on Friday, Dec 5!
Conformal Information Pursuit for Interactively Guiding Large Language Models
Project Link: https://t.co/DZyHGQ9uZa
Poster: Dec 5 (Friday), 11am-2pm, Exhibit Hall C,D,E #2811
1
0
5
@EdgarDobriban
Edgar Dobriban
2 months
I wrote a review paper about statistical methods in generative AI; specifically, about using statistical tools along with genAI models for making AI more reliable, for evaluation, etc. See here: https://t.co/oNrb4dYe9i! I have identified four main areas where statistical
11
99
484
@PennEngAI
Penn Engineering AI
2 months
Congratulations to @PennEngineers researchers featured at @NeurIPSConf, with work spanning LLMs, generative AI, trustworthy AI, neuroscience, health and more. #NeurIPS2025
0
5
14
@BuyunLiang
Buyun Liang
2 months
Come check out our work "SECA: Semantically Equivalent and Coherent Attacks for Eliciting LLM Hallucinations" at NeurIPS 2025 in San Diego this Friday, December 5!
1
2
3
@ryanchankh
Ryan Chan
2 months
Finally, I would like to thank my wonderful collaborators: Yuyan Ge @yuyan_ge, Edgar Dobriban @EdgarDobriban, Hamed Hassani @HamedSHassani, René Vidal @vidal_rene.
1
0
2
@ryanchankh
Ryan Chan
2 months
The long-term goal is to develop helpful and guided AI agents that can collaborate and cooperate with humans and other agents for better decision making, especially in high-stakes domains such as helping doctors make accurate diagnoses in challenging cases.
1
0
1
@ryanchankh
Ryan Chan
2 months
While it seems straightforward, it turns out that getting an LLM to express its own uncertainty is a major challenge. To make it work, we had to leverage uncertainty quantification techniques such as conformal prediction to obtain calibrated measures of uncertainty; a rough sketch of this calibration step follows below.
1
0
2
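As a minimal sketch of how such calibrated uncertainty could be obtained with split conformal prediction (the variable names and the use of plain softmax-style scores are assumptions for illustration, not the paper's actual implementation):

import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    # Split conformal calibration: choose a threshold so that the prediction set
    # {y : 1 - p(y|x) <= threshold} covers the true label with probability >= 1 - alpha.
    # Nonconformity score: one minus the probability assigned to the true label.
    scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    n = len(scores)
    # Finite-sample corrected quantile of the calibration scores.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, q_level, method="higher")

def prediction_set(probs, threshold):
    # All labels whose nonconformity score falls below the calibrated threshold.
    return np.where(1.0 - probs <= threshold)[0]

The size of the resulting prediction set can then act as a calibrated proxy for how uncertain the model is about a given answer.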
@ryanchankh
Ryan Chan
2 months
In an interactive QA setting, this means that at each iteration the LLM agent selects the question expected to reduce uncertainty the most. Based on the answer to that question, it selects the next one, and so on, until it is confident enough to make a prediction; a sketch of this greedy loop follows below.
1
0
0
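A hedged sketch of that greedy loop, where ask, uncertainty, and uncertainty_after are hypothetical stand-ins for the LLM calls and uncertainty estimates described in the thread:

def information_pursuit(x, questions, ask, uncertainty, uncertainty_after,
                        confidence_threshold=0.1, max_steps=20):
    # Greedy interactive QA: repeatedly ask the question expected to shrink
    # uncertainty the most, until the agent is confident enough to predict.
    history = []
    for _ in range(max_steps):
        if not questions or uncertainty(x, history) <= confidence_threshold:
            break  # out of questions, or confident enough to stop asking
        # Pick the question whose answer is expected to reduce uncertainty most.
        q = min(questions, key=lambda cand: uncertainty_after(x, history, cand))
        answer = ask(x, q)            # query the answering agent (or the user)
        history.append((q, answer))   # condition future choices on this answer
        questions = [p for p in questions if p != q]
    return history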
@ryanchankh
Ryan Chan
2 months
We begin by having LLM agents play a game called 20 Questions, where one LLM is thinking of some object, and the other has to guess the object by asking a small number of questions. The statistical principle that describes this process is known as Information Gain; a toy computation of it follows below.
1
0
0
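To illustrate the principle, here is how the expected information gain of a yes/no question over a set of candidate objects could be computed (a toy example; the variable names are assumptions):

import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def information_gain(prior, answers_yes):
    # Expected reduction in entropy over candidate objects from one yes/no question.
    # prior[i] is P(object i); answers_yes[i] is True if object i implies answer "yes".
    prior = np.asarray(prior, dtype=float)
    mask = np.asarray(answers_yes, dtype=bool)
    p_yes = prior[mask].sum()
    h_before = entropy(prior)
    h_yes = entropy(prior[mask] / p_yes) if p_yes > 0 else 0.0
    h_no = entropy(prior[~mask] / (1 - p_yes)) if p_yes < 1 else 0.0
    return h_before - (p_yes * h_yes + (1 - p_yes) * h_no)

# A question that splits the candidates evenly is the most informative:
print(information_gain([0.25] * 4, [True, True, False, False]))  # 1.0 bit

Asking the question with the largest expected information gain at each step is exactly the greedy strategy a good 20 Questions player follows.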
@ryanchankh
Ryan Chan
2 months
With today's developments in Large Language Models, it is an interesting question whether AI agents can interact the way humans do. Can LLM agents seek information dynamically like humans do? We explore this question in our work: Conformal Information Pursuit!
1
0
1
@ryanchankh
Ryan Chan
2 months
As humans, we often solve tasks together interactively. Imagine you are feeling sick and go to the doctor. To make an accurate diagnosis, the doctor has to gather information from you by asking a sequence of questions.
1
3
6
@QuantaMagazine
Quanta Magazine
1 year
Concept cells stitch together, one by one, in imagination and memory. For instance, "Shrek and Jennifer Aniston walk into a bar. … Maybe Shrek orders a beer," suggested researcher Pieter Roelfsema. As you read this, it's likely that concept cells "are building something in your
quantamagazine.org
Individual cells in the brain light up for specific ideas. These concept neurons, once known as "Jennifer Aniston cells," help us think, imagine and remember episodes from our lives.
0
6
14
@llm_sec
LLM Security
2 years
PaCE: Parsimonious Concept Engineering for Large Language Models "we propose Parsimonious Concept Engineering (PaCE), a novel activation engineering framework for alignment. First, to sufficiently model the concepts, we construct a large-scale concept dictionary in the
1
4
21
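A loose sketch of the general idea of decomposing an activation over a concept dictionary and removing undesired directions (an assumption-laden illustration of dictionary-based activation engineering, not PaCE's actual algorithm; concept_dict, undesired_idx, and the lasso-style solver are placeholders):

import numpy as np
from sklearn.linear_model import Lasso

def remove_concepts(activation, concept_dict, undesired_idx, alpha=0.01):
    # Decompose an activation vector over a dictionary of concept directions
    # (the columns of concept_dict) with a sparse solver, then subtract the
    # components attributed to the undesired concepts.
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    lasso.fit(concept_dict, activation)   # activation ~= concept_dict @ coef
    coef = lasso.coef_
    removed = concept_dict[:, undesired_idx] @ coef[undesired_idx]
    return activation - removed           # steered activation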
@peterljq
Jinqi Luo
2 years
🔍 Curious about how to interpret and steer your LLM? Want to accurately engineer your LLM's activation? 🚀 Excited to introduce PaCE: Parsimonious Concept Engineering for Large Language Models! We generate a concept
6
30
129
@PennEngineers
Penn Engineering
2 years
Penn Engineering is pleased to announce AI Month, a four-week series of events in April touching on AI's many facets and its impact on engineering and technology. See the full list of events here: https://t.co/PFu6gnHzLA #AIMonthatPenn
0
4
11
@VSehwag_
Vikash Sehwag
2 years
Excited to see an executive order pushing for safe, secure, and trustworthy AI. Even more so because the order targets key current risks and tangible interventions to take today, rather than focusing solely on future existential risks. (1/4) https://t.co/iEewS5DfAI
1
1
5
@RickTDing
Tianjiao Ding
2 years
Come check out our poster on linearizing and clustering data on nonlinear manifolds using an unsup/self-sup approach! The method is simple, fast, and can be trained on a gaming GPU. Yet it achieves SOTA clustering on CIFAR-10, 20, 100, and even TinyImageNet-200. See you at Nord 108
3
6
25
@PennEngineers
Penn Engineering
2 years
๐Ÿ—๏ธ As we wrap up summer, construction on Penn Engineering's Amy Gutmann Hall is making incredible progress. Learn more about the building, which compliments the School's IDEAS Signature Initiative, at ๐Ÿ”— https://t.co/XsFhmcrbST ๐ŸŽฅ by @earthcam
0
2
3