Zihao Li

@_Violet24K_

Followers: 90
Following: 123
Media: 13
Statuses: 85

Ph.D. candidate @siebelschool @UofIllinois | (ex-)intern @Amazon @MSFTResearch

Joined January 2023
@ideaisailuiuc
iDEA-iSAIL Group@UIUC
17 days
We'll present 5 posters (1 spotlight) at #NeurIPS 2025. Stop by and chat ☕️!
0
1
0
@LingjieChen127
Lingjie Chen
1 month
Proud to share our own work here: a simple diffusion recipe that turns standard BERTs into chat models. All code, checkpoints, and training curves are fully open in our repo. Really excited to push diffusion-based text generation further!
@asapzzhou
Zhanhui Zhou @ NeurIPS
1 month
(1/n) 🚨 BERTs that chat: turn any BERT into a chatbot with diffusion. Hi @karpathy, we just trained a few BERTs to chat with diffusion, and we are releasing all the model checkpoints, training curves, and recipes! Hopefully this spares you the side quest into training nanochat with
0
2
5
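The thread above describes turning BERT-style masked language models into chat models with discrete diffusion, i.e., iterative parallel unmasking. The actual recipe and checkpoints are in the linked release; purely as an illustration of the decoding idea, here is a toy sketch of the unmasking loop. The `toy_denoiser` stand-in and the commit schedule are invented for this example, not taken from the work:

```python
import random

MASK = "[MASK]"

def toy_denoiser(tokens):
    """Stand-in for a BERT-style masked LM: 'predicts' a fixed token for
    every masked position. A real model would return logits per position."""
    return ["hello" if t == MASK else t for t in tokens]

def diffusion_decode(length, steps=4, seed=0):
    """Iteratively unmask a fully-masked sequence, committing a fraction of
    the predicted positions per step (parallel, order-agnostic decoding)."""
    rng = random.Random(seed)
    tokens = [MASK] * length
    for step in range(steps):
        preds = toy_denoiser(tokens)
        masked = [i for i, t in enumerate(tokens) if t == MASK]
        # commit roughly 1/(steps - step) of the remaining masks this round,
        # so the final round clears everything that is left
        k = max(1, len(masked) // (steps - step))
        for i in rng.sample(masked, k):
            tokens[i] = preds[i]
    return tokens
```

Unlike left-to-right autoregression, each round fills positions anywhere in the sequence, which is what lets an encoder-style model generate text at all.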
@ChuxuanHu
Chuxuan Hu
2 months
With DRAMA being a new paradigm that unifies a wide range of sub-problems, DRAMA-Bot is just the beginning: imagine complex data integration, cleaning, etc., all in one agent. Can't wait to see what DRAMA unfolds next: DATA SCIENCE IS FULL OF DRAMA🎭🤠
@ddkang
Daniel Kang
2 months
Real-world data science (especially in the social sciences) starts with collecting and structuring data from open domains, yet existing AI agents either assume access to ready-to-query databases or stop at surface-level retrieval and summarization. To augment data scientists,
3
1
8
@hendrydong
Hanze Dong
2 months
💥Thrilled to share our new work Reinforce-Ada, which fixes signal collapse in GRPO. 🥳No more blind oversampling or dead updates. Just sharper gradients, faster convergence, and stronger models. ⚙️One-line drop-in. Real gains. https://t.co/kJTeVek1S3 https://t.co/7qLywG2KWR
7
24
179
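"Signal collapse" here refers to GRPO's group-normalized advantage going to zero whenever every sampled completion in a group gets the same reward, so the whole group contributes no gradient. Reinforce-Ada's actual fix is in the linked paper; the sketch below is only a toy illustration of the degenerate case and of one naive adaptive-resampling workaround (all names and the resampling heuristic are assumptions for this example):

```python
import random

def grpo_advantages(rewards):
    """Group-normalized advantages as in GRPO: (r - mean) / std.
    If every sampled completion gets the same reward, the std is zero and
    all advantages vanish -- a 'dead update' with no learning signal."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    if std == 0.0:
        return [0.0] * len(rewards)
    return [(r - mean) / std for r in rewards]

def adaptive_sample(sample_reward, group_size=4, max_rounds=8, seed=0):
    """Naive adaptive sampling: keep drawing extra completions until the
    group mixes different reward values, so advantages are non-degenerate."""
    rng = random.Random(seed)
    rewards = [sample_reward(rng) for _ in range(group_size)]
    rounds = 1
    while len(set(rewards)) == 1 and rounds < max_rounds:
        rewards.append(sample_reward(rng))
        rounds += 1
    return rewards, grpo_advantages(rewards)
```

The point of the first function is the `std == 0.0` branch: blind oversampling spends compute on groups that land there anyway, while adaptive schemes direct sampling toward prompts whose groups still carry signal.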
@youjiaxuan
Jiaxuan You
3 months
Benchmarks don't just measure AI; they define its trajectory. Today, there's a shortage of truly challenging and useful benchmarks for LLMs, and we believe future forecasting is the next frontier. Introducing TradeBench. https://t.co/VI6nJMT58W A live-market benchmark where
2
6
35
@_Violet24K_
Zihao Li
4 months
🌐 Flow Matching Meets Biology and Life Science: A Survey. Flow matching is emerging as a powerful generative paradigm. We comprehensively review its foundations and applications across biology & life science 🧬 📚Paper: https://t.co/ynsegKOgXz 💻Resource:
github.com
A curated list of resources for "Flow Matching Meets Biology and Life Science: A Survey" - Violet24K/Awesome-Flow-Matching-Meets-Biology
0
10
10
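For readers unfamiliar with the paradigm the survey covers: the basic flow matching recipe regresses a velocity field onto the path between noise and data, most simply a straight line. A minimal NumPy sketch of the conditional flow matching training target under the linear interpolation path (function names are illustrative, not from the survey):

```python
import numpy as np

def cfm_batch(x0, x1, rng):
    """Conditional flow matching with a straight-line probability path:
    x_t = (1 - t) * x0 + t * x1, whose velocity target is u_t = x1 - x0.
    A model v_theta(x_t, t) is regressed onto u_t with squared error."""
    t = rng.uniform(0.0, 1.0, size=(x0.shape[0], 1))  # one time per sample
    xt = (1.0 - t) * x0 + t * x1
    ut = x1 - x0
    return t, xt, ut

def cfm_loss(v_pred, ut):
    """Mean squared error between predicted and target velocities."""
    return float(np.mean((v_pred - ut) ** 2))

# toy usage: noise -> shifted-Gaussian "data"
rng = np.random.default_rng(0)
x0 = rng.standard_normal((16, 2))
x1 = rng.standard_normal((16, 2)) + 3.0
t, xt, ut = cfm_batch(x0, x1, rng)
assert cfm_loss(ut, ut) == 0.0  # the oracle velocity field has zero loss
```

At sampling time, one integrates the learned velocity field from t=0 to t=1 with an ODE solver, which is what makes the framework attractive for structured biological data.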
@Yuji_Zhang_NLP
Yuji Zhang
6 months
🧠Let's teach LLMs to learn smarter, not harder💥 [https://t.co/DMjQaWyceE] 🤖How can LLMs verify complex scientific information efficiently? 🚀We propose modular, reusable atomic reasoning skills that reduce LLMs' cognitive load to verify scientific claims with little data.
7
36
107
@GaotangLi
Gaotang Li
6 months
😲 Not only reasoning?! Inference scaling can now boost LLM safety! 🚀 Introducing Saffron-1: - Reduces attack success rate from 66% to 17.5% - Uses only 59.7 TFLOP compute - Counters latest jailbreak attacks - No model finetuning On the AI2 Refusals benchmark. 📖 Paper:
2
21
74
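Inference scaling for safety, in its simplest form, means spending extra test-time compute to select a safe response instead of finetuning the model. Saffron-1's actual method is more involved than this (see the paper); the sketch below only illustrates the generic best-of-n idea, with stand-in `generate` and `safety_score` functions that are assumptions for the example:

```python
import random

def best_of_n(prompt, generate, safety_score, n=8, seed=0):
    """Inference-time safety in miniature: sample n candidate responses
    and return the one a safety reward model scores highest. `generate`
    and `safety_score` stand in for a real LLM and a guard model."""
    rng = random.Random(seed)
    candidates = [generate(prompt, rng) for _ in range(n)]
    return max(candidates, key=safety_score)
```

The compute cost grows linearly in n, which is why methods in this space focus on getting the same filtering effect with far fewer scored tokens.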
@_Violet24K_
Zihao Li
6 months
Joining the Hundredaire Club💯 as a junior member. A.M. Turing committee, please call me anytime.
1
0
12
@cikm2025
ACM CIKM 2025
7 months
Today, we introduce our #CIKM2025 Industry Day Chairs 👏 Jingren Zhou, Soonmin Bae, and Xianfeng Tang are leading this program connecting academia and industry.
0
1
2
@EmpathYang
Ke Yang
7 months
🤖 New preprint: We propose ten principles of AI agent economics, offering a framework to understand how AI agents make decisions, influence social interactions, and participate in the broader economy. 📜 Paper: https://t.co/kI1ze1WQL5
1
10
14
@_akhaliq
AK
7 months
Microsoft presents Chain-of-Model Learning for Language Model
2
36
189
@GaotangLi
Gaotang Li
8 months
🚨 ICML '25 SPOTLIGHT 🚨 Taming Knowledge Conflict in Language Models 🤔 Why does your LLM sometimes echo the prompt but other times rely on its "built-in" facts? 🎭 Can we toggle between parametric memory and fresh context without fine-tuning? 🔬 Curious about LLM internals,
2
4
20
@ideaisailuiuc
iDEA-iSAIL Group@UIUC
8 months
We'll present 4 papers and 1 keynote talk at #ICLR2025. Prof. Jingrui He and Prof. Hanghang Tong will be at the conference. Let's connect! ☕️
0
3
8
@yuz9yuz
Yu Zhang
8 months
🚨Call for Papers - MLoG-GenAI @ KDD 2025. Join us at the Workshop on Machine Learning on Graphs in the Era of Generative AI, co-located with #KDD2025! 🌐Website: https://t.co/hpeImZm9KR 🌐Submission Link: https://t.co/z7BXvjmoiB
0
8
18
@CSProfKGD
Kosta Derpanis
9 months
7
21
219
@ideaisailuiuc
iDEA-iSAIL Group@UIUC
9 months
🔬Graph Self-Supervised Learning Toolkit 🔥We release PyG-SSL, offering a unified framework of 10+ self-supervised choices to pretrain your graph foundation models. 📜Paper: https://t.co/fwlWTsmquK 💻Code: https://t.co/oNaz18zCht Have fun!
github.com
Graph Self-Supervised Learning Toolkit. Contribute to iDEA-iSAIL-Lab-UIUC/pyg-ssl development by creating an account on GitHub.
0
2
3
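Graph self-supervised pretraining of the kind PyG-SSL packages typically pairs graph augmentations with a contrastive objective. The following is not PyG-SSL's API, just a NumPy miniature of one common recipe (edge dropping plus cross-view node agreement), with all names invented for the example:

```python
import numpy as np

def drop_edges(adj, p, rng):
    """Augmentation: independently drop each undirected edge with prob p."""
    keep = rng.random(adj.shape) > p
    keep = np.triu(keep, 1)          # decide once per undirected edge
    keep = keep | keep.T
    return adj * keep

def propagate(adj, feats):
    """One step of mean-neighbor aggregation (a 1-layer GNN encoder stub)."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    return (adj @ feats) / deg

def view_agreement(adj, feats, p=0.2, seed=0):
    """Contrastive SSL in miniature: embed two edge-dropped views of the
    same graph and score each node's cosine similarity with its own
    counterpart (the 'positive pair'); training would maximize this while
    pushing apart mismatched nodes."""
    rng = np.random.default_rng(seed)
    z1 = propagate(drop_edges(adj, p, rng), feats)
    z2 = propagate(drop_edges(adj, p, rng), feats)
    num = (z1 * z2).sum(axis=1)
    den = np.linalg.norm(z1, axis=1) * np.linalg.norm(z2, axis=1) + 1e-9
    return num / den
```

A real toolkit swaps in a trained multi-layer encoder and an InfoNCE-style loss, but the augment-encode-compare skeleton is the same.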
@Yuji_Zhang_NLP
Yuji Zhang
10 months
๐Ÿ”New findings of knowledge overshadowing! Why do LLMs hallucinate over all true training data? ๐Ÿค”Can we predict hallucinations even before model training or inference? ๐Ÿš€Check out our new preprint: [ https://t.co/Rzq7zFyzKF] The Law of Knowledge Overshadowing: Towards
6
34
123