
Jerry Liu
@jerrywliu
Followers: 344
Following: 182
Media: 14
Statuses: 65
ML & numerics | ICME PhD at Stanford, @doecsgf fellow | prev @duolingo @berkeleylab @livermore_lab
Stanford, CA
Joined May 2022
RT @BaigYasa: Was extremely fun to work on this paper with @jerrywliu and finally fulfilling our 7 year plan from year one of undergrad to….
@BaigYasa @rajat_vd @HazyResearch 11/10. BWLer was just presented at the Theory of AI for Scientific Computing (TASC) workshop at COLT 2025, where it received Best Paper 🏆. Huge thanks to the organizers (@nmboffi, @khodakmoments, Jianfeng Lu, @__tm__157, @risteski_a) for a fantastic event!
10/10. BWLer is just the beginning – we're excited to build precise, generalizable ML models for PDEs & physics!
📄 Paper: 🧠 Blog: 💻 Code:
w/ @BaigYasa, Denise Lee, @rajat_vd, Atri Rudra, @HazyResearch
RT @MayeeChen: LLMs often generate correct answers but struggle to select them. Weaver tackles this by combining many weak verifiers (rewar….
RT @JonSaadFalcon: How can we close the generation-verification gap when LLMs produce correct answers but fail to select them?
🧵 Introduci….
RT @Shanda_Li_2000: Can LLMs solve PDEs? 🤯
We present CodePDE, a framework that uses LLMs to automatically generate solvers for PDE and outp….
RT @GeoffreyAngus: Struggling with context management? Wish you could just stick it all in your model? We’ve integrated Cartridges, a new….
RT @KumbongHermann: Excited to be presenting our new work–HMAR: Efficient Hierarchical Masked Auto-Regressive Image Generation– at #CVPR202….
RT @EyubogluSabri: When we put lots of text (e.g. a code repo) into LLM context, cost soars b/c of the KV cache’s size. What if we trained a….
RT @jordanjuravsky: Happy Throughput Thursday! We’re excited to release Tokasaurus: an LLM inference engine designed from the ground up for….
RT @ollama: 3 months ago, Stanford's Hazy Research lab introduced Minions, a project that connects Ollama to frontier cloud models to reduc….