Jerry Liu Profile
Jerry Liu

@jerrywliu

Followers 344 · Following 182 · Media 14 · Statuses 65

ML & numerics | ICME PhD at Stanford, @doecsgf fellow | prev @duolingo @berkeleylab @livermore_lab

Stanford, CA
Joined May 2022
Jerry Liu @jerrywliu · 6 days
RT @BaigYasa: Was extremely fun to work on this paper with @jerrywliu and finally fulfilling our 7-year plan from year one of undergrad to…
0 replies · 8 retweets · 0 likes
Jerry Liu @jerrywliu · 6 days
@BaigYasa @rajat_vd @HazyResearch 11/10. BWLer was just presented at the Theory of AI for Scientific Computing (TASC) workshop at COLT 2025, where it received Best Paper 🏆. Huge thanks to the organizers (@nmboffi, @khodakmoments, Jianfeng Lu, @__tm__157, @risteski_a) for a fantastic event!
[image]
0 replies · 5 retweets · 30 likes
Jerry Liu @jerrywliu · 6 days
10/10. BWLer is just the beginning – we're excited to build precise, generalizable ML models for PDEs & physics!
📄 Paper: 🧠 Blog: 💻 Code:
w/ @BaigYasa, Denise Lee, @rajat_vd, Atri Rudra, @HazyResearch
1 reply · 1 retweet · 34 likes
Jerry Liu @jerrywliu · 6 days
9/10. Of course, there’s no free lunch. Like spectral methods, BWLer struggles with discontinuities or irregular domains – sometimes taking hours to match the RMSE of PINNs that train in minutes. We view BWLer as a proof-of-concept toward high-precision scientific ML! 🔬
[image]
1 reply · 0 retweets · 17 likes
Jerry Liu @jerrywliu · 6 days
8/10. Explicit BWLer can go even further 🚀. With a second-order optimizer, it reaches 10⁻¹² RMSE – near float64 machine precision! – and up to 10 billion× lower error than standard MLPs. See comparison across benchmark PDEs ⬇️
[image]
1 reply · 0 retweets · 19 likes
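[Editor's note: the tweet doesn't name the exact second-order method, so here is a hedged stand-in showing the mechanics: full-batch L-BFGS in float64 over explicit function values. The toy objective and all settings are illustrative assumptions, not the paper's code.]

```python
# Hedged sketch of the optimizer mechanics (the paper's exact second-order
# method may differ): full-batch L-BFGS over explicit function values in float64.
import torch

torch.set_default_dtype(torch.float64)  # double precision is essential near 1e-12 RMSE
x = torch.linspace(-1, 1, 129)
target = torch.sin(4 * x)
values = torch.nn.Parameter(torch.zeros(129))  # explicit function values, no NN

opt = torch.optim.LBFGS([values], max_iter=500,
                        tolerance_grad=1e-16, tolerance_change=1e-16)

def closure():
    # L-BFGS re-evaluates the objective during its line search
    opt.zero_grad()
    loss = ((values - target) ** 2).mean()
    loss.backward()
    return loss

opt.step(closure)
rmse = ((values - target) ** 2).mean().sqrt().item()
print(f"RMSE: {rmse:.2e}")  # far below the ~1e-7 floor float32 training typically hits
```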
Jerry Liu @jerrywliu · 6 days
7/10. Adding BWLer-hats 🎩 to standard MLPs improves RMSE by up to 1800× across benchmark PDEs (convection, reaction, wave). Why? BWLer’s global derivatives encourage smoother, more coherent solutions. Example below: standard MLP vs BWLer-hatted on the convection equation ⬇️
[image]
1 reply · 1 retweet · 15 likes
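[Editor's note: "global derivatives" here is the standard spectral-methods idea that every node contributes to the derivative at every other node. A sketch of the classical construction, after Trefethen's "Spectral Methods in MATLAB"; for intuition only, not the paper's code.]

```python
# One dense matrix maps function values at all Chebyshev nodes to derivative
# values at all nodes at once: a "global" derivative.
import numpy as np

def cheb(n):
    """Chebyshev nodes on [-1, 1] and the (n+1)x(n+1) differentiation matrix."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.ones(n + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(n + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))  # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                      # diagonal from row sums
    return D, x

D, x = cheb(32)
# Differentiating sin(4x) globally: spectrally accurate with only 33 nodes
err = np.max(np.abs(D @ np.sin(4 * x) - 4 * np.cos(4 * x)))
print(f"max derivative error: {err:.2e}")
```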
Jerry Liu @jerrywliu · 6 days
6/10. BWLer comes in two modes:
– BWLer-hat 🎩: adds an interpolation layer atop an NN
– Explicit BWLer 🎳: replaces the NN, learns function values directly
Both versions let us explicitly tune expressivity and conditioning – yielding big precision gains on benchmark PDEs 📈
[image]
1 reply · 1 retweet · 18 likes
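[Editor's note: a hypothetical sketch of how the two modes could be wired up, assuming fixed Chebyshev nodes and the standard barycentric evaluation formula. Class names, shapes, and details are guesses for illustration, not the released BWLer interfaces.]

```python
# Both modes expose the model through barycentric interpolation over fixed nodes.
import torch

def cheb_nodes_and_weights(n):
    # Chebyshev extreme points on [-1, 1] and their barycentric weights
    x = torch.cos(torch.pi * torch.arange(n + 1) / n)
    w = torch.ones(n + 1)
    w[0] = w[-1] = 0.5
    w[1::2] = -w[1::2]
    return x, w

def bary_eval(xq, nodes, weights, values, eps=1e-14):
    # Barycentric formula: sum_j (w_j v_j / (x - x_j)) / sum_j (w_j / (x - x_j))
    d = xq[:, None] - nodes[None, :]
    d = torch.where(d.abs() < eps, torch.full_like(d, eps), d)  # guard exact node hits
    c = weights / d
    return (c * values).sum(-1) / c.sum(-1)

class ExplicitBWLer(torch.nn.Module):
    """🎳 Replaces the NN: the function values at the nodes ARE the parameters."""
    def __init__(self, n=64):
        super().__init__()
        self.nodes, self.weights = cheb_nodes_and_weights(n)
        self.values = torch.nn.Parameter(torch.zeros(n + 1))
    def forward(self, x):
        return bary_eval(x, self.nodes, self.weights, self.values)

class BWLerHat(torch.nn.Module):
    """🎩 Wraps an NN: queries it only at the nodes, then interpolates."""
    def __init__(self, mlp, n=64):
        super().__init__()
        self.mlp = mlp
        self.nodes, self.weights = cheb_nodes_and_weights(n)
    def forward(self, x):
        values = self.mlp(self.nodes[:, None]).squeeze(-1)
        return bary_eval(x, self.nodes, self.weights, values)
```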
Jerry Liu @jerrywliu · 6 days
5/10. 💡 If polynomials work so well, why not use them for PINNs? We introduce BWLer 🎳, a drop-in module for physics-informed learning. Built on barycentric polynomials, BWLer wraps or replaces MLPs with a numerically stable interpolant rooted in classical spectral methods.
[image]
1 reply · 2 retweets · 28 likes
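[Editor's note: to make "numerically stable interpolant" concrete, barycentric Lagrange interpolation at Chebyshev points is a one-liner with SciPy. An independent example, not the paper's code.]

```python
# Barycentric Lagrange interpolation at Chebyshev points via SciPy.
import numpy as np
from scipy.interpolate import BarycentricInterpolator

f = lambda x: np.sin(4 * x)
n = 40
x_cheb = np.cos(np.pi * np.arange(n + 1) / n)  # Chebyshev points on [-1, 1]
interp = BarycentricInterpolator(x_cheb, f(x_cheb))

x_test = np.linspace(-1, 1, 2001)
rmse = np.sqrt(np.mean((interp(x_test) - f(x_test)) ** 2))
print(f"polynomial interpolant RMSE: {rmse:.2e}")  # near float64 machine precision
```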
Jerry Liu @jerrywliu · 6 days
4/10. Turns out, MLPs struggle 😬 – even on simple sinusoids. Despite having 1000× more parameters, they plateau far above machine precision, with RMSE up to 10,000× worse than basic polynomial interpolants!
[image]
1 reply · 1 retweet · 18 likes
Jerry Liu @jerrywliu · 6 days
3/10. We strip away the PDE constraints and ask a simpler question: how well can MLPs perform basic interpolation? 🧩 E.g. can MLPs recover the black curve just from the red training points? (Pictured: f(x) = sin(4x).)
[image]
1 reply · 1 retweet · 15 likes
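[Editor's note: a rough version of this probe is easy to reproduce; the architecture, sample count, and training budget below are assumptions, not the paper's setup.]

```python
# Fit an MLP to samples of f(x) = sin(4x) and measure test RMSE on a fine grid.
import torch

torch.manual_seed(0)
f = lambda x: torch.sin(4 * x)
x_train = torch.linspace(-1, 1, 41).reshape(-1, 1)  # the "red training points"
y_train = f(x_train)

mlp = torch.nn.Sequential(
    torch.nn.Linear(1, 256), torch.nn.Tanh(),
    torch.nn.Linear(256, 256), torch.nn.Tanh(),
    torch.nn.Linear(256, 1),
)
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)
for step in range(10_000):
    loss = ((mlp(x_train) - y_train) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    x_test = torch.linspace(-1, 1, 2001).reshape(-1, 1)  # the "black curve"
    rmse = ((mlp(x_test) - f(x_test)) ** 2).mean().sqrt().item()
print(f"MLP test RMSE: {rmse:.2e}")  # plateaus orders of magnitude above machine precision
```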
Jerry Liu @jerrywliu · 6 days
2/10. Physics-Informed Neural Networks (PINNs) solve PDEs by training a neural network to satisfy the equation – no mesh required, and complex boundaries/geometries are handled with ease. But despite their flexibility, PINNs often fall short on precision, even on simple 2D problems.
[image]
1 reply · 1 retweet · 20 likes
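[Editor's note: the core PINN recipe in miniature, on a toy ODE rather than anything from the paper: differentiate the network with autograd, then penalize the equation residual plus the boundary condition. No mesh appears anywhere.]

```python
# Minimal PINN sketch: solve u'(x) = cos(x), u(0) = 0 on [0, 2*pi];
# the exact solution is sin(x).
import torch

torch.manual_seed(0)
mlp = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)

x = torch.linspace(0, 2 * torch.pi, 256).reshape(-1, 1).requires_grad_(True)
for step in range(5000):
    u = mlp(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]  # u'(x) via autograd
    residual = du - torch.cos(x)      # how badly the equation is violated
    bc = mlp(torch.zeros(1, 1))       # boundary condition u(0) = 0
    loss = (residual ** 2).mean() + (bc ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final PINN loss: {loss.item():.3e}")
```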
Jerry Liu @jerrywliu · 6 days
1/10. ML can solve PDEs – but precision 🔬 is still a challenge. Towards high-precision methods for scientific problems, we introduce BWLer 🎳, a new architecture for physics-informed learning achieving near-machine-precision (down to 10⁻¹² RMSE) on benchmark PDEs. 🧵 How it works:
13 replies · 120 retweets · 638 likes
Jerry Liu @jerrywliu · 19 days
RT @MayeeChen: LLMs often generate correct answers but struggle to select them. Weaver tackles this by combining many weak verifiers (rewar…
0 replies · 34 retweets · 0 likes
Jerry Liu @jerrywliu · 19 days
RT @JonSaadFalcon: How can we close the generation-verification gap when LLMs produce correct answers but fail to select them? 🧵 Introduci…
0 replies · 60 retweets · 0 likes
Jerry Liu @jerrywliu · 26 days
RT @Shanda_Li_2000: Can LLMs solve PDEs? 🤯 We present CodePDE, a framework that uses LLMs to automatically generate solvers for PDEs and outp…
0 replies · 11 retweets · 0 likes
Jerry Liu @jerrywliu · 27 days
RT @GeoffreyAngus: Struggling with context management? Wish you could just stick it all in your model? We’ve integrated Cartridges, a new…
0 replies · 11 retweets · 0 likes
Jerry Liu @jerrywliu · 1 month
RT @KumbongHermann: Excited to be presenting our new work – HMAR: Efficient Hierarchical Masked Auto-Regressive Image Generation – at #CVPR202…
0 replies · 22 retweets · 0 likes
Jerry Liu @jerrywliu · 1 month
RT @EyubogluSabri: When we put lots of text (e.g. a code repo) into LLM context, cost soars b/c of the KV cache’s size. What if we trained a…
0 replies · 70 retweets · 0 likes
Jerry Liu @jerrywliu · 1 month
RT @jordanjuravsky: Happy Throughput Thursday! We’re excited to release Tokasaurus: an LLM inference engine designed from the ground up for…
0 replies · 47 retweets · 0 likes
Jerry Liu @jerrywliu · 1 month
RT @ollama: 3 months ago, Stanford's Hazy Research lab introduced Minions, a project that connects Ollama to frontier cloud models to reduc…
0 replies · 182 retweets · 0 likes