Shubhendu Trivedi

@_onionesque

Followers
9K
Following
27K
Media
2K
Statuses
23K

Cultivated Abandon. Twitter interests: Machine learning research, applied mathematics, mathematical miscellany, ML for physics/chemistry, books.

New York, NY / Cambridge, MA
Joined October 2008
@EdgarDobriban
Edgar Dobriban
5 days
I wrote a review paper about statistical methods in generative AI; specifically, about using statistical tools along with genAI models for making AI more reliable, for evaluation, etc. See here: https://t.co/oNrb4dYe9i! I have identified four main areas where statistical
11
96
467
@_onionesque
Shubhendu Trivedi
9 days
E. O. Thorp also wrote a paper with an identical title https://t.co/h2nQVxlXpL One difference is that Breiman is writing more for a probability audience, while Thorp is writing as an applied mathematician concerned with constructive betting strategies for real world play.
0
0
3
@_onionesque
Shubhendu Trivedi
9 days
expected log wealth after n rounds), and shows that the log-utility formulation characterizes the only strategy that both avoids eventual ruin (with prob 1) and achieves maximal asymptotic growth. Also see this survey re: testing, where this is relevant
arxiv.org
Safe anytime-valid inference (SAVI) provides measures of statistical evidence and certainty -- e-processes for testing and confidence sequences for estimation -- that remain valid at all stopping...
1
0
7
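The anytime-valid testing connection in the thread above can be sketched as a minimal test martingale ("testing by betting"). This is my own illustration, not code from the linked survey; the fixed stake `lam` is a simplification, since practical e-processes typically choose the bet adaptively:

```python
def e_process_wealth(xs, p0=0.5, lam=0.5):
    """Test martingale for H0: P(X = 1) = p0, with X in {0, 1}.

    Each round stakes a fixed bet lam on the outcome; under H0 the
    per-round payoff 1 + lam * (x - p0) has mean 1, so the running
    product of payoffs (the bettor's wealth) is a nonnegative
    martingale. Ville's inequality gives
    P(sup_n W_n >= 1/alpha) <= alpha under H0, so rejecting as soon
    as W_n >= 1/alpha is valid at any data-dependent stopping time.
    """
    w = 1.0
    wealth = []
    for x in xs:
        w *= 1.0 + lam * (x - p0)  # fair-odds bet on x under H0
        wealth.append(w)
    return wealth

# A run of all heads against H0: fair coin -- wealth compounds by
# 1.25 per round, and crossing 1/alpha would trigger rejection.
trajectory = e_process_wealth([1] * 10, p0=0.5, lam=0.5)
```

The link to the Kelly thread: choosing `lam` to maximize expected log wealth under the alternative is exactly the Kelly-style growth-optimal bet, which is why ruin-avoidance and power arguments carry over.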
@_onionesque
Shubhendu Trivedi
9 days
Kelly assumed a fixed game with constant odds. This generalizes the optimization of long-run capital growth to a) sequences of different gambles, b) arbitrary distributions, c) stochastic payoffs that may change over time. It frames it in stochastic optimization language (max [+]
1
0
5
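For reference, the fixed-odds binary case that Breiman generalizes has a simple closed form. A minimal sketch, my own illustration rather than anything from the paper (the function names and parameters are made up here):

```python
import math

def kelly_fraction(p: float, b: float) -> float:
    """Growth-optimal fraction of wealth to stake on a binary gamble.

    p: probability of winning; b: net odds (win b per unit staked).
    Maximizing expected log wealth E[log(1 + f * X)] gives the
    closed form f* = p - (1 - p) / b for this case.
    """
    return p - (1.0 - p) / b

def expected_log_growth(p: float, b: float, f: float) -> float:
    """Per-round expected log growth rate when betting fraction f."""
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)
```

Betting any fixed fraction below 1 avoids ruin, but f* is the unique fraction with maximal asymptotic growth; the generalization in the thread replaces this fixed game with changing gambles and arbitrary payoff distributions.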
@_onionesque
Shubhendu Trivedi
9 days
Finally got around to reading this paper by Breiman that generalizes Kelly betting to other settings. It gives what had seemed like a heuristic a decision-theoretic foundation. Now very relevant to testing. https://t.co/683BPDFFYX
6
17
138
@_onionesque
Shubhendu Trivedi
9 days
Basically unusable. Made me a bigger fan of brutalist architecture for similar settings, ironically.
1
1
5
@_onionesque
Shubhendu Trivedi
9 days
People at MIT (and the leaks, and the mold) could attest.
@dearvotion
N
10 days
RIP Frank Gehry, you were every engineer’s nightmare.
1
1
21
@andrew_n_carr
Andrew Carr 🤸
12 days
Behold, the bible of code foundation models. A mixed team from over a dozen institutions put together almost 200 pages of survey on different paradigms for program synthesis. 1000+ references and some nice figures. Seems like a solid resource.
8
62
630
@_onionesque
Shubhendu Trivedi
10 days
The last bit (in the OP) is also not entirely true. Productivity gains are becoming bottlenecked by deployment, and poor AI strategy in cos. Open research is great, it accelerates scientific/engineering spillover, but does not meaningfully influence value capture at this stage.
0
0
0
@_onionesque
Shubhendu Trivedi
10 days
The last might seem like a joke, but it's literally true. Everyone needs exposure to "AI," this everyone knows, but returns are not being sought from productivity gains but from exit momentum (betting on secondary markets, roll-ups, or acquisition by incumbents).
1
0
0
@_onionesque
Shubhendu Trivedi
10 days
This is the legitimizing narrative around valuation. But valuations are less about productivity gains and more about risk capital concentration, crowding (due to the trend-chasing nature of the VC enterprise), and basically momentum-trading in venture markets.
@syhw
Gabriel Synnaeve
11 days
AI valuations are anchored in a promise of productivity gains based on compounding AI improvements. We're now building the products on a bedrock of research from 5 years ago, when everybody was publishing. Open research is the best way to keep compounding the AI improvements.
1
0
9
@_onionesque
Shubhendu Trivedi
13 days
I don't foresee ever working on the mean field type of stuff, but I like the papers. I like the conceptual neatness of the model moving in Wasserstein space of probability measures and inductive bias becoming the geometry of the flow + activation + architecture.
0
0
9
@_onionesque
Shubhendu Trivedi
13 days
7
22
199
@_onionesque
Shubhendu Trivedi
14 days
To read, looks cool:
@StatsPapers
Statistics Papers
15 days
Neural Networks Learn Generic Multi-Index Models Near Information-Theoretic Limit.
0
0
1
@yihaoli_0302
Yihao Li @Neurips 2025
15 days
🧵[1/8] Excited to share our NeurIPS 2025 Spotlight paper “Does Object Binding Naturally Emerge in Large Pretrained Vision Transformers?” ✨ To add to the broader discussion of binding in neural networks, we ask whether and how Vision Transformers perform object binding (the
13
100
717