Anil Ananthaswamy
@anilananth
Followers: 14K · Following: 4K · Media: 640 · Statuses: 5K
Sci journalist/TED speaker/MIT KSJ Fellow/Books: The Edge of Physics, The Man Who Wasn't There, Through Two Doors at Once / Mastodon: @[email protected]
itinerant
Joined April 2010
Don't know how other authors feel, but seeing the first physical copy of your book, after years of looking at Word files and PDFs... Seems surreal. It's as if someone else wrote it... Anyway, this advance copy arrived in the mail. One more month to pub date
"The double-slit experiment doesn't merely embody wave-particle duality [...] it incorporates entanglement too. Once physicists began appreciating this, [...] It made possible the delayed-choice quantum eraser experiment." - @anilananth
From science to philosophy, step into a world of bold ideas and questions. Meet @anilananth, Chandrashekhar Khare, @authorgayathri, @KSridhar921971, @ConsultVistas, Ragini Tharoor Srinivasan & Sundar Sarukkai at #BlrLitFest 2025. 🗓 6 & 7 Dec 2025 📍 Freedom Park 🎟 Free entry
2/2 Multiplying two n × n matrices requires O(n^ω) arithmetic operations, where ω = 3 for the brute-force algorithm. Strassen's method was the first big improvement in '69 (ω = 2.81), followed by two big jumps in the '80s. The world record today is ω = 2.3714. https://t.co/x3V0Z9HXFR
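Not part of the thread, but to make the ω = 2.81 jump concrete: below is a minimal Python sketch of Strassen's seven-multiplication recursion (my illustration, not from the lecture; it assumes n is a power of two, and the cutoff parameter is an arbitrary choice for switching back to the standard method).

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Multiply two n x n matrices (n a power of two) with Strassen's recursion.

    Seven recursive half-size products instead of eight give
    O(n^log2(7)) ~ O(n^2.81) arithmetic operations.
    """
    n = A.shape[0]
    if n <= cutoff:                 # small case: fall back to the O(n^3) method
        return A @ B

    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]

    # Strassen's seven products
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)

    # Reassemble the four quadrants of the product
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

# Sanity check against NumPy's built-in multiply
A, B = np.random.rand(128, 128), np.random.rand(128, 128)
assert np.allclose(strassen(A, B), A @ B)
```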
1/2 "It seems silly, but it's a very important problem." Virginia Vassilevska Williams (@MIT) on the progress in matrix multiplication algorithms during her Richard M. Karp Distinguished Lecture On Matrix Multiplication Algorithms at the Simons Institute. https://t.co/x3V0Z9HXFR
Ever wondered about graph learning? Watch Ameya Velingker (@ameya_pa) and Haggai Maron (@HaggaiMaron) give a masterful introduction at the Simons Institute's workshop on Graph Learning Meets Theoretical Computer Science. Video: https://t.co/R5cIal2R8L
"With that Von Neumann proved that in order to have life you have to have universal computation. You can't reproduce without computation. No computation, no life. This is a really profound insight and one that I think that most biologists and most computer scientists are unaware
This has to be one of my favorite episodes on @MLStreetTalk! @blaiseaguera's erudition is striking (references to books, papers, authors and ideas just effortlessly flow from him), and his firm thoughtfulness is refreshing, even if you were to disagree with him. I'd have loved
2/2 There are matrix multiplication algorithms that can do better than Strassen's but only for astronomically large matrices, making them impractical, said Oded Schwartz at the Simons Institute's workshop on Complexity and Linear Algebra Boot Camp. Video: https://t.co/z3wBTWcJRL
1/2 Should have paid attention to matrices during linear algebra classes! In 2026, AI will use ~1% of global electricity, of which ~45-90% will be for matrix multiplications, said Oded Schwartz of Hebrew University of Jerusalem at the Simons Institute. https://t.co/z3wBTWcJRL
Anil Ananthaswamy’s Why Machines Learn: the mathematics that powers modern AI. A deep and clear exploration of how simple equations evolved into neural networks that learn from data.
2/2 For multiplying two n × n matrices, the arithmetic complexity of the standard method is O(n^3); Strassen's method is O(n^2.81). Prof. Olga Holtz spoke at the Simons Institute's workshop on Complexity and Linear Algebra Boot Camp. Video: https://t.co/N3ep8k1q3U
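For anyone who wants to see where those two exponents come from, here is the one-step derivation via the master theorem (my summary, not a quote from the talk): the standard method does eight half-size products per level of recursion, Strassen's does seven.

```latex
T_{\mathrm{std}}(n) = 8\,T(n/2) + \Theta(n^2) \;\Rightarrow\; \Theta(n^{\log_2 8}) = \Theta(n^3)
\qquad
T_{\mathrm{Str}}(n) = 7\,T(n/2) + \Theta(n^2) \;\Rightarrow\; \Theta(n^{\log_2 7}) \approx \Theta(n^{2.81})
```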
1/2 Matrix multiplications are central to machine learning. UC Berkeley Professor Olga Holtz analyzed, from scratch, the arithmetic complexity of matrix multiplication using Strassen's fast algorithm. She spoke at the Simons Institute. Video: https://t.co/N3ep8k1q3U
2/2 One example showed a 1556x smaller carbon footprint using a better model, GPU, data center PUE and site, said @UCBerkeley's David Patterson at the Simons Institute’s workshop on Algorithmic Foundations for Emerging Computing Technologies Boot Camp. https://t.co/fN1cGO0H3h
1/2 Focusing on the 4Ms of AI—model (FLOPs), machine (GPU, mWh / 1.0e12 FLOPs), mechanization (power usage effectiveness) and maps (sites w/ clean energy)—can reduce AI’s carbon footprint, said @UCBerkeley’s David Patterson, at the Simons Institute. Video: https://t.co/fN1cGO0H3h
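To make the 4Ms concrete: because the four factors multiply, improving each one compounds. A back-of-the-envelope Python sketch (every number below is a made-up placeholder, not a figure from Patterson's talk):

```python
def training_co2e_kg(model_flops, flops_per_kwh, pue, kg_co2e_per_kwh):
    """Carbon footprint as a product of Patterson's 4M factors (illustrative).

    model_flops     -- Model: total training compute (FLOPs)
    flops_per_kwh   -- Machine: hardware efficiency (FLOPs per kWh)
    pue             -- Mechanization: data-center power usage effectiveness
    kg_co2e_per_kwh -- Map: carbon intensity of the site's grid
    """
    it_energy_kwh = model_flops / flops_per_kwh  # energy drawn by the accelerators
    facility_kwh = it_energy_kwh * pue           # scale up by data-center overhead
    return facility_kwh * kg_co2e_per_kwh        # convert energy to emissions

# Hypothetical numbers: a smaller model, a newer GPU, a better PUE,
# and a cleaner grid each divide the footprint, so the gains multiply.
baseline = training_co2e_kg(1e21, 2e15, 1.6, 0.5)
improved = training_co2e_kg(1e20, 8e15, 1.1, 0.05)
print(f"footprint shrinks by ~{baseline / improved:.0f}x")  # ~582x here
```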
"DON'T PANIC!" -- when you find your book next to a book by one of your heroes! In this case, WHY MACHINES LEARN alongside Douglas Adams' THE HITCHHIKER'S GUIDE TO THE GALAXY. Feels kind of apt that a book on the math of AI ends up in the Sci-Fi section 😀, especially when the
The V-JEPA model by @ylecun's team @AIatMeta shows how learning in latent space and not in pixel space might help solve some of the shortcomings of today's GEN AI models (V-JEPA is not a generative model). Work by @garridoq_ and colleagues. @randall_balestr and @m_heilb gave
quantamagazine.org
The V-JEPA system uses ordinary videos to understand the physics of the real world.
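For readers wondering what "learning in latent space, not pixel space" looks like mechanically, here is a toy JEPA-style sketch in PyTorch (my simplification, with made-up dimensions; not Meta's V-JEPA code): the prediction error is computed between embeddings, never between reconstructed pixels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy sizes: flattened 1024-"pixel" frames, 128-dim latents
encoder = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 128))
predictor = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 128))

frames_seen = torch.randn(32, 1024)    # visible context (e.g., early frames)
frames_masked = torch.randn(32, 1024)  # content the model must anticipate

# Predict the *embedding* of the masked content from the visible content...
pred_latent = predictor(encoder(frames_seen))
# ...and compare it with the target's embedding, not with its raw pixels.
with torch.no_grad():                  # stop-gradient on the target branch
    target_latent = encoder(frames_masked)

loss = F.mse_loss(pred_latent, target_latent)
loss.backward()
print(f"latent-prediction loss: {loss.item():.4f}")
```

In the real system the target branch is a separate, EMA-updated encoder and the inputs are video patches; the point of the sketch is only that the error signal lives in latent space.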
An AI model called V-JEPA is capable of “intuiting” the physical properties of the real world, gaining a sense of object permanence, the constancy of shape and color, and the effects of gravity. @anilananth reports:
quantamagazine.org
The V-JEPA system uses ordinary videos to understand the physics of the real world.
2/2 Instead, pick the most efficient model, the latest GPU, and an efficient cloud data center in a low carbon location, said David Patterson, at the Simons Institute's workshop on Algorithmic Foundations for Emerging Computing Technologies Boot Camp. https://t.co/fN1cGO0H3h
1/2 "Bad AI carbon footprint commandments." @UCBerkeley's David Patterson on his "don'ts" for reducing AI's carbon footprint: don't pick the biggest model; don't pick an older GPU; don't pick a local data center. He spoke at the Simons Institute. Video: https://t.co/fN1cGO0H3h