AndyXAndersen Profile
AndyXAndersen

@AndyXAndersen

Followers: 372 · Following: 18K · Media: 43 · Statuses: 12K

Computer vision engineer, math PhD. Interested in AI, science, ethics, and society.

California
Joined April 2023
@evamirandag
Eva Miranda
1 day
Classical billiards can compute. With @Isaacramr__, we show that 2D billiard systems are Turing complete, implying the existence of undecidable trajectories in physically natural models, from hard-sphere gases to celestial mechanics. Determinism ≠ predictability. 🎱🧠 @ETH_en
43
117
788
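The undecidability claim follows a standard pattern: if a deterministic system can emulate an arbitrary Turing machine, then questions like "does this trajectory ever enter region R?" inherit the undecidability of the halting problem. Below is a toy, hypothetical sketch of that reduction using a bare-bones Turing-machine step function; it is not the paper's billiard construction, just an illustration of why determinism does not imply predictability.

```python
# Toy illustration (not the paper's construction): a deterministic step
# function simulating a Turing machine. If a billiard table can embed such
# a machine, then "does this trajectory ever enter region R?" becomes
# "does this machine ever halt?" -- undecidable in general, even though
# every individual step is fully determined.

def step(config, rules):
    """One deterministic update of a Turing-machine configuration."""
    state, tape, head = config
    symbol = tape.get(head, 0)
    new_state, write, move = rules[(state, symbol)]
    tape = dict(tape)
    tape[head] = write
    return (new_state, tape, head + move)

def trajectory_enters_halt(rules, max_steps=10_000):
    """Bounded check only: no algorithm decides this for all machines."""
    config = ("start", {}, 0)
    for _ in range(max_steps):
        if config[0] == "halt":       # analogue of "ball enters region R"
            return True
        config = step(config, rules)
    return None                        # unknown -- we can only run longer

# Example: a 2-rule machine that writes twice, then halts.
rules = {
    ("start", 0): ("move", 1, +1),
    ("move", 0): ("halt", 1, +1),
}
print(trajectory_enters_halt(rules))   # True
```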
@alex_prompter
Alex Prompter
3 days
This paper from Harvard and MIT quietly answers the most important AI question nobody benchmarks properly: Can LLMs actually discover science, or are they just good at talking about it? The paper is called “Evaluating Large Language Models in Scientific Discovery”, and instead
373
2K
8K
@AndyXAndersen
AndyXAndersen
1 day
"In 2025, Reinforcement Learning from Verifiable Rewards (RLVR) emerged as the de facto new major stage" "Supervision bits-wise, human neural nets are optimized for survival ... but LLM neural nets are optimized for imitating humanity's text"
0
0
0
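The quoted lines reference RLVR (Reinforcement Learning from Verifiable Rewards), where answers are scored by a programmatic checker rather than a learned preference model. As a reading aid, here is a minimal, hypothetical sketch of such a reward loop; the names (`verifier`, `sample_answer`, `ground_truth`) are illustrative and not from any specific framework.

```python
import random

def verifier(problem, answer):
    """Reward 1.0 only if the answer passes a mechanical check."""
    return 1.0 if answer == problem["ground_truth"] else 0.0

def sample_answer(problem):
    """Stand-in for sampling a candidate answer from an LLM policy."""
    return random.choice(problem["candidates"])

def rlvr_collect(problems, num_samples=4):
    """Collect (prompt, answer, reward) triples for a policy-gradient update."""
    batch = []
    for p in problems:
        for _ in range(num_samples):
            a = sample_answer(p)
            batch.append((p["prompt"], a, verifier(p, a)))
    # A real trainer would now run a policy-gradient-style update that
    # increases the likelihood of answers that earned reward 1.0.
    return batch

problems = [{"prompt": "2+2=?", "candidates": ["3", "4"], "ground_truth": "4"}]
print(rlvr_collect(problems))
```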
@AISafetyMemes
AI Notkilleveryoneism Memes ⏸️
2 days
"Current alignment work is all about putting lipstick on a shoggoth." -@romanyam
6
4
66
@AndyXAndersen
AndyXAndersen
2 days
The root problem here is that some folks start from the outset with a very adversarial and self-referential approach. That doesn't go well even if there is a small kernel of truth (and in this case one can't argue he has more than that).
@bengoertzel
Ben Goertzel
5 days
@GaryMarcus Gary, you have clearly seen and exposed the weaknesses of LLMs. Many others like myself have seen these weaknesses all along but you have put a lot of effort into highlighting and publicizing them, which has been a valuable service -- thanks for that! On the other hand, I
0
0
1
@steve_ike_
steve ike
4 days
For those who haven’t seen the actual interview:
0
9
54
@steve_ike_
steve ike
4 days
Demis Hassabis (@demishassabis) just laid out the clearest roadmap to AGI I’ve heard all year on the @FryRsquared podcast. 1/ AGI won’t come from scaling alone. Demis Hassabis says it’s 50% scaling, 50% innovation. Bigger models matter, but new ideas matter just as much. 2/
59
235
2K
@WesRothMoney
Wes Roth
3 days
Demis Hassabis points out that today’s models don’t learn continuously, they are trained once and then deployed. Unlike humans, they don’t improve from experience or adapt after release. The next major leap, like “AlphaZero,” would involve self-learning systems that discover
46
56
612
@HistoryIsOver
Liberal Memes
4 days
58
1K
17K
@fchollet
François Chollet
3 days
I would say there is no such thing as "universal" intelligence but there is definitely such a thing as "general" intelligence, and as a collective, we have it. "Science", modeled as an intelligent system (primarily powered by human intelligence) can solve any solvable problem in
@slow_developer
Haider.
4 days
Yann LeCun says there is no such thing as general intelligence. Human intelligence is super-specialized for the physical world, and our feeling of generality is an illusion. We only seem general because we can't imagine the problems we're blind to. "The concept is complete BS."
40
36
271
@AndyXAndersen
AndyXAndersen
4 days
This compares an actual Waymo fleet on the road with tens of millions of miles vs Tesla with zero miles. The math is brutal indeed, like division by zero.
@MarceloLima
Marcelo P. Lima
5 days
The math for Waymo is brutal: Waymos cost >$100,000 per car, so deploying 500k more of them will cost the company $50bn of up-front capex, and those 500k cars would still be only 10% of Tesla's existing fleet. Note that Tesla's fleet is growing on the order of ~500k every three months, at $0 capex.
0
0
0
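Spelling out the arithmetic both posts rely on, using only the figures stated in the thread (the per-car cost, the 500k additional cars, and the "10% of Tesla's fleet" claim are the tweet's assumptions, not verified numbers):

```python
# Back-of-envelope check using the thread's own figures (not audited data).
waymo_cost_per_car = 100_000            # ">$100,000 per car"
additional_waymos = 500_000

capex = waymo_cost_per_car * additional_waymos
print(f"Capex for 500k more Waymos: ${capex / 1e9:.0f}bn")        # ~$50bn

# "500k would be only 10% of Tesla's existing fleet" implies:
implied_tesla_fleet = additional_waymos / 0.10
print(f"Implied Tesla fleet: {implied_tesla_fleet / 1e6:.0f}M vehicles")  # ~5M

# The reply's counterpoint: Waymo's fleet already logs autonomous miles today,
# while Tesla's autonomous-mile count in this comparison is zero, so any
# "capex per autonomous mile" ratio for Tesla divides by zero.
```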
@AndyXAndersen
AndyXAndersen
4 days
Current AI approaches are not based on cognitive science because we never managed to properly implement a system built on it. I suspect that what cognitive science thinks the brain is doing is a rationalization, and implementing that alone won't bring us any closer.
@AndrewLampinen
Andrew Lampinen
4 days
Why isn’t modern AI built around principles from cognitive science or neuroscience? Starting a new substack (link below) by writing down my thoughts on that question: as part of a first series of posts giving my current thoughts on the relation between these fields. 1/3
0
0
0
@AndyXAndersen
AndyXAndersen
5 days
Got blocked by @tunguz without any recent interaction, or much interaction ever. Barely saw his posts on my feed every now and then. Not a loss, but odd.
0
0
0
@Alex_Intel_
Alex
5 days
A lot of people are asking me: does Broadcom have a moat? Does Nvidia have a moat? But Intel actually has one in Oregon.
48
132
5K
@AndyXAndersen
AndyXAndersen
7 days
@hardmaru @Tim_Dettmers That essay is sloppy thinking. "Will not happen" is arrogance that is not backed by facts. There is a lot of fine-grained architecture work that goes beyond put-all-in-pot. Efficiency and specialization are going up. External tools are integrated. Lots of cool smart stuff.
2
3
30
@ChShersh
Dmitrii Kovanikov
7 days
If a program is well-designed and well-written, the programming language doesn’t matter. If a program is poorly designed and poorly written, the programming language doesn’t matter either.
52
38
424
@AndyXAndersen
AndyXAndersen
7 days
Great attitude, and thanks.
@Tyriar
Daniel Imms
7 days
Worked yesterday and this morning on a real fix for my workaround that's currently the joke of Twitter. If you're interested:
0
0
1
@fchollet
François Chollet
8 days
Fluid intelligence as measured by ARC 1 & 2 is your ability to turn information into a model that will generalize. That's not the only thing you need to make an intelligent agent. To start with, when you're an agent in the real world, information is not provided to you,
@wendyweeww
Wendy Wee
9 days
@fchollet What type of intelligence is needed for “exploration, goal-setting, and interactive planning”? What is “beyond fluid intelligence”?
38
79
664
@Hilbe
Chris Hilbert
9 days
Here’s what unexpected scenarios look like in camera, radar, and LiDAR.
117
85
893