Andre Infante (@AndreTI)
Followers: 1K · Following: 25K · Media: 609 · Statuses: 23K
Making games and robots and sometimes other things. (Formerly: 1X, Meta)
Bay Area · Joined January 2009
I wrote my first Substack piece about AI's economic impact, and some reasons why models seem to underperform in the real world.
5 · 3 · 38
Even large VLAs can play ping-pong in real time! 🏓⚡️
In practice, VLAs struggle with fast, dynamic tasks:
• slow reactions, jittery actions.
• demos often shown at 5-10× speed to look “smooth”.
We introduce VLASH:
• future-state-aware asynchronous inference with >30Hz
16 · 82 · 427
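A minimal sketch of what "future-state-aware asynchronous inference" plausibly looks like, under my own assumptions (the helper names, the constant-velocity extrapolation, and all numbers are illustrative, not from the VLASH work): the slow VLA is queried on the state extrapolated to when its output will actually arrive, while a fast control loop keeps consuming the latest action chunk.

```python
import threading, time

CONTROL_HZ = 100           # fast low-level command rate
INFERENCE_LATENCY = 0.25   # assumed VLA forward-pass time (s)

state, velocity = 0.0, 0.4          # toy 1-D state, just for illustration
latest_chunk = [0.0] * 8
lock = threading.Lock()

def predict_future_state(s, v, horizon):
    # Extrapolate where the system will be by the time inference finishes.
    return s + v * horizon

def slow_vla_forward(s):
    time.sleep(INFERENCE_LATENCY)             # stand-in for the big model
    return [s + 0.01 * i for i in range(8)]   # dummy action chunk

def inference_loop():
    global latest_chunk
    while True:
        future = predict_future_state(state, velocity, INFERENCE_LATENCY)
        chunk = slow_vla_forward(future)      # conditioned on the *future* state
        with lock:
            latest_chunk = chunk

def control_loop(steps=500):
    global state
    for i in range(steps):
        with lock:
            action = latest_chunk[i % len(latest_chunk)]
        state += velocity / CONTROL_HZ        # toy dynamics; a real loop would send `action` to the robot
        time.sleep(1.0 / CONTROL_HZ)

threading.Thread(target=inference_loop, daemon=True).start()
control_loop()
```

The point of the scheme is that the control loop never waits on the model: it runs at CONTROL_HZ regardless of inference latency, and the model's output is already aimed at the moment it lands.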
🪶Our hand can take a hammer hit but also detect a feather. The finger moves until it meets a tiny resistance and stops, with no tactile sensors. Backdrivability and torque transparency let it feel the world through its own drive currents, enabling simple, reliable interaction.
9 · 22 · 184
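The mechanism described above (stopping on a tiny rise in drive current rather than reading a tactile sensor) amounts to a very small control loop. A hedged sketch, with a hypothetical `motor` interface (`set_velocity`, `read_current_amps`) and made-up thresholds:

```python
import time

FREE_MOTION_CURRENT = 0.12   # amps drawn while closing with no contact (assumed)
CONTACT_THRESHOLD   = 0.03   # tiny extra current treated as "touched something" (assumed)
CLOSE_VELOCITY      = 0.2    # rad/s

def close_until_contact(motor, timeout_s=3.0):
    """Close the finger until the drive current shows external resistance, then stop."""
    motor.set_velocity(CLOSE_VELOCITY)
    t0 = time.monotonic()
    while time.monotonic() - t0 < timeout_s:
        excess = motor.read_current_amps() - FREE_MOTION_CURRENT
        if excess > CONTACT_THRESHOLD:   # torque transparency: current tracks external load
            motor.set_velocity(0.0)      # stop at the first gentle resistance
            return True
        time.sleep(0.001)                # ~1 kHz current monitoring
    motor.set_velocity(0.0)
    return False
```

This only works because the drivetrain is backdrivable: with high gearing and friction, the current rise from a light touch would be lost in the noise.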
Especially scientists. I've seen the software you people write. You're basically animals.
0 · 0 · 1
You can add so much free verisimilitude to any filmic portrayal of scientists or engineers by just making all the software they use look like shit.
2 · 0 · 0
New paper with Marcus Hutter in AI Magazine! "Imitation Learning is Probably Existentially Safe" We contest 6 arguments to the contrary from Eliezer, Paul Christiano, David Krueger, Gwern, and Evan Hubinger et al. We've also tried to make those arguments more accessible.
2 · 5 · 19
Posting this because people keep defending the (fundamentally wrong and somewhat silly) AI 2027 projections by saying "well at least they gave concrete predictions". And they have a point. So these are mine.
0 · 1 · 1
But these risks are different in kind from the 'paperclip maximizer' style concerns about superhuman, self-bootstrapping optimizers, which don't really emerge from the current technology path.
0 · 0 · 0
Systems that can be configured, with a modest amount of work, to do narrow tasks with high reliability can and will pose their own dangers (automated bio and chemical warfare, self-replicating intelligent malware, autonomous weapons, mass manipulation campaigns, etc.).
1 · 0 · 0
"Early 2040s or later" would be my weak guess for >= human goal directed learning efficiency, but it is really hard to time the next paradigm.
0 · 0 · 0
For the record, I expect that we will get 'AGI' in the sense of having systems that can do >80% of current knowledge work tasks with high reliability in the mid to late 2030s. AGI in the existential risk sense requires fundamental breakthroughs that are difficult to forecast.
3 · 0 · 0
Can ultrasound make you smell things that aren’t there? Turns out, yes! We reliably triggered distinct scents like a campfire burn or a garbage truck by targeting our brains with ultrasound. To our knowledge, this has never been done before, even in animals. This may be a
259 · 592 · 4K
This presidency remains just an unending disaster, inflicting worldwide casualties. Hopefully Europe can get its shit together without us.
0 · 0 · 0
The fact that AI labs, which have a ton of cash, are not frantically hiring Infinite Grad Students for a fraction of their compute spend is evidence against the idea that near term AI models applied to AI research are going to trigger recursive self improvement.
0 · 0 · 1
I'm also not getting the impression that it's consistently better specifically at programming than C4.5, although as always these anecdotal impressions are statistically underpowered, domain-sensitive, and generally not worth much.
0 · 0 · 0
Gemini 3 is a pretty good model so far, but it does seem prone to going off the rails pretty frequently (deranged CoT ravings leaking into output, code comments, etc., and getting stuck in infinite loops).
1 · 0 · 0
We've built a full-scale deployable aerobrake unit showcasing technology that uses planetary atmospheres to slow spacecraft, saving significant mass and cost and enabling heavy cargo delivery from the Moon, to Mars, and for point-to-point missions on Earth. Lighter and larger than
287 · 680 · 7K
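To make the mass-savings claim concrete, here's a back-of-envelope sketch using the Tsiolkovsky rocket equation; every number (Isp, dry mass, delta-v shed by the atmosphere) is my own illustrative assumption, not a figure from the announcement:

```python
import math

g0 = 9.81           # m/s^2
isp = 320.0         # s, storable bipropellant engine (assumed)
dry_mass = 5000.0   # kg of vehicle + cargo arriving at the destination (assumed)
dv = 2000.0         # m/s of capture/braking taken over by the atmosphere (assumed)

# Propellant required to shed the same delta-v propulsively instead of aerobraking.
propellant = dry_mass * (math.exp(dv / (isp * g0)) - 1.0)
print(f"Propellant needed to brake propulsively: {propellant:.0f} kg")
# ~4.5 t of propellant for a 5 t payload; the aerobrake only has to beat that mass.
```

The exponential in the rocket equation is what makes the trade attractive: the more braking the atmosphere absorbs, the faster the propulsive alternative's propellant bill grows.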