Laurent Ach
@ach3d
Followers
415
Following
592
Media
89
Statuses
883
CTO, leveraging artificial and human intelligence @ https://t.co/S7FTM8wYNA - https://t.co/x8nrqGm8Lw
Paris, France
Joined May 2009
Stéphane Mallat advances our theoretical understanding of neural network models, with rigor and modesty, and a fascinating, intelligent vision of AI that stands apart from what the tech world often settles for
By combining theoretical abstraction with concrete applications, Stéphane Mallat, winner of the 2025 CNRS gold medal, has left his mark on mathematics applied to computer science. From the JPEG 2000 image compression format to the foundations ... https://t.co/8Q67v9lNis
0
0
1
This corresponds to the limits of combining symbolic and connectionist approaches, where it's impossible to create an ontology of the whole world and it's impossible to automatically generate all the explicit concepts we need for reasoning.
0
0
0
I suspect there is a Heisenberg-like law that limits how explicit concepts can be and how many can be used.
0
0
0
World models are clearly missing in current LLMs as @ylecun says but it’s unclear how future model architectures will balance useful emergent representations of the world with explicit concepts that are understandable to humans. https://t.co/9DUTl9fSrL
techcrunch.com
Meta's chief AI scientist, Yann LeCun, says that a "new paradigm of AI architectures" will emerge in the next three to five years, going far beyond the
2
0
0
Interesting that Adam Brown anticipates LLMs will have Einstein-level intelligence in 10 years, but explains that at the moment physicists (like most of us) mainly use LLMs to search for and explain information and theories that are well known in the literature
New episode w Adam Brown, a lead of Blueshift at DeepMind & theoretical physicist at Stanford. Stupefying, terrifying, & absolutely fascinating. On destroying the light cone with vacuum decay, mining black holes, holographic principle, & path to LLMs which make Einsteinian
0
0
0
Excellent thread by @fchollet about the performances of the new model by OpenAI on ARC-AGI. Evaluation dataset creation is an interesting challenge and is also a never-ending task: my guess is that we will always be able to design problems easy for humans and hard for machines.
Today OpenAI announced o3, its next-gen reasoning model. We've worked with OpenAI to test it on ARC-AGI, and we believe it represents a significant breakthrough in getting AI to adapt to novel tasks. It scores 75.7% on the semi-private eval in low-compute mode (for $20 per task
0
0
2
Testing the generative AI integration by Qwant in its search results - answer summaries, detailed answers, always citing the sources of information
✨ QWANT'S AI IS IN OPEN WEEK ✨ Our AI is available to everyone for a week! No need for an account (even though it's free 👀) to use it. It answers all your questions and queries in the blink of an eye. No more excuses not to try it ;)
0
0
1
"Information is a matter of questions and answers, it's not an objective thing, information doesn't just sit there." ... "it entails a relationship between a subject and an object", brilliant remarks by @Mark_Solms
0
0
2
An interesting discussion, as always, with @JeromeColombain, on the limits of AI in the context of software development
0
0
3
I had the pleasure of participating in an interesting talk with @jonsvt and @brucel, about developing technologies like web search engines and web browsers in Europe
🎙️ We recorded a short podcast with our friends at @vivaldibrowser! Our CTO, @ach3d, and Jon von Tetzchner, CEO at Vivaldi, discuss how it is possible to design technologies that respect online privacy 🔐 https://t.co/xfmuTFICyF
0
0
3
OSI now defines what open-source AI is; information about the training data should be provided
technologyreview.com
Researchers have long disagreed over what constitutes open-source AI. An influential group has offered up an answer.
0
1
1
Interesting thoughts on the Turing test by @MelMitchell1
https://t.co/HZcPDmtJot It was once believed that beating a human at chess required general intelligence. The story goes on with AI mastering one task after another, without any intelligence
science.org
“Can machines think?” So asked Alan Turing in his 1950 paper, “Computing Machinery and Intelligence.” Turing quickly noted that, given the difficulty of defining thinking, the question is “too...
0
0
0
"Attempts to erase and devalue the most humane parts of our existence are nothing new; AI is just a new excuse to do it." Great thoughts by @ShannonVallor on the usual comment "You don't think that your brain is a machine?", this time from Yoshua Bengio
noemamag.com
The rhetoric over “superhuman” AI implicitly erases what’s most important about being human.
0
5
7
As usual, @fchollet gives the clearest and most concise explanations of the capabilities of LLMs. Everything is said, really!
The question of whether LLMs can reason is, in many ways, the wrong question. The more interesting question is whether they are limited to memorization / interpolative retrieval, or whether they can adapt to novelty beyond what they know. (They can't, at least until you start
0
0
0
"If biology can do it, I don't see why silicon can't" (@davidchalmers42). A common argument for the view that there is no fundamental difference between humans and machines. Rather weak: consciousness exists in life, but can you re-create life? Can you re-create the universe?
David Chalmers says it is possible for an AI system to be conscious because the brain itself is a machine that produces consciousness, so we know this is possible in principle
0
0
0
Looks like Good Old Fashioned AI makes a comeback… Symbols are still a hard problem for neural networks
If you're working on ARC-AGI, you should take a look at Sebastien Ferre's approach. It feels conceptually closer to human reasoning, compared to other program synthesis methods. https://t.co/Zqxl9Mr44R
1
0
1
"Prompt engineer. I think you mean types question guy" Amazingly funny show @weeklyshowpod on the false promise of AI
0
0
0
I recommend this @ykilcher analysis of a paper on the reliability of AI legal research tools, with many good general comments about LLMs, about RAG, and about their application to the legal domain https://t.co/bkEi7h7VUQ, it's a pleasure to listen to all the way through!
0
0
3
LLM improvement means better predicting the next word in a sequence, which doesn't directly matter to end users, who are more interested in emergent capabilities: @random_walker and @sayashk
normaltech.ai
Scaling will run out. The question is when.
0
0
2