Tyler Moore
@TylerMooreUS
Followers 107 · Following 15K · Media 100 · Statuses 5K · Joined September 2022
My top ten favorite books, meaning I think about them at least every few days: Foundation; Rendezvous With Rama; Midnight at the Well of Souls; Little, Big; The Arabian Nightmare; The Maltese Falcon; Solomon's Vineyard; My Family and Other Animals; The French Lieutenant's Woman; Voss
Replies 1 · Reposts 0 · Likes 15
"Pros won’t use generative AI, and when the bubble pops, nobody will ever talk about it again." No. That’s delusional. 1/ Generative AI is already being used professionally at the level of big studios like Disney ($1B to OpenAI), and there’s zero doubt that studios like
Replies 68 · Reposts 45 · Likes 391
Almost no one has articulated a positive vision for what comes after superintelligence. What should we be trying to aim for? Utopias from history look clearly dystopian to us, and we should expect the same for our own attempts. We don’t know enough to know what utopia looks
Replies 43 · Reposts 39 · Likes 360
Why the path to AGI runs through LLMs, not around them:
arxiv.org
Influential critiques argue that Large Language Models (LLMs) are a dead end for AGI: "mere pattern matchers" structurally incapable of reasoning or planning. We argue this conclusion...
Replies 0 · Reposts 0 · Likes 2
2. Cell Reprogramming: It won't work. Cellular reprogramming includes “pluripotent reprogramming,” “partial reprogramming,” and “direct reprogramming.” In pluripotent reprogramming, somatic cells are completely reverted to a pluripotent state; in partial reprogramming, these
New article: The Information Theory of Aging (ITOA) states that epigenetic drift is a cause of aging, with cells taking up new identities. New review says slowing a process called "mesenchymal drift" has emerged as a new strategy for rejuvenation
Replies 8 · Reposts 9 · Likes 87
very cool paper - tl;dr it's possible to steer models by taking a weight difference rather than an activation difference.
arxiv.org
Providing high-quality feedback to Large Language Models (LLMs) on a diverse training distribution can be difficult and expensive, and providing feedback only on a narrow distribution can result...
Replies 7 · Reposts 33 · Likes 283
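The weight-difference steering described in that paper summary can be illustrated in a few lines: subtract a base checkpoint from a finetuned one to get a per-parameter delta (sometimes called a "task vector"), then add a scaled copy of that delta to another checkpoint. Everything below is a toy stand-in under that assumption; the dicts of arrays and function names are hypothetical, not the paper's actual code.

```python
import numpy as np

def weight_difference(base, finetuned):
    """Per-parameter delta between a finetuned and a base checkpoint."""
    return {k: finetuned[k] - base[k] for k in base}

def steer(target, delta, alpha=1.0):
    """Add a scaled weight difference to another checkpoint's weights."""
    return {k: target[k] + alpha * delta[k] for k in target}

# toy one-tensor "checkpoints"
base      = {"w": np.zeros(3)}
finetuned = {"w": np.ones(3)}

delta   = weight_difference(base, finetuned)
steered = steer(base, delta, alpha=0.5)  # move halfway toward the finetuned weights
```

The appeal of steering in weight space rather than activation space is that the delta is computed once, offline, and needs no hooks into the forward pass.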
In five hours this has only gotten 31 likes and about 50 more from my reshare. Nuts how hard it is to get reach here on X. Everyone is missing how big a deal this is. It introduces deep physics understanding into world models. Google and World Labs sucked the oxygen out of
Introducing PAN — MBZUAI’s New World Model for Interactive Intelligence Developed by MBZUAI’s Institute of Foundation Models, PAN is built for simulation, prediction, and agentic reasoning. Unlike traditional video generators that only output frames, PAN maintains a persistent
Replies 38 · Reposts 45 · Likes 424
Great work from @SmithaMilli, @MicahCarroll, and team. Smitha keeps putting out bangers that have immediate application but also address longer-term alignment concerns.
can we finally use natural language to optimize for deeper notions of what users want from their recommender systems?
Replies 3 · Reposts 2 · Likes 13
Unified Theory of Aging - Why Do We Age? The Telomere DNA and Ribosomal DNA Co-regulation Model for Cell Senescence (TRCS) Bilu Huang Available at SSRN: https://t.co/l8HEMftMrc Abstract When killifish species with different lifespans that evolved under distinct rainy-season
We still don’t have a proper theory of aging. That’s remarkable, given how far biology has come. Despite centuries of speculation and decades of data, there is still no unified, quantitative framework that explains how and why living systems age—and how we might stop it. What
Replies 19 · Reposts 60 · Likes 256
How can LLMs evolve continually in real-world industry without forgetting past tasks? Enter: MoE-CL, a parameter-efficient adversarial mixture-of-experts framework for continual instruction tuning:
- Dedicated LoRA experts per task → preserve task knowledge
- Shared LoRA
Replies 1 · Reposts 31 · Likes 178
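The "dedicated LoRA expert per task plus a shared LoRA" structure in the MoE-CL thread can be sketched with plain arrays: each expert is a low-rank (A, B) pair, and the forward pass adds the active task's low-rank update and the shared update to a frozen base weight. This is a minimal illustration of the wiring only, with hypothetical names, not the paper's adversarial training setup.

```python
import numpy as np

d, r = 8, 2                                 # hidden size, LoRA rank
rng = np.random.default_rng(0)
W = rng.normal(size=(d, d))                 # frozen base weight

def make_lora():
    # classic LoRA init: B = 0, so each adapter starts as a no-op
    return {"A": rng.normal(size=(r, d)), "B": np.zeros((d, r))}

experts = {"task_a": make_lora(), "task_b": make_lora()}  # one dedicated expert per task
shared = make_lora()                                      # shared expert reused across tasks

def forward(x, task):
    e = experts[task]
    # base weight plus the task expert's and the shared expert's low-rank updates
    delta = e["B"] @ e["A"] + shared["B"] @ shared["A"]
    return (W + delta) @ x

x = np.ones(d)
out = forward(x, "task_a")                  # equals W @ x while all B's are still zero
```

Because only the small (A, B) pairs are trained per task while W stays frozen, older tasks' experts are untouched by new tasks, which is the parameter-efficient route to avoiding forgetting.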
Most LLMs learn to think only after pretraining—via SFT or RL. But what if they could learn to think during it? 🤔 Introducing RLP: Reinforcement Learning Pre-training—a verifier-free objective that teaches models to “think before predicting.” 🔥 Result: Massive reasoning
Replies 8 · Reposts 43 · Likes 258
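One plausible reading of a verifier-free "think before predicting" objective like the RLP thread describes is an information-gain reward: score a sampled thought by how much it raises the model's log-likelihood of the actual next tokens, relative to predicting without the thought. The sketch below assumes that reading; `logprob()` is a fake stand-in, not a real model API.

```python
def logprob(next_tokens, context, thought=""):
    # Hypothetical stand-in for a model's log p(next_tokens | context, thought).
    # Here, thoughts that mention the answer "help" by a fixed bonus.
    base = -2.0
    bonus = 1.5 if next_tokens in thought else 0.0
    return base + bonus

def thought_reward(context, thought, next_tokens):
    # verifier-free reward: likelihood improvement from having thought first
    return logprob(next_tokens, context, thought) - logprob(next_tokens, context)

r_good = thought_reward("2+2=", "two plus two is 4", "4")  # thinking helped
r_bad  = thought_reward("2+2=", "irrelevant musing", "4")  # thinking did not help
```

The reward needs no external verifier because the training corpus itself supplies the target tokens; useful thoughts are simply those that make the observed continuation more likely.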
"AI slop" seems to be everywhere, but what exactly makes text feel like slop? In our new work (w/ @TuhinChakr, @dgolano, @byron_c_wallace) we provide a systematic attempt at measuring AI slop in text! https://t.co/9bKQceSjkn 🧵 (1/7)
help me fix gpt-4o slop reply with examples of slop behavior just a single sentence nothing crazy what annoys you what makes you wanna frisbee your laptop into a river i'll respond to every comment rt so we can maximize slop feedback help me de-sloptimize our models go
Replies 14 · Reposts 37 · Likes 223
The fear of death first hit me when I was 8. Since then, I have determined to bet on my life to conquer aging. My goal is to help humanity bid farewell to the cycle of the four seasons and usher in an eternal spring as soon as possible! I hope everyone cherishes life and can see
Replies 6 · Reposts 6 · Likes 38
🚀 We introduce a THIRD PATH for teaching deep reasoning, REverse-Engineered Reasoning (REER), which instills deep reasoning ✨from scratch, ✨using only instruction tuning datasets, ✨without RL or pricey distillation! And we target the challenging, long-tailed,
Replies 8 · Reposts 64 · Likes 351
Really cool idea in this paper 💡 They propose intelligence works by reusing stored inference loops, not recomputing every time. It means the system keeps a memory of how it solved problems before, then reuses that stored know-how instead of solving from scratch every time. It
Replies 21 · Reposts 51 · Likes 342
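The "reuse stored inference loops" idea in that tweet reads like memoizing whole solution traces rather than just answers: keep a record of how a problem was worked through, and serve that record back instead of re-deriving it. A toy sketch under that reading, with all names and the placeholder "derivation" hypothetical:

```python
memory = {}  # problem -> (answer, trace): a stored "inference loop"

def solve(problem):
    if problem in memory:
        return memory[problem]        # reuse stored know-how, no recomputation
    trace = []                        # pretend derivation, recorded step by step
    total = 0
    for ch in problem:
        total += ord(ch)              # placeholder for an expensive reasoning step
        trace.append(f"add {ch!r} -> {total}")
    memory[problem] = (total, trace)
    return memory[problem]

first = solve("abc")   # computed from scratch, trace recorded
second = solve("abc")  # served from memory
```

Storing the trace, not just the answer, is what distinguishes this from ordinary caching: the recorded steps can in principle be adapted to related problems rather than replayed verbatim.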
Performance M3-Agent beats a Gemini-GPT-4o hybrid and other baselines on M3-Bench-robot, M3-Bench-web, and VideoMME-long. Semantic memory and identity equivalence are crucial, and RL training plus inter-turn instructions and explicit reasoning materially improve accuracy.
Replies 1 · Reposts 2 · Likes 25
Reinforcement Fine-Tuning Naturally Mitigates Forgetting in Continual Post-Training.
arxiv.org
Continual post-training (CPT) is a popular and effective technique for adapting foundation models like multimodal large language models to specific and ever-evolving downstream tasks. While...
Replies 0 · Reposts 3 · Likes 3
Introducing: Full-Stack Alignment 🥞 A research program dedicated to co-aligning AI systems *and* institutions with what people value. It's the most ambitious project I've ever undertaken. Here's what we're doing: 🧵
Replies 14 · Reposts 45 · Likes 210
Inside the race to turn 5-MeO-DMT, one of the most powerful psychedelic drugs in the world, into a pharmaceutical.
nymag.com
Psychedelics devotees are racing biotech entrepreneurs to turn 5-MeO-DMT, one of the world’s most powerful drugs, into a pharmaceutical.
Replies 2 · Reposts 4 · Likes 4