Tyler Moore Profile
Tyler Moore

@TylerMooreUS

Followers
107
Following
15K
Media
100
Statuses
5K

Joined September 2022
@TylerMooreUS
Tyler Moore
2 years
My top ten favorite books, meaning I think about them at least every few days:
Foundation
Rendezvous With Rama
Midnight at the Well of Souls
Little, Big
The Arabian Nightmare
The Maltese Falcon
Solomon's Vineyard
My Family and Other Animals
The French Lieutenant's Woman
Voss
1
0
15
@javilopen
Javi Lopez ⛩️
1 month
"Pros won’t use generative AI, and when the bubble pops, nobody will ever talk about it again." No. That’s delusional. 1/ Generative AI is already being used professionally at the level of big studios like Disney ($1B to OpenAI), and there’s zero doubt that studios like
68
45
391
@willmacaskill
William MacAskill
2 months
Almost no one has articulated a positive vision for what comes after superintelligence. What should we be trying to aim for? Utopias from history look clearly dystopian to us, and we should expect the same for our own attempts. We don’t know enough to know what utopia looks
43
39
360
@BiluHuang
Bilu Huang
4 months
2. Cell Reprogramming It won't work. Cellular reprogramming includes “pluripotent reprogramming,” “partial reprogramming,” and “direct reprogramming.” In pluripotent reprogramming, somatic cells are completely reverted to a pluripotent state; in partial reprogramming, these
@davidasinclair
David Sinclair
4 months
New article: The Information Theory of Aging (ITOA) states that epigenetic drift is a cause of aging, with cells taking up new identities. New review says slowing a process called "mesenchymal drift" has emerged as a new strategy for rejuvenation
8
9
87
@DanielCHTan97
Daniel Tan
4 months
very cool paper - tl;dr it's possible to steer models by taking a weight difference rather than an activation difference.
arxiv.org
Providing high-quality feedback to Large Language Models (LLMs) on a diverse training distribution can be difficult and expensive, and providing feedback only on a narrow distribution can result...
7
33
283
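The tweet above describes steering via a weight difference: subtract one checkpoint's weights from another's to get a direction, then shift a model along it. A minimal sketch of that idea (often called a "task vector"), with toy numpy dicts standing in for real checkpoints; the function names and scaling factor are illustrative assumptions, not the paper's code:

```python
import numpy as np

def weight_diff(finetuned, base):
    """Steering direction: the per-parameter delta between two checkpoints."""
    return {name: finetuned[name] - base[name] for name in base}

def apply_steering(model, direction, alpha=1.0):
    """Shift a model's weights along the diff, scaled by alpha."""
    return {name: model[name] + alpha * direction[name] for name in model}

# Toy "checkpoints": one base model, one finetuned for some behavior.
base      = {"w": np.array([1.0, 2.0])}
finetuned = {"w": np.array([1.5, 1.0])}

direction = weight_diff(finetuned, base)          # {"w": [0.5, -1.0]}
steered   = apply_steering(base, direction, 0.5)  # partial application
```

Unlike activation steering, this happens once to the weights rather than at every forward pass.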
@Scobleizer
Robert Scoble
4 months
In five hours this has only gotten 31 likes and about 50 more from my reshare. Nuts how hard it is to get reach here on X. Everyone is missing how big a deal this is. It introduces deep physics understanding into world models. Google and World Labs sucked the oxygen out of
@mbzuai
MBZUAI
4 months
Introducing PAN — MBZUAI’s New World Model for Interactive Intelligence Developed by MBZUAI’s Institute of Foundation Models, PAN is built for simulation, prediction, and agentic reasoning. Unlike traditional video generators that only output frames, PAN maintains a persistent
38
45
424
@edelwax
Joe Edelman 🥞
5 months
Great work from @SmithaMilli, @MicahCarroll, and team. Smitha keeps putting out bangers that have immediate application but also address longer-term alignment concerns.
@SmithaMilli
smitha milli
5 months
can we finally use natural language to optimize for deeper notions of what users want from their recommender systems?
3
2
13
@BiluHuang
Bilu Huang
5 months
Unified Theory of Aging - Why Do We Age? The Telomere DNA and Ribosomal DNA Co-regulation Model for Cell Senescence (TRCS) Bilu Huang Available at SSRN: https://t.co/l8HEMftMrc Abstract When killifish species with different lifespans that evolved under distinct rainy-season
@fedichev
Peter Fedichev
5 months
We still don’t have a proper theory of aging. That’s remarkable, given how far biology has come. Despite centuries of speculation and decades of data, there is still no unified, quantitative framework that explains how and why living systems age—and how we might stop it. What
19
60
256
@jiqizhixin
机器之心 JIQIZHIXIN
6 months
How can LLMs evolve continually in real-world industry without forgetting past tasks? Enter: MoE-CL, a parameter-efficient adversarial mixture-of-experts framework for continual instruction tuning: - Dedicated LoRA experts per task → preserve task knowledge - Shared LoRA
1
31
178
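The split the tweet describes (a dedicated LoRA per task plus a shared LoRA, on top of frozen base weights) can be sketched in a few lines. Everything here is a toy stand-in under assumed shapes: the class names, zero-initialized B matrices, and identity base weight are illustrative, not the MoE-CL implementation:

```python
import numpy as np

class LoRA:
    """Low-rank adapter: delta(x) = x @ A @ B, with B zero-initialized
    so a fresh adapter has no effect before training."""
    def __init__(self, d, r, seed):
        rng = np.random.default_rng(seed)
        self.A = 0.01 * rng.normal(size=(d, r))
        self.B = np.zeros((r, d))
    def delta(self, x):
        return x @ self.A @ self.B

class MoECLLayer:
    """Frozen base weight + shared LoRA + one dedicated LoRA per task."""
    def __init__(self, d, r):
        self.d, self.r = d, r
        self.W = np.eye(d)           # stands in for the frozen pretrained weight
        self.shared = LoRA(d, r, 0)  # learns transferable knowledge across tasks
        self.experts = {}            # task id -> dedicated LoRA, never overwritten

    def forward(self, x, task):
        # A new task lazily gets its own expert; old experts stay untouched,
        # which is what preserves earlier task knowledge.
        expert = self.experts.setdefault(task, LoRA(self.d, self.r, len(self.experts) + 1))
        return x @ self.W + self.shared.delta(x) + expert.delta(x)

layer = MoECLLayer(d=4, r=2)
x = np.ones((1, 4))
y1 = layer.forward(x, task="ner")        # creates the "ner" expert
y2 = layer.forward(x, task="sentiment")  # a new task gets its own expert
```

The adversarial gating the tweet mentions (deciding how much shared knowledge each task uses) is omitted here for brevity.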
@__SyedaAkter
Syeda Nahida Akter
6 months
Most LLMs learn to think only after pretraining—via SFT or RL. But what if they could learn to think during it? 🤔 Introducing RLP: Reinforcement Learning Pre-training—a verifier-free objective that teaches models to “think before predicting.” 🔥 Result: Massive reasoning
8
43
258
@ChantalShaib
Chantal
6 months
"AI slop" seems to be everywhere, but what exactly makes text feel like slop? In our new work (w/ @TuhinChakr, @dgolano, @byron_c_wallace) we provide a systematic attempt at measuring AI slop in text! https://t.co/9bKQceSjkn 🧵 (1/7)
@aidan_mclau
Aidan McLaughlin
1 year
help me fix gpt-4o slop reply with examples of slop behavior just a single sentence nothing crazy what annoys you what makes you wanna frisbee your laptop into a river i'll respond to every comment rt so we can maximize slop feedback help me de-sloptimize our models go
14
37
223
@BiluHuang
Bilu Huang
6 months
The fear of death first hit me when I was 8. Since then, I have determined to bet on my life to conquer aging. My goal is to help humanity bid farewell to the cycle of the four seasons and usher in an eternal spring as soon as possible! I hope everyone cherishes life and can see
6
6
38
@GeZhang86038849
Ge Zhang
6 months
🚀 We introduce a THIRD PATH for teaching deep reasoning, REverse-Engineered Reasoning (REER), which instills deep reasoning ✨from scratch, ✨using only instruction tuning datasets, ✨without RL or pricey distillation! And we target the challenging, long-tailed,
8
64
351
@rohanpaul_ai
Rohan Paul
7 months
Really cool idea in this paper 💡 They propose intelligence works by reusing stored inference loops, not recomputing every time. It means the system keeps a memory of how it solved problems before, then reuses that stored know-how instead of solving from scratch every time. It
21
51
342
@omarsar0
elvis
7 months
Performance: M3-Agent beats a Gemini-GPT-4o hybrid and other baselines on M3-Bench-robot, M3-Bench-web, and VideoMME-long. Semantic memory and identity equivalence are crucial, and RL training plus inter-turn instructions and explicit reasoning materially improve accuracy.
1
2
25
@jiqizhixin
机器之心 JIQIZHIXIN
8 months
AlphaGo Moment for Model Architecture Discovery Paper: https://t.co/1h1BzGKyyn
3
10
87
@ryan_t_lowe
Ryan Lowe 🥞
8 months
Introducing: Full-Stack Alignment 🥞 A research program dedicated to co-aligning AI systems *and* institutions with what people value. It's the most ambitious project I've ever undertaken. Here's what we're doing: 🧵
14
45
210
@_akhaliq
AK
8 months
MemOS: A Memory OS for AI System
7
66
407
@NYMag
New York Magazine
9 months
Inside the race to turn 5-MeO-DMT, one of the most powerful psychedelic drugs in the world, into a pharmaceutical.
nymag.com
Psychedelics devotees are racing biotech entrepreneurs to turn 5-MeO-DMT, one of the world’s most powerful drugs, into a pharmaceutical.
2
4
4