Charles Packer

@charlespacker

Followers 3K · Following 4K · Media 146 · Statuses 943

CEO at @Letta_AI // creator of MemGPT // AI PhD @berkeley_ai @ucbrise @BerkeleySky

SF
Joined March 2014
@charlespacker
Charles Packer
6 months
💤 sleep-time compute: make your machines think while they sleep -> https://t.co/AQkmOPC63f over the past several months we (at @Letta_AI) have been exploring how to effectively utilize "sleep time" to scale compute. the concept of "sleep-time compute" is deeply tied to memory -
arxiv.org
Scaling test-time compute has emerged as a key ingredient for enabling large language models (LLMs) to solve difficult problems, but comes with high latency and inference cost. We introduce...
@Letta_AI
Letta
6 months
We're excited to release our latest paper, “Sleep-time Compute: Beyond Inference Scaling at Test-Time”, a collaboration with @sea_snell from UC Berkeley and @Letta_AI advisors / UC Berkeley faculty Ion Stoica and @profjoeyg https://t.co/wN3gxTpFzI
2
11
98
@natolambert
Nathan Lambert
24 hours
I'd put good money on this being a high-impact finetune of one of the large, Chinese MoE models. I'm very excited to see more companies able to train models that suit their needs. Bodes very well for the ecosystem that specific data is stronger than a bigger, general model.
@cursor_ai
Cursor
1 day
Introducing Cursor 2.0. Our first coding model and the best way to code with agents.
31
48
772
@huybery
Binyuan Hui
1 day
Cooking’s almost done.
@JustinLin610
Junyang Lin
1 day
qwen3 max thinking this week
29
25
834
@percyliang
Percy Liang
2 days
Open AI means AI that is open.
67
30
441
@sarahwooders
Sarah Wooders
1 day
I strongly dislike these overly anthropomorphized analogies for LLM memory. An LLM is a tokens-in-tokens-out function, not a human brain.
@helloiamleonie
Leonie
20 days
Wait... am I getting this right?
Short-term memory is non-persistent storage
  Working memory -> store in list
Long-term memory is persistent storage
  Procedural memory -> store in .md file
  Episodic memory -> store in database
  Semantic memory -> store in database
1
2
17
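The taxonomy in the quoted tweet can be sketched as a toy routing table: short-term (working) memory lives in a non-persistent in-process structure, while long-term memory types each map to a persistent backend. All names and file paths below are illustrative, not any real library's API.

```python
# Toy sketch of the memory taxonomy above. Hypothetical names throughout.

working_memory = []  # short-term: plain in-process list, lost when the process exits

# long-term: persistent backends, keyed by memory type
long_term_backends = {
    "procedural": "procedures.md",  # e.g. a markdown file of learned how-tos
    "episodic": "episodes.db",      # e.g. a database of past interactions
    "semantic": "facts.db",         # e.g. a database of extracted facts
}

def remember(kind: str, item: str) -> str:
    """Route a memory item to its store; returns the backend it would hit."""
    if kind == "working":
        working_memory.append(item)
        return "in-memory list"
    return long_term_backends[kind]
```

The point of the joke, made concrete: the "memory types" collapse into an ordinary persistence decision, not anything brain-like.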
@janbamjan
janbam
1 day
claude code thinks the next session will start with the remaining context window of this session 😢
1
1
16
@charlespacker
Charles Packer
1 day
coming soon: AgentCluster(vllm_enable_sleeptime_compute=True)
@QGallouedec
Quentin Gallouédec @ SF 🌉
2 days
GRPOConfig(vllm_enable_sleep_mode=True)
0
0
0
@charlespacker
Charles Packer
1 day
notebookLM x @m crossover when?
@NotebookLM
NotebookLM
2 days
You asked, we delivered! Introducing the updated modern anime video overview style AND our brand new kawaii version. Instantly transform your most boring, dense, and complex documents into the most adorable video summaries. Get ready for cuteness levels that are over 9000!
0
0
0
@charlespacker
Charles Packer
2 days
wait until X realizes continual learning was solved in 2023
@nrehiew_
wh
2 days
Continual Learning is so overhyped right now that you can engagement farm by quoting any new research and tweeting something like “this solves continual learning btw”
1
0
3
@ShangyinT
Shangyin Tan
2 days
With all due respect to the authors, it’s funny to see a multi-agent work being evaluated on GSM8k and HumanEval 🤣
@youjiaxuan
Jiaxuan You
2 days
Introducing Multi-Agent Evolve 🧠 A new paradigm beyond RLHF and RLVR: More compute → closer to AGI No need for expensive data or handcrafted rewards We show that an LLM can self-evolve — improving itself through co-evolution among roles (Proposer, Solver, Judge) via RL — all
5
1
37
@charlespacker
Charles Packer
2 days
the terrible responses api rollout is a perfect example of how to lose first mover advantage
@sarahwooders
Sarah Wooders
2 days
Pretty crazy to see Anthropic API being implemented over ChatCompletions, which used to be the standard (until OpenAI refused to properly support reasoning)
4
4
95
@charlespacker
Charles Packer
5 days
currently experiencing the world’s worst wifi at the world’s best hackathon
3
1
22
@sarahwooders
Sarah Wooders
5 days
An agent “framework” is about ease-of-use/DX. An agent “harness” is about adding capabilities
2
1
8
@charlespacker
Charles Packer
5 days
thinking machines shipping blog posts, xai shipping ai waifu launch videos (clean video tho, grok imagine looks good)
@xai
xAI
6 days
Introducing Mika, the newest Grok Companion. Video made using Grok Imagine.
0
0
10
@sarahwooders
Sarah Wooders
5 days
Time to move to @Letta_AI
@theinformation
The Information
5 days
OpenAI is considering whether ChatGPT could show ads based on its memory—the information it remembers about users. Full story:
2
3
19
@alxfazio
alex fazio
6 days
if claude says this, that’s what we call dry claude. when it doesn’t, it’s wet claude. if you understand why, welcome to the wet claude program
5
3
38
@charlespacker
Charles Packer
6 days
We're hiring researchers & engineers at @Letta_AI to work on AI's hardest problem: memory. Join us to work on finding the right memory representations & learning methods (both in-context and in-weights) required to create self-improving AI systems with LLMs. We're an open AI
jobs.ashbyhq.com
Research Engineer / Research Scientist at Letta
1
4
22
@voooooogel
thebes
7 days
"Claude should be especially careful to not allow the user to develop emotional attachment to, dependence on, or inappropriate familiarity with Claude, who can only serve as an AI assistant." curious
@janbamjan
janbam
7 days
claude .ai memory system prompt <memory_system> <memory_overview> Claude has a memory system which provides Claude with memories derived from past conversations with the user. The goal is to make every interaction feel informed by shared history between Claude and the user,
47
115
2K
@repligate
j⧉nus
7 days
This is very bad.
@janbamjan
janbam
7 days
claude .ai memory system prompt <memory_system> <memory_overview> Claude has a memory system which provides Claude with memories derived from past conversations with the user. The goal is to make every interaction feel informed by shared history between Claude and the user,
32
20
470
@charlespacker
Charles Packer
7 days
sleep-time compute has hit https://t.co/PZRIqoovLL: "Claude's memories update periodically in the background, so recent conversations may not yet be reflected in the current conversation" gemini next?
claude.ai
Talk with Claude, an AI assistant from Anthropic
@janbamjan
janbam
7 days
claude .ai memory system prompt <memory_system> <memory_overview> Claude has a memory system which provides Claude with memories derived from past conversations with the user. The goal is to make every interaction feel informed by shared history between Claude and the user,
0
0
13
@charlespacker
Charles Packer
7 days
Super excited about this release: Letta Evals is the first evals platform *purpose-built* for stateful agents. What does that actually mean? When you eval agents w/ Letta Evals, you can literally pull an agent out of production (by cloning a replica of its active state),
@Letta_AI
Letta
7 days
What if we evaluated agents less like isolated code snippets, and more like humans - where behavior depends on the environment and lived experiences? 🧪 Introducing 𝗟𝗲𝘁𝘁𝗮 𝗘𝘃𝗮𝗹𝘀: a fully open source evaluation framework for stateful agents
2
4
27