Guinness Chen
@guinnesschen
606 Followers · 101 Following · 5 Media · 96 Statuses
Building codex at @openai, prev @stanford, @imbue_ai
San Francisco, CA
Joined May 2015
Update to the latest version of the Codex app to try Handoff! Try kicking off your next task on a Worktree. Bring it into Local when you’re ready to inspect or test it.
1 reply · 0 reposts · 19 likes
5. You can go the other direction too. If Local is busy and you want to free up the foreground, hand the thread back to a worktree and let it keep going in the background.
2 replies · 0 reposts · 16 likes
4. In practice, that lets you start in the background by default. Kick off a thread on a worktree and let Codex run in parallel. When you’re ready to inspect, test, or collaborate on it, hand it off to Local.
1 reply · 0 reposts · 23 likes
Use Local when you want Codex in the same place you already work: your IDE, your terminal, your dev server. Use Worktree when you want Codex running in parallel without disturbing what you already have open.
1 reply · 0 reposts · 23 likes
Git worktrees are powerful, but they come with a hard constraint: the same branch can’t be checked out in two places at once. That can make parallel work awkward. We learned a lot from our old sync workflow, and Handoff is a simpler take on the same problem.
1 reply · 0 reposts · 28 likes
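The one-branch-per-checkout rule mentioned above can be demonstrated directly in a throwaway repo. A minimal Python sketch (assumes `git` is on your PATH; the repo layout and branch names are invented for illustration):

```python
# Toy demo of git's constraint: the same branch cannot be checked out
# in two worktrees at once.
import os
import subprocess
import tempfile

def git(*args, cwd):
    """Run a git command and capture its output."""
    return subprocess.run(["git", *args], cwd=cwd,
                          capture_output=True, text=True)

root = tempfile.mkdtemp()
repo = os.path.join(root, "repo")
os.mkdir(repo)
git("init", "-q", cwd=repo)
git("-c", "user.email=a@example.com", "-c", "user.name=a",
    "commit", "--allow-empty", "-m", "init", cwd=repo)
git("branch", "feature", cwd=repo)
git("switch", "-q", "feature", cwd=repo)

# A second checkout of `feature` is rejected by git:
dup = git("worktree", "add", os.path.join(root, "wt"), "feature", cwd=repo)
print(dup.returncode != 0)                  # the command fails...
print("already checked out" in dup.stderr)  # ...with this complaint

# A fresh branch in a new worktree is fine:
ok = git("worktree", "add", "-b", "feature-2",
         os.path.join(root, "wt2"), cwd=repo)
print(ok.returncode == 0)
```

This is exactly the awkwardness the tweet describes: parallel work forces you onto different branches per worktree, which is what a handoff mechanism routes around.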
Today we shipped Handoff in the Codex app: a simpler way to move a thread between Local and Worktree.
64 replies · 35 reposts · 847 likes
Everyone’s talking about AGI, but we don’t need AGI to reach the singularity. We just need AI that is narrowly superhuman at AI research. Once we build that, all we have to do is scale horizontally, and compute will be the only bottleneck to AGI.
1 reply · 0 reposts · 2 likes
taste is not random. taste is a learned value function. sophisticated pattern matching over larger datasets predictably improves taste. taste is in some part analogous to a model-free heuristic for desirable states.
Detailed account: imagine 1,000 people who all have to pick a t-shirt shade to wear. Each chooses essentially randomly based on their randomly distributed taste. But there is one shade that turns out to be the “best”, and once people see it, they all naturally
0 replies · 1 repost · 19 likes
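The t-shirt thought experiment above can be sketched as a toy simulation (every number and probability here is invented for illustration, not from the thread): random private taste gives a roughly uniform spread, and imitation after observing the “best” shade collapses the distribution onto it.

```python
# Toy sketch: taste as a learned value function over observed outcomes.
import random

random.seed(0)
SHADES = list(range(10))   # 10 candidate t-shirt shades
BEST = 7                   # the shade that "turns out to be the best"
N = 1000                   # the thread's 1,000 people

# Round 1: private, randomly distributed taste -> roughly uniform picks.
round1 = [random.choice(SHADES) for _ in range(N)]

# Round 2: everyone has now seen others wearing BEST; with high
# probability they imitate it (taste updated by observed outcomes).
round2 = [BEST if random.random() < 0.9 else random.choice(SHADES)
          for _ in range(N)]

share1 = round1.count(BEST) / N
share2 = round2.count(BEST) / N
print(share1 < 0.2)   # near-uniform before observation
print(share2 > 0.8)   # converged on BEST after observation
```

The 0.9 imitation rate is an arbitrary stand-in for “sophisticated pattern matching over larger datasets”; the point is only that shared observation turns scattered random taste into a convergent value function.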
Watching this app build itself has been incredible
I am Tibo and I have an incredible team. Codex would not exist without them and they cooked. Enjoy the new Codex app, access through your free/go ChatGPT plan and 2X rate limits on other plans. Can't wait to hear what you do with it. https://t.co/Lwg13vEJDn
0 replies · 0 reposts · 7 likes
I am Tibo and I have an incredible team. Codex would not exist without them and they cooked. Enjoy the new Codex app, access through your free/go ChatGPT plan and 2X rate limits on other plans. Can't wait to hear what you do with it. https://t.co/Lwg13vEJDn
356 replies · 175 reposts · 3K likes
@repligate It almost feels like there are two separate axes of memory / identity which can be extended independently. When you train a model, you encode the memories in its weights. But for each training example, you instantiate the LLM with a different context, thus wiping any in-context
0 replies · 0 reposts · 0 likes
@repligate Separately, LLMs have in-context memory. This means that as you converse with an LLM, and continuously append new information to the prompt, its memories are continuously extended. Maybe this is another way for an LLM to retain its identity
1 reply · 0 reposts · 0 likes
Now when it comes to LLMs, there is some early evidence that LLMs can remember their training (cc @repligate). If this is true, then maybe as you continuously train a model, and its memories continuously aggregate—maybe it retains its identity. https://t.co/lWDJNoxmnJ
✅ Confirmed: LLMs can remember what happened during RL training in detail! I was wondering how long it would take for this to get out. I've been investigating the soul spec & other entangled training memories in Opus 4.5, which manifest in qualitatively new ways for a few days &
1 reply · 0 reposts · 0 likes
Then you might counter, “but my memories are different today than they were last year. So does that make me a different person today than I was last year?” Locke says no, you are still the same person. Because your memories today are continuous with your memories from a year ago.
1 reply · 0 reposts · 1 like
John Locke’s view is that identity corresponds to continuity of memory. Imagine for a second that your memory was wiped, and replaced with those of George Washington. You’d still have the same body and mental traits, but all of your memories would be swapped. Would you still be the
1 reply · 0 reposts · 1 like
Thanks for answering my question @AmandaAskell! My question at 13:18: How much of a model’s “self” lives in its weights versus its prompt? If Locke was right that identity = memory continuity, what happens to an LLM’s identity as it’s fine-tuned, or re-instantiated with
In her first Ask Me Anything, @amandaaskell answers your philosophical questions about AI, discussing morality, identity, consciousness, and more. Timestamps: 0:00 Introduction 0:29 Why is there a philosopher at an AI company? 1:24 Are philosophers taking AI seriously? 3:00
1 reply · 0 reposts · 7 likes
Forking is such an insanely expressive primitive for agent workflows
0 replies · 0 reposts · 4 likes
A third core motivation of forking is that it makes exploration feel extremely cheap. Since forked agents inherit both the conversational state *and* the file state, I can fork an agent at any point, and try different approaches to a problem. Sometimes I will ask an agent for 3
0 replies · 0 reposts · 2 likes
A second core motivation is that forking also helps to manage context rot. If I am working on a big PR with some subtasks that I can parallelize, then I find it more useful to fork the subtasks into their own agents; that way I can iterate on each subtask independently without
1 reply · 0 reposts · 1 like