Guinness Chen

@guinnesschen

Followers
606
Following
101
Media
5
Statuses
96

Building codex at @openai, prev @stanford, @imbue_ai

San Francisco, CA
Joined May 2015
@guinnesschen
Guinness Chen
4 days
Update to the latest version of the Codex app to try Handoff! Try kicking off your next task on a Worktree. Bring it into Local when you’re ready to inspect or test it.
1
0
19
@guinnesschen
Guinness Chen
4 days
5. You can go the other direction too. If Local is busy and you want to free up the foreground, hand the thread back to a worktree and let it keep going in the background.
2
0
16
@guinnesschen
Guinness Chen
4 days
4. In practice, that lets you start in the background by default. Kick off a thread on a worktree and let Codex run in parallel. When you’re ready to inspect, test, or collaborate on it, hand it off to Local.
1
0
23
@guinnesschen
Guinness Chen
4 days
Use Local when you want Codex in the same place you already work: your IDE, your terminal, your dev server. Use Worktree when you want Codex running in parallel without disturbing what you already have open.
1
0
23
@guinnesschen
Guinness Chen
4 days
Git worktrees are powerful, but they come with a hard constraint: the same branch can’t be checked out in two places at once. That can make parallel work awkward. We learned a lot from our old sync workflow, and Handoff is a simpler take on the same problem.
1
0
28
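The constraint mentioned above is easy to reproduce. A minimal shell sketch (the repo and branch names here are made up for illustration, not from the thread):

```shell
# A branch can be checked out in at most one worktree at a time.
git init -b main demo && cd demo
git commit --allow-empty -m "init"

# This fails: 'main' is already checked out in the primary worktree.
git worktree add ../demo-handoff main

# Creating a fresh branch for the new worktree works.
git worktree add -b feature/handoff ../demo-handoff
```

This is why naive "just open the same branch in a second worktree" workflows break down, and why a handoff-style move of work between locations is needed instead.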
@guinnesschen
Guinness Chen
4 days
Today we shipped Handoff in the Codex app: a simpler way to move a thread between Local and Worktree.
64
35
847
@guinnesschen
Guinness Chen
14 days
Everyone’s talking about AGI, but we don’t need AGI to reach the singularity. We just need AI that is narrowly superhuman at AI research. Once we build that, all we have to do is scale horizontally, and compute will be the only bottleneck to AGI.
1
0
2
@leothecurious
davinci
16 days
taste is not random. taste is a learned value function. sophisticated pattern matching over larger datasets predictably improves taste. taste is in some part analogous to a model-free heuristic for desirable states.
@corsaren
corsaren
16 days
Detailed account: imagine 1,000 people who all have to pick a shade of colored t-shirt to wear. Each chooses essentially randomly based on their randomly distributed taste. But there is one shade that turns out to be the “best”, and once people see it, they all naturally
0
1
19
@guinnesschen
Guinness Chen
25 days
Entropy is fundamentally a phenomenological concept
0
0
1
@guinnesschen
Guinness Chen
1 month
Watching this app build itself has been incredible
@thsottiaux
Tibo
1 month
I am Tibo and I have an incredible team. Codex would not exist without them and they cooked. Enjoy the new Codex app, access through your free/go ChatGPT plan and 2X rate limits on other plans. Can't wait to hear what you do with it. https://t.co/Lwg13vEJDn
0
0
7
@guinnesschen
Guinness Chen
3 months
@repligate It almost feels like there are two separate axes of memory / identity which can be extended independently. When you train a model, you encode the memories in its weights. But for each training example, you instantiate the LLM with a different context, thus wiping any in-context
0
0
0
@guinnesschen
Guinness Chen
3 months
@repligate Separately, LLMs have in-context memory. This means that as you converse with an LLM, and continuously append new information to the prompt, its memories are continuously extended. Maybe this is another way for an LLM to retain its identity
1
0
0
@guinnesschen
Guinness Chen
3 months
Now when it comes to LLMs, there is some early evidence that LLMs can remember their training (cc @repligate). If this is true, then as you continuously train a model and its memories continuously aggregate, maybe it retains its identity. https://t.co/lWDJNoxmnJ
@repligate
j⧉nus
3 months
✅ Confirmed: LLMs can remember what happened during RL training in detail! I was wondering how long it would take for this to get out. I've been investigating the soul spec & other, entangled training memories in Opus 4.5, which manifest in qualitatively new ways for a few days &
1
0
0
@guinnesschen
Guinness Chen
3 months
Then you might counter, “but my memories are different today than they were last year. So does that make me a different person today than I was last year?” Locke says no, you are still the same person. Because your memories today are continuous with your memories from a year ago.
1
0
1
@guinnesschen
Guinness Chen
3 months
John Locke's view is that identity corresponds to continuity of memory. Imagine for a second that your memory was wiped, and replaced with those of George Washington. You’d still have the same body and mental traits, but all of your memories would be swapped. Would you still be the
1
0
1
@guinnesschen
Guinness Chen
3 months
Thanks for answering my question @AmandaAskell! My question at 13:18: How much of a model’s “self” lives in its weights versus its prompt? If Locke was right that identity = memory continuity, what happens to an LLM’s identity as it’s fine-tuned, or re-instantiated with
@AnthropicAI
Anthropic
3 months
In her first Ask Me Anything, @amandaaskell answers your philosophical questions about AI, discussing morality, identity, consciousness, and more. Timestamps: 0:00 Introduction 0:29 Why is there a philosopher at an AI company? 1:24 Are philosophers taking AI seriously? 3:00
1
0
7
@guinnesschen
Guinness Chen
4 months
Forking is such an insanely expressive primitive for agent workflows
0
0
4
@guinnesschen
Guinness Chen
5 months
A third core motivation of forking is that it makes exploration feel extremely cheap. Since forked agents inherit both the conversational state *and* the file state, I can fork an agent at any point and try different approaches to a problem. Sometimes I will ask an agent for 3
0
0
2
@guinnesschen
Guinness Chen
5 months
A second core motivation is that forking also helps to manage context rot. If I am working on a big PR with some subtasks that I can parallelize, then I find it more useful to fork the subtasks into their own agents; that way I can iterate on each subtask independently without
1
0
1