
Adrien Ecoffet
@AdrienLE
Followers: 7K · Following: 28K · Media: 69 · Statuses: 3K
Trying to make AGI go well. Researcher at @openai. Views my own.
San Francisco, CA
Joined April 2009
RT @sleepinyourhat: Early this summer, OpenAI and Anthropic agreed to try some of our best existing tests for misalignment on each others’….
RT @woj_zaremba: It’s rare for competitors to collaborate. Yet that’s exactly what OpenAI and @AnthropicAI just did—by testing each other’s….
RT @airkatakana: claude $20 tier is for europeans. 30 minutes of coding, 4.5 hour siesta, repeat.
RT @SpencrGreenberg: As of two months ago, I don’t recall seeing any scientific or mathematical breakthroughs occurring via out-of-the-box….
RT @AndrewCurran_: Ezra Klein is impressed with GPT-5, and wrote about his experience in the NYT this morning.
RT @MillionInt: OpenAI took a bit of a detour, as we were the pioneers in AI coding research but didn’t prioritise it enough after ChatGPT….
RT @Miles_Brundage: Notably, it's ~never employees at frontier companies quoted on this, it's the journalists themselves, or academics, sta….
RT @julianboolean_: the biggest lesson I've learned from the last few years is that the "tiny gap between village idiot and Einstein" chart….
RT @sebkrier: Screenshot from an old @MatthewJBar comment. Very much aligns with how I see the world: I don't really see the alignment prob….
RT @AndyMasley: Google publishes a paper showing that its AI models only use 0.26 mL of water in data centers per prompt. After, this art….
Nice! This is a cool research area that I have been wondering about for a while; awesome to see research out on this!
New Anthropic research: filtering out dangerous information at pretraining. We’re experimenting with ways to remove information about chemical, biological, radiological and nuclear (CBRN) weapons from our models’ training data without affecting performance on harmless tasks.