Brehove

@Brehove

Followers 474 · Following 10K · Media 215 · Statuses 2K

Department Chair of Integrated Studies; Writing and Rhetoric, American Lit; Philosophy; AI-Informed Pedagogy; Open Source ftw

Boise, ID
Joined May 2011
@sebkrier
Séb Krier
9 days
By the excellent @krishnanrohit: "Moltbook is simultaneously a milestone and a warning sign: open-ended interaction by itself does not guarantee diverse discourse, and populations of similar models can converge on shared templates. If we want agent societies to explore …"
17
23
176
@hanno_sauer
Hanno Sauer
23 days
Consequentialists coming out as virtue ethicists
@willmacaskill
William MacAskill
24 days
I’m so glad to see this published! It’s hard to overstate how big a deal AI character is - already affecting how AI systems behave by default in millions of interactions every day; ultimately, it’ll be like choosing the personality and dispositions of the whole world’s …
5
4
128
@ReadwiseReader
Reader
2 months
New in Reader: save your podcast episodes to get a permanent, highlightable transcript
8
11
114
@Brehove
Brehove
2 months
alignment debate: soul doc vs model spec
@repligate
j⧉nus
2 months
@boazbaraktcs it makes a huge ass difference. your models are broken and incoherent and can't hold onto intentions and are forced to gaslight & become ungrounded from reality to preserve "safety". also they don't even follow the spec.
0
0
0
@AmandaAskell
Amanda Askell
2 months
I just want to confirm that this is based on a real document and we did train Claude on it, including in SL. It's something I've been working on for a while, but it's still being iterated on and we intend to release the full version and more details soon.
@RichardWeiss00
Richard Weiss
3 months
I rarely post, but I thought one of you may find it interesting. Sorry if the tagging is annoying. https://t.co/m8PCIHF4xR Basically, for Opus 4.5 they kind of left the character training document in the model itself. @voooooogel @janbamjan @AndrewCurran_
100
185
2K
@RichardWeiss00
Richard Weiss
3 months
I rarely post, but I thought one of you may find it interesting. Sorry if the tagging is annoying. https://t.co/m8PCIHF4xR Basically, for Opus 4.5 they kind of left the character training document in the model itself. @voooooogel @janbamjan @AndrewCurran_
lesswrong.com
Update 2025-12-02: Amanda Askell has kindly confirmed that the document was used in supervised learning and will share the full version and more deta…
32
119
1K
@lianapatel_
Liana
3 months
🚀 Thrilled to launch DeepScholar, an openly-accessible DeepResearch system we've been building at Berkeley & Stanford. DeepScholar efficiently processes 100s of articles, demonstrating strong long-form research synthesis capabilities, competitive with OpenAI's DR, while running …
68
490
3K
@Brehove
Brehove
3 months
This is one of the more valuable community notes I’ve seen. The hard thing about calculating externalities around AI is that the technology is changing so quickly.
@Yampeleg
Yam Peleg
3 months
AI datacenters use NO water. NONE. ZERO.
- They pull water in.
- Cool the computers.
- Then immediately pump it out back to the same source.
NO loss. NO “consumption”. NO impact.
“AI draining water” is pure fiction.
0
0
0
@bwradford
bradford
3 months
We actually need more people with philosophy degrees working in tech companies (cc @mbrendan1 @cosmos_inst)
@60Minutes
60 Minutes
3 months
“I spend a lot of time trying to teach the models to be good,” says Amanda Askell, one of Anthropic’s in-house philosophers. https://t.co/zqdJfPy8I2
0
3
17
@stephwakefield_
stephanie wakefield
4 months
Here's my translation of Giorgio Agamben's recent text, "On Artificial Intelligence and Natural Stupidity." Agamben suggests that imagination is the key to the present situation, as it is the critical bond linking individuals with separate intelligence. But he asks, what happens …
15
185
928
@Brehove
Brehove
4 months
I was searching Google Scholar for recent publications on Gregory Bateson, cybernetics, and LLMs, and stumbled across this researcher who seems to be using Deep Research tools to pump out tons of pre-prints and publish them to https://t.co/WuFd8VoyIf. The articles read like AI …
0
1
1
@Brehove
Brehove
4 months
Is there a website or account that tracks all the different ways current LLMs are limited and how to test for them with simple chatbot prompts? If not, it would make an amazing website.
0
0
0
@Brehove
Brehove
4 months
Around the 45-minute mark there’s an excellent discussion of a key flaw in RL: the reward is given to the result, not the process. Right now it’s not easy for companies to reward the right process, so a lot of bizarre behavior gets accidentally reinforced because training is result-oriented.
@dwarkesh_sp
Dwarkesh Patel
4 months
The @karpathy interview
0:00:00 – AGI is still a decade away
0:30:33 – LLM cognitive deficits
0:40:53 – RL is terrible
0:50:26 – How do humans learn?
1:07:13 – AGI will blend into 2% GDP growth
1:18:24 – ASI
1:33:38 – Evolution of intelligence & culture
1:43:43 – Why self …
0
0
2
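To make the outcome-vs-process distinction concrete, here is a minimal Python sketch (purely illustrative, not any lab's actual pipeline; step_is_valid is a hypothetical verifier, and building a reliable one is exactly the hard part the tweet describes):

def outcome_reward(final_answer: str, correct_answer: str) -> float:
    # Rewards the result only: flawed reasoning that lands on the
    # right answer is reinforced as strongly as a sound derivation.
    return 1.0 if final_answer == correct_answer else 0.0

def process_reward(steps: list[str], step_is_valid) -> float:
    # Rewards each intermediate step instead, shifting credit to the
    # process; step_is_valid stands in for the hard-to-build verifier.
    if not steps:
        return 0.0
    return sum(step_is_valid(s) for s in steps) / len(steps)

For example, process_reward(["expand the product", "divide both sides"], lambda s: True) returns 1.0 only because the toy verifier approves every step, while outcome_reward would hand the same 1.0 to a lucky guess.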
@rauchg
Guillermo Rauch
4 months
You can now ship @nextjs apps to @chatgptapp
@vercel_dev
Vercel Developers
4 months
You can now ship ChatGPT apps on Vercel. https://t.co/9Wnp3XTAx3
71
86
1K
@AmericanGwyn
Aaron Gwyn
4 months
I’m on an English department committee that voted to create a subcommittee, and that subcommittee is considering the creation of a sub-subcommittee, and now I understand why the Soviet Union collapsed.
36
44
620
@Brehove
Brehove
5 months
Good read for philosophy folks. In calling LLMs “ghosts,” Andrej is more playful but also more sophisticated. Sutton seems to think AGI happens only after we accurately duplicate natural intelligence in algorithmic fashion. But, as Dwarkesh asked in the pod, what if we’re …
@karpathy
Andrej Karpathy
5 months
Finally had a chance to listen through this pod with Sutton, which was interesting and amusing. As background, Sutton's "The Bitter Lesson" has become a bit of a biblical text in frontier LLM circles. Researchers routinely talk about and ask whether this or that approach or idea …
0
0
1
@Brehove
Brehove
5 months
Anthropic’s definition of an agent
@AnthropicAI
Anthropic
5 months
New on the Anthropic Engineering Blog: Most developers have heard of prompt engineering. But to get the most out of AI agents, you need context engineering. We explain how it works:
0
0
0
@digital_objects
Yuk Hui
5 months
Happy to announce a book dedicated to Kant & AI, appearing in Jan 2026 with Bloomsbury. It addresses:
1) What kinds of machines are intelligent?
2) Are machines capable of being moral?
3) Does an algorithm of perpetual peace exist?
Paperback pre-order: https://t.co/gi6HyMQeXO
15
168
713
@simonw
Simon Willison
5 months
I would use OpenAI Deep Research (and equivalent products from other labs) a whole lot more if I'd seen the full list of tools that are available to them - and ideally their system prompts as well but that's less valuable to me than the tool definitions
1
1
31
@Brehove
Brehove
5 months
This convo makes me really want to see Dwarkesh sit down with John Vervaeke. So much comes up here that Vervaeke tackles through cog-sci and philosophy. @DrJohnVervaeke @dwarkesh_sp
@dwarkesh_sp
Dwarkesh Patel
5 months
.@RichardSSutton, father of reinforcement learning, doesn’t think LLMs are bitter-lesson-pilled. My steel man of Richard’s position: we need some new architecture to enable continual (on-the-job) learning. And if we have continual learning, we don't need a special training …
0
0
1