
Seth Lazar
@sethlazar
Followers 7K · Following 6K · Media 73 · Statuses 2K
Professor working on AI safety, governance and resilience. Now: ANU Philosophy. Soon: Johns Hopkins Government and Policy. Visiting Faculty at Google.
Australia, for now...
Joined August 2011
I hope as we move past the first wave of AI criticism ("it doesn't work, all hype") we get a new wave of AI criticism rooted in the fact that these systems are very powerful & quite useful, and focusing on a deep exploration of when AI uses are uplifting and when they are detrimental
35
50
380
Interested in policy research on cutting-edge AI? We’ve got the job opening for you! We are currently seeking applicants for a Research or Senior Fellow to help guide our research on advanced AI systems. See more below 👇 https://t.co/Oi6nBMjxKl
cset.georgetown.edu
The Center for Security and Emerging Technology (CSET) is currently seeking candidates to lead our Frontier AI research efforts, either as a Research Fellow or Senior Fellow (depending on experienc...
1
3
9
If you're interested in gradual disempowerment, consider applying to work with ACS (@jankulveit and @raymondadouglas):
0
5
27
Huh, I guess I went from "agents are meaningless jargon hype that's never going to happen" in January to "Claude Code is a General Agent" in October
Claude Skills are awesome, maybe a bigger deal than MCP https://t.co/1wIYcTFrzI
29
41
834
Very helpful overview. Also along the way Simon revisits and recants his early 2025 scepticism about the year of agents (under some description).
Claude Skills are awesome, maybe a bigger deal than MCP https://t.co/1wIYcTFrzI
0
0
3
📣New paper: Rigorous AI agent evaluation is much harder than it seems. For the last year, we have been working on infrastructure for fair agent evaluations on challenging benchmarks. Today, we release a paper that condenses our insights from 20,000+ agent rollouts on 9
17
87
383
If you assume a massive reduction in principal-agent problems across the board, what are areas of life where we've been relying on frictions as features rather than bugs? As usual I think AI will force us to reconsider the design of systems we rely on, and making explicit what we
Gradually, I expect AI to be increasingly used to automate monitoring and enforcement of laws. If done well, this will ultimately be a net positive, helping address issues of poor state capacity and inconsistency. But as usual the transition will likely be messy. Many outdated
4
6
65
This is really good. Describes succinctly what I most hope we’ll achieve at Hopkins school of gov and policy—to be friends of the future, and build its new ways of acting together.
I've been reflecting on a lot lately--too much to write separate pieces. So I have combined musings on preemption, AI politics, institutional transformation, the future of science, Italy, and a recent AI conference called The Curve. I hope you enjoy.
0
1
12
I've been reflecting on a lot lately--too much to write separate pieces. So I have combined musings on preemption, AI politics, institutional transformation, the future of science, Italy, and a recent AI conference called The Curve. I hope you enjoy.
5
6
68
In our Nature article, @yaringal and I outline how building the technical toolkit for open-weight AI model safety will be key to both accessing the benefits and mitigating the risks of powerful open models. https://t.co/t9JoHc25a9
2
9
50
Consistently so impressed with this group. Urgent, important work being executed with great distinction. Encourage anyone with related interests to follow and get involved.
We’re building the academy for philosopher-builders—people with both technical ability and moral vision to steer AI toward human flourishing. Here are a few September highlights from the @cosmos_inst community 🧵
1
3
21
Anything would be better than this obvious facade—lazy, overworked reviewers either relying on clumsy heuristics or just passing on the verdict from some commercial model.
0
0
1
At the very least, I hope *someone* is working on evaluating LLMs at peer review.
1
0
4
Program chairs should take over responsibility for initial technical review, using a validated, scaffolded LLM approach. Human reviewers should be asked to assess the work’s significance/news value. Human reviewers for taste. Good LLMs for applying methodological standards.
2
0
5
I think we’re being luddites about this. The best models, prompted well, using tools, can do a *much* better job of technical review (and can be structured to give more diverse feedback than most CS reviewers, esp if they just use commercial models and don’t do their job).
3
0
5
CS conference peer review is hilariously broken. A free opportunity to hear what two-three LLMs think about your paper (extended thinking OFF). Nice when they agree…
5
1
57
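The multi-LLM paper-review idea in the thread above can be sketched minimally. Everything here is an illustrative assumption, not an actual pipeline: the rubric, the verdict format, and the helper names are invented, and the calls to the models themselves are deliberately left out.

```python
# Hypothetical sketch of the "ask two-three LLMs about your paper" workflow.
# The rubric and verdict labels are assumptions for illustration; sending
# build_review_prompt() to each model and collecting replies is omitted.

def build_review_prompt(paper_text: str) -> str:
    """Assemble a reviewer-style prompt (rubric is illustrative, not a standard)."""
    rubric = (
        "Assess: (1) soundness of methods, (2) clarity of claims, "
        "(3) whether the evidence supports the conclusions. "
        "End with a one-line verdict: ACCEPT, REVISE, or REJECT."
    )
    return f"You are a technical reviewer.\n{rubric}\n\nPAPER:\n{paper_text}"

def tally_verdicts(reviews: list[str]) -> dict[str, int]:
    """Count the final verdict line from each model's review text."""
    counts = {"ACCEPT": 0, "REVISE": 0, "REJECT": 0}
    for review in reviews:
        last_line = review.splitlines()[-1]
        for verdict in counts:
            if verdict in last_line:
                counts[verdict] += 1
                break
    return counts
```

Comparing where independent models agree is the point of the exercise: agreement across two or three models is a cheap signal about methodological issues, while disagreement flags where human taste and judgment are still needed.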
This also misses one of the major benefits of AI for education—something @jliemandt @mackenzieprice are pioneering @AlphaSchoolATX. It’s the idea of using AI to create more time for the irreducibly human work of moral formation. AI can compress 6 hours of content delivery into
Man, this is a bad take. Education is not only, or even especially, about utility (especially today!). It’s about enlarging the range of things a person can care about + enabling you to become the person you’re capable of becoming. Children don’t automatically intuit that
3
3
18
I knew someone who was trained as a revolutionary guard in Iran and the first thing they told him was "everything we do is to accelerate the coming of the Imam of Time; no destruction is not worth this outcome." When I hear (some) hyper deterministic Silicon Valley techies I feel
19
25
335
There will definitely be roles for political philosophers too—whether working on AI or not—so please share to those groups as well.
1
1
6