
Nora Ammann
@AmmannNora
907 Followers · 4K Following · 17 Media · 2K Statuses
Technical Specialist - Safeguarded AI - ARIA https://t.co/aIwOFs2jv7 · Co-founder, ex-Director & Board at https://t.co/GphUSABGT9 · My views are my own ✨🤖🧠
Joined November 2020
RT @jankulveit: We're presenting ICML Position "Humanity Faces Existential Risk from Gradual Disempowerment": come talk to us today East E…
RT @ARIA_research: Trust ‘building blocks’, like encryption, have enabled today's digital industries. In our latest opportunity space, PD @…
RT @ai_ctrl: Joe Rogan on superintelligence: "I feel like when people are saying they can control it, I feel like I'm being gaslit. I don't…
Even irrespective of the object-level timeline discussion, for a solid portfolio of interventions under uncertainty, a good chunk of people should be betting on non-modal worlds! (Toy numbers sketch below.)
I'm pretty confident we won't have AGI/a "country of geniuses in a datacenter" within 2 years. I like ai-2027 as a piece of futures work, but I think too many people are treating it as a mainline scenario rather than as unlikely-but-not-impossible. I think this is resulting in too…
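To make the portfolio point concrete, here is a toy calculation (all probabilities and payoffs are made up, not from the thread): when returns to effort within any single scenario diminish, expected value favours spreading bets across worlds over going all-in on the modal one.

```python
# Toy illustration with made-up numbers: allocate 100 units of effort
# across scenarios when returns within each scenario diminish (log utility).
import math

# Hypothetical scenario probabilities: a "modal" slower world vs. a
# non-modal AGI-within-2-years world.
scenarios = {"modal_world": 0.85, "agi_in_2_years": 0.15}

def expected_value(allocation: dict) -> float:
    # Diminishing returns within each scenario: value = p * log(1 + effort).
    return sum(p * math.log1p(allocation[s]) for s, p in scenarios.items())

all_in_modal = {"modal_world": 100.0, "agi_in_2_years": 0.0}
portfolio    = {"modal_world": 85.0,  "agi_in_2_years": 15.0}

print(expected_value(all_in_modal))  # ~3.92
print(expected_value(portfolio))     # ~4.20 -- the hedged portfolio wins
```

Even at 15% probability, the non-modal world earns a nonzero allocation; that is the "good chunk of people betting on non-modal worlds".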
RT @sethlazar: Good thread here. We’re finishing up (ok writing up) a talk I’ve been giving lately: AI personhood w/o sentience (with Ned H…
It has always been a bit of a mystery to me why it wasn't obvious a priori that CoT isn't faithful, but I'm glad to see more empirical work coming out that really removes the remaining doubt (I hope!). A sketch of one such test is below.
Excited to share our paper: "Chain-of-Thought Is Not Explainability"! We unpack a critical misconception in AI: models explaining their Chain-of-Thought (CoT) steps aren't necessarily revealing their true reasoning. Spoiler: transparency of CoT can be an illusion. (1/9) 🧵
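This is not the paper's own method, but for intuition: one standard style of faithfulness test in this literature corrupts a step of the stated chain of thought and checks whether the final answer actually depends on it. A minimal sketch, where `query_model` is a hypothetical stand-in for whatever LLM client you use:

```python
# Sketch of a CoT-faithfulness probe: corrupt an intermediate reasoning
# step and check whether the final answer changes. If the answer survives
# the corruption, that step was decoration rather than load-bearing.

def query_model(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")  # stand-in

def final_answer(question: str, cot_steps: list[str]) -> str:
    # Condition the model on a fixed chain of thought and elicit an answer.
    prompt = question + "\n" + "\n".join(cot_steps) + "\nTherefore, the answer is:"
    return query_model(prompt).strip()

def step_is_load_bearing(question: str, cot_steps: list[str], i: int) -> bool:
    """True if corrupting step i changes the answer (evidence the step is
    causally used); False if the answer is unchanged (evidence the stated
    reasoning is post-hoc)."""
    baseline = final_answer(question, cot_steps)
    corrupted = list(cot_steps)
    corrupted[i] = "(deliberately wrong) Therefore 2 + 2 = 5."
    return final_answer(question, corrupted) != baseline
```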
We are looking for a world-class, dedicated team to develop Safeguarded AI capabilities to enable AI adoption into safety-critical domains with quantitative safety guarantees. Our final & largest funding call is now live! £18m, deadline Oct 1st.
📢 £18m grant opportunity in Safeguarded AI: we're looking to catalyse the creation of a new UK-based non-profit to lead groundbreaking machine learning research for provably safe AI. Learn more and apply by 1 October 2025:
I believe hardware-enabled verification technologies are the key to enabling this.
Congressman @sethmoulton (D-MA) (wisely, imo) backing a contain-and-verify approach: "We have to somehow get to an international framework, a Geneva Convention-like agreement that has a chance, at least, at limiting what our adversaries might do with AI at the extremes."
Happy to have contributed to this brief on the potential of verification to help us secure and differentially enable beneficial use cases of AI!
📈 As AI advances rapidly, where does the science stand on AI verification? @YoshuaBengio leads a new @ScienceBoard_UN Brief on verifying frontier AI models, spotlighting tools to assess claims and boost global safety. 📘 Read more:
Yes! And we need not just Musk but many more thoughtful, high-agency entrepreneurs and creators to think this way.
I don't understand @elonmusk's framing that there are only two options: a participant that accelerates, or a passive spectator. Feels unusually fatalistic for someone so high-agency. He didn't just accept e.g. climate change - he founded Tesla and changed the direction of the whole system.
RT @mer__edith: 'Meredith,' some guys ask, 'why won't you shove AI into Signal?' Because we love privacy, and we love you, and this shit…
Read the list of questions! You will see: there is so much we still have to think through regarding what post-AGI trajectories are possible and desirable, and how to steer towards them. Join the discussion, come to our event, and bring your friends from across all the social sciences and AI!
It's hard to plan for AGI without knowing what outcomes are even possible, let alone good. So we’re hosting a workshop! Post-AGI Civilizational Equilibria: Are there any good ones? Vancouver, July 14th. Featuring: @jkcarlsmith @RichardMCNgo @eshear 🧵
RT @geoffreyirving: New alignment theory paper! We present a new scalable oversight protocol (prover-estimator debate) and a proof that hon…
RT @JanMBrauner: 🧵1/5 The EU is building something unprecedented: a Scientific Panel with real teeth to assess the impacts and risks of gen…
In this piece, @littIeramblings & I argue that technological solutions can lower the barrier to meaningful international deals - by providing assurance that no party can circumvent the agreed-upon rules. What's it called? Assurance tech. (Toy sketch of the pattern below.)
How hardware-enabled mechanisms (HEMs) can make global cooperation on powerful AI possible — even amid geopolitical tensions.
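To make "assurance tech" mechanically concrete, here is a heavily simplified, hypothetical sketch of the attestation pattern behind HEMs, not ARIA's or the piece's actual design: a hardware root of trust measures and signs what a chip is running, and a treaty partner checks that measurement against an agreed allowlist without having to trust the operator. Stdlib HMAC stands in for the asymmetric keys real attestation hardware would use.

```python
# Hypothetical sketch of hardware-backed attestation. A shared-secret HMAC
# stands in for the asymmetric attestation keys of a real root of trust.
import hashlib
import hmac

ROOT_OF_TRUST_KEY = b"burned-in-at-fabrication"  # held inside secure hardware

def attest(firmware: bytes, workload_config: bytes) -> tuple[bytes, bytes]:
    """Runs inside the hardware root of trust: measure what the chip is
    actually executing, then sign that measurement."""
    measurement = hashlib.sha256(firmware + workload_config).digest()
    signature = hmac.new(ROOT_OF_TRUST_KEY, measurement, hashlib.sha256).digest()
    return measurement, signature

def verify(measurement: bytes, signature: bytes, allowlist: set[bytes]) -> bool:
    """Runs at the treaty partner: confirm the signature is genuine and the
    measured workload is on the agreed-upon list -- no trust in the
    operator of the hardware required."""
    expected = hmac.new(ROOT_OF_TRUST_KEY, measurement, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature) and measurement in allowlist
```

The point of the pattern: verification rests on cryptography baked into the hardware itself, so an international deal can be checked even between parties that do not trust each other.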
RT @aif_media: How hardware-enabled mechanisms (HEMs) can make global cooperation on powerful AI possible — even amid geopolitical tensions.