
Nora Ammann (@AmmannNora)
925 Followers · 4K Following · 17 Media · 2K Statuses
Technical Specialist - Safeguarded AI - ARIA https://t.co/aIwOFs2jv7 Co-founder, ex-Director & Board at https://t.co/GphUSABGT9 My views are my own ✨🤖🧠
Joined November 2020

For those of you who want to be in contact with the fabric of reality, your default state will be confusion. It is the natural state. Your resting state. Internalize it as such. Be confused.
Very excited to see this come out, and to be able to support it! Beyond the funding itself, the RfP is a valuable resource & a great effort by the @AISecurityInst team! It shows there is a lot of valuable, scientifically rigorous work to be done.
📢 Introducing the Alignment Project: a new fund for research on urgent challenges in AI alignment and control, backed by over £15 million.
▶️ Up to £1 million per project.
▶️ Compute access, venture capital investment, and expert support.
Learn more and apply ⬇️
RT @MauricBaker: Say the US wants a deal with China on powerful AI. Could we verify that China doesn’t cheat? For the last year, my team p…
RT @jankulveit: We're presenting ICML Position "Humanity Faces Existential Risk from Gradual Disempowerment": come talk to us today East E…
RT @ARIA_research: Trust ‘building blocks’, like encryption, have enabled today's digital industries. In our latest opportunity space, PD @…
aria.org.uk
Trust 'building blocks', like encryption, enable digital industries to flourish securely, but they don't extend into the physical world. With emerging technology blurring the line between digital and...
RT @ai_ctrl: Joe Rogan on superintelligence: "I feel like when people are saying they can control it, I feel like I'm being gaslit. I don't…
Even irrespective of the object-level timeline discussion, for a solid portfolio of interventions under uncertainty, a good chunk of people should be betting on non-modal worlds!
I'm pretty confident we won't have AGI/a country of geniuses in a datacenter within 2 years. I like ai-2027 as a piece of futures work, but I think too many people are treating it as a mainline scenario, rather than as unlikely-but-not-impossible. I think this is resulting in too…
RT @sethlazar: Good thread here. We’re finishing up (ok writing up) a talk I’ve been giving lately: AI personhood w/o sentience (with Ned H…
It has always been a bit of a mystery to me that it wasn't obvious a priori that CoT isn't faithful, but I'm glad to see more empirical work coming out that really removes the remaining doubt (I hope!).
Excited to share our paper: "Chain-of-Thought Is Not Explainability"! We unpack a critical misconception in AI: models explaining their Chain-of-Thought (CoT) steps aren't necessarily revealing their true reasoning. Spoiler: transparency of CoT can be an illusion. (1/9) 🧵
We are looking for a world-class, dedicated team to develop Safeguarded AI capabilities that enable AI adoption in safety-critical domains with quantitative safety guarantees. Our final & largest funding call is now live: £18m, deadline Oct 1st.
📢 £18m grant opportunity in Safeguarded AI: we're looking to catalyse the creation of a new UK-based non-profit to lead groundbreaking machine learning research for provably safe AI. Learn more and apply by 1 October 2025:
If you have skills in AI, security, or hardware engineering, and you want to contribute to building these solutions, reach out! Also see the work going on at
flexheg.com
We are an open-source R&D community building next-generation software and hardware that provide trustworthy assurance for AI.
I believe hardware-enabled verification technologies are the key to enabling this.
Congressman @sethmoulton (D-MA), wisely imo, backing a contain-and-verify approach: "We have to somehow get to an international framework, a Geneva Convention-like agreement that has a chance, at least, at limiting what our adversaries might do with AI at the extremes."
Happy to have contributed to this brief on the potential of verification to help us secure and differentially enable beneficial use cases of AI!
📈 As AI advances rapidly, where does the science stand on AI verification? @YoshuaBengio leads a new @ScienceBoard_UN Brief on verifying frontier AI models, spotlighting tools to assess claims and boost global safety. 📘 Read more:
Yes! And we need not just Musk but many more thoughtful, high-agency entrepreneurs and creators to think this way.
I don't understand @elonmusk's framing re there being two options: a participant that accelerates, or a passive spectator. Feels unusually fatalistic for someone so high-agency. He didn't just accept e.g. climate change - he founded Tesla and changed the direction of the whole system.
RT @mer__edith: 'Meredith,' some guys ask, 'why won't you shove AI into Signal?' Because we love privacy, and we love you, and this shit…
Read the list of questions! You will see: there is so much we still have to think through regarding what post-AGI trajectories are possible and desirable, & how to steer towards them. Join the discussion, come to our event, & bring your friends from across all the social sciences and AI!
It's hard to plan for AGI without knowing what outcomes are even possible, let alone good. So we’re hosting a workshop! Post-AGI Civilizational Equilibria: Are there any good ones? Vancouver, July 14th. Featuring: @jkcarlsmith @RichardMCNgo @eshear 🧵
RT @geoffreyirving: New alignment theory paper! We present a new scalable oversight protocol (prover-estimator debate) and a proof that hon…