Nora Ammann

@AmmannNora

Followers: 925 · Following: 4K · Media: 17 · Statuses: 2K

Technical Specialist - Safeguarded AI - ARIA https://t.co/aIwOFs2jv7 | Co-founder, ex-Director & Board at https://t.co/GphUSABGT9 | My views are my own ✨🤖🧠

Joined November 2020
@AmmannNora
Nora Ammann
4 years
For those of you who want to be in contact with the fabric of reality: your default state will be confusion. It is the natural state. Your resting state. Internalize it as such. Be confused.
0 replies · 0 retweets · 38 likes
@AmmannNora
Nora Ammann
2 days
Very excited to see this come out, and to be able to support! Beyond the funding itself, the RfP is a valuable resource & a great effort by the @AISecurityInst team! It shows there is a lot of valuable, scientifically rigorous work to be done.
@AISecurityInst
AI Security Institute
2 days
📢 Introducing the Alignment Project: A new fund for research on urgent challenges in AI alignment and control, backed by over £15 million. ▶️ Up to £1 million per project. ▶️ Compute access, venture capital investment, and expert support. Learn more and apply ⬇️
0 replies · 5 retweets · 25 likes
@AmmannNora
Nora Ammann
4 days
RT @MauricBaker: Say the US wants a deal with China on powerful AI. Could we verify that China doesn’t cheat? For the last year, my team p…
0 replies · 32 retweets · 0 likes
@AmmannNora
Nora Ammann
9 days
RT @ChanaMessinger: Incredible.
[image]
0 replies · 4 retweets · 0 likes
@AmmannNora
Nora Ammann
15 days
RT @jankulveit: We're presenting ICML Position "Humanity Faces Existential Risk from Gradual Disempowerment": come talk to us today East E…
0 replies · 16 retweets · 0 likes
@AmmannNora
Nora Ammann
23 days
RT @ai_ctrl: Joe Rogan on superintelligence: "I feel like when people are saying they can control it, I feel like I'm being gaslit. I don't…
0 replies · 45 retweets · 0 likes
@AmmannNora
Nora Ammann
25 days
I've been meaning to ask this for a long time. The opposite of update is… down-date? downwards-update? eh? help ploise.
3 replies · 0 retweets · 4 likes
@AmmannNora
Nora Ammann
26 days
Even irrespective of the object-level timeline discussion: for a solid portfolio of interventions under uncertainty, a good chunk of people should be betting on non-modal worlds!
@S_OhEigeartaigh
Seán Ó hÉigeartaigh
26 days
I'm pretty confident we won't have AGI/country of geniuses in a datacenter within 2 years. I like ai-2027 as a piece of futures work, but I think too many people are treating it as a mainline scenario, rather than unlikely-but-not-impossible. I think this is resulting in too…
0 replies · 0 retweets · 8 likes
@AmmannNora
Nora Ammann
26 days
As AI will increasingly make it possible for anyone to 'vibe code' smart contracts with ease, can we reimagine the social contract as a myriad of ±bilateral contracts between different (collectives of) agents -- a sort of multi-scale social contract?
0 replies · 0 retweets · 4 likes
@AmmannNora
Nora Ammann
27 days
RT @sethlazar: Good thread here. We’re finishing up (ok writing up) a talk I’ve been giving lately: AI personhood w/o sentience (with Ned H…
0 replies · 1 retweet · 0 likes
@AmmannNora
Nora Ammann
1 month
It has always been a bit of a mystery to me that it wasn't obvious a priori that CoT isn't faithful, but I'm glad to see more empirical work coming out that really removes the remaining doubt (I hope!).
@FazlBarez
Fazl Barez
1 month
Excited to share our paper: "Chain-of-Thought Is Not Explainability"! We unpack a critical misconception in AI: models explaining their Chain-of-Thought (CoT) steps aren't necessarily revealing their true reasoning. Spoiler: transparency of CoT can be an illusion. (1/9) 🧵
[image]
1 reply · 0 retweets · 7 likes
@AmmannNora
Nora Ammann
1 month
RT @rocketalignment: Things are getting weird
[image]
0 replies · 167 retweets · 0 likes
@AmmannNora
Nora Ammann
1 month
We are looking for a world-class, dedicated team to develop Safeguarded AI capabilities to enable AI adoption in safety-critical domains with quantitative safety guarantees. Our final & largest funding call is now live! £18m, deadline Oct 1st.
@ARIA_research
ARIA
1 month
📢 £18m grant opportunity in Safeguarded AI: we're looking to catalyse the creation of a new UK-based non-profit to lead groundbreaking machine learning research for provably safe AI. Learn more and apply by 1 October 2025:
0 replies · 2 retweets · 5 likes
@AmmannNora
Nora Ammann
1 month
If you have skills in AI, security, or hardware engineering, and you want to contribute to building these solutions, reach out! Also see the work going on at
flexheg.com
We are an open-source R&D community building next-generation software and hardware that provide trustworthy assurance for AI.
0 replies · 0 retweets · 1 like
@AmmannNora
Nora Ammann
1 month
I believe hardware-enabled verification technologies are the key to enabling this.
@sjgadler
Steven Adler
1 month
Congressman @sethmoulton (D-MA) (wisely imo) backing a contain-and-verify approach: "We have to somehow get to an international framework, a Geneva Convention-like agreement that has a chance, at least, at limiting what our adversaries might do with AI at the extremes."
1 reply · 0 retweets · 3 likes
@AmmannNora
Nora Ammann
1 month
Happy to have contributed to this brief on the potential of verification to help us secure and differentially enable beneficial use cases of AI!
@ScienceBoard_UN
UN Scientific Advisory Board
1 month
📈 As AI advances rapidly, where does the science stand on AI verification? @YoshuaBengio leads a new @ScienceBoard_UN Brief on verifying frontier AI models, spotlighting tools to assess claims and boost global safety. 📘 Read more:
0 replies · 1 retweet · 18 likes
@AmmannNora
Nora Ammann
1 month
Yes! And we need not just Musk but many more thoughtful, high-agency entrepreneurs and creators to think this way.
@soundboy
Ian Hogarth
1 month
I don't understand @elonmusk's framing re there being two options: participant that accelerates, or passive spectator. Feels unusually fatalistic for someone so high-agency. He didn't just accept e.g. climate change - he founded Tesla and changed the direction of the whole system.
1 reply · 1 retweet · 7 likes
@AmmannNora
Nora Ammann
1 month
RT @mer__edith: 'Meredith,' some guys ask, 'why won't you shove AI into Signal?' Because we love privacy, and we love you, and this shit…
0 replies · 248 retweets · 0 likes
@AmmannNora
Nora Ammann
1 month
Read the list of questions! You will see: there is so much we still have to think through regarding which post-AGI trajectories are possible and desirable, and how to steer towards them. Join the discussion, come to our event, and bring your friends from across all the social sciences and AI!
@DavidDuvenaud
David Duvenaud
1 month
It's hard to plan for AGI without knowing what outcomes are even possible, let alone good. So we’re hosting a workshop! Post-AGI Civilizational Equilibria: Are there any good ones? Vancouver, July 14th. Featuring: @jkcarlsmith @RichardMCNgo @eshear 🧵
[image]
0 replies · 0 retweets · 3 likes
@AmmannNora
Nora Ammann
1 month
RT @geoffreyirving: New alignment theory paper! We present a new scalable oversight protocol (prover-estimator debate) and a proof that hon…
0 replies · 55 retweets · 0 likes