 
            
Cas (Stephen Casper)
@StephenLCasper
Followers: 6K · Following: 5K · Media: 355 · Statuses: 2K
AI technical gov & risk management research. PhD student @MIT_CSAIL, fmr. @AISecurityInst. I'm on the CS faculty job market! https://t.co/r76TGxSVMb
Joined March 2016
            
           📌📌📌 I'm excited to be on the faculty job market this fall. I updated my website with my CV.  https://t.co/4Ddv6tN0jq 
          
          
            
stephencasper.com
            
                
8 replies · 22 reposts · 172 likes
              
             I worked so, so hard on this piece: It's about OpenAI bringing back erotica, what's been going on with users' mental health, and how it all relates to making AI go well. 
          
                
10 replies · 17 reposts · 112 likes
              
My questions (none of which are clear from their blog post):
• Have the attorneys general approved this plan?
• In what sense will the foundation 'remain in control' of the Public Benefit Corporation, other than the ability to hire and fire PBC directors?
• What will the PBC do to
           LIVE at 10:30am PT: The future of OpenAI and Q&A with @sama and @merettm Bring your questions.  https://t.co/EOvjGJsf0R 
            
          
                
7 replies · 21 reposts · 137 likes
              
I co-wrote this paper with a great team of Chinese and UK colleagues in 2019/20. Just noticed it's had by far its most citations this year. An indicator, I hope, that the appetite for international cooperation on AI safety & governance is stronger than ever.  https://t.co/u7sOjnGzuW 
          
          
            
            link.springer.com
              Philosophy & Technology - Achieving the global benefits of artificial intelligence (AI) will require international cooperation on many areas of governance and ethical standards, while allowing...
            
                
0 replies · 3 reposts · 26 likes
              
I am sad for the family and very disappointed in OpenAI. I can’t imagine how this could be a legally justified request. This isn’t what being a pro-human tech company looks like. 
           OpenAI has sent a legal request to the family of Adam Raine, the 16yo who died by suicide following lengthy chats with ChatGPT, asking for a full attendee list to his memorial, as well as photos taken or eulogies given. His lawyers told the FT this was "intentional harassment" 
            
                
0 replies · 0 reposts · 17 likes
              
          
                
1 reply · 10 reposts · 59 likes
              
             How might the world look after the development of AGI, and what should we do about it now? Help us think about this at our workshop on Post-AGI Economics, Culture and Governance! 
          
                
3 replies · 13 reposts · 61 likes
              
It’ll be co-located with NeurIPS. Our workshop is a separate event, so no need to register for NeurIPS to attend ours! Ours is free but invite-only; please apply here:  https://t.co/M3UtBxUxWL  Co-organized with @jankulveit @raymondadouglas @StephenLCasper and Maria Kostylew 
          
            
            docs.google.com
              This is a non-binding form to express your interest for the second Post-AGI Workshop. It will be held concurrently with NeurIPS in San Diego on December 3, 2025. For more details, see the workshop...
            
                
1 reply · 2 reposts · 10 likes
              
            
                
1 reply · 0 reposts · 7 likes
              
             Our proposal for new AI watermarking characters for Unicode is officially in the document register for proposed additions. 🤞  https://t.co/ScTDQnhGz3 
             https://t.co/yJfp8ezU64 
          
          
                
4 replies · 22 reposts · 95 likes
              
          
                
16 replies · 65 reposts · 235 likes
              
             Why is it common? Well, if you're an AI developer who wants less scrutiny, it's easy to downplay the potential harms of your system by pointing to worse ones. Laundering that idea by talking about "marginal risk" is a convenient industry talking point. 
          
                
1 reply · 0 reposts · 1 like
              
             And that is all easy enough to understand when we think things through. But it takes time and nuance to disentangle the right vs. wrong interpretations. Anecdotally, I think I see the trojan horse version of the "marginal risk" argument fairly often. 
          
                
1 reply · 0 reposts · 0 likes
              
             Worst of all, this idea paves the way for a race to the bottom. If the bar is set at what the worst systems do, it will start low and keep getting lower forever. 
          
                
1 reply · 0 reposts · 1 like
              
This is wrong for a lot of reasons: mitigations matter, access is key, aggregate effects add up, and systemic impacts count. It is entirely possible for an AI system that is strictly safer than alternatives to increase marginal risk depending on how it is deployed and used. 
          
                
1 reply · 0 reposts · 1 like
              
             It introduces a harmful idea: that the AI ecosystem is only as safe as the least safe systems within it. 
          
                
1 reply · 0 reposts · 0 likes
              
             But it is easy to naively interpret the marginal risk argument as saying that we should evaluate systems based on whether they are more dangerous than existing peers or alternatives. This idea sounds reasonable at first, but... 
          
                
1 reply · 0 reposts · 2 likes
              
Properly interpreted, this claim means that we should evaluate the decision to release an AI system in its ecological context. AI safety isn't a model property. This is an important point. 👍 
          
                
1 reply · 0 reposts · 2 likes
              
             🧵🧵🧵 Do you ever hear people saying that it's important to assess AI systems based on their "marginal risk"? Of course -- that's obvious. Nobody would ever dispute that. So then why are we saying that? Maybe it's a little too obvious... 
          
                
1 reply · 0 reposts · 7 likes
              
          
          
                
1 reply · 8 reposts · 41 likes
              
             AI is evolving too quickly for an annual report to suffice. To help policymakers keep pace, we're introducing the first Key Update to the International AI Safety Report. 🧵⬇️ (1/10) 
          
                
17 replies · 89 reposts · 293 likes
              
In defense of OAI’s subpoena practice, @jasonkwon claims this is normal litigation stuff, and since Encode entered the Musk case, @_NathanCalvin can’t complain. As a litigator-turned-OAI-restructuring-critic, I interrogate this claim: 🧵 
           There’s quite a lot more to the story than this. As everyone knows, we are actively defending against Elon in a lawsuit where he is trying to damage OpenAI for his own financial benefit. Encode, the organization for which @_NathanCalvin serves as the General Counsel, was one 
          
                
8 replies · 45 reposts · 273 likes
              
             One Tuesday night, as my wife and I sat down for dinner, a sheriff’s deputy knocked on the door to serve me a subpoena from OpenAI. I held back on talking about it because I didn't want to distract from SB 53, but Newsom just signed the bill so... here's what happened: 🧵 
          
                
324 replies · 1K reposts · 6K likes
              
It draws closely on past work that we did with Kyle O'Brien et al. to mitigate risks from malicious fine-tuning.  https://t.co/us8MEhMrIh 
          
          
                
0 replies · 0 reposts · 4 likes
              
          
                
0 replies · 2 reposts · 4 likes
              