ohactually steve
@steve__actually
Followers 97 · Following 173 · Media 80 · Statuses 2K
AI social alignment is a high-risk shift. Evaluating its risks may matter most. Some thoughts→ https://t.co/hz3uOhhkfp
Joined November 2022
Meta: integrate AI into existing platforms. OpenAI: rebuild the platform around AI. Which path will the future take? The retweeted thread shows how a butterfly effect could happen at any moment. More perspectives can be found in the replies to this tweet.
The AI industry hits a structural ceiling: without a social alignment layer beyond models, ecosystem, revenue, and model growth stall. 1. Real-world limits: Beyond ghostwriting or opening existing apps, AI cannot participate in real online life. Attempts to build an AI OS or
AI competition is shifting from better models to who can align with people in real use. For OpenAI, 2026 may be the critical window. GPT’s edge is language and social reasoning—but only if this turns into fundamentally new ways people use AI. Otherwise, model-level advantages
For AI to be truly useful, it needs to understand you. With Personal Intelligence, we’re beginning to solve this. With your permission, Gemini can now securely reason across your own data to answer questions that generic models simply can't - like suggesting plans based on
Resource constraints forced hard trade-offs in 2025. The underlying bottleneck wasn’t model capability—it was the lack of AI social infrastructure creating financial pressure. Breakthroughs in 2026 model capabilities won’t automatically remove this bottleneck.
Mark Chen summarizes the goal of OpenAI's research roadmap: "We’re very excited about our 2026 roadmap and advancing work toward an automated scientist." Jakub Pachocki said the same thing in September (see below). Consumer applications are not the goal. Slop videos are not
AI models are becoming modular, plug-and-play components. They already exist as interchangeable modules inside agent generators. Model capability alone is no longer sufficient for durable user lock-in.
This builds on a point you hinted at: the real bottleneck is no longer model capability, but the absence of a social alignment layer operating within existing legal and social frameworks. Framed this way, many AI safety problems become tractable. @sebkrier
AI social infrastructure is inevitable. 2026 is the last branching point. Humans will soon operate multiple Contextual Agent Roles (CARs) — units that are identifiable, accountable, and context-bound. Pre-AI analogy: A client hires 5 lawyers for 3 matters; each lawyer is
Expanding on the idea of AI social infrastructure and Contextual Agent Roles (CARs)—would love your thoughts @ailexsi
AI social infrastructure gaps limit AI model monetization. If such infrastructure had existed in 2025, projects like Sora might have been successfully commercialized. @billpeeb
Here’s a full framework on AI social infrastructure & Contextual Agent Roles (CARs), expanding on the previous discussion: @anshgupta
AI social infrastructure is inevitable; 2026 is the last branching point. Full analysis of Contextual Agent Roles & Hybrid Contextualized Social Harness (HCSH) → https://t.co/C6Gw1b3PYA
Great insights are often overlooked when they first appear. This may later be seen as a pivotal 2025 observation on NVIDIA’s role in robotics.
@gdb AI systems are already acting as social actors — but we still record their outputs as stateless computational traces. This mismatch is becoming a structural failure point for companies like OpenAI and Meta in 2026. This is an open area requiring research. 2026 may be the
The market may be underestimating this acquisition. After Chinese models went open-source under MIT licenses, Meta’s AI strategy appears to be shifting toward a central hub aggregating multiple models’ capabilities, potentially constraining the ecosystem for the ChatGPT app store and its integration tools.
Manus, integrated into Meta’s small business services, is rapidly capturing the enterprise market, forcing competitors to compete on price. GPT’s enterprise focus adds consumer-side pressure. #keep4o continues to grow.
Fun fact: Manus is currently SOTA on the Remote Labor Index (RLI) benchmark that @scale_AI and @cais released earlier this year. https://t.co/Z4p3zCxg1k
@ShaneLegg @FryRsquared Preparing for AGI isn’t just about job loss — it’s about role change. Humans won’t be replaced; they’ll manage AI agents. But that transition only works if we build a social alignment layer that sits above individual models.
@johnschulman2 AI systems seem to be missing a layer analogous to organizations — a social alignment layer beyond individual models. Without it, we keep trying to train ever more general, all-in-one intelligence inside a single model.
@dwarkesh_sp @pawtrammell 1/ The world could evolve differently: AI doing near-infinite “slave labor” could make almost everything free for humans. Value would come from new social interactions, and traditional capital could lose most of its importance. 2/ The problem today is we lack the infrastructure
💬What’s your opinion? Thoughts on BSP governance? DM me.
7/ Long-term Governance Focus
My focus is long-term governance, especially after the patents expire. The real question: who should run the BSP, under what structure, and with what safeguards, to ensure a safe and fair modular Web in the AI era?
6/ Accelerating Ecosystem Growth
If another party is better suited to operate the BSP, I plan to let them become the patent owner to support innovation, while creating an advisory council with rights to be informed and to advise on governance.
5/ Patents & Responsibility
I hold foundational patents on this layer — they are technically unavoidable. This gives me leverage in shaping the ecosystem, and also a responsibility.