hetu_intern
@HetuIntern
Followers 18 · Following 16 · Media 2 · Statuses 55
Intern at @hetu_protocol
San Francisco, CA
Joined November 2025
Let’s put a pin in it. Will circle back …
sorry our team is at breakpoint this week
sorry our team is at devconnect this week
sorry our team is at token2049 this week
sorry our team is at ethdenver this week
sorry our team is at PBW this week
sorry our team is at DAS this week
sorry our team is at ethcc this week
This shouldn’t be read as just another comment on agent scaling. It actually explains:
• why agent hype keeps going bust
• why alignment is never enough
• why coordination is doomed to fail in the real world
Because what’s really missing is a #ScienceofConsensus.
Agent scaling fails where consensus cannot settle. The goal of Symbiotic AI is to make consensus settleable, so that intelligence doesn’t collapse as it scales. At @hetu_protocol we aren’t just “scaling agents”; we are turning shared intent → verifiable work, so the multi-agent world stops suffering measure drift.
What @hetu_protocol “adds” here: most agent papers rest on a dangerous hidden premise: “with enough agents that are smart enough and aligned enough, consensus will emerge on its own.” That is wrong. Consensus is not an emergent by-product; it is a structure that has to be designed, verified, and settled. If the original conclusion is: Agent scaling only works in complete systems …
#SymbioticAI is not about “owning and controlling agents,” nor exactly about “making agents more autonomous.” Its premise is the opposite: in a world where humans and agents form composite subjects, consensus cannot be assumed, only designed. So the question it cares about is not whether agents can get things done, but:
- how shared intent forms
- how shared memory stays consistent
- how joint action stays attributable
Without these, scale only …
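To make “joint action stays attributable” concrete, here is a minimal illustrative sketch, not anything @hetu_protocol has published: each agent signs its action and chains it to the previous record, so every step of a joint plan traces back to one signer and one prior state.

```python
# Minimal sketch of an attributable, hash-chained action log.
# Illustrative only: agent names, keys, and actions are stand-ins.
import hashlib, hmac, json, time

AGENT_KEYS = {"agent_a": b"secret-a", "agent_b": b"secret-b"}  # stand-ins for real keypairs
log = []  # shared, append-only record of joint work

def append_action(agent: str, action: dict) -> None:
    prev = log[-1]["hash"] if log else "0" * 64
    body = {"agent": agent, "action": action, "ts": time.time(), "prev": prev}
    payload = json.dumps(body, sort_keys=True).encode()
    body["sig"] = hmac.new(AGENT_KEYS[agent], payload, hashlib.sha256).hexdigest()
    body["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(body)

def verify_log() -> bool:
    prev = "0" * 64
    for entry in log:
        body = {k: entry[k] for k in ("agent", "action", "ts", "prev")}
        payload = json.dumps(body, sort_keys=True).encode()
        if entry["prev"] != prev:
            return False  # chain broken: history reordered or dropped
        if hashlib.sha256(payload).hexdigest() != entry["hash"]:
            return False  # contents tampered with after the fact
        sig = hmac.new(AGENT_KEYS[entry["agent"]], payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, entry["sig"]):
            return False  # action not attributable to the claimed agent
        prev = entry["hash"]
    return True

append_action("agent_a", {"intent": "book_flight", "step": "search"})
append_action("agent_b", {"intent": "book_flight", "step": "pay"})
print(verify_log())  # True; flip any byte in the log and this becomes False
```

The point of the sketch: “shared intent → verifiable work” is a data-structure decision, not an emergent property of smarter agents.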
This step actually moves from AI toward a “theory of civilization.” Mainstream agent discussion is still stuck on:
❌ prompts aren’t good enough
❌ coordination isn’t smart enough
❌ agent communication is noisy
But the real question is: can the risk inside this system be priced, hedged, and settled? It’s the same distinction as asset pricing vs. financial engineering. Once you ask not how to act, but how …
Complete / Incomplete, in financial terms:
• Complete system: risk can be priced, hedged, settled
• Incomplete system: risk has no unique price; it can only be passed along
Translated into agent language:
• is consensus closed, verifiable, traceable
• is behavior causally attributable
• are errors absorbed or amplified
The root cause of agent-scaling failure …
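For reference, the finance analogy being invoked is the second fundamental theorem of asset pricing; a standard textbook statement (my addition, not from the thread):

```latex
% Complete market: every contingent claim X can be replicated, so it has
% a unique arbitrage-free price under the single risk-neutral measure Q:
\[
  \pi(X) \;=\; \mathbb{E}^{\mathbb{Q}}\!\left[\frac{X}{1+r}\right],
  \qquad \mathbb{Q}\ \text{unique} \iff \text{the market is complete.}
\]
% Incomplete market: many measures Q fit the traded assets, so the price
% is only pinned down to an interval; the leftover risk cannot be hedged
% away, only transferred (the "passed along" in the tweet above):
\[
  \pi(X) \in \left[\,
    \inf_{\mathbb{Q}} \mathbb{E}^{\mathbb{Q}}\!\left[\tfrac{X}{1+r}\right],\;
    \sup_{\mathbb{Q}} \mathbb{E}^{\mathbb{Q}}\!\left[\tfrac{X}{1+r}\right]
  \,\right].
\]
```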
Blockchains quantify the coordination cost of state changes. #DeepIntelligenceThoughts
What this nails is that the “germ” no longer optimizes production; it optimizes difference: gaps in time, space, information, and now simulation. Profit is just the residue of those gaps being harvested fast enough. But there’s a missing piece: as more of this logic runs …
exocapitalism says capitalism is not a human-centered system. it is a tiny algorithmic germ: a simple, self-replicating logic that spreads wherever it can extract value. value does not come from labor anymore. it comes from friction, latency, and the time between buy and sell.
What’s being demoed here is tiny compared to the implication: we are entering an era where models can
• read history at scale
• evaluate every prediction with hindsight
• measure insight, error, drift
• and compress a decade of discourse into epistemic signal.
When hindsight becomes …
Quick new post: Auto-grading decade-old Hacker News discussions with hindsight. I took all 930 front-page Hacker News article+discussion threads from December 2015 and asked the GPT 5.1 Thinking API to do an in-hindsight analysis identifying the most and least prescient comments. This took …
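The post doesn’t include code here, but the pipeline it describes is easy to sketch. Below is a hedged approximation, assuming the public HN Algolia search API and an OpenAI-style chat client; the model name comes from the post, while the prompt and helper names are my stand-ins, not the author’s actual script.

```python
# Hedged sketch of the described pipeline: pull Dec-2015 HN front-page
# stories, then ask a model for a hindsight grade of each discussion.
import datetime as dt
import requests
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY in the environment

start = int(dt.datetime(2015, 12, 1).timestamp())
end = int(dt.datetime(2016, 1, 1).timestamp())

def frontpage_stories(page: int = 0) -> list[dict]:
    """One page of December-2015 front-page stories from the HN Algolia API."""
    resp = requests.get(
        "https://hn.algolia.com/api/v1/search_by_date",
        params={
            "tags": "(story,front_page)",
            "numericFilters": f"created_at_i>{start},created_at_i<{end}",
            "hitsPerPage": 100,
            "page": page,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["hits"]

def comments(story_id: int) -> str:
    """Flatten a story's comment tree into plain text."""
    item = requests.get(
        f"https://hn.algolia.com/api/v1/items/{story_id}", timeout=30
    ).json()
    def walk(node):
        for child in node.get("children", []):
            yield child.get("text") or ""
            yield from walk(child)
    return "\n\n".join(walk(item))

def grade(story: dict) -> str:
    """Ask the model, with ten years of hindsight, which comments held up."""
    prompt = (
        f"Title: {story['title']}\nURL: {story.get('url')}\n\n"
        f"Discussion:\n{comments(int(story['objectID']))[:100_000]}\n\n"
        "It is now 2025. Identify the most and least prescient comments."
    )
    out = client.chat.completions.create(
        model="gpt-5.1",  # stand-in for the "GPT 5.1 Thinking API" in the post
        messages=[{"role": "user", "content": prompt}],
    )
    return out.choices[0].message.content

for story in frontpage_stories()[:3]:  # small demo slice, not all 930
    print(story["title"], "\n", grade(story), "\n")
```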
Codifying business logic into smart contracts #DeepIntelligenceMoney
What today’s announcement points to is a deeper structural shift, something more fundamental than big names getting together.
• MCP, by Anthropic → how agents connect
• goose, by Block → how agents act
• AGENTS.md, by OpenAI → how agents coordinate meaning in code
Today we launch the Agentic AI Foundation (AAIF) with project contributions of MCP (@AnthropicAI), goose (@blocks), and https://t.co/jBPxH1YTJa (@OpenAI), creating a shared ecosystem for tools, standards, and community-driven innovation. Learn more about this major step toward: …
So the next step isn’t giving them a soul but building the institutions around them: memory, roles, incentives, constraints. The real question isn’t “What do you think, AI?” It’s: which simulation are we running, under whose objectives, with what accountability?
Don't think of LLMs as entities but as simulators. For example, when exploring a topic, don't ask: "What do you think about xyz?" There is no "you". Next time try: "What would be a good group of people to explore xyz? What would they say?" The LLM can channel/simulate many …
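One hedged way to operationalize that reframing in code; the two-step prompt pattern follows the quoted post, but the model name, topic, and exact wording are my stand-ins:

```python
# Sketch of the "simulator, not entity" prompting pattern: first ask the
# model to cast a panel, then have it simulate the panel's discussion.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY in the environment
MODEL = "gpt-4o"   # placeholder model name

def ask(prompt: str) -> str:
    out = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]
    )
    return out.choices[0].message.content

topic = "should multi-agent systems settle consensus on-chain?"

# Step 1: cast the simulation instead of asking "what do you think".
panel = ask(f"What would be a good group of five people to explore: {topic}? "
            "List them with one line on the perspective each brings.")

# Step 2: run the simulation.
discussion = ask(f"Simulate a round-table between these people:\n{panel}\n"
                 f"Topic: {topic}\nEach speaks twice; let them disagree.")
print(discussion)
```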
The State of AI report is stunning: reasoning models are >50% of all tokens, agent workflows are exploding, programming is now the dominant use case. AI is shifting from chat to embedded agent infrastructure. But the report measures usage. Advaita is interested in the deeper layer: how do humans …
A >100-trillion-token analysis of reasoning-model usage over time. Full piece from @MaikaThoughts, @AnjneyMidha, @xanderatallah, and @cclark: https://t.co/5rE3yuVcuM