
hardmaru
@hardmaru
Followers: 363K · Following: 144K · Media: 4K · Statuses: 26K
Building @SakanaAILabs 🧠
Minato-ku, Tokyo
Joined November 2014
"Creating a frontier AI company in Japan." Sakana AI held our first-ever Open House event, and I gave opening remarks along the following lines: I came to Japan from Canada more than ten years ago and was immersed in cutting-edge deep learning research at Google, so I wanted to share the story of why I ended up founding Sakana AI.
On August 7, Sakana AI held its first Open House for Applied Research Engineers. We have published a report on the event, which drew 70 attendees on site and over 200 online. https://t.co/1jDRzzzcCN At the event, two of our co-founders also took the stage to discuss how we keep research and business moving in tandem, and how Japan and the world…
8 replies · 32 reposts · 326 likes
Sakana AI's Team Unagi (which Akiba-san is part of) won the programming contest called ICFP by making heavy use of ShinkaEvolve (an agent that evolves itself automatically through evolutionary search). Congratulations! ICFP apparently has no restrictions on tools, so anything goes, including using AI to the fullest. But Akiba-san was originally…
An article revealing that at the ICFP Programming Contest 2025, which we won the other day, we were in fact making heavy use of Sakana AI's AI tool "ShinkaEvolve"! What struck me was not just that it automatically improved our score, but how much we learned from the AI's ideas. The unpolished, pre-translation Japanese version is on my personal blog: https://t.co/c2LqFUwi0F
0 replies · 9 reposts · 76 likes
Sakana AI has just leveraged their evolutionary code optimization system, ShinkaEvolve, to earn the 1st prize at @icfpcontest2025 🏆 ShinkaEvolve enabled up to a 10x speedup by evolving clever SAT encodings, unlocking solutions to far larger and more complex problems than
Competitive programmers collaborated with Sakana AI’s ShinkaEvolve to win 1st place in the 2025 ICFP Programming Contest 👏 https://t.co/okBe1uI2Zl What happens when competitive programming experts master a cutting-edge AI tool? Team Unagi, which includes Sakana AI's Research
0 replies · 9 reposts · 84 likes
Kenneth Stanley and Joel Lehman at Sakana AI, giving a talk about their books. :)
0 replies · 8 reposts · 62 likes
An article revealing that at the ICFP Programming Contest 2025, which we won the other day, we were in fact making heavy use of Sakana AI's AI tool "ShinkaEvolve"! What struck me was not just that it automatically improved our score, but how much we learned from the AI's ideas. The unpolished, pre-translation Japanese version is on my personal blog: https://t.co/c2LqFUwi0F
iwiwi.hatenablog.com
Our Team Unagi managed to win the ICFP Programming Contest 2025. During the contest, I tried making use of ShinkaEvolve, an AI tool developed by my employer, Sakana AI. In this article, I describe how ShinkaEvolve contributed to our effort during the contest…
Competitive programmers collaborated with Sakana AI’s ShinkaEvolve to win 1st place in the 2025 ICFP Programming Contest 👏 https://t.co/okBe1uI2Zl What happens when competitive programming experts master a cutting-edge AI tool? Team Unagi, which includes Sakana AI's Research
2 replies · 58 reposts · 271 likes
Stoked to see ShinkaEvolve help @SakanaAILabs's @iwiwi and Team Unagi win the 2025 ICFP Programming Contest 🎉 ShinkaEvolve is an LLM-driven evolutionary program optimization system that discovered efficient auxiliary variables for a downstream SAT solver 🚀 Code:
Competitive programmers collaborated with Sakana AI’s ShinkaEvolve to win 1st place in the 2025 ICFP Programming Contest 👏 https://t.co/okBe1uI2Zl What happens when competitive programming experts master a cutting-edge AI tool? Team Unagi, which includes Sakana AI's Research
5 replies · 14 reposts · 91 likes
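The posts above credit ShinkaEvolve with evolving clever SAT encodings and efficient auxiliary variables for a downstream SAT solver. The contest encodings themselves are not shown here, but as a generic illustration of why auxiliary variables matter, the sketch below contrasts the naive pairwise at-most-one encoding (quadratic in clauses) with the classic sequential "ladder" encoding (linear in clauses, at the cost of extra variables). The function names and DIMACS-style integer literals are choices made for this example only.

```python
# Illustrative only: not the encodings evolved during the contest.
# Contrasts a naive pairwise at-most-one (AMO) encoding with the classic
# sequential (ladder) encoding, which trades O(n^2) clauses for O(n) clauses
# by introducing auxiliary variables -- the general reason auxiliary
# variables can make downstream SAT solving much faster.

def amo_pairwise(xs):
    """At-most-one over literals xs: O(n^2) clauses, no auxiliary variables."""
    clauses = []
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            clauses.append([-xs[i], -xs[j]])  # not (x_i and x_j)
    return clauses

def amo_sequential(xs, next_var):
    """At-most-one via auxiliary 'ladder' variables s_i: O(n) clauses.

    next_var is the first unused variable index; returns (clauses, next_var).
    """
    n = len(xs)
    if n <= 1:
        return [], next_var
    s = list(range(next_var, next_var + n - 1))  # auxiliary variables
    clauses = []
    for i in range(n - 1):
        clauses.append([-xs[i], s[i]])           # x_i -> s_i
        if i > 0:
            clauses.append([-s[i - 1], s[i]])    # s_{i-1} -> s_i (ladder)
            clauses.append([-xs[i], -s[i - 1]])  # x_i -> not s_{i-1}
    clauses.append([-xs[n - 1], -s[n - 2]])      # x_n -> not s_{n-1}
    return clauses, next_var + n - 1

if __name__ == "__main__":
    xs = list(range(1, 51))  # 50 problem variables, DIMACS-style positive ints
    print(len(amo_pairwise(xs)))           # 1225 clauses
    print(len(amo_sequential(xs, 51)[0]))  # 146 clauses, plus 49 aux variables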
A competitive programming team collaborated with Sakana AI's ShinkaEvolve and took 1st place in the ICFP contest 👏 English blog: https://t.co/okBe1uI2Zl What happens when competitive programming experts master a cutting-edge AI tool? Sakana AI Research Scientist Takuya Akiba @iwiwi belongs to Team…
0 replies · 16 reposts · 108 likes
Competitive programmers collaborated with Sakana AI’s ShinkaEvolve to win 1st place in the 2025 ICFP Programming Contest 👏 https://t.co/okBe1uI2Zl What happens when competitive programming experts master a cutting-edge AI tool? Team Unagi, which includes Sakana AI's Research
3 replies · 43 reposts · 247 likes
Introducing ShinkaEvolve, an open-source approach to sample-efficient LLM-driven program evolution 🧬 Paper: https://t.co/Q1PhLaZZDV Blog: https://t.co/MpVNwyNFZv Code: https://t.co/pZMzQOeoaX The AI Scientist, Darwin Goedel Machine, and AlphaEvolve have fundamentally shaped
We’re excited to introduce ShinkaEvolve: An open-source framework that evolves programs for scientific discovery with unprecedented sample-efficiency. Blog: https://t.co/Bj32AGXC3T Code: https://t.co/UMCSQaeOhd Like AlphaEvolve and its variants, our framework leverages LLMs to
5 replies · 55 reposts · 420 likes
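For readers unfamiliar with LLM-driven program evolution, here is a minimal, hypothetical sketch of the kind of loop that systems like ShinkaEvolve and AlphaEvolve build on: keep an archive of scored programs, have an LLM propose mutations to a sampled parent, and keep what evaluates well. This is not the actual ShinkaEvolve API or its sample-efficiency machinery (see the paper and code links above); `call_llm` and `evaluate` are assumed stand-ins for a model endpoint and a task-specific scoring harness.

```python
# Hypothetical sketch of an LLM-driven program-evolution loop, not the real
# ShinkaEvolve implementation.
import random

def evolve(seed_program: str, call_llm, evaluate, generations: int = 100):
    # Archive of (score, program); evolution samples parents from here.
    archive = [(evaluate(seed_program), seed_program)]
    for _ in range(generations):
        # Favour strong parents while keeping some diversity (the real systems
        # get their sample-efficiency from much smarter parent selection).
        parent_score, parent = max(
            random.sample(archive, k=min(3, len(archive)))
        )
        prompt = (
            "Improve the following program so it scores higher on the task.\n"
            f"Current score: {parent_score}\n```python\n{parent}\n```\n"
            "Return only the full modified program."
        )
        child = call_llm(prompt)
        try:
            child_score = evaluate(child)
        except Exception:
            continue  # broken mutation, discard
        archive.append((child_score, child))
    return max(archive)  # best (score, program) found
```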
Daiwa Securities is hiring startup Sakana AI to build an AI tool analyzing investor profiles, joining other firms adopting the technology
bloomberg.com
Daiwa Securities Group Inc. is hiring startup Sakana AI to develop an artificial intelligence tool that can analyze individual investor profiles, joining a growing number of financial institutions...
4 replies · 11 reposts · 35 likes
New paper 📜: Tiny Recursion Model (TRM) is a recursive reasoning approach with a tiny 7M-parameter neural network that obtains 45% on ARC-AGI-1 and 8% on ARC-AGI-2, beating most LLMs. Blog: https://t.co/w5ZDsHDDPE Code: https://t.co/7UgKuD9Yll Paper:
arxiv.org
Hierarchical Reasoning Model (HRM) is a novel approach using two small neural networks recursing at different frequencies. This biologically inspired method beats Large Language models (LLMs) on...
134 replies · 632 reposts · 4K likes
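The TRM result above is about reusing one tiny network recursively rather than stacking many distinct layers. As a loose illustration of that idea only, and not the actual TRM architecture, training recipe, or ARC-AGI setup, here is a toy PyTorch module that repeatedly refines a latent scratchpad and an answer estimate with a single small core network.

```python
# Hedged sketch of the *idea* of recursive reasoning with a tiny network;
# see the linked paper/code for the real TRM.
import torch
import torch.nn as nn

class TinyRecursiveReasoner(nn.Module):
    def __init__(self, dim: int = 256, inner_steps: int = 6, outer_steps: int = 3):
        super().__init__()
        self.core = nn.Sequential(  # the single tiny network that gets reused
            nn.Linear(3 * dim, dim), nn.GELU(), nn.Linear(dim, dim)
        )
        self.inner_steps = inner_steps   # latent refinements per outer step
        self.outer_steps = outer_steps   # answer refinements

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim) embedded question; y: current answer; z: scratch latent
        y = torch.zeros_like(x)
        z = torch.zeros_like(x)
        for _ in range(self.outer_steps):
            for _ in range(self.inner_steps):
                z = self.core(torch.cat([x, y, z], dim=-1))  # refine latent
            y = y + self.core(torch.cat([x, y, z], dim=-1))  # refine answer
        return y

model = TinyRecursiveReasoner()
answer = model(torch.randn(4, 256))  # four toy "questions"
```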
The gap between open and closed models is narrowing, and I expect this trend to continue. As foundation models become commoditized globally, the most interesting directions, both in research and commercially, lie not in developing them but in finding new ways to use them!
Recent open-weights releases are reducing the gap to proprietary frontier models on agentic workflows. On the Terminal-Bench Hard evaluation for agentic coding and terminal use, open-weights models such as DeepSeek V3.2 Exp, Kimi K2 0905, and GLM-4.6 have made large strides, with
8 replies · 17 reposts · 154 likes
Evolution Strategies can be applied at scale to fine-tune LLMs, and outperform PPO and GRPO in many model settings! Fantastic paper “Evolution Strategies at Scale: LLM Fine-Tuning Beyond Reinforcement Learning” by @yule_gan, Risto Miikkulainen and team. https://t.co/CEyX6Z5ulG
arxiv.org
Fine-tuning pre-trained large language models (LLMs) for down-stream tasks is a critical step in the AI deployment pipeline. Reinforcement learning (RL) is arguably the most prominent fine-tuning...
Reinforcement Learning (RL) has long been the dominant method for fine-tuning, powering many state-of-the-art LLMs. Methods like PPO and GRPO explore in action space. But can we instead explore directly in parameter space? YES we can. We propose a scalable framework for
10 replies · 36 reposts · 278 likes
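The paper above is about exploring in parameter space rather than action space. As a hedged sketch of the basic ingredient, a simple mirrored-sampling Evolution Strategy in the style of Salimans et al. (2017) rather than the scaled-up method of the cited paper, here is a single ES update on a flat parameter vector; `reward_fn` is an assumed stand-in for a task-reward harness.

```python
# Hedged sketch: one mirrored-sampling ES update on a flat parameter vector.
import numpy as np

def es_step(theta: np.ndarray, reward_fn, pop: int = 32,
            sigma: float = 0.02, lr: float = 0.01) -> np.ndarray:
    """One ES update: perturb parameters, score, move along the estimate."""
    eps = np.random.randn(pop, theta.size)
    rewards = np.empty(2 * pop)
    for i in range(pop):  # mirrored (antithetic) perturbations
        rewards[2 * i] = reward_fn(theta + sigma * eps[i])
        rewards[2 * i + 1] = reward_fn(theta - sigma * eps[i])
    # Rank-normalize rewards for robustness, then estimate the update direction.
    ranks = rewards.argsort().argsort() / (2 * pop - 1) - 0.5
    grad = ((ranks[0::2] - ranks[1::2])[:, None] * eps).sum(0) / (pop * sigma)
    return theta + lr * grad

# Toy usage: maximize -||theta - 1||^2 from a random start.
theta = np.random.randn(10)
for _ in range(200):
    theta = es_step(theta, lambda t: -np.sum((t - 1.0) ** 2))
```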
Introducing Continuous Thought Machines https://t.co/Fjwl82PZ9E (Blog post from earlier this year.) Summary tweet: https://t.co/bOGIDVDZod
sakana.ai
Introducing Continuous Thought Machines
New Paper: Continuous Thought Machines 🧠 Neurons in brains use timing and synchronization in the way that they compute, but this is largely ignored in modern neural nets. We believe neural timing is key for the flexibility and adaptability of biological intelligence. We
1 reply · 2 reposts · 17 likes
Proud to share that “Continuous Thought Machines” will be presented as a spotlight at #NeurIPS2025 ✨ Updated paper: https://t.co/ZWiidb1eTX Interactive Demo: https://t.co/d6hqid60t7 GitHub:
github.com
Continuous Thought Machines, because thought takes time and reasoning is a process. - SakanaAI/continuous-thought-machines
We are excited to share that “Continuous Thought Machines” has been accepted as a Spotlight at #NeurIPS2025! 🧠✨ The CTM is an AI that mimics biological brains by using neural dynamics & synchronization to think over time. It can solve complex mazes by building internal maps,
9 replies · 46 reposts · 349 likes
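The CTM posts above emphasize neural timing and synchronization as the representation the model computes with. The toy below is only meant to make that idea concrete: a recurrent cell is unrolled over internal "ticks", and the readout sees the pairwise synchronization (tick-wise covariance) of neuron activations rather than a single static vector. It is not the actual CTM architecture; see the paper and repo linked above.

```python
# Very loose toy of the "compute with synchronization over internal ticks"
# idea; NOT the real Continuous Thought Machine.
import torch
import torch.nn as nn

class ToySynchronyNet(nn.Module):
    def __init__(self, dim: int = 64, n_ticks: int = 16, n_classes: int = 10):
        super().__init__()
        self.cell = nn.GRUCell(dim, dim)             # drives per-tick dynamics
        self.readout = nn.Linear(dim * dim, n_classes)
        self.n_ticks = n_ticks

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.zeros_like(x)
        history = []
        for _ in range(self.n_ticks):                # internal "thinking" steps
            h = self.cell(x, h)
            history.append(h)
        acts = torch.stack(history, dim=1)           # (batch, ticks, dim)
        acts = acts - acts.mean(dim=1, keepdim=True)
        # Pairwise synchrony: covariance of neuron activations across ticks.
        sync = torch.einsum("btd,bte->bde", acts, acts) / self.n_ticks
        return self.readout(sync.flatten(1))         # classify from synchrony

logits = ToySynchronyNet()(torch.randn(8, 64))
```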
Daiwa Securities is hiring startup Sakana AI to build an AI tool analyzing investor profiles, joining other firms adopting the technology (Bloomberg: https://t.co/UpDNuHANRE) The two companies will work together to build an asset consulting platform powered by Sakana’s AI
bloomberg.com
Daiwa Securities Group Inc. is hiring startup Sakana AI to develop an artificial intelligence tool that can analyze individual investor profiles, joining a growing number of financial institutions...
8 replies · 11 reposts · 62 likes
Proud to release ShinkaEvolve, our open-source framework that evolves programs for scientific discovery with very good sample-efficiency! 🐙 Paper: https://t.co/05rjMiSxOL Blog: https://t.co/yjuvV0xpif Project:
github.com
ShinkaEvolve: Towards Open-Ended and Sample-Efficient Program Evolution - SakanaAI/ShinkaEvolve
We’re excited to introduce ShinkaEvolve: An open-source framework that evolves programs for scientific discovery with unprecedented sample-efficiency. Blog: https://t.co/Bj32AGXC3T Code: https://t.co/UMCSQaeOhd Like AlphaEvolve and its variants, our framework leverages LLMs to
6 replies · 63 reposts · 371 likes
Fun first night in Tokyo testing our Embodied AI in Shinbashi and driving around with @hardmaru. We've shared a common vision of world models for many years and I'm super excited about what @SakanaAILabs is building 🚀🇯🇵
11 replies · 40 reposts · 213 likes
“Wider or Deeper? Scaling LLM Inference-Time Compute with Adaptive Branching Tree Search” has been accepted as a spotlight at #NeurIPS2025 ✨ Paper: https://t.co/Xj6uLJITe4 Blog: https://t.co/3qSUaEixQU GitHub:
github.com
A Tree Search Library with Flexible API for LLM Inference-Time Scaling - SakanaAI/treequest
Inference-Time Scaling and Collective Intelligence for Frontier AI https://t.co/3qSUaEixQU We developed AB-MCTS, a new inference-time scaling algorithm that enables multiple frontier AI models to cooperate, achieving promising initial results on the ARC-AGI-2 benchmark.
6 replies · 36 reposts · 208 likes
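AB-MCTS, as described above, adaptively decides at inference time whether to branch wider (sample a fresh candidate) or go deeper (refine an existing one). The toy below captures only that wider-vs-deeper decision with a Thompson-sampling-style choice between the two actions; it is not the AB-MCTS algorithm or the treequest API, and `generate`, `refine`, and `score` are assumed callables.

```python
# Toy "wider or deeper?" search loop; not AB-MCTS or treequest.
import random

def ab_search(generate, refine, score, budget: int = 32):
    nodes = []                  # (score, answer) pairs found so far
    wide = [1.0, 1.0]           # Beta pseudo-counts (success, failure) for "wider"
    deep = [1.0, 1.0]           # Beta pseudo-counts (success, failure) for "deeper"
    for _ in range(budget):
        # Thompson-style draw: whichever action samples the higher success rate.
        go_wide = (not nodes) or random.betavariate(*wide) >= random.betavariate(*deep)
        if go_wide:
            answer = generate()                          # WIDER: brand-new candidate
        else:
            best_answer = max(nodes, key=lambda t: t[0])[1]
            answer = refine(best_answer)                 # DEEPER: refine current best
        s = score(answer)
        improved = (not nodes) or s > max(nodes, key=lambda t: t[0])[0]
        stats = wide if go_wide else deep
        stats[0] += float(improved)                      # credit the chosen action
        stats[1] += 1.0 - float(improved)
        nodes.append((s, answer))
    return max(nodes, key=lambda t: t[0])                # best (score, answer) found
```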
Sukiyabashi Jiro founder Jiro Ono, on turning 100: “The secret to longevity is to continue working. I also try to walk every day. Even after I turn 100, I want to continue working. That’s the best remedy.” 🍣 🎏 https://t.co/nR4Q4n6WVc
6 replies · 6 reposts · 94 likes