
Robert Lange
@RobertTLange
9K Followers · 4K Following · 305 Media · 617 Statuses
Founding Research Scientist @SakanaAILabs 🎏 💬 Agentic Discovery 🔬 AI Scientist 🧬 EvoLLM 🏋️ gymnax 🦎 evosax 🤹 MLE-Infra. Ex: SR & Intern @Google DeepMind
TKY/BLN · Joined April 2017
🎉 Stoked to share The AI Scientist 🧑‍🔬 - our end-to-end approach for conducting research with LLMs, including ideation, coding, experiment execution, paper write-up & reviewing. Blog 📰: Paper 📜: Code 💻:
Introducing The AI Scientist: The world's first AI system for automating scientific research and open-ended discovery! From ideation, writing code, running experiments and summarizing results, to writing entire papers and conducting peer-review, The AI…
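The stages named in the announcement (ideation, coding, experiment execution, write-up, review) form a simple control loop. A minimal sketch of that loop, assuming a stub `execute` sandbox and a toy echoing `llm` — both my own illustration, not Sakana's implementation:

```python
def execute(code: str) -> str:
    """Stub experiment runner; a real system would sandbox and run the code."""
    return f"results-of({code[:24]})"

def run_ai_scientist(llm, topic: str):
    """Schematic loop: ideation -> coding -> experiments -> write-up -> review."""
    papers = []
    for idea in llm(f"propose research ideas about {topic}"):
        code = llm(f"write experiment code for: {idea}")[0]
        results = execute(code)
        draft = llm(f"write a paper on {idea} given {results}")[0]
        review = llm(f"peer-review this draft: {draft}")[0]
        papers.append({"idea": idea, "draft": draft, "review": review})
    return papers

# Toy "LLM" that just echoes its prompt, to show the control flow.
toy_llm = lambda prompt: [f"<{prompt}>"]
papers = run_ai_scientist(toy_llm, "character-level language models")
print(len(papers), sorted(papers[0]))
```

Each stage's output feeds the next prompt; the real system adds retries, novelty checks, and an LLM reviewer score, which this sketch omits.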
RT @fchollet: Impressive results from Sakana AI on ARC-AGI-2 with a new method for test-time search and ensembling! Please be mindful when…
RT @iwiwi: Go wider, deeper—or together? 🤔 Introducing AB-MCTS (Adaptive Branching Monte Carlo Tree Search), a new inference-time framework…
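The truncated threads above only name the method, so here is a deliberately crude toy of the wider-vs-deeper trade-off AB-MCTS navigates: "wider" samples a fresh candidate answer, "deeper" refines the best answer found so far. The scoring function, coin-flip branching rule, and flat candidate pool are all my own stand-ins; the actual algorithm searches over a tree and adapts the branching decision per node.

```python
import random

random.seed(0)

TARGET = 37  # hidden optimum of our toy scoring function (assumed)

def score(x: int) -> int:
    return -abs(x - TARGET)

def wider() -> int:
    """Sample a brand-new candidate."""
    return random.randint(0, 100)

def deeper(x: int) -> int:
    """Locally refine an existing candidate."""
    return x + random.choice([-2, -1, 1, 2])

def search(budget: int = 200) -> int:
    pool = [wider()]
    for _ in range(budget):
        best = max(pool, key=score)
        # A coin flip stands in for AB-MCTS's adaptive branching decision.
        pool.append(deeper(best) if random.random() < 0.5 else wider())
    return max(pool, key=score)
```

The point of the real method is precisely that this wider/deeper choice is learned per node from observed rewards rather than fixed at 50/50.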
RT @SakanaAILabs: We’re excited to introduce AB-MCTS! Our new inference-time scaling algorithm enables collective intelligence for AI by a…
RT @hardmaru: AI Improves at Improving Itself Using an Evolutionary Trick: Researchers use evolutionary algorithms to enhance AI coding ski…
RT @richardcsuwandi: Most AI systems today follow the same predictable pattern: they're built for specific tasks and optimized for objectiv…
RT @swyx: @thinkymachines @arcee_ai @jeremyphoward "what did Mira see?" Well, @hardmaru et al, apparently. 1.3 years ago.
RT @shengranhu: Very excited to share our latest work on Automated Design of Agentic Systems (ADAS) and Darwin Gödel Machine (DGM) with @Sy…
RT @hardmaru: Reinforcement Learning Teachers of Test Time Scaling. In this new paper, we introduce a new way to teach LLMs how to reason b…
RT @SakanaAILabs: Introducing Reinforcement-Learned Teachers (RLTs): Transforming how we teach LLMs to reason with reinforcement learning (…
How do different coding agents perform file edits? 📝 Great blog by @FabianHertwig 🧑‍💻: Different instruction-tuning protocols imply that there is no clear winning approach. Most agents are either model-specific (Codex/Claude Code) or deploy robust…
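One common edit protocol in this space is the search/replace block: the agent emits an exact snippet to find plus its replacement, and the harness applies the edit only on an unambiguous match. A minimal sketch of such a harness — my own illustration, not taken from the blog post:

```python
def apply_edit(source: str, search: str, replace: str) -> str:
    """Apply a search/replace block; fail loudly on missing or ambiguous matches."""
    count = source.count(search)
    if count == 0:
        raise ValueError("search block not found; edit rejected")
    if count > 1:
        raise ValueError("search block is ambiguous; edit rejected")
    return source.replace(search, replace)

code = "def add(a, b):\n    return a - b\n"
fixed = apply_edit(code, "return a - b", "return a + b")
print(fixed)
```

Rejecting zero or multiple matches is the key robustness trick: it forces the model to quote enough surrounding context to pin down a unique location.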
RT @hardmaru: Sakana AI developed a new coding agent, ALE-Agent, trained to solve NP-hard optimization problems. Our agent participated in…
RT @oswaldjoh: Super happy and proud to share our novel scalable RNN model, the MesaNet! This work builds upon beautiful ideas of locall…
RT @iwiwi: AI will soon master Codeforces. So, what's the next challenge? 🚀 Introducing ALE-Bench (ALgorithm Engineering Benchmark) 🏆 A new…
RT @SakanaAILabs: Introducing ALE-Bench, ALE-Agent! Towards Automating Long-Horizon Algorithm Engineering for Hard Optimization Problems. B…
How far can we take this paradigm? Imagine a world of small language models continuously refined by a large Adapter Foundation Model 🤯. Paper: Code: HuggingFace: Tan's tweet:
Very excited to share this work. Here's an example of how to interact with T2L through a web UI provided in the repo. More results in this 🧵
RT @hardmaru: Text-to-LoRA: Instant Transformer Adaption. Generative models can produce text, images, video. They…
Text-to-LoRA: What if you no longer had to fine-tune your LLM for every single downstream task? 🚀 Stoked to share our work on instant LLM adaptation using meta-learned hypernetworks 📝 → 🔥. The idea is simple yet elegant: we text-condition a hypernetwork to output LoRA…
We’re excited to introduce Text-to-LoRA: a hypernetwork that generates task-specific LLM adapters (LoRAs) based on a text description of the task. Catch our presentation at #ICML2025! Paper: Code: Biological systems are capable of…
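The core idea in the tweets above — a hypernetwork that maps a task description to LoRA factors — can be sketched with a plain linear hypernetwork. All dimensions, names, and the linear form below are toy assumptions of mine, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
D_TEXT, D_MODEL, RANK = 16, 32, 4   # toy dimensions (assumed)

# Hypernetwork: one linear map from a task-description embedding to the
# flattened LoRA factors A (RANK x D_MODEL) and B (D_MODEL x RANK).
W_hyper = rng.normal(0.0, 0.02, size=(D_TEXT, 2 * RANK * D_MODEL))

def text_to_lora(task_embedding):
    """Generate LoRA factors (A, B) from a text embedding of the task."""
    flat = task_embedding @ W_hyper
    A = flat[: RANK * D_MODEL].reshape(RANK, D_MODEL)
    B = flat[RANK * D_MODEL:].reshape(D_MODEL, RANK)
    return A, B

def adapted_forward(x, W_base, A, B, alpha=1.0):
    """Frozen base weight plus the generated low-rank update B @ A."""
    return x @ W_base + alpha * (x @ B @ A)

# One forward pass with an adapter generated from a (random) task embedding.
A, B = text_to_lora(rng.normal(size=D_TEXT))
W_base = rng.normal(size=(D_MODEL, D_MODEL))
y = adapted_forward(rng.normal(size=(1, D_MODEL)), W_base, A, B)
print(y.shape)  # (1, 32)
```

"Instant adaptation" then just means running this one forward pass per new task description instead of a fine-tuning run; the base weights never change.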
RT @tan51616: Very excited to share this work. Here's an example of how to interact with T2L through a webui provided in the repo https://t…