Eiso Kant
@eisokant
Followers 10K · Following 23K · Media 251 · Statuses 5K
Co-founder & Co-CEO @poolsideai “The best way to predict the future is to invent it.” - Alan Kay
Joined September 2007
The real win from being able to train a 1T parameter model on a “shoe string” budget isn’t the cost savings. It’s the efficiency gain that lets you move faster and increase your iteration speed. Pay attention to the slope. Since I can remember, the best deep learning models
Intelligence Isn’t Enough: Why Energy & Compute Decide the AGI Race – my conversation with @eisokant, co-CEO of @poolsideai 00:00 - Cold open – “Intelligence becomes a commodity” 00:23 - Host intro – Rumored $2B round; Project Horizon & RL2L 01:19 - Why Poolside exists amid
We are building together with our customers. Our Forward Deployed Research Engineers are closing the gap between model + agent capabilities and customers' hard problems. We're hiring: https://t.co/n1HMSRUKYb
Heading over to @mattturck's office to record a podcast, almost 2 years after we recorded the first podcast we did for Poolside. Any questions/topics you'd like us to cover? On the episode 2 years ago, it was highly contrarian to describe our approach to Reinforcement Learning.
Really excited to be partnering with Alex & Redpanda! Expanding our platform and opening up our models and agents to 300+ enterprise data sources + a highly capable query engine is a huge unlock for the capabilities we can deliver to our customers. And man are they great to
LET'S GOOOOO. Couldn't be more proud to partner with a frontier model company on the Agentic Data Plane. Poolside and Redpanda see the world through a similar lens. From Air Gap deployments to BYOC. 🚀🚀🚀🚀 https://t.co/5B8Mg9EwGq Stoked on this.
Great work! Very bullish on this direction. Almost everything we do in foundation models is either improving compute efficiency or improving data. Allowing for adaptive computation can be one of the bigger unlocks for improving model capabilities within a compute budget.
Scaling Latent Reasoning via Looped Language Models. 1.4B and 2.6B param LoopLMs pretrained on 7.7T tokens match the performance of 4B and 8B standard transformers respectively across nearly all benchmarks. Time to be bullish on adaptive computation again? Great work by
some thoughts on @poolsideai's project horizon announcement last week: building a 2gw ai factory "This level of vertical integration - from model and agent development through to power generation and deployment - gives poolside a durable advantage as AI systems become
Would love to connect with anyone who's impacted and is looking to join a small but well-resourced team to push to the frontier and beyond. We have one of the highest ratios of GPU resources per researcher. No politics or silos.
Will be a fun conversation! Should be on at 1pm PST
Happy Thursday. Here's today's lineup: – @benioff (Salesforce) – @eric_seufert (Heracles Capital) – @PimDeWitte (General Intuition) – @Alicebentinck (EF) – @DVaisbort (Albacore) – @eisokant (poolside) See you on the stream.
We believe that to compete at the frontier, you have to own the full stack: from dirt to intelligence. Today we’re announcing two major unlocks for our mission to AGI: 1. We're partnering with @CoreWeave and have 40,000+ NVIDIA GB300s secured. First capacity comes online
poolside.ai
When people ask what it takes to build frontier AI, the focus is usually on the model—the architecture, the training runs, the research breakthroughs. But that’s only half the story.
I’ve known Nathan for almost a decade and for the last 8 years have seen him and his team treat this as a labor of love. Thank you for creating one of the most comprehensive reports on our industry!
🪩The one and only @stateofaireport 2025 is live! 🪩 It’s been a monumental 12 months for AI. Our 8th annual report is the most comprehensive it's ever been, covering what you *need* to know about research, industry, politics, safety and our new usage data. My highlight reel:
Ari is one of my favorite people in our field. Worth listening to him. When we started poolside, one of our guiding research beliefs was "not all tokens are equal".
This was an excellent pod and @arimorcos is a hella charismatic speaker. After we wrapped, I told him he and Datology reminded me exactly of @jefrankle and MosaicML and... of course they worked together at FAIR! the clip below is when i realized this is "Mosaic for people
Founders WhatsApp when you're both vibe checking your latest model
What if training a foundation model felt less like chaos… and more like running a finely-tuned factory? 🏭 @poolsideai Model Factory shows how. A quick tour 👇
In the limit, evaluations are the ~only thing that matters. When models are self-improving, and every metric can be hill-climbed, picking the metric becomes the most important thing. Evals will shift from being "writing unit tests" for research to being the *main thing*
We've not been very public about our progress on model building, but I fully believe poolside will be the next lab joining the frontier. We're now sharing a bit more about how we're doing this, with the systems-first approach we're taking with our model factory.