TractoAI (@tractoai)
Followers: 81 · Following: 17 · Media: 1 · Statuses: 22
Simple & powerful runtime for AI and Data teams. Scale experiments and generate impact. No prior cloud or K8s experience required.
NYC and Amsterdam · Joined January 2025
Write any function you want to run on your data and Tracto takes care of running it on distributed CPU/GPU compute. From code to result in seconds.
Here's a stack for building custom agent eval pipelines. For AI researchers and engineers who care about the quality of AI outputs, this is a practical tutorial with building blocks to scale your evals. Offload batch processing to Tracto. https://t.co/M6hBvYdcqg
nebius.com
Research on SWE agents involves building and running thousands of containers, quickly surpassing the limits of a single host. Our AI R&D team unveils the large-scale infrastructure that powers this...
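The eval-pipeline idea above can be sketched in plain Python. This is a minimal illustration, not the TractoAI API: `run_evals`, `exact_match`, and `toy_model` are all hypothetical names, and the stand-in model would be swapped for a real LLM call before offloading the batch to a distributed runtime.

```python
# Minimal agent-eval pipeline sketch (illustrative only, not the Tracto API):
# run each test case through a model function, grade the output, aggregate.

def exact_match(output: str, expected: str) -> bool:
    """Grader: strict equality after whitespace normalization."""
    return output.strip() == expected.strip()

def run_evals(model_fn, cases, grader=exact_match):
    """Run model_fn over (prompt, expected) pairs; return per-case
    results plus the aggregate pass rate."""
    results = []
    for prompt, expected in cases:
        output = model_fn(prompt)
        results.append({"prompt": prompt, "output": output,
                        "passed": grader(output, expected)})
    pass_rate = sum(r["passed"] for r in results) / len(results)
    return results, pass_rate

# Stand-in "model" for demonstration; replace with a real LLM call.
def toy_model(prompt: str) -> str:
    return prompt.upper()

cases = [("abc", "ABC"), ("hello", "HELLO"), ("x", "y")]
results, pass_rate = run_evals(toy_model, cases)
```

The loop body is embarrassingly parallel, which is exactly why batch-processing it on a distributed backend scales cleanly.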
While you wait, run serverless GPU workloads on us. We aren't sending them to AWS; we process them in our own data center. Dynamic scaling, pay as you go. https://t.co/0zBxJUbz9S - 30% cheaper than Modal. Playground cluster with samples at
tracto.ai
Tracto is the data lakehouse for AI builders. Built on Nebius AI Cloud, it unifies data and compute so teams can scale models, agents, and innovation faster.
Yep. Expect a lot of internet services to break soon unless things get better. Can’t pull images from ECR, for instance. The Modal team is randomly at an offsite in Italy, so we’re all hands on deck!
AI is pushing traditional data stacks to their breaking point. ⚠️ In his new blog, Maxim Akhmedov, Head of TractoAI at Nebius, explains why unified, scalable infra is essential for multimodal data + inference at AI scale and why DIY point solutions are too costly to manage.
Excited to see @pleiasfr featured in @sciencedirect. Our AI infra allowed the team to run multi-node LLM training with maximum customization and control.
Excited to see Nebius on the show with the boys.
IN NEWS: Nebius lands a $17.4B partnership with Microsoft to provide high-performance AI infrastructure. The CRO @marcboroditsky told us: "We believe the real opportunity is delivering the AWS equivalent for AI.” “The $100B+ business opportunity is servicing the
We agree with YC's Gary Tan - you need evals. lmk if you need help automating eval pipelines. https://t.co/7iJdQDyq48
One of the key components for training code #agents is access to verifiable real-world tasks, which are not easy to collect. We automated and scaled this process using @TractoAI as our main platform for data processing and storage, and decided to #opensource it. 3/4
If you're itching to experiment with gpt-oss-120b using vLLM, here's a working notebook on our GPU sandbox. Run, experiment, share for free. https://t.co/ehHLY56XBD
When a team of biotech researchers sets out to build the future of drug discovery... tech choices matter.
Our customer SieveStack is building the world’s largest dataset of molecular simulations to train a multi-layered stack of foundational models and advance dynamics-driven drug discovery. Read the story: https://t.co/WHCHxWWvpn 🔹 Goal: To unlock treatment pathways for
the future is about smart tokens
What if models could learn which problems _deserve_ deep thinking? No labels. Just let the model discover difficulty through its own performance during training. Instead of burning compute 🔥💸 on trivial problems, it allocates 5x more on problems that actually need it ↓
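The idea in the quoted post — spend up to 5x more compute only where the model actually struggles — can be illustrated with a tiny allocation rule. This is a hypothetical sketch, not the method from the post: `allocate_samples` and its parameters are invented here, and the "difficulty" signal is just an observed failure rate.

```python
# Difficulty-aware compute allocation sketch (hypothetical, illustrative):
# map each problem's observed failure rate in [0, 1] to a sample budget
# between `base` and `base * max_multiplier`, so trivial problems get the
# minimum and the hardest get ~5x more.

def allocate_samples(failure_rates, base=1, max_multiplier=5):
    """Return an integer sample budget per problem, scaled linearly
    with the problem's failure rate."""
    return [base + round(rate * base * (max_multiplier - 1))
            for rate in failure_rates]

# Easy problem (never fails), medium, and hard (always fails):
budgets = allocate_samples([0.0, 0.5, 1.0])
```

The point of the design is that the signal comes from the model's own performance during training, so no difficulty labels are needed.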
Free H100s for AI experiments, no strings attached. We just dropped a dozen notebooks with production-grade code for LLM tuning, inference and even data prep. Head on over to https://t.co/lfFFGutED6 and spin up your job in less than 30 sec
AI workloads of the near future will be data intensive. We are building for it. Excited to be the data management infra for the @nebiusai AI R&D team. Onwards!
Our AI R&D team presents SWE-rebench, a new benchmark for evaluating agentic LLMs on a continuously updated and decontaminated set of real-world software engineering tasks mined from real GitHub repos. Explore the leaderboard and methodology behind it: https://t.co/WGOcul8eo9
Announcing the greatest bundle for AI & data teams! TractoAI now supports https://t.co/p1a9bh1LT0 - with built-in storage, job scheduler, workflow manager, and observability dashboard. Why manage 5-10 different tools and vendors? Get one integrated stack and save $$$
we work with world class AI teams to give them unfair advantage 🤖
Read how @synth_labs, a startup developing AI solutions tailored for logical reasoning, is advancing AI post-training with our @TractoAI: https://t.co/jePovolgcG 🔹 Goal: Develop an ML system that empowers reasoning models to surpass pattern matching and implement sophisticated
Our approach to AI compute is different. Learn about the serverless GPU model and when it's a better choice than dedicated compute. We are talking at the @nebiusai event this Thurs in SF. https://t.co/ipN9JXwKQH
nebius.com
Discover the most efficient way to build, tune and run your AI models and applications on top-notch NVIDIA® GPUs.
TractoAI + DeepSeek R1. Batch inference at scale with no rate limits. Build your own enterprise-grade batch inference for LLMs (DeepSeek R1 example included) https://t.co/8H8VCsdbvh
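The batch-inference pattern above has a simple shape, sketched here with stdlib threads. This is an assumption-laden illustration, not the TractoAI or DeepSeek API: `run_batch_inference` and `fake_infer` are invented names, and the stand-in function would be replaced by a real model endpoint call.

```python
# Batch-inference driver sketch (illustrative only): split prompts into
# fixed-size batches and fan them out across workers, the same shape a
# distributed runtime scales across nodes instead of threads.
from concurrent.futures import ThreadPoolExecutor

def chunk(items, size):
    """Yield successive batches of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def run_batch_inference(prompts, infer_fn, batch_size=4, workers=2):
    """Apply infer_fn to each batch concurrently; executor.map
    preserves input order in the flattened output."""
    batches = list(chunk(prompts, batch_size))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(infer_fn, batches)
    return [out for batch_out in results for out in batch_out]

# Stand-in for a real model call; reverses each prompt string.
def fake_infer(batch):
    return [p[::-1] for p in batch]

outputs = run_batch_inference([f"p{i}" for i in range(10)], fake_infer)
```

Because there is no shared rate limiter in this shape, throughput scales with the number of workers rather than a per-key quota.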
as AI builders we noticed this a while back:
- inference optimization frameworks and platforms around them (e.g. @FireworksAI_HQ)
- specialized hardware optimized for inference (e.g. @GroqInc)
- large-scale distributed inference (e.g. @tractoai)
If you're not already paying attention to this shift, you should be: the balance of compute is moving from pre-training to inferencing. We're seeing massive gains here from scaling up test-time compute, with no ceiling in sight