Valohai

@valohaiai

Followers: 2K
Following: 1K
Media: 249
Statuses: 885

ML. The Pioneer Way. #MachineLearning #MLOps

🇺🇸 / 🇫🇮
Joined October 2016
@valohaiai
Valohai
7 months
1️⃣ Register for our webinar at:
2️⃣ Explore other webinars by @OVHcloud at:
@valohaiai
Valohai
7 months
WEBINAR ALERT 📢 Join us on January 29 at 10:00 GMT to learn best practices for automating ML pipelines on secure multi-cloud infrastructure. This webinar is part of a series organized by @OVHcloud, the leading European provider of cloud infrastructure. Links in the thread ⬇️
@valohaiai
Valohai
8 months
🔥 In 2024, we made yet another leap towards accomplishing our mission: making MLOps scalable, efficient, and effortless! Join us in recapping the key additions to our MLOps platform and ecosystem integrations throughout the year ⬇️
valohai.com
Read the annual review from Valohai, the leading end-to-end MLOps platform, about the feature releases and ecosystem integrations in 2024 and future plans.
@valohaiai
Valohai
8 months
Spotify's 2024 Wrapped is here, uncovering your guilty pleasures and poor ML practices… Is it too early to make New Year's resolutions for 2025?
@valohaiai
Valohai
9 months
Here's how troubleshooting gets "done" with a glue stack 🩹🥴. But you can take the guesswork out of troubleshooting with Valohai's new Audit Log. It gives you log entries that detail all events and activities, ensuring traceability and accountability.
@valohaiai
Valohai
9 months
Introducing a significant enhancement to the Valohai MLOps platform: Audit Log, an out-of-the-box solution that gives you transparency and control when navigating compliance requirements, debugging issues, or ensuring accountability within your team.
valohai.com
Valohai’s Audit Logs are built for AI governance and traceability necessary to navigate AI compliance requirements, debugging, and accountability in ML teams.
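The Audit Log announcements above describe entries that record all events and activities for traceability. A minimal sketch of what an append-only audit trail can look like; the field names and action strings below are illustrative assumptions, not Valohai's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical audit entry shape: who did what, to which resource, when.
@dataclass(frozen=True)
class AuditEntry:
    actor: str       # user or service that performed the action
    action: str      # e.g. "execution.start" (example name, not Valohai's)
    target: str      # resource the action applied to
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

log: list[AuditEntry] = []

def record(actor: str, action: str, target: str) -> AuditEntry:
    """Append an entry so every event stays traceable and attributable."""
    entry = AuditEntry(actor, action, target)
    log.append(entry)
    return entry

record("alice@example.com", "execution.start", "train-model#42")
print(log[0].action)  # → execution.start
```

Freezing the dataclass hints at the key property of an audit trail: entries are written once and never mutated.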
@valohaiai
Valohai
9 months
Is @AMD's MI300X GPU the best pick for LLM inference on a single GPU❓ We compared Nvidia's H100 and AMD's MI300X and found that the MI300X can be a better fit thanks to its larger memory and higher memory bandwidth. Read the full benchmark:
valohai.com
AMD's MI300X GPU outperforms Nvidia's H100 in LLM inference benchmarks with its larger memory and higher bandwidth, impacting AI hardware performance and model capabilities.
@valohaiai
Valohai
10 months
What can you do with Valohai's new Model Hub?
1️⃣ Get a holistic view of all your models in one place.
2️⃣ Trace the entire lineage of every model version.
3️⃣ Automate workflows across the lifecycle.
But we're only scratching the surface. Learn more at:
valohai.com
Model Hub is a key functionality in the Valohai MLOps platform that simplifies and automates the end-to-end lifecycle management of machine learning models.
@valohaiai
Valohai
10 months
Introducing a new major addition to Valohai: the Model Hub 🔥 It's the single pane of glass for managing all model versions across the entire lifecycle, with lineage tracking, performance comparison, workflow automation, access control, and much more.
valohai.com
Model Hub is a key functionality in the Valohai MLOps platform that simplifies and automates the end-to-end lifecycle management of machine learning models.
@valohaiai
Valohai
11 months
Here's a sneak peek into our upcoming content on AI governance and the EU AI Act, Valohai's new feature, and MLOps at AI-native companies. If these sound interesting to you, you can get notified when they're out by subscribing to our newsletter at:
valohai.com
Stay up to date with Valohai’s blog posts on AI governance and the EU AI Act, machine learning pipelines in production, GPU benchmarks, and new features.
@valohaiai
Valohai
11 months
Over the next weeks, we'll publish 3 exciting stories about:
🔸 AI governance and the EU AI Act
🔸 A new key feature in the Valohai MLOps platform
🔸 Production pipelines for scheduled retraining and deployment
Subscribe to these updates at:
@valohaiai
Valohai
11 months
Valohai's new Smart Instance Selection helps you choose machines where your training data is cached. With just a single click in the UI, you can optimize compute resources and increase iteration speed. Learn more and get started at:
valohai.com
Execute machine learning jobs in less time by leveraging historical data locality with the Valohai MLOps platform and its Smart Instance Selection.
@valohaiai
Valohai
11 months
Our new feature analyzes historical data to identify instances with the highest cache hit rates. So when you submit a new ML job to Valohai, the platform will assign it to an instance with the most data cached, avoiding unnecessary downloads. Try it at:
valohai.com
Execute machine learning jobs in less time by leveraging historical data locality with the Valohai MLOps platform and its Smart Instance Selection.
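The Smart Instance Selection tweets above describe the scheduling idea: look at which machines already hold a job's data in cache and assign the job to the one with the most hits. A toy sketch of that data-locality heuristic, with made-up instance names and cache contents (not Valohai's internal logic):

```python
# Pick the instance whose local cache already contains the most of the
# job's input files, so the fewest downloads are needed.
def pick_instance(job_inputs: set[str], caches: dict[str, set[str]]) -> str:
    # Highest cache-hit count wins; ties broken by instance name.
    return max(caches, key=lambda inst: (len(job_inputs & caches[inst]), inst))

caches = {
    "gpu-a": {"train.csv", "labels.csv"},  # 2 of the job's inputs cached
    "gpu-b": {"train.csv"},                # 1 cached
    "gpu-c": set(),                        # cold cache
}
print(pick_instance({"train.csv", "labels.csv", "config.json"}, caches))
# → gpu-a
```

The same comparison extends naturally to weighting hits by file size, since skipping one large download matters more than skipping several small ones.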
@valohaiai
Valohai
11 months
Download speed: way too slow 🐌
Time remaining: infinity ♾️
Rinse and repeat 🧺
Or better not! Valohai's new feature helps assign ML workloads to machines that have the necessary data cached from previous runs, saving you tons of time. Learn more at:
valohai.com
Execute machine learning jobs in less time by leveraging historical data locality with the Valohai MLOps platform and its Smart Instance Selection.
@valohaiai
Valohai
11 months
Our new integration with @OVHcloud enables you to scale computational resources on demand, without changing your ways of working, while controlling costs and keeping every run reproducible. Get started at:
valohai.com
Valohai’s integration with OVHcloud makes it easy to access its secure and scalable cloud environments without having to change existing ML workflows.
@valohaiai
Valohai
11 months
No cloud lock-in. No vendor-specific SDKs and IDEs. No worries about infrastructure management. Learn how @Preligens orchestrates and versions dozens of experiments each day across multiple cloud services (including @OVHcloud) and on-prem hardware:
valohai.com
Preligens builds cutting-edge software for intelligence analysts to monitor strategic sites. With 50 data scientists at the time of writing, they have one of the largest deep learning-focused teams...
@valohaiai
Valohai
11 months
🎉 We’re excited to announce our partnership with @OVHcloud, Europe’s leading cloud provider. Our shared goal is to accelerate ML development by combining:
☁️ Scalable and secure cloud environments
⚙️ Hybrid-cloud pipeline automation
Learn more at:
valohai.com
Valohai announces its partnership with OVHcloud, enabling its users to access OVHcloud’s scalable and secure environments directly from its MLOps platform.
@valohaiai
Valohai
1 year
What’s worse than waiting for an expensive computation to finish❓ Waiting for the exact same computation to finish a second time… We've built a new functionality so you don't need to wait and pay for redundant computations! Learn more at:
valohai.com
Accelerate machine learning pipelines and optimize MLOps efficiency with Valohai’s Pipeline Step Caching. Reduce computational costs by reusing previous runs.
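The Pipeline Step Caching tweet above describes reusing the result of a step when it runs again with identical inputs. The core idea can be sketched as a content-addressed memo table keyed on the step name plus its inputs; the helper names below are illustrative, not Valohai's actual API:

```python
import hashlib
import json

# Cache keyed by a hash of the step's identity and inputs: if nothing
# changed, the stored result is reused instead of recomputing.
_cache: dict[str, object] = {}

def step_key(name: str, inputs: dict) -> str:
    payload = json.dumps({"step": name, "inputs": inputs}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def run_step(name, inputs, fn):
    key = step_key(name, inputs)
    if key in _cache:
        return _cache[key]       # cache hit: skip the computation entirely
    result = fn(**inputs)
    _cache[key] = result         # store for future identical runs
    return result

calls = []
def preprocess(rows):
    calls.append(1)              # count how often the step really executes
    return [r * 2 for r in rows]

run_step("preprocess", {"rows": [1, 2, 3]}, preprocess)
run_step("preprocess", {"rows": [1, 2, 3]}, preprocess)
print(len(calls))  # → 1, the second run was served from cache
```

Any change to the inputs produces a different key, so edited data or parameters transparently fall through to a fresh computation.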
@valohaiai
Valohai
1 year
📢 New feature alert… Make it two… Make it three! We’ve just released multiple significant improvements to the Valohai MLOps platform, designed to help you further accelerate time-to-market and optimize costs. Learn more:
@valohaiai
Valohai
1 year
Tired of hidden costs and unexpected fees? 🤨 Spending more time on expenses than innovation? 🥴. Valohai offers predictable pay-per-license pricing instead of usage-based. Scale your development without scaling the bills ✅. Learn about other benefits: