Voxel51
@Voxel51
Followers: 2K · Following: 375 · Media: 688 · Statuses: 2K
The most powerful open-source visual AI and computer vision data platform. Maximize AI performance with better data: https://t.co/T3S1us7FBF
Ann Arbor, MI
Joined August 2017
One of the biggest bottlenecks in deploying visual AI and computer vision is annotation, which can be both costly and time-consuming. Today, we’re introducing Verified Auto Labeling, a new approach to AI-assisted annotation that achieves up to 95% of human-level performance while
i recently integrated 4 ocr models into fiftyone as remote zoo models. these handle text extraction and document parsing. all are available as remote zoo sources, so you can get started with a few lines of code. different approaches for different needs: 1. mineru-2.5, 1.2b params,
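The "few lines of code" the tweet mentions presumably follow FiftyOne's remote zoo-model workflow (register a source, then load a model from it). A minimal sketch; the GitHub URL and model name below are hypothetical placeholders, since the tweet does not include the actual repo links:

```python
# Sketch of loading a remotely-sourced zoo model in FiftyOne.
# NOTE: the source URL and model name are hypothetical placeholders.

def load_remote_ocr_model(source_url: str, model_name: str):
    """Register a remote zoo source and load one of its models."""
    import fiftyone.zoo as foz  # imported inside so the sketch stays importable

    # Point FiftyOne at the repo that packages the model
    foz.register_zoo_model_source(source_url, overwrite=True)

    # Download (if needed) and instantiate the model
    return foz.load_zoo_model(model_name)

# Usage (placeholder names, not from the tweet):
# model = load_remote_ocr_model(
#     "https://github.com/some-org/mineru-fiftyone", "mineru-2.5"
# )
# dataset.apply_model(model, label_field="ocr_text")
```

Once loaded, applying the model to a dataset with `apply_model` is what writes the extracted text onto each sample.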
The result: an end-to-end workflow that prevents downstream failures — cutting iteration cycles from weeks to hours and eliminating GPU waste on failed reconstructions. We’re excited to see this workflow power autonomous vehicles @NVIDIADRIVE and robotics @NVIDIARobotics
voxel51.com
Transform raw sensor data into simulation-ready datasets using AI data enrichment, neural reconstruction, and synthetic data generation with NVIDIA and Voxel51.
FiftyOne Physical AI Workbench sits at the start of your simulation pipeline and provides turnkey access to NVIDIA technology to ensure every simulation run begins with trustworthy data: ✅Audits 75+ critical checkpoints to ensure every reconstruction starts with validated
Introducing Physical AI Workbench — integrated with @nvidiaomniverse NuRec and @NVIDIA Cosmos to solve the biggest bottleneck in physical AI. More than half of all physical AI simulations fail — not from bad models, but from bad input data. A timestamp off by milliseconds or a
i just created a dataset of visual ai papers being presented at neurips this year. you can check out the dataset here: https://t.co/qFYJNphfGx what can you do with this? good question. find out at this virtual event i'm presenting at this week:
Join Voxel51 and @nvidia for a first-of-its-kind live demo showing how Physical AI systems are being transformed—from raw sensor data to validated, simulation-ready datasets. As autonomous vehicles and humanoid robots move from development to deployment, teams need rigorous
nanonets integrated into fiftyone, because all of twitter thinks this is a new thing
We just hit 10,000 stars on GitHub! ⭐ When we started FiftyOne our vision was clear: to make computer vision workflows easier, faster, and more reliable. Today, it’s become a global community of developers, researchers, and innovators who have embraced the tool, contributed to
Voxel51 is headed to GTC DC, Oct 28–29! Meet us at booth #411 for a live demo of the @nvidiaomniverse NuRec + FiftyOne integration. As AV and robotics systems move from R&D to deployment, dataset quality remains one of the biggest challenges. Unvalidated data and fragmented
We just won @FastCompany's Next Big Things in Tech award! This recognition reflects a broader industry shift: visual and multimodal AI is the future. As data systems grow to billions of samples, teams need meaningful ways to inspect, understand, and improve how that data
As Ido Greenfeld, AI Team Lead at Taranis, puts it: “We’ve been using FiftyOne for over a year and it has drastically changed the way we work.” 📖 Read the full story here:
voxel51.com
Taranis uses FiftyOne to visualize & analyze computer vision datasets, evaluate models, & deliver cutting-edge AI technology for agriculture
As Kemal Eren, Lead Computer Vision Engineer at Ancera, said: "FiftyOne has helped us shrink our feedback loop from months to days, allowing us to catch issues we didn’t know existed. It’s a big part of our strategy to speed up model development across teams." 📖 Read the full
voxel51.com
FiftyOne helps Ancera increase its speed of AI development.
After incorporating FiftyOne Enterprise, Ancera was able to: 👉 Review and relabel 20,000 detections across hundreds of high-resolution images in just a few days vs. multiple weeks 👉 Improve model performance by 7% and cut feedback loops from months to days 👉 Free up the ML
Pathogen risks like Salmonella can cost poultry supply chains millions. Ancera uses high-resolution imaging systems and computer vision models to detect and respond to those risks—but as operations scaled, manual workflows and limited visibility were stretching model deployment
We analyzed the most common failure patterns across high-stakes domains like AV, retail, and healthcare. This guide outlines why most breakdowns are data failures in disguise—and what it really takes to build robust, production-ready models. 👉 Download the free whitepaper:
voxel51.com
Why vision AI models fail — and how to prevent it. Learn data-centric strategies to catch labeling errors, bias, and drift before they hit production.
That’s why coverage analysis matters. If your dataset is missing conditions like poor lighting, weather variations, or rare object types, your model may be “accurate” in testing—but ineffective in reality. Using coverage analysis to spot and fill gaps is just one of the
Even high-performing vision models can collapse in production if their training data is too small or misses real-world variation. Case in point: Tesla's Full Self-Driving (FSD) system struggled to detect pedestrians and obstacles in low-visibility conditions, leading to several
👉 Explore how scenario-based evaluation changes the game in our latest blog:
voxel51.com
Learn how to uncover hidden data issues with FiftyOne scenario-based data and model workflows.
By pairing performance metrics with visual context across meaningful slices of data, ML teams can actually understand where their models are failing. 🔍 Pinpoint brittle failure modes hidden by averages 📊 Trace errors back to root causes in the data (coverage gaps, bias, label
Even models that perform well on benchmarks and pass QA checks can fail in production. The issue isn’t the algorithms or the architecture—it’s the data. When edge cases, labeling errors, or low-quality samples slip through unnoticed, even well-tested models can collapse in