Model

Ship performant models with confidence

Models are only as good as their training data. Discover a new way to improve data and models with data-centric tools and workflows. Model is the command center to evaluate and test models, take action to boost model performance, and orchestrate re-training jobs.

Systematically improve model performance across edge cases

Go beyond aggregate model performance metrics. Find and fix model errors within your dataset by using model embeddings and performing qualitative and quantitative analysis across data slices.

Find and fix errors in labels

Discover a powerful new way to find label errors: perform a comparative analysis between your ground truth and predictions from your AI models to find and fix human labeling errors. Easily send data to annotation teams for re-work.
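
To make the comparison concrete, here is a minimal Python sketch of disagreement mining: ranking data rows by how far the model's predicted classes diverge from the ground-truth labels, so likely labeling errors surface first. It is illustrative only (plain Python, no Labelbox SDK calls), and Jaccard distance is just one possible scoring choice.

```python
# Illustrative disagreement mining, not a Labelbox SDK call:
# rank data rows by ground-truth vs. prediction disagreement so
# likely labeling errors surface first.

def disagreement_score(labels: set, predictions: set) -> float:
    """Jaccard distance: 0.0 = perfect agreement, 1.0 = complete disagreement."""
    if not labels and not predictions:
        return 0.0
    return 1.0 - len(labels & predictions) / len(labels | predictions)

rows = [
    {"id": "row-1", "labels": {"car"}, "predictions": {"car"}},
    {"id": "row-2", "labels": {"car"}, "predictions": {"truck"}},  # likely label error
    {"id": "row-3", "labels": {"person", "bike"}, "predictions": {"person"}},
]

# The highest-scoring rows are the best candidates to send back for re-work.
for row in sorted(rows, key=lambda r: disagreement_score(r["labels"], r["predictions"]), reverse=True):
    print(row["id"], round(disagreement_score(row["labels"], row["predictions"]), 2))
```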

Track essential metrics across model versions

Evaluating and comparing model performance has never been easier. Perform a differential diagnosis between model predictions and ground truth labels, or against a previous model version. Track essential performance metrics such as F1, IoU, precision, and recall across model versions.
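
For reference, here is a from-scratch sketch of two of the metrics named above. This is generic Python for illustration, not Labelbox's internal implementation.

```python
# From-scratch illustrations of IoU and F1; generic Python,
# not Labelbox's internal implementation.

def iou(a, b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def f1(tp, fp, fn):
    """Harmonic mean of precision and recall, computed from raw counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(round(iou((0, 0, 10, 10), (5, 5, 15, 15)), 3))  # 0.143
print(round(f1(tp=80, fp=10, fn=20), 3))              # 0.842
```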

Efficiently improve models in development and production

Not all data impacts model performance equally. Curate and label or re-label the right data, not just more data. Use your model as a guide to identify targeted improvements to your training data that will boost model performance.

Use embeddings to uncover new insights

Model embeddings help you quickly uncover high-level patterns and visually similar data across all your datasets. Labelbox offers precomputed embeddings by default, or you can upload your own via the SDK. Use Similarity Search to find more examples of low-performing classes, edge cases, or other rare data.
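
To illustrate what a similarity search over embeddings boils down to, here is a self-contained sketch of cosine-similarity lookup against an embedding matrix. It is a local stand-in only; Labelbox's actual implementation and the SDK's embedding-upload format are documented separately.

```python
# What similarity search over embeddings boils down to: cosine similarity
# against an embedding matrix. Local illustration, not the Labelbox backend.

import numpy as np

def top_k_similar(query: np.ndarray, embeddings: np.ndarray, k: int = 5) -> np.ndarray:
    """Indices of the k rows of `embeddings` most similar to `query` (cosine)."""
    q = query / np.linalg.norm(query)
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    return np.argsort(-(e @ q))[:k]

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 512))  # stand-in for precomputed embeddings
query = embeddings[42]                     # e.g. a known low-performing example
print(top_k_similar(query, embeddings))    # row 42 ranks first (similarity 1.0)
```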

Curate data splits and test models before release

Ensure that your model consistently learns from an accurate representation of real-world scenarios. Curate training, validation, and test data splits to evaluate performance and prevent overfitting. No more saving data to manually managed folder structures.
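
As a sketch of the underlying idea, the snippet below assigns data rows to deterministic train/validation/test splits by hashing their IDs. It is illustrative only and is not how Labelbox assigns splits; it simply shows why split membership should be stable rather than ad hoc folder shuffling.

```python
# Illustrative only: a deterministic split assignment that hashes each data
# row ID into [0, 1). Not how Labelbox assigns splits internally.

import hashlib

def assign_split(data_row_id: str, train: float = 0.8, valid: float = 0.1) -> str:
    """Map an ID to a stable split: same ID, same split, every run."""
    bucket = int(hashlib.sha256(data_row_id.encode()).hexdigest(), 16) % 10_000 / 10_000
    if bucket < train:
        return "training"
    if bucket < train + valid:
        return "validation"
    return "test"

for row_id in ("row-001", "row-002", "row-003"):
    print(row_id, assign_split(row_id))
```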

Version data at every iteration

Track model experiments and automatically version labels, data splits, data rows, and model parameters across each model run iteration. Reproduce model results by restoring previous versions of your data without third-party tools or custom scripts.
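
A rough sketch of what such versioning captures: a run manifest freezing parameters, split membership, and a label snapshot, plus a checksum that makes later changes detectable. All field names here are hypothetical, not Labelbox's schema.

```python
# Hypothetical run manifest: what a versioned model run records so that a
# past result can be restored and reproduced. Field names are illustrative.

import hashlib
import json
import time

manifest = {
    "model": "defect-detector",
    "run": "v3",
    "created_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    "params": {"lr": 3e-4, "epochs": 20, "batch_size": 32},
    "splits": {
        "training": ["row-001", "row-002"],
        "validation": ["row-003"],
        "test": ["row-004"],
    },
    "label_snapshot": "labels-2024-01-15",  # hypothetical label version tag
}

# A content hash over the inputs makes any later drift detectable.
manifest["checksum"] = hashlib.sha256(
    json.dumps(manifest, sort_keys=True).encode()
).hexdigest()

with open(f"run-{manifest['run']}-manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)
```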

Integrate and orchestrate model re-training jobs

Turn your labeled data into a trained model without friction. Connect Labelbox to your preferred model training cloud provider or your custom model training service via webhooks or the Python SDK and launch training jobs all from the Labelbox UI. Simplify your pipeline with a single low-code integration.
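
As a hedged sketch of the webhook path, the snippet below shows a minimal Flask endpoint that receives a labeling event and kicks off a training job. The event payload shape, the LABEL_CREATED event name, and launch_training_job() are hypothetical placeholders; consult the Labelbox webhook documentation for the real event format.

```python
# Hypothetical webhook receiver: a minimal Flask endpoint that reacts to a
# labeling event by launching a training job. Payload fields and
# launch_training_job() are placeholders, not the documented Labelbox schema.

from flask import Flask, request

app = Flask(__name__)

def launch_training_job(project_id: str) -> None:
    # Placeholder: call your cloud trainer (e.g. Vertex AI, SageMaker) here.
    print(f"launching training for project {project_id}")

@app.route("/labelbox-webhook", methods=["POST"])
def on_labelbox_event():
    event = request.get_json(force=True)
    if event.get("type") == "LABEL_CREATED":  # hypothetical event name
        launch_training_job(event.get("project_id", "unknown"))
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)
```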

“We were able to reduce our data requirements and spend by more than 50%. This was done by targeting the model’s weaknesses using Model and prioritizing the right data in Catalog to more quickly address model failures. Rather than relying on scattershot data collection, we were able to target the data we already had in Labelbox and make fixes. This allowed our team to save time and target data that we knew would make a difference in our model’s performance.”

Noé Barrell

ML Engineer, Deque

See it in action