
Guide to Foundation Models

Combine the power of foundation models with human-in-the-loop review to accelerate your labeling operations. Learn how Labelbox Foundry enables machine learning teams to automate data labeling and enrichment.

What are foundation models?

Foundation models are trained on broad sets of unlabeled data and can be adapted to many different tasks with minimal fine-tuning.

How to use open source foundation models

How do open-source models provide a foundation for developers and researchers to create specialized AI solutions? Read on to learn more.

A list of foundation models to get the most out of your data

Discover a list of foundation models that AI builders can leverage to accelerate the development of common enterprise AI applications at scale.

Frequently asked questions

  • How does Labelbox work with foundation models?

    Foundry brings together world-class AI models to automate data labeling and enrichment, using advanced intelligence to pre-label data in just a few clicks. This AI "copilot" can help kickstart your labeling efforts so your team can redirect manual labeling efforts to high-quality review. Accelerate time-to-value while saving over 80% in labeling time and cost with automated pre-labeling.

  • Does Labelbox have other products that help with improving model performance?

    In addition to Foundry, Labelbox Model provides collaborative and data-centric tools to debug, diagnose, and optimize your machine learning data and models. With Model, you can:

    Automatically debug model errors with robust error analysis

    Diagnose model errors and debug them:

    • Leverage auto-metrics to surface mispredictions or mislabeled data

    • Analyze the distribution of annotations and predictions

    • Filter on specific metrics such as IOU or confidence score to drill into model errors and find specific examples
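The IOU (intersection over union) metric mentioned above measures how much a predicted region overlaps its ground-truth annotation, from 0 (no overlap) to 1 (exact match). For axis-aligned bounding boxes it can be computed as in this illustrative sketch (generic code, not Labelbox SDK API):

```python
def bbox_iou(box_a, box_b):
    """IOU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two 10x10 boxes overlapping in a 5x5 corner:
# intersection = 25, union = 100 + 100 - 25 = 175, IOU = 25/175 ≈ 0.143
print(bbox_iou((0, 0, 10, 10), (5, 5, 15, 15)))
```

Filtering predictions whose IOU falls below a chosen threshold (for example 0.5) is a common way to surface likely mispredictions for review.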

    Compare models across experiments

    Understand how a model is improving:

    • Collaboratively version and compare training data and hyperparameters across iterations in a single place

    • Visually compare and view comparison metrics between two experiments

    • Measure and understand the value in each iteration

    Continuously boost and improve model performance

    Unlock gains at every iteration:

    • Surface candidate labeling mistakes and send them to a labeling project to be corrected

    • Curate high-impact unlabeled data to boost model performance

    • Identify data on which the model is underperforming and find similar unlabeled data in Catalog to send into a labeling project

    Model can be used in conjunction with our Catalog and Annotate products.

  • How much does Labelbox Foundry cost?

You only pay for the models you use to pre-label or enrich your data. Foundry pricing is calculated and billed monthly based on the following:

1. Inference cost – Labelbox charges customers for inference on all models it hosts. Inference costs are specific to each model available in Foundry and are based on vendor or compute costs; prices are published on our website as well as inside the product.

2. Labelbox's platform cost – each asset with predictions generated by Foundry accrues LBUs (Labelbox Units).
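The two pricing components above add up per asset, so a monthly bill can be estimated as in the sketch below. All prices and the LBU rate here are made-up placeholders, not Labelbox's published rates:

```python
def estimate_foundry_cost(num_assets, inference_price_per_asset,
                          lbus_per_asset, price_per_lbu):
    """Estimate a monthly Foundry bill: inference cost plus platform cost.

    Placeholder model: both components scale linearly with asset count.
    """
    inference_cost = num_assets * inference_price_per_asset
    platform_cost = num_assets * lbus_per_asset * price_per_lbu
    return inference_cost + platform_cost

# Hypothetical example: 10,000 assets, $0.002 inference per asset,
# 1 LBU per asset at $0.01 per LBU:
# 10,000 * 0.002 + 10,000 * 1 * 0.01 = 20 + 100 = $120
print(f"${estimate_foundry_cost(10_000, 0.002, 1, 0.01):.2f}")
```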

Learn more about the pricing of the Foundry add-on for Labelbox Model on our pricing page.