
Welcome to the Foundry

Build intelligent applications with foundation models

OpenAI Whisper

Whisper is an automatic speech recognition (ASR) system trained on 680,000 hours of multilingual data, offering improved robustness to accents, noise, and technical language. It transcribes and translates multiple languages into English.
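
For illustration, here is a minimal transcription sketch assuming the open-source openai-whisper Python package; the model size and audio file name are placeholder assumptions, not part of the original description.

    # Minimal sketch using the open-source openai-whisper package (`pip install openai-whisper`).
    # The model size ("base") and file name ("meeting.mp3") are illustrative assumptions.
    import whisper

    model = whisper.load_model("base")             # other sizes: tiny, small, medium, large, turbo
    result = model.transcribe("meeting.mp3")       # language is auto-detected by default
    print(result["text"])                          # full transcript as a single string

    # Translate non-English speech into English instead of transcribing it verbatim.
    translated = model.transcribe("meeting.mp3", task="translate")
    print(translated["text"])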


Intended Use

  • Whisper is useful as an ASR solution, especially for English speech recognition.

  • The models are primarily trained and evaluated on ASR and speech translation to English.

  • They show strong ASR results in about 10 languages.

  • They may exhibit additional capabilities if fine-tuned for tasks like voice activity detection, speaker classification, or speaker diarization.


Performance

  • Speech recognition and translation accuracy is near state-of-the-art.

  • Performance varies across languages, with lower accuracy on low-resource or low-discoverability languages.

  • Whisper shows varying performance on different accents and dialects of languages.


Limitations

  • Whisper is trained in a weakly supervised manner using large-scale noisy data, leading to potential hallucinations.

  • Hallucinations occur as the models combine predicting the next word and transcribing audio.

  • The sequence-to-sequence architecture may generate repetitive text, which can be partially mitigated by beam search and temperature scheduling.

  • These issues may be more pronounced in lower-resource and/or lower-discoverability languages.

  • Higher word error rates may occur across speakers of different genders, races, ages, or other demographics.


Citation

https://openai.com/index/whisper/

Google Gemini 2.0 Pro

Gemini 2.0 Pro is the strongest model in the Gemini family for coding and world knowledge, and it features a 2-million-token context window. Gemini 2.0 Pro is available as an experimental model in Vertex AI and is an upgrade path for 1.5 Pro users who want better quality, or who are particularly invested in long context and code.


Intended Use

  • Multimodal input

  • Text output

  • Prompt optimizers

  • Controlled generation

  • Function calling (excluding compositional function calling)

  • Grounding with Google Search

  • Code execution

  • Token counting


Performance

Google Gemini 2.0 Pro delivers the strongest coding performance in the Gemini family and a superior ability to handle complex prompts, demonstrating better understanding and reasoning about world knowledge than any previous Gemini model.

It features the largest context window of 2 million tokens, enabling comprehensive analysis and understanding of large amounts of information. Additionally, it can call external tools like Google Search and execute code, enhancing its utility for a wide range of tasks, including coding and knowledge analysis.
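
As a rough sketch of how these capabilities can be exercised, the example below uses the google-genai Python SDK to count tokens and generate a response; the model ID, API key handling, and prompt are illustrative assumptions rather than a definitive setup.

    # Hedged sketch using the google-genai Python SDK (`pip install google-genai`).
    # The model ID and prompt are illustrative assumptions.
    from google import genai

    client = genai.Client(api_key="YOUR_API_KEY")   # or configure Vertex AI credentials instead
    model_id = "gemini-2.0-pro-exp-02-05"           # assumed experimental model ID

    prompt = "Summarize the key design decisions in this codebase: ..."

    # Count tokens before sending a long-context request.
    token_info = client.models.count_tokens(model=model_id, contents=prompt)
    print(token_info.total_tokens)

    # Generate a response.
    response = client.models.generate_content(model=model_id, contents=prompt)
    print(response.text)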


Limitations

  • Context: Gemini 2.0 Pro may struggle with maintaining context over extended conversations, leading to inconsistencies in long interactions.

  • Bias: As Gemini 2.0 Pro is trained on a large corpus of internet text, it may inadvertently reflect and perpetuate biases present in the training data.

  • Creativity Boundaries: While capable of creative outputs, Gemini 2.0 Pro may not always meet specific creative standards or expectations for novel and nuanced content.

  • Ethical Concerns: Gemini 2.0 Pro can be used to generate misleading information, offensive content, or be exploited for harmful purposes if not properly moderated.

  • Comprehension: Gemini 2.0 Pro might not fully understand or accurately interpret highly technical or domain-specific content, especially if it involves recent developments post-training data cutoff.

  • Dependence on Prompt Quality: The quality and relevance of Gemini 2.0 Pro's output are highly dependent on the clarity and specificity of the input prompts provided by the user.


Citation

https://cloud.google.com/vertex-ai/generative-ai/docs/gemini-v2#2.0-pro

OpenAI o3-mini

OpenAI o3-mini is a powerful and fast model that advances the boundaries of what small models can achieve. It delivers exceptional STEM capabilities, with particular strength in science, math, and coding, while maintaining the low cost and reduced latency of OpenAI o1-mini.


Intended Use

OpenAI o3-mini is designed to be used for tasks that require fast and efficient reasoning, particularly in technical domains like science, math, and coding. It’s optimized for STEM (Science, Technology, Engineering, and Mathematics) problem-solving, offering precise answers with improved speed compared to previous models. 

Developers can use it for applications involving function calling, structured outputs, and other technical features. It’s particularly useful in contexts where both speed and accuracy are essential, such as coding, logical problem-solving, and complex technical inquiries.
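
For illustration, a minimal request through the OpenAI Python SDK might look like the sketch below; the model name, reasoning_effort value, and prompt are assumptions chosen for this example.

    # Hedged sketch using the OpenAI Python SDK (`pip install openai`).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="o3-mini",                # assumed model name
        reasoning_effort="medium",      # low / medium / high trades latency for reasoning depth
        messages=[
            {"role": "user", "content": "Prove that the sum of two even integers is even."},
        ],
    )
    print(response.choices[0].message.content)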


Performance

OpenAI o3-mini performs exceptionally well in STEM tasks, particularly in science, math, and coding, with improvements in both speed and accuracy compared to its predecessor, o1-mini. It delivers faster responses, with an average response time 24% quicker than o1-mini (7.7 seconds vs. 10.16 seconds).

In terms of accuracy, it produces clearer, more accurate answers, with 39% fewer major errors on complex real-world questions. Expert testers preferred its responses 56% of the time over o1-mini. It also matches o1-mini’s performance in challenging reasoning evaluations, including AIME and GPQA, especially when using medium reasoning effort.


Limitations

  1. No Vision Capabilities: Unlike some other models, o3-mini does not support visual reasoning tasks, so it's not suitable for image-related tasks.

  2. Complexity in High-Intelligence Tasks: While o3-mini performs well in most STEM tasks, for extremely complex reasoning, it may still lag behind larger models.

  3. Accuracy in Specific Domains: While o3-mini excels in technical domains, it might not always match the performance of specialized models in certain niche areas, particularly those outside of STEM.

  4. Potential Trade-Off Between Speed and Accuracy: While users can adjust reasoning effort for a balance, higher reasoning efforts may lead to slightly longer response times.

  5. Limited Fine-Tuning: Though optimized for general STEM tasks, fine-tuning for specific use cases might be necessary to achieve optimal results in more specialized areas.


Citation

https://openai.com/index/openai-o3-mini/

Google Gemini 2.0 Flash

Gemini 2.0 Flash is designed to handle high-volume, high-frequency tasks at scale and is highly capable of multimodal reasoning across vast amounts of information with a context window of 1 million tokens.


Intended Use

  • Text generation

  • Grounding with Google Search

  • Gen AI SDK

  • Multimodal Live API

  • Bounding box detection

  • Image generation

  • Speech generation


Performance

Gemini 2.0 Flash outperforms Gemini 1.5 Pro on key benchmarks, at twice the speed. It also features the following improvements:

  • Multimodal Live API: This new API enables low-latency bidirectional voice and video interactions with Gemini.

  • Quality: Improved performance over Gemini 1.5 Pro across most quality benchmarks.

  • Improved agentic capabilities: 2.0 Flash delivers improvements to multimodal understanding, coding, complex instruction following, and function calling. These improvements work together to support better agentic experiences.


Limitations

  • Context: May struggle with maintaining context over extended conversations, leading to inconsistencies in long interactions.

  • Bias: As it is trained on a large corpus of internet text, it may inadvertently reflect and perpetuate biases present in the training data.

  • Creativity Boundaries: While capable of creative outputs, it may not always meet specific creative standards or expectations for novel and nuanced content.

  • Ethical Concerns: Can be used to generate misleading information, offensive content, or be exploited for harmful purposes if not properly moderated.

  • Comprehension: Might not fully understand or accurately interpret highly technical or domain-specific content, especially if it involves recent developments post-training data cutoff.

  • Dependence on Prompt Quality: The quality and relevance of the output are highly dependent on the clarity and specificity of the input prompts provided by the user.


Citation

https://cloud.google.com/vertex-ai/generative-ai/docs/gemini-v2#2.0-flash

Claude 3.7 Sonnet

Claude 3.7 Sonnet, by Anthropic, can produce near-instant responses or extended, step-by-step thinking. Claude 3.7 Sonnet shows particularly strong improvements in coding and front-end web development.


Intended Use

Claude 3.7 Sonnet is designed to enhance real-world tasks by offering a blend of fast responses and deep reasoning, particularly in coding, web development, problem-solving, and instruction-following.

  • Optimized for real-world applications rather than competitive math or computer science problems.

  • Useful in business environments requiring a balance of speed and accuracy.

  • Ideal for tasks like bug fixing, feature development, and large-scale refactoring.

Coding Capabilities:

  • Strong in handling complex codebases, planning code changes, and full-stack updates.

  • Introduces Claude Code, an agentic coding tool that can edit files, write and run tests, and interact with code repositories hosted on GitHub.

  • Claude Code significantly reduces development time by automating tasks that would typically take 45+ minutes manually.


Performance

Claude 3.7 Sonnet combines the capabilities of a large language model (LLM) with advanced reasoning, allowing users to choose between standard mode for quick responses and extended thinking mode for deeper reflection before answering. In extended thinking mode, Claude self-reflects, improving performance on tasks like math, physics, coding, and following instructions. Users can also control the thinking time via the API, adjusting the token budget to balance speed and answer quality.
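
As a sketch of controlling extended thinking through the API, the example below uses the Anthropic Python SDK; the model ID, token budget, and prompt are assumptions chosen for illustration, not a definitive configuration.

    # Hedged sketch using the Anthropic Python SDK (`pip install anthropic`).
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    response = client.messages.create(
        model="claude-3-7-sonnet-20250219",                    # assumed model ID
        max_tokens=4096,
        thinking={"type": "enabled", "budget_tokens": 2048},   # extended thinking token budget
        messages=[{"role": "user", "content": "Plan a refactor of a legacy payments module."}],
    )

    # The response interleaves "thinking" and "text" content blocks; print only the final text.
    for block in response.content:
        if block.type == "text":
            print(block.text)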

Early testing demonstrated Claude's strength in coding, with significant improvements in handling complex codebases, advanced tool usage, and planning code changes. It also excels at full-stack updates and producing production-ready code with high precision, as seen in use cases with platforms like Vercel, Replit, and Canva. Claude's performance is particularly strong in developing sophisticated web apps and dashboards and in reducing errors, making it a top choice for developers working on real-world coding tasks.


Limitations

  • Context: Claude 3.7 Sonnet may struggle with maintaining context over extended conversations, leading to inconsistencies in long interactions.

  • Bias: As Claude 3.7 Sonnet is trained on a large corpus of internet text, it may inadvertently reflect and perpetuate biases present in the training data.

  • Creativity Boundaries: While capable of creative outputs, Claude 3.7 Sonnet may not always meet specific creative standards or expectations for novel and nuanced content.

  • Ethical Concerns: Claude 3.7 Sonnet can be used to generate misleading information, offensive content, or be exploited for harmful purposes if not properly moderated.

  • Comprehension: Claude 3.7 Sonnet might not fully understand or accurately interpret highly technical or domain-specific content, especially if it involves recent developments post-training data cutoff.

  • Dependence on Prompt Quality: The quality and relevance of the output of Claude 3.7 Sonnet are highly dependent on the clarity and specificity of the input prompts provided by the user.


Citation

https://www.anthropic.com/news/claude-3-7-sonnet

Amazon Nova Pro

Amazon Nova Pro is a highly capable multimodal model that combines accuracy, speed, and cost for a wide range of tasks. 

The capabilities of Amazon Nova Pro, coupled with its focus on high speed and cost efficiency, make it a compelling model for almost any task, including video summarization, Q&A, mathematical reasoning, software development, and AI agents that can execute multistep workflows.

In addition to state-of-the-art accuracy on text and visual intelligence benchmarks, Amazon Nova Pro excels at instruction following and agentic workflows as measured by the Comprehensive RAG Benchmark (CRAG), the Berkeley Function Calling Leaderboard, and Mind2Web.


Intended Use

  • Multimodal Processing: It can process and understand text, images, documents, and video, making it well suited for applications like video captioning, visual question answering, and other multimedia tasks.

  • Complex Language Tasks: Nova Pro is designed to handle complex language tasks with high accuracy, such as deep reasoning, multi-step problem solving, and mathematical problem-solving.

  • Agentic Workflows: It powers AI agents capable of performing multi-step tasks, integrated with retrieval-augmented generation (RAG) for improved accuracy and data grounding.

  • Customizable Applications: Developers can fine-tune it with multimodal data for specific use cases, such as enhancing accuracy, reducing latency, or optimizing cost.

  • Fast Inference: It’s optimized for fast response times, making it suitable for real-time applications in industries like customer service, automation, and content creation.


Performance

Amazon Nova Pro provides high performance, particularly in complex reasoning, multimodal tasks, and real-time applications, with speed and flexibility for developers.
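
For illustration, a minimal text request might be sent through Amazon Bedrock's Converse API as sketched below; the AWS region, Bedrock model ID, and prompt are assumptions for this example.

    # Hedged sketch invoking Nova Pro through the Amazon Bedrock Converse API via boto3.
    import boto3

    client = boto3.client("bedrock-runtime", region_name="us-east-1")  # assumed region

    response = client.converse(
        modelId="amazon.nova-pro-v1:0",   # assumed Bedrock model ID
        messages=[
            {
                "role": "user",
                "content": [{"text": "Summarize the attached meeting notes in three bullet points."}],
            },
        ],
        inferenceConfig={"maxTokens": 512, "temperature": 0.3},
    )
    print(response["output"]["message"]["content"][0]["text"])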


Limitations

  1. Domain Specialization: While it performs well across a variety of tasks, it may not always be as specialized in certain niche areas or highly specific domains compared to models fine-tuned for those purposes.

  2. Resource-Intensive: As a powerful multimodal model, Nova Pro can require significant computational resources for optimal performance, which might be a consideration for developers working with large datasets or complex tasks.

  3. Training Data: Nova Pro's performance is highly dependent on the quality and diversity of the multimodal data it's trained on. Its performance in tasks involving complex or obscure multimedia content might be less reliable.

  4. Fine-Tuning Requirements: While customizability is a key feature, fine-tuning the model for very specific tasks or datasets might still require considerable effort and expertise from developers.


Citation

https://www.amazon.science/publications/the-amazon-nova-family-of-models-technical-report-and-model-card

Grok

Grok is a general purpose model that can be used for a variety of tasks, including generating and understanding text, code, and function calling.

Intended Use

  • Text and code: Generate code, extract data, prepare summaries and more.

  • Vision: Identify objects, analyze visuals, extract text from documents and more.

  • Function calling: Connect Grok to external tools and services for enriched interactions (a minimal chat-completion sketch follows this list).
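
As a rough sketch of the function-calling point above: xAI exposes an OpenAI-compatible HTTP API, so an existing OpenAI client can often be pointed at it. The base URL, model name, and tool definition below are assumptions for illustration only.

    # Hedged sketch: calling Grok through an OpenAI-compatible client with a tool definition.
    from openai import OpenAI

    client = OpenAI(api_key="YOUR_XAI_API_KEY", base_url="https://api.x.ai/v1")  # assumed endpoint

    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",                     # hypothetical tool for illustration
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    response = client.chat.completions.create(
        model="grok-2-latest",                          # assumed model name
        messages=[{"role": "user", "content": "What's the weather in Berlin right now?"}],
        tools=tools,
    )
    print(response.choices[0].message.tool_calls)       # tool call(s) the model chose, if any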


Performance


Limitations

  • Context: Grok may struggle with maintaining context over extended conversations, leading to inconsistencies in long interactions.

  • Bias: As Grok is trained on a large corpus of internet text, it may inadvertently reflect and perpetuate biases present in the training data.

  • Creativity Boundaries: While capable of creative outputs, Grok may not always meet specific creative standards or expectations for novel and nuanced content.

  • Ethical Concerns: Grok can be used to generate misleading information, offensive content, or be exploited for harmful purposes if not properly moderated.

  • Comprehension: Grok might not fully understand or accurately interpret highly technical or domain-specific content, especially if it involves recent developments post-training data cutoff.

  • Dependence on Prompt Quality: The quality and relevance of Grok's output are highly dependent on the clarity and specificity of the input prompts provided by the user.


Citation

https://x.ai/blog/grok-2

Llama 3.2

The Llama 3.2-Vision collection of multimodal large language models (LLMs) comprises pretrained and instruction-tuned image-reasoning generative models in 11B and 90B sizes (text + images in / text out). The Llama 3.2-Vision instruction-tuned models are optimized for visual recognition, image reasoning, captioning, and answering general questions about an image. The models outperform many of the available open source and closed multimodal models on common industry benchmarks.


Intended Use

Llama 3.2's 11B and 90B models support image reasoning, enabling tasks like understanding charts/graphs, captioning images, and pinpointing objects based on language descriptions. For example, the models can answer questions about sales trends from a graph or trail details from a map. They bridge vision and language by extracting image details, understanding the scene, and generating descriptive captions to tell the story, making them powerful for both visual and textual reasoning tasks.

Llama 3.2's 1B and 3B models support multilingual text generation and on-device applications with strong privacy. Developers can create personalized, agentic apps where data stays local, enabling tasks like summarizing messages, extracting action items, and sending calendar invites for follow-up meetings.
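
As a sketch of the image-reasoning workflow described above, the 11B-Vision-Instruct checkpoint can be run with the Hugging Face transformers library; the model ID, image path, and generation settings are assumptions for this example.

    # Hedged sketch: image question answering with Llama 3.2 11B Vision Instruct via transformers.
    import torch
    from PIL import Image
    from transformers import AutoProcessor, MllamaForConditionalGeneration

    model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"   # assumed (gated) checkpoint
    model = MllamaForConditionalGeneration.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    processor = AutoProcessor.from_pretrained(model_id)

    image = Image.open("sales_chart.png")                   # assumed local image
    messages = [{"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Which quarter had the highest sales, and by roughly how much?"},
    ]}]
    prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
    inputs = processor(image, prompt, add_special_tokens=False, return_tensors="pt").to(model.device)

    output = model.generate(**inputs, max_new_tokens=128)
    print(processor.decode(output[0], skip_special_tokens=True))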


Performance

Llama 3.2's lightweight 1B and 3B models outperform competitors such as Gemma 2 2.6B and Phi 3.5-mini on tasks like instruction following and summarization, while the 11B and 90B vision models are competitive with leading closed models on image-understanding benchmarks.


Limitations

  • Context: May struggle with maintaining context over extended conversations, leading to inconsistencies in long interactions.

  • Medical images: Llama 3.2 is not suitable for interpreting specialized medical images like CT scans and shouldn't be used for medical advice.

  • Bias: As it is trained on a large corpus of internet text, it may inadvertently reflect and perpetuate biases present in the training data.

  • Creativity Boundaries: While capable of creative outputs, it may not always meet specific creative standards or expectations for novel and nuanced content.

  • Ethical Concerns: Can be used to generate misleading information, offensive content, or be exploited for harmful purposes if not properly moderated.

  • Comprehension: Might not fully understand or accurately interpret highly technical or domain-specific content, especially if it involves recent developments post-training data cutoff.

  • Dependence on Prompt Quality: The quality and relevance of the output are highly dependent on the clarity and specificity of the input prompts provided by the user.


Citation

https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/

Claude 3.5 Sonnet

Claude 3.5 Sonnet sets new industry benchmarks for graduate-level reasoning (GPQA), undergraduate-level knowledge (MMLU), and coding proficiency (HumanEval). It shows marked improvement in grasping nuance, humor, and complex instructions, and is exceptional at writing high-quality content with a natural, relatable tone. Claude 3.5 Sonnet operates at twice the speed of Claude 3 Opus. This performance boost, combined with cost-effective pricing, makes Claude 3.5 Sonnet ideal for complex tasks such as context-sensitive customer support and orchestrating multi-step workflows.


Intended Use

  • Task automation: plan and execute complex actions across APIs and databases, interactive coding

  • R&D: research review, brainstorming and hypothesis generation, drug discovery

  • Strategy: advanced analysis of charts & graphs, financials and market trends, forecasting


Performance

  • Advanced Coding Ability: In an internal evaluation by Anthropic, Claude 3.5 Sonnet solved 64% of problems, outperforming Claude 3 Opus, which solved 38%.

  • Multilingual Capabilities: Claude 3.5 Sonnet offers improved fluency in non-English languages such as Spanish and Japanese, enabling use cases like translation services and global content creation.

  • Vision and Image Processing: This model can process and analyze visual input, extracting insights from documents, processing web UI, generating image catalog metadata, and more (a minimal vision request is sketched after this list).

  • Steerability and Ease of Use: Claude 3.5 Sonnet is designed to be easy to steer and better at following directions, giving you more control over model behavior and more predictable, higher-quality outputs.
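
As a sketch of the vision capability noted above, an image can be passed to the model as a base64-encoded content block through the Anthropic Python SDK; the model ID, file path, and prompt are assumptions for this example.

    # Hedged sketch: asking Claude 3.5 Sonnet about an image via the Anthropic Messages API.
    import base64
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    with open("invoice.jpg", "rb") as f:                    # assumed local image
        image_b64 = base64.standard_b64encode(f.read()).decode("utf-8")

    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",                 # assumed model ID
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": [
                {"type": "image", "source": {"type": "base64",
                                             "media_type": "image/jpeg",
                                             "data": image_b64}},
                {"type": "text", "text": "Extract the vendor name, date, and total from this invoice."},
            ],
        }],
    )
    print(response.content[0].text)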


Limitations

Here are some of the limitations we are aware of:

  • Medical images: Claude 3.5 is not suitable for interpreting specialized medical images like CT scans and shouldn't be used for medical advice.

  • Non-English: Claude 3.5 may not perform optimally when handling images with text of non-Latin alphabets, such as Japanese or Korean.

  • Big text: Users should enlarge text within the image to improve readability for Claude 3.5, but avoid cropping important details.

  • Rotation: Claude 3.5 may misinterpret rotated / upside-down text or images.

  • Visual elements: Claude 3.5 may struggle to understand graphs or text where colors or styles like solid, dashed, or dotted lines vary.

  • Spatial reasoning: Claude 3.5 struggles with tasks requiring precise spatial localization, such as identifying chess positions.

  • Hallucinations: The model can provide factually inaccurate information.

  • Image shape: Claude 3.5 struggles with panoramic and fisheye images.

  • Metadata and resizing: Claude 3.5 doesn't process original file names or metadata, and images are resized before analysis, affecting their original dimensions.

  • Counting: Claude 3.5 may give approximate counts for objects in images.

  • CAPTCHAs: For safety reasons, Claude 3.5 has a system to block the submission of CAPTCHAs.


Citation

https://docs.anthropic.com/claude/docs/models-overview

Claude 3.5 Haiku

Claude 3.5 Haiku, the next generation of Anthropic's fastest and most cost-effective model, is optimal for use cases where speed and affordability matter. It improves on its predecessor across every skill set.

Intended Use

Claude 3.5 Haiku offers fast speeds, improved instruction-following, and accurate tool use, making it ideal for user-facing products and personalized experiences. 

Key use cases include code completion, which streamlines development workflows with quick, accurate code suggestions. It powers interactive chatbots for customer service, e-commerce, and education, handling high volumes of user interactions. It excels at data extraction and labeling, processing large datasets in sectors like finance and healthcare. Additionally, it provides real-time content moderation for safe online environments.


Performance


Limitations

  • Context: May struggle with maintaining context over extended conversations, leading to inconsistencies in long interactions.

  • Bias: As it is trained on a large corpus of internet text, it may inadvertently reflect and perpetuate biases present in the training data.

  • Creativity Boundaries: While capable of creative outputs, it may not always meet specific creative standards or expectations for novel and nuanced content.

  • Ethical Concerns: Can be used to generate misleading information, offensive content, or be exploited for harmful purposes if not properly moderated.

  • Comprehension: Might not fully understand or accurately interpret highly technical or domain-specific content, especially if it involves recent developments post-training data cutoff.

  • Dependence on Prompt Quality: The quality and relevance of the output are highly dependent on the clarity and specificity of the input prompts provided by the user.


Citation

https://docs.anthropic.com/claude/docs/models-overview

Amazon Rekognition

Common object detection and image classification model from Amazon Rekognition.


Intended Use

Amazon Rekognition's object detection model is primarily used for detecting objects, scenes, activities, landmarks, faces, dominant colors, and image quality in images and videos. Some common use cases include:

  • Detect and label common objects in images 

  • Identify activities and scenes in visual content

  • Enable content moderation and filtering

  • Enhance image search capabilities



Performance

Amazon Rekognition's object detection model has been reported to have high accuracy in detecting objects and scenes in images and videos. Its capabilities include the following (a minimal API sketch follows the list):

  • Can detect thousands of object categories

  • Provides bounding boxes for object locations

  • Assigns confidence scores to detections
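
For illustration, a minimal label-detection call through boto3 might look like the sketch below; the AWS region, S3 bucket, object key, and thresholds are assumptions for this example.

    # Hedged sketch: detecting labels in an S3-hosted image with Amazon Rekognition via boto3.
    import boto3

    client = boto3.client("rekognition", region_name="us-east-1")  # assumed region

    response = client.detect_labels(
        Image={"S3Object": {"Bucket": "my-image-bucket", "Name": "photos/street.jpg"}},  # assumed location
        MaxLabels=10,
        MinConfidence=75,
    )
    for label in response["Labels"]:
        boxes = [inst["BoundingBox"] for inst in label.get("Instances", [])]
        print(label["Name"], round(label["Confidence"], 1), f"{len(boxes)} bounding box(es)")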



Limitations

The performance of the model may be limited by factors such as the quality and quantity of training data, the complexity of the image content, or the accuracy of the annotations. Additionally, Amazon Rekognition may have detection issues with black-and-white images and with images of elderly people.

Other limitations include:

  • May struggle with small or partially obscured objects

  • Performance can vary based on image quality and lighting

  • Limited ability to understand context or relationships between objects

  • Cannot identify specific individuals (separate face recognition API for that)

  • May have biases in detection rates across different demographics


Citation

Amazon Rekognition documentation

OpenAI o1-mini

The o1 series of large language models are designed to perform advanced reasoning through reinforcement learning. These models engage in deep internal thought processes before delivering responses, enabling them to handle complex queries. o1-mini is a small, specialized model optimized for STEM-related reasoning during its pretraining. Despite its reduced size, o1-mini undergoes the same high-compute reinforcement learning pipeline as the larger o1 models, achieving comparable performance on many reasoning tasks while being significantly more cost-efficient.


Performance

Human raters compared o1-mini to GPT-4o on challenging, open-ended prompts across various domains to assess performance and accuracy in different types of tasks. These evaluations show that o1-mini is optimized for STEM-related tasks.


Limitations

  • Optimization for STEM Knowledge: o1-mini is not optimized for tasks requiring non-STEM factual knowledge, which may result in less accurate responses when handling queries outside of technical or scientific domains.

  • Domain Preference: o1-mini is preferred to GPT-4o in reasoning-heavy domains, but is not preferred to GPT-4o in language-focused domains, where linguistic nuance and fluency are more critical.

  • Context: May struggle with maintaining context over extended conversations, leading to inconsistencies in long interactions.

  • Bias: As it is trained on a large corpus of internet text, it may inadvertently reflect and perpetuate biases present in the training data.

  • Creativity Boundaries: While capable of creative outputs, it may not always meet specific creative standards or expectations for novel and nuanced content.

  • Ethical Concerns: Can be used to generate misleading information, offensive content, or be exploited for harmful purposes if not properly moderated.

  • Comprehension: Might not fully understand or accurately interpret highly technical or domain-specific content, especially if it involves recent developments post-training data cutoff.

  • Dependence on Prompt Quality: The quality and relevance of the output are highly dependent on the clarity and specificity of the input prompts provided by the user.


Citation

https://openai.com/index/openai-o1-mini-advancing-cost-efficient-reasoning/

OpenAI o1-preview

The o1 series of large language models are designed to perform advanced reasoning through reinforcement learning. These models engage in deep internal thought processes before delivering responses, enabling them to handle complex queries. o1-preview is designed to reason about hard problems using broad general knowledge about the world.


Performance

Human raters compared o1-preview to GPT-4o on challenging, open-ended prompts across various domains to assess performance and accuracy in different types of tasks. These evaluations show that o1-preview is optimized for STEM-related tasks.


Limitations

  • Optimization for STEM Knowledge: o1-preview is not optimized for tasks requiring non-STEM factual knowledge, which may result in less accurate responses when handling queries outside of technical or scientific domains.

  • Domain Preference: o1-preview is preferred to GPT-4o in reasoning-heavy domains, but is not preferred to GPT-4o in language-focused domains, where linguistic nuance and fluency are more critical.

  • Context: May struggle with maintaining context over extended conversations, leading to inconsistencies in long interactions.

  • Bias: As it is trained on a large corpus of internet text, it may inadvertently reflect and perpetuate biases present in the training data.

  • Creativity Boundaries: While capable of creative outputs, it may not always meet specific creative standards or expectations for novel and nuanced content.

  • Ethical Concerns: Can be used to generate misleading information, offensive content, or be exploited for harmful purposes if not properly moderated.

  • Comprehension: Might not fully understand or accurately interpret highly technical or domain-specific content, especially if it involves recent developments post-training data cutoff.

  • Dependence on Prompt Quality: The quality and relevance of the output are highly dependent on the clarity and specificity of the input prompts provided by the user.


Citation

https://openai.com/index/introducing-openai-o1-preview/

Open AI Whisper

Whisper is an automatic speech recognition (ASR) system trained on 680,000 hours of multilingual data, offering improved robustness to accents, noise, and technical language. It transcribes and translates multiple languages into English.


Intended Use

  • Whisper is useful as an ASR solution, especially for English speech recognition.

  • The models are primarily trained and evaluated on ASR and speech translation to English.

  • They show strong ASR results in about 10 languages.

  • They may exhibit additional capabilities if fine-tuned for tasks like voice activity detection, speaker classification, or speaker diarization.


Performance

  • Speech recognition and translation accuracy is near state-of-the-art.

  • Performance varies across languages, with lower accuracy on low-resource or low-discoverability languages.

  • Whisper shows varying performance on different accents and dialects of languages.


Limitations

  • Whisper is trained in a weakly supervised manner using large-scale noisy data, leading to potential hallucinations.

  • Hallucinations occur as the models combine predicting the next word and transcribing audio.

  • The sequence-to-sequence architecture may generate repetitive text, which can be partially mitigated by beam search and temperature scheduling.

  • These issues may be more pronounced in lower-resource and/or lower-discoverability languages.

  • Higher word error rates may occur across speakers of different genders, races, ages, or other demographics.


Citation

https://openai.com/index/whisper/

Google Gemini 2.0 Pro

Gemini 2.0 Pro is the strongest model among the family of Gemini models for coding and world knowledge, and it features a 2M long context window. Gemini 2.0 Pro is available as an experimental model in Vertex AI and is an upgrade path for 1.5 Pro users who want better quality, or who are particularly invested in long context and code.


Intended Use

  • Multimodal input

  • Text output

  • Prompt optimizers

  • Controlled generation

  • Function calling (excluding compositional function calling)

  • Grounding with Google Search

  • Code execution

  • Count token


Performance

Google Gemini 2.0 Pro boasts the strongest coding performance and superior ability to handle complex prompts, demonstrating better understanding and reasoning of world knowledge than any previous model. 

It features the largest context window of 2 million tokens, enabling comprehensive analysis and understanding of large amounts of information. Additionally, it can call external tools like Google Search and execute code, enhancing its utility for a wide range of tasks, including coding and knowledge analysis.


Limitations

  • Context: Gemini 2.0 Pro may struggle with maintaining context over extended conversations, leading to inconsistencies in long interactions.

  • Bias: As Gemini 2.0 Pro is trained on a large corpus of internet text, it may inadvertently reflect and perpetuate biases present in the training data.

  • Creativity Boundaries: While capable of creative outputs, Gemini 2.0 Pro may not always meet specific creative standards or expectations for novel and nuanced content.

  • Ethical Concerns: Gemini 2.0 Pro can be used to generate misleading information, offensive content, or be exploited for harmful purposes if not properly moderated.

  • Comprehension: Gemini 2.0 Pro might not fully understand or accurately interpret highly technical or domain-specific content, especially if it involves recent developments post-training data cutoff.

  • Dependence on Prompt Quality: The quality and relevance of Gemini 2.0 Pro's output are highly dependent on the clarity and specificity of the input prompts provided by the user.


Citation

https://cloud.google.com/vertex-ai/generative-ai/docs/gemini-v2#2.0-pro

Open AI o3 mini

OpenAI o3-mini is a powerful and fast model that advances the boundaries of what small models can achieve. It delivers, exceptional STEM capabilities—with particular strength in science, math, and coding—all while maintaining the low cost and reduced latency of OpenAI o1-mini.


Intended Use

OpenAI o3-mini is designed to be used for tasks that require fast and efficient reasoning, particularly in technical domains like science, math, and coding. It’s optimized for STEM (Science, Technology, Engineering, and Mathematics) problem-solving, offering precise answers with improved speed compared to previous models. 

Developers can use it for applications involving function calling, structured outputs, and other technical features. It’s particularly useful in contexts where both speed and accuracy are essential, such as coding, logical problem-solving, and complex technical inquiries.


Performance

OpenAI o3-mini performs exceptionally well in STEM tasks, particularly in science, math, and coding, with improvements in both speed and accuracy compared to its predecessor, o1-mini. It delivers faster responses, with an average response time 24% quicker than o1-mini (7.7 seconds vs. 10.16 seconds).

In terms of accuracy, it produces clearer, more accurate answers, with 39% fewer major errors on complex real-world questions. Expert testers preferred its responses 56% of the time over o1-mini. It also matches o1-mini’s performance in challenging reasoning evaluations, including AIME and GPQA, especially when using medium reasoning effort.


Limitations

  1. No Vision Capabilities: Unlike some other models, o3-mini does not support visual reasoning tasks, so it's not suitable for image-related tasks.

  2. Complexity in High-Intelligence Tasks: While o3-mini performs well in most STEM tasks, for extremely complex reasoning, it may still lag behind larger models.

  3. Accuracy in Specific Domains: While o3-mini excels in technical domains, it might not always match the performance of specialized models in certain niche areas, particularly those outside of STEM.

  4. Potential Trade-Off Between Speed and Accuracy: While users can adjust reasoning effort for a balance, higher reasoning efforts may lead to slightly longer response times.

  5. Limited Fine-Tuning: Though optimized for general STEM tasks, fine-tuning for specific use cases might be necessary to achieve optimal results in more specialized areas.


Citation

https://openai.com/index/openai-o3-mini/

Bring your own model
Register your own custom models in Labelbox to pre-label or enrich datasets
Google Gemini 2.0 Flash

Gemini 2.0 Flash is designed to handle high-volume, high-frequency tasks at scale and is highly capable of multimodal reasoning across vast amounts of information with a context window of 1 million tokens.


Intended Use

  • Text generation

  • Grounding with Google Search

  • Gen AI SDK

  • Multimodal Live API

  • Bounding box detection

  • Image generation

  • Speech generation


Performance

Gemini 2.0 Flash outperforms the predecessor Gemini 1.5 Pro on key benchmarks, at twice the speed. It also features the following improvements:

  • Multimodal Live API: This new API enables low-latency bidirectional voice and video interactions with Gemini.

  • Quality: Enhanced performance across most quality benchmarks than Gemini 1.5 Pro.

  • Improved agentic capabilities: 2.0 Flash delivers improvements to multimodal understanding, coding, complex instruction following, and function calling. These improvements work together to support better agentic experiences.


Limitations

  • Context: May struggle with maintaining context over extended conversations, leading to inconsistencies in long interactions.

  • Bias: As it is trained on a large corpus of internet text, it may inadvertently reflect and perpetuate biases present in the training data.

  • Creativity Boundaries: While capable of creative outputs, it may not always meet specific creative standards or expectations for novel and nuanced content.

  • Ethical Concerns: Can be used to generate misleading information, offensive content, or be exploited for harmful purposes if not properly moderated.

  • Comprehension: Might not fully understand or accurately interpret highly technical or domain-specific content, especially if it involves recent developments post-training data cutoff.

  • Dependence on Prompt Quality: The quality and relevance of the output are highly dependent on the clarity and specificity of the input prompts provided by the user.


Citation

https://cloud.google.com/vertex-ai/generative-ai/docs/gemini-v2#2.0-flash

Claude 3.7 Sonnet

Claude 3.7 Sonnet, by Anthropic, can produce near-instant responses or extended, step-by-step thinking. Claude 3.7 Sonnet shows particularly strong improvements in coding and front-end web development.


Intended Use

Claude 3.7 Sonnet is designed to enhance real-world tasks by offering a blend of fast responses and deep reasoning, particularly in coding, web development, problem-solving, and instruction-following.

  • Optimized for real-world applications rather than competitive math or computer science problems.

  • Useful in business environments requiring a balance of speed and accuracy.

  • Ideal for tasks like bug fixing, feature development, and large-scale refactoring.

Coding Capabilities:

  • Strong in handling complex codebases, planning code changes, and full-stack updates.

  • Introduces Claude Code, an agentic coding tool that can edit files, write and run tests, and manage code repositories like GitHub.

  • Claude Code significantly reduces development time by automating tasks that would typically take 45+ minutes manually.


Performance

Claude Sonnet 3.7 combines the capabilities of a language model (LLM) with advanced reasoning, allowing users to choose between standard mode for quick responses and extended thinking mode for deeper reflection before answering. In extended thinking mode, Claude self-reflects, improving performance in tasks like math, physics, coding, and following instructions. Users can also control the thinking time via the API, adjusting the token budget to balance speed and answer quality.

Early testing demonstrated Claude’s superiority in coding, with significant improvements in handling complex codebases, advanced tool usage, and planning code changes. It also excels at full-stack updates and producing production-ready code with high precision, as seen in use cases with platforms like Vercel, Replit, and Canva. Claude's performance is particularly strong in developing sophisticated web apps, dashboards, and reducing errors. This makes it a top choice for developers working on real-world coding tasks.


Limitations

  • Context: Claude 3.7 Sonnet may struggle with maintaining context over extended conversations, leading to inconsistencies in long interactions.

  • Bias: As Claude 3.7 Sonnet trained on a large corpus of internet text, it may inadvertently reflect and perpetuate biases present in the training data.

  • Creativity Boundaries: While capable of creative outputs, Claude 3.7 Sonnet may not always meet specific creative standards or expectations for novel and nuanced content.

  • Ethical Concerns: Claude 3.7 Sonnet can be used to generate misleading information, offensive content, or be exploited for harmful purposes if not properly moderated.

  • Comprehension: Claude 3.7 Sonnet might not fully understand or accurately interpret highly technical or domain-specific content, especially if it involves recent developments post-training data cutoff.

  • Dependence on Prompt Quality: The quality and relevance of the output of Claude 3.7 Sonnet are highly dependent on the clarity and specificity of the input prompts provided by the user.


Citation

https://www.anthropic.com/news/claude-3-7-sonnet

Amazon Nova Pro

Amazon Nova Pro is a highly capable multimodal model that combines accuracy, speed, and cost for a wide range of tasks. 

The capabilities of Amazon Nova Pro, coupled with its focus on high speeds and cost efficiency, makes it a compelling model for almost any task, including video summarization, Q&A, mathematical reasoning, software development, and AI agents that can execute multistep workflows. 

In addition to state-of-the-art accuracy on text and visual intelligence benchmarks, Amazon Nova Pro excels at instruction following and agentic workflows as measured by Comprehensive RAG Benchmark (CRAG), the Berkeley Function Calling Leaderboard, and Mind2Web.


Intended Use

  • Multimodal Processing: It can process and understand text, images, documents, and video, making it well suited for applications like video captioning, visual question answering, and other multimedia tasks.

  • Complex Language Tasks: Nova Pro is designed to handle complex language tasks with high accuracy, such as deep reasoning, multi-step problem solving, and mathematical problem-solving.

  • Agentic Workflows: It powers AI agents capable of performing multi-step tasks, integrated with retrieval-augmented generation (RAG) for improved accuracy and data grounding.

  • Customizable Applications: Developers can fine-tune it with multimodal data for specific use cases, such as enhancing accuracy, reducing latency, or optimizing cost.

  • Fast Inference: It’s optimized for fast response times, making it suitable for real-time applications in industries like customer service, automation, and content creation.


Performance

Amazon Nova Pro provides high performance, particularly in complex reasoning, multimodal tasks, and real-time applications, with speed and flexibility for developers.


Limitations

  1. Domain Specialization: While it performs well across a variety of tasks, it may not always be as specialized in certain niche areas or highly specific domains compared to models fine-tuned for those purposes.

  2. Resource-Intensive: As a powerful multimodal model, Nova Pro can require significant computational resources for optimal performance, which might be a consideration for developers working with large datasets or complex tasks.

  3. Training Data: Nova Pro's performance is highly dependent on the quality and diversity of the multimodal data it's trained on. Its performance in tasks involving complex or obscure multimedia content might be less reliable.

  4. Fine-Tuning Requirements: While customizability is a key feature, fine-tuning the model for very specific tasks or datasets might still require considerable effort and expertise from developers.


Citation

https://www.amazon.science/publications/the-amazon-nova-family-of-models-technical-report-and-model-card

Grok

Grok is a general purpose model that can be used for a variety of tasks, including generating and understanding text, code, and function calling.

Intended Use

  • Text and code: Generate code, extract data, prepare summaries and more.

  • Vision: Identify objects, analyze visuals, extract text from documents and more.

  • Function calling: Connect Grok to external tools and services for enriched interactions.


Performance


Limitations

Context: Grok may struggle with maintaining context over extended conversations, leading to inconsistencies in long interactions.

Bias: As Grok trained on a large corpus of internet text, it may inadvertently reflect and perpetuate biases present in the training data.

Creativity Boundaries: While capable of creative outputs, Grok may not always meet specific creative standards or expectations for novel and nuanced content.

Ethical Concerns: Grok can be used to generate misleading information, offensive content, or be exploited for harmful purposes if not properly moderated.

Comprehension: Grok might not fully understand or accurately interpret highly technical or domain-specific content, especially if it involves recent developments post-training data cutoff.

Dependence on Prompt Quality: The quality and relevance of the output of Grok are highly dependent on the clarity and specificity of the input prompts provided by the user.


Citation

https://x.ai/blog/grok-2

Bring your own model
Register your own custom models in Labelbox to pre-label or enrich datasets
Llama 3.2

The Llama 3.2-Vision collection of multimodal large language models (LLMs) is a collection of pretrained and instruction-tuned image reasoning generative models in 11B and 90B sizes (text + images in / text out). The Llama 3.2-Vision instruction-tuned models are optimized for visual recognition, image reasoning, captioning, and answering general questions about an image. The models outperform many of the available open source and closed multimodal models on common industry benchmarks.


Intended Use

Llama 3.2's 11B and 90B models support image reasoning, enabling tasks like understanding charts/graphs, captioning images, and pinpointing objects based on language descriptions. For example, the models can answer questions about sales trends from a graph or trail details from a map. They bridge vision and language by extracting image details, understanding the scene, and generating descriptive captions to tell the story, making them powerful for both visual and textual reasoning tasks.

Llama 3.2's 1B and 3B models support multilingual text generation and on-device applications with strong privacy. Developers can create personalized, agentic apps where data stays local, enabling tasks like summarizing messages, extracting action items, and sending calendar invites for follow-up meetings.


Performance

Llama 3.2 vision models outperform competitors like Gemma 2.6B and Phi 3.5-mini in tasks like image recognition, instruction-following, and summarization.


Limitations

Context: May struggle with maintaining context over extended conversations, leading to inconsistencies in long interactions.

Medical images: Gemini Pro is not suitable for interpreting specialized medical images like CT scans and shouldn't be used for medical advice.

Bias: As it is trained on a large corpus of internet text, it may inadvertently reflect and perpetuate biases present in the training data.

Creativity Boundaries: While capable of creative outputs, it may not always meet specific creative standards or expectations for novel and nuanced content.

Ethical Concerns: Can be used to generate misleading information, offensive content, or be exploited for harmful purposes if not properly moderated.

Comprehension: Might not fully understand or accurately interpret highly technical or domain-specific content, especially if it involves recent developments post-training data cutoff.

Dependence on Prompt Quality: The quality and relevance of the output are highly dependent on the clarity and specificity of the input prompts provided by the user.


Citation

https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/

Claude 3.5 Sonnet

Claude 3.5 Sonnet sets new industry benchmarks for graduate-level reasoning (GPQA), undergraduate-level knowledge (MMLU), and coding proficiency (HumanEval). It shows marked improvement in grasping nuance, humor, and complex instructions, and is exceptional at writing high-quality content with a natural, relatable tone. Claude 3.5 Sonnet operates at twice the speed of Claude 3 Opus. This performance boost, combined with cost-effective pricing, makes Claude 3.5 Sonnet ideal for complex tasks such as context-sensitive customer support and orchestrating multi-step workflows.


Intended Use

  • Task automation: plan and execute complex actions across APIs and databases, interactive coding

  • R&D: research review, brainstorming and hypothesis generation, drug discovery

  • Strategy: advanced analysis of charts & graphs, financials and market trends, forecasting


Performance

  • Advanced Coding ability: In an internal evaluation by Anthropic, Claude 3.5 Sonnet solved 64% of problems, outperforming Claude 3 Opus which solved 38%. 

  • Multilingual Capabilities: Claude 3.5 Sonnet offers improved fluency in non-English languages such as Spanish and Japanese, enabling use cases like translation services and global content creation.

  • Vision and Image Processing: This model can process and analyze visual input, extracting insights from documents, processing web UI, generating image catalog metadata, and more.

  • Steerability and Ease of Use: Claude 3.5 Sonnet is designed to be easy to steer and better at following directions, giving you more control over model behavior and more predictable, higher-quality outputs.


Limitations

Here are some of the limitations we are aware of:

  • Medical images: Claude 3.5 is not suitable for interpreting specialized medical images like CT scans and shouldn't be used for medical advice.

  • Non-English: Claude 3.5 may not perform optimally when handling images with text of non-Latin alphabets, such as Japanese or Korean.

  • Big text: Users should enlarge text within the image to improve readability for Claude 3.5, but avoid cropping important details.

  • Rotation: Claude 3.5 may misinterpret rotated / upside-down text or images.

  • Visual elements: Claude 3.5 may struggle to understand graphs or text where colors or styles like solid, dashed, or dotted lines vary.

  • Spatial reasoning: Claude 3.5 struggles with tasks requiring precise spatial localization, such as identifying chess positions.

  • Hallucinations: the model can provide factually inaccurate information.

  • Image shape: Claude 3.5 struggles with panoramic and fisheye images.

  • Metadata and resizing: Claude 3.5 doesn't process original file names or metadata, and images are resized before analysis, affecting their original dimensions.

  • Counting: Claude 3.5 may give approximate counts for objects in images.

  • CAPTCHAS: For safety reasons, Claude 3.5 has a system to block the submission of CAPTCHAs.


Citation

https://docs.anthropic.com/claude/docs/models-overview

Claude 3.5 Haiku

Claude 3.5 Haiku, the next generation of Anthropic's fastest and most cost-effective model, is optimal for use cases where speed and affordability matter. It improves on its predecessor across every skill set.

Intended Use

Claude 3.5 Haiku offers fast speeds, improved instruction-following, and accurate tool use, making it ideal for user-facing products and personalized experiences. 

Key use cases include code completion, streamlining development workflows with quick, accurate code suggestions. It powers interactive chatbots for customer service, e-commerce, and education, handling high user interaction volumes. It excels at data extraction and labeling, processing large datasets in sectors like finance and healthcare. Additionally, it provides real-time content moderation for safe online environments.


Performance


Limitations

Context: May struggle with maintaining context over extended conversations, leading to inconsistencies in long interactions.

Bias: As it is trained on a large corpus of internet text, it may inadvertently reflect and perpetuate biases present in the training data.

Creativity Boundaries: While capable of creative outputs, it may not always meet specific creative standards or expectations for novel and nuanced content.

Ethical Concerns: Can be used to generate misleading information, offensive content, or be exploited for harmful purposes if not properly moderated.

Comprehension: Might not fully understand or accurately interpret highly technical or domain-specific content, especially if it involves recent developments post-training data cutoff.

Dependence on Prompt Quality: The quality and relevance of the output are highly dependent on the clarity and specificity of the input prompts provided by the user.


Citation

https://docs.anthropic.com/claude/docs/models-overview

Amazon Rekognition

Common objects detection and image classification model by AWS Rekognition.


Intended Use

Amazon Rekognition's object detection model is primarily used for detecting objects, scenes, activities, landmarks, faces, dominant colors, and image quality in images and videos. Some common use cases include:

  • Detect and label common objects in images 

  • Identify activities and scenes in visual content

  • Enable content moderation and filtering

  • Enhance image search capabilities

Learn more


Performance

Amazon Rekognition's object detection model has been reported to have high accuracy ind detecting objects and scenes in images and videos. Its capabilities include:

  • Can detect thousands of object categories

  • Provides bounding boxes for object locations

  • Assigns confidence scores to detections

Learn more


Limitations

the performance of the model may be limited by factors such as the quality and quantity of training data, the complexity of the image content, or the accuracy of the annotations. Additionally, Amazon Rekognition may have detection issues with black and white images and elderly people.

Other limitations include:

  • May struggle with small or partially obscured objects

  • Performance can vary based on image quality and lighting

  • Limited ability to understand context or relationships between objects

  • Cannot identify specific individuals (separate face recognition API for that)

  • May have biases in detection rates across different demographics


Citation

Amazon Rekognition documentation

OpenAI o1-mini

The o1 series of large language models are designed to perform advanced reasoning through reinforcement learning. These models engage in deep internal thought processes before delivering responses, enabling them to handle complex queries. o1-mini is a small specialized model optimized for STEM-related reasoning during its pretraining. Despite its reduced size, the o1-mini undergoes the same high-compute reinforcement learning pipeline as the larger o1 models, achieving comparable performance on many reasoning tasks while being significantly more cost-efficient.


Performance

Human raters compared o1-mini to GPT-4o on challenging, open-ended prompts across various domains to assess performance and accuracy in different types of tasks. As seen from the graph above, o-1 mini is optimized for STEM related tasks.


Limitations

  • Optimization for STEM Knowledge: o1-mini is not optimized for tasks requiring non-STEM factual knowledge, which may result in less accurate responses when handling queries outside of technical or scientific domains.

  • Domain Preference: o1-mini is preferred to GPT-4o in reasoning-heavy domains, but is not preferred to GPT-4o in language-focused domains, where linguistic nuance and fluency are more critical.

  • Context: May struggle with maintaining context over extended conversations, leading to inconsistencies in long interactions.

  • Bias: As it is trained on a large corpus of internet text, it may inadvertently reflect and perpetuate biases present in the training data.

  • Creativity Boundaries: While capable of creative outputs, it may not always meet specific creative standards or expectations for novel and nuanced content.

  • Ethical Concerns: Can be used to generate misleading information, offensive content, or be exploited for harmful purposes if not properly moderated.

  • Comprehension: Might not fully understand or accurately interpret highly technical or domain-specific content, especially if it involves recent developments post-training data cutoff.

  • Dependence on Prompt Quality: The quality and relevance of the output are highly dependent on the clarity and specificity of the input prompts provided by the user.


Citation

https://openai.com/index/openai-o1-mini-advancing-cost-efficient-reasoning/

OpenAI o1-preview

The o1 series of large language models are designed to perform advanced reasoning through reinforcement learning. These models engage in deep internal thought processes before delivering responses, enabling them to handle complex queries. o1-preview is designed to reason about hard problems using broad general knowledge about the world.


Performance

Human raters compared o1-preview to GPT-4o on challenging, open-ended prompts across various domains to assess performance and accuracy on different types of tasks. In these evaluations, o1-preview performs best on STEM-related tasks.


Limitations

  • Optimization for STEM Knowledge: o1-preview is not optimized for tasks requiring non-STEM factual knowledge, which may result in less accurate responses when handling queries outside of technical or scientific domains.

  • Domain Preference: o1-preview is preferred to GPT-4o in reasoning-heavy domains, but is not preferred to GPT-4o in language-focused domains, where linguistic nuance and fluency are more critical.

  • Context: May struggle with maintaining context over extended conversations, leading to inconsistencies in long interactions.

  • Bias: As it is trained on a large corpus of internet text, it may inadvertently reflect and perpetuate biases present in the training data.

  • Creativity Boundaries: While capable of creative outputs, it may not always meet specific creative standards or expectations for novel and nuanced content.

  • Ethical Concerns: Can be used to generate misleading information, offensive content, or be exploited for harmful purposes if not properly moderated.

  • Comprehension: Might not fully understand or accurately interpret highly technical or domain-specific content, especially if it involves recent developments post-training data cutoff.

  • Dependence on Prompt Quality: The quality and relevance of the output are highly dependent on the clarity and specificity of the input prompts provided by the user.


Citation

https://openai.com/index/introducing-openai-o1-preview/

Let foundation models do the work

We are bringing together the world's best AI models to help you perform data labeling and enrichment tasks across all supported data modalities. Achieve breakthroughs in data generation speed and costs. Re-focus human efforts on quality assurance.

Explore use cases: Image and Text

"We love our initial use of Model Foundry. Instead of going into unstructured text datasets blindly, we can now use pre-existing LLMs to pre-label data or pre-tag parts of it. Model Foundry serves as a co-pilot for training data." 


- Alexander Booth, Assistant Director of the World Series Champions, Texas Rangers

Pre-label data in a few clicks

AI builders can now enrich datasets and pre-label data in minutes without code using foundation models offered by leading providers or open-source alternatives. Model-assisted labeling using Foundry accelerates data labeling tasks on images, text, and documents at a fraction of the typical cost and turnaround time.

Explore all models
Focus human expertise on where it matters most

The days of manually labeling from scratch are long gone. Foundation models can outperform crowd-sourced labeling on various tasks with high accuracy – allowing you to focus valuable human efforts on critical review. Combine model-assisted labeling using foundation models with human-in-the-loop review to accelerate your labeling operations and build intelligent AI faster than ever. 

Automate data tasks with Foundry Apps

Create custom Apps with Foundry based on your team’s needs. Deploy a model configuration from Foundry to an App for automated data management or to build custom intelligent applications.

Explore datasets using AI insights

Curate and explore datasets faster than ever. Query your data using predictions generated from the latest and greatest foundation models. Supercharge your data enrichment process and accelerate curation efforts across image and text data modalities.

Learn more
Accelerate custom model development

Data enriched by Foundry can be used to tailor foundation models to your specific use cases:

  1. Fine-tune leading foundation models: Optimize foundation models for your most valuable AI use cases. Enhance performance for specialized tasks in just days instead of months.  

  2. Distill knowledge into smaller models: Transfer the capabilities of large foundation models into lightweight models purpose-built for your applications.

Learn more

Frequently asked questions

  • Who gets access to the Model Foundry?

    To use Labelbox Foundry, you will need to be on our Starter or Enterprise plan. You will be billed monthly for the pass-through compute costs associated with foundation models run on your data in Labelbox.


    Learn more about how to upgrade to a pay-as-you-go Starter plan or about our Labelbox Unit pricing.


    Contact us to receive more detailed instructions on how to create a Labelbox Starter or Enterprise account or upgrade to our Starter tier.

  • How is Foundry priced?

    You only pay for the models you use to pre-label or enrich your data. Foundry pricing will be calculated and billed monthly based on the following:


    1. Inference cost – Labelbox will charge customers for inference costs for all models hosted by Labelbox. Inference costs will be bespoke to each model available in Foundry. The inference price is determined based on vendor pricing or our own compute costs – these are published publicly on our website as well as inside the product.

    2. Labelbox's platform cost – each asset with predictions generated by Foundry will accrue LBUs (Labelbox Units).


    Learn more about the pricing of the Foundry add-on for Labelbox Model on our pricing page.

  • What kind of foundation models are available?

    Labelbox Foundry currently supports a variety of tasks for computer vision and natural language processing. This includes powerful open-source and third-party models across text generation, object detection, translation, text classification, image segmentation, and more. For a full list of available models, please visit this page.

  • How do I get started with Foundry?

    If you aren’t currently a Labelbox user or are on our Free plan, you’ll need to: 


    1. Create a Labelbox account 

    2. Upgrade your account to our Starter plan

    In the Billing tab, locate “Starter” in the All Plans list and select “Switch to Plan.” The credit card on file will only be charged when you exceed your existing free 10,000 LBU. 

    Upgrades take effect immediately so you'll have access to Foundry right away on the Starter plan. After upgrading, you’ll see the option to activate Foundry for your organization. 

    3. Follow the steps below on how to get started generating model predictions with Foundry.


    If you are currently a Labelbox customer on our Starter or Enterprise plans, you automatically have the ability to opt in to using Foundry:


    1. After selecting data in Catalog and hitting “Predict with Foundry,” you’ll be able to select a foundation model and submit a model run to generate predictions. You can learn more about the Foundry workflow and see it in action in this guide.

    2. When submitting a model run, you’ll see the option to activate Foundry for your organization’s Admins. You’ll need to agree to Labelbox’s Model Foundry add-on service terms and confirm you understand the associated compute fees.  

  • Does Labelbox sell or use customer data via Foundry?

    Labelbox does not sell customer data and does not use it in any way other than to provide you the services.

  • Do the third-party model providers use customer data to train their models?

    Labelbox's third-party model providers will not use your data to train their models, as governed by specific 'opt-out' terms between Labelbox and our third-party model providers.

  • Can I use the closed source models available through Foundry for pre-labeling?
    • OpenAI: Labelbox believes that you have the right to use OpenAI’s closed-source models (e.g., GPT-4, GPT-3.5, etc.) for pre-labeling through Foundry if you comply with applicable laws and otherwise follow OpenAI’s usage policies. If you wish to use your own instance of OpenAI, you can do so by setting up a custom model on Foundry.

    • Anthropic: Labelbox believes that you have the right to use Anthropic’s Claude for pre-labeling through Foundry if you comply with applicable laws and otherwise follow Anthropic’s Acceptable Use Policy. If you wish to use your own instance of Anthropic, you can do so by setting up a custom model on Foundry. 

    • Google: Labelbox believes that you have the right to use Google’s Gemini (or other closed-source models such as Imagen, PaLM, etc.) for pre-labeling through Foundry if you comply with applicable laws and otherwise follow the Gemini API Additional Terms of Service, including without limitation the Generative AI Prohibited Use Policy. If you wish to use your own instance of Google, you can do so by setting up a custom model on Foundry.