Labelbox•January 31, 2024
The Labelbox Labs team is excited to share a preview of multimodal data labeling in Labelbox, codenamed "Label Blocks". As foundation models become increasingly multimodal, AI developers want to capture human feedback at higher levels of abstraction, closer to how humans make decisions in their respective jobs.
For example, a medical practitioner may want to look at patient history, PDF documents and medical images at the same time to decide if the patient is qualified for a clinical trial.
With Labelbox multimodal support, you can label every data modality within a task at the most granular level while also making a global judgment about the task.
Check out the brief demo of Label Blocks:
If you're interested in using Label Blocks for multimodal labeling, the Labelbox Labs team would love to get your feedback.
You can sign up for preview access here and let us know about your multimodal use case.