Segmentation

Updated 3 weeks ago by Alex Cota

Overview

In the Segmentation tool, each annotated pixel in an image belongs to a single class. It is typically used to label images for applications that require pixel-level accuracy. The output is a mask that outlines the shape of each labeled object in the image.

Configure the Segmentation tool

During project setup, you can set up your ontology by adding all of the objects and classifications needed for your project.

Setup steps
  1. Create a project.
  2. Select "Image editor" as your label editor.
  3. Click "Add Object" and name your object.
  4. Select "Segmentation" as your labeling tool.
  5. [OPTIONAL] Configure nested classifications.
  6. Click "Confirm".
  7. Click "Complete setup".

Pen tool

The pen drawing tool is designed to be the fastest way to outline objects. It allows you to draw freehand as well as straight lines. You can also use the pen tool to erase: click the (-) icon in the top bar. Tip: Hold Alt on your keyboard to temporarily switch to erase mode while you draw.

Superpixel (beta)

The Superpixel tool will only appear in the toolbar when you are using the Segmentation tool.

For segmentation features with complex boundaries, using the Superpixel tool first may be more efficient than using the pen tool alone. The Superpixel tool works by calculating segment clusters of similarly colored pixels in the image.

The top toolbar contains a slider that allows you to increase or reduce the size of the segment clusters. The number corresponds to segment size: higher values produce larger segments and lower values produce smaller segments.

After you have selected the optimal segment cluster size, you can choose an object class and use your cursor to select and classify each segment to be included in that segmentation feature. You can then adjust the boundaries using the pen and eraser tools.

Calculation time for Superpixel segments increases with image size. We advise using images that are smaller than 4000 by 4000 pixels.
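Since Superpixel calculation time grows with image size, it can help to downscale oversized images before uploading. A minimal sketch of the arithmetic, using a hypothetical helper (`downscale_dims` is not part of Labelbox) that computes dimensions fitting within the advised 4000 x 4000 limit while preserving aspect ratio:

```python
def downscale_dims(width, height, limit=4000):
    """Return (width, height) scaled to fit within limit x limit.

    Images already within the limit are returned unchanged; larger
    images are scaled down uniformly so the longest side equals limit.
    """
    if width <= limit and height <= limit:
        return width, height
    scale = limit / max(width, height)
    return int(width * scale), int(height * scale)

print(downscale_dims(8000, 6000))  # (4000, 3000)
print(downscale_dims(1000, 2000))  # (1000, 2000) -- already small enough
```

You can then resize the image to the returned dimensions with any image library before uploading it to your dataset.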

Draw over existing objects

This functionality used to be named "Draw to back". "Draw over existing objects" is now on by default.

With this tool, you can overwrite existing segmentation features. When it is enabled, a new segmentation feature drawn over existing features overwrites the previously classified pixels it covers. When it is disabled, a new segmentation feature drawn over existing features is drawn behind them, so only unlabeled pixels are filled.

This tool was designed to significantly speed up labeling, since you no longer need to intricately outline the borders of other objects.
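The on/off behavior described above can be illustrated with a small sketch over a per-pixel class array (this is a simplified model of the editor's behavior, not Labelbox code; `paint`, `canvas`, and the class IDs are all illustrative):

```python
def paint(canvas, stroke_pixels, class_id, draw_over=True):
    """Apply a new segmentation stroke to a per-pixel class canvas.

    draw_over=True  (the default): the stroke overwrites any existing
                    class at those pixels.
    draw_over=False: the stroke only fills pixels that are still
                    unlabeled (0), i.e. it is drawn behind existing features.
    """
    out = list(canvas)
    for i in stroke_pixels:
        if draw_over or out[i] == 0:
            out[i] = class_id
    return out

canvas = [0, 1, 1, 0, 0]  # class 1 already occupies pixels 1-2
print(paint(canvas, [1, 2, 3], 2))                   # [0, 2, 2, 2, 0]
print(paint(canvas, [1, 2, 3], 2, draw_over=False))  # [0, 1, 1, 2, 0]
```

In the first call the new class 2 overwrites class 1; in the second, class 1 keeps its pixels and class 2 only claims the previously unlabeled pixel.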

Creating instances

From the labeling interface, you can use the same class for more than one annotation. For example, if there are 5 fish in an image and you would like to assign the "Fish" class to all five, you can create multiple instances of the "Fish" class.

Follow these steps to create multiple instances of the same object:

  1. Select a class and draw the object.
  2. Select the same class again.
  3. Draw the next instance of the object.

Nested classifications

If you have configured the interface to have nested classifications for any of your objects, the labeler will be presented with classification questions after annotating the object.

Label format

When you export your labels as a JSON file, the file will be structured as follows. The instanceURI field is unique to JSON files exported from the Segmentation tool; its value is a URL to the instance's image mask.

"Label": {
"objects": [{
"featureId": "cjxtm2d32i9aa07940tifrpuh",
"schemaId": "cjxtjkpjai8t80846iwqaa1d8",
"title": "Orange Fish",
"value": "orange_fish",
"color": "#3F51B5",
"instanceURI": "https://api.labelbox.com/masks/cjxtj...",
"classifications": [{
"featureId": "cjxtm2e8zheyt0863sds45xyx",
"schemaId": "cjxtjkphehbnj0848wknmg44u",
"title": "Is the fish blurry?",
"value": "is_the_fish_blurry?",
"answer": {
"featureId": "cjxtm2ebgi7tx0944gr8jx0lp",
"schemaId": "cjxtjkpfwh4y90721uodxps2b",
"title": "Yes",
"value": "yes"
}
}]
}, {
"featureId": "cjxtm2kf0i9b207944z3gypsn",
"schemaId": "cjxtjkpjai8t80846iwqaa1d8",
"title": "Orange Fish",
"value": "orange_fish",
"color": "#3F51B5",
"instanceURI": "https://api.labelbox.com/masks/cjxtjbwiah..."
}, {
"featureId": "cjxtm2spbh8xx0721s18nwqq0",
"schemaId": "cjxtjkpjai8t90846yzpbpo7t",
"title": "Blue Fish",
"value": "blue_fish",
"color": "#F4511E",
"instanceURI": "https://api.labelbox.com/masks/cjxtjbwia..."
}, {
"featureId": "cjxtm3i3ci9cx07940e5t7if8",
"schemaId": "cjxtk7z5qi5s90794v7wvseu6",
"title": "Water",
"value": "water",
"color": "#EF6C00",
"instanceURI": "https://api.labelbox.com/masks/cjxtjbwia..."
}],
"classifications": []
}
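Since each instance carries its own instanceURI, a common first step when consuming this export is grouping the mask URLs by class. A minimal sketch using only the standard library (the record below is a trimmed, illustrative version of the export shape above, and `masks_by_class` is a hypothetical helper, not part of the Labelbox SDK):

```python
import json

# Trimmed export record in the shape shown above (URLs are illustrative)
export_json = """
{
  "Label": {
    "objects": [
      {"title": "Orange Fish", "value": "orange_fish",
       "instanceURI": "https://api.labelbox.com/masks/aaa"},
      {"title": "Orange Fish", "value": "orange_fish",
       "instanceURI": "https://api.labelbox.com/masks/bbb"},
      {"title": "Blue Fish", "value": "blue_fish",
       "instanceURI": "https://api.labelbox.com/masks/ccc"}
    ],
    "classifications": []
  }
}
"""

def masks_by_class(record):
    """Group instance mask URLs by object class value."""
    grouped = {}
    for obj in record["Label"]["objects"]:
        grouped.setdefault(obj["value"], []).append(obj["instanceURI"])
    return grouped

record = json.loads(export_json)
grouped = masks_by_class(record)
print(grouped["orange_fish"])  # two URLs, one per "Orange Fish" instance
print(grouped["blue_fish"])    # one URL
```

Each URL can then be fetched to download the corresponding binary mask image.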

Convert mask to polygon coordinates

If you need to convert your segmentation masks to polygon coordinates, you can use this script. You will need to pip install labelbox first; see the Getting started page in our Python docs for installation instructions.
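To illustrate the idea behind mask-to-coordinate conversion, here is a much simplified, dependency-free sketch that collects the boundary pixels of a binary mask (this is not the linked script: it returns unordered boundary coordinates rather than a traced polygon, and the function name and mask are illustrative):

```python
def mask_boundary(mask):
    """Return (x, y) coordinates of boundary pixels in a binary mask.

    A pixel is on the boundary if it is 1 and at least one of its
    4-connected neighbors is 0 or lies outside the mask.
    """
    h, w = len(mask), len(mask[0])
    boundary = []
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            neighbors = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            if any(ny < 0 or ny >= h or nx < 0 or nx >= w or not mask[ny][nx]
                   for ny, nx in neighbors):
                boundary.append((x, y))
    return boundary

# 4x4 mask with a 2x2 filled square: every filled pixel touches background
mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
print(mask_boundary(mask))  # [(1, 1), (2, 1), (1, 2), (2, 2)]
```

For real masks, a contour-tracing routine (such as the one in the linked script, or OpenCV's findContours) is what produces an ordered polygon from these boundary pixels.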
