Labelbox provides code to leverage Vertex AI and other Google Cloud Services for running model training jobs. You can easily customize a model training pipeline based on the Labelbox reference implementation. You can run ETL jobs, train, deploy, and track model performance all from a single service.
The code deploys a service called the Coordinator to Google Cloud. It exposes a REST API for launching various pipelines. The Coordinator only has to be deployed once and can then be controlled via the Labelbox web app (WIP). The custom model training pipeline is designed to be easily extended with custom jobs and pipelines.
We support the following models with no additional configuration required:
We've compiled key steps and requirements needed to successfully leverage Vertex AI and other Google Cloud Services for running model training jobs.
Watch the following short video tutorials on how to set up the integration, and follow along with more detailed instructions in the GitHub repo below:
Step 1: Create a service account in the Google Cloud UI. This account must have the following permissions:
Step 2: Download the private key for the service account (a JSON file). It can live anywhere on your computer; point GOOGLE_APPLICATION_CREDENTIALS at its path:
export GOOGLE_APPLICATION_CREDENTIALS=~/.config/gcloud/model-training-credentials.json
Step 3: Make sure Docker and Docker Compose are installed.
Step 4: Make sure the gcloud CLI is installed and configured for the proper service account.
curl https://sdk.cloud.google.com | bash
source ~/.bash_profile   # or ~/.zshrc or ~/.bashrc, depending on your shell
gcloud auth activate-service-account SERVICE_ACCOUNT_ID@PROJECT_ID.iam.gserviceaccount.com --key-file=$GOOGLE_APPLICATION_CREDENTIALS
gcloud config set project PROJECT_ID
Step 5: Connect Docker to GCR by running:
gcloud auth configure-docker
Step 1: Create a .env file to keep track of the following env vars (copy the example env file in the repo to get started):
DEPLOYMENT_NAME
GCS_BUCKET
GOOGLE_PROJECT
SERVICE_SECRET
GOOGLE_APPLICATION_CREDENTIALS
GOOGLE_SERVICE_ACCOUNT
LABELBOX_API_KEY
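As a sketch, a filled-in .env might look like the following. Every value is a placeholder to replace with your own, and whether the deploy scripts expect the `export` prefix should be checked against the example file in the repo. The file is created inline here only so the snippet is self-contained; normally you would edit .env by hand:

```shell
# Hypothetical .env contents -- every value below is a placeholder.
cat > .env <<'EOF'
export DEPLOYMENT_NAME=my-coordinator
export GCS_BUCKET=my-model-training-bucket
export GOOGLE_PROJECT=my-gcp-project
export SERVICE_SECRET=some-long-random-string
export GOOGLE_APPLICATION_CREDENTIALS=$HOME/.config/gcloud/model-training-credentials.json
export GOOGLE_SERVICE_ACCOUNT=SERVICE_ACCOUNT_ID@PROJECT_ID.iam.gserviceaccount.com
export LABELBOX_API_KEY=your-labelbox-api-key
EOF

# Sourcing runs the file in the current shell, so the variables stay set
# for the deploy scripts you run afterwards.
. ./.env
echo "$DEPLOYMENT_NAME"   # → my-coordinator
```

Because `source` (or the POSIX `.`) executes the file in the current shell rather than a subshell, the exported variables remain available to the deployment scripts in the next steps.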
Step 2: Once the .env file has the correct values, load the env vars by running:
source .env
Step 3: Deploy the service:
./deployment/deploy.sh
./run.sh
Step 4: Test that the service is running. Use 0.0.0.0 as the host for a local deployment; for a remote deployment, the IP is printed to the console when you run the deployment script.
Step 1: Visit the Labelbox Models tab
Step 2: Create a model and a model run with a flat, single-type ontology (bounding box, NER, or classification)
Step 3: Navigate to 'Settings' in your model run and click 'Model training'.
Step 4: Save the inputted values for IP and your chosen service secret.
Step 5: Click 'Train model' and select a 'job type'
Once you select the desired ML task, Labelbox trains your model and pulls the inferences back in to provide model metrics, allowing you to quickly iterate on your model.
For more detailed troubleshooting instructions, refer to the GitHub repo or reach out to our support team. You can also refer to our documentation for an overview of the model training integration.