The race to mimic and build competitors to OpenAI's GPT-3.5 has energized interest in model compression and quantization techniques.
Knowledge distillation, also known as model distillation, is one technique that has grown in popularity and importance: it enables small teams to leverage foundation models to develop small (but mighty) custom models for intelligent applications.
In “A pragmatic introduction to model distillation for AI developers”, we illustrated some of the conceptual foundations for how model distillation works as well as why we even need smaller models.
We also provided an in-depth guide with a worked example in the second part of our series, “End-to-end workflow with model distillation for computer vision”.
Now we turn our attention to demonstrating the flexibility and power of model distillation in another domain and use case: one where efficiency demands using a foundation model to supervise the training of a smaller model.
In this tutorial we’ll demonstrate an end-to-end workflow for natural language processing, using model distillation to fine-tune a BERT model with labels created in Model Foundry using Google Gemini.
We’ll show how easy it is to go from raw data to cutting-edge models, customized to your use case, using a sentiment dataset (additional public datasets can be found here or on sites like Kaggle or HuggingFace).
In less than 30 minutes you'll learn how to:
- Connect a raw text dataset to Labelbox Catalog
- Use Google Gemini in Model Foundry to generate sentiment labels
- Review and correct those labels in Labelbox Annotate
- Fine-tune a BERT student model in Colab using the Labelbox SDK
- Evaluate the fine-tuned model's performance in Labelbox Model
At the end of the tutorial we’ll also discuss advanced considerations in scaling your models up and out, such as automating data ingestion and labeling, and resources for incorporating RLHF into your workflows.
The walkthrough below covers Labelbox’s platform across Catalog, Annotate, and Model. We recommend that you create a free Labelbox account to best follow along with this tutorial. You’ll also need to create API keys for accessing the SDK.
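To follow along in code, the only setup you need is the SDK and that API key. A minimal sketch (the key value is a placeholder):

```python
# Minimal SDK setup sketch: `pip install labelbox` first, then authenticate
# with the API key created in the Labelbox UI.
import labelbox as lb

API_KEY = "YOUR_API_KEY"  # placeholder; keep real keys in env vars or a secrets manager
client = lb.Client(api_key=API_KEY)
```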
Notebook: Text Bert Model Distillation
In our prior post on model distillation concepts, we discussed the different model distillation patterns, based on the following criteria:
[Figure: "Benefits of Using Model Distillation", from "A pragmatic introduction to model distillation for AI developers", Fig. 3.1]
In this tutorial we’ll be demonstrating the most popular and easiest pattern to get started with: offline, response-based model (or knowledge) distillation.
The teacher model we’ll be using to produce the responses is Google Gemini and the student model is BERT (distilbert-base-uncased).
As you’ll see, we could have chosen any combination of teacher or student models, because the offline, response-based pattern of model distillation is incredibly flexible.
When implementing this process for your own use case, it's important to understand the relative strengths and weaknesses of each model and match them to your requirements, whether that's detecting and removing PII for GDPR compliance or flagging unsavory content.
Labelbox is the leading data-centric AI platform, providing end-to-end tools for curating, transforming, annotating, evaluating, and orchestrating unstructured data for data science, machine learning, and generative AI.
The Labelbox platform supports the development of intelligent applications using the model distillation and fine-tuning workflow, enabling AI developers to easily:
Before beginning the tutorial:
- Create a free Labelbox account
- Create an API key for accessing the SDK
- Open the companion Colab notebook linked above
Once you’re able to see your dataset in Labelbox Catalog, you’ll be able to do the following:
For additional details on how to use Catalog to enable data selection for downstream data-centric workflows (such as data labeling, model training, model evaluation, error analysis, and active learning), check out our documentation.
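If you prefer to connect your data programmatically rather than through the UI, here's a minimal sketch using the Python SDK. The dataset name, sample texts, and global keys are all illustrative; hosted .txt URLs can be used in row_data as an alternative to inlining raw text.

```python
import labelbox as lb

client = lb.Client(api_key="YOUR_API_KEY")

# Create a dataset in Catalog ("sentiment-tutorial" is an illustrative name).
dataset = client.create_dataset(name="sentiment-tutorial")

# Two sample rows; global keys give each row a stable, user-defined ID
# that we can refer back to in later steps.
texts = ["i didnt feel humiliated", "i am feeling grouchy"]
task = dataset.create_data_rows(
    [{"row_data": t, "global_key": f"sentiment-{i}"} for i, t in enumerate(texts)]
)
task.wait_till_done()  # block until the upload finishes
```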
The first step of model distillation is to identify an appropriate teacher model, which will be used to produce responses that, when combined with the original text, will serve as the fine-tuning dataset for the student model.
Response-based model distillation is powerful because it works even when access to the original model weights is limited (or the model is so large that downloading a copy would be impractical). It also doesn't require you to have trained the teacher model yourself; the model only needs to be pre-trained.
Labelbox allows you to pick any of the currently hosted, state-of-the-art models (or upload your own custom model) to use as the teacher model.
For now, let's get started with preparing the text we'll be labeling, or generating predictions with, using Google Gemini. The resulting text-label pairs will be used to fine-tune BERT.
Steps:
When developing ML-based applications, developers need to quickly and iteratively prepare and version training data, launch model experiments, and use the performance metrics to further refine the input data sources.
The performance of a model can vary wildly depending on the data used, the quality of the annotations, and even the model architecture itself. A necessary requirement for replicability is being able to see the exact version of all the artifacts used or generated as a result of an experiment.
Labelbox snapshots the experiment, the data artifacts, and the trained model as a saved process known as a model run.
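Foundry creates model runs for you from the UI, but if you're scripting experiments, recent versions of the SDK let you do the equivalent bookkeeping yourself. A sketch, with placeholder names and IDs:

```python
# Create an experiment ("model") and a versioned model run under it.
# "<ONTOLOGY_ID>" is a placeholder for the ontology created in the next step.
model = client.create_model(name="gemini-teacher-experiments",
                            ontology_id="<ONTOLOGY_ID>")
model_run = model.create_model_run(name="run-001")

# Attaching data rows snapshots the exact data version used by this run.
model_run.upsert_data_rows(global_keys=["sentiment-0", "sentiment-1"])
```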
A model run's configuration includes the types of items the model is supposed to identify and label, known as an ontology.
Each model has an ontology defined to describe what it should predict from the data. The available options vary depending on the selected model and your scenario.
For example, you can edit a model ontology to ignore specific features or map the model ontology to features in your own (pre-existing) ontology.
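For reference, here's a sketch of how a matching text-classification ontology could be defined via the SDK, using the six sentiment categories this tutorial targets (names are illustrative):

```python
import labelbox as lb

# A single radio classification whose options mirror the teacher's categories.
ontology_builder = lb.OntologyBuilder(classifications=[
    lb.Classification(
        class_type=lb.Classification.Type.RADIO,
        name="sentiment",
        options=[lb.Option(value=v) for v in
                 ("sadness", "joy", "love", "anger", "fear", "surprise")],
    )
])
ontology = client.create_ontology(
    "sentiment-ontology", ontology_builder.asdict(), media_type=lb.MediaType.Text
)
```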
Each model also has its own set of settings, which you can find under Advanced model settings.
Steps:
This prompt is designed to elicit a response from the model containing exactly one of the following: sadness, joy, love, anger, fear, surprise.
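The exact prompt is configured in the Foundry UI; an illustrative version (not the tutorial's verbatim prompt) might look like this, constraining the output to the ontology's six labels so responses map cleanly onto classifications:

```python
# Illustrative Gemini prompt template; {text} is filled in per data row.
PROMPT = """Classify the emotion expressed in the text below.
Respond with exactly one word from this list:
sadness, joy, love, anger, fear, surprise.

Text: {text}
Emotion:"""
```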
While this step is optional, generating preview predictions allows you to confidently confirm your configuration settings.
Because each model run is submitted with a unique name, it's easy to distinguish between subsequent model runs.
When the model run completes, you can:
- Use the prediction results to pre-label your data for a project in Labelbox Annotate
Although fine-tuning a foundation model requires less data than pre-training a large foundation model from scratch, the data (specifically the labels) need to be high-quality.
Even big, powerful foundation models make mistakes or miss edge cases.
You might also find that there are additional categories that the parent model didn’t identify correctly because the ontology was incomplete.
Once a parent model like Gemini has been used for the initial model-assisted labeling run, those predictions can then be sent to a project, a container where all your labeling processes happen.
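In the UI this is a button click; if you're automating it, one way is a model-assisted labeling (MAL) import via the SDK. A sketch, with a placeholder project ID and a single hard-coded answer standing in for Gemini's output:

```python
import labelbox as lb
import labelbox.types as lb_types

# Wrap each teacher prediction as a classification on its data row.
labels = [
    lb_types.Label(
        data=lb_types.TextData(global_key="sentiment-0"),
        annotations=[lb_types.ClassificationAnnotation(
            name="sentiment",
            value=lb_types.Radio(answer=lb_types.ClassificationAnswer(name="joy")),
        )],
    )
]

# Import the predictions as editable pre-labels in the project.
upload = lb.MALPredictionImport.create_from_objects(
    client=client, project_id="<PROJECT_ID>",
    name="gemini-prelabels", predictions=labels,
)
upload.wait_until_done()
```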
We've now shown the first half of the model distillation and fine-tuning workflow.
The next step is to use the generated labels, along with the original texts, to fine-tune a student model in Colab.
Note: You’ll now need the API keys from earlier to follow along with the Colab notebook.
For brevity, we’ve omitted the surrounding code samples but you can copy or run the corresponding blocks in the provided notebook.
Check out our documentation to find out all the ways you can automate the model lifecycle (including labeling) using our SDK.
Steps:
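The notebook walks through these steps; as a sketch of the hand-off, one way to pull the model run's rows and labels into memory is the SDK's export, reusing the model_run handle from earlier (the full export schema is covered in the docs):

```python
# Export the model run's data rows plus labels/predictions as JSON rows.
export_task = model_run.export_v2(params={"predictions": True})
export_task.wait_till_done()
rows = export_task.result  # list of dicts: data row content + annotations
```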
There's additional processing that needs to happen, which we walk through below.
Steps:
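A condensed sketch of the kind of preprocessing involved: map the six label names onto integer class IDs and tokenize the raw text for BERT. The example["text"] and example["sentiment"] field names are assumptions about how the exported rows were parsed:

```python
from transformers import AutoTokenizer

LABELS = ["sadness", "joy", "love", "anger", "fear", "surprise"]
label2id = {name: i for i, name in enumerate(LABELS)}

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def preprocess(example):
    # Tokenize the raw text and attach the integer class ID.
    enc = tokenizer(example["text"], truncation=True,
                    padding="max_length", max_length=128)
    enc["label"] = label2id[example["sentiment"]]
    return enc
```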
Steps:
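The fine-tuning itself can be as small as the sketch below. The hyperparameters are illustrative, and train_ds / eval_ds are assumed to be tokenized datasets produced by the preprocessing step above:

```python
from transformers import (AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# Six-way classification head on top of DistilBERT; passing the label maps
# makes downstream predictions human-readable.
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=len(LABELS),
    id2label={i: n for i, n in enumerate(LABELS)},
    label2id=label2id,
)

args = TrainingArguments(
    output_dir="bert-sentiment-finetuned",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)
trainer = Trainer(model=model, args=args,
                  train_dataset=train_ds, eval_dataset=eval_ds)
trainer.train()
trainer.save_model("bert-sentiment-finetuned")
```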
Oftentimes the initial training or fine-tuning step isn’t the final stop on the journey of developing a model.
One of the biggest differences between training models in the classroom and in the real world is how much control you have over the quality of your data, and consequently the quality of the model produced.
As we mentioned earlier, developers can upload predictions and use the Model product to diagnose performance issues with models and compare them across multiple experiments.
Doing so automatically populates model metrics that make it easy to evaluate the model’s performance.
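A sketch of that upload via the SDK, reusing the model run from earlier (the single hard-coded prediction stands in for the student model's real outputs):

```python
import labelbox.types as lb_types

# Attach the student's predictions to the model run; Labelbox then computes
# metrics against the ground-truth labels automatically.
pred_labels = [
    lb_types.Label(
        data=lb_types.TextData(global_key="sentiment-0"),
        annotations=[lb_types.ClassificationAnnotation(
            name="sentiment",
            value=lb_types.Radio(answer=lb_types.ClassificationAnswer(name="joy")),
        )],
    )
]
model_run.add_predictions(name="bert-predictions", predictions=pred_labels)
```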
Steps:
There's no single metric to rule them all when evaluating how your fine-tuned model performs.
Both qualitative and quantitative measures must be considered, combined with sampling and manual review.
With that being said, Model offers a number of the most common metrics out of the box. With the 'Metrics view', users can drill into crucial model metrics, such as confusion matrix, precision, recall, F1 score, false positives, and more, to surface model errors.
Model metrics are auto-populated and interactive, which means you can click on any chart or metric to immediately open up the gallery view of the model run and see corresponding examples, as well as visually compare model predictions between multiple model runs.
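If you also want to sanity-check these numbers inside the notebook, scikit-learn reproduces the same metrics locally. A sketch, assuming y_true and y_pred hold the integer class IDs of the ground-truth labels and the BERT predictions, and LABELS is the list defined earlier:

```python
from sklearn.metrics import classification_report, confusion_matrix

print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred, target_names=LABELS))
```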
Steps:
How does our fine-tuned model perform?
Let's manually inspect a few examples of predictions from the fine-tuned BERT model.
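A quick way to do that in the notebook is a transformers pipeline over the checkpoint saved above (the example sentences are made up):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint and spot-check a few predictions by hand.
clf = pipeline("text-classification", model="bert-sentiment-finetuned")
for text in ["i feel like im surrounded by people who care about me",
             "i was ready to throw the laptop across the room"]:
    print(text, "->", clf(text)[0])  # e.g. {'label': 'joy', 'score': ...}
```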
In this step-by-step walkthrough, we've shown how anyone with a text-based dataset can leverage an LLM to label data and then fine-tune and analyze a smaller (but mighty) custom model.
Additional considerations users should address for scaling similar projects include:
- Automating data ingestion and labeling
- Incorporating RLHF into your workflows
In this tutorial we demonstrated an end-to-end workflow for natural language processing, using model distillation to fine-tune a BERT model with labels created in Model Foundry using Google Gemini.
Hopefully you were able to see how easy it is to go from raw data to cutting-edge custom models in less than 30 minutes.
You learned how the Labelbox platform enables model distillation by allowing developers to:
- Curate and explore raw text data in Catalog
- Generate labels with a foundation model like Google Gemini in Model Foundry
- Review and correct those labels in Annotate
- Fine-tune and evaluate a custom BERT model using the SDK and Model
If you’re interested in learning more about model distillation, check out the previous posts in this series: “A pragmatic introduction to model distillation for AI developers”, “End-to-end workflow with model distillation for computer vision”.
Looking to implement a production-ready model distillation and fine-tuning workflow in your organization but not sure how to get started leveraging your unstructured data?
Ask our community or reach out to our solutions engineers!