Benchmarks (Golden Standard)
Benchmarks (also known as Golden Standard) is a quality assurance tool for training data. Training data quality is a measure of how accurate and consistent the labels are. Benchmarks works by interspersing data for which a benchmark label already exists into each labeler's queue. Each submitted label is compared against its corresponding benchmark label, and an accuracy score between 0 and 100 percent is calculated.
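The scoring step can be sketched as follows. This is a minimal illustration assuming set-style classification labels and intersection-over-union (Jaccard) agreement; the platform's actual scoring formula for each label type is not specified here, and the function name is hypothetical.

```python
from typing import FrozenSet

def benchmark_score(benchmark: FrozenSet[str], submitted: FrozenSet[str]) -> float:
    """Score a submitted label set against its benchmark as a percentage.

    Uses intersection-over-union agreement as an illustrative metric;
    the platform's actual formula may differ per label type.
    """
    if not benchmark and not submitted:
        return 100.0  # both empty: perfect agreement
    overlap = len(benchmark & submitted)
    union = len(benchmark | submitted)
    return 100.0 * overlap / union

# A submission matching two of three benchmark classes scores ~66.67.
score = benchmark_score(frozenset({"car", "tree", "road"}),
                        frozenset({"car", "tree"}))
```

Any agreement metric that maps a (benchmark, submission) pair to the 0-100 range would fit this slot.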
Use Benchmarks to ensure the labeling team is labeling data accurately, both initially and throughout the lifecycle of the training data. Benchmarks are created and managed from the Labels > Benchmarks view. Benchmark results are shown in the Labels > Benchmarks view, as well as for each individual team member in the Performance view.
Setting up Benchmarks
- Start with an existing project or create a new project.
- Configure the project to use Benchmarks: go to Settings > Quality and select Benchmark.
- Create a benchmark from an existing label by clicking the star icon on the top bar.
- With a benchmark defined, the underlying source datum is served to every labeler, and each label submitted for that datum is scored against the benchmark label. View the average score for each benchmark from Labels > Benchmarks.
- View the detailed results of a Benchmark by clicking on View Results.
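The average score shown for a benchmark is simply the mean of the scores each labeler earned on that datum. A small sketch, with hypothetical labeler names and scores:

```python
from statistics import mean

# Hypothetical per-labeler scores (0-100) earned on one benchmark datum.
scores_by_labeler = {"alice": 100.0, "bob": 80.0, "carol": 90.0}

# The mean is what a per-benchmark average like the one in
# Labels > Benchmarks would report for these inputs.
average = mean(scores_by_labeler.values())
```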
How Benchmarks Test Data is Distributed to Labelers
When a team member starts labeling in a project for the first time, they are served five benchmark test data items. After that, benchmark test data are served in a semi-random but spaced-apart manner.
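The distribution described above can be sketched as a queue-building routine: a fixed number of benchmarks up front, then one benchmark inserted after a randomly jittered gap of regular items. The function, parameter names, and gap values below are illustrative assumptions, not the platform's actual algorithm.

```python
import random

def build_queue(data, benchmarks, initial=5, spacing=8, jitter=3, seed=None):
    """Interleave benchmark test data into a labeling queue.

    Serves `initial` benchmarks first, then inserts one benchmark every
    `spacing` +/- `jitter` regular items (semi-random but spaced apart).
    Parameter names and values are hypothetical.
    """
    rng = random.Random(seed)
    pool = list(benchmarks)
    rng.shuffle(pool)
    # First-time labelers see a fixed batch of benchmarks up front.
    queue = [pool.pop() for _ in range(min(initial, len(pool)))]
    since_last = 0
    gap = spacing + rng.randint(-jitter, jitter)
    for item in data:
        queue.append(item)
        since_last += 1
        if pool and since_last >= gap:
            queue.append(pool.pop())
            since_last = 0
            gap = spacing + rng.randint(-jitter, jitter)
    return queue

# 20 regular items interleaved with 7 hypothetical benchmark items.
queue = build_queue(list(range(20)), [f"b{i}" for i in range(7)], seed=0)
```

Spacing the benchmarks apart keeps labelers from recognizing and special-casing the test data.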
Understanding Labeler Performance
Each team member’s work is quantified and shown on the Performance tab. To see the details for a particular team member, click their row to expand it.