Computer vision in AI: the data you need to succeed


Developing the ability to annotate massive volumes of data while maintaining quality is a part of the model development lifecycle that companies often underestimate. It demands significant resources and specialized expertise.

At the heart of any successful machine learning / artificial intelligence (ML / AI) initiative is a commitment to high-quality training data and a proven, well-defined path to producing it. Without this pipeline of quality data, the initiative is doomed to fail.

Computer vision and data science teams often look to external partners to develop their training data pipeline, and these partnerships drive model performance.

There is no single definition of quality: “quality data” depends entirely on the specific computer vision or machine learning project. However, there is a general process that all teams can follow when working with an external partner, and this path to quality data can be broken down into four priority phases.

Annotation criteria and quality requirements

Training data quality is an assessment of the ability of a data set to fulfill its purpose in a given ML / AI use case.

The computer vision team should establish an unambiguous set of rules that describe what quality means in the context of their project. The annotation criteria are the rules that define which objects to annotate, how to annotate them correctly, and what the quality objectives are.

Accuracy or quality goals define the minimum acceptable values for evaluation metrics such as accuracy, recall, precision, and F1 score. Typically, a computer vision team will set quality goals for how accurately objects of interest are classified, how accurately objects are located, and how accurately relationships between objects are identified.
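
As a rough illustration of what such goals can look like in practice, the sketch below encodes a set of hypothetical minimum thresholds and checks measured scores against them. The metric names and threshold values are invented for this example, not taken from any particular project or platform.

```python
# Hypothetical quality goals for an object-detection annotation project.
# Metric names and threshold values are illustrative only.
QUALITY_GOALS = {
    "classification_f1": 0.95,       # how accurately objects are classified
    "localization_iou": 0.90,        # how accurately objects are located (mean IoU)
    "relationship_accuracy": 0.92,   # how accurately relationships between objects are identified
}

def meets_quality_goals(measured: dict, goals: dict = QUALITY_GOALS) -> dict:
    """Return a pass/fail flag for each metric against its minimum acceptable value."""
    return {metric: measured.get(metric, 0.0) >= minimum
            for metric, minimum in goals.items()}

# Example: scores for one batch of annotations checked against the goals.
batch_scores = {"classification_f1": 0.97, "localization_iou": 0.88, "relationship_accuracy": 0.93}
print(meets_quality_goals(batch_scores))
# {'classification_f1': True, 'localization_iou': False, 'relationship_accuracy': True}
```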

Workforce training and platform configuration

Platform configuration. Designing tasks and configuring the workflow takes time and expertise, and precise annotations require task-specific tools. At this stage, data science teams need an experienced partner to help them determine how best to configure labeling tools, classification taxonomies, and annotation interfaces for accuracy and throughput.

Testing and scoring of workers. To label data accurately, annotators need a well-designed training program so they fully understand the annotation criteria and the context of the domain. The annotation platform or external partner should safeguard accuracy by actively tracking annotator performance against gold data tasks and noting when a judgment is corrected by a more qualified worker or administrator.

Ground truth or gold data. Ground truth data is crucial at this point in the process as a baseline for scoring workers and measuring the quality of output. Many computer vision teams are already working with a ground truth dataset.

Sources of authority and quality assurance

There is no single quality assurance (QA) approach that meets the quality standards of all ML use cases. Specific business goals, along with the risk associated with an underperforming model, will drive quality requirements. Some projects achieve target quality using multiple annotators. Others require complex examinations against ground truth data or escalation workflows with subject matter expert verification.

There are two main sources of authority that can be used to measure annotation quality and to rate workers: gold data and expert review.

  • Gold data: Gold data, or the set of ground truth records, can be used both as a qualification tool to test and score workers early in the process and as a measure of output quality. When you use gold data to measure quality, you compare worker responses to the expert answers for the same records; the difference between these two independent, blind responses can be used to produce quantitative measures such as accuracy, recall, precision, and F1 score (see the sketch after this list).
  • Expert review: This quality assurance method relies on review by a highly skilled worker, an administrator, or a client-side expert, and sometimes all three. It can be used in conjunction with gold data quality control. The expert reviewer checks the answer given by the worker and either approves it or makes the necessary corrections, producing a new correct answer. Initially, an expert review may take place for every instance of labeled data, but over time, as worker quality improves, expert review can shift to random sampling for quality control.
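
As a minimal sketch of the gold data comparison described above, the example below scores one worker's binary classifications against ground truth records and derives accuracy, precision, recall, and F1. The label names and judgments are fabricated for illustration; a real pipeline would also handle multi-class labels, localization, and relationships.

```python
# Minimal sketch: scoring a worker's binary classifications against gold data.
gold   = ["defect", "ok", "defect", "ok", "ok",     "defect", "ok", "defect"]
worker = ["defect", "ok", "ok",     "ok", "defect", "defect", "ok", "defect"]

# Count agreement against the gold answers, treating "defect" as the positive class.
tp = sum(1 for g, w in zip(gold, worker) if g == "defect" and w == "defect")
fp = sum(1 for g, w in zip(gold, worker) if g == "ok" and w == "defect")
fn = sum(1 for g, w in zip(gold, worker) if g == "defect" and w == "ok")
tn = sum(1 for g, w in zip(gold, worker) if g == "ok" and w == "ok")

accuracy  = (tp + tn) / len(gold)
precision = tp / (tp + fp) if tp + fp else 0.0
recall    = tp / (tp + fn) if tp + fn else 0.0
f1        = 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(f"accuracy={accuracy:.2f} precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```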

Iterate toward data success

Once a computer vision team has successfully launched a pipeline of high-quality training data, they can accelerate progress to a production-ready model. With ongoing support, optimization and quality control, an external partner can help them:

  • Track speed: Efficient scaling requires measuring annotation throughput. How long does it take for data to move through the pipeline? Is the process speeding up? (A minimal throughput sketch follows this list.)
  • Adjust worker training: As the project evolves, labeling and quality requirements may change. This requires ongoing workforce training and grading.
  • Train on edge cases: Over time the training data should include more and more edge cases in order to make your model as accurate and robust as possible.
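
A minimal sketch of the throughput tracking mentioned above, assuming you have a completion timestamp for each labeled item; the timestamps here are generated purely for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical completion timestamps, one per labeled item (7 minutes apart).
completed_at = [datetime(2023, 5, 1, 9, 0) + timedelta(minutes=7 * i) for i in range(40)]

def throughput_per_hour(timestamps):
    """Annotations completed per hour over the observed window."""
    window_hours = (max(timestamps) - min(timestamps)).total_seconds() / 3600
    return len(timestamps) / window_hours if window_hours else float(len(timestamps))

print(f"{throughput_per_hour(completed_at):.1f} annotations/hour")
```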

Without high-quality training data, even the best-funded and most ambitious ML / AI projects cannot succeed. Computer vision teams need reliable partners and platforms to deliver the quality of data they need and to power life-changing ML / AI models.

Alegion is the proven partner to build the training data pipeline that will fuel your model throughout its lifecycle. Contact Alegion at [email protected].

This content was produced by Alegion. It was not written by the editorial staff of MIT Technology Review.
