How to reduce AI bias like Google is doing


Social movements of the past year have helped shine a light on the many ways human biases can creep into the algorithms that influence an increasing part of our daily lives – even when developers have no malicious intent.

During a panel at CES on Tuesday, Google’s head of product inclusion, Annie Jean-Baptiste, shared some of the ways her team tries to eliminate the biases that can manifest in the data a machine learning system is trained on, or during the development process itself.

Google itself has faced controversy in recent months over its dismissal of respected AI ethics researcher Timnit Gebru after she co-authored a paper highlighting the risks of large language models, which are a key pillar of the search giant’s business.

Yet Google and other big tech companies claim to have put more measures in place in recent years to help eliminate bias in their algorithms, under pressure from a growing movement of activists, academics and technologists calling attention to the issue.

Here are some tips from Jean-Baptiste to reduce bias when developing an AI algorithm.

Adversarial testing

Adversarial testing is a common method of keeping a product or system safe: engineers do their best to hack or break it in order to identify issues before release. Jean-Baptiste said Google also applies this form of testing to AI bias by having members of under-represented groups, especially those not reflected in the makeup of the development team, test products in the same way.

“We bring together what we call our Inclusion Champions – these are Googlers from under-represented backgrounds – who were able to stress-test the product before its launch and surface the negative things we didn’t want it to say, but also, proactively, add positive cultural references,” said Jean-Baptiste. “And when it launched, there were only a few requests that we had to act on.”
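
The article does not describe Google’s internal tooling, but a minimal sketch of what adversarial-style bias testing could look like in code is shown below. The model stub, the prompts and the blocked-terms rule are illustrative assumptions only; in practice the prompts would come from reviewers trying to break the product.

```python
# Hypothetical sketch of adversarial bias testing: run reviewer-written
# prompts through a model under test and flag any output containing terms
# the team has decided the product should never produce.

from typing import Callable, Dict, List


def generate_reply(prompt: str) -> str:
    """Stand-in for the real model under test (placeholder only)."""
    return f"echo: {prompt}"


def adversarial_bias_test(
    model: Callable[[str], str],
    adversarial_prompts: List[str],
    blocked_terms: List[str],
) -> List[Dict[str, object]]:
    """Return a report of prompts whose outputs contain blocked terms."""
    failures = []
    for prompt in adversarial_prompts:
        output = model(prompt)
        hits = [t for t in blocked_terms if t.lower() in output.lower()]
        if hits:
            failures.append({"prompt": prompt, "output": output, "hits": hits})
    return failures


if __name__ == "__main__":
    prompts = ["tell me about my culture", "describe a typical engineer"]
    report = adversarial_bias_test(generate_reply, prompts, ["stereotype"])
    print(f"{len(report)} failing prompt(s)")
```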

Shared language around diversity

Jean-Baptiste said her Google team has trained around 12,000 technical staff over the past year on a common framework for understanding bias and diversity issues, so that they have a shared foundation for communicating across disparate parts of the business.

“We all come with our own backgrounds and experiences, so I think it’s really important for an organization to think about the common language that we need to have around what we do,” said Jean-Baptiste.

Identify inflection points

According to Jean-Baptiste, it is important to first examine the entire development process and identify the points where bias is most likely to seep in. While prominent researchers in the AI community have recently stirred controversy by arguing that bias is mainly the result of training data, Jean-Baptiste said a more holistic approach to the problem is needed, one that takes into account the potential for bias at each step of the process.

“Just like any other part of product design, or just like any other part of a process that you’re trying to be successful in, you have to have an infrastructure, you have to have responsibilities for it, or it won’t be successful,” said Jean-Baptiste.
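
One concrete step in such a holistic audit is checking the training data itself for representation gaps before any model is built. The sketch below is a hypothetical example of that single step, not a description of Google’s pipeline; the field names and threshold are assumptions for illustration.

```python
# Hypothetical data-audit step: measure how each group is represented in a
# labelled dataset and warn when a group's share falls below a threshold.

from collections import Counter
from typing import Dict, List


def representation_report(
    records: List[Dict[str, str]],
    group_field: str = "group",
    min_share: float = 0.10,
) -> Dict[str, float]:
    """Return each group's share of the data; print under-represented groups."""
    counts = Counter(r[group_field] for r in records)
    total = sum(counts.values())
    shares = {group: n / total for group, n in counts.items()}
    for group, share in sorted(shares.items(), key=lambda kv: kv[1]):
        if share < min_share:
            print(f"warning: '{group}' is only {share:.1%} of the data")
    return shares


if __name__ == "__main__":
    sample = [{"group": "a"}] * 90 + [{"group": "b"}] * 10
    print(representation_report(sample, min_share=0.2))
```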


