Microsoft fixes 'racist' face recognition software
07 August 2018 15:35 GMT

Microsoft has altered facial recognition software that researchers accused of racial bias stemming from its training dataset.

Earlier this year, the software giant's Face API was criticized by researchers, who found that its error rate was much higher when identifying non-white people.

“Microsoft announced Tuesday that it has updated its facial recognition technology with significant improvements in the system’s ability to recognize gender across skin tones,” Microsoft’s John Roach wrote in a post on the company’s AI blog. “That improvement addresses recent concerns that commercially available facial recognition technologies more accurately recognized gender of people with lighter skin tones than darker skin tones, and that they performed best on males with lighter skin and worst on females with darker skin.”

He continued, “With the new improvements, Microsoft said it was able to reduce the error rates for men and women with darker skin by up to 20 times. For all women, the company said the error rates were reduced by nine times. Overall, the company said that, with these improvements, they were able to significantly reduce accuracy differences across the demographics.”

Microsoft says the high error rates its tech experienced on female faces with darker skin are indicative of a serious challenge facing the industry as a whole: artificial intelligence needs as broad a dataset as possible in order to be trained well. Of course, the blame for this issue still lies squarely with Microsoft: The company failed to include enough image data of black people to build a reliably accurate facial recognition product.
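The disparity the researchers surfaced comes down to disaggregating a classifier's error rate by demographic group rather than reporting a single overall number. A minimal sketch of that kind of audit (this is illustrative only, not Microsoft's or the researchers' actual evaluation code; the group labels and records are made up):

```python
# Illustrative audit sketch: compute per-group error rates for a gender
# classifier. All data below is hypothetical toy data.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples.
    Returns a dict mapping each group to its classification error rate."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical evaluation records: (skin-tone group, predicted gender, true gender)
records = [
    ("lighter", "male", "male"), ("lighter", "female", "female"),
    ("lighter", "male", "male"), ("lighter", "female", "female"),
    ("darker", "male", "male"), ("darker", "male", "female"),
    ("darker", "female", "female"), ("darker", "male", "female"),
]

rates = error_rates_by_group(records)
# In this toy data the lighter-skin error rate is 0.0 and the darker-skin
# rate is 0.5 -- the kind of gap the audits measured at scale.
```

A single aggregate accuracy figure would hide this gap entirely, which is why the benchmark datasets themselves need balanced coverage across skin tone, gender, and age.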

“The Face API team made three major changes,” Roach wrote in his post. “They expanded and revised training and benchmark datasets, launched new data collection efforts to further improve the training data by focusing specifically on skin tone, gender and age, and improved the classifier to produce higher precision results.”