MIT Researchers Introduce an AI System for ‘De-Biasing’ Algorithms

Given the reach and pervasiveness of algorithms in our daily lives, fairness and equal treatment become crucially important – a fact that became very salient to many people last year, when an MIT study revealed both gender and racial bias in face-recognition algorithms.

The lesson seems clear – bias in, bias out – and that holds even if no conscious prejudice was shown by anyone involved in assembling the training datasets, which the study mentioned above found to sometimes consist of 75 per cent male and over 80 per cent white images.

To address the issue, a team at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) is working on an algorithm that learns both the task at hand (such as face detection) and the underlying structure of the training data, which allows it to automatically identify and minimise bias by resampling.
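To make the idea concrete, here is a minimal sketch of such a joint model – a small variational-autoencoder-style network with an added task head, loosely in the spirit of the approach described above. All class names, layer sizes and loss weights are illustrative assumptions, not details taken from the paper:

```python
# Illustrative sketch (not the authors' code): a model that jointly learns
# a classification task and a latent representation of the training data.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DebiasingVAE(nn.Module):
    def __init__(self, input_dim=784, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU())
        self.to_class = nn.Linear(256, 1)            # task head (e.g. face / no face)
        self.to_mu = nn.Linear(256, latent_dim)      # latent mean
        self.to_logvar = nn.Linear(256, latent_dim)  # latent log-variance
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, input_dim))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterise
        return self.to_class(h), self.decoder(z), mu, logvar

def loss_fn(logit, recon, mu, logvar, x, y):
    # Task loss + reconstruction loss + a KL term shaping the latent space.
    task = F.binary_cross_entropy_with_logits(logit.squeeze(1), y)
    rec = F.mse_loss(recon, x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return task + rec + 1e-2 * kl

# Usage: one training step on a dummy batch.
model = DebiasingVAE()
x, y = torch.rand(32, 784), torch.randint(0, 2, (32,)).float()
logit, recon, mu, logvar = model(x)
loss_fn(logit, recon, mu, logvar, x, y).backward()
```

The latent codes learned by the reconstruction branch are what later drive the resampling step: they give the system a picture of how the data is distributed without anyone labelling the biases by hand.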

Face recognition. MIT has developed an AI system which can recognise and correct sampling bias in facial-recognition algorithms without any input from human programmers. Image credit: Steven Lilley via Flickr, CC BY-SA 2.0

In tests, the algorithm decreased “categorical bias” by more than 60 per cent compared with state-of-the-art facial-recognition systems, while maintaining their overall precision.

The key innovation of the new system is that while its counterparts require at least some input from humans to learn the relevant biases, MIT’s digital ‘de-biaser’ requires no hand-holding at all – just throw a dataset at it and it will grok the underlying structure, and then resample it as appropriate.
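A rough illustration of what that resampling could look like: once the model has produced a latent code for every training image, examples falling in sparsely populated regions of the latent space are drawn more often. The function below is a hypothetical sketch – the histogram binning, the smoothing constant `alpha`, and all names are assumptions rather than the paper’s exact procedure:

```python
# Illustrative sketch: turn latent-space density into sampling weights,
# so under-represented regions of the data get sampled more often.
import numpy as np

def sampling_weights(latents, bins=10, alpha=0.01):
    """latents: (N, D) array of per-image latent codes."""
    weights = np.ones(len(latents))
    for d in range(latents.shape[1]):
        hist, edges = np.histogram(latents[:, d], bins=bins, density=True)
        idx = np.clip(np.digitize(latents[:, d], edges[1:-1]), 0, bins - 1)
        weights *= 1.0 / (hist[idx] + alpha)   # rarer bin -> larger weight
    return weights / weights.sum()             # normalise to a distribution

# Usage: draw the next training batch with these probabilities.
rng = np.random.default_rng(0)
latents = rng.normal(size=(1000, 4))
batch_idx = rng.choice(len(latents), size=32, p=sampling_weights(latents))
```

Because the weights come straight from the learned latent structure, no human has to tell the system which attributes (skin tone, gender, lighting) are under-represented.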

“Facial classification in particular is a technology that’s often seen as ‘solved’, even as it’s become clear that the datasets being used often aren’t properly vetted,” said Ph.D. student Alexander Amini, lead author of a related paper presented this week at the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES).

Amini says that rectifying such issues is imperative as we start to see these kinds of algorithms being used in security, law enforcement and other domains.

In addition, the system developed by the team at MIT could become particularly relevant for larger datasets that cannot be checked manually, and might extend to other computer vision applications.

The full version of the paper is freely available on the AIES conference website.

Sources: paper, techxplore.com.