In “How I’m fighting bias in algorithms,” MIT grad student Joy Buolamwini discusses the implicit bias of the code that facilitates our interactions with modern technology and her mission to fight it before it’s ingrained in the neural networks of artificially intelligent systems.
As Buolamwini explains, it’s the scale and pace at which bias proliferates in machine learning that has the potential to lead to “exclusionary experiences and discriminatory practices.” She describes working on social robots as an undergraduate at Georgia Tech, where she struggled because the rudimentary facial recognition software couldn’t detect her face – presumably because she is Black.
According to Buolamwini, the training sets used to teach many machines facial recognition don’t include enough Black and brown faces to adequately detect people with darker skin tones. Therefore, there is an opportunity to create “full-spectrum” sets that are more inclusive. She goes on to explain how this bias carries over into law enforcement, where these facial recognition technologies too often point authorities to the wrong individuals.
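To make the idea of skewed training data concrete, here is a minimal sketch (not Buolamwini’s own method, and the data below is purely hypothetical) of how one might audit a face-detection system by comparing its detection rate across skin-tone groups – a large gap is the kind of disparity she describes:

```python
# Hypothetical audit sketch: compare face-detection rates across skin-tone groups.
# The `results` records below are made-up placeholders, not real evaluation data.
from collections import defaultdict

# Each record: (skin_tone_group, was_face_detected)
results = [
    ("lighter", True), ("lighter", True), ("lighter", True), ("lighter", False),
    ("darker", True), ("darker", False), ("darker", False), ("darker", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [detected_count, total_count]
for group, detected in results:
    counts[group][0] += int(detected)
    counts[group][1] += 1

for group, (detected, total) in counts.items():
    print(f"{group}: detection rate = {detected / total:.0%}")

# A wide gap between groups suggests the training set under-represents some faces;
# a "full-spectrum" training set aims to close that gap.
```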
I had never thought about the impact of bias in machine learning, but it is easy to see how – as these technologies become a larger part of our society – they could undermine the efficacy of some of our public institutions. I hope law enforcement agencies are careful about how they employ these algorithms, especially given the serious consequences a mistake could create.