Max Fern – How I’m fighting bias in algorithms (11/9)

In “How I’m fighting bias in algorithms,” MIT grad student Joy Buolamwini discusses the implicit bias in the code that facilitates our interactions with modern technology and her mission to fight it before it becomes ingrained in the neural networks of artificially intelligent systems.

As Buolamwini explains, it’s the scale and pace at which bias proliferates in machine learning that has the potential to lead to “exclusionary experiences and discriminatory practices.” She describes an experience she had working on social robots as an undergrad at Georgia Tech, where she struggled because the rudimentary facial recognition technology couldn’t detect her face – presumably because she is black.

According to Buolamwini, the training sets used to teach many machines facial recognition don’t include enough black (and brown) faces, so the resulting systems struggle to detect people with darker skin tones. There is therefore an opportunity to create “full-spectrum” training sets that are more inclusive. She goes on to explain how this bias carries over into law enforcement, where facial recognition technologies too often point authorities to the wrong individuals.
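
To make the point about unbalanced training sets concrete, here is a minimal sketch of the kind of audit this suggests: measuring a detector’s hit rate separately for each skin-tone group. The `detect_face` function, the dataset layout, and the group labels are hypothetical placeholders, not anything taken from the talk.

```python
# Minimal sketch (assumed setup, not from the talk): audit a face detector's
# detection rate per skin-tone group on images known to contain one face each.
from collections import defaultdict

def detection_rate_by_group(labeled_images, detect_face):
    """labeled_images: iterable of (image, group) pairs, where each image is
    known to contain exactly one face; detect_face(image) -> bool."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for image, group in labeled_images:
        totals[group] += 1
        if detect_face(image):
            hits[group] += 1
    # Detection rate per group; a large gap between groups (e.g. 0.99 for
    # lighter-skinned faces vs 0.65 for darker-skinned faces) is the kind of
    # disparity Buolamwini describes, and a signal that the training set
    # needs to be rebalanced toward a "full-spectrum" sample.
    return {group: hits[group] / totals[group] for group in totals}
```

The design choice here is simply to report error rates per subgroup rather than a single overall accuracy number, since an aggregate score can look excellent while one group is systematically failed.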

I had never thought about the impact of bias in machine learning, but it is easy to see how, as these technologies become a larger part of our society, they could undermine the efficacy of some of our public institutions. I hope that law enforcement is careful about how it employs these algorithms, especially given the dramatic consequences a mistake could have.

3 thoughts on “Max Fern – How I’m fighting bias in algorithms (11/9)”

  1. I didn’t read this paper, but the effects it describes are part of what my research at Lehigh is attempting to solve. It’s interesting that you mention the justice system using biased algorithms, and I agree that they should be careful. There was a famous case of algorithm misuse in Cook County, Illinois, where judges would use the output of a model to help with sentencing decisions. As it turned out, the model was trained on arrests, which do not directly correlate with convictions, so the algorithm should never have been used (the sketch after these comments illustrates the arrests-versus-convictions gap). I think this case shows why the topic is so interesting, and why solutions seem to become clear only in hindsight, which is troubling.

  2. I agree that it is alarming that our own societal biases can infiltrate our technology as well. One interesting thing to think about is that people often view AI as an equalizer, or as a way to avoid bias. However, if these systems may have been trained on biased data, can they really be equalizers?

  3. I also listened to this TED Talk. While watching it, I recognized that I, too, had never truly thought about the impact of bias in machine learning. How could a robot be biased based on skin color? I always viewed robots as machines that couldn’t have any bias because they are not actually human. However, it’s obvious to me now how this can be the case, based on how they are programmed and who programs them.
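
To make comment 1’s point about proxy labels concrete, here is a minimal sketch of the arrests-versus-convictions gap. All of the numbers, group names, and record fields below are made up purely for illustration; none come from the Cook County case itself.

```python
# Minimal sketch of the proxy-label problem: if a risk model is trained on
# arrests rather than convictions, a group that is arrested more often
# without being convicted looks "riskier" than it actually is.
records = [
    # (group, arrested, convicted) -- fabricated illustrative data
    ("A", True, True), ("A", True, False), ("A", True, False), ("A", False, False),
    ("B", True, True), ("B", False, False), ("B", False, False), ("B", False, False),
]

def positive_rate(records, group, field_index):
    rows = [r for r in records if r[0] == group]
    return sum(r[field_index] for r in rows) / len(rows)

for group in ("A", "B"):
    print(group,
          "arrest rate:", positive_rate(records, group, 1),      # what the model is trained on
          "conviction rate:", positive_rate(records, group, 2))  # what it is supposed to predict

# Group A's arrest rate (0.75) overstates its conviction rate (0.25), while
# group B's rates match (0.25 vs 0.25), so a model trained on arrests would
# systematically over-score group A.
```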
