Facial recognition technology only works when Joy Buolamwini puts on a white mask, hinting at the racial and gender biases in artificial intelligence (Photo courtesy of Netflix)


Review: ‘Coded Bias,’ where data is destiny

The documentary takes a look at the danger of algorithms.
Allison Wong

December 29, 2021

In the documentary “Coded Bias,” Joy Buolamwini, computer scientist and digital activist, poses a question: “What does it mean to be in a society where artificial intelligence is increasingly governing the liberties we might have?”

This film is a must-watch for anyone interested in the tech industry, but it also sheds light on the dangers of artificial intelligence that anyone, anywhere, can and will experience.

As a result of the ideas hammered in by sci-fi novels and movies over the decades, society today shares a common notion about artificial intelligence — it can become almost human and take over the world, giving us plenty to fear.

Buolamwini disputes this misconception in her exploration of racial and gender bias in facial recognition technology. As a student at MIT, she was drawn to computer science because coding seemed detached from the “problems of the real world.” Like many others in her position, Buolamwini faces challenges in the tech world as a woman of color who is often underestimated.

When she set out to create a mirror using computer vision technology, she found that it wouldn’t track her face until she put on a white mask. Further research revealed that the technology is highly susceptible to bias: lighter-skinned, male individuals consistently fare better in a rapidly changing world where destructive algorithms are abused to classify and categorize everyone.

Large technology companies are employing these algorithms everywhere, but the dark truth behind artificial intelligence’s promising future is that even those in power don’t know how their algorithms work. The goals are clear, but the path to achieving them is opaque.

Apple and Amazon, two companies at the forefront of artificial intelligence, have both had mishaps in which their algorithms were blatantly wrong. Without understanding these algorithms, bias can be impossible to prevent — if that is even the goal of some of these corporations.

In China, systems categorize citizens to maintain social order; in America, corporations use artificial intelligence for commercial applications and revenue.

Throughout the documentary, the spheres of state and corporate surveillance overlap in how they target poor and working-class communities first, an example of the racial bias in how artificial intelligence is deployed.

Oftentimes we think the wealthy will be the first to benefit from advances in artificial intelligence, but Buolamwini shows that Black and brown communities are the primary targets for technology trials that don’t take their safety and rights into account. According to Forbes, a dataset called “Faces in the Wild,” used as a benchmark for testing facial recognition software, was 70% male and 80% white, pointing to how biased artificial intelligence programs can be.

“Coded Bias” demonstrates a real, tangible need for regulations on the algorithms that could potentially ruin people’s lives. Safiya Umoja Noble, professor and writer at UCLA, emphasizes in the documentary how harmful algorithmic oppression is, especially for women of color.

It’s imperative that legislation be put into place to keep technology in check, because above all, lives should always be more important than profit. When we sacrifice equality for order or industrial gain, we strengthen existing structures of power in our communities.

Several questions remain by the end of the film.

What will the powerful do to the powerless using artificial intelligence? How can we deconstruct the systems keeping a certain demographic in power? What steps can we take to prevent a future where not only our view of the world is being governed by skewed AI algorithms, but our actual lives?