“Algorithms are still made by human beings, and those algorithms are still pegged to basic human assumptions. They’re just automated assumptions. And if you don’t fix the bias, then you are just automating the bias.”
-Representative Alexandria Ocasio-Cortez (D-NY)
We are in the midst of a second pandemic. No, not the “second wave” of this horrid coronavirus pandemic. The second pandemic is one that has festered in our cultures for centuries and is even more destructive than COVID-19: the pandemic of systemic racism in regions around the world. The unrelenting protests seen around the world are a clarion call to rid our societies of this supremely unjust, human-inflicted scourge.
Algorithms have evolved from straightforward mathematical abstractions into a far more pervasive and sophisticated paradigm. Their potential to exacerbate inequality in many venues of society is intimidating, particularly through discrimination against women, minority groups, people of various sexual orientations, and the financially challenged. There have been myriad examples of AI failing to protect these disadvantaged groups. In 2015, Google’s photo app infamously classified images of Black people as “gorillas,” and Nikon’s camera software similarly mislabeled images of Asian faces as “blinking.” Microsoft’s AI-driven Tay chatbot posted racially offensive tweets in 2016 and was shut down in less than a day. In addition, the Algorithmic Justice League reported that facial-recognition error rates for women of darker complexion can be as high as 35%, versus roughly 1% for white males. Finally, predictive software can over-rely on historical data and thus perpetuate bias and discrimination: crime forecasts can drive over-policing and more arrests of Black people, while women can be less likely than men to be shown Google ads for more lucrative offers.
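To make the facial-recognition disparity concrete, here is a minimal sketch, in Python, of a disaggregated evaluation: rather than reporting a single aggregate accuracy, the error rate is computed separately for each demographic subgroup. The data, group names, and function are hypothetical, chosen only to illustrate how a gap like the 35% versus 1% figure above is surfaced.

```python
import pandas as pd

def error_rate_by_group(df, group_col, label_col, pred_col):
    """Misclassification rate within each demographic subgroup."""
    errors = (df[label_col] != df[pred_col]).astype(float)
    return errors.groupby(df[group_col]).mean().sort_values(ascending=False)

# Toy predictions from a hypothetical classifier, constructed so the
# subgroup gap mirrors the kind of disparity reported by the audit.
df = pd.DataFrame({
    "group": ["darker_female"] * 20 + ["lighter_male"] * 20,
    "label": [1] * 40,
    "pred":  [0] * 7 + [1] * 13   # 7/20 = 35% error for darker_female
           + [1] * 20,            # 0/20 = 0% error for lighter_male
})
print(error_rate_by_group(df, "group", "label", "pred"))
# darker_female    0.35
# lighter_male     0.00
```

A model evaluated only in aggregate here would look excellent (over 82% accuracy) while failing one subgroup a third of the time; auditing error rates per subgroup is what exposes the disparity.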
Countermeasures for these egregious mistakes include not only ensuring that the input data are balanced in representation but also keeping human judgment in the design and training of these models to safeguard against perpetuating bias. One can look at these algorithms the way we observe our children: they mirror and are influenced by the adults around them, so we must be extra cautious not to pass on our biases and, worse, our racism and sexism. Sometimes the bias is implicit but nevertheless impactful and even dangerous. Finally, although these algorithms can be very biased, the hope is that they can also neutralize these tendencies in the future: it will be up to humans to create algorithms that reflect what the world should be, and not what the world has been.
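As a complement, here is a minimal sketch of the first countermeasure, balanced representation in the input data: it flags subgroups whose share of a training set falls well below an equal-share baseline. The dataset, group names, and threshold are hypothetical; a real audit would compare against a task-appropriate reference distribution rather than a uniform one.

```python
from collections import Counter

def representation_report(groups, tolerance=0.5):
    """Flag subgroups whose share of the data falls below a fraction
    (tolerance) of an equal-share baseline -- a crude proxy for
    'balanced representation'."""
    counts = Counter(groups)
    fair_share = 1.0 / len(counts)
    n = len(groups)
    return {g: (c / n, c / n < tolerance * fair_share)
            for g, c in counts.items()}

# Hypothetical composition of a face-image training set:
groups = (["lighter_male"] * 700 + ["lighter_female"] * 200 +
          ["darker_male"] * 70 + ["darker_female"] * 30)
for g, (share, flagged) in representation_report(groups).items():
    print(f"{g:15s} share={share:.2%} underrepresented={flagged}")
```

Run on this toy dataset, the report flags the two darker-skinned subgroups as underrepresented; fixing that imbalance, whether by collecting more data or by reweighting samples during training, is one concrete way the human designers mentioned above can intervene before bias is automated.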