E. Tendayi Achiume is a Professor of Law and Faculty Director of the Promise Institute for Human Rights at the University of California, Los Angeles. She serves as the United Nations Special Rapporteur on Racism, Racial Discrimination, Xenophobia and Related Intolerance.
As we approach the one-year anniversary of the COVID-19 pandemic, we have learned just how much of our society can be conducted in virtual landscapes. We keep up with our friends through endless scrolling on international social media platforms, we coordinate Zoom calls with colleagues across different time zones, and our children log into their classrooms from behind a screen: our virtual society has become intrinsic to our physical one. Yet one could argue that society and digital technology have been heavily intertwined since the latter's development, and that the societal problems we see in our physical world are also manifested in, and perpetuated by, our online one.
In our interview with UN Special Rapporteur Tendayi Achiume, we discussed these relationships between societal inequality and technology, covered in her 2020 thematic report, "Racial discrimination and emerging digital technologies: a human rights analysis". Ultimately, we learned how different forms of racial discrimination appear "in the design and use of" digital tools. In our brief interview, we not only gained an appreciation of the urgency and timeliness of Professor Achiume's findings, but also began to understand the gravity of our virtual situation: not even our most advanced, "objective" digital inventions can overcome historical legacies of social injustice.
Definitions and Examples of Techno-chauvinism and Technological Determinism
When discussing how societal problems are manifested in technology, Professor Achiume argues that we must be cognizant of the ills of techno-chauvinism. Techno-chauvinism is the idea that technology is the solution to everything, when in reality some forms of digital technology create more (or simply different) problems than before. If we presume that technology will uniformly fix the issues that we cannot, Achiume claims, we will fail to critically investigate the ways that technology "reproduces human biases and perpetuates structural inequalities". After all, the structures, designs, and functions of social media platforms and AI algorithms are shaped by the beliefs of those who create them.
The UN Special Rapporteur found that emerging digital technologies can directly perpetuate racial discrimination. In the report, she discusses "digital redlining", or "the creation and maintenance of technology practices that further entrench discriminatory practices against already marginalized groups". In Section III B 32, Achiume describes how Facebook used to have a function that allowed advertisers to "narrow audiences" by filtering Facebook users by "ethnic affinities" under a demographics category of its ad-targeting tool. Facebook alone controls about a quarter of the market share for digital advertisements in the U.S., so this tool had a significant impact, and it resulted in Black people being prevented from viewing housing advertisements. If we were to accept principles of techno-chauvinism and assume that technological tools are ubiquitously advantageous, we would fail to see how problems of racial bias were created by this new tool. Techno-chauvinism shields digital tools from critique and blinds us from seeing how racial discrimination can be ingrained in the structures of our social, political, and economic interactions online, including how these virtual inequalities can be actualized into very real harm in our physical lives.
It is also important to discuss the spaces in which these global networks are created. Digital technologies and social media platforms are produced mainly in a few concentrated environments in the Global North, which directly or indirectly affects the product. Silicon Valley, for example, is experiencing what has been called a "diversity crisis" and has had "a history of problems of discrimination, exclusion, and sexual harassment". These findings prompt the questions: Who is creating AI technology? And who is it being created for?
In an investigation of the effects of this lack of diversity, Achiume includes a 2019 study that reviewed facial recognition algorithms from 99 developers around the world. It found that "many of these algorithms were 10 to 100 times more likely to inaccurately identify a photograph of a black or East Asian face, compared with a white one. In searching a database to find a given face, most of them picked incorrect images among black women at significantly higher rates than they did among other demographics".
We can see how even seemingly "neutral" AI systems are shaped by the lack of representation in the environments in which they are created. The inequalities at various intersections of our social identities, such as race and gender, are then present in the technologies we produce. Thus, we cannot simply code away racial bias, because technology can never be a completely "neutral" instrument: it is ultimately influenced by what its creators perceive, or rather, what they do not perceive. In fact, this problematic assumption that technology is objectively neutral is codified in the term "technological determinism".
Technological determinism, defined and discussed by Achiume, is the assumption that technology is somehow isolated from, and uninfluenced by, social, political, and economic structures. This perspective is false. Our digital landscapes determine "everyday outcomes in employment, education, health care and criminal justice", and they will continue to facilitate harm unless we are able to identify how society's injustice manifests itself in our digital tools, a process that is possible only through the rejection of technological determinism.
Limitations of “Colorblindness”
The term "colorblind" is sprinkled liberally throughout texts, articles, and journals, and while it has traditionally carried a positive connotation, Professor Achiume cautions against its use. She highlights the insidious effects of "colorblindness," particularly in a world increasingly defined by technology and its perceived objectivity. Achiume recognizes that the idea of a colorblind world seems appealing. Who wouldn't want to live in a world where a person is not judged by something as inconsequential as skin color?
But the sad fact is that "we live in a society which has been constructed on a racial basis." Approaching issues such as racial discrimination in technology and AI programs without acknowledging the inherent systemic obstacles in this world does no one any favors. Achiume brings up the current COVID-19 pandemic and how black and brown communities have been disproportionately affected, in part due to factors such as exclusion from healthcare systems and the overrepresentation of people of color in social services. By stating that "the colorblind approach is not enough," she is calling for approaches that examine the racial and ethnic impact of policymaking that appears to be neutral. She asserts that it should not be about blindness to difference, but rather an engagement with how differences make certain communities more vulnerable to exploitation, discrimination, and exclusion. In fact, she insightfully states that we don't yet live in a world where taking a colorblind approach can lead us to justice; instead, it often sustains systems of injustice.
Importance of Public Awareness
To further expand on this theme of objectivity, Achiume coined the brilliant term "the veneer of neutrality," which applies especially to the current design and use of emerging digital technologies. There is a widely held belief that technology is completely impartial, but it is in everyone's best interest to shed this notion. There are people behind the creation of these programs who, whether consciously or subconsciously, perpetuate harmful stereotypes or prejudices.
One of the most terrifying manifestations of this is predictive policing, a phenomenon seen around the world that contributes directly to the over-policing of black and brown communities. Achiume highlights the transnational nature of these programs and stresses that this is not an issue seen only in the United States. For example, in the United Kingdom there is a predictive policing database intended to create profiles of those most likely to commit crime. A staggering 78% of those represented are black youth, despite the fact that, when looking at actual rates of violence, black youth are implicated only 20% of the time. She insists that one of the most urgent courses of action is public awareness, especially regarding the surveillance and predictive policing that exist around the world.
Even in Los Angeles, which many of us call home, the extent of police surveillance is incredibly high. Professor Achiume spotlights the coalition Stop LAPD Spying, whose name perfectly reflects its mission statement. They work hard to release reports documenting the high levels of racialized surveillance by the LA Police Department. The large gap in public awareness is a huge wall to climb, and Achiume believes this is why the devastating murder of George Floyd ignited the world in the way it did. Public awareness is truly the first step toward seeing action, and one of the goals of her report is to raise the all-important point that this issue needs to be on national agendas. In a perfectly worded conclusion, she declares that "you don't get change if you have societies that are unaware of the depth and the nature of a problem." These new forms of surveillance disproportionately affect black and brown communities, who are more subject to over-policing and discrimination than their white counterparts, a disparity reflected in prison populations in which minorities are unfairly overrepresented. Relating back to Professor Achiume, the LAPD's surveillance tactics are said to be completely objective, but as discussed with the aforementioned veneer of neutrality, technology can never be truly "neutral". Hence the need for caution, awareness, and action.
Reflection
As the world continues to grow more globalized, criticizing the unintended consequences of emerging technology can lead to the implementation of systems that are actually effective and useful. We must recognize how the concentration of digital technology development in the Global North has resulted in both direct and indirect perpetuations of racial bias in the design and use of the tools produced. Unabated trust in our new virtual world, with no caution about how it could reflect the racial bias of our physical one, will simply perpetuate such harm.
Dismantling principles such as techno-chauvinism, technological determinism, and colorblindness will help us understand how these tools discriminate along racial and gendered lines, such as through heightened surveillance of certain communities. The growth of technology can be a wonderful thing, but it also has the potential to be incredibly dangerous. In an increasingly interconnected world, nothing is truly as isolated as it seems. An innovation in Silicon Valley can have reverberations throughout the world. With this knowledge, we must work hard to create digital technologies that do not perpetuate the problems and prejudices we face in the real world.