
Racism is written into the codes, protocols, and algorithms of the Internet

AUTOMATION / The Internet is far from just a direct information highway towards freedom




(THIS ARTICLE IS MACHINE TRANSLATED by Google from Norwegian)

In one of the most iconic scenes from the movie The Matrix, the protagonist Neo sees a black cat make exactly the same movements twice in a row. When Neo tells the hackers Trinity and Morpheus about his déjà vu, they immediately launch a comprehensive security operation. It is clear that something is badly wrong. The two hackers, leaders of an illegal resistance movement fighting the Matrix's digital totalitarianism, hastily explain to Neo that a déjà vu typically represents a "glitch" in the Matrix's operating system. A glitch is more than just a random and transient phenomenon: it is the sign of a more extensive systemic threat, which means they are in imminent danger.

This famous movie sequence is the starting point for Ruha Benjamin, Princeton professor of African American studies, in her discussion of systemic racism in the book Race After Technology: Abolitionist Tools for the New Jim Code.

Google's image algorithm

One glitch that most people probably remember is Google's highly praised Photos app, which back in 2015 shone with automatic image recognition right up until it misidentified a young black couple as "gorillas". Understandably, the IT giant found itself in a shitstorm. Google's chief social architect at the time, Yonatan Zunger, was quick to apologize. It was, of course, a deep scratch in the paint for a smoothly polished company like Google, which has built its name and reputation around the motto "don't be evil". Google assured the public that it had put its best programmers to work on the image recognition algorithm. It turned out to be difficult to find the cause of the error, however, and the solution was simply to remove the image tag "gorillas" from the algorithm's vocabulary altogether.

Algorithms can also actively discriminate and reproduce existing social prejudices.

The example of Google's image algorithm is imprinted on the collective retina as one of the clearest demonstrations that artificial intelligence does not necessarily represent an open-minded approach to the world. Algorithms can also actively discriminate and reproduce existing social prejudices. The examples in Benjamin's book are legion: an image search for "three black teenagers" returns police mugshots of young black men, while a search for "three white teenagers" conversely returns images of smiling high school students. Or what about the speech recognition software that cannot understand African American dialects, because it has been developed to appeal to a more affluent target group of white, well-to-do middle-class citizens? And at the very heavy end of the scale, we have the crime-preventing "predictive algorithms" that classify infants from black neighborhoods as "gang members"...

According to Benjamin, there is no reason to suppose that such examples are exceptions. All of these more or less spectacular glitches from everyday life in the United States, such as when the GPS reads Malcolm X Boulevard as Malcolm 10 (!) Boulevard, point toward a more comprehensive, systemic flaw in the digital infrastructure we call the Internet. And the last decade, with the advent of the alt-right movement, Trump, and racist trolls, has made it clear to everyone that the Internet is far from just a direct information highway to freedom.

An overarching «hostile architecture»

The problem, according to Benjamin, is therefore not just racism on the Internet. As Google's Photos app exemplifies, racism is literally written into the Internet's codes, protocols, and algorithms. It is of course depressing when racist individuals come together in various internet forums for the purpose of trolling people of a different ethnicity or sexual orientation than their own. But rather than studying racism on an individual level, Benjamin is more interested in uncovering how our common digital infrastructure takes the form of an overarching "hostile architecture". Referring to the American urban planner Robert Moses, who allegedly deliberately built certain freeway bridges in New York so low that public transport buses from poorer and predominantly black neighborhoods could not pass under them, Benjamin writes: "The academic truism that race is 'constructed' rarely takes into account such completely concrete concrete structures, and even less so digital structures."

Much of what we call artificial intelligence, or AI, is nothing more than a digital potentiation of existing social prejudices. That racism is written into the codes of architecture, civil society, and the legal system was, as Benjamin points out, already the case long before AI was introduced. Does AI really offer a scientifically more robust and ethically neutral alternative to human decision-making and data interpretation?

Automated racism

The more or less explicit racial segregation of the American Jim Crow era – which ranged from social norms to statutory apartheid systems in, for example, the school system and the prison system – has now been algorithmically translated into what Benjamin calls a new "Jim Code", a déjà vu of historical dimensions. With Benjamin's description of the automation of racism, it becomes clear that "race" itself is a technology – a more or less meaningless social marker that can be used as coded data input to sort, rank, and discriminate. Then as now.

What remains is the task of creating new abolitionist tools that can be used both to break down the hostile architecture of the Matrix and to build a new world.





Dominique Routhier
Routhier is a regular critic of Ny Tid.
