(Note: This article is mostly machine-translated from Norwegian by Google Translate)
Things are no longer what they used to be. As more and more things are equipped with sensors, assigned a network address and connected to the internet, things have begun to behave in entirely new, and sometimes rather unpredictable, ways. Like when your new «smart» baby monitor, in the middle of the night and completely unprompted, starts playing The Police's 80s surveillance song Every Breath You Take, followed by unknown male voices shouting cruel insults of a sexual nature at your infant. Or, more dramatically, when your Tesla's autopilot, as happened in the United States a few years ago, sends you at full speed into the side of a truck, with fatal results.
With their new book The Internet of Things, the two media theorists Mercedes Bunz and Graham Meikle have written a critical and readable introduction to this new order of things, in which the number of internet-connected things has long since surpassed the number of people on the planet. In 2017, there were 8.4 billion things connected to the net, while reports project that the figure for 2020 will be somewhere between 26 and 50 billion «smart» devices.
Things have started to behave, one might say with Marx's analysis of the commodity in mind, with «metaphysical subtleties and theological niceties», as if they were endowed with lives of their own and possessed their own will. Marx's most famous example is familiar to most and concerns a dancing table. Today, the notion of dancing tables or talking coats (another of Marx's favourite examples) no longer seems entirely absurd compared to some of the things we are presented with in our networked daily lives. Now everything from telephones, baby monitors and watches to refrigerators, cars and metros, even entire cities, possesses so-called intelligent or «smart» features, which enable them to communicate with each other and act «autonomously».
But as the authors point out, this means that the things around us become smarter, not necessarily that we ourselves do. Rather the opposite: «The paradox of a smart device is that its user does not have to understand it.»
The more user-friendly something is, the less it is necessary to know about how it technically works. In that sense, the authors write, using Apple's iPad as an example, much new technology is actually designed to appeal to users as if they were children (which many iPad users, of course, are).
Associated with this deskilling of the user is an increased expectation of, and dependence on, the technology actually working when we use it. Not only do most average users become more or less technically incompetent; once they get used to the technology, they are also often unable to complete relatively simple tasks without its help. One example from the book is a person who was supposed to find the local train station in a small Belgian town and who, instead of checking the signs or asking for directions, ended up following the navigation system over 900 miles in the wrong direction, to Italy. One could, of course, say that if the technology had simply worked better, this admittedly extreme (but far from unique) episode could have been avoided. But it is precisely one of the book's strengths that it consistently emphasizes that the question of technology is not a purely technical question. For even when the technology works optimally, it is far from «neutral».
The book is built around the various new capacities of the «thing» (speaking things, seeing things, tracking things …) and shows how each of these new capacities is complicated by what the authors call «inbuilt politics». A good example is the new initiatives within artificial intelligence in image recognition (so-called computer vision), which is becoming an increasingly important part of, for example, Google's search engine features. For the development and «training» of Google's self-learning algorithms, the so-called neural networks, huge image datasets harvested from various social media platforms are used. When Google's newly developed algorithm «recognized» an image of two black people as gorillas, it naturally caused a scandal.
One of the explanations was that the datasets on which the algorithm had been trained consisted predominantly of white faces, and that the algorithm was therefore unable to classify black faces as belonging to the category «human». The example shows how the algorithm's «vision» is programmed to promote a particular politics of visibility that, though perhaps unintentionally, takes white skin colour as the «neutral» starting point for the definition of the human.
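The mechanism the authors describe can be reduced to a toy sketch. The following is not Google's system or any real training pipeline, just a minimal illustration, with invented labels, of how a learner fed a skewed dataset inherits that skew: in the degenerate limit, it simply predicts whatever dominated its training data.

```python
# Toy illustration of dataset bias (hypothetical labels, not a real model):
# a "classifier" that learns only the majority label of its training set.
from collections import Counter

def train_majority_classifier(labels):
    """Return a function that always predicts the most common training
    label -- the degenerate limit of learning from imbalanced data."""
    most_common = Counter(labels).most_common(1)[0][0]
    return lambda sample: most_common

# Hypothetical training set: 95 examples of category "A", only 5 of "B".
training_labels = ["A"] * 95 + ["B"] * 5
classify = train_majority_classifier(training_labels)

# Every new input -- including genuine "B" cases -- is labelled "A".
print(classify("a sample that really belongs to B"))  # prints A
```

Real neural networks fail in subtler ways than this caricature, but the underlying point stands: the «vision» of the system is bounded by what its training data made visible.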
Google's algorithm was referred to in the press as «the racist algorithm». But even though so-called object-oriented philosophy and the like are fashionable, and one likes to describe «things» as having agency on a par with humans, it seems strange to blame a machine for being prejudiced. Not only is the company Google thereby absolved of its responsibility; a problem that is in fact political, or political-economic, is reduced to a purely technical one.
A more fruitful critical question, here in the Trump era, would therefore begin where the book's authors (unfortunately) stop, namely with the question: Who benefits?