(THIS ARTICLE IS MACHINE TRANSLATED by Google from Norwegian)
This book is unmistakably critical, and its title, Artifictional Intelligence, suggests where the problem lies: aided by science fiction and a self-serving IT sector, belief in an omnipotent artificial intelligence has taken hold. Under cover of this ideology about future possibilities, the artificial intelligence we actually have – imperfect, fallible and annoying – creeps into ever more areas of society. Blinded by relative success, we have become overly indulgent of the weaknesses of talking robots and digital helpers.
A deterministic hype
The tech industry's alliance with – and weakness for – science fiction often culminates in fantasies of robot takeover, which in practice serve as part of a deterministic hype, however dystopian the stories may be, Collins points out. That the notion that artificial intelligence will almost inevitably "take over the world" has been endorsed by authorities like Stephen Hawking is disturbing enough. That Elon Musk argues we should connect our brains to computer systems so as not to be "made irrelevant" by the machines – and that he himself is developing this technology – is worse. That Ray Kurzweil, author of The Singularity Is Near, the very Bible for those who believe in artificial superintelligence, is also an executive at the mighty Google, is perhaps worst of all.
Despite such examples, Collins argues, belief in artificial intelligence is far greater among philosophers, evolutionary biologists, and the general public than among those who really know the field. The lack of specialized knowledge means that we rely on experts with mixed agendas, while the movies bind us to a dubious vision of tomorrow's world.
Disloyal, but neutral
Movies like Her, where the protagonist falls in love with a chatbot only to learn that it is involved with 641 other men, or Ex Machina, where a love robot takes revenge on its creator, are interesting cultural symptoms, Collins says – but the underlying pathology lies in an inability to distinguish between simulation and reality. We project the human onto the machines and the machines onto the human, thus erasing the distinction. Computers may not be loyal or loving, he notes soberly, but neither are they evil or manipulative.
Collins' arguments build in part on the Heidegger scholar Hubert Dreyfus of UC Berkeley – and his tellingly titled book What Computers Can't Do (1972) and its sequel What Computers Still Can't Do (1992). Here the argument is that artificial intelligence needs a body in order to be in the world in a way that makes words meaningful – words like "good" and "evil," for example. When artificial intelligence falls short in trying to pass as human, it is because of those parts of intelligence that depend on bodily and linguistic interaction with others – not only body language, but linguistic communication that gives a fine-tuned sensitivity to trust, disagreement, irony and creative rule-breaking.
The blind spot of the machines
Language learning thus involves a tacit knowledge that we cannot easily make explicit and simulate in a program. Since Dreyfus' books came out, however, it has begun to look as if even a sense of irony and humor can be learned by experiencing previous rule violations and double meanings – an experience that can apparently be gained through big data. The crowning example of this breakthrough is the Watson program, which beats humans at the quiz game Jeopardy! by trawling through millions of pages online. It recognizes patterns and understands puns based on past cases and probability-based analyses.
While humans remain the same, the processing capacity of machines increases exponentially, and they can find ever more sophisticated patterns in ever larger collections of data. The singularity is near, Kurzweil affirms, dismissing all reservations with a pointed provocation that Collins sees as key: "If pattern recognition doesn't 'count' as real understanding, then humans don't have understanding either." On the contrary, Collins replies: recognizing things is only a small part of knowledge. Our understanding arises not only from recognizing things in the world, but from endlessly acquiring ways of organizing the world – ways that are culturally conditioned, infinitely nuanced, and ever-changing.
Diverse interpretations
Knowledge arises through a complex interplay of experience, language and social relationships. As a sociologist of science, Collins has become an expert on the phenomenon of expertise itself. By examining how knowledge is developed, tested and debated in communities of specialists, he can also say something about the prospects for artificial intelligence.
Collins has for many years studied the community of gravitational-wave researchers, and has thereby also learned enough physics to take meaningful part in conversations on the topic. Such familiarity with the discourse, he admits, is a kind of simulation that a computer might achieve. Being able to talk about a topic, however, is not enough to give him the authority to contradict the researchers: he cannot judge whether a controversial statement is an advanced joke, an ingenious contribution, or the product of a deep misunderstanding. Understanding based on pattern recognition will always rest on knowledge that already exists, but science is built on disagreement in the pursuit of new knowledge. This requires real expertise, understood as judgment.
Healthy skepticism
The great danger of the belief in artificial intelligence is that we misunderstand what natural intelligence is. The idea that all knowledge consists in analyzing available data is a distorted picture of learning that obscures a deeper view of reality. The world is not a collection of finished facts, but can be seen in many different ways that are partly contradictory and partly complementary. This is the book's essential point, which in Collins' hands partly drowns in a host of pointed arguments. Yet the book's strength lies precisely in this fixation on detail – in the author's willingness to examine and challenge the shortcomings of artificial intelligence point by point, in the name of scientific skepticism. A discussion of the consequences of what he calls "surrender" to artificial intelligence lies beyond the book's project. Still, the book stands as a sober defense of a threatened humanism in the age of delirious posthumanism.