Can a machine understand all human preferences? Are there altruistic machines? Will superintelligent artificial intelligence (AI) ever arrive? Will it be possible to upload human consciousness to a computer program? These are the questions Stuart Russell takes up in this book. Russell is a professor of computer science at the University of California, Berkeley, and an honorary fellow of Wadham College, Oxford. He has received numerous awards and honors for his research on the relationship between computers and humans, and he has written several books on the dangers and benefits of artificial intelligence.
The book addresses important and central issues, such as universal basic income and how technology should be used in the education system. Russell also discusses the problems of using humanoids, robots that resemble humans.
Alan Turing warned against making robots as human-like as possible. He believed this could create emotional bonds from human to machine, which could help the machines take over. Russell agrees, but also notes that no one has listened to Turing on this point.
Should robots be given the status of electronic persons with rights of their own? How do we prevent humans from ending up with lower status than robots? Who should be punished if a robot does something criminal, the programmer or the robot? And how does one punish a robot? Imprisoning robots is pointless, because they have no feelings.
Russell declines to answer the question of when superintelligent AI will arrive. He points to all the wild, inconsistent predictions that knowledgeable people have served up throughout history, and argues that the question presupposes a "conceptual breakthrough" about which nothing can be said in advance.
Russell offers up many striking and entertaining examples of how wrong one can be: in 1933, for example, the nuclear physicist Rutherford dismissed the idea that energy could ever be extracted from a nuclear chain reaction, and only a day later the physicist Szilárd conceived the concept of the nuclear chain reaction.
Human preferences do not exist inside a machine and are not directly observable, but there must still be a link between the machine and human preferences, Russell writes. Those preferences can be observed through the choices humans make. If a machine is to understand human preferences, it must become familiar with what humans want. A machine that knows nothing about human preferences is of no use to humans, and then the point of the machine is gone.
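The idea of observing preferences through choices can be illustrated with a small sketch. This is my own toy construction, not from the book: a machine holds beliefs over two hypothetical "preference models" and updates them from a choice the human makes between options described by (taste, cheapness) features.

```python
import math

# Candidate preference models: feature weights over [taste, cheapness].
# These models and features are invented for illustration.
models = {
    "values_taste": [0.9, 0.1],
    "values_price": [0.2, 0.8],
}

def utility(weights, option):
    return sum(w * f for w, f in zip(weights, option))

def update(belief, chosen, rejected):
    """Shift probability toward models under which the chosen option
    scores higher than the rejected one (simple softmax likelihood)."""
    new = {}
    for name, w in models.items():
        u_c, u_r = utility(w, chosen), utility(w, rejected)
        likelihood = math.exp(u_c) / (math.exp(u_c) + math.exp(u_r))
        new[name] = belief[name] * likelihood
    total = sum(new.values())
    return {k: v / total for k, v in new.items()}

belief = {"values_taste": 0.5, "values_price": 0.5}
# The human picks a tasty but expensive option over a bland, cheap one:
belief = update(belief, chosen=[0.9, 0.1], rejected=[0.3, 0.9])
print(belief)  # belief has shifted toward "values_taste"
```

One observed choice is weak evidence; repeated choices would sharpen the belief further, which is the point Russell makes about preferences becoming visible through behavior over time.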
Humans are not always rational, so how is a machine to take our species' imperfection and irrationality into account when figuring out what we want? Machines must learn to interpret these "disruptions".
Different understandings of context can also create disruptions in communication between humans and machines. If I say to a machine, "Get me a pizza no matter what it costs!", is the machine then willing to kill half of humanity to get me a pizza? Or will it buy a pizza that costs half a million? How do we avoid a robot scouring half the world in a desperate search for coffee because the nearest coffee shop was closed and I wanted coffee "no matter what"? How are machines and humans to learn to talk to each other? And how will they learn to understand each other?
Ethics and morals
Can a machine be filled with ethics and morals that enable it to solve human problems? The trouble with machines, Russell writes, is not that they cannot solve problems, but that we risk them solving the problems in the wrong way: in searching for solutions to our ecological problems, for example, they could eradicate humanity.
Russell writes: "The fact that machines are uncertain about what people want is the key to creating usable machines." Why? Because a machine convinced that it has understood what humans want may act purposefully yet unfortunately: a system with no built-in possibility of misinterpretation can do something highly destructive on the way to its goal.
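Why uncertainty makes a machine deferential can be shown with a toy expected-value calculation. The numbers and the decision rule are my own illustration, not Russell's formalism: if an action might be very harmful, asking the human first (who approves good actions and vetoes bad ones) beats acting directly, unless the machine is nearly certain.

```python
def decide(p_good, value_good=10.0, value_bad=-100.0, ask_cost=-1.0):
    """Choose between acting immediately and asking the human first.

    p_good: machine's probability that the action is what the human wants.
    Asking costs a little, but the human then vetoes the bad outcome.
    """
    act = p_good * value_good + (1 - p_good) * value_bad
    ask = ask_cost + p_good * value_good  # bad action gets vetoed (value 0)
    return "act" if act > ask else "ask"

print(decide(p_good=0.999))  # near-certain about the human's wishes: act
print(decide(p_good=0.8))    # meaningfully uncertain: ask first
```

With these numbers the machine only acts on its own above roughly 99% confidence; keeping that uncertainty alive is exactly what stops the purposeful-but-destructive behavior described above.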
However, Russell believes there is reason for optimism. If this is to go well, he writes, we cannot take for granted that a machine's objectives will automatically align with human values; within AI this is known as "the alignment problem". There are large economic interests in creating machines with human-level intelligence. Once created, such machines could begin to evolve, eventually becoming supercomputers that no longer follow human instructions. That could end with the eradication of humanity.
There is greater utility in machines that do what humans want than in machines that do something entirely different. Still, Russell points to the possibility that a single computer engineer could destroy both himself and others by letting machines start acting on their own. Economic motives can corrupt the entire industry and produce competing, out-of-control machines.
If the development of AI for its own sake is prioritized over the problem of control, things can go wrong. We lose nothing by sharing knowledge with each other, because AI is not a zero-sum game, Russell points out. Racing to build the first computer able to outperform humans, on the other hand, creates a negative-sum game. He writes: "Perhaps the most important thing we can do is to develop AI systems that are, as far as possible, safe and useful to man. Only then will it make sense to regulate artificial intelligence."