
Artificial intelligence or human whims?

Hello World! How to Be Human in the Age of the Machine
Author: Hannah Fry
Publisher: Doubleday (USA)
The mathematician Hannah Fry has explored the shortcomings and possibilities of artificial intelligence. A fruitful division of labor between machine and human is possible – and necessary, she believes.





Teknisk Ukeblad reports that the Trondheim-based company Sevendof has received EU funding to establish a network of drone stations across Norway – with autonomous drones that can perform tasks for various clients. This is a vivid picture of how artificial intelligence (AI) and automated systems are occupying our living space. What these systems actually do and are used for goes over our heads, in more ways than one. If we are dissatisfied with them, we must ask ourselves whether we could solve the tasks better ourselves – limited as we are by our senses, our given wits and our human intelligence.

From servant to tyrant

The book Hello World! How to Be Human in the Age of the Machine does not dwell on the dreams – or nightmares – of the AI of the future, but focuses on the present day: on our frustrating dealings with an impressive but imperfect artificial intelligence.

We are gradually getting used to programs and tools that have almost imperceptibly taken over much of our daily lives – search engines, apps and automated systems meant to make life simpler. But when the algorithms are also used in medicine, law, defense and surveillance, it can feel as though computer technology's servants are gradually seizing power and ending up as tyrants.

What do we prefer? The machine's objective fallibility or the subjective whims of humans?

Hello World! is built up as an entertaining string of anecdotes that illustrate the intricacies of automation. The examples range from the bizarre and laughable to the shocking and terrifying.

Fry reminds us of the dramatic incident in 1983 in which the Soviet officer Stanislav Petrov declined to launch a counter-attack when it appeared that five American nuclear missiles were on their way to the Soviet Union. Had the decision been left to an automated algorithm, there would have been no room for Petrov's well-considered refusal to obey orders, the author points out.

An opposite and comical example is the story of Robert Jones, who trusted his GPS so blindly that he drove onto a narrow stony track and only just avoided plunging off a cliff. We shake our heads when the GPS wants us to take an impractical route, and we feel confident that our own brain and judgment are superior after all. The great strength of Fry's book is that she does not stop at such "algorithmic antipathy" but nuances the problem: the crucial question is what to do in cases where we do not know better ourselves.

The machine judge is coming

Fry describes a computer system used in the United States to propose sentences in court – something experts, among them the legal historian Jørn Øyrehagen Sunde, believe will soon be established practice in Norway as well. Automated analysis helps judges assess who is most likely to relapse into crime, and thus becomes a tool for determining sentence length and parole, however uncanny the machine's predictions may seem.

It cannot be avoided that some of the results are intuitively perceived as unfair and arbitrary by judges, lawyers and laypeople alike. Still: when American judges' assessments were tested and cross-checked through anonymous surveys and blind tests, it turned out that they often judged the same case very differently. The criteria are unclear, the cases are many, and a judge's form on the day plays its part.

So what do we prefer? The machine's objective fallibility or the subjective whims of humans? An important point for Fry is that the use of algorithms in the courts is popular both because it saves labor and because judges and other parties involved are relieved of direct responsibility for errors in sentencing. The removal of direct responsibility is obviously a problem for the victims, a point that keeps resurfacing in the debate about driverless cars.

People naturally have an aversion to cars that are programmed to deprioritize pedestrians and thus, potentially, to mow them down. But who wants to buy an altruistically programmed car that drives off the road and sacrifices your life to save an unknown pedestrian? Such dilemmas may draw an unmistakable moral boundary for automation – where responsibility and guilt have a kind of intrinsic value that no one really wants to hand over to a machine.

Information in exchange for access

Most people have experienced reaching for the calculator more and more often, precisely because increased use makes us worse at mental arithmetic. Similarly, the use of computer programs weakens the critical sense that should enable us to correct, nuance and defy those same programs.

Who wants to buy an altruistically programmed car that drives off the road and sacrifices your life to save an unknown pedestrian?

But the fault does not necessarily lie with us. Fry calls for more transparent algorithms that can be picked apart at the seams and that give us options rather than definitive answers. This is relatively easy to achieve where algorithms prioritize and make decisions, but harder in search engines that classify, associate and filter large volumes of data. Machine learning and big-data algorithms process hyper-complex material and are often impenetrable in their complexity.

Fry highlights the many problematic cases where AI becomes a means of subtle manipulation. Knowing that Americans who drive a Ford often vote Republican, you can tailor a campaign aimed at Ford-driving Democrats, in which appeals to patriotism are meant to win them over. Far more troubling is when likes are used to classify personality types. If you are categorized as anxious and at the same time a single woman, you can quickly become the target of a gun-lobby campaign: the fear of being attacked in your own home makes you want a gun.

Conflict and cooperation

Big data analyses are exploited in advertising and election campaigns alike. With the social control system Sesame Credit, currently being introduced in China, everything you do is captured and fed back into a points system that determines whether you can be trusted – and thus quite literally opens and closes doors in society. In the West, too, it is impossible for the individual to get an overview of the market of data brokers, who sell huge collections of information that is in principle anonymous but easy to hack.

On social media like Facebook we trade information about ourselves for access to the service. This can be a reasonable bargain, Fry says – but if you let a health company scan your DNA in exchange for medical services, you often do not know what the trade will entail, especially since regulation typically lags behind technological development.

If the algorithms and we are to live well together, we must learn both from and about them, Fry points out: only then can we understand when cooperation is fruitful and when we should work separately. Conflict and frustration with computer programs should be exploited "couples-therapeutically": they offer an opportunity to improve the relationship – and to make the systems more open and cooperative. A complete break with artificial intelligence is becoming an ever less realistic alternative – which is in itself highly thought-provoking.






Anders Dunker
Philosopher. Regular literary critic for Ny Tid. Translator.

