
An ethical artificial intelligence

TECHNOLOGY / Can imperfect human judgment be counteracted by artificial intelligence, and social disputes resolved through automated decision-making systems?





The use of routines and algorithms "out in the wild" to measure, quantify and optimize every aspect of our lives has led to both greater public concern and increased regulatory attention. Among the proposed solutions we find some ideas that lack credibility, not least those promoted under the term "ethical artificial intelligence" (ethical AI).

It is understandable that public authorities want to curb the undesirable consequences of certain forms of artificial intelligence, especially those related to increased surveillance, discrimination against minorities and wrongful administrative decisions. But governments plagued by a lack of money are also eager to embrace any technology that can provide efficiency gains in public services, law enforcement and other tasks. The deadlock between these two priorities has shifted the debate away from law and policy and towards promoting voluntary improvements in industry practices and ethical standards.

Automation is accelerating

So far, this push, which has been championed by public bodies as diverse as the European Commission and the US Department of Defense, has revolved around the concept of "algorithmic fairness". The idea is that imperfect human judgment can be counteracted, and social disputes resolved, through automated decision-making systems in which the inputs (datasets) and processes (algorithms) are optimized to reflect certain vaguely defined values, such as "fairness" or "trustworthiness". In other words, the emphasis is not on the political but on fine-tuning the machinery, for example by removing biases in existing datasets or creating new ones.

There is nothing new about masking deeper political conflicts behind a veil of technologically mediated objectivity. But with automation constantly accelerating, this is not a practice we can afford to turn a blind eye to. The more politicians focus on promoting a voluntary AI ethics, the more likely they are to respond in ways that are destructive and undemocratic, and that distract us from what is really going on.

The distraction comes in, for example, when ethical AI presupposes that a geographically neutral practice can be developed and then replicated across a myriad of different settings. Perhaps the idea is to do this through a multilateral forum, where a seemingly diverse group of experts gathers to draw up global guidelines for developing and managing AI ethically. What, then, would they come up with?

"Ethical money laundering"

The Berkman Klein Center for Internet & Society at Harvard University recently published a review of some 50 sets of "AI principles" that have emerged from the broader debate so far. The authors point out that the conversation tends to revolve around eight themes: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and the promotion of human values. Conspicuously missing are two elephants in the room: structural power asymmetries and the climate catastrophe looming on the horizon. These omissions show that the experts brought in to frame ethical AI operate in a relatively closed circle and struggle to give advice that captures the spirit of the times.


The pursuit of ethical AI can also erode the public sector's administrative capacity. The European Commission, for example, after trying to take part in the debate by appointing an expert group on AI, has been met with massive criticism, with claims that it has undermined its own credibility and its ability to take the lead in this field. One of the group's own members, Thomas Metzinger of the University of Mainz, has even dismissed the final version of the proposed guidelines as an example of "ethics washing". Likewise, the rights group AlgorithmWatch has questioned whether "trustworthy AI" should be a guiding goal at all, given that it is unclear who – or what – should have the authority to define such matters.

Here we see how the well-meaning work of steering the development of artificial intelligence can fail. As long as the conversation remains vague, no one can object to the idea of trustworthiness in principle. Decision-makers risk overriding decades of achievements in science and technology studies, responsible research and innovation, and human-data interaction. After all, no definitive list of guidelines will be useful at every political level, for the simple reason that ethical AI is, in principle, a practically unworkable political goal.

Business

Finally, by paying too much attention to AI ethics, we risk shifting the discussion away from more democratic forms of control. A narrative of the constant expansion of artificial intelligence has taken hold as business interests have captured many higher education institutions and their partners. The conversation is shifting from open public arenas to closed strongholds of expertise, privilege and global technocracy.

The private sector kills two birds with one stone: while the status quo of today's lax regulation is preserved, technology companies can present themselves as socially responsible. For most people, however, the consequences are less appealing. The false impression that an ethical consensus on artificial intelligence already exists forecloses political debate, the very core of democracy. Ultimately, social tensions are heightened while trust in the authorities erodes. Once a framework of ethically preferred practices has been established, automated systems will be given a semblance of objective knowledge, despite being shielded from any meaningful critical scrutiny. Their decisions will function as strict laws, with no room for context, nuance, appeal or reassessment.

Accountability

As our new decade begins, it is clear that AI policy must go further than platitudes about voluntary ethical frameworks. As Frank Pasquale of the University of Maryland explains, algorithmic accountability should come in two waves: the first focusing on improving existing systems, the second raising fundamental questions about power and governance.

Going forward, politicians should let go of the narrative that makes artificial intelligence into something exceptional. We must hope that the closed circle of newly appointed AI ethicists is broken up to make room for those most directly affected by the disruptions of AI and automation: the end users and citizens whom governments have a duty to protect and serve.

 

        Translated by Anders Dunker

Maciej Kuziemski
From MODERN TIMES' partner Project Syndicate
