The use of routines and algorithms "out in the open" to measure, quantify, and optimize all aspects of our lives has led to both greater public concern and increased regulatory attention. Among the proposed solutions we find some ideas that lack credibility, not least those promoted under the term "ethical Artificial Intelligence" (ethical AI).
It is understandable that public authorities want to curb the undesirable consequences of certain forms of artificial intelligence, especially those related to increased surveillance, discrimination against minorities, and wrongful administrative decisions. But cash-strapped governments are also eager to embrace any technology that can deliver efficiency gains in public services, the judiciary's enforcement of the law, and other tasks. The tension between these two priorities has shifted the debate away from law and policy and toward promoting voluntary improvement of industry practices and ethical standards.
Automation is accelerating
So far, this push, championed by public bodies as diverse as the European Commission and the US Department of Defense, revolves around the concept of "algorithmic fairness." The idea is that imperfect human judgment can be counteracted, and social disputes resolved, through automated decision-making systems, where the inputs (datasets) and processes (algorithms) are optimized to reflect certain vaguely defined values such as "fairness" or "trustworthiness." In other words, the emphasis is placed not on the political but on fine-tuning the machinery, for example by removing biases in existing datasets or creating new ones.
It is nothing new to mask deeper political conflicts behind a veil of technology…