Artificial intelligence is in vogue like never before. The potential of the technology is discussed at Arendalsuka, as at every other key arena for socially engaged debate. Artificial intelligence is recognized as a crucial technology in the fight against the ongoing corona epidemic. And rightly so. Is it at all conceivable to mount effective countermeasures against the next pandemic without extensive use of artificial intelligence? The potential for contact tracing, global surveillance of outbreaks and identification of infected people is enormous. A great deal of suffering can of course be avoided if we use such technology well.
At the same time as artificial intelligence is being established as the central technological answer to challenges from poverty reduction to climate change, the question of regulating this technology seems to get in the way. The privacy debate over contact-tracing apps here at home is one example. The technology has such an enormous potential upside in most areas of society that the world is understandably impatient to put it to use. When the need is so great and so immediate, should one not first develop the technology, and only then seek to regulate it?
In most cases, such reasoning may make sense, but artificial intelligence presents a number of exceptions. The road to the surveillance society is short. We already know the challenges the technology poses for democracy. Sector by sector, it may also be that the technology is simply too powerful, and potentially too socially disruptive, for us to foresee the consequences of what is now on the drawing board. We could thus quickly make the world worse rather than better.
Fully autonomous weapon systems
If a technology is powerful enough, and thus deeply socially transformative, it can quickly feel impossible not to use it once it is available. The use of artificial intelligence in weapons, in so-called autonomous weapons, may be such a case.
Such weapons of the future, capable of attacking and destroying targets beyond meaningful human control, could fundamentally change warfare. Speed is what will in practice decide the wars of the future. And when decisions on the battlefield are left to machines, the time available to respond to an opponent's move drops sharply. The one who draws fastest wins. Anyone able to outmaneuver the other party at cybernetic speed will hardly meet resistance from weapon systems controlled by humans. There are thus no conceivable countermeasures to fully autonomous weapon systems other than other autonomous weapons. It is scenarios like these that have driven the effort to ban the development, production and use of autonomous weapons, before the technology even exists.
Some form of meaningful human control over autonomous weapon systems must be required.
Human Rights Watch is one of the key organizations in the Campaign to Stop Killer Robots. This campaign aims to ban autonomous weapons, and since 2014 it has been instrumental in driving a UN process on the issue. For the time being, the UN member states are far from reaching agreement on such a ban. But over the last six years they have at least set aside some time to discuss the challenges such weapons will create, within the framework of the Convention on Inhumane Weapons. Human Rights Watch recently published a report on the policies the world's states have so far declared on the regulation of autonomous weapons. The report is entitled "Stopping Killer Robots: Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control", and is available at www.hrw.org
The report identifies publicly stated policies in this field from 97 states since 2013, which provides a solid data basis for the analysis. It finds that most of these states believe some form of meaningful human control over autonomous weapon systems must be required. In 2018, Brazil, Chile and Austria also took the initiative to start negotiations on a ban on autonomous weapons in the UN. To date, thirty states have supported such a ban. At the same time, states such as the United States and Russia have clearly opposed the idea. For negotiations on a ban to be initiated within the framework of the UN Convention on Inhumane Weapons, consensus is required. There is thus no reason to believe this will happen in the foreseeable future.
The report also addresses Norwegian policy in this area. In essence, this policy rests on Norway recognizing the ethical and legal issues raised by developing autonomous weapons with artificial intelligence. It warns that such weapons "could blur lines of responsibility and accountability". At the same time, Norway has so far not supported a ban in the UN. Although the government has not clarified its position on the question of a ban, the debate on the ethical management of the Petroleum Fund seems to be pushing the issue forward in Norwegian politics as well.
In NOU 2020: 7, «Values and Responsibility: The Ethical Framework for the Government Pension Fund Global», from June 2020, a government-appointed committee advocates not investing in autonomous weapons. It does so by proposing a change to the product criteria, under which producers of certain types of weapons, tobacco and coal are today excluded from the fund. The weapons criterion, which defines the types of weapons the fund may not invest in, is proposed to be specified and expanded, with lethal autonomous weapons added to the list. The committee justifies this as follows: "With autonomous weapons, the decision to use force is not subject to direct human control. This makes responsibility unclear and is, in the committee's opinion, fundamentally problematic." The committee's proposal will come up for debate in the Storting next year. We will thus probably also see the parties in the Storting position themselves for or against an international ban.