A dystopia in which robotic killers slaughter innocent people sounds awful, but let’s face it: it’s all science fiction
Killer drones fall into the hands of terrorists and massacre innocent people. Robotic weapons of mass destruction sow havoc and fear. The short film, made by advocates of a ban on autonomous weapons, is designed to convince you that this dystopia is already close at hand and that you must act today. The film Slaughterbots was released in November 2017, timed to coincide with the UN conference on autonomous weapons. The UN meeting ended without any concrete result, but the video keeps gaining popularity: it has already been viewed more than 2 million times and has been covered in dozens of articles. As propaganda, it works perfectly. As an argument for banning autonomous weapons, it fails completely.
Of course, a world in which terrorists could unleash a swarm of killer drones on innocent people would be terrible, but is the future the film depicts realistic? Slick visuals help hide the gaps in its logic. The film immerses the viewer in a dystopian nightmare, but let’s face it: it’s all science fiction.
The film’s central premise is that militaries will someday build autonomous microdrones carrying shaped charges, capable of flying up to a person’s head and detonating their explosives, killing the target. In the film, these killer bots quickly fall into terrorists’ hands, leading to mass casualties around the world.
The core concept has a basis in reality. In our world, the Islamic State [a terrorist organization banned in Russia] has used off-the-shelf commercial quadcopters, fitted with small explosive charges, to attack Iraqi troops, killing and wounding several dozen soldiers. Terrorists’ drones today are mostly remote-controlled, but hobbyist drones are becoming increasingly autonomous. The latest models can already fly to a fixed waypoint on their own while avoiding obstacles, and can track and follow moving objects. A small drone equipped with facial recognition could, in principle, be used to autonomously find and kill specific people, as Slaughterbots shows. After just a few minutes of searching the internet, I found resources for downloading and training a free neural network that recognizes faces. And although no one has yet combined these technologies as the film depicts, all the components are already real.
But let me say it plainly: we cannot prevent this technology from falling into the hands of would-be terrorists. That is sad, but it must be understood. Just as terrorists can and do use cars to ram crowds of people, the technology needed to turn hobbyist drones into crude autonomous weapons is already too widespread to be stopped. This is a real problem, and the best response to it is to focus on defensive measures for countering drones and on catching terrorists through surveillance.
The film exploits this problem but inflates it beyond all measure, claiming that terrorists could use drones as weapons of mass destruction and kill people by the thousands. Fortunately, this nightmare scenario is about as likely as HAL 9000 refusing to open the pod bay doors for you. The technologies shown in the video are realistic, but everything else is complete nonsense. The video rests on the following assumptions:
- Governments will mass-produce microdrones for use as weapons of mass destruction.
- There is no effective defense against lethal microdrones.
- Governments are incapable of keeping military-grade weapons out of terrorists’ hands.
- Terrorists are capable of launching large-scale coordinated attacks.
These assumptions range from questionable to fantastical.
Of course, the video itself is fiction, and defense planners often use fictional scenarios to help policymakers think through the consequences of possible events. I am a defense analyst at a think tank, and in my previous job at the Pentagon I worked on strategic planning, where I used fictional scenarios to illustrate the choices of military technologies the US military should invest in. But for such scenarios to be useful, they must at least be plausible: something that could actually happen. The scenario in Slaughterbots fails to account for the political and strategic realities of how governments use military technology.
First, there is no evidence that governments plan to mass-produce small drones to kill large numbers of civilians. In my forthcoming book, Army of None: Autonomous Weapons and the Future of War, I survey the next-generation weapons being developed in defense laboratories around the world. Russia, China, and the United States are all racing toward autonomy and artificial intelligence. But the weapons they are building are aimed overwhelmingly at fighting other militaries: they are counterforce weapons, directed at military targets, not countervalue weapons, directed at a society’s other assets, including its civilian population. Counterforce autonomous weapons certainly raise problems of their own, but they are not being developed for the mass killing of civilians, and they cannot readily be repurposed for such use.
Second, the video claims its drones can overcome any “countermeasure.” Talking heads on television cry that “we cannot defend ourselves.” That is not just fiction; it is farce. Every military technology has a countermeasure, and countermeasures against small drones are not even hypothetical. The US government is actively developing ways to shoot down, jam, fry, hack, snare, and otherwise defeat small drones. The microdrones in the video could be thwarted by something as simple as chicken wire. The video shows heavier drones blasting holes in walls for others to fly through, but simple layered defenses would counter even that. Military analysts weigh the cost ratio of offense to defense, and in this case the advantage clearly lies with static defenses.
In a world where terrorists occasionally attack with improvised drones, people are unlikely to tolerate the inconvenience of building hardened defenses, just as people do not wear bulletproof vests against the unlikely chance of a stray bullet. But if a hostile country were building hundreds of thousands of drones capable of wiping a city off the face of the earth, you can be sure the nets would go up. The video takes a real problem, terrorist attacks with drones, and scales it up without accounting for how other parties would react. If lethal microdrones were being produced on an industrial scale, defenses and countermeasures against them would become a national priority, and in this case the countermeasures would be simple. Any weapon that can be defeated by chicken wire is no weapon of mass destruction.
Third, the video implies that militaries are incapable of keeping military-grade weapons out of terrorists’ hands. But today terrorists do not get hold of our hand grenades, anti-tank weapons, or machine guns [though the latest Tom Cruise biopic would beg to differ / translator’s note]. Drone-wielding terrorists worry everyone precisely because they attach improvised explosives to off-the-shelf technology. That is a real problem, but again, the video inflates the threat to unrealistic proportions. Even if militaries did build lethal microdrones, terrorists would find it no easier to acquire them in quantity than any other military technology. Weapons do gradually leak to combatants on the wrong side of a conflict, but the fact that Syria is awash in anti-tank guided missiles does not mean they are easy to find in New York. Terrorists use airplanes and trucks precisely because military-grade weapons are not easy to smuggle into Western countries.
Fourth, the video assumes that terrorists can carry out attacks with improbably precise coordination. In one scene, two people release a swarm of 50 drones from the doors of a van. That scene by itself is quite plausible; one problem with autonomous weapons is that they allow a small group of people to mount a far larger attack than conventional weapons would permit. A van with 50 drones is a reasonable possibility. But the film pushes the idea to absurdity, claiming that about 8,300 people were killed in simultaneous attacks. If the van scene depicts a typical attack, that death toll would require terrorists to mount roughly 160 attacks around the world. Terrorists do carry out coordinated attacks, but their number usually does not exceed ten. The video assumes not only a superweapon, but supervillains to wield it.
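The back-of-the-envelope arithmetic behind that estimate can be sketched as follows (the figures come from the film’s own scenario; the one-victim-per-drone success rate is a generous assumption in the film’s favor):

```python
import math

# Figures implied by the film's scenario.
deaths_claimed = 8300   # death toll the film attributes to simultaneous attacks
drones_per_van = 50     # one van releases a swarm of 50 drones

# Generous upper bound: assume every single drone finds and kills a victim.
victims_per_attack = drones_per_van

attacks_needed = math.ceil(deaths_claimed / victims_per_attack)
print(attacks_needed)   # 166 simultaneous van attacks worldwide
```

Even under that best-case assumption for the attackers, the film’s death toll requires on the order of 160 perfectly coordinated attacks, against the roughly ten that real coordinated terrorist plots have managed.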
The film leans on hype and fear to paper over its critical assumptions, and in doing so it hampers rational discussion of the risks posed by terrorist access to autonomous weapons. The video makes clear that we should be afraid. But afraid of what? Weapons that choose their own targets (on which, incidentally, the video is not even clear)? Weapons with no countermeasures? The possibility of terrorists getting hold of them? The ability of autonomous weapons to scale up attacks? If you want to stoke fear of robot killers, this video will do the job. But try to analyze the problem carefully, and it does not withstand even cursory scrutiny. The video offers no arguments; it trades in cheap sensation and fearmongering.
Naturally, the whole aim of the video is to use fear to spur viewers into action.
The video ends with Stuart Russell, a professor at the University of California, Berkeley, warning of the dangers of autonomous weapons and urging the viewer to act so that this nightmare does not become reality. I deeply respect Stuart Russell, both as an AI researcher and as a contributor to the debate over autonomous weapons. I have invited Russell to events at the Center for a New American Security, where I direct a research program on AI and global security. I have no doubt that Russell’s views are sincere. But in its effort to persuade, the video makes unsupported claims.
Worse yet, the proposed solution, a treaty banning autonomous weapons, would not solve the real problems humanity faces as weapons gain autonomy. A ban would not stop terrorists from building homemade robotic weapons. Nor would banning the kind of weapon shown in the video do anything about the risks posed by military autonomous weapons. In fact, it is not even clear that the weapon depicted in the film would fall under such a ban, since it operates with great selectivity.
By concentrating on extreme and improbable scenarios, the film actually hinders progress on the real problems surrounding autonomous weapons. The states leading the development of robotic weapons are likely to brush aside fears grounded in this film. The film is grist for the mill of those who claim that fear of autonomous weapons is irrational and overhyped.
Autonomous weapons raise important questions about compliance with the laws of war, about risk and controllability, and about the moral role of humans in combat. These are serious problems worthy of serious discussion. When Russell and others engage in substantive debate on these issues, I welcome it. But this film does not fall into that category. The video has successfully captured media attention, but its sensationalism undermines the serious intellectual discussion that autonomous weapons demand.
Paul Scharre is a senior fellow and director of the Technology and National Security Program at the Center for a New American Security (CNAS). From 2009 to 2012 he led a US Department of Defense working group that developed guidelines on autonomy in weapons.