Unknown: Killer Robots is a new Netflix documentary film, part of the ongoing Unknown series, that intrigues and scares in equal measure, much like the dual-use problem it presents. In a world steadily heading toward greater AI advancement, and a potential AI takeover in some fields, the idea of artificial intelligence being put to use in warfare is not surprising. However, the drive in the weapons industry at this point is to build AI machines that can make decisions on their own and can operate, and even kill, without any human command. Although it avoids delving into the detailed workings of AI in such warfare, Unknown: Killer Robots is a well-made documentary that successfully raises a number of pertinent questions.
What Are Autonomous Weapons?
The use of artificial intelligence in warfare is not entirely new, since robots and related technology have already been used by the US Army in Iraq and Afghanistan. Early uses of such technology were mostly restricted to surveillance operations, with robots replacing humans in entering potentially dangerous areas to scan for attackers or other threats. The technology was chiefly used to seek out IEDs and similar hazards, for the simple reason that if a bomb exploded, it would destroy the robot rather than kill a soldier. A destroyed surveillance robot could be instantly replaced by a new one, with none of the guilt or grief that accompanies the loss of human life. These robots were remotely controlled and maneuvered by humans, and the US military has also carried out strikes using remotely piloted drones.
Where autonomous weapons become a novelty, however, is in being completely free of any human control or even command. An autonomous version of a surveillance robot, for example, would simply know where to go, what to scan, and what to mark as a threat or a target, without having to be piloted by a human. Similarly, in the case of AI robots armed with weapons, the autonomous versions can decide for themselves whom to attack and what to destroy, based solely on the data they have been fed. As Killer Robots states at one point, the use of autonomous weapons has already begun in the Ukraine war, in the form of autonomous AI drone strikes.
What Are The Killer Robots Shown In The Documentary Film?
Among the new innovations in autonomous AI weapons it covers, Killer Robots begins with an autonomous quadcopter drone called Nova. Developed by Shield AI, a company co-founded and run by former US Navy SEAL Brandon Tseng, the Nova drone is designed to be an independent surveillance device. Drawing on Tseng's own experiences from his service days, the film explains that one of the most challenging and dangerous situations any soldier faces is entering buildings and compounds in enemy territory without knowing what is inside. Since there can be any number of threats, such as armed attackers or IEDs, drones have long been used for this kind of surveillance. With Nova, however, no human control is needed: the autonomous drone can scan the entire area, plot its own flight path, and check every individual room and corner, one after another. The data it collects is constantly relayed to the human soldiers, making the surveillance drone effectively a member of the squad itself. The future plan is to create a swarm of such drones that can all think and operate by themselves, making military reconnaissance even easier.
The next marvel in focus is MIT’s Improbable AI Lab, which has been working to give robotic dogs AI autonomy. While the robot dogs were first designed to walk and function by themselves, the particular project shown in the documentary involves teaching the AI to learn on its own as well. The main practical idea is to send the robot dog to remote locations, especially disaster-hit areas and similar crisis zones, to gather information or provide support. To do so, however, the robot dog must be able to traverse many different terrains, which can be rocky, slippery, or unusual in other ways. To that end, the Improbable AI Lab has been developing an autonomous robot dog that can register the terrain it is walking on and immediately adapt to it, without any human intervention. Killer Robots features a brief demonstration of this experiment, in which the robot dog is made to walk on a soft sponge floor, rough terrain littered with debris, and finally an extremely slippery surface coated with oil. The robot does indeed manage to understand each surface and adapt its gait after a series of attempts.
Next to be presented is the work of pharmacologist Sean Ekins at his company, Collaborations Pharmaceuticals, where he and his team try to discover new drugs that can cure more diseases. The usual practice in this field is to tweak a molecule in an existing drug, observe the effects, and potentially arrive at a new drug. Ekins’ work focuses on having AI perform this tweaking process, vastly increasing the number of candidate drugs that can be explored. The aim of the project is to turn newly discovered drugs and compounds into medicines for diseases that have long been believed incurable.
The film also brings up the use of AI in video games and certain competitive games, a practice that has been tried a number of times by now. AI systems have already defeated human champions at board games like chess and Go, as well as at complex video games like StarCraft. Taking inspiration from these successes, the Defense Advanced Research Projects Agency has been supporting a project named AlphaDogfight, in which an AI is taught the skill of dogfighting, that is, aerial combat between fighter aircraft. Killer Robots also presents a virtual face-off between this AI and an experienced US Air Force Lieutenant Colonel, in which the AI triumphs once again. Its victory is marked by sheer precision in the AI-controlled aircraft’s movements, which no human pilot could match, and by a distinct absence of fear, which often keeps human pilots from attempting certain maneuvers.
What Are The Dangers Involved In The Use Of Autonomous AI In Warfare?
Alongside these immense technological advancements, the possibility that they will be put to devastating use always remains, and even grows stronger. The most basic and crucial question here is whether we human beings should really give AI the autonomy to kill or harm other humans, or any life form. One of the experts in the documentary offers an apt example: once, during US Army operations in the Middle East, opposing insurgents sent a little girl toward their position to conduct surveillance. Although the girl was immediately found out, she was obviously not harmed, but instead sheltered and protected. Yet killing the girl would have been perfectly permissible under the laws of war, and an AI that abides only by those laws would not hesitate to do so. The moral judgments and distinctions that human minds can make clearly cannot be expected of AI. In fact, many supporters and proponents of AI prefer it over human decision-making precisely because of this ruthless efficiency.
The issue of racial profiling also looms as a glaring possibility linked to the use of such AI, even though Killer Robots does not directly mention it. In the case of the Nova drone and the planned future swarms, some standard has to be fed in regarding whom these AI drones should consider enemies. If such generalizations are used in operations that can take human lives, there is an obvious risk of killing innocent locals instead of actual combatants. The film also mentions a report of US airstrikes killing innocent civilians in Syria, and a combination of the two problems might be even worse. A UN report from a couple of years ago is presented as well, stating that an unmanned AI drone had possibly killed humans in Libya entirely by itself, with no human supervision at all. Such occurrences have, of course, increased during the Russia-Ukraine conflict, where AI drones are regularly being used for direct warfare. But the question of whether we should completely set aside ethics and human judgment hangs heavily over such usage.
Each of the technological marvels shown in the film also has its own negative possibilities. The swarm of drones that Brandon Tseng wants to develop for warfare is modeled on swarm and hive behavior in nature, which would make each drone deadly and efficient once such swarms come into existence. Whatever the AI decided to do would be carried out efficiently, even if some individual drones were lost along the way. Such a technology in the wrong hands would be devastating, to say the least. Similarly, the robot dog is already being fitted with guns and ammunition by private security firms and even national armies. In a matter of minutes, technology being developed for rescue and aid missions can be turned into small autonomous units of war.
The molecule-tweaking AI developed by Sean Ekins and his team was used by the developers themselves to completely reverse the search and generate the most toxic compounds possible. While the team’s intention was only to probe the extent of the AI’s capabilities, the results were absolutely shocking: the AI produced an extensive list of extremely toxic molecules that could be developed into bioweapons by anyone with access. Ekins also wrote an academic paper about this to make people aware of the extent of the capabilities of our own inventions. Although there was a risk of drawing people toward such illegal and evil ideas, Ekins decided that the awareness his paper would spread mattered more.
In the case of the AlphaDogfight project, the use of AI pilots in aerial warfare is also quite a scary scenario, for it might just lead to endless warfare. Any human pilot facing an AI counterpart would obviously not survive, but when two AI pilots face each other, the effects can be devastating. Those in favor of autonomous weapons emphasize that such weapons would essentially reduce the risk of human lives being lost in conflict. Those against them are quick to remind us that these innovations can, in turn, make warfare an unnecessarily drawn-out process, leading to even more destruction and loss of life. One commentator even compares the situation to the invention of the Gatling gun, which was originally meant to reduce the number of soldiers needed, and therefore the risk to human life, but instead only resulted in more violence.
Although the need for laws and ethical barriers around the use of AI in warfare keeps growing, the modern world is sadly nowhere close to establishing them. With major powers like the US, China, and Russia all still essentially competing against one another, no party will put down AI weapons unless everyone else does. Unknown: Killer Robots leaves us with the shocking information that there are developments underway in the US to implement AI that would connect the chains of command of its navy, army, and air force. Essentially, the US military is trying to develop AI that would send out commands to the different units, so that wars would literally be fought on the orders of AI minds.