The arms race and the drive for self-protection have given rise to autonomous weapons that are now entering the battlefield. These weapons, generally equipped with predictive decision-making abilities, are a product of artificial intelligence. The distinctive promise of autonomous warfare is to protect soldiers by keeping them off the battlefield. However, Justin Haner, a doctoral candidate at Northeastern University, argues that countries may soon opt to send these machines into conflicts rather than human soldiers.
Justin Haner is working with Denise Garcia, an associate professor of political science and international affairs who sits on the International Committee for Robot Arms Control. They recently published a paper examining the proliferation of these machines around the world. They found that these weapons are produced mainly in the world's most powerful countries, such as the United States, China, the European Union, and South Korea, where government spending on them is highest.
These countries are spending billions of dollars manufacturing autonomous weapons, which can survey their surroundings, identify targets, and independently attack those targets on the basis of algorithms. Justin Haner says, "No one is developing Terminator-style robots, but (artificial intelligence) is being added incrementally to existing systems so that drones can fly on their own and target things on their own, for example."
Powerful countries such as the US and Russia have already begun using these weapons in recent conflicts. A Russian state-run news agency, for example, reported that Russian forces had deployed remote-controlled drones to Syria. The US uses semi-autonomous systems to "engage targets" initially selected by humans. Despite this widespread use, there are no proper regulations or laws and little public awareness. The technology may seem advantageous to these powerful countries and their militaries, but its negative consequences are even greater: terrorists would also be able to choose their own targets if they obtained AI-enabled drones.
Artificial intelligence is also prone to bias. For example, facial recognition software trained mostly on white faces will recognize white faces reliably but misidentify others, which could lead to over-targeting of minority groups. On this account, the five powers leading the autonomous arms race should immediately set global norms and push to ban or restrict the use of autonomous weapons.
Shweta Tripathi