Elon Musk, the co-founders of DeepMind, more than 160 companies, and over 2,400 individuals have signed a pledge not to participate in the development of autonomous military systems powered by artificial intelligence. According to the signatories, the emergence of weapons capable of independently selecting targets and engaging them in battle would lead to ethical, moral, and other problems.
Experts argue that politicians must take steps to introduce strict rules governing the development and production of AI-controlled combat systems. A weapon capable of independently deciding to kill a person is no less dangerous than biological or chemical weapons, they believe. It is noted that in several countries (including the US and China), the development of smart autonomous weapons is proceeding without restrictions.
A few years ago, a study found at least 284 military systems worldwide capable of deciding on their own whether to open fire. Their capabilities are limited, but these are platforms that do not merely assist a person on the battlefield; they act, to some extent, independently. According to the US military, future wars will be fought by robots.