Humans, Not AI, Will Decide on Nuclear Strikes: US General
- US General Anthony J. Cotton emphasized the importance of human involvement in nuclear strike decisions.
- Gen. Cotton hopes other countries will also prioritize human decision-making in similar situations.
- This stance aims to prevent undue reliance on artificial intelligence in critical military choices.
At the U.S. Strategic Command Deterrence Symposium, Gen. Anthony J. Cotton, commander of United States Strategic Command, emphasized that artificial intelligence (AI) will not be responsible for making decisions about nuclear strikes. Cotton assured reporters that human oversight remains paramount in military operations involving AI and machine learning, stating that "humans will absolutely be in the loop." He acknowledged AI's potential to enhance the analysis of intelligence, surveillance, and reconnaissance data, but firmly rejected the idea of a computer making critical decisions like those depicted in the 1983 film "WarGames." Cotton stressed that the vast amounts of data AI provides should support human decision-making, not replace it.

He reiterated the U.S. position, shared by its allies, that autonomous systems should not be involved in nuclear launch decisions. That position contrasts with China's reluctance to accept similar proposals, raising concerns about the implications of AI for military strategy. The general expressed hope that adversaries such as China would eventually adopt a more cautious approach to AI in warfare. He warned against allowing systems like the fictional "WOPR" supercomputer to make decisions about nuclear or conventional weapons without human intervention, citing the complexities of conflict dynamics.

Cotton also addressed his relationship with President Joe Biden, affirming confidence in the president's ability to issue lawful orders, notwithstanding Biden's withdrawal from the 2024 race.