Artificial Intelligence and Nuclear Command, Control, & Communications: The Risks of Integration
Possible interventions include funding new experimental wargaming efforts, funding research at key think tanks, and increasing international AI governance efforts.
Fortunately, both the U.S. and U.K. have made formal declarations that humans will always retain political control and remain in the decision-making loop when nuclear weapons are concerned.[10]
New and more powerful ML systems could be used to improve the speed and quality of the assessments completed by NC3 systems.
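To make this concrete, here is a minimal illustrative sketch (in Python) of what ML-assisted assessment could look like in a decision-support role. Everything in it, the `SensorTrack` fields, the toy `threat_score` model, and the review threshold, is a hypothetical stand-in rather than a description of any real NC3 component; the point is that the model only ranks and flags tracks for human analysts.

```python
# Illustrative sketch only, NOT any real NC3 system: an ML classifier
# pre-screens early-warning sensor tracks so that human analysts see a
# ranked, annotated queue rather than raw data. All names, features, and
# thresholds are hypothetical.
from dataclasses import dataclass
import math

@dataclass
class SensorTrack:
    track_id: str
    ir_intensity: float       # normalized infrared signature, 0..1
    trajectory_score: float   # fit to a ballistic profile, 0..1
    radar_corroborated: bool  # independent radar confirmation

def threat_score(track: SensorTrack) -> float:
    """Toy logistic model standing in for a trained classifier."""
    z = (4.0 * track.ir_intensity
         + 3.0 * track.trajectory_score
         + (2.0 if track.radar_corroborated else -1.0)
         - 4.5)
    return 1.0 / (1.0 + math.exp(-z))

def triage(tracks: list[SensorTrack],
           review_threshold: float = 0.5) -> list[tuple[SensorTrack, float]]:
    """Rank tracks and flag those above the threshold for HUMAN review.
    The model never triggers action by itself; it only prioritizes
    analyst attention."""
    scored = [(t, threat_score(t)) for t in tracks]
    flagged = [(t, s) for t, s in scored if s >= review_threshold]
    return sorted(flagged, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    queue = triage([
        SensorTrack("T-001", ir_intensity=0.9, trajectory_score=0.8,
                    radar_corroborated=False),
        SensorTrack("T-002", ir_intensity=0.2, trajectory_score=0.1,
                    radar_corroborated=False),
    ])
    for track, score in queue:
        print(f"{track.track_id}: score={score:.2f} -> route to human analyst")
```

Even in this toy framing, the system's value hinges on how the threshold is set and how analysts treat its output, which is exactly where the risks discussed below enter.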
The other core reason for focusing on early warning and decision-support systems within NC3 is their susceptibility to error and their influence on the possibility of inadvertent use.
The machine had made the wrong call, and Petrov's skepticism and critical thinking likely contributed to preventing Soviet retaliation.
In 1980, NORAD “changed its rules and standards regarding the evidence needed to support a launch on warning.”
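To illustrate what such an evidence standard can mean in practice, here is a toy sketch of a rule requiring corroboration across independent sensor types before an attack assessment moves forward. The sensor names and the two-source requirement are hypothetical simplifications for exposition, not actual NORAD procedure.

```python
# Illustrative sketch of a "two independent phenomenologies" evidence
# standard, loosely inspired by the post-1980 tightening described above.
# Everything here is a hypothetical simplification, not real procedure.
from enum import Enum

class SensorType(Enum):
    IR_SATELLITE = "infrared satellite"
    GROUND_RADAR = "ground radar"

def meets_evidence_standard(detections: dict[SensorType, bool]) -> bool:
    """Require confirmation from at least two INDEPENDENT sensor types
    before an attack assessment is even forwarded for human decision.
    A single-source alert, like those behind historical false alarms,
    fails this check."""
    confirmations = sum(1 for confirmed in detections.values() if confirmed)
    return confirmations >= 2

# A lone satellite return, as in the Petrov incident, would not pass:
print(meets_evidence_standard({SensorType.IR_SATELLITE: True,
                               SensorType.GROUND_RADAR: False}))  # False
```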
In one sense these stories demonstrate that even in the face of complex and flawed systems, organizational safety measures can prevent inadvertent use. However, they also speak to the frightening ease with which we arrive at the potential brink of nuclear use when even a small mistake is made.
Given their history of false positives, anything that impacts early warning and decision-support systems requires the utmost scrutiny, even if the change stands to potentially improve the safety of such systems.
Furthermore, current NC3 systems are aging, and the last major update occurred during the 1980s.[2]
Not only will modernized NC3 incorporate ML, but there is also a real risk of rushed integration with a higher risk tolerance than is normally accepted.
Humans will always be in the loop, but isn't this just a beefed-up version of the original problem?