US unveils framework to address national security risks from AI
The United States unveiled Thursday a framework to address national security risks posed by artificial intelligence, a year after President Joe Biden issued an executive order on regulating the technology.
The National Security Memorandum (NSM) seeks to thread the needle between harnessing the technology to counter the military use of AI by adversaries such as China and building effective safeguards that uphold public trust, officials said.
“There are very clear national security applications of artificial intelligence, including in areas like cybersecurity and counterintelligence,” a senior Biden administration official told reporters.
“Countries like China recognize similar opportunities to modernize and revolutionize their own military and intelligence capabilities.
“It’s particularly imperative that we accelerate our national security communities’ adoption and use of cutting-edge AI capabilities to maintain our competitive edge.”
Last October, Biden ordered the National Security Council and the White House Chief of Staff to develop the memorandum.
The instruction came as he issued an executive order on regulating AI, aiming for the United States to “lead the way” in global efforts to manage the technology’s risks.
The order, hailed by the White House as a “landmark” move, directed federal agencies to set new safety standards for AI systems and required developers to share their safety test results and other critical information with the U.S. government.
U.S. officials expect rapidly evolving AI technology to unleash military and intelligence competition between global powers.
American security agencies are being directed to gain access to the “most powerful AI systems,” which will involve substantial procurement efforts, a second administration official said.
“We believe that we must out-compete our adversaries and mitigate the threats posed by adversary use of AI,” the official told reporters.
The NSM, he added, seeks to ensure the government is “accelerating adoption in a smart way, in a responsible way.”
Alongside the memorandum, the government is set to issue a framework document that provides guidance on “how agencies can and cannot use AI,” the official said.
In July, more than a dozen civil society groups such as the Center for Democracy & Technology sent an open letter to Biden administration officials, including National Security Advisor Jake Sullivan, calling for robust safeguards to be built into the NSM.
“Despite pledges of transparency, little is known about the AI being deployed by the country’s largest intelligence, homeland security, and law enforcement entities like the Department of Homeland Security, Federal Bureau of Investigation, National Security Agency, and Central Intelligence Agency,” the letter said.
“Its deployment in national security contexts also risks perpetuating racial, ethnic or religious prejudice, and entrenching violations of privacy, civil rights and civil liberties.”
Sullivan is set to highlight the NSM in an address at the National Defense University in Washington on Thursday, officials said.
Most of the memorandum is unclassified and will be released publicly, though it also contains a classified annex that primarily addresses adversary threats, they added.