White House executive order on AI seeks to address security risks

Long-awaited EO attempts to strike a balance between harnessing artificial intelligence’s vast capabilities and protecting “Americans from the potential risks of AI systems.”
President Joe Biden speaks about artificial intelligence at the White House after meeting with several AI and tech executives on July 21, 2023. (Photo by Andrew Caballero-Reynolds/AFP via Getty Images)

The White House announced a long-awaited executive order on Monday that attempts to mitigate the security risks of artificial intelligence while harnessing the potential benefits of the technology. 

Coming nearly a year after the release of ChatGPT — the viral chatbot that captured public attention and kicked off the current wave of AI frenzy — Monday’s executive order aims to walk a fine line between over-regulating a new and potentially groundbreaking technology and addressing its risks.

The order directs leading AI labs to notify the U.S. government of training runs that produce models with potential national security risks, instructs the National Institute of Standards and Technology (NIST) to develop frameworks for adversarially testing AI models, and establishes an initiative to harness AI to automatically find and fix software vulnerabilities, among other measures.

Addressing questions of privacy, fairness and existential risks associated with AI models, Monday’s order is a sweeping attempt to lay the groundwork for a regulatory regime at a time when policymakers around the world are scrambling to write rules for AI. A White House fact sheet describes the order as containing “the most sweeping actions ever taken to protect Americans from the potential risks of AI systems.”

Experts welcomed the order on Monday but cautioned that its impact will depend on how it is implemented and on whether its various initiatives are funded. Key provisions of the order, such as a call for addressing the privacy risks of AI models, will require Congress to act on federal privacy legislation, a legislative priority that remains stalled.

Sen. Mark Warner, D-Va., said in a statement that while he is “impressed by the breadth” of the order, “much of these just scratch the surface — particularly in areas like health care and competition policy.”

“While this is a good step forward, we need additional legislative measures, and I will continue to work diligently to ensure that we prioritize security, combat bias and harmful misuse, and responsibly roll out technologies,” Warner said.

More broadly, the executive order represents a shift in how Washington approaches technology regulation, informed in part by the failure to regulate social media platforms. Having failed to address the impact of those platforms on everything from elections to teen mental health, policymakers in Washington are keen not to be caught flat-footed again in writing rules for AI.

“This proactive approach is radically different from how the government has regulated new technologies in the past, and for good reason,” said Chris Wysopal, the CTO and co-founder of Veracode. “The same ‘wait and see’ strategy that the government took to regulate the internet and social media is not going to work here.”

This proactive approach, however, is one that some industry groups and free-market advocates caution could stifle innovation at an early stage of AI development.

“The administration is adopting an everything-and-the-kitchen-sink approach to AI policy that is, at once, extremely ambitious and potentially overzealous,” said Adam Thierer, a senior fellow at the free-market think tank R Street. “The order represents a potential sea change in the nation’s approach to digital technology markets as federal policymakers appear ready to shun the open innovation model that made American firms global leaders in almost all computing and digital technology sectors.”

Monday’s order takes a series of steps to address some of the most severe potential risks of AI, including threats to critical infrastructure and the technology’s potential use in creating novel biological weapons, designing nuclear weapons or developing malicious software.

To address growing concerns that AI could be used to supercharge disinformation aimed at influencing elections — especially next year’s presidential election — Monday’s order requires the Department of Commerce to develop guidance for “content authentication and watermarking” so that AI-generated content is clearly labeled.

The administration’s initiative to build cybersecurity tools that automatically find and fix software flaws builds on an ongoing competition at the Defense Advanced Research Projects Agency, and experts on Monday welcomed the focus on harnessing AI to deliver broad improvements in computer security.

The goal is to raise the barrier to entry for using these tools either to create malware or to assist in cyber operations. “It feels like the early days of antivirus,” said David Brumley, a cybersecurity professor at Carnegie Mellon University and the CEO of the cybersecurity firm ForAllSecure. “I know it’s malicious when I see it and I can prevent that same malicious thing from occurring, but it’s hard to proactively prevent someone from creating more malware.”

Brumley cautioned that the agencies that Monday’s order relies on to implement new safety initiatives may lack the capacity to carry them out. The order, for example, calls on NIST to develop standards for performing safety tests of AI systems and directs the Department of Homeland Security to apply those standards to the critical infrastructure sectors it oversees. 

NIST will likely have to engage with outside experts to develop these standards, as it currently lacks the right know-how. “They’re relying on very traditional government agencies like NIST that have no expertise in this,” Brumley said.

DHS’ Homeland Threat Assessment recently called out AI as one of the more pertinent threats to critical infrastructure, warning that China and other adversaries are likely to use AI to develop industry-specific malware.

“Malicious cyber actors have begun testing the capabilities of AI-developed malware and AI-assisted software development — technologies that have the potential to enable larger scale, faster, efficient, and more evasive cyber attacks — against targets, including pipelines, railways, and other U.S. critical infrastructure,” the DHS report reads.

The federal government is beginning to address these threats, as with the National Security Agency’s announcement last month of an AI Security Center that will oversee the development and use of AI. Monday’s order contains additional initiatives to address these narrower security concerns, including the creation of an AI Safety and Security Board housed within DHS. What authority the board will have, and how it will compare to existing review bodies such as the Cyber Safety Review Board, remains to be seen.

The order also calls on the National Security Council and the White House chief of staff to develop a national security memorandum that lays out how the military and intelligence community will use AI “safely, ethically, and effectively” in their missions and directs actions to counter adversaries’ use of AI.
