The Cybersecurity and Infrastructure Security Agency unveiled a multipronged strategy on Tuesday to guide its efforts to guard critical infrastructure from the threat posed by artificial intelligence, prevent malicious use of the technology and expand AI expertise among the agency’s workforce.
Coming two weeks after President Joe Biden signed an expansive executive order on artificial intelligence that seeks to both address the risks posed by the technology and harness its benefits, CISA’s strategy document notes the “parallel cybersecurity challenges” presented by the new technology and by older software systems that were built without “secure by design” principles in mind.
“Artificial intelligence holds immense promise in enhancing our nation’s cybersecurity, but as the most powerful technology of our lifetimes, it also presents enormous risks,” CISA Director Jen Easterly said in a statement. “Our Roadmap for AI, focused at the nexus of AI, cyber defense, and critical infrastructure, sets forth an agency-wide plan to promote the beneficial uses of AI to enhance cybersecurity capabilities; ensure AI systems are protected from cyber-based threats; and deter the malicious use of AI capabilities to threaten the critical infrastructure Americans rely on every day.”
Protecting critical infrastructure from AI is an especially high priority as spelled out in CISA’s roadmap. The agency said it will use AI-enabled software to bolster cyber defenses and back its critical infrastructure mission, while collaborating with government and industry partners on the development, testing and evaluation of AI tools to combat AI threats. To that end, CISA will launch a website — JCDC.AI — to coordinate response to threats and vulnerabilities tied to AI systems.
CISA also plans to “assess and assist secure by design, AI-based software adoption” for federal civilian agencies, private sector firms and state, local, tribal and territorial governments. The agency will provide best practices and guidance for the development and implementation of AI, in addition to formalizing recommendations for the red-teaming of generative AI.
The agency intends to share its findings with interagency and international partners, as well as the public, and will play a key role in the Department of Homeland Security’s policy work for the overall U.S. strategy for AI and cybersecurity. In advancing global AI policies, CISA’s best practices will adhere to “responsible, ethical and safe use” of the technology.
Finally, the agency aims to grow its internal AI expertise, educating its workforce on AI software systems and approaches. CISA’s recruitment of workers with AI experience and training of staffers without a background in the technology will emphasize “the legal, ethical and policy aspects of AI-based software systems in addition to the technical aspects.”
DHS Secretary Alejandro Mayorkas said in a statement that the AI strategy is just “one important element” of the agency’s cybersecurity work. “CISA’s roadmap lays out the steps that the agency will take as part of our Department’s broader efforts to both leverage AI and mitigate its risks to our critical infrastructure and cyber defenses.”