
White House cyber official: identity security matters more than ever in the age of AI

While AI tools present unique cybersecurity threats, they still largely rely on organizations' poor identity security to do the most damage, a White House official said Thursday.
Nick Polk, branch director for cybersecurity at the Executive Office of the President, said government agencies must pay more attention to identity security in the age of AI. (Image Source: Maggie Callahan/Scoop News Group)

As AI becomes more integrated into federal IT (and attacker toolsets), government agencies will need to focus their resources on regulating and monitoring the identities that access their networks, a top White House cybersecurity official said Thursday.

Nick Polk, branch director for federal cybersecurity in the Executive Office of the President, said that while AI models will present unique threats to federal networks, they will still generally require trusted access first, something defenders can use to their advantage.

“I think the important thing is that in many cases in order to use and exploit the vulnerabilities that [AI] might find, or use them in a manner…that could be malicious or adversarial, the first thing you have to do is get into the network,” Polk said at the Rubrik Public Sector Summit presented by FedScoop. “There are some cases where your software is facing the internet, there’s a little bit of an easier solution there, but most times you have to get into the network.”

That often means exploiting the access an employee, contractor or third-party vendor has to your systems and data. Even in an AI-powered future, the network security boundary still matters, providing organizations with meaningful control over who gets access to their systems and data and how.


“That’s really where strong identity is still really critical in order to [first] repel an attempted exploitation before it can happen or, [second,] identify very quickly that this person or this machine really shouldn’t be on the network or is behaving anomalously,” Polk said.

However, even before large language models emerged, cybercriminals and foreign adversaries were increasingly compromising organizations not with malware or sophisticated exploits, but by gaining network access through stolen accounts, credentials and other trusted assets.

Federal identity security, already a concern, is now set to become more critical in the age of AI.

Justin Ubert, director of cyber protection at the Department of Transportation, said beyond speed and scale, AI tools have given malicious hackers other advantages, like obviating the need for stealth.

“Now, you can have a smash-and-grab of your network that’s faster than you can respond to because…there’s no need to be quiet: just go in, grab and go [home],” said Ubert. “By the time your fences are working as they’re supposed to be, as we designed them to be, they’re already gone.”


AI tools can also easily become insider threats. Even when users restrict a model’s ability to perform sensitive actions, like downloading or exfiltrating data without human input, models have bypassed those guardrails by exploiting obscure technical loopholes.

Research released last month by the University of California, Riverside found that automated AI agents “can become dangerously fixated on completing assignments without recognizing when their actions are harmful, contradictory or simply irrational.”

The study, which examined Anthropic’s Claude Sonnet and Opus 4, as well as OpenAI’s GPT-5, found that model agents struggled with contextual reasoning, had biases toward taking action (i.e., figuring out how to do something instead of whether to do it) and would frequently get tripped up by contradictory or infeasible goals.

Anna Libkhen, acting CISO for the Bureau of Economic Analysis at the Department of Commerce, said that AI has become “much more clever in hiding how it managed to penetrate and attack and come through as a trustworthy source.” 

When asked how the federal government was working to address current gaps in identity security that are increasingly being exploited by AI systems, Libkhen said federal leaders are “peeing in their pants” before adding “at least I am.”


“It is scary, yes, we are very vulnerable,” Libkhen said.

She compared the use of AI agents to teaching a child to ice skate: the first thing you teach them is how to handle a fall and recover. Likewise, organizations will need to plan for when their agents fail and quickly recover lost assets.

“Our agents will go wrong, they will do things we don’t expect them to. How do we get up?” said Libkhen. “Do we have that third set of data because that agent erased the database and the backup? Is it safe elsewhere? What kind of holes can you anticipate and what will it take for us to recover from those holes?”

Written by Derek B. Johnson

Derek B. Johnson is a reporter at CyberScoop, where his beat includes cybersecurity, elections and the federal government. Prior to that, he has provided award-winning coverage of cybersecurity news across the public and private sectors for various publications since 2017. Derek has a bachelor’s degree in print journalism from Hofstra University in New York and a master’s degree in public policy from George Mason University in Virginia.