Deputy National Security Advisor Anne Neuberger on addressing the security threats of AI

The deputy national security adviser for cyber and emerging technologies discusses how to mitigate AI's disinformation threat.
U.S. Deputy National Security Advisor for Cyber and Emerging Technology Anne Neuberger speaks during a White House daily press briefing at the James S. Brady Press Briefing Room of the White House on March 21, 2022 in Washington, DC. (Photo by Alex Wong/Getty Images)

Over the past year, artificial intelligence has catapulted from a small corner of the tech ecosystem to center stage. Advances in large language models have captured the public imagination, and policymakers are doing their best to keep up.

Those policymakers saw firsthand how social media changed the internet, with drastic consequences for everything from mental health to the administration of elections. And as artificial intelligence threatens to remake many of the technologies we rely on every day, lawmakers are trying not to repeat the mistake of the social media era, when regulation came far too late to mitigate the technology’s harms.

On a recent episode of Safe Mode, CyberScoop senior editor Elias Groll sat down with Anne Neuberger, the deputy national security adviser for cyber and emerging technologies in the Biden administration, to discuss how the White House is trying to address the security implications of AI. 

This conversation has been edited for length and clarity. 

We’re recording this episode a few days after a statement was released by the Center for AI Safety warning about the catastrophic risks posed by AI.

This is a statement that was signed by the who’s who of the AI world, and I want to kick off our conversation by reading it and having you react to it. Here’s that statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

What do you think of that? Do you agree? 

Artificial intelligence, and particularly large language models, presents great risk to our societies, our economies, and our national security. These technologies also hold dramatic promise: great promise for drug discovery, sustainable infrastructure design, music, and art. So we want to ensure that as these incredible systems introduce security concerns and risks, we also reap the benefits for our societies and our economies, and work carefully to manage those risks.

There’s this whole burgeoning community of AI experts really concerned with the idea of existential risk arising from AI — that it could pose a threat to the human species. Do you buy that risk?

The technology is very powerful and has elements that we don’t understand. So I think the key for us is to carefully parse through what the most significant risks are, what we can do to understand them, and what thoughtful approaches would address them.

And I think that is really what underpinned the president and vice president hosting a key group of AI companies at the White House a couple of weeks ago to talk about a set of voluntary controls that we believe would set in place some significant risk mitigations, give better visibility and transparency, and address some of the core elements of those risks.

But I want to be clear, there’s much we don’t know about these technologies.

We’ve all had that experience in the last couple of years of our parents asking us what’s happening in the world of technology. What do you say to your parents when they ask you about AI?

So the first questions are always: Well, what is this? And I try to explain it in a real-world way. I recently paid a visit to Cincinnati and Chicago to understand how they were trying to use AI to improve government services.

And the example that I found most interesting was, you know, they have a 911 center, and there are so many different kinds of calls that can come into a 911 center. You have people who are afraid. People have different accents.

And they talked about how they’re using artificial intelligence to help train people in a couple of ways. One is to glean, across the history of questions asked, what the most common questions are and what the most effective answers and ways to calm panicking callers have been. And the piece I found most interesting is customizing the training based on the kinds of questions the trainee is answering wrong, so that they can get more of those kinds of questions and really learn.

That’s a good example of bringing all of that data together, both the phone calls that came in and the answers that were given, to see what’s most effective and then customizing the training to that person.
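The customization Neuberger describes boils down to a simple adaptive-sampling loop: track where a trainee makes mistakes and draw the next practice question from those weak spots. Here is a toy Python sketch of that idea; the category names and counts are invented for illustration, not drawn from any real 911 training system.

```python
import random

# Invented example data: how often the trainee has erred per category.
error_counts = {"medical": 2, "fire": 0, "accent_heavy": 5, "panicked_caller": 3}
attempt_counts = {"medical": 10, "fire": 8, "accent_heavy": 9, "panicked_caller": 12}

def next_category() -> str:
    # Weight each category by its observed error rate, with a small
    # floor so mastered categories still appear occasionally.
    cats = list(error_counts)
    weights = [max(error_counts[c] / attempt_counts[c], 0.05) for c in cats]
    return random.choices(cats, weights=weights, k=1)[0]

def record_result(category: str, correct: bool) -> None:
    # Update the running tallies after each practice question.
    attempt_counts[category] += 1
    if not correct:
        error_counts[category] += 1

# Simulate a short session: the trainee mostly sees accent_heavy and
# panicked_caller questions, their weakest categories.
for _ in range(5):
    print(next_category())
```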

So it doesn’t sound like you’re an AI doomer. Is that right? 

It’s probably fair to say I’m not generally a doomer. It’s hard to work in cybersecurity, and be responsible for responding to national cyberattacks, and be a doomer.

Most technology is dual purpose. There are things that bring tremendous benefit and things that bring significant risks. There’s software code that’s written to break systems, malicious code, and there’s software systems that fundamentally underpin our entire lives. 

Our goal is to understand, as much as we can, the most significant risks and to shape policy in an agile way so we can respond to those risks quickly.

I want to be clear: I do believe artificial intelligence presents very new and major risks, many of which we don’t fully understand how they will play out. So I think we need to think through those carefully in close partnership with the companies building these technologies, who know them best and can put in place the red teaming and trust and safety protections that will be so key.

Let’s dig into some of those risks. What are the security concerns that are top of mind for you right now? What do you think needs to be addressed in the short term?

Enabling the design of malware, enabling more precise deepfakes, customizing spearphishing, and potentially enabling bioweapons. Those are categories of concern that could enable more effective, more rapid cyberattacks and could bring precision to disinformation.

We saw recently that picture of an explosion at the Pentagon, and clearly, in an environment where everybody’s getting information delivered to them very, very quickly, disinformation delivered with precision could cause concern, could make people afraid, could lead to manufactured crises. And we certainly are concerned about bioweapons. Those are very much the concerns that are top of mind for us.

The photo of the explosion at the Pentagon is a great example.

I’m curious what your read is on that incident. On the one hand, clearly a lot of people fell for it very quickly, right? But on the other hand, it was also debunked quite quickly. How do you interpret that sequence of events? What’s the lesson for you from that one?

I think that’s a fantastic question. The first piece is that we’re all working to teach ourselves that when you see something, you question it right away: Does that seem likely? We’ve all been learning, I think, or trying to teach ourselves, to kind of pause. Twitter had that campaign a number of years ago about not instantly retweeting: pausing and asking, who would want me to think this? Who would want me to know this? Let me make sure I validate it.

I think the second piece is that the Arlington Fire Department, a local entity, quickly responded. That was key, because we’re going to need to tighten those response times; a formally coordinated statement is never going to work.

And I think the third piece is provenance: the more that authentic pictures, video, and text can be marked, the more people will start to look for that mark to say, oh, this is real. Various efforts to add provenance have been underway across industry for some time. You saw Google’s recent announcement regarding watermarking on generative AI.

Can you talk a bit more about that watermarking piece? 

I’m curious what you think of the state of the technology and, also, what you think a watermarking solution for AI-generated content should look like. Is that something the private sector needs to lead on? Or is it something where government could potentially step in and write rules around watermarking?

For those who may not be familiar with the term, an equivalent concept is when we’re online and we look at our browser and see that green padlock. That green padlock tells you that a whole set of complex steps has verified that this is a secure connection.
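As a rough illustration of what that padlock stands for, here is a minimal Python sketch that performs the same kind of check by hand: it opens a TLS connection, lets the standard library verify the server’s certificate chain against trusted roots, and prints who the certificate was issued to and by. The host example.com is just a stand-in, and the script needs network access to run.

```python
import socket
import ssl

# The padlock means the browser verified the server's certificate
# chain against trusted root authorities and negotiated an encrypted
# TLS session. This repeats that check by hand for a stand-in host.
hostname = "example.com"
context = ssl.create_default_context()  # loads the trusted root CAs

with socket.create_connection((hostname, 443)) as sock:
    # Certificate verification happens during this handshake; an
    # invalid or mismatched certificate raises ssl.SSLError here.
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        print("TLS version:", tls.version())
        print("Issued to: ", dict(pair[0] for pair in cert["subject"]))
        print("Issued by: ", dict(pair[0] for pair in cert["issuer"]))
```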

Advertisement

I think when we look at watermarking, the technology varies for text, for video, for images. But the technology is certainly mature enough that putting a watermark on something and making it somewhat difficult to remove is possible. You’ve seen a number of artificial intelligence firms, specifically generative AI firms, say they will do that.

We think it’s absolutely for the private sector to lead, because the private sector AI firms are the ones who know whether content is generated or real.
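To make the text case concrete: one published approach, green-list token biasing (proposed by Kirchenbauer et al. in 2023), nudges a model toward tokens drawn from a pseudorandom list seeded by the preceding token; a detector that knows the seeding scheme then checks whether those tokens are statistically overrepresented. The toy Python sketch below simulates the idea with random token ids rather than a real language model, and it is an illustration of the technique, not any vendor’s actual scheme.

```python
import hashlib
import math
import random

VOCAB = list(range(1000))  # toy vocabulary of token ids
GREEN_FRACTION = 0.5       # fraction of the vocab marked "green" each step

def green_list(prev_token: int) -> set:
    # Derive the green list deterministically from the previous token,
    # so a detector can rebuild it without any shared secret state.
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

def generate(length: int, bias: float = 0.9) -> list:
    # Stand-in for a language model: sample uniformly, but with
    # probability `bias` restrict the choice to the green list.
    # (A real scheme adds a logit bonus to green tokens instead.)
    tokens = [random.choice(VOCAB)]
    for _ in range(length - 1):
        pool = list(green_list(tokens[-1])) if random.random() < bias else VOCAB
        tokens.append(random.choice(pool))
    return tokens

def z_score(tokens: list) -> float:
    # Unwatermarked text hits the green list with probability
    # GREEN_FRACTION per token; a large z-score flags a watermark.
    hits = sum(cur in green_list(prev) for prev, cur in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    return (hits - GREEN_FRACTION * n) / math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))

print("watermarked z-score:  ", round(z_score(generate(200)), 1))  # large, ~13
print("unwatermarked z-score:", round(z_score([random.choice(VOCAB) for _ in range(200)]), 1))  # near 0
```

The design point worth noticing is that detection needs only the seeding scheme, not the model itself, which is one reason watermark checking could eventually be exposed as a public verification service.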

There’s been a push from some in private industry, in particular OpenAI, to use an arms control framework to regulate AI. Sam Altman has called for the creation of an International Atomic Energy Agency, but for AI. Do you think arms control and nonproliferation frameworks have lessons for how we should be regulating AI?

It’s a good point, and the international aspect of how we approach this is certainly important here. There’s an active policy process underway looking across the different threats and considering the right tools to address them: what executive actions we can take and where the administration can work with Congress.

And as we work through that process though, we are taking a number of actions to address the threats today. The White House has been engaging regularly with key AI companies to ensure robust testing and security measures are in place to help address these threats and these harms today.

There’s a time sequence here. We can talk about longer-term efforts, but we want to make sure we’re focusing on the threats today. The vice president hosted a meeting with leading AI companies, and she focused on a few key things. One was the need for companies to be transparent regarding the data that’s going into their AI systems.

We hosted key firms for classified cybersecurity briefings and then deep-dive discussions on cybersecurity best practices and practices to address insider threats, and we offered assistance and ongoing discussion and collaboration in that space.

We’re very much focused on efforts that can be done today. 

I feel like we could talk about AI all day, but I do want to hit a couple of other issues.

One of the big success stories of the past year has been the effort to assist and partner with Ukraine in its fight against Russia, including in cyberspace. I’m wondering if you can talk a little bit about some of that partnership work that’s happening in the U.S. government with international partners and what you’ve learned from that effort.

Ukraine has been a really insightful example of the power of what we call the three Ps, the power of partnership, the power of preparation and, particularly, the power of the private sector.

Looking back to 2014 and 2015, when Ukraine learned the hard way about the capability and power of Russia’s offensive cyber program and faced disruption of its energy grid, Ukraine welcomed ongoing help from the international community in securing that grid.

They really used those six or seven years to focus on improving their cybersecurity. When the invasion started, the private sector surged in, whether it was moving data to the cloud or ensuring best-in-class cybersecurity defenses. From the international community’s perspective, we were talking daily with Ukrainian counterparts, pushing and sharing information and cybersecurity best practices.

The administration has put a real focus on international partnerships. So I’ll talk first for a moment about the International Counter Ransomware Initiative. We built the largest international cyber partnership to combat ransomware because it was hitting global pocketbooks: small and medium companies, critical infrastructure, hospitals, and governments around the world. At the root of ransomware is the illicit use of cryptocurrency, so we have a real focus on countering that. And there’s a focus on diplomacy, because many of these actors are harbored in, and based in, Russia.

One of the most interesting takeaways, which we didn’t expect at the beginning, was that by not making it about any adversary country, by making it about cybercrime, we avoided a problem: some countries won’t come together publicly to counter China, and some countries won’t come together publicly to counter Russia. That’s avoided because everybody’s struggling with crime.

You mentioned China, and we recently learned of a major Chinese operation targeting U.S. critical infrastructure in Guam. This was an operation called Volt Typhoon. Microsoft described it as an operation that could have laid the groundwork for cutting off communications between the United States and Asia in the event of a crisis. Quite a claim.

First off, was that an accurate assessment? And then second, can you shed any additional light on what this Chinese actor was up to?

China’s cyber capabilities, and its ability to conduct destructive or disruptive cyber operations, are a very serious threat to critical infrastructure in the United States and to critical infrastructure around the world.

The purpose of the cybersecurity advisory released jointly by the U.S. government and key allies around the same time as Microsoft’s product was to equip network defenders. It’s a really technical product: it details the techniques Chinese actors use to attempt to compromise critical infrastructure, the ways to detect that, and the ways to defend against it.

I make a distinction here, Elias, between espionage and a potential capability to disrupt critical services. The latter category is something we are incredibly focused on. The Biden administration has put a relentless focus on improving the security of critical infrastructure because essentially there is no separation between military and civilian critical infrastructure.

Let me press you on that just a little bit. Do you think it’s accurate to understand this particular Chinese operation as laying the groundwork for using a cyber capability to disrupt communications between the United States and Asia in the event of a crisis, say, a potential military confrontation over Taiwan?

It’s always hard to know intentions, Elias. So we focus on what we find, evicting it and defending against it. 

We’re in this incredible moment of tension between the United States and China. We see the administration trying to restart dialogue with China, but at times the relationship is one step forward and two steps back. And I’m curious how that’s playing out from where you sit.

Do you see any change in the way that Chinese hacking groups are operating in cyberspace and reacting to tensions between the United States and China? 

So we’ve seen really two lines of Chinese effort. One is a significant program focused on hacking to steal research and technology, both military and commercial, and really advance Chinese capabilities.

The second piece is what we talked about, which is Chinese state-sponsored cyber actors establishing compromised infrastructure and using that to compromise critical infrastructure for potentially destructive or disruptive operations in the future.

I want to close by asking a bit about your broader portfolio. You have an incredible number of issues that you’re across given your portfolio of emerging technology and cyber. Are there any ideas or issues in your space that you wish people were paying more attention to? 

One of the privileges of my job is to coordinate policy across the U.S. government. And I learn every day about elements of the U.S. government doing cool work that we weren’t quite aware of.

For example, you know, when you talk about Chinese capabilities or threats against critical infrastructure, that would’ve traditionally been thought of as a national security problem, right? So you’d have the Department of Defense, the State Department, and the Department of Justice in the room. But to really tackle it, you need the agencies who know those sectors best: the ones who know our nation’s pipelines, the ones who know, if a foreign adversary wanted to compromise a water system, how they would do that, what’s possible, and how you protect against it. So we now have these discussions that include not only the traditional national security community but also the EPAs, the DOTs, the FAAs, and the HHSs of the world.

And it’s been interesting to build the processes, so the intelligence community can be briefing EPA leadership on cyber threats perhaps for the first time. Being effective in addressing these threats requires that kind of teamwork and collaboration across both domestic and traditional national security agencies.

Thank you so much for joining us. This was a fascinating conversation. 

Thank you so much for having me, Elias. 

Written by CyberScoop Staff
