How Amazon Web Services uses AI to be a security ‘force multiplier’

When Amazon Web Services deploys thousands of new digital sensors around the globe, it often runs into a ruthless truth of the internet: within minutes, the sensors are poked, prodded, and attacked. Using large language models, however, the company is turning those immediate attacks into actionable security intelligence for its vast array of cloud services.
According to Stephen Schmidt, Amazon’s chief security officer, capabilities like this simply weren’t possible with earlier tools. Speaking at the AWS Summit on Tuesday, Schmidt pointed to the sensor network as an example of how AI is fundamentally transforming AWS’s approach to security, particularly in application security reviews and incident response.
“What we can do with AI is allow engineers to ask questions about what’s going on with that data much more easily than they could otherwise, and they can say things like ‘Find me all of the examples of situations where someone tried to break into this particular version of this particular database, and came from IP addresses that are associated with the VPNs that are normally used by this particular threat actor,’” he told CyberScoop. “You can’t do that otherwise, and the tooling allows them to really dig into things much more deeply.”
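As a rough illustration of that workflow, here is a minimal sketch in Python, assuming a hypothetical sensor-log schema and hardcoding the filter an LLM might produce; none of the names or addresses below reflect AWS’s actual tooling or data:

```python
# Sketch of LLM-assisted threat hunting over sensor logs. The schema, sample
# data, and IP ranges are all hypothetical; in practice the structured filter
# below would be produced by a large language model from the analyst's
# plain-English question rather than hardcoded.
from dataclasses import dataclass

@dataclass
class SensorEvent:
    src_ip: str
    target: str   # e.g. a specific database version the sensor emulates
    outcome: str  # "auth_failure", "blocked", ...

question = ("Find everyone who tried to break into this version of this "
            "database from IPs tied to this threat actor's VPNs")

# The filter an LLM might emit for that question (hardcoded so the sketch runs).
llm_filter = {
    "target": "postgres-12.4",
    "src_ips": {"203.0.113.7", "198.51.100.22"},  # actor's known VPN exits
}

events = [
    SensorEvent("203.0.113.7", "postgres-12.4", "auth_failure"),
    SensorEvent("192.0.2.5", "postgres-12.4", "auth_failure"),
    SensorEvent("198.51.100.22", "mysql-8.0", "blocked"),
]

matches = [e for e in events
           if e.target == llm_filter["target"] and e.src_ip in llm_filter["src_ips"]]
for m in matches:
    print(f"{m.src_ip} -> {m.target}: {m.outcome}")
```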
The technology allows for more consistent and efficient security assessments, especially for junior engineers who may lack extensive experience.
By training large language models on previous security reviews, organizations can effectively transfer knowledge from senior security professionals to newer team members. This approach raises the overall security standard by embedding institutional expertise directly into AI-powered review processes.
“A junior engineer may not have all the knowledge, the background, the experience of the more senior engineers,” he said. “By training our large language models internally on the prior security reviews, it allows us to apply the knowledge and learning that our more senior staff have embodied in the documents that we all own, trained on, to our more junior staff. So it really raises the bar on the absolute level of security.”
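One hedged way to picture that knowledge transfer is a retrieval-style setup, sketched below with invented review snippets and simple keyword matching. AWS’s internal training pipeline is not public, and a production system would rely on fine-tuning or embeddings rather than word overlap:

```python
# Illustrative retrieval over prior security reviews (hypothetical data and
# scoring). The idea: a junior engineer's question is matched against past
# findings so the model's answer is grounded in senior reviewers' decisions.
PRIOR_REVIEWS = [
    "Service X review: tokens must be scoped per-tenant; shared signing keys rejected.",
    "Service Y review: S3 bucket policies require explicit deny for public ACLs.",
    "Service Z review: internal APIs must enforce mTLS between microservices.",
]

def retrieve(question: str, corpus: list[str], top_k: int = 1) -> list[str]:
    """Rank prior reviews by word overlap with the engineer's question."""
    q_words = set(question.lower().split())
    scored = sorted(corpus, key=lambda doc: -len(q_words & set(doc.lower().split())))
    return scored[:top_k]

question = "Should my new service share one signing key across tenants?"
for doc in retrieve(question, PRIOR_REVIEWS):
    # The retrieved precedent would be supplied to the model as context,
    # so its guidance reflects what senior reviewers decided before.
    print("Relevant precedent:", doc)
```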
The cybersecurity industry faces persistent personnel shortages, a problem AI can help mitigate. Schmidt noted that AI tools can handle significant “heavy lifting” previously performed manually, allowing security staff to focus on more complex tasks.
Critically, Schmidt stressed the non-deterministic nature of AI systems: identical queries can produce different responses. He pointed to that variability as a reason why humans still need to be involved in decisions based on a model’s output.
“We look at it this way, if you’re just asking a question and getting an answer, that’s one set of scrutiny that you have to give a system,” he said. “But if you’re going to take an action to block something, to prevent something from occurring, you’ve got to be really sure it’s correct. So there has to be that skilled person at the end of the AI-use process, saying, ‘Yes, this is the right thing to do at this point in time with this context.’”
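A minimal sketch of that gate, with entirely hypothetical names and logic: informational answers flow straight to the analyst, while any blocking action must be confirmed by a skilled reviewer before it executes.

```python
# Hypothetical human-in-the-loop gate: AI output can answer questions freely,
# but any enforcement action (e.g. blocking an IP) requires human approval.
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str    # "report" or "block"
    target: str
    rationale: str

def handle(rec: Recommendation, approve) -> str:
    if rec.action == "report":
        # Informational output: lower scrutiny, goes straight to the analyst.
        return f"REPORT: {rec.rationale}"
    # Enforcement: a person must confirm the model is right in this context.
    if approve(rec):
        return f"BLOCKED {rec.target}: {rec.rationale}"
    return f"ESCALATED {rec.target}: human declined automatic action"

# A reviewer callback stands in for the "skilled person at the end of the
# AI-use process" that Schmidt describes.
reviewer = lambda rec: input(f"Block {rec.target}? ({rec.rationale}) [y/N] ").lower() == "y"

print(handle(Recommendation("block", "203.0.113.7", "matches threat actor VPN range"), reviewer))
```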
That need for a human in the process is why Schmidt believes AI will not supplant entry- or junior-level positions, even as the technology improves. He said conversations about AI replacing junior engineers are rooted in “FUD” (fear, uncertainty and doubt), and he expects the models to raise junior engineers’ skills faster than ever before.
“I don’t think it’s going to happen,” he said of AI replacing human-led security work. “The thing about security that’s both great and difficult is you’re never done, and it’s never perfect. So we always have the ability to raise the bar across things, and by using tooling that allows us to get those junior engineers up to speed more quickly and to learn more about why senior engineers make decisions. It means we’ve got this middle ground of staff who are really good, much more quickly than we would otherwise.”