Attacks by malicious hackers using artificial intelligence could swamp smaller companies that are already overwhelmed by cybercrime, experts warned lawmakers during a congressional hearing Tuesday.
Testifying before the House Homeland Security Committee's subcommittee on cybersecurity and infrastructure protection, private-sector experts discussed AI-related threats, including how AI can make it easier for malicious hackers to develop malware, spread disinformation and scale up attacks at a time when smaller businesses are already being battered by hacks.
Bringing up the famous and complex Stuxnet worm that sabotaged Iran's nuclear enrichment program, Alex Stamos, chief trust officer at SentinelOne, said that developing it required a substantial amount of resources. With AI, Stamos warned, such operations could become far less costly for attackers.
“My real fear is that we’re going to have AI-generated malware that won’t need that,” Stamos said. “That if you drop it inside of an air gap network in a critical infrastructure network, it will be able to intelligently figure out, ‘Oh, this bug here, this bug here and take down the power grid even if you have an air gap.'”
Stamos also noted that in recent years, cybercrime groups have become "professionalized," operating with the technical sophistication one would expect from nation-backed hackers.
“The truth is, we’re not doing so hot,” Stamos said. “We’re kinda losing.”
Small and medium businesses, Stamos said, are “not ready to play at that level.” He advocated for moving those smaller players to the cloud so there is less responsibility on individual organizations and more “collective defense.”
Stamos said that one key thing the Cybersecurity and Infrastructure Security Agency can do is get an incident reporting regime up and running. The agency is set to require critical infrastructure owners and operators to notify it of any major cyber incident.
The reporting is intended to build a better understanding of the current threat landscape, as companies currently face few requirements to report breaches to the federal government. Stamos did note, however, that the Securities and Exchange Commission's own incident reporting requirements are likely to harm cybersecurity efforts because of the "over-legalization" the rule will bring.
Stamos also said that CISA should help break information silos apart, saying that one of the issues in cybersecurity is that firms “don’t talk to each other enough.”
Ian Swanson, the chief executive officer and founder of Protect AI, said in his opening statement that in order to secure AI, there should be a “comprehensive inventory” that lists out the “ingredients” of AI.
“Only then do we have visibility and auditability of these systems, and then you can add security,” Swanson said.
Swanson recommended that the Department of Homeland Security create a machine learning bill of materials and invest in and protect the open source software ecosystem that AI relies on. He noted that the Biden administration should be talking to all players in the AI space, from startups to Big Tech companies like OpenAI.
Debbie Taylor Moore, senior partner and vice president of global cybersecurity at IBM Consulting, noted in her opening statement that CISA should focus on AI education and workforce development, particularly within the critical infrastructure sectors, and share information like vulnerabilities and best practices.
“Addressing the risks posed by adversaries is not a new phenomenon,” Moore said. “Using AI to improve security operations is also not new. But both will require focus and what we need today is urgency, accountability and precision in our execution.”