Sen. Warner: AI firms should put security at the center of their work

The top Democrat on the Senate Intelligence Committee wants answers to questions ranging from supply chain security to privacy.

The top Democrat on the Senate Intelligence Committee is pushing leading artificial intelligence companies to place greater emphasis on addressing security concerns posed by the technology as it rapidly advances.

In a set of letters sent Wednesday to the CEOs of top AI firms, Sen. Mark Warner, D-Va., urged the companies to put “security at the forefront of your work.” By embracing security-by-design principles, Warner noted, companies have the chance to address the harmful consequences of their technologies today, rather than further down the road — as has been the case too often in the recent history of technological development.

And as AI becomes integrated into a wide swathe of applications, “it is imperative that we address threats to not only digital security but also threats to physical security and political security,” Warner wrote.

The letters probe the companies on several security matters, including how they are protecting systems from attacks that inject bad data into models, their processes for monitoring and auditing to detect data breaches and unauthorized use, and how they have handled past security incidents. The recipients include OpenAI, Scale AI, Meta, Google, Apple, Stability AI, Midjourney, Anthropic, and Microsoft. The letters request answers to a set of detailed questions about the firms’ security practices no later than May 26.


Several recipients did not immediately return requests for comment. A spokesperson for Anthropic said the company thanked the senator for his interest and intended to respond in “due course.”

To emphasize his point, Warner’s staff used AI to write the majority of the press release. The Virginia Democrat noted on Twitter the AI did “pretty darn well.”

While cybersecurity risks posed by AI, such as the rapid creation of new malicious code, have so far been largely hypothetical, the security of the companies producing the technology has been called into question on several occasions. For instance, in March, Meta’s language model, known as LLaMA, was leaked online.

Warner also asks the companies to provide more information on concerns related to trustworthiness, researcher transparency and potential algorithmic bias. The letter follows a joint announcement Tuesday from four major U.S. enforcement agencies committing to using existing U.S. regulations to crack down on AI products perpetuating discrimination and fraud.

While firms like OpenAI — the industry leader — have made commitments to address the security and societal concerns posed by their products, Warner, an influential voice on tech policy in the Senate, said this did not obviate the need for more stringent rules. “Beyond industry commitments, however, it is also clear that some level of regulation is necessary in this field,” Warner wrote in the letters.


The risks posed by AI have caught the attention of other top lawmakers. Earlier this month, Senate Majority Leader Chuck Schumer, D-N.Y., announced he was working on a regulatory framework for AI. European Union lawmakers recently called for a global summit on AI, citing the risks posed by non-democratic countries’ use of the technology, and are considering a wide-ranging proposal to regulate its use, the EU AI Act. Regulators in Italy temporarily banned OpenAI’s ChatGPT over privacy concerns.
