
Open access to AI foundation models poses security and compliance risks, report finds

“Safety and mitigation of harm should not be sacrificed solely in the name of rapid innovation,” a new Institute for Security and Technology report states.

Overly accessible artificial intelligence foundation models could open companies up to malicious use, compliance failure and more, according to findings released Wednesday by the Institute for Security and Technology. 

IST’s report outlines the risks directly associated with models of varying accessibility, including malicious use by bad actors seeking to abuse AI capabilities and, in fully open models, compliance failures in which users can change models “beyond the jurisdiction of any enforcement authority.”

While opportunities can arise from more accessible AI foundation models, the report states that “once models are made fully open, it is impossible for developer organizations to walk back a model’s release.”

“Today’s digital ecosystem is not yet broadly secure and sustainable, in many ways due to the lack of secure and safe design principles built into emerging technologies from the outset,” the report states. “Safety and mitigation of harm should not be sacrificed solely in the name of rapid innovation, and the lessons of the recent past in other areas of technological innovation … provide us with the opportunity to leverage AI for good.”


The report lays out a gradient of foundation model access levels with their associated risks and the level of urgency tied to each. The levels of access outlined include fully closed, paper publication, query API access, modular API access, gated downloadable access, non-gated downloadable access and fully open.

Risk categories, such as malicious use, increase from a low-medium risk level for modular API access to the highest risk level for fully open access. The report points to gating as a method to provide some traceability for developers and accountability for whoever might download a model. 

The report states: “While AI can, and undoubtedly will, be employed by bad actors to advance their offensive aims, it must be noted that AI can also be employed on the defensive side to find and reduce vulnerabilities and support network defense operations.”


Written by Caroline Nihill

Caroline Nihill is a reporter for FedScoop in Washington, D.C., covering federal IT. Her reporting has included the tracking of artificial intelligence governance from the White House and Congress, as well as modernization efforts across the federal government. Caroline was previously an editorial fellow for Scoop News Group, writing for FedScoop, StateScoop, CyberScoop, EdScoop and DefenseScoop. She earned her bachelor’s in media and journalism from the University of North Carolina at Chapel Hill after transferring from the University of Mississippi.
