
Tech advocacy groups want a zero-trust framework to protect the public from AI

The framework is a response to tech executives and many in Washington who say that AI companies are capable of self-regulation.

A coalition of public interest tech groups is pushing back against the increasingly self-regulatory approach to artificial intelligence gaining traction in Washington with what they describe as a zero-trust approach to AI governance.

In advance of a White House-endorsed hacking event to probe AI technologies at the upcoming DEF CON hacker conference in Las Vegas, the Electronic Privacy Information Center, AI Now and Accountable Tech published a blueprint of guiding principles for regulators and lawmakers that calls on government leaders to step up and rein in tech companies.

The framework is a response to a bevy of frameworks suggested by private companies on responsible AI use and regulation, as well as more general efforts from Congress and the White House. Last month, top AI companies including Google, Amazon, Meta and OpenAI agreed to voluntary safety commitments that included allowing independent security experts to test their systems.

But the authors of the “Zero Trust AI Governance” framework say that the solutions the private sector has volunteered aren’t enough and that the frameworks it has put forth “forestall action with lengthy processes, hinge on overly complex and hard-to-enforce regimes and foist the burden of accountability onto those who have already suffered harm.”


The framework is just the latest push by civil society to get the White House to take a firmer approach to AI regulation as the administration works on an anticipated AI executive order. Last week, several groups led by the Center for Democracy & Technology, the Center for American Progress, and The Leadership Conference on Civil and Human Rights sent a letter to the White House urging the President to incorporate the AI Bill of Rights, which the administration released a blueprint for earlier this year, into the executive order.

“We’re trying to flip the premise that companies can and should be trusted to regulate themselves into the zero-trust framework,” said Ben Winters, policy counsel at EPIC and one of the authors of the framework. “They need to sort of have specific bright-line rules about what they can and can’t do, what types of disclosures to make, and also have the burden of proving their products are safe, rather than being able to just deploy widely.”

One of the framework’s guiding principles urges the government to use existing laws to oversee the industry, including enforcing anti-discrimination and consumer protection laws. The Federal Trade Commission, the Consumer Financial Protection Bureau, the Department of Justice Civil Rights Division and the U.S. Equal Employment Opportunity Commission issued a joint statement in April saying they planned to “vigorously enforce their collective authorities and to monitor the development and use of automated systems.” The FTC, for instance, has already issued warnings to companies about deceptive marketing of their AI products.

The report takes a number of strong stances, including banning what the authors call “unacceptable AI practices,” such as emotion recognition, predictive policing and remote biometric identification. The report also highlights concerns about the use of personal data and calls on the government to prohibit the collection of sensitive data by AI systems.

The burden to prove that systems are not harmful should be on the companies, according to the framework’s authors, who note that some companies have slashed their ethics teams even as interest in AI products has boomed. The groups say a useful parallel is the pharmaceutical industry, where products must undergo substantial research and testing before they can receive FDA approval.


Winters said the group isn’t endorsing a new regulator for AI but instead intends to emphasize that companies have a responsibility to show their products are safe.

“Companies are pushing AI systems into wide commercial use before they’re ready, and the public bears the burden. We need to learn from the past decade of tech-enabled crises: voluntary commitments are no substitute for enforceable regulation,” Sarah Myers West, managing director of the AI Now Institute, said in a statement. “We need structural interventions that change the incentive structure, mitigating toxic dynamics in the AI arms race before it causes systemic harm.”


Written by Tonya Riley

