Feds’ spending on facial recognition tech expands, despite privacy concerns
The FBI on Dec. 30 signed a deal with Clearview AI for an $18,000 subscription license to the company’s facial recognition technology. While the value of the contract might seem a drop in the bucket for the agency’s nearly $10 billion budget, it is significant in that it cements the agency’s relationship with the controversial firm. The FBI previously acknowledged using Clearview AI to the Government Accountability Office but did not specify whether it had a contract with the company.
The FBI didn’t respond to a request for comment, but it isn’t the only federal law enforcement agency to ramp up its procurement of privately owned facial recognition technologies in recent months. In September, U.S. Immigration and Customs Enforcement spent almost $4 million on facial recognition technology from a company called Trust Stamp, as Business Insider first reported. The same month, the agency signed a contract with Clearview AI starting at $500,000, with the potential to grow to $1.5 million. In total, ICE’s investment in Clearview AI has more than doubled during the Biden administration, said Jack Poulson, executive director of the nonprofit Tech Inquiry.
The contracts demonstrate that despite a growing chorus of concerns from lawmakers, regulators and civil liberties advocates about the dangers of facial recognition technology, federal law enforcement agencies have no interest in rolling back their use of the technologies. Instead, they’re plowing ahead with private partnerships with companies whose databases of photos of private citizens eclipse government databases in scale.
In fact, since June, when a government watchdog released a report warning about the largely unchecked use of the technology, CyberScoop identified more than 20 federal law enforcement contracts, with a combined ceiling of over $7 million, that either included facial recognition in the award description or went to companies whose primary product is facial recognition technology. Even that figure, compiled from a database of government contracts created by the transparency nonprofit Tech Inquiry and confirmed against federal contracting records, is likely incomplete. Procurement awards often use imprecise descriptions, and sometimes the true beneficiary of an award is obscured by subcontractor status.
This lack of transparency is especially noteworthy in light of a June report from the U.S. Government Accountability Office, which found that 13 federal agencies with law enforcement functions, including the FBI, did not track which non-federal facial recognition systems their employees used. If agencies don’t know which systems their employees are using, they cannot guarantee the technology has been vetted for accuracy.
That’s a big concern when it comes to facial recognition technology. A 2019 study by the federal government found significantly higher false positive rates when facial recognition technology was applied to Black and Asian individuals than to white men. Even with modest improvements in recent years, civil liberties advocates warn the technology still poses a serious risk of discriminatory policing. It also raises serious privacy concerns when deployed widely. Clearview AI, which underwent federal testing for accuracy for the first time in October, drew early scrutiny for building its database by scraping millions of images from social media sites without users’ knowledge. Facebook, Twitter and YouTube have all demanded that Clearview AI stop the practice.
Clearview declined to comment on criticisms of the technology, but CEO Hoan Ton-That told CyberScoop in an emailed statement that “it is gratifying that Clearview AI has been used to identify the Capitol rioters who attacked our great symbol of democracy.”
Lawmakers have also expressed fears over the unregulated use of the technology by law enforcement. Democrats last year introduced legislation that would prohibit federal entities from using biometric technologies absent congressional approval and would block funding to state and local law enforcement unless they enact their own bans. A separate bill introduced by Sen. Ron Wyden, D-Ore., which has gained bipartisan support, would prevent law enforcement from commercially purchasing data that would otherwise require a warrant to obtain, including databases like Clearview AI’s. So far, neither bill has seen a floor vote.
“Clearview AI harvested millions of Americans’ personal photographs without their permission to build a massive facial recognition database,” Wyden told CyberScoop in a statement. “It is deeply disappointing that the government would choose to reward this practice with taxpayer dollars, and use its credit card to end-run Americans’ Fourth Amendment rights.”
Privacy advocates say that while the FBI’s recent contract sheds some light on the agency’s operations, transparency alone isn’t curbing the technology’s harmful effects.
“This is a case where we see that transparency isn’t enough because now we see that the FBI has a contract with Clearview but that doesn’t make anyone any safer,” said Caitlin Seeley George, campaign director at nonprofit Fight for the Future.
The technology has been deployed against activists, as the June GAO report shows, and is increasingly being adopted by private companies, with documented cases of discriminatory outcomes stemming from false matches.
But privacy advocates see promise in other avenues. More than 27 states and localities have passed some form of ban on facial recognition use. And both the Federal Trade Commission and the White House’s Office of Science and Technology Policy have expressed interest in addressing the privacy harms of facial recognition technology. The FTC recently told lawmakers it was considering options, including rulemaking, to help regulate potentially discriminatory algorithmic technologies such as facial recognition software. Meanwhile, OSTP is developing an “A.I. Bill of Rights” and has hosted listening sessions with advocates on the issue.
“I think a couple of years ago these kinds of conversations weren’t happening at all so the fact that we’re having these conversations is a good sign,” said Seeley George. “And there are some good people in these agencies who care about human rights and want to do the right thing.”
Still, these steps lag behind a number of allied nations that have recently taken swift action to crack down on Clearview AI. Investigations by the Canadian, Australian, and United Kingdom governments found that Clearview violated local privacy laws. France’s data privacy regulator last month ordered the company to delete user data collected in violation of the European Union’s data privacy laws.
The Biden administration’s silence on the company, especially in light of its push elsewhere against surveillance technologies including spyware, is “the sign of a government that is not taking privacy issues seriously,” said Poulson.
Updated 1/13/22: Updated to include a response from Clearview AI.