
Documents shed light on ID.me’s messaging to states about powerful facial recognition tech

Privacy groups are pushing states to drop the technology provider.

Identity verification technology company ID.me quietly deployed a powerful form of facial recognition on unemployment benefits applicants while encouraging state partners to dispel the idea that the company used the technology, according to Oregon state records the American Civil Liberties Union shared with CyberScoop. 

The documents show that in the months following the introduction of facial recognition software that matched a photo across a wider database — known as “1:many” — into its fraud detection service, ID.me disseminated talking points to the Oregon Employment Department (OED) and other state partners to combat media reports that it used the more powerful form of facial recognition.

Privacy advocates who are pushing states to drop the technology say the documents raise concerns that states working with ID.me may have been unaware of the risks involved with the use of facial recognition technology, the accuracy of which has been challenged by government and academic researchers. During the pandemic, 30 states contracted ID.me’s services in an effort to assist with a surge in unemployment claims and tamp down on fraud.

ID.me, in its communications with states, mentions known accuracy issues with facial recognition when it is used to match one photo against a database of photos.


“1:Many face matching, also known as 1:N, casts a much larger net and introduces a higher probability of error,” ID.me outlined in a July 24 email to the OED about a CNN article. “It is deeply irresponsible for the media to conflate 1:1 Face Verification with 1:Many Face Recognition,” the company wrote in a separate document sent to states that mentions the CNN article and an article by Reuters.

What isn’t addressed in the email is that six months prior, in February, ID.me began to deploy 1:many facial recognition in its identity verification technology as a means of fraud prevention. Upon setting up an account, users’ photos are compared to an internal database to check for matches that indicate a duplicate, and therefore a possibly fraudulent account.
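To make that distinction concrete, the sketch below contrasts the two approaches in a few lines of Python. It is a toy illustration using generic face "embeddings" (numeric vectors), an arbitrary similarity threshold and made-up account names, not a description of ID.me's actual system: 1:1 verification compares a selfie against a single claimed identity, while a 1:many search compares it against every enrolled account, which is why the chance of a false match grows with the size of the database.

```python
# Illustrative sketch only. All function names, thresholds and data are
# hypothetical; this is not ID.me's implementation.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Similarity between two face embeddings (1.0 = identical direction).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_1_to_1(selfie: np.ndarray, id_photo: np.ndarray,
                  threshold: float = 0.8) -> bool:
    # 1:1 verification: compare the applicant's selfie to one claimed
    # identity, e.g. the photo on the driver's license they uploaded.
    return cosine_similarity(selfie, id_photo) >= threshold

def search_1_to_many(selfie: np.ndarray, enrolled: dict[str, np.ndarray],
                     threshold: float = 0.8) -> list[str]:
    # 1:many search: compare the selfie against every enrolled account to
    # flag possible duplicates; each extra comparison adds a chance of error.
    return [account_id for account_id, emb in enrolled.items()
            if cosine_similarity(selfie, emb) >= threshold]

# Example: a new selfie is checked against previously enrolled accounts.
rng = np.random.default_rng(0)
database = {f"account-{i}": rng.normal(size=128) for i in range(1000)}
new_selfie = rng.normal(size=128)
duplicates = search_1_to_many(new_selfie, database)
print(f"{len(duplicates)} possible duplicate accounts flagged")
```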

ID.me continued to publicly deny its use of this technology until, amid growing scrutiny of its work with the IRS, the company’s CEO Blake Hall acknowledged in a LinkedIn post last month that it used facial recognition in its fraud detection process.

ID.me confirmed to CyberScoop that it informed state partners as early as November 2020 that it was considering the use of 1:many facial recognition. The company rolled out its “Duplicate Face Detection,” calling it in a February 2021 memo “a major technological breakthrough in fraud prevention” and “proven remarkably accurate.”

Neither the memo nor any of the other public records obtained by the ACLU ever described the Duplicate Face Detection program as facial recognition or 1:many facial recognition. In several materials, the company characterized 1:many as “highly problematic.” Yet Duplicate Face Detection fits the technical definition of a 1:many system, and the company confirmed as much in comments to CyberScoop.


“The problem here is that nowhere in their Duplicate Face Detection description do they describe what they’re doing as facial recognition,” Olga Akselrod, a senior staff attorney at the ACLU, said of the documents.

The controversy over facial recognition

A spokesperson for the OED told CyberScoop that the agency’s understanding from conversations with ID.me was that the company did not use 1:many facial recognition. Instead, the spokesperson referred CyberScoop to information about the company’s Duplicate Face Detection system.

ID.me asserts that it only uses 1:many facial recognition for “fraud prevention” and that the process “is carefully configured to minimize impact to legitimate users who are moved to verify with an expert human agent.”

Akselrod said that the explanation “doesn’t work.”


“The whole purpose of identity verification is fraud detection,” she told CyberScoop. “So ID.me is really making a distinction without a difference and it’s not one that can absolve the apparent misrepresentations they’ve made about their process.”

It’s unclear what measures, if any, many states took to assess ID.me’s accuracy before unleashing the software on millions of Americans. Spokespeople for both the Texas Workforce Commission and the Louisiana Workforce Commission pointed to ID.me’s adherence to National Institute of Standards and Technology guidelines for digital identity services when asked how they vetted the program.

But adherence to federal guidelines alone isn’t enough to know what effects a program will have in the real world, said Joy Buolamwini, an artificial intelligence expert and founder and executive director of the Algorithmic Justice League.

“Failing a benchmark test is a red light, but passing them is not a green light,” Buolamwini explained to CyberScoop in an email.

ID.me told CyberScoop in an email that its technology “performs equitably among all groups.” But the few public examinations of the technology that have been conducted suggest otherwise. An OED study found that the technology disadvantaged people aged 20 and under, Spanish speakers, African Americans, and American Indians or Alaska Natives, according to Oregon officials who spoke at a Wednesday press conference. The OED did not make the full study available for CyberScoop’s review.


“While we found a correlation between some demographics and failure to use ID.me, we could not identify the cause, such as facial recognition,” OED communications director Rebeka Gipson-King wrote in an email. “It could also have been a variety of things, including lack of comfort with technology and individuals in certain populations who are more prone to having their identity stolen.”

Federal research has shown that facial recognition algorithms are more likely to misidentify people of color and that accuracy can vary widely depending on the product and even on factors such as lighting quality. And while a January NIST study indicates that the technology has improved in recent years, its authors caution that the improvements do not remedy all of the technology’s known performance issues.

Real-world deployments of facial recognition have already shown that the technology can cause harm. Several cities and states have banned its use by police, citing evidence of racial bias and multiple high-profile cases in which false matches led to the arrests of Black men. There is no federal regulation of the use of facial recognition.

States under pressure

In light of pushback from both privacy advocates and lawmakers, the IRS announced earlier this month that it would transition away from using ID.me. The Department of Veterans Affairs is also reevaluating its contract.


Groups — including the ACLU — are pushing states to follow. More than 40 civil liberties organizations on Monday called for states to end their contracts with the company. They say the company’s misleading public statements and lack of transparency about the accuracy of its technology pose a privacy risk Americans shouldn’t be required to take to access basic government services.

California’s legislative advisory body on Tuesday recommended that the state, which accounted for a quarter of all pandemic unemployment assistance fraud, end its contract with ID.me, Bloomberg reported. In addition to recommending that the state end the use of several other anti-fraud tools enacted during the pandemic, the advisory body recommended that the “Legislature pause and carefully consider the implications of requiring third‑party biometric scanning — in this case, facial recognition performed by artificial intelligence.”

But states say they still face a major barrier to dropping the system: a lack of viable alternative government verification systems that can compete with ID.me. A group of Democratic members of the Senate Finance Committee wrote to the Department of Labor on Tuesday urging it to develop government-run alternatives to help state workforce agencies implement unemployment insurance programs.

It’s a sentiment shared by Oregon’s top employment official.

“We would prefer that it was a national system that all states could use, but there isn’t one right now that provides the same level of identity verification security,” OED Acting Director David Gerstenfeld said at a Wednesday press conference.


Written by Tonya Riley

Tonya Riley covers privacy, surveillance and cryptocurrency for CyberScoop News. She previously wrote the Cybersecurity 202 newsletter for The Washington Post and before that worked as a fellow at Mother Jones magazine. Her work has appeared in Wired, CNBC, Esquire and other outlets. She received a BA in history from Brown University. You can reach Tonya with sensitive tips on Signal at 202-643-0931. PR pitches to Signal will be ignored and should be sent via email.
