Your AI doctor doesn’t have to follow the same privacy rules as your real one

AI apps are making their way into healthcare. It’s not clear that rigorous data security or privacy practices will be part of the package.
AI-powered health apps are flooding the market. But their data security policy is more of a pinky promise than a legal mandate.

OpenAI, Anthropic and Google have all rolled out AI-powered health offerings over the past year. These products are designed to provide health and wellness advice to individual users or organizations, helping to diagnose illnesses, examine medical records and perform a host of other health-related functions.

OpenAI says that hundreds of millions of people already use ChatGPT to answer health and wellness questions, and studies have found that large language models can be remarkably proficient at medical diagnostics, with one paper calling their capabilities “superhuman” compared with human doctors.

But beyond traditional cybersecurity concerns about how well these chatbots can protect personal health data, there are a host of questions about what legal protections users have for the medical data they share with these apps. Several healthcare and legal experts told CyberScoop that these companies are almost certainly not subject to the same legal or regulatory requirements – such as data protection rules under the Health Insurance Portability and Accountability Act (HIPAA) – that compel hospitals and other healthcare facilities to protect your data.

Sara Geoghegan, senior counsel at the Electronic Privacy Information Center, said offering the same or similar data protections as part of a terms of service agreement is markedly different from interacting with a regulated healthcare entity. 

“On a federal level there are no limitations – generally, comprehensively – on non-HIPAA protected information or consumer information being sold to third parties, to data brokers,” she said. 

She also pointed to the data privacy concerns that stemmed from the bankruptcy and sale of genetic testing company 23andMe last year as a prime example of the dangers consumers face when handing over sensitive health or biometric data to an unregulated entity.

In many cases, these AI health apps carry the same kind of security and privacy risks as other generative AI products: data leakage, hallucinations, prompt injections and a propensity to give confident but wrong answers.

Additionally, data breaches in the healthcare industry have become increasingly common over the past several years, even before the current AI boom. Healthcare organizations are frequent targets for hacking, phishing, and ransomware, and even though companies can be held legally responsible under HIPAA for failing to protect patient data, breaches still happen because many systems rely on outdated software, depend on numerous outside vendors, and struggle to keep up with the cost and complexity of strong cybersecurity.

Carter Groome, CEO of First Health Advisory, a healthcare and cybersecurity risk management consulting firm, said that beyond concerns over whether these tech companies can even reasonably promise to protect your health data, it’s also not clear their security protections are anything more than a company policy.

“They’re not mandated by HIPAA,” Groome said. “Organizations that are building apps, there’s a real gray area for any sort of compliance” with health care data privacy laws.

Privacy is especially important in health and medicine, both for protecting sensitive medical information and for building trust in the health system overall. That’s why hospitals, doctor’s offices, lab testing facilities and other associated entities have been subject to heightened laws and regulations around protecting patient records and other health data.

Laws like HIPAA require covered entities and their business associates to “maintain reasonable and appropriate administrative, physical, and technical safeguards for the security of certain individually identifiable health information.”

HIPAA also subjects covered entities to breach notification rules that force them to notify victims, the Department of Health and Human Services and, in some cases, the public when certain health data has been accessed, acquired, used or disclosed in a data breach.

Groome and Andrew Crawford, senior counsel at the Center for Democracy and Technology’s Data and Privacy Project, said that tech companies like OpenAI, Anthropic and Google almost certainly would not be considered covered entities under HIPAA’s security rule, which according to HHS applies to health plans, clearinghouses, health care providers and business associates that transmit electronic protected health information (ePHI).

OpenAI and Anthropic do not claim that ChatGPT Health or Claude for Healthcare follow HIPAA. Anthropic’s website describes Claude for Healthcare as “built on HIPAA-ready infrastructure,” while OpenAI’s page for its suite of healthcare-related enterprise products claims they “support” HIPAA compliance.

OpenAI, Anthropic and Google did not respond to a request for comment from CyberScoop. 

That distinction means “that a number of companies not bound by HIPAA’s privacy protections will be collecting, sharing, and using peoples’ health data,” Crawford said in a statement to CyberScoop. “And since it’s up to each company to set the rules for how health data is collected, used, shared, and stored, inadequate data protections and policies can put sensitive health information in real danger.”

Laws like HIPAA contain strong privacy protections for health data but are limited in scope and “meant to help the digitization of records, not stop tech companies from gathering your health data outside of the doctor’s office,” Geoghegan said.

As they expand into healthcare, tech companies like OpenAI, Anthropic, and Google have emphasized data security as a top priority in their product launches.

OpenAI said its health model uses an added layer of built-in encryption and isolation features to compartmentalize health conversations, along with protections like multifactor authentication. And, like other OpenAI models, ChatGPT Health encrypts its data at rest and in transit, offers a feature to delete chats within 30 days and promises your data won’t be used for AI training.

For uploading medical records, OpenAI said it is partnering with b.well, an AI-powered digital health platform that connects health data for U.S. patients. On its website, b.well says it uses a transparent, consumer-friendly privacy policy that lets users control and change data-sharing permissions at any time, does not sell personal data, and shares it without permission only in limited cases. The company also voluntarily follows the CARIN Alliance Trust Framework and Code of Conduct, making it accountable to the FTC, and says it aims to meet or exceed HIPAA standards through measures like encryption, regular security reviews, and HITRUST and NIST CSF certifications, though it notes no system can fully eliminate cyber risk.

Legal experts say that when tech companies promise their AI products are “HIPAA compliant” or “HIPAA ready,” it’s often unclear whether these claims amount to anything more than a promise not to use health data irresponsibly. 

These distinctions matter when it comes to personal health data. Geoghegan said it is not uncommon in some corners of the wellness industry for an unregulated business to ambiguously claim it is “HIPAA-compliant” to obscure the fact that it isn’t legally bound by the regulations.

“Generally speaking, a lot of companies say they’re HIPAA compliant, but what they mean is that they’re not a HIPAA regulated entity, therefore they have no obligation,” said Geoghegan.

Groome suggested that AI companies are being “hyperbolic” in their commitment to security in an effort to assuage the concerns of privacy critics, noting that their product announcements contain “a comical level of how much they say they’re going to protect your information.”

An added wrinkle is that AI tools remain black boxes in some respects, with even their developers unable to fully understand or explain how they work. That kind of uncertainty, especially with healthcare data, can lead to bad security or privacy outcomes.

“It’s really shaky right now when a company comes out and says ‘we’re fully HIPAA compliant’ and I think what they’re doing is trying to give the consumer a false sense of trust,” said Groome.

Several sources told CyberScoop that despite these risks, they expect AI health apps to continue being widely used, in part because the traditional American healthcare system remains so expensive.  

AI tools – by contrast – are convenient, immediate and cost-effective. While Geoghegan and Groome said they are sympathetic to the pressures that push people toward these apps, they find the tradeoffs troubling.

“A lot of this stems from the fact that care is inaccessible, it’s hard to get and it’s expensive, and there are many reasons why people don’t trust in health care provisions,” said Geoghegan. “But the solution to that care being inaccessible cannot be relying on big tech and billionaires’ products. We just can’t trust [them] to have our best health interest in mind.”

Written by Derek B. Johnson

Derek B. Johnson is a reporter at CyberScoop, where his beat includes cybersecurity, elections and the federal government. Prior to that, he has provided award-winning coverage of cybersecurity news across the public and private sectors for various publications since 2017. Derek has a bachelor’s degree in print journalism from Hofstra University in New York and a master’s degree in public policy from George Mason University in Virginia.
