Research reveals possible privacy gaps in Apple Intelligence’s data handling

LAS VEGAS — One of the big worries during the generative AI boom is where exactly user data travels when people enter queries or commands into these systems. According to new research, those worries may also extend to one of the world’s most popular consumer technology companies.
Apple’s artificial intelligence ecosystem, known as Apple Intelligence, routinely transmits sensitive user data to company servers beyond what its privacy policies indicate, according to Israeli cybersecurity firm Lumia Security.
The research, presented Wednesday at the 2025 Black Hat USA conference, detailed how Apple’s Siri assistant sends the content of dictated messages and commands, including WhatsApp communications, to Apple servers even when such transmission isn’t necessary to complete user requests. The data flows occur outside Apple’s heavily promoted Private Cloud Compute system, which the company markets as providing enhanced privacy protections.
The research comes as Apple has long positioned itself as a privacy-focused company, building marketing campaigns around its commitment to individual users’ privacy.
Which Siri is which?
The investigation, led by Lumia senior security researcher Yoav Magid, focused on the different ways users can interact with Siri. While Siri has been around since 2010, Apple made it part of Apple Intelligence in 2024.
Magid showed that when given a prompt, Siri automatically scans users’ devices for installed applications related to voice queries and transmits this information to Apple servers. When a user asks about weather, for example, Siri identifies and reports all weather-related apps on the device. Additionally, location data accompanies every Siri request regardless of whether location information is relevant to the query.
Further research showed that audio playback metadata, including the names of songs, podcasts, or videos being played, is sent to Apple servers without explicit user visibility into these data flows.
Perhaps most significantly, the research found that messages dictated through Siri to platforms like WhatsApp are transmitted to Apple servers, raising questions about the end-to-end encryption functionality built into WhatsApp. Magid found these messages are not sent through Apple’s Private Cloud Compute infrastructure, which is specifically designed to provide additional privacy protections for sensitive AI processing tasks.
The practice raises questions about end-to-end encryption claims made by messaging platforms, since message content leaves the device through Apple’s systems before reaching intended recipients.
Testing revealed that message transmission to Apple servers continues even when users explicitly disable the settings that allow Siri to “learn” from specific applications, or when network communication to Apple servers is blocked.
“I’m not quite sure why this communication is necessary,” Magid said.
In the course of conducting the research, he found that Apple processes the data differently depending on whether a request is handled through traditional Siri infrastructure or the newer Apple Intelligence system.
Similar queries can trigger different data-handling practices with different privacy implications. For example, asking “What is the weather today?” sends data to Siri servers under one privacy policy, while “Ask ChatGPT what is the weather today?” routes the request through Apple Intelligence’s Private Cloud Compute under different terms.
“Two similar questions, two different traffic flows, two different privacy policies,” Magid noted in a blog post.
This dual system means users have no way to predict which privacy framework applies to their interactions, creating uncertainty about how their data will be handled.
Apple’s response and disputed claims
Apple acknowledged some aspects of the research findings after Lumia reported the issues in February. Initially, Magid said Apple indicated it would work toward fixes for identified problems.
However, by July, Magid said, Apple had shifted its position, telling researchers that the message transmission behavior was not a privacy issue related to Apple Intelligence, but rather stemmed from third-party services’ use of SiriKit, Apple’s extension system for integrating external apps with Siri.
The company maintained that Siri’s servers operate separately from Apple’s Private Cloud Compute system, though this distinction is not clearly communicated to users.
Apple disputed characterizations that the data collection represented privacy violations, arguing that existing policies adequately disclose the practices.
The company told CyberScoop that it “respectfully disagrees” with the research, with an Apple spokesperson pointing back to the functionality of SiriKit and the privacy policies regarding Siri.
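For context on the mechanism Apple cites: SiriKit apps implement intent handlers that receive the already-transcribed content of a dictated message from Siri. The Swift sketch below is a minimal, hypothetical handler for a messaging app’s send-message intent; the class name and control flow are illustrative assumptions, not code from Apple, WhatsApp, or the Lumia research.

```swift
import Intents

// Minimal, hypothetical SiriKit handler for a messaging app's
// send-message intent. Names and logic are illustrative only.
final class SendMessageHandler: NSObject, INSendMessageIntentHandling {

    // Siri invokes this after capturing and transcribing the user's
    // dictation; per Lumia's findings, that transcription has already
    // traversed Apple's Siri servers before it arrives here.
    func handle(intent: INSendMessageIntent,
                completion: @escaping (INSendMessageIntentResponse) -> Void) {
        // intent.content carries the dictated message text that the app
        // would then hand to its own (possibly end-to-end encrypted) transport.
        guard let text = intent.content, !text.isEmpty else {
            completion(INSendMessageIntentResponse(code: .failure, userActivity: nil))
            return
        }

        // A real app would enqueue `text` into its messaging pipeline here.
        print("Dictated content received from Siri: \(text)")

        completion(INSendMessageIntentResponse(code: .success, userActivity: nil))
    }
}
```

The relevant point, as the research frames it, is that by the time a handler like this receives the text, the dictation has already passed through Apple’s Siri pipeline, independent of whatever encryption the receiving app applies in transit.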
The research highlights how traditional privacy frameworks may be inadequate for governing AI systems that require extensive data analysis to function effectively. The complexity of modern AI systems makes it difficult for users to understand when their data is being transmitted to external servers, processed locally, or shared with third parties.
For enterprise users, the findings raise potential compliance concerns, since sensitive corporate information could leave organizational networks through employee devices running Apple Intelligence.
“AI capabilities are now all around us. Any typical app these days incorporates AI, whether it’s Grammarly, Canva or Salesforce,” Magid wrote in the blog. “Knowing when a feature is powered by AI or not, is not really trivial anymore.”