Banning TikTok won’t protect Americans’ sensitive data
Once again, Washington is obsessing over TikTok. According to The New York Times, the White House is reviewing a draft agreement with the video app that would have the company change its data security practices without officially cutting ties with its Chinese owner, ByteDance. And just days before that news surfaced last week, President Biden signed an executive order on CFIUS, the Committee on Foreign Investment in the United States, emphasizing the importance of data security in investment review decisions.
In Congress, Sen. Josh Hawley, R-Mo., and some other politicians want to continue one of the Trump administration’s most famed, and failed, tech crusades: undercutting TikTok’s presence in the U.S. by forcing it to sever all ties with ByteDance. Their argument is that doing so will protect Americans’ data from access by the Chinese government.
But the reality is that playing whack-a-mole against specific tech companies won’t protect Americans’ sensitive data. Pushing this line keeps the conversation fixated on corporate ownership when the U.S. policy debate should instead address the entire data ecosystem — and build a comprehensive approach to all the security and data collection risks that come with it.
TikTok began attracting the U.S. government’s attention in 2019. That December, for example, the Navy banned it from government-issued mobile devices, vaguely citing a “cybersecurity threat.” Then, in August 2020, President Trump signed a now-infamous executive order that invoked the International Emergency Economic Powers Act to call for an effective ban on TikTok in the U.S. The app’s data collection, the order argued, “threatens to allow the Chinese Communist Party access to Americans’ personal and proprietary information — potentially allowing China to track the locations of federal employees and contractors, build dossiers of personal information for blackmail, and conduct corporate espionage.”
For all Trump’s chest-thumping on China, the order was not even about protecting Americans’ data. Trump himself said in July 2020, sitting with reporters on Air Force One, that banning TikTok would be a form of retaliation against Beijing for its handling of the coronavirus outbreak. Politics, not security, drove the order, a point further evinced by trade advisor Peter Navarro’s remark that spinning off TikTok into a different corporate organization would do nothing to change the risks (belying the idea that real data policy questions were involved at all). The order was repeatedly struck down in court as executive overreach. It was also badly written: It did not clearly define the problem, blurred distinct risks together (like platform manipulation and data collection), and skipped the cost-benefit analysis fundamental to any risk assessment. Possibility does not equal probability, and vague gestures at possible data collection were never adequately weighed against the implications and costs of a White House-led software ban.
Even the order’s defenders, those who saw a grain of national security truth in it, nonetheless replicated a fundamental problem in the data and national security discourse: focusing on corporate ownership as the only risk vector instead of analyzing the vast data-gathering and sharing ecosystem.
The reality is, there are far more ways to get highly sensitive data on Americans than just making an app that millions of people download. Websites routinely collect information on visitors and share it with advertisers. App developers use software development kits (SDKs), essentially code toolkits developed by third parties, and in the process give data to those third parties. The multi-billion-dollar data brokerage ecosystem thrives on gathering, analyzing and selling or sharing people’s information — putting the races, genders, religions, income levels, medical conditions, political beliefs and GPS locations of Americans on the open market for sale. The list goes on.
Foreign actors can buy or license data from this ecosystem; they can run ads and get click-back data; they can listen in on real-time auctions for online ads, learning about people in the process; or they can simply hack into data brokers that have already done the work of compiling, cleaning and packaging datasets on U.S. individuals. And let’s not forget — as Facebook especially weaponizes every national security argument in the book to avoid regulation — that U.S. social media companies gather highly sensitive information that could be stolen, too. Corporate ownership is not the only risk vector. But making it the government’s primary focus sidelines a risk-based conversation about which data collection paths are available and which might be most vulnerable to exploitation.
The federal government is currently ill-equipped to tackle this landscape. Recent news of a potential CFIUS-TikTok agreement is reminiscent of the committee’s reported 2019 decision to have a Chinese company sell dating app Grindr to a U.S. owner. CFIUS focused on the corporate ownership vector (as it does), identified the risk of a foreign actor getting highly sensitive data on Americans (from location data to sexual health information), and acted within its authorities. But a Norwegian government report showed Grindr sharing data widely, including with an SDK made by Chinese tech giant Tencent.
American ownership of a tech company does not mean Americans’ data is protected from exposure, plain and simple. While CFIUS did what it could, the U.S. government needs to fill in the rest of the picture with strong data privacy and security laws, regulations and policy frameworks that account for the rest of the data ecosystem. This is not a purely national security issue, and the response should not principally reside with the military and intelligence community; at the same time, there is only so much those organizations can even do given their foreign-facing authorities.
Congress needs to pass comprehensive privacy legislation covering all Americans, and that regulation should include strong controls on the sharing of people’s data. For that baseline, a company’s country of incorporation should not matter. It is possible that an app introduces additional risks because of the legal system in which it operates, among other trust factors, and that could merit additional privacy and security controls above the baseline. But focusing just on foreign ownership, under the false belief that it is the primary or only way for malicious actors to get Americans’ data, will not address every vector of data acquisition; if anything, it will push foreign actors to exploit U.S. company data sources even more, further increasing the need for strong privacy and security regulations across the board.
Justin Sherman (@jshermcyber) is a senior fellow at Duke’s Sanford School of Public Policy, where he runs its data brokerage research project, and a nonresident fellow at the Atlantic Council’s Cyber Statecraft Initiative.