Musk’s plan to make Twitter’s algorithms public raises disinformation conundrum
Tesla and SpaceX billionaire Elon Musk struck a deal to buy Twitter for $44 billion on Monday, and he’s already rolled out a to-do list of big changes for the company. One of the more novel suggestions he has put forward is making Twitter’s algorithms available for public scrutiny.
Sharing the code for Twitter’s algorithms doesn’t pose nearly the same risk as pulling open the hood on the company’s entire technical infrastructure, any modifications to which could have enormous consequences for user privacy and security.
In fact, the kind of transparency Musk is hinting at could be a boon for Twitter, which has long been plagued with accusations that it limits the reach of content from certain groups of users. The opaque nature of how social media companies like Twitter rank and spread content has also long been a source of friction with researchers looking to understand biases on the platform.
But that kind of transparency doesn’t come without risk. Exposing code to the world also exposes potential vulnerabilities that criminals and disinformation operators could exploit to wreak havoc.
It isn’t a new idea for companies like Twitter to participate in open source projects. Such projects pool talent to spur innovation and, in many cases, produce more secure software thanks to constant development and scrutiny. Proprietary code like the algorithms Musk is referring to, on the other hand, is normally kept secret.
Open source code isn’t immune to vulnerabilities, however. Take Apache’s Log4j, a ubiquitous open-source logging tool: a bug in it left hundreds of millions of devices vulnerable and was widely exploited by cybercriminals. That example represents a worst-case scenario for open-source code, much of which is maintained by volunteers.
Those kinds of security risks are far smaller when it comes to opening up algorithms, as Musk plans to do, researcher Matt Tait wrote in a message.
“… The key systems that matter for cybersecurity risk are likely to be supporting systems, corporate systems, and databases that won’t be shared,” Tait explained.
A much stronger concern, Tait and other researchers warned, is that a peek at Twitter’s algorithms could help users game the system.
“The idea of open sourcing code allows for the community to inspect it — both the good guys and the bad guys,” said Chris Wysopal, co-founder and CTO of Veracode.
Without knowing what code Musk plans to make available, it’s hard to say precisely how it could be exploited. But everyone from malicious advertisers to nation-state hackers will likely be racing to figure it out.
One way of avoiding that scenario is to have outside firms test for both vulnerabilities and algorithmic discrimination before the code is released publicly, said Katie Moussouris, CEO of Luta Security.
“Once Twitter has addressed anything that those experts have found, that would be the appropriate time to [make it public],” said Moussouris. “If they do it beforehand and just trust the public to spot problems, they may open up a window for disinformation campaigns.”
Gathering meaning from Twitter’s algorithms won’t be as simple as looking at them, however, experts cautioned.
“Having the source code for content recommendations without knowing what the algorithm is being trained on or what content is being recommended for different users will likely only give a blurred view,” said John Perrino, a policy analyst at Stanford’s Internet Observatory.
Still, it could provide researchers who have already studied bias on the platform with a way to further substantiate their findings.
Ultimately, what matters more than the sleuths combing through the source code is how Twitter responds to them. Both Twitter and Tesla, Elon Musk’s other major company, run bug bounty programs for reporting vulnerabilities. Those kinds of programs work well for security vulnerabilities, but they’re less valuable for the kind of consumer feedback Musk is suggesting, Moussouris said.
“Setting up a policy and an email address is only half the battle. The other half is dealing with the information you’re getting from the community,” said Wysopal. “They need to go into this with some thinking about how to respond to critiques and potential gaming.”
There is some precedent at Twitter for responding to such issues. Last May, Twitter stopped using an algorithm that automatically cropped photos to optimize them for users’ feeds after users raised concerns that the algorithm appeared to favor white individuals over Black individuals, and men over women, in photo previews.
However Musk proceeds, his actions will face heavy scrutiny from lawmakers and civil rights advocates. Republicans have largely hailed the purchase as a win for “free speech,” while Democrats have expressed worries about how Musk might handle problems like disinformation.
“Twitter has been more forward-leaning than many of its competitors in its efforts to tackle false, deceptive and manipulated content — though even Twitter has significant room for improvement,” Senate Intelligence Chairman Mark Warner, D-Va., said in a statement Monday. “It is my hope that Mr. Musk will work in good faith to keep these necessary reforms in place and prevent a backslide that is harmful to democracy and to the important discourse that takes place on Twitter across the world every day.”
Advocates who already want to see Twitter make major changes to reduce bias and hate speech on the platform have also expressed skepticism that Musk is the right steward for the job.
Even if Twitter resists change, its algorithms could still push the industry forward.
“Another advantage of open source is that people can learn from the code,” said Wysopal. “Even if Twitter doesn’t implement improvements, it could lead to better social media algorithms on other or new platforms.”