OWASP postpones publication of Top 10 app vulnerabilities draft
The Open Web Application Security Project (OWASP) has postponed publication of its canonical Top 10 list of web application vulnerabilities this week, saying it needs more time to review the unprecedented amounts of data it’s received.
“We have data on 114,000 apps at the moment, but we got a lot of late submissions. That could rise to 120,000 or 130,000,” lead author Andrew van der Stock told CyberScoop. He said the team of volunteers preparing the new draft met over the weekend and agreed to push the scheduled Oct. 9 publication to Oct. 20.
“We needed more time to analyze all this new data,” he said.
“We still want to give people a month to comment” on the draft after it’s released, van der Stock said, but added the authors were determined to publish the final version before Thanksgiving. “We don’t want it to get lost in the holidays,” he concluded.
OWASP is a volunteer organization that has become a mainstay of the cybersecurity community, and its Top 10 list of app vulnerability categories is the most downloaded document on its website, the organization says.
“It’s very reputable, very widely-referenced and widely used,” Tony Sager of the Center for Internet Security told CyberScoop.
Postponing the publication of the new draft highlights the controversy swirling around this latest effort to revise the ubiquitous Top 10 list, which was first released in 2003 and has been updated roughly every three or four years since — most recently in 2013.
The release of the first draft in May “got a lot of pushback … a lot of drama, way more than we’ve ever seen before,” acknowledged van der Stock. He said that many felt the process wasn’t sufficiently transparent. There were even accusations of corruption and nepotism on Twitter.
At a meeting of OWASP volunteers the following month, the two security researchers who had led the Top 10 process since its inception, Jeff Williams and Dave Wichers, “handed over the reins” to a new team, led by van der Stock, he said. Van der Stock, a long-time OWASP volunteer who has worked on the Top 10 since 2007, is a senior consultant at Synopsys.
“They felt they weren’t able to take it any further and they were looking for someone to help put it back on the rails,” he told a podcast earlier this year. He added that there had been important changes to the way the list was produced, as well, to improve transparency. “There’s no more [drafts] popping out fully formed,” he said. The authors would show their work, he added, making clear how eight of the 10 vulnerability categories were derived from the huge data sets they were analyzing. The final two categories would be derived from a survey completed by 550 security researchers.
“We wanted to make sure that people with the loudest voices on social media didn’t drown out other opinions,” said van der Stock.
He said the two items added from the survey would be the failure to protect personally identifiable information, or PII — “the overwhelming majority choice” — and deserialization flaws like the Apache Struts vulnerability exploited by the Equifax attackers. “Deserialization was actually a close third,” he said, but the vulnerability category that came second — access control — is already in the eight drawn from the data.
The fight over what to include
Knock-down-drag-out fights over what goes into a list as widely cited as the OWASP Top 10 are not unprecedented, security veterans told CyberScoop.
“It’s so hard, almost impossible sometimes, to build consensus on these issues when people are so passionate about them,” said Katie Moussouris. Moussouris, a security researcher and former application pentester, has never worked directly on preparing the OWASP Top 10, but says she’s familiar with the kind of pushback van der Stock describes from her work on international standards.
“A lot of people want to contribute,” she said. “One of the things that makes the Top 10 so valuable is that it’s produced by the community. It’s the openness and the transparency that makes it so important.”
“Security researchers agree on 90 percent of everything,” added Sager, “But then they spend 90 percent of their time arguing about the remaining 10 percent.”
He added that “anyone can assemble a list of every possible security control or vulnerability — that’s easy, but it’s also not very helpful.” He said such voluminous catalogues merely tend to confuse — a process he has dubbed “The fog of more.”
“What’s hard, what requires the discipline, is whittling down that huge list to something useful … Getting people to agree on the 10 most important,” said Sager. “There’s always plenty of room for reasonable people to disagree.”
Moussouris added that she hoped in the future the Top 10 would need to be revised more frequently. Many of the vulnerability categories in the last version, from 2013, had been in the list since its inception. “That means programmers are making the same mistakes [they were in 2003]. Each new generation [of web devs] is making the same mistakes over and over again… Ideally we want to get to a point where the Top 10 changes much more frequently, because people are learning, and not making the same mistakes.”