In search of a B.S. filter for software bugs
An organization can’t — and shouldn’t — care about each of the thousands of software vulnerabilities that are made public each year. A bug in a public-facing web browser probably won’t matter a lick for the control systems at an energy plant; an accounting firm can ignore a vulnerability in industrial computers it doesn’t use.
Yet for some organizations, it’s an ongoing struggle to understand how a software bug might impact their business. On Wednesday, cybersecurity company Rapid7 took a stab at the issue by going public with a project that uses crowd-sourced feedback to rate vulnerabilities.
The company invited security professionals of all stripes to use a web platform, known as the Attacker Knowledge Base (AttackerKB), to assess the impact of a vulnerability on an organization, starting with a simple question: What could a malicious hacker do with the bug? The answers rate how easy it would be for a hacker to weaponize a vulnerability or what level of access it would yield in an organization’s network.
The initiative relies on the wisdom of the crowd to weed out bad information. Call it Yelp for software flaws, with more data and context thrown in.
“The process of monitoring and triaging new vulnerabilities is so time-consuming and effort-intensive that it often detracts from defenders’ ability to mitigate risk quickly and decisively,” wrote Caitlin Condon, Rapid7’s manager of software engineering, explaining the impetus for the project.
Security professionals can log in with their GitHub accounts to post analysis, and the platform has an open application programming interface (API) that lets users experiment with the data. A thread on last year’s severe BlueKeep vulnerability in Microsoft Windows, for example, tracked the development of exploits for the flaw and the steps organizations could take to mitigate it.
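As a rough illustration of what working with that open API might look like, here is a minimal Python sketch for searching AttackerKB topics programmatically. The endpoint path, query parameter, authorization scheme, and response shape are assumptions for illustration; the article does not specify them.

```python
# Minimal sketch of querying the AttackerKB API with the `requests` library.
# The base URL, endpoint, auth scheme, and response shape below are
# assumptions for illustration, not details confirmed by the article.
import requests

API_KEY = "YOUR_ATTACKERKB_API_KEY"          # hypothetical placeholder
BASE_URL = "https://api.attackerkb.com/v1"   # assumed base URL

def search_topics(query: str) -> list[dict]:
    """Search AttackerKB topics (vulnerability threads) by keyword."""
    resp = requests.get(
        f"{BASE_URL}/topics",
        params={"q": query},                              # assumed parameter
        headers={"Authorization": f"basic {API_KEY}"},    # assumed auth scheme
        timeout=10,
    )
    resp.raise_for_status()
    # Response body assumed to look like {"data": [...]}.
    return resp.json().get("data", [])

if __name__ == "__main__":
    for topic in search_topics("BlueKeep"):
        print(topic.get("name"), "-", topic.get("id"))
```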
The project is meant to build on the Common Vulnerability Scoring System (CVSS), a metric that many cybersecurity authorities use in issuing advisories.
“While CVSS may help with prioritization, it doesn’t help clarify and contextualize the individual risk models of the businesses and practitioners relying upon it,” Condon argued.
“A penetration tester’s personal story about chaining together two relatively low-CVSS-score vulnerabilities to gain high-privileged access to a critical business asset is a much more human-understandable way of expressing risk than an industry standard,” she told CyberScoop.
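For context on what that industry standard actually encodes: a CVSS v3.1 score is derived from a short vector string of technical metrics, such as attack vector and required privileges. The sketch below, using an illustrative vector not tied to any particular CVE, shows how mechanical that encoding is, and why it carries no information about what the affected system means to a given business.

```python
# A minimal sketch of parsing a CVSS v3.1 vector string. The vector used
# here is an illustrative example, not the score of a specific vulnerability.
def parse_cvss_vector(vector: str) -> dict[str, str]:
    """Split a CVSS v3.x vector string into its metric/value pairs."""
    prefix, _, metrics = vector.partition("/")
    if not prefix.startswith("CVSS:3"):
        raise ValueError(f"not a CVSS v3.x vector: {vector!r}")
    return dict(part.split(":", 1) for part in metrics.split("/"))

# Example: a network-exploitable bug needing no privileges or user interaction.
print(parse_cvss_vector("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"))
# {'AV': 'N', 'AC': 'L', 'PR': 'N', 'UI': 'N', 'S': 'U', 'C': 'H', 'I': 'H', 'A': 'H'}
```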
The project began with a series of surveys asking researchers, engineers, and product managers about a variety of software bugs, Condon said. There were nearly as many opinions on the severity of the vulnerabilities as there were people in the group; reaching consensus on something so multifaceted was never going to happen.
“We wanted to highlight the value of individual experience in AttackerKB instead of driving toward consensus as a singular goal,” Condon said.