Infosec pros: We need CVSS, warts and all
A key pillar of a strong cybersecurity program is identifying vulnerabilities in the complex mix of software programs, packages, apps, and snippets driving all activities across an organization’s digital infrastructure.
At the heart of spotting and fixing these flaws is the widely used Common Vulnerability Scoring System (CVSS), maintained by a nonprofit called the Forum of Incident Response and Security Teams (FIRST). CVSS is currently in its fourth iteration since its launch in 2005.
Although it’s the most common indicator of a vulnerability’s severity, CVSS has long been subject to a host of criticisms, with periodic appeals from software providers and security researchers to jettison the system altogether and start anew. But after the latest round of criticism sparked conversation across the industry, experts say those complaints are mostly unwarranted. Experts who spoke with CyberScoop advocated staying the course with a system that, while imperfect, still provides the metrics defenders need to quickly grasp the overall severity of a vulnerability.
The CVSS score is “a way of capturing the properties of vulnerabilities in a systematic way,” said Sasha Romanosky, a senior policy researcher at the Rand Corporation who worked on the creation of the CVSS system 20 years ago. “When you talk about a vulnerability being exploited, there are different sort of ways and features about vulnerabilities that allow that to happen. The original question was, OK, let’s enumerate those different ways and consequences.”
Is CVSS getting swept up in NIST or NVD woes?
CVSS scores, along with the vulnerabilities themselves — referred to as CVEs (Common Vulnerabilities and Exposures) — are reported by CVE Numbering Authorities (CNAs). This information is published in two major databases widely used by cybersecurity defenders. CVSS scores can reach as high as 10.0 for the most critical vulnerabilities that organizations need to address urgently.
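FIRST’s CVSS specification also maps the 0.0–10.0 numeric score onto qualitative severity bands (None, Low, Medium, High, Critical), which is how most scanners and advisories label flaws. A minimal sketch of that mapping, using the band boundaries from the CVSS v3.x/v4.0 qualitative rating scale (the function name is illustrative):

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS base score (0.0-10.0) to FIRST's qualitative
    severity rating scale (CVSS v3.x/v4.0 band boundaries)."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"  # 9.0-10.0: the flaws organizations must address urgently
```

Under this scale, a 9.8 such as Log4Shell lands in the “Critical” band that regulators and auditors typically key on.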
The first and higher-profile database is the National Vulnerability Database (NVD), maintained by the National Institute of Standards and Technology (NIST). The other database is maintained by the MITRE Corporation, a federally funded R&D center.
The NVD has sparked many complaints over the years, particularly after a recent funding shortfall created a backlog in NVD’s cataloging of up to 40,000 CVEs per year. The virtual standstill was so significant that the Cybersecurity and Infrastructure Security Agency (CISA) helped NIST with what it calls a “vulnrichment” project.
Some experts say the troubles surrounding the NVD have caused a negative spillover onto CVSS.
“It’s not CVSS they’re complaining about,” said Pete Allor, a senior director at Red Hat. “Before losing their funding, they only had 11 analysts and were looking across 20,000 to 40,000 issues per year.”
As a consequence, NIST is scoring vulnerabilities based on limited knowledge, and, to be on the safe side, “they’re going for everything globally at its worst case,” Allor said. “Now people take that as, ‘it’s the national vulnerability database underneath NIST, so they should know.’ Well, the problem is they don’t. Then, regulators and auditors take that as a blanket score. ‘Oh, you have to be above this level and fix all of them.’ And that’s where the complaint comes from. It’s not that CVSS is bad; it’s the blind faith that someone’s CVSS score is immutable for everything.”
CVSS might be too complicated, yet imprecise
Not all experts think the issues with CVSS are byproducts of NVD’s woes. Some point to its foundation in quantitative analysis that has, from time to time, led to confusion and misinterpretation.
Critics say, “‘Look, the equations that you use don’t make any sense to me,’” Romanosky said. “’I don’t understand how you got them. This numbering that you have is sort of useless, irrelevant, distracting, unhelpful, you name it.’ That’s fine. And that kind of problem would exist no matter what; whenever you go from qualitative values to something numerical, you always have some conversion, and that will always be imperfect.”
Robert Fox, a CIO, CISO and CTO consultant, said the “one-off type of scoring” with CVSS makes some things especially “unclear.”
“It’s static,” he said. “It doesn’t take into account various other types of components in there to make it useful because a high score on the CVSS scale is not necessarily an imminent threat that needs to be addressed or patched.”
Jeff Williams, co-founder and CTO of Contrast Security, said there’s a problem with people “trying to use these risk rating systems for things that they’re not very good at. You see a lot of people in cybersecurity that are quants; they love metrics and data and precision, and even if you’re using one of these systems, I don’t care which one, it’s based on a bunch of factors that someone has to estimate.”
“People are looking for these systems to try to solve all those problems,” he continued. “And I’m just very practical about this. Let’s put a little work into ballparking the risk and then fix it if it falls into some level of risk we care about. But spending weeks on getting these numbers super precise is just a fool’s errand.”
Still, other experts think that CVSS critics might not understand the system as well as they should. “Most of the issues stem from not understanding what CVSS was designed for and trying to use it as a complete solution,” said Jerry Gamblin, research team lead for Cisco Vulnerability Management. “The best way to resolve the complaints is to consider CVSS as one of many tools you should use for measuring vulnerability risk.”
Alternatives or augmentations to CVSS
Over the years, several alternatives or augmentations to CVSS have been floated. Most recently, the United Kingdom’s National Cyber Security Centre (NCSC) released a paper extending a concept developed in 2007 by MITRE’s Steve Christey that classifies vulnerabilities as “forgivable,” “unforgivable,” or “unexploitable” based on a range of factors.
The NCSC said the paper “intends to generate discussion with vendors, and is a call on them to work to eradicate vulnerability classes and make the top-level mitigations” easier to implement.
“I will say that rating them based on forgivability can help an organization learn because now you’re looking backward to the root cause,” Williams said. “It’s not like, ‘Well, we had a vulnerability, and we should just fix it and stay on this hamster wheel of pain.’ It is, ‘Let’s look at why did that happen and is it something we can improve our process so that we prevent those vulnerabilities in the future.’”
Still, Williams thinks it’s a weird way of thinking about vulnerabilities.
“It has nothing to do with whether you should fix it or not, who you blame for it, or whether you should blame someone for it,” he said. “It’s an odd factor.”
EPSS entered the scene to measure exploitation likelihood
Another system has been developed to fill what some critics have said is missing in CVSS: the ability to gauge how likely a flaw is to be exploited. Also housed under FIRST, this Exploit Prediction Scoring System (EPSS) “is a data-driven effort for estimating the likelihood (probability) that a software vulnerability will be exploited in the wild.”
Romanosky, one of the developers of EPSS, said that “there was a kind of an important awareness a number of years ago that while CVSS may be a good measure of enumerating these features of vulnerability and establishing some number of severity, it wasn’t a good measure of exploitation, meaning it wasn’t always true that the vulnerabilities that scored the highest were those being exploited. For those people who are interested in actual exploitation, what vulnerabilities bad guys are exploiting in the world right now, you can’t use CVSS to help you figure that out. We needed a new mechanism. And from that grew EPSS.”
One of the issues with EPSS is that it is currently not included in the NVD or MITRE databases. “So, there’s some concern by NIST and maybe others that if they start to adopt it, but then it fails somehow, well, what do they do?” he added. “And part of that is just trust in its longevity.”
Red Hat’s Allor, who was on the FIRST board when it created the EPSS special interest group (SIG), said that EPSS “is a cool idea.” But, he added, “EPSS has the ability to see certain sectors from certain geographies and concentrates on certain sets of software. It’s very good on Microsoft. It’s really good on Adobe. But you get to routers like Juniper and Cisco and open-source [software], and it doesn’t have visibility. You have to understand it’s a good tool with limitations.”
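The pairing the experts describe, CVSS for severity and EPSS for exploitation likelihood, lends itself to a simple triage rule. The sketch below is purely illustrative: the function name and the thresholds (7.0 for CVSS, 0.1 for EPSS probability) are assumptions for the example, not values from any standard.

```python
def prioritize(cvss_base: float, epss_probability: float) -> str:
    """Illustrative triage rule combining a CVSS base score (severity)
    with an EPSS score (probability of exploitation in the wild).
    Thresholds are example values, not prescribed by FIRST."""
    severe = cvss_base >= 7.0            # CVSS "High" or "Critical" bands
    likely_exploited = epss_probability >= 0.1  # example EPSS cutoff
    if severe and likely_exploited:
        return "patch now"
    if severe or likely_exploited:
        return "patch this cycle"
    return "backlog"
```

A flaw scoring 9.8 on CVSS but with a near-zero EPSS probability would fall to “patch this cycle” rather than “patch now,” capturing Fox’s point that a high CVSS score is not necessarily an imminent threat.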
CVSS is here to stay
Despite the search for alternative scoring systems, most experts believe CVSS, which has undergone refinements for over two decades, should continue to be the cornerstone of vulnerability reporting.
“It’s been 20-some years now since it was first released,” Romanosky said. “It’s been adopted widely by government standards and commercial standards. It’s an international standard. It’s the de facto way now that people represent the severity of a vulnerability.”
“I recognize it’s imperfect,” he added, “but I have yet to see anyone in 25, 30 years who has come along with something better.”
Cisco’s Gamblin agreed. “I believe every organization should use multiple data sources when prioritizing vulnerabilities in its environment. However, I have yet to see a successful program that does not include the CVSS base score in its vulnerability evaluations.”