When a vulnerability is identified within an organization, how is its risk measured? One popular method is to assess likelihood vs impact. Numbers are assigned to both factors and plotted on a heat matrix (shown below).
In case you haven’t already guessed, vulnerabilities plotted in the first quadrant are rated high severity and given first priority for remediation. Quadrants two and four are ranked medium risk, while the third is low and last in the queue. There are a couple of flaws with this method. First, it is very difficult for a person to consistently assign ratings to vulnerabilities relative to one another. So many environmental variables could cause a rating to fluctuate that you end up either over- or under-thinking its value. Instead of a scatter plot of dots, you would almost need a bubble surrounding each point to indicate its margin of error. Although not entirely arbitrary, it’s probably the next best thing to it. Second, since only two factors are involved, likelihood and impact are boiled down to very high-level judgments instead of calculated, consistent valuations. As the graph shows, there are only three possible risk rankings: high, medium, and low. This leads the assessor to become complacent and risk averse, placing most vulnerabilities in the “medium” severity quadrants.
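The quadrant logic above can be sketched in a few lines. This is a hypothetical illustration of the likelihood-vs-impact rating; the 1–10 scale, the midpoint threshold, and the function name are my own assumptions, not part of any standard.

```python
# Hypothetical sketch of the likelihood-vs-impact quadrant rating.
# The 1-10 scale and midpoint threshold are illustrative assumptions.

def quadrant_rating(likelihood: int, impact: int, midpoint: int = 5) -> str:
    """Map a (likelihood, impact) pair on a 1-10 scale to a severity rating.

    Quadrant I   (high likelihood, high impact) -> high
    Quadrants II and IV (only one factor high)  -> medium
    Quadrant III (both factors low)             -> low
    """
    high_likelihood = likelihood > midpoint
    high_impact = impact > midpoint
    if high_likelihood and high_impact:
        return "high"
    if high_likelihood or high_impact:
        return "medium"
    return "low"

print(quadrant_rating(8, 9))  # both high -> "high"
print(quadrant_rating(2, 9))  # mixed     -> "medium"
print(quadrant_rating(2, 3))  # both low  -> "low"
```

Note how coarse the output is: with only three possible results, most real-world inputs drift toward “medium,” which is exactly the complacency problem described above.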
The solution? Enter CERT’s Vulnerability Response Decision Assistance (VRDA) Framework. CERT has published a proposal for further refining the process of vulnerability prioritization, as well as possible courses of action. Each vulnerability is given a value through a predefined set of criteria, or as CERT calls them, “facts”. To summarize:
Vulnerability Facts
World Facts
Constituency Facts
Although I feel some of these facts are irrelevant, this approach greatly improves upon the original method. The most obvious improvement is that there are not only more criteria to evaluate, but that they are consistent and specific. You will also notice that none of these use the standard low-medium-high ratings. The article explains that this was purposeful, to “reduce the tendency of analysts to select the median ‘safe’ value”.
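To make the fact-based idea concrete, here is a minimal sketch of combining per-fact values into a single score, in the spirit of VRDA. The fact names, value scales, and weights below are hypothetical examples of mine, not CERT’s published definitions.

```python
# Minimal sketch of fact-based scoring in the spirit of VRDA.
# Fact names, 0-3 value scales, and weights are hypothetical examples.

FACT_WEIGHTS = {
    "attack_complexity": 2.0,   # a vulnerability fact (hypothetical)
    "public_exploit": 3.0,      # a world fact (hypothetical)
    "asset_exposure": 2.5,      # a constituency fact (hypothetical)
}

def score_vulnerability(facts: dict) -> float:
    """Combine per-fact values (each on its own scale, rather than a single
    low/medium/high axis) into one weighted score."""
    return sum(FACT_WEIGHTS[name] * value for name, value in facts.items())

score = score_vulnerability(
    {"attack_complexity": 1, "public_exploit": 3, "asset_exposure": 2}
)
print(score)  # 2.0*1 + 3.0*3 + 2.5*2 = 16.0
```

Because each fact is scored on its own specific scale, the assessor has no single “middle” value to hide behind, which is the point the article makes.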
The article also presents a decision tree for action items after you have performed your scoring. Although I won’t go into great length here, I think it is a novel concept and something that should be developed further. Every organization will need to sit down and design its own, as models will vary by industry and size.
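As a sketch of what such an organization-specific tree might look like, here is a tiny decision function mapping a score to an action item. The thresholds and actions are invented for illustration; a real tree would branch on individual facts, not just a total score.

```python
# Invented example of a decision tree from score to action item.
# Thresholds and action names are illustrative, not from the VRDA article.

def decide_action(score: float) -> str:
    """Walk a tiny decision tree with arbitrarily chosen thresholds."""
    if score >= 15:
        return "patch immediately"
    if score >= 8:
        return "schedule remediation this sprint"
    return "track and re-evaluate next quarter"

print(decide_action(16.0))  # "patch immediately"
print(decide_action(10.0))  # "schedule remediation this sprint"
print(decide_action(2.0))   # "track and re-evaluate next quarter"
```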