Kind of a misleading title, but as I sit in a Caribou Coffee with 6 other people who are all using their laptops to watch movies, surf, or do work, it strikes me that everyone outside the corporate walls seems to be using a Mac. Of those 6 people I see 4 MacBook Pros, 1 MacBook, and 1 lonely HP netbook. It could be that I’m on a college campus and that the Mac is the current “in” laptop to have, thanks to Apple’s genius marketing campaign.
The other side of that campaign is built on the misconception that OS X is more secure than Windows. In 2007 OS X had 243 total software flaws that required patching versus just 44 for XP and Vista combined (OK, mainly XP since no one is actually using Vista). Also, the current release, OS X 10.5.7, fixes nearly 70 security flaws. One thing to keep in mind if you’re looking purely at the numbers is that OS X often needs many security patches where Windows needs a single one. As anyone on a Linux box who has run the yum -y update command recently can tell you, each component of the system requires an individual update or patch. Since OS X is built on many of these open source components, it is no wonder the numbers seem to be in Windows’ favor.
If we can assume that OS X is as flawed as Windows, if not more so, then why aren’t we seeing a barrage of attacks against OS X? I think the right question is whether this is even a viable platform to attack. If the motive of current attacks is money in the form of credit cards, bank accounts, identities, etc., then we can speculate as to why not. Setting aside OS X’s low market share, if most Mac users are college students, would it even make sense to compromise a system to access a credit card with a $500 limit, one the student opened just to get a free $2 tee shirt? Or is it the fact that Windows is so easily compromised that it makes no sense to go after Macs? I’m going with the latter for now.
While I wrote this a while ago, I’m glad I held off on posting it. Since that time a study was conducted at the University of Virginia of incoming freshmen and which OS they picked for their laptop. Seems that Apple is starting to take a larger share of the higher-ed market...well, at least at Virginia. The issue around the total number of vulnerabilities in OS X was also covered today in an IBM ISS meeting I attended to discuss the findings from the X-Force 2008 Security Study.
When a vulnerability is identified within an organization, how is its risk measured? One popular method is to assess likelihood vs impact. Numbers are assigned to both factors and plotted on a heat matrix (shown below).
In case you haven’t already guessed, any vulnerabilities plotted in the first quadrant are rated high severity and given first priority for remediation. Quadrants two and four are ranked medium risk, while the third is low and last in the queue. There are a couple of flaws with this method. First, it is very, very difficult for a person to consistently assign ratings to vulnerabilities in relation to each other. So many environmental variables exist that could cause the rating to fluctuate that you end up either over- or under-thinking its value. Instead of a scatter plot of dots, you’d almost need a bubble surrounding each point indicating its margin of error. Although not entirely arbitrary, it’s probably the next best thing to it. Second, since only two factors are involved, likelihood and impact are boiled down to very high-level judgment calls instead of calculated, consistent valuations. As the graph shows, there are only three possible risk rankings: high, medium, and low. This leads the assessor to become complacent and risk averse, placing most vulnerabilities in the “medium” severity quadrants.
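To see how coarse this method really is, here is a minimal sketch of the quadrant rating described above. The 1–10 scales and the midpoint cutoff are my own assumptions for illustration; the point is that three of the four quadrants collapse into just two ratings, so half of all possible findings land on “medium”.

```python
def quadrant_rating(likelihood: int, impact: int, midpoint: int = 5) -> str:
    """Map a (likelihood, impact) pair to one of only three risk levels."""
    high_likelihood = likelihood > midpoint
    high_impact = impact > midpoint
    if high_likelihood and high_impact:
        return "high"    # quadrant 1: high likelihood, high impact
    if high_likelihood or high_impact:
        return "medium"  # quadrants 2 and 4: only one factor is high
    return "low"         # quadrant 3: both factors low

# Count how many cells of a 10x10 matrix end up rated "medium".
ratings = [quadrant_rating(l, i) for l in range(1, 11) for i in range(1, 11)]
print(ratings.count("medium"))  # 50 of the 100 cells
```

With only two inputs and three outputs, the “medium” bucket dominates no matter how carefully the analyst plots each point.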
The solution? Enter CERT’s Vulnerability Response Decision Assistance (VRDA) Framework. They have published a proposal for further refining the process of vulnerability prioritization, as well as possible courses of action. Each vulnerability is given a value through a pre-defined set of criteria, or as CERT calls them, “facts”. To summarize:
Vulnerability Facts
World Facts
Constituency Facts
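To make the contrast with the heat matrix concrete, here is a sketch of what VRDA-style fact scoring could look like. The fact names and value sets below are hypothetical stand-ins for illustration only (CERT’s paper defines the real ones); what matters is that each fact has a fixed, specific set of values with no low-medium-high scale to hide behind.

```python
# Hypothetical facts, one from each VRDA category. Each fact has a closed
# set of specific values rather than an ordinal low/medium/high scale.
FACTS = {
    "access_required": {"remote", "local_network", "physical"},      # vulnerability fact
    "public_attention": {"none", "reported", "exploited_in_wild"},   # world fact
    "population_affected": {"none", "few", "most"},                  # constituency fact
}

def validate(assessment: dict) -> dict:
    """Reject any value outside a fact's defined set, so an analyst
    cannot fall back on an undefined 'medium' middle ground."""
    for fact, value in assessment.items():
        if value not in FACTS[fact]:
            raise ValueError(f"{value!r} is not a defined value for {fact}")
    return assessment

example = validate({
    "access_required": "remote",
    "public_attention": "exploited_in_wild",
    "population_affected": "most",
})
print(example)
```

Because every value must come from a pre-defined set, two analysts scoring the same vulnerability are far more likely to land on the same answer.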
Although I feel some of these facts are irrelevant, this greatly improves upon the original method. The most obvious improvement is that there are not only more criteria to evaluate, but they are consistent and specific. Also, you will notice that none of these use the standard low-medium-high ratings. The article explains that this was purposeful, to “reduce the tendency of analysts to select the median ‘safe’ value”.
The article also presents a decision tree for action items after you have performed your scoring. Although I won’t go into great detail here, I think it is a novel concept and something that should be developed further. Every organization will need to sit down and plan its own, as models will vary by industry and size.
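To give a feel for the idea, here is a toy decision tree in the spirit of VRDA’s action model: fact values drive a branching choice of response. The branches and action names are my own illustrations, not CERT’s; a real tree would be tailored to the organization’s industry and size, as noted above.

```python
def choose_action(exploited_in_wild: bool, population_affected: str) -> str:
    """Walk a tiny, hypothetical decision tree from fact values to an action."""
    if exploited_in_wild:
        if population_affected == "most":
            return "emergency patch"   # active exploitation, wide exposure
        return "patch in next cycle"   # active exploitation, limited exposure
    if population_affected == "none":
        return "no action"             # not exploited, nothing affected
    return "monitor"                   # not exploited, but keep watching

print(choose_action(True, "most"))   # emergency patch
print(choose_action(False, "none"))  # no action
```

The value of the tree is that the same facts always lead to the same action, so the response is defensible and repeatable rather than a gut call.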