

Rethinking Vulnerability Scoring
General | | 1. December, 2008

When a vulnerability is identified within an organization, how is its risk measured? One popular method is to assess likelihood vs impact. Numbers are assigned to both factors and plotted on a heat matrix (shown below).

Heatmap quadrants with severity ratings (figure)

In case you haven’t already guessed, vulnerabilities plotted in the first quadrant are rated as high severity and given first priority for remediation. Quadrants two and four are ranked as medium risk, while the third quadrant is low and last in the queue.

There are a couple of flaws with this method. First, it is very difficult for a person to consistently rate vulnerabilities in relation to each other. So many environmental variables exist that could cause the rating to fluctuate that you end up either over- or under-thinking its value. Instead of a scatter plot of dots, you’d almost need a bubble surrounding each point indicating the margin of error. Although not entirely arbitrary, it’s probably the next best thing to it. Second, since only two factors are involved, likelihood and impact are boiled up into very high-level thought processes instead of calculated, consistent valuations. As the graph shows, there are only three possible risk rankings: high, medium, and low. This leads the assessor to become complacent and risk averse, placing most vulnerabilities in the “medium” severity quadrants.
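
A minimal sketch of the quadrant logic described above, assuming a 1-10 scale for both factors and a cutoff at the midpoint (neither of which the heat matrix itself mandates):

    # Hypothetical quadrant mapping for the likelihood-vs-impact matrix.
    # The 1-10 scales and the midpoint cutoff are assumptions for illustration.
    def quadrant_severity(likelihood: float, impact: float, midpoint: float = 5.0) -> str:
        """Map a (likelihood, impact) point to its quadrant's severity rating."""
        high_likelihood = likelihood >= midpoint
        high_impact = impact >= midpoint
        if high_likelihood and high_impact:          # quadrant 1
            return "high"
        if not high_likelihood and not high_impact:  # quadrant 3
            return "low"
        return "medium"                              # quadrants 2 and 4

    if __name__ == "__main__":
        for lik, imp in [(8, 9), (2, 9), (3, 2)]:
            print((lik, imp), "->", quadrant_severity(lik, imp))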

The solution? Enter CERT’s Vulnerability Response Decision Assistance (VRDA) Framework. They have published a proposal for further refining the process of vulnerability prioritization, as well as possible courses of action. Each vulnerability is given a value through a pre-defined set of criteria, or as CERT calls them, “facts”. To summarize:

Vulnerability Facts

  • Security Product – Does the vulnerability affect a security product? (Yes/No)
  • Network Infrastructure Product – Does the vulnerability affect a network infrastructure product? (Yes/No)
  • Multiple Vendors – Does the vulnerability affect multiple vendors? (Yes/No)
  • Impact 1 – What is the general level of impact of the vulnerability on a system? (Low, Low-Medium, Medium-High, High)
  • Impact 2 – What are the levels of impact for confidentiality, integrity, and
    availability of the vulnerability on a system? (Low, Low-Medium, Medium-High,
    High)
  • Access Required – What access is required by an attacker to be able to exploit the
    vulnerability? (Routed, Non-routed, Local, Physical)
  • Authentication – What level of authentication is required by an attacker to be able
    to exploit the vulnerability? (None, Limited, Standard, Privileged)
  • Actions Required – What actions by non-attackers are required for an attacker to
    exploit the vulnerability? (None, Simple, Complex)
  • Technical Difficulty – What degree of technical difficulty does an attacker face in
    order to exploit the vulnerability? (Low, Low-Medium, Medium-High, High)

World Facts

  • Public Attention – What amount of public attention is the vulnerability receiving?
    (None, Low, Low-Medium, Medium-High, High)
  • Quality of Public Information – What is the quality of public information available
    about the vulnerability? (Unacceptable, Acceptable, High)
  • Exploit Activity – What level of exploit or attack activity exists? (None, Exploit
    exists, Low activity, High activity).
  • Report Source – What person or group reported the vulnerability?

Constituency Facts

  • Population – What is the population of vulnerable systems within the
    constituency? (None, Low, Low-Medium, Medium-High, High)
  • Population Importance – How important are the vulnerable systems within the
    constituency? (Low, Low-Medium, Medium-High, High)

Although I feel some of these facts are irrelevant, this greatly improves upon the original method. The most obvious improvement is that there are not only more criteria to evaluate, but they are consistent and specific. Also, you will notice that none of these use the standard low-medium-high ratings. The article explains that this was purposeful, to “reduce the tendency of analysts to select the median ‘safe’ value”.
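
Just to make the idea concrete, here is a rough sketch (not CERT’s implementation) of how the facts above could be captured as structured data so every analyst scores against the same allowed values. The fact names and scales come from the lists above; everything else is illustrative.

    # Illustrative only: a subset of the VRDA facts and their allowed values.
    VRDA_FACTS = {
        # Vulnerability facts
        "security_product":      ["Yes", "No"],
        "multiple_vendors":      ["Yes", "No"],
        "impact":                ["Low", "Low-Medium", "Medium-High", "High"],
        "access_required":       ["Routed", "Non-routed", "Local", "Physical"],
        "authentication":        ["None", "Limited", "Standard", "Privileged"],
        "technical_difficulty":  ["Low", "Low-Medium", "Medium-High", "High"],
        # World facts
        "public_attention":      ["None", "Low", "Low-Medium", "Medium-High", "High"],
        "exploit_activity":      ["None", "Exploit exists", "Low activity", "High activity"],
        # Constituency facts
        "population":            ["None", "Low", "Low-Medium", "Medium-High", "High"],
        "population_importance": ["Low", "Low-Medium", "Medium-High", "High"],
    }

    def validate(assessment: dict) -> list:
        """Return a list of problems: unknown facts or values outside the allowed scales."""
        problems = []
        for fact, value in assessment.items():
            allowed = VRDA_FACTS.get(fact)
            if allowed is None:
                problems.append(f"unknown fact: {fact}")
            elif value not in allowed:
                problems.append(f"{fact}: {value!r} not in {allowed}")
        return problems

    if __name__ == "__main__":
        sample = {"impact": "Medium-High", "exploit_activity": "Exploit exists",
                  "population_importance": "Medium"}  # "Medium" is not an allowed value
        print(validate(sample))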

The article also presents a decision tree for action items after you have performed your scoring. Although I won’t go into great detail here, I think it is a novel concept and something that should be developed further. Every organization will need to sit down and plan its own, as models will vary by industry and size.

Reputation-based filters?
General | | 19. November, 2008

While looking through IPS alerts today I thought about the patterns of activity we are seeing on our network. Since we do not have many Internet-facing systems, we mainly see malware brought in by our users’ web browsing. While URL filters and proxies may help cut down on the “bad” sites users are allowed to visit, how many malware-serving sites go unblocked if you have a URL list filter alone as opposed to a combination of list and reputation-based filtering? The answer is…many. If these sites were included in URL list filters, then I wouldn’t see them as IPS alerts, right?

So all we need to do is stop the users from going to “bad” sites, right? After investigating some of these alerts in more detail, we noted that the “bad sites” were actually iframes/links to the malware-serving sites hosted off of legitimate sites. So the question is, should access to these “bad sites” have been allowed in the first place? If you have reputation-based filtering in addition to URL lists, you have a better chance that it wouldn’t be. But if you have URL lists only, chances are it will be allowed due to the near-constant flux of where these sites reside. In the end it isn’t necessarily that the users aren’t being security conscious; the trend seems to point towards legitimate sites serving ads (which they do not control) that contain links to malware-serving sites.
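
A hypothetical sketch of that point: a static URL list alone misses hosts that rotate quickly, while adding a reputation score can still catch them. The blocklist, reputation source, and threshold below are made-up placeholders, not any particular vendor’s product.

    from urllib.parse import urlparse

    # Placeholder data: in practice these would come from a URL list vendor
    # and a reputation feed, not hard-coded dictionaries.
    STATIC_BLOCKLIST = {"known-bad.example.com"}
    REPUTATION = {"fresh-malware-host.example.net": 92}  # 0 = clean, 100 = known malicious

    def allow(url: str, reputation_threshold: int = 70) -> bool:
        host = urlparse(url).hostname or ""
        if host in STATIC_BLOCKLIST:
            return False  # caught by the URL list
        if REPUTATION.get(host, 0) >= reputation_threshold:
            return False  # caught only by reputation
        return True

    if __name__ == "__main__":
        # An iframe on a legitimate page pointing at a newly stood-up malware host:
        print(allow("http://fresh-malware-host.example.net/ad.js"))  # False only with reputation data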

While the network will always be a target, the end user is much more susceptible to compromise. So what do we do when they leave our network, go to Starbucks, and jump on the AT&T wireless network? How does our network list/reputation-based filter follow them? This is where vendors seem to diverge. Some seem to be going the route of agent software which syncs policies with the network-based proxy. Others seem to favor a proxy “in the cloud” model. I’m not quite sure which one works best and is most effective. Anyone have any thoughts?

wide.opus1.com – fail
General | | 13. November, 2008

Upon reviewing the web logs to see if anyone other than us and the bots were viewing our pages, I couldn’t help but notice that we received a request from a system named wide.opus1.com. One of two things happened: either Joel Snyder from Opus One googled his own name (or someone at his company did) and read the post on security agility, or he has a system that does this for him or his organization.

Joel, if it is the latter and that system has any level of intelligence please contact me at deron@malos-ojos.com. We have a specific need for such a system at the law firm…mainly due to the size of our firm and the recent presidential election results.

UPDATE: Epic fail Joel Snyder, epic fail.  Either you haven’t reviewed the results from your bot that downloaded the page, or it was some random person at Opus1 looking at the site.

Effective Vulnerability Management — A Preface

While it’s fresh in my mind, I thought I would write a little bit tonight about implementing a sound vulnerability management program within medium- to large-sized businesses. The good folks here at Malos Ojos and I are only starting to improve upon the VM processes in our day jobs, so the following posts in this category will be a mixture of me thinking out loud; a splash of advice mixed with a dash of growing pains as we step down this well-traveled-but-not-quite-understood path.

Vulnerability management means many things to different people. Put simply, it is the act of obtaining a (semi?)-accurate depiction of risk levels in the environment, with the eventual goal of bringing those risks down to business-tolerable levels. At many companies, this translates into the following:

  1. A lone security analyst, sometime between the hours of 6pm-7am, will scan the internal/external networks with a vulnerability scanner of his/her choosing. The resulting output will be a 9MB XML or PDF file that is the equivalent of approximately 5,000 printed pages.
  2. This file is sent out via email to IT staff (print form would result in an arboreal holocaust which eventually leads to the Greenpeace reservists being called up to active duty…)
  3. IT staff member opens the attachment and notices a couple of things: many confusing words/numbers, and systems that don’t belong to them. The attachment is closed and never opened again.
  4. GOTO 1

If you look at the people, technology, and processes used to deploy the vulnerability management solution in the scenario described above, I wouldn’t rate any of the three pillars very high. Well, maybe people; at least this company HAS a security analyst…

Some of you will also point out the laziness of the IT staff for closing the document prematurely and not asking the analyst for help making heads or tails of the report. While this is true, it is our job as security professionals to anticipate this laziness and create a report that eases their digestion of the information. That’s why they pay us the big bucks *cough* 😀

Back to my point, I’ve sat down and mused upon what a good vulnerability management program should consist of. Here are my notes thus far:

  1. Inventory of assets
  2. Data/system classification scheme
  3. Recurring process for identification of vulnerabilities -> risk scoring (see the sketch after this list)
  4. LOLCats
  5. Remediation and validation
  6. Reporting and metrics
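
As a rough illustration of how items 1-3 and 6 could hang together, here is a toy sketch that joins an asset inventory and classification scheme to scanner findings and produces a prioritized list. The weights and the multiply-and-sort scheme are assumptions for illustration, not a recommendation.

    # Toy data: an asset inventory with classifications and a few scanner findings.
    ASSETS = {
        "dc01":  {"classification": "critical", "owner": "infra"},
        "kiosk": {"classification": "low",      "owner": "facilities"},
    }
    CLASS_WEIGHT = {"critical": 3.0, "high": 2.0, "medium": 1.0, "low": 0.5}
    FINDINGS = [
        {"asset": "dc01",  "vuln": "MS08-067",         "severity": 9.0},
        {"asset": "kiosk", "vuln": "weak SSL ciphers", "severity": 4.0},
    ]

    def prioritized(findings, assets):
        """Score each finding by severity times the asset's classification weight."""
        scored = []
        for f in findings:
            weight = CLASS_WEIGHT[assets[f["asset"]]["classification"]]
            scored.append({**f, "risk": f["severity"] * weight})
        return sorted(scored, key=lambda f: f["risk"], reverse=True)

    if __name__ == "__main__":
        for f in prioritized(FINDINGS, ASSETS):
            print(f["asset"], f["vuln"], f["risk"])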

The list above will be revised over time, almost assuredly as soon as I arrive at the office tomorrow morning to find my coworkers have read this post. Because I’m a tad hungry, but more importantly because I don’t get paid by the word (Schneier, I’m looking at you), I will end the post here.

Fear not true believers, I have many more posts to come that expand on my vague aforementioned scribbles.

Security Videos Cont.
Videos | | 12. November, 2008

My contribution to security related sites with videos/tutorials is Security-Freak.net.  I originally used the site for help with socket programming but found several tutorials on common security tools (e.g., dig, netcat, airodump-ng).  Check it out and let me know what you think.

/edit It appears as though the site decided to go down in conjunction with my post.  If you receive the Service Unavailable message, try again later.

/edit2 The site is back up.

Other hacking tutorials on the web
Videos | | 12. November, 2008

For those of you who are getting sick of hearing Deron’s voice via Vimeo tutorial, I just wanted to post an honorable mention to http://www.irongeek.com.  It is published by an individual with an affinity for computer security and getting buff. This is laughable on its own, but throw in the creator’s speech impediment, and you have yourself an extra dimension of funny.

Regardless, the “Hacking Illustration Videos” he has posted on his site are pretty informative, and there is enough variety in topics that you’re guaranteed to find at least one that will interest you.

Security agility?
General | | 11. November, 2008

In listening to Joel Snyder from Opus One discuss security agility at the most recent Information Security Decisions conference in Chicago, I had a few thoughts.  While I agree with some of his points, I don’t think he fully hits the mark on the “security agility” topic.  One of the main points he makes during his “no-punctuation-style” presentation is that security must become more agile to keep up with the business.  The reasoning is that the business wants to innovate and be agile, and therefore security must be just as agile in order to not slow that “innovation” down.  But I would ask: how many companies are fine with average and don’t innovate?  Maybe they aren’t innovative because they have the wrong people at the helm, or maybe innovation in their industry or business model is an unacceptable risk.  I do agree that security must remain flexible…but if the business isn’t agile, are we spending too much on creating an unnecessarily agile security function?

The one thing I did take away was that security must remain flexible and not hold the business back when it wants to get a little creative…which I think is different from being innovative.  The general comment was also made at the conference that security groups are moving away from daily operational tasks and closer to the business and risk management.  From my point of view, I agree that this is happening to some extent.  So if this is true, and security is now performing risk management activities for the organization, of course it is going to slow the business down.  I think the root cause of that slowdown is that security may not have an adequate infrastructure in place to have the comfort level to say, “sure, go ahead, because I know I can prevent, detect, or react to any possible security issues as a result of your project.”

Basing this on what I know about other organizations and their “security preparedness,” I would say some are a long way from reaching that comfort zone.  In fact, all the talk of where organizations are today made me feel good that we seem to be a bit ahead of the curve…which is a good thing given how new our security function is.

In the end I believe companies are not that innovative, and a strong security infrastructure which is adaptive to the needs of the business will win out over creating a truly “agile” security function.  How many times was the word agile used?  Not to mention the use of multiple vendors…how many people have trouble getting their staff trained on a few products from different vendors?  And if the ISO 27000 series keeps stating that people, process, and technology are the cornerstones of good management practices, and I keep changing technology, how much work did we just add to the people and process side of that equation?

nvidia gtx 260 and a westinghouse lm2410 monitor
General | | 11. November, 2008

So I realize this may seem very trivial, but I bought a new graphics card (EVGA GTX 260) to replace an 8800 GTS and two things happened that I did not expect.  First, the card is very long and barely fit in my case (Antec 900)…which is surprising, as this case has a decent amount of room inside.  Guess all those stream processors take up some serious real estate.  It also needs a 500-watt power supply minimum with two 6-pin PCIe power connectors (or an unused pair of IDE power connectors).  Second, when I hooked it up to my main monitor, a Westinghouse LM2410, and installed the most recent nvidia driver (178.24), the text looked jagged, blurry, and fuzzy.  A second monitor (Dell 1905FP) was hooked up to the card as well.

When I looked at the nvidia control panel I noticed that the Westinghouse was being treated as an HDTV.  And although the resolution was set to 1920×1200 (native for the screen) it was not displaying 1920×1200.  BTW, this monitor connects via an HDMI (monitor) to DVI (card) cable.  The Dell monitor, being treated by the card as a monitor, looked perfect the entire time.  I also removed the nvidia drivers, and with the Windows XP drivers the Westinghouse looked fine (although it had no graphics acceleration).  Looking at a few posts on the nvidia and other forums I found the answer.

The easy solution was posted here and here.

The only change is that in the latest nvidia driver the inf, called nv4_disp.inf, needs to have the OverrideEdidFlags0 entry under the correct processor heading for the card (in my case under the GT2x heading of the [nv_SoftwareDeviceSettings] area).  You will also need to ensure the first 4 bytes of the reg key are correct for your monitor.  Use Phoenix EDID Designer to extract the current monitor EDID values, look at bytes 8-11, and make sure the values you are entering in the inf match (a small sketch for pulling those bytes out of an EDID dump follows the list below).  If you have two monitors you’ll need to figure out which is the correct one in Phoenix.  The most popular EDID values appear to be:

Viewsonic VX2835wm – 5A,63,1F,0F
Westinghouse LM2410 – 5c,85,80,51
LG – 1E,6D,3F,56
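
If you want to double-check those bytes yourself, here is a small sketch that reads a raw EDID dump (Phoenix EDID Designer can export one) and prints bytes 8-11 in the same comma-separated form used above; the file name is just a placeholder.

    import sys

    def edid_id_bytes(path: str) -> str:
        """Return bytes 8-11 of a raw EDID block (manufacturer ID and product code) as hex."""
        with open(path, "rb") as f:
            edid = f.read()
        return ",".join(f"{b:02X}" for b in edid[8:12])

    if __name__ == "__main__":
        print(edid_id_bytes(sys.argv[1] if len(sys.argv) > 1 else "monitor.edid"))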

I realize this is posted elsewhere and in other forums; I’m just hoping this helps someone find the answer quickly, without rebooting their system a million times trying to fix the issue.

Part 3 – Getting the password hashes
Videos | | 11. November, 2008

Part 3 of the password cracking series is an overview of some of the tools and techniques used to obtain password hashes from an MS Windows system.


Part 3 – Getting the password hashes – TDC477 from Deron Grzetich on Vimeo.

Part 2 – MS password hash overview
Videos | | 11. November, 2008

The second part of the password cracking video series.  This section provides an overview of the MS LM and NTLM password hash formats and where these are stored on local systems and on Active Directory domain controllers.
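
For the curious, a minimal sketch of the NT hash construction covered in the video: MD4 over the UTF-16LE encoding of the password. It assumes the pycryptodome package for MD4, since many newer OpenSSL builds no longer expose MD4 through hashlib.

    from Crypto.Hash import MD4  # pip install pycryptodome

    def nt_hash(password: str) -> str:
        """Compute the NT (NTLM) hash: MD4 of the UTF-16LE encoded password."""
        h = MD4.new()
        h.update(password.encode("utf-16le"))
        return h.hexdigest().upper()

    if __name__ == "__main__":
        # Well-known value: the NT hash of an empty password.
        print(nt_hash(""))  # 31D6CFE0D16AE931B73C59D7E0C089C0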

Download the slides used in the presentation HERE


Part 2 – MS password hash overview from Deron Grzetich on Vimeo.
