So I’m on a kick of attending conferences again and happened to be close by at a client while Mandiant was holding its annual incident response conference, MIRCon, in Alexandria, VA. This was only the second annual conference; however, like DerbyCon, you’d never know it was fairly new based on the speakers and the overall quality. I took notes on some of the more interesting topics and wanted to share those in a post to the site. As a side note, pony tails (or as I was corrected, pwnie tails) and suits seemed to be all the rage at this conference. I don’t think I’ll be able to grow my hair out in time for next year, so I guess I’ll just have to feel out of place.
The keynotes for the two-day conference came from Richard Clarke and Michael Chertoff. In the first keynote, Clarke used the acronym CHEW to describe the current set of threats: crime, hacktivism, espionage, and warfare. While not groundbreaking, I tend to enjoy acronyms I find funny. A few snippets that were interesting: “we tend to over classify in the government, and sometimes we use that to hide mistakes”, and “we had the software purchased that would have caught the private who accessed the cables (now known as WikiLeaks), but it was on a shelf and not installed”. Chertoff’s message was similar in terms of describing the threat types, but he also pushed the need for better information and intelligence sharing among response and counterintelligence groups; there is a need to understand the motives and methods of the attackers. While I do think “some” sharing occurs, it is most likely limited to DIB (defense industrial base) type orgs and the government. In the commercial sector it would be tough to ask SecureWorks CTU to share with Mandiant, or the other way around. Maybe the issue is that they have too much data, but it is also this intelligence that differentiates (or doesn’t) the competitors in this space. The overall message of his keynote, though, was that the intelligence (shared or not) needs to be less about technology and more about the human element of the threat. When he was talking about sharing I did get a quick flash of Joseph K. Black on his MegaCommunity soapbox (which hasn’t come up recently, so I assume someone new is running his Twitter account)…I actually saw his Twitter profile pic in my head and now I’m scared.
Following the keynotes were various speakers, with the sessions broken into management and technical tracks. While I wasn’t able to attend all of them due to calls and client commitments, I did catch a few that were interesting. Tony Sager from NSA talked about his experiences contracting with the Red Team to perform assessments. One comment that came out of this was when he asked the Red Team whether a well managed network was a harder target, and the obvious answer was yes. But thinking about client engagements and their lack of IT management/operational maturity, I couldn’t help but be discouraged. The related comment was that “defense-in-depth has become a crutch”, something we do because we don’t know better or can’t afford it…but it doesn’t solve the management problems of IT. And the solution of good management and inclusion of security in what IT does, as we’ve said a million times, needs to come from the top of IT and not from the infosec level. Even in orgs where this is the case I don’t see that buy-in trickle down to the staff level, which is discouraging.

On the topic of metrics, which is always enjoyable, was a presentation by Grady Summers on how and what to measure to track your incident response. I liked the intro on “what makes good metrics”, which used the Security Metrics book by Andrew Jaquith as the list of “good” measures. The unfortunate thing is that consistency, context, and automation seem to be the biggest issues. That aside, there is a lot that you could, and probably don’t, measure and report on. Most orgs start with the most obvious of the eight or nine measures you could take, which is “time to review”: the time from when a ticket or alert is opened to when it is acknowledged. If incidents are tracked in a ticketing system this could be pulled and reported on, but in some cases the info just isn’t tracked at all, and measurement becomes very difficult and time consuming (read: not a good metric). I think we need to get there, but my concern is that many orgs are still working on getting monitoring off the ground and mature enough to identify the alerts or events that require investigation. This approach to metrics would be great if you had a very mature monitoring and response function…sorry, just not seeing too many of these today.
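As a rough illustration of that “time to review” measure, here is a minimal sketch, assuming your ticketing system can export incidents with created and acknowledged timestamps (the field names here are hypothetical and would need to match whatever your system actually exports):

```python
# Minimal sketch: computing a "time to review" metric from exported ticket data.
# Assumes each record has hypothetical "created" and "acknowledged" ISO-8601
# timestamps; adjust the field names to your ticketing system's export format.
from datetime import datetime
from statistics import mean, median

tickets = [
    {"id": "INC-001", "created": "2011-10-10T09:00:00", "acknowledged": "2011-10-10T09:25:00"},
    {"id": "INC-002", "created": "2011-10-10T13:10:00", "acknowledged": "2011-10-10T15:40:00"},
    {"id": "INC-003", "created": "2011-10-11T08:05:00", "acknowledged": "2011-10-11T08:20:00"},
]

def minutes_to_ack(ticket):
    # Time from the alert/ticket being opened to someone acknowledging it.
    created = datetime.fromisoformat(ticket["created"])
    acked = datetime.fromisoformat(ticket["acknowledged"])
    return (acked - created).total_seconds() / 60

times = [minutes_to_ack(t) for t in tickets]
print(f"Mean time to review:   {mean(times):.1f} minutes")
print(f"Median time to review: {median(times):.1f} minutes")
```

The hard part, as noted above, isn’t the arithmetic; it’s getting those timestamps recorded consistently in the first place.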
Finally, there was a panel discussion on in-sourcing versus out-sourcing your CIRT. While the panelists came from orgs of different sizes and industries, the message was quite similar: out-sourcing IR entirely is not a solid option, but augmenting your team with a third party is. Internal business knowledge and direct management over the responders are required to make the response function successful. The topic of MSSPs and monitoring came up as well, with a similar message: either you throw it to the MSSP because you have nothing and need something now, or you use them to augment staff (e.g. 24×7 monitoring). However, the message that this should ultimately be an internal function was pretty strong. Again, as you move into monitoring your internal environment, not just the perimeter, you’re going to need people who understand the business and the IT environment. MSSPs serve a purpose, but keep in mind they are “a SOC”, not “your SOC”.
All in all it was a good conference and I’ll definitely try to make it back next year. As another side note, Apneet Jolly was not present at this conference…I’m surprised, since I assume that’s all he does for a living given that I see him at every one.