In looking back at the last post about tapping your network prior to an incident, I thought to myself: why did I stop there…or more appropriately, why was I so focused on having the right infrastructure in place? Perhaps it was out of frustration at showing up at a client and wondering why it was so difficult to quickly start monitoring the environment to get a handle on the issue we may, or may not, have been dealing with. But when I thought about it, what I really needed was a DeLorean with a flux capacitor and a crazy Doc Brown of my own so I could travel back in time (no, I don't own a vest jacket or a skateboard, but I may on occasion rock some Huey). I don't have a time machine and can't go back in time, but what I can do is recommend that we start capturing information that may be useful in an incident response PRIOR to one happening. I don't mean to dismiss the ability to rapidly scale up your monitoring efforts during a response, even if that means dropping new tools into the environment or calling on third parties to assist, but I would be doing you an injustice if I didn't discuss what you should be collecting today.
Normally I'd be frustrated with myself for recommending that you collect information, logs, data, etc. and then do nothing with them. But is that really a bad thing? I thought back to when I started at the law firm many years ago: when I asked what we monitored, I was told we had network perimeter logs and that they were being sent to an MSSP for storage. Were we doing anything with these logs? No. Was the MSSP doing "some science" to them and telling us bad things may have been happening? No. So at first I questioned the value, and sanity, of the decision to capture our information but do nothing with it. The more I thought about it, though, the more I understood that IF something had happened I would, or at least might, have the information I needed to start my investigation. Forensics folks will know this as the concept of the "order of volatility," or how quickly something is no longer available for analysis. I'd say that, aside from system memory, network connections are very high up on that list. If I weren't at least capturing and storing them somewhere, they would be lost to the past because of their volatility. So it isn't such a bad thing to collect data just in case.
I'd also like to temper the aspirations of folks who want to run out and log everything for the sake of logging everything. What I'd much rather see is that your logging and data collection be founded on sound principles. What you choose to focus on should be based on your risk, or what you're trying to protect/prevent, and I hope to highlight some of this in the rest of this post. As an example, if you are logging successful authentications or access to data, that logging should be focused on your most valuable information. This will, for obvious reasons, vary from organization to organization: a manufacturing company is more likely trying to protect formulas, R&D data, and plant specifications, not medical records like an organization in healthcare would be. OK, on to the major areas of logging to focus on, or at least to consider:
Perimeter/Network Systems
Application/Database
Security Controls
Contextual Information
And as Appendix A.2.1 of NIST 800-86 says, have a repeatable process to collect the data and be proactive in collecting it: be aware of the range of data sources, and of the applications that can produce this information.
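To make the risk-based focus and the "know your sources" point a bit more concrete, here is a minimal sketch of a log source inventory with a gap check. Every source name, asset, and retention value in it is hypothetical and purely for illustration; the point is simply to write down what you collect, what it protects, and where the gaps are.

```python
# Hypothetical log source inventory: names, assets, and retention values are illustrative only.
LOG_SOURCES = [
    {"source": "perimeter firewall", "protects": "inbound/outbound traffic",
     "high_value": True,  "collected": True,  "retention_days": 90},
    {"source": "R&D file server access logs", "protects": "formulas / R&D data",
     "high_value": True,  "collected": False, "retention_days": 0},
    {"source": "VPN authentication logs", "protects": "remote access",
     "high_value": True,  "collected": True,  "retention_days": 30},
    {"source": "guest wireless DHCP", "protects": "guest network",
     "high_value": False, "collected": False, "retention_days": 0},
]

def report_gaps(sources):
    """Print high-value sources that are not yet being collected."""
    for s in sources:
        if s["high_value"] and not s["collected"]:
            print(f"GAP: not collecting '{s['source']}' ({s['protects']})")

if __name__ == "__main__":
    report_gaps(LOG_SOURCES)
```

A spreadsheet works just as well; the value is in the repeatable review, not the tooling.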
While I realize the recommendations of this post are rather remedial, I still find organizations that haven't put the right level of thought into what they log and why. Basically, my recommendation is to first understand what you log today, identify gaps in the current set of logs and remediate as necessary, and design your future state around a solid process for what you plan to collect and what you plan to do with it. More to come…
Comments
Hello Deron,
You stated that the law firm sent the logs to the MSSP for storage and the logs were not analyzed. My question is: what frequency of analysis would you have been comfortable with for the logs that the law firm sent to the MSSP?
Thank you,
Troy
This isn’t for a client, is it? You know I can’t be helping out one of the other 3 🙂
My answer is that, given today's technology, nothing short of continuous monitoring is worthwhile…but what they are monitoring for, and how good they are at monitoring for it, is another question altogether.
In our case the only content provided to the MSSP was the firewall logs and the IPS sensor alerts. The firewalls would only show denies on the outside (standard port scanning, backscatter, etc.), denies on the inside (a mis-configured proxy or proxy-unaware malware), accepts on the inside (established connections), and accepts on the outside (rare, since we didn't allow anything inbound except Postini). The IPS would only show hits on known signatures of bad traffic. Given that, the MSSP could tell me about "blocked" attacks from the IPS on a continuous basis…which doesn't really do anything for me, since they are blocked, other than giving an indication of the attack activity that I can see. On the firewall side they wouldn't be able to tell me much unless they were monitoring for successful connections to known C2 networks or systems.
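For what it's worth, that last check doesn't require anything fancy. Here's a minimal sketch that flags outbound accepts to known C2 addresses, assuming a made-up CSV-style firewall log (action, source IP, destination IP, destination port) and a locally maintained C2 address list; both are hypothetical and just for illustration:

```python
import csv

# Hypothetical list of known C2 addresses; in practice this would come from a threat intel feed.
KNOWN_C2 = {"203.0.113.7", "198.51.100.22"}

def flag_c2_connections(log_path):
    """Yield 'accept' records whose destination is on the known-C2 list.

    Assumes a made-up CSV layout: action,src_ip,dst_ip,dst_port
    """
    with open(log_path, newline="") as fh:
        for action, src_ip, dst_ip, dst_port in csv.reader(fh):
            if action == "accept" and dst_ip in KNOWN_C2:
                yield {"src": src_ip, "dst": dst_ip, "port": dst_port}

if __name__ == "__main__":
    for hit in flag_c2_connections("firewall.csv"):
        print(f"ALERT: {hit['src']} connected to known C2 {hit['dst']}:{hit['port']}")
```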
So, in the end, it isn't reasonable to manually review the logs…and even on a continuous basis you need to know what you're looking for. That is one reason I advocate use case development for your monitoring program: define what you're looking for, how you will look for it, and what you'll do when you find it. Given the patterns in malware these days, there are quite a number of "things" to look for, trigger on, and respond to.
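To show the shape of a use case, here is a minimal sketch; the field names and the example use cases are my own invention, not from any particular framework, and each one simply records what you're looking for, how you'll look for it, and what you'll do when you find it.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    """One monitoring use case: what we're looking for, how, and the response when it fires."""
    name: str
    what: str      # what we're looking for
    how: str       # how we'll look for it (data source plus detection logic)
    response: str  # what we'll do when we find it

USE_CASES = [
    UseCase(
        name="Outbound connection to known C2",
        what="Internal host establishing a session with a known command-and-control address",
        how="Firewall 'accept' records compared against a threat intel list of C2 addresses",
        response="Isolate the host, collect memory and network captures, open an incident",
    ),
    UseCase(
        name="Proxy-unaware malware",
        what="Internal hosts attempting direct outbound connections instead of using the proxy",
        how="Firewall 'deny' records from internal sources on ports 80/443",
        response="Investigate the source host for malware",
    ),
]

for uc in USE_CASES:
    print(f"{uc.name}: {uc.what}")
```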
No, not for a client. Thank you for the detailed response.