Asset management is a foundational IT service that many organizations continue to struggle to provide. Worse yet, from a security perspective this weakness ripples into all of the secondary and tertiary services that rely on that foundation, such as vulnerability management, security monitoring, analysis, and response, to name a few. While it is very rare that an organization has an up-to-date or accurate picture of all of its IT assets, it is even rarer (think rainbow-colored unicorn that pukes Skittles) that an organization has an accurate picture of the criticality of those assets.

While some organizations do a decent job, when standing up a CMDB, of mapping applications to supporting infrastructure and ranking their criticality (although many tend to use a binary critical/not-critical ranking), those criticality rankings are statically assigned and, if never updated, go stale. Manual re-evaluation of the assets and applications in a CMDB is time-consuming, and after the pain of setting up the CMDB in the first place, many organizations are unwilling or unlikely to make re-certification of assets and criticality rankings a priority…and it is easy to understand why. My other issue is that many CMDBs weigh the "availability" factor of an asset over its criticality from a security perspective. For example, it is not uncommon to see a rigorous change management process for assets in the CMDB alongside a far less rigorous (or non-existent) change management process for non-CMDB assets. But I digress…to summarize my problem: asset inventories are rarely accurate or complete, and criticality rankings, where they exist at all, are static and quickly go stale.
The impact of inaccurate asset inventories and the lack of up-to-date criticality rankings got me thinking that there has to be a better way. Since I spend the majority of my time in the security monitoring space, and now what seems to be the threat intel and security/data analytics space, I kept turning over possible solutions. The one factor every possible solution had in common was data. And why not? We used to talk about the problem of "too much data" and how we were drowning in it…so why not use that data to infer the criticality of assets and update that criticality in an automated fashion? Basically, make the data work for us for a change.
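To make that idea concrete, here is a rough toy sketch of the kind of thing I have in mind (this is not from the paper; the record fields and the scoring heuristic are purely illustrative assumptions): derive a relative criticality score for each asset from connection records, and recompute it every time a new batch of data arrives rather than relying on a one-time manual ranking.

```python
from collections import defaultdict

# Toy sketch: infer relative asset criticality from connection records
# (e.g., netflow or firewall logs). The field names ('src', 'dst',
# 'dst_port') and the weighting are illustrative assumptions only.

def infer_criticality(records):
    """records: iterable of dicts with 'src', 'dst', and 'dst_port' keys."""
    clients = defaultdict(set)   # distinct clients seen talking to each asset
    services = defaultdict(set)  # distinct service ports exposed by each asset

    for r in records:
        clients[r["dst"]].add(r["src"])
        services[r["dst"]].add(r["dst_port"])

    # Simple heuristic: assets serving many distinct clients on many
    # ports are probably more critical than quiet, single-purpose hosts.
    scores = {a: len(clients[a]) + 2 * len(services[a]) for a in clients}

    # Normalize to 0..1 so the ranking can be refreshed on every data batch.
    top = max(scores.values()) if scores else 1
    return {asset: round(s / top, 2) for asset, s in scores.items()}


if __name__ == "__main__":
    sample = [  # made-up records, just to show the shape of the input
        {"src": "10.0.0.5", "dst": "10.0.1.10", "dst_port": 1433},
        {"src": "10.0.0.6", "dst": "10.0.1.10", "dst_port": 1433},
        {"src": "10.0.0.7", "dst": "10.0.1.20", "dst_port": 22},
    ]
    print(infer_criticality(sample))
```

The heuristic itself is not the point; the point is that the score is recomputed from observed behavior instead of being assigned once and forgotten.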
To start, I looked for existing solutions but couldn't find one. Yes, some vendors have pieces of what I was looking for (e.g., identity analytics), but no single vendor had a solution that fit my needs. In general, my thought process was:
These six thoughts guided the drafting of a research paper, posted here (http://www.malos-ojos.com/wp-content/uploads/2014/08/DGRZETICH-Ideas-on-Asset-Criticality-Inference.pdf), that I've been ever so slowly working on. Keep in mind that the paper is a draft and still a work in progress; it attempts to start solving the problem using data and the idea that we should be able to infer the criticality of an asset from models and data analytics. I've been thinking about this for a while now (the paper is dated 6/26/2013), and last year I even attempted to gather a sample data set and work with the M.S. students in DePaul's Predictive Analytics concentration to tackle it, but that never came to fruition. Maybe this year…
Comments
Deron, would using a graph data approach to support causal modeling be helpful in this case? Using something like Titan (which uses Hadoop services for data storage) might provide a way to test this concept with large-scale data, while still preserving the flexibility of a data model that also supports walking the graph to determine causal impact (and help derive dynamic priorities).
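For illustration, a quick sketch of what that graph walk might look like (here in Python with networkx rather than Titan/Gremlin, just to show the idea; the assets, dependency edges, and base scores are made up):

```python
import networkx as nx

# Illustrative only: a tiny dependency graph where an edge points from an
# asset to the things that depend on it. Titan/Gremlin would express the
# same walk at much larger scale.
g = nx.DiGraph()
g.add_edge("db01", "payroll-app")    # payroll-app depends on db01
g.add_edge("db01", "hr-portal")      # hr-portal depends on db01
g.add_edge("hypervisor-3", "db01")   # db01 runs on hypervisor-3

# Business criticality assigned only to the applications people care about.
app_criticality = {"payroll-app": 0.9, "hr-portal": 0.4}

def derived_criticality(graph, asset):
    """Walk downstream from an asset and inherit the highest criticality
    of anything that directly or transitively depends on it."""
    downstream = nx.descendants(graph, asset) | {asset}
    return max(app_criticality.get(node, 0.0) for node in downstream)

for asset in ["hypervisor-3", "db01", "payroll-app"]:
    print(asset, derived_criticality(g, asset))
```

Because the priority of the supporting infrastructure is derived from the graph rather than stored on the asset, re-ranking after a change is just another traversal instead of a manual re-certification exercise.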