While looking through IPS alerts today I thought about the patterns of activity we are seeing on our network. Since we do not have many Internet-facing systems, most of the malware we see is brought in by our users’ web browsing. URL filters and proxies can cut down on the “bad” sites users are allowed to visit, but how many malware-serving sites slip past a plain URL list filter that would be caught by a combination of list and reputation-based filtering? The answer is…many. If these sites were included in the URL list filters, I wouldn’t be seeing them as IPS alerts, right?
So all we need to do is stop the users from going to “bad” sites, right? After investigating some of these alerts in more detail, we noted that the “bad sites” were actually iframes/links on legitimate sites pointing off to the malware-serving hosts. So the question is, should access to these “bad sites” have been allowed in the first place? With reputation-based filtering in addition to URL lists, there is a better chance it wouldn’t be. With URL lists alone, chances are it will be, because these sites are in near-constant flux. In the end, it isn’t necessarily that users aren’t being security conscious; the trend points toward legitimate sites serving ads (which they do not control) that contain links to malware-serving sites.
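As a rough illustration of why the combination matters (every hostname, score, and threshold below is made up, not from any real product), here is the basic decision a combined filter makes. A brand-new domain embedded in an ad iframe sails past the static list, but a low reputation score still catches it:

    # Hypothetical sketch: static URL list plus reputation scoring.
    # All data here is invented for illustration.

    BLOCKLIST = {"known-bad.example.com"}                 # static URL list
    REPUTATION = {"fresh-malware-host.example.net": 12}   # 0-100, low = suspicious
    REPUTATION_THRESHOLD = 40

    def allow(host: str) -> bool:
        """Return True if the proxy should allow the request."""
        if host in BLOCKLIST:
            return False                  # list filter: only catches known hosts
        score = REPUTATION.get(host, 50)  # unknown hosts get a neutral score
        return score >= REPUTATION_THRESHOLD

    print(allow("fresh-malware-host.example.net"))  # False - caught by reputation
    print(allow("known-bad.example.com"))           # False - caught by the list
    print(allow("www.example.com"))                 # True

With the list alone, that first lookup would have been allowed, which is exactly the pattern showing up in our IPS alerts.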
While the network will always be a target, the end user is much more susceptible to compromise. So what do we do when they leave our network, go to Starbucks, and jump on the AT&T wireless network? How does our network list/reputation-based filter follow them? This is where vendors seem to diverge. Some are going the route of agent software which syncs policies with the network-based proxy. Others seem to favor a proxy “in the cloud” model. I’m not quite sure which approach works best and is most effective. Anyone have any thoughts?
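To make the agent model concrete, here is a minimal sketch of how I understand it; the policy URL, JSON format, and sync interval are my assumptions, not any vendor’s actual protocol. The point is that the agent keeps enforcing whatever policy it last fetched, so filtering continues even when the user can’t reach the corporate proxy:

    # Sketch of the agent-software model (hypothetical endpoint and format).
    import json
    import time
    import urllib.request

    POLICY_URL = "https://proxy.corp.example.com/policy.json"  # hypothetical
    SYNC_INTERVAL = 3600  # seconds between sync attempts

    policy = {"blocklist": []}  # last policy successfully fetched

    def sync_policy():
        """Pull the current policy from the network-based proxy."""
        global policy
        try:
            with urllib.request.urlopen(POLICY_URL, timeout=5) as resp:
                policy = json.load(resp)
        except (OSError, ValueError):
            pass  # at Starbucks: proxy unreachable, keep the cached policy

    def allowed(host: str) -> bool:
        """Local enforcement against the most recently synced blocklist."""
        return host not in policy["blocklist"]

    if __name__ == "__main__":
        while True:
            sync_policy()
            time.sleep(SYNC_INTERVAL)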
Comments
From the perspective of an end-user, I would imagine there is little difference between a client-side list and a proxy as a service…
The decision ultimately comes down to the types of remote users your company has and what endpoint security controls are already in place.
If your users are rarely in the office to sync with the internal proxy you have set up, this does them no good.
I have mixed feelings about the proxy in the cloud, mostly because I don’t understand it very well. Does the cloud just contain the policies that are still pushed to the client? Or does all web traffic get routed through the cloud and blocked by the proxy on the other end?
In the “in-the-cloud” models I have seen so far, it appears the user is redirected to a filtering server which goes out to the site on behalf of the user. My concern is performance. Are users willing to wait an extra second or three to get their page? Have they gotten used to almost-instant access from their home machines? If I get any more detail about the cloud models I’ll post something.
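To sketch what that redirection looks like as I understand it (the /fetch endpoint and the reputation lookup here are hypothetical, not any vendor’s actual API), the filtering server essentially fetches the page on the user’s behalf, and that extra round trip is where the added latency comes from:

    # Minimal fetch-on-behalf sketch; endpoint and checks are invented.
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs

    def reputation_ok(url: str) -> bool:
        # Stand-in for the vendor's reputation lookup.
        return "malware" not in url

    class FilteringProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            # The client is redirected here as /fetch?url=<original-url>
            target = parse_qs(urlparse(self.path).query).get("url", [""])[0]
            if not target:
                self.send_error(400, "Missing url parameter")
                return
            if not reputation_ok(target):
                self.send_error(403, "Blocked by reputation filter")
                return
            with urllib.request.urlopen(target, timeout=10) as upstream:
                body = upstream.read()  # the extra round trip users feel
            self.send_response(200)
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), FilteringProxy).serve_forever()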