While looking through IPS alerts today I thought about the patterns of activity we are seeing on our network. Since we do not have many Internet-facing systems, most of the malware we see is brought in by our users’ web browsing. While URL filters and proxies may help cut down on the “bad” sites users are allowed to visit, how many malware-serving sites go unblocked if you have a URL list filter alone as opposed to a combination of list- and reputation-based filtering? The answer is…many. If these sites were included in URL list filters, I wouldn’t be seeing them as IPS alerts, right?
So all we need to do is stop the users from going to “bad” sites, right? After investigating some of these alerts in more detail we noted that the “bad sites” were actually reached through iframes/links to malware-serving sites embedded in legitimate sites. So the question is, should access to these “bad sites” have been allowed in the first place? If you have reputation-based filtering in addition to URL lists, you have a better chance that it wouldn’t be. But if you have URL lists only, chances are it will be allowed due to the near-constant flux of where these sites reside. In the end it isn’t necessarily that the users aren’t being security conscious; the trend seems to point towards legitimate sites serving ads (which they do not control) that contain links to malware-serving sites.
While the network will always be a target, the end user is much more susceptible to compromise. So now what do we do when they leave our network, go to Starbucks, and jump on the AT&T wireless network? How does our network list/reputation-based filter follow them? This is where vendors seem to diverge. Some are going the route of agent software that syncs policies with the network-based proxy. Others seem to favor a proxy “in the cloud” model. I’m not quite sure which one works best and is most effective. Anyone have any thoughts?
Upon reviewing the web logs to see if anyone other than us and the bots was viewing our pages, I couldn’t help but notice that we received a request from a system named wide.opus1.com. One of two things happened: either Joel Snyder from Opus One googled his own name (or someone at his company did) and read the post on security agility, or he has a system that does this for him or his organization.
Joel, if it is the latter and that system has any level of intelligence, please contact me at deron@malos-ojos.com. We have a specific need for such a system at the law firm…mainly due to the size of our firm and the recent presidential election results.
UPDATE: Epic fail Joel Snyder, epic fail. Either you haven’t reviewed the results from your bot that downloaded the page, or it was some random person at Opus1 looking at the site.
Listening to Joel Snyder from Opus One discuss security agility at the most recent Information Security Decisions conference in Chicago, I had a few thoughts. While I agree with some of his points, I don’t think he fully hits the mark on the “security agility” topic. One of the main points he makes during his “no punctuation-style” presentation is that security must become more agile to keep up with the business. The reasoning is that the business wants to innovate and be agile, and therefore security must be just as agile in order not to slow that “innovation” down. But I would ask how many companies are fine with average and don’t innovate. Maybe they aren’t innovative because they have the wrong people at the helm, or maybe innovation in their industry or business model is an unacceptable risk. I do agree that security must remain flexible…but if the business isn’t agile, are we spending too much on creating an unnecessarily agile security function?
The one thing I did take away was that security must remain flexible and not hold the business back when they want to get a little creative…which I think is different from being innovative. The general comment was also made at the conference that security groups are moving away from daily operational tasks and closer to the business and risk management. From my point of view, I agree that this is happening to some extent. So if this is true, and security is now performing risk management activities for the organization, of course they are going to slow the business down. I think the root cause is that they may not have an adequate security infrastructure in place to have the comfort level to say, “sure, go ahead, because I know I can prevent, detect, or react to any possible security issues as a result of your project.”
Basing this on what I know about other organizations and their “security preparedness,” I would say some are a long way from reaching that comfort zone. In fact, all the talk of where organizations are today made me feel good that we seem to be a bit ahead of the curve…which is a good thing given how new our security function is.
In the end I believe companies are not that innovative, and a strong security infrastructure that adapts to the needs of the business will win out over creating a truly “agile” security function. How many times was the word agile used? Not to mention the use of multiple vendors…how many people have trouble getting their staff trained on a few products from different vendors? And if the ISO 27000 series keeps stating that people, process, and technology are the cornerstones of good management practices, and I keep changing the technology, how much work did we just add to the people and process side of that equation?
So I realize this may seem very trivial, but I bought a new graphics card (EVGA GTX 260) to replace an 8800GTS and two things happened that I did not expect. First, the card is very long and barely fit in my case (Antec 900)…which is surprising as this case has a decent amount of room inside. Guess all those stream processors take up some serious real estate. It also needs a 500-watt power supply minimum with two 6-pin PCIe power connectors (or an unused pair of IDE power connectors). Second, when I hooked it up to my main monitor, a Westinghouse LM2410, and installed the most recent NVIDIA driver (178.24), the text looked jagged, blurry, and fuzzy. A second monitor (Dell 1905FP) was hooked up to the card as well.
When I looked at the NVIDIA control panel I noticed that the Westinghouse was being treated as an HDTV, and although the resolution was set to 1920×1200 (native for the screen) it was not actually displaying 1920×1200. BTW, this monitor connects via an HDMI (monitor) to DVI (card) cable. The Dell monitor, which the card treated as a monitor, looked perfect the entire time. I also removed the NVIDIA drivers, and with the Windows XP drivers the Westinghouse looked fine (although it had no graphics acceleration). Looking at a few posts on the NVIDIA and other forums, I found the answer.
The easy solution was posted here and here.
The only change is that in the latest NVIDIA driver the inf, called nv4_disp.inf, needs to have the OverrideEdidFlags0 entry under the correct processor for the card (in my case under the GT2x heading of the [nv_SoftwareDeviceSettings] area). You will also need to ensure the first 4 bytes of the reg key are correct for your monitor. Use Phoenix EDID Designer to extract the current monitor EDID values, look at bytes 8-11, and make sure the values you are entering in the inf match (there is a small script sketched after the list below that pulls those bytes out for you). If you have two monitors you’ll need to figure out which is the correct one in Phoenix. The most popular EDID values appear to be:
Viewsonic VX2835wm – 5A,63,1F,0F
Westinghouse LM2410 – 5C,85,80,51
LG – 1E,6D,3F,56
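If it helps, here is a minimal sketch (my own, not from the linked posts) of how you could pull those four bytes out of an EDID dump instead of reading them off by hand. It assumes you have saved the monitor’s EDID as a raw binary file (Phoenix EDID Designer can export the EDID data; confirm the export format yourself), and it simply prints bytes 8-11 in the same comma-separated hex form used in the list above, ready to check against the start of the OverrideEdidFlags0 value from the posts:

import sys

def edid_id_bytes(path):
    # Read a raw binary EDID dump and return bytes 8-11
    # (manufacturer ID + product code), the four bytes the
    # OverrideEdidFlags0 entry has to match for your monitor.
    with open(path, "rb") as f:
        edid = f.read()
    if len(edid) < 12:
        raise ValueError("file too short to be an EDID dump")
    return edid[8:12]

if __name__ == "__main__":
    ident = edid_id_bytes(sys.argv[1])
    # Print in the same style as the list above, e.g. 5C,85,80,51
    print(",".join("{:02X}".format(b) for b in ident))

Run it against whatever file you exported (the filename is up to you) and compare the output to the first four bytes of the entry you are adding to nv4_disp.inf.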
I realize this is posted elsewhere and in other forums; I’m just hoping it helps someone find the answer quickly, without rebooting their system a million times trying to fix this issue.
Welcome to the Malos Ojos security blog. This site was created to allow security-minded individuals to collaborate and share thoughts and ideas about IT security-related issues.