So, way off topic, but since it was on Encore the other day and I couldn’t resist watching it again (yes, it is a horrible movie) I noticed a few things that were predicted correctly in a movie from 1995. For one, the need for more storage (and data) is written in as Johnny’s need to store the PharmaKom data, which ends up being the cure for “the shakes,” or NAS (Nerve Attenuation Syndrome). He doubles his capacity to a whopping 160GB by using a memory doubler, but is somehow able to get 320GB into his head…odd, I tried this with a flash drive once and it didn’t work. Maybe because it wasn’t implanted in my brain, who knows? Also, did you notice the rate at which he can transfer data is about 1000 times faster than USB 3.0 or Thunderbolt? Ironic considering the data is being fed from an optical reader and a small CD-ROM disc. Anyway, larger data sets require faster transfer mechanisms to removable media. Part of the plot also centers on the internet and his ability to break into systems to get information, such as the location of the copy shop where the encryption key for the data in his head was sent. Yes, it covered protection of data at rest with encryption as well. I know this is a stretch, but listen closely when he’s in the computer shop with the bodyguard for the first time. He asks her to get a bunch of stuff, apparently so he can connect to the internet. One of the items is an iPhone…that’s right, and although he calls it a Thompson iPhone he says it nonetheless. And yes, it also predicted that large corporations aren’t happy when they lose control of data, especially competitive data which could be used to ruin revenue streams. Also, since NAS is related to too much technology (as told to us by Henry Rollins), isn’t that the same as ADHD today?
The only part that doesn’t seem to be correctly predicted is the resistance. The “LoTeks,” who live in Newark on an abandoned bridge and are led by Ice-T, are the resistance against the evil corporations. The part that isn’t correct, and I’m drawing a line to Anonymous/LulzSec here, is that the resistance despises technology…their resistance is based on the fact that they refuse to use it. Given that resistance today seems to be very technology heavy, my opinion is that this part was incorrectly predicted. Or is this change in resistance from heavy technology to no technology yet to come? You decide. If you’re hedging your bets that the movie is correct, start buying land to create a technology-less hippy commune in Montana today (sorry, Montana). All of that aside, the resistance still does get the data in the end, so they needed to rely on technology, which seems to go against their principles…so maybe you should hold off on that land grab. Then again, the movie is set 10 years from now (2021)…so I’m back on the land-grab wagon.
Final note, someone find me the laser lasso thumb thingy the Yakuza guy has and I will gladly pay you Tuesday for a laser lasso thingy today…that is all.
While I found the article on your personal thoughts and opinions about the recent LulzSec activity interesting, I can’t entirely agree. Is it bad that people’s information got dumped? Sure. Did I find it funny? Somewhat (blame my upbringing on the Internet for that one). Did the attacks get us talking about security again? Yes. Should organizations be doing more to secure their infrastructure and applications in the first place? Absolutely. Was all of this LulzSec’s true intention? Who the hell knows.
All of that aside, the one comment from your posting which I believe is way off is:
“When you attack someone for fun, all you do is contribute to the picture some execs have of security pros as young punks who care more about notoriety than about helping them secure their infrastructure”.
Really? You sincerely believe an executive is sitting in their office right now going…”Hell, I better get down to IT and watch those young punk security folks we hired, they may be up to no good or hacking stuff for notoriety.” Or do you think it is more likely they are sitting there saying, “Damn, we aren’t paying enough attention to them when they bring up issues with our security. How do we make it better?” I’d like to think it is the latter; at least that is what the little practical guy inside my head and real-world experience are telling me.
Since the internet is “free” and I’m open to sharing, here are my thoughts:
1. The attacks shed light on the fact that we are, as a whole, fairly insecure even in 2011. We’d like to think that is not the case but the sad reality is that it is true.
2. We’d like to think we just learned about security and are behind because of our late entry into the game, but that is definitely not the case. We’ve been at this since NT4, and RACF before that.
3. Secure application development (oxymoron) has a long way to go. Does anyone else find it ironic that we can’t even say injection anymore and that it has to be shortened to SQLi because we say it so much?
4. These attacks have been going on without our knowledge for some time, by LulzSec or others who don’t have a Twitter account with witty sayings and posts. At least LulzSec released the files so everyone could see what was accessed…which saves a ton of time on the initial incident response from my perspective.
5. How many people does China have dedicated to infosec warfare again? Last time I checked I didn’t see any tweets from them telling me my data was available via torrents.
6. The media will report on anything as “fact”; the true facts will be determined later. Must. Publish. First.
So, we can write them off as “a bunch of punk kids”…or, we can take a lesson and move on. I pick option number 9000, I mean 2.
I realize this isn’t a new topic; even a few years ago at the law firm we considered buying Elcomsoft’s GPU cracker for the lab. The reason I think this is somewhat relevant today is that previously the cost to build a cluster of CPU-based crackers was somewhat prohibitive. Since we know GPU performance far exceeds CPU performance when it comes to processing encryption or hashing algorithms, it makes sense to transition brute-force, and even rainbow table, attacks to a GPU-based system. Thanks to nVidia and CUDA people can develop these apps, and thanks to Bitweasil over at cryptohaze.com and the CUDA-Multiforcer app we can all mess around with this functionality.
So I decided to run some tests. Part of this was to confirm the results that others have posted, but I also wanted to determine what my old GTX260 card could do. Here is the test: I generated an NTLM hash of an 8-character password consisting of only lower alpha characters and numbers (1deron10). The tests consisted of breaking the hash on 2 different systems (my system and a GPU cluster instance in Amazon’s EC2 cloud). I also used 2 different tools for comparison, Multiforcer (both 0.70 and 0.80) and JTR 1.6.37 patched for NTLM. For full disclosure, I did feed Multiforcer with the lower-alphanumeric character set file only.
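If you want to generate a test hash of your own, here is one quick way in Python; this assumes your hashlib build exposes MD4 (most OpenSSL-backed builds of that era do):

import hashlib

# NTLM is just MD4 over the UTF-16LE encoding of the password
password = '1deron10'
ntlm_hash = hashlib.new('md4', password.encode('utf-16le')).hexdigest()
print(ntlm_hash)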
Here are the results:
My system (Multiforcer):
My system (JTR):
Amazon EC2 (Multiforcer):
So while Multiforcer and JTR both took about the same amount of time on my system, I’m going to claim that JTR got lucky this time. More tests to come. What do these results mean? Well, password lengths of 8 or less are no longer secure…even for NTLM/MD4 hashes…assuming you only use 2 of the 4 possible options from lower, upper, numbers, and symbols. By the same token, using lower, upper, and numbers in an 8-character password gives you a key space of 62^8, or 218 trillion possibilities. At my system’s GPU rate it would take 13 days to check 100% of the space. On something with a little more power, say the RenderStream box (www.secmaniac.com), it would take 2 hours and 45 minutes at its 22B pw/sec rate. That is pretty damn reasonable.
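For anyone who wants to sanity-check the math, here is the back-of-the-envelope version in Python (the GTX260 rate is my approximation, back-solved from the 13-day figure above):

# 8-character passwords using lower + upper + numbers (26 + 26 + 10 = 62)
keyspace = 62 ** 8                 # ~218 trillion candidates
rates = {
    'GTX260 (approx.)': 194e6,    # pw/sec implied by the 13-day estimate
    'RenderStream': 22e9,         # pw/sec figure quoted above
}
for label, rate in rates.items():
    seconds = keyspace / rate
    print('%s: %.1f hours (%.1f days)' % (label, seconds / 3600, seconds / 86400))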
One final thought. If Multiforcer supported multiple cards on multiple cluster systems, then we could spin up 5 EC2 GPU instances, giving us a total of 4880 CUDA cores to play with…that should get you much closer to the RenderStream box, but instead of spending $14k you’d pay a $10.50 hourly rate (or 56 days of continuous use before hitting $14k)…well, that also doesn’t factor in the power draw from the RenderStream :)
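The break-even math, for the curious (the $2.10 per-instance rate is my assumption, inferred from the $10.50 total for five instances):

instances = 5
rate_per_instance = 2.10                      # assumed hourly cost per GPU instance
hourly_cost = instances * rate_per_instance   # $10.50/hour for the cluster
renderstream_price = 14000.0                  # rough price tag quoted above
hours = renderstream_price / hourly_cost
print('%.0f hours, or about %.0f days of continuous use' % (hours, hours / 24))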
If I get time to test other options, lengths, and so on I’ll post an update.
Ok, so this is actually unrelated to what I was planning on posting, which was GPU brute-force password cracking and some stats I pulled using Multiforcer and different nVidia cards I had laying around. In order to get Multiforcer to run I needed to update my nVidia driver to the latest version, which is 266.58. The problem is I have a Westinghouse LM2410 monitor that has the good old EDID issue with the nVidia drivers. No problem, easy fix right? Modify the inf file with the correct values for my monitor in OverrideEDIDFlags0 and we are back in business. Problem is that, with the new driver anyway, adding the binary value for the EDID override via the inf file on Windows 7 64-bit doesn’t seem to actually add the value to the required key. But since this approach simply adds a binary value to the registry during install, it is just as easy to add the value manually after the driver is installed. If you’re not familiar, here are some simple steps to manually add the value:
1. Run regedit, find the Video key under HKLM\SYSTEM\CurrentControlSet\Control\Video
2. Inside you should see a bunch of GUIDs, open each one until you find the one with two folders, 0000 and 0001, that also have subkeys of Display, Settings, and VolatileSettings.
3. Create a .reg file to add this value (see the sample after these steps), or do it manually. The binary value you need to add under the 0000 key is going to be specific to your monitor, so if you don’t have a LM2410 then do not use these settings. Here is what I added (right-click in the pane showing the contents of the 0000 key, choose New -> Binary Value):
Value name: OverrideEdidFlags0 (that’s a zero)
Value: 5c,85,80,51,00,ff,ff,04,00,00,00,7e,01,00
4. Reboot and your LM2410 should now be recognized as a monitor, not an HDTV, and should look good as new.
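If you’d rather go the .reg file route from step 3, something like the following should do it. Note that the GUID portion of the path is specific to your machine, so substitute the one you found in step 2 (and again, the binary value is specific to the LM2410):

Windows Registry Editor Version 5.00

; EDID override for the Westinghouse LM2410 -- replace the GUID placeholder
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Video\{YOUR-GUID-HERE}\0000]
"OverrideEdidFlags0"=hex:5c,85,80,51,00,ff,ff,04,00,00,00,7e,01,00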
If you don’t have the LM2410 monitor then this probably isn’t going to help you. I know other people have posted this issue and solutions for Acer and Viewsonic monitors on other sites…good luck. Now I can brute-force those damn NTLM hashes and not strain my eyes doing so. Hope this works for you.
for those who are unfamiliar, mailinator is a site/service that accepts and temporarily stores any e-mail it receives. that’s right, any e-mail received by the mailinator mail servers is shuffled into the appropriate inbox as if it already existed. you simply choose which @mailinator.com e-mail address you would like, regardless of whether or not it has been used in the past, and use it as if it were your own. this makes it extremely useful when, for example, you are required to register for a site in order to download a white-paper. or, perhaps you would prefer not to use your @gmail.com address when browsing myhotmatez.com.
while recently using the site an idea suddenly came to me: if i were to mine data from mailinator inboxes, could i find anything interesting? i was curious to find out whether people used this service for legitimate reasons or if it was merely a dumping ground for spam. adding to the temptation, i realized mailinator allows you to access an inbox by appending the username to the following url: http://www.mailinator.com/maildir.jsp?email=x (where x is the username).
so, i fired up vi and began piecing together a python script to automate the discovery and reporting of messages across multiple inboxes. the logic was simple: read a username from a list, request that inbox page, scrape out anything that looks like a real message, and report it.
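here is a minimal sketch of the idea (not my original script; the subject-matching regex is just a guess at the inbox page markup, so expect to tune it):

# mailinator.py - minimal sketch (python 2)
# reads usernames from words.txt (one per line), pulls each public
# inbox page, and prints anything that looks like a message subject
import re
import urllib2

INBOX_URL = 'http://www.mailinator.com/maildir.jsp?email=%s'
# hypothetical pattern for subject cells in the inbox html
SUBJECT_RE = re.compile(r'<td[^>]*class="subject"[^>]*>(.*?)</td>', re.I | re.S)

def check_inbox(username):
    # fetch one inbox page and return candidate subject strings
    try:
        html = urllib2.urlopen(INBOX_URL % username).read()
    except (urllib2.URLError, IOError):
        return []
    return SUBJECT_RE.findall(html)

if __name__ == '__main__':
    for name in (line.strip() for line in open('words.txt')):
        if name:
            for subject in check_inbox(name):
                print('%s: %s' % (name, subject))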
after completing the script, i began by slowly feeding it lists of 5 to 10 usernames — mostly names of actual people (e.g., bob, sarah, frank). even though i was able to scrape hundreds of e-mails, 99% of them were easily identifiable as spam. i then proceeded to words that could be associated with a specific task someone was trying to accomplish (e.g., code, temp, password, exchange). while still mostly spam, i managed to find a few interesting tidbits:
it looks as though lesley lupo was concerned about her wildlife points in zooworld. ok, maybe this isn’t that interesting but i did find it odd that someone would register a mailinator e-mail address for a service (zooworld) that accepts payments — where they are actively purchasing goods. the next one struck me as a little concerning:
you’ll notice i blurred sections of the e-mail out. that’s because it contains Robert’s full contact information including the company he works for. i can imagine that someone with a bit more time could easily modify the script to look for these types of sections within an e-mail. jackpot.
i’m continuing to mine for this type of data using various sets of usernames. i expect to post updates as i find more interesting information. my main reason for publishing this now is to gather ideas for improvement of the script and mostly because i know if i didn’t post it now, i never would. here are a few ideas for next steps:
/edit i forgot to mention the name of the file where usernames are read should be titled ‘words.txt’ and be in the same directory as mailinator.py.
So the tactic has been around for a while. A quick search found this posting by Neal Schaffer, http://windmillnetworking.com/2009/05/21/fake-linkedin-profile-how-to-spot/, in which the fake profiles use the same method as the profiles I found. The big difference is that those fake profiles targeted people not in security but those who are “specialists in social media”…which means it is more than likely that “sets” of fake profiles exist to target different groups (i.e. security, IT, marketing, etc.).
Note that this is not the same as the Robin Sage or Marcus Ranum fake profile experiments.
The author brought up a good point here, and a question I posed to LinkedIn directly…why can’t LinkedIn police this? They have direct access to the data, and I’m sure finding similar profiles based on a set of simple logic shouldn’t be that difficult…right? So far their response has been: “We don’t have an answer yet.”
Sorry, but my curiosity got the best of me on this one and I dug a little deeper into the fake profile issue, and it seems I only hit the tip of the iceberg before. I originally found 15 fake information security profiles, but that was because I limited my search to a specific job title in New York. The job titles I’ve identified as associated with the fake profiles are (for the current title, case left as they used it):
Again, all of the profiles show the “Greater New York Area” as the location. If you’re doing a search on LinkedIn just choose one of the titles above along with a limitation within 50 miles of the 10001 zip code. I stopped tracking the companies and universities they used in creating the profiles as it became too large of a list to be useful. The job descriptions are usually enough to give them away as they are weak and don’t make sense.
Let me go back to my assumption this was scripted, and they suck at scripting. Case in point:
Let’s take a look at my boy Dwayne (http://www.linkedin.com/pub/dwayne-larson/24/799/720). In addition to his killer profile photo, check out his past positions. Seems the script was supposed to randomly choose a company name that started with a “V” for his job between 2002 and 2007…hmmm, it seems to have gone a little haywire here. And how about my boy Alexander’s title (http://www.linkedin.com/pub/alexander-baldwin/20/b43/a9b) of Security Solutions ManaIT Project Managerger. Don’t know about you, but I wouldn’t hire a guy who couldn’t spell his own title.
One other interesting twist is the use of recommendations, links to the company website, and information in the summary section. This all goes to make the profile look more legit. Take for example our guy Gary (http://www.linkedin.com/pub/gary-jacobson/24/398/b04)…that’s funny, looks just like Ross’ summary, which appears legit (http://www.linkedin.com/in/rossboulton).
Let’s look at some recommendations. Harry (http://www.linkedin.com/pub/harry-bright/23/904/71b) took the time to recommend his buddy Stuart Michael (http://www.linkedin.com/pub/stuart-michael/24/1bb/440). What a nice guy…too bad both are fake.
Finally, some numbers. I’ve identified 123 fake infosec profiles with connection numbers ranging from 52 to 500+, with the average number of connections at 250 for each profile. So, does LinkedIn even care?
I’m sure we’ve all seen the fake Facebook profiles by now…something along the lines of an invite from an attractive young woman with wall posts related to some “hot new pics” she just took. Sure, you have to click on the link to see the pics, which then promptly redirects the browser and attempts to exploit some vulnerability on your machine and install malware. But what about the profiles that do little more than 1) appear to be legit, 2) ask to be connected to you, and 3) do nothing else (or so you think)?
Being a security professional I’m always a bit skeptical when I get an invite to connect with someone on LinkedIn and a few things throw up red flags when I review the profile. Do you live near me? Do you work for a client of mine? Did we go to the same school? And most importantly, do we have any connections in common? Being that security is a fairly small community I would find it odd that you know me but don’t know anyone else that I know.
Last week I received an invite from someone in NY who is working as a senior information security consultant for a big name firm. Interesting, but we didn’t have any connections in common and I didn’t know the person, so I let it sit in my inbox for later review. This week I received another similar invite from someone in NY with a similar title working for another big name company. One thing that caught my eye was the year the person graduated college. While I have a decent ability to remember names, my brain has been wired in such a way that numbers tend to stick. So when I noticed they both had an MS in Computer Science and graduated in 2000 I became a little suspicious. Looking back at the previous invite I noticed they had the same title, during the same period of time, and only the company name was different. A review of these two prompted some further research. Here is what I found:
The fake profiles, 15 in all, which are used to mine your connections and probably map the infosec community, may have been generated by a script. But possibly not, as there are some oddities with some of the profiles that make them appear to be created by hand…either way, someone had some time on their hands or they suck at scripting. Regardless, if you get an invite to connect on LinkedIn here are some things to look for:
Location:
Current Title:
College years:
Schools used (all with an MSc in Computer Science):
Current position description:
Job titles (seem to be randomly paired with a company):
Organizations/Companies:
Profile Names (** means they used the 1996-2001 grad years):
Yes, you may have noticed I have access to some of the last names…and that is because someone that is connected to me has accepted one of these fake profiles as a connection. I’m actually upset that I didn’t think of this. What better way to map the security resources at various companies? I started wondering a while ago if we could use the API to script a pull of public data and then do some quick analysis to see where everyone ends up once they leave a particular company. Maybe that is already happening?
Finally, I’ll let you figure the impact out, but these fake profiles have between 80 and 476 connections with an average of 321 per profile.
This post is only meant to shed some light on the data mining issues within LinkedIn specific to the InfoSec community. I’m sure this is happening in other fields as well…so if you’ve seen this please post in the comments section.
Building the foundations of a good security management program is much like building an ice cream sundae. Not to imply that building a mature security management process is as easy, but quite honestly both have been around for long enough that we have some good guidance to rely on. Given that everybody’s tastes are different, and the same goes for their risk appetite, the programs that result from going through this process are generally the same but always slightly different.
My analogy is from the perspective of the ice cream shopkeeper, although this may work from the customer point of view as well. I’m behind the counter and my job is to make the sundaes when a customer orders. A customer comes in, reviews my menu, and places an order. The key is that I have a menu. Compare that to the “catalog” your security function has created. If you don’t have a menu how do your customers (i.e. the business, IT, outsourced customers, etc.) know what you can provide and at what level?
1. Consider creating a security services catalog to outline the services your security organization can provide to its customers.
Next, after I have the order, I need to know where everything is in order to start making the sundae. I first need to grab a bowl, which will hold the contents of the sundae I’m about to create. I compare the bowl to the asset management function. I need something to base my sundae on, and having a nice solid bowl instead of one that is cracked or has holes in it will make my job easier, my customers happier, and probably result in fewer stains on clothing. Now consider your “asset management” bowl. How solid is it? Do you know where your assets reside, who owns them, what data they hold or process, or their level of criticality to your organization?
2. Ensure asset management is mature to create a solid foundation upon which to build your security management process.
If you ordered a sundae the next logical ingredient is the ice cream…so let’s go with two scoops. And let’s pretend one is your patch management process and the other is configuration management. Once we have a comprehensive view into our portfolio of assets through asset management we can now ensure that these are configured securely and consistently and that they are up-to-date with patches. This is my equivalent of making sure the ice cream is not spoiled by checking the expiration date on the containers, but also that my scoops are round and of the same size every time I make a sundae. I like this from an owner’s standpoint because sick customers don’t buy much ice cream and I also have a repeatable process to better understand pricing and profits…after all, I’m in this to make money. So start asking yourself, is the ice cream spoiled and if not, am I consistently scooping the right size servings? A better question may be, do all of my deployed (and future) technologies have a configuration standard? Are my patching processes mature enough to ensure that all of my OS’s, applications, and devices are patched in a timely manner?
3. Configuration management and patch management are key areas in a mature security management process. Ensure that all systems are deployed following a secure and consistently applied baseline standard and that teams responsible for patching have the right processes and technologies.
To add to the above statement, many people believe configuration management is a function of security only. And for some organizations this may be true. I’d contend that configuration management should be a function of IT. From an operational standpoint, ensuring systems are consistently configured cuts down on change control testing, since we can test on a known configuration and our tests and back-out plans are now more accurate. This has implications on patch testing as well. How many times has your organization deployed a patch to 10 systems and 2 went down because of the patch? I’d venture to guess the 2 that went down, even after “successful” QA testing, were the result of some configuration inconsistency with the other 8 that had no issues. This only hinders patch application and makes IT, whose job is mainly to keep systems available, less likely to deploy that off-cycle patch…which is ironic since those tend to be the more critical vulnerabilities. Finally, patch application is no walk in the park either. We tend to lack agentless solutions that patch both the underlying OS and the application layer.
Back to my sundae. Now it is time for the whipped cream and a cherry on top. The whipped cream is comparable to the vulnerability management process. It blankets the ice cream, and hence ensures that patch and configuration management are “covered” and that we haven’t missed anything. My opinion is that vulnerability assessment (not management) is the check to ensure that configuration and patch management are effective. If they aren’t, then I have holes in the whipped cream and I start to “see” the issues a layer deeper. Think about this: how does your organization use vulnerability management, or even assessment? As a way to make up for a lack of mature asset management? Are you even checking configuration compliance at this point? Do you have so many vulnerabilities “out for remediation” that you can’t keep track of the current state of vulnerabilities in your environment? Think about how many of those are related to patch and configuration issues…what, almost all of them?
4. Vulnerability management and the assessment process should be used as a check to ensure that patch and configuration management are effective. If you’re using this process as a gap-filler for poor asset management, patch, and configuration management then you’re doing it wrong. You’ve probably created an unrepeatable and very heavy vulnerability management process that is ineffective (read: everyone outside of security despises you).
Let’s not forget the cherry…which is nice to have, but if you didn’t get one you wouldn’t be too upset…which in my analogy is pen testing. I’m sure there are many “pen testers” that will disagree with that statement. My opinion is that pen testing is a nice-to-have, and I’d challenge those who feel otherwise to explain the value of a pen test. Given enough time, money, and effort, everything is breakable. Someone created it, so by my statement above, someone else will break it. Where I do believe pen testing is of value is in examining critical applications in more depth and detail than a vulnerability assessment would. Keep in mind that VA tools only check for known vulnerable conditions, and it is possible, although rare, that new vulnerabilities are identified through pen testing. There are also those who say “it shows the impact”…well, if you’re forced to show someone in management there is an impact, then you haven’t done a very good job of translating technical vulnerabilities into business impacts and terms they understand. I’m sorry to say this happens all too often, and in that case maybe a pen test is exactly what you need to bridge that gap. I’d put it this way:
5. Pen testing is a nice to have. If my asset, patch, configuration, and vulnerability management processes are mature and effective the pen tester should be extremely bored during my assessment. If you can’t explain the technical risk in terms of business risk (or you don’t have a risk management group) then hire a pen tester, but only as a last resort.
One caveat in all of my statements is that secure application development and strong network architecture exist. In addition, I’m also assuming that you have defined remediation processes, the right technology and people, and some repeatable processes as well. A stretch, I know, but I can’t cover everything in one post :)
So I have a tendency to lose the passwords to VMs I’ve created over time, and when I go back to access them I can’t remember the password or username. Here is a quick tutorial on using Ophcrack in bootable form to help us out. Sorry the video is old; I just realized I never posted it.
Part 4 – Using Ophcrack from Deron Grzetich on Vimeo.