Detecting Infection Chain Redirections with a Custom NetWitness Flex Parser

In my ongoing battle with those who feel they should have their way with the computers I defend, I recently had an idea for detecting a common part of the infection chain.  That term, to me, means the process of exploitation often used in drive-by attacks to thwart IP and domain blacklisting, and to make attribution harder, since the target is bounced between several IPs/domains in different countries through dynamic DNS techniques.  This certainly makes it hard to determine which IPs/domains are links in that infection chain at any particular moment, and admittedly it is not what my toolset is best suited for.  That’s the job of the many services that specialize in such things (e.g. Malwaredomainlist.com, spamhaus.org, Bluecoat, etc.).  My task is to detect the techniques and malware the bad guys use to accomplish their main goal: running code of their choice inside my network.

Along that vein, while investigating a recent drive-by landing page on a compromised site, I realized it was easily the third time I’d seen an HTML page use the somewhat deprecated HTML meta refresh technique to move the user to the next link in the infection chain.  This made sense to me: once attackers compromise a WordPress blog, a discussion forum site, an FTP account via brute force, etc., they often get upload permissions, and it is then trivial, even scriptable, to upload a .html file they have saved somewhere that redirects the user behind some unsuspecting message like “Loading…”  Here is the example that triggered my thought during that investigation.  In this example an HTML file was uploaded to the otherwise good site http://dropyourtalent.com at /rumyn.html; click the pic to make it larger.

blacole landing page
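For illustration, this kind of meta refresh redirect is straightforward to flag with a pattern match.  Below is a minimal Python sketch of the idea (Python rather than NetWitness's actual Flex parser XML syntax, and the sample page is a made-up stand-in, not the real rumyn.html):

```python
import re

# Case-insensitive match for <meta http-equiv="refresh"> tags whose content
# attribute carries a url= target, i.e. a client-side redirect to the next hop.
META_REFRESH = re.compile(
    r'<meta[^>]+http-equiv\s*=\s*["\']?refresh["\']?[^>]*'
    r'content\s*=\s*["\']?\s*\d+\s*;\s*url\s*=\s*([^"\'>\s]+)',
    re.IGNORECASE,
)

def find_redirect(html: str):
    """Return the meta refresh target URL if present, else None."""
    match = META_REFRESH.search(html)
    return match.group(1) if match else None

# Hypothetical landing page modeled on the behavior described above.
sample = ('<html><head>'
          '<meta http-equiv="refresh" content="0;url=http://evil.example/next.php">'
          '</head><body>Loading...</body></html>')

print(find_redirect(sample))  # http://evil.example/next.php
```

A real Flex parser would register this pattern against reassembled HTTP payloads, but the detection logic is the same.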

Continue reading

RSA Netwitness Investigator Regular Expressions

In the blog post below (here) I talked about my theory for detecting DGAs by looking for runs of consecutive consonants within a URL.  However, my Snort rule does not work like I wanted it to.  I thought the rule would find the Host: portion of the HTTP header and start the regex match after the colon and whitespace.  What it actually seems to do is try to match the whole line as one entity, including the word Host, which causes the match to fail if there are ANY vowels in the line, rather than only caring whether there are x number of consonants in a row before or after a vowel.  So I figured I’d leverage another tool: RSA’s NetWitness network forensics product.  You can download a home version of the tool here.
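The failure mode is easy to reproduce outside Snort.  Here is a small Python sketch (Python's re rather than Snort's PCRE, but the semantics are the same for this pattern) contrasting the anchored pattern as written against the unanchored consonant-run search that was intended; the header value is one of the DGA-style samples from the earlier post:

```python
import re

CONSONANT_CLASS = "[tnrshdlfcmgpwbvkxjyqz0-9]"

# The rule's pattern as written: anchored over the whole header value, so one
# vowel (or even a dot, which is not in the class) makes the match fail.
anchored = re.compile(rf"^Host: {CONSONANT_CLASS}{{9,}}$", re.IGNORECASE)

# What was intended: flag 9+ consonants in a row *anywhere* in the hostname.
run_search = re.compile(rf"{CONSONANT_CLASS}{{9,}}", re.IGNORECASE)

header = "Host: qqwfddgtgfbafgnhnusmz.cx.cc"

print(bool(anchored.match(header)))     # False: the vowels and dots break it
print(bool(run_search.search(header)))  # True: 'qqwfddgtgfb' is an 11-consonant run
```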

NetWitness Investigator allows users to use regex to filter the packet capture data, including of course web surfing.  As I understand it, it uses the Boost regex engine as opposed to the PCRE engine I’m more familiar with.  Either way, there doesn’t seem to be much documentation included with the product around the needed syntax, so I recommend you just throw in your regex and see what happens, as Investigator will do some sanity checking.  Taking my own advice, I entered my Snort regex as a custom drill in the alias.host field (a helpful message from an experienced NetWitness user made me realize I shouldn’t assume readers knew I was using a custom drill for this filter).  As you will quickly learn, you need to backslash-escape most non-alphanumeric characters until it works.  The pic below shows the error checking where I intentionally left off the backslash escape character in front of the comma in the repetition operator {9,}.

regex_validation

Continue reading

Security I want around my online banking experience…

UPDATE 3/7/12:  So hitting the banks up on Twitter went nowhere; they didn’t even approve my comments.  I have a good discussion going in the “Internet Banking” group on LinkedIn if you want to join.  Next step might be a reddit post or possibly emailing some of the bank CSOs directly.  Wish me luck.

5/5/12: I reached out to BofA’s CISO Patrick Gorman and was ignored. I don’t see much point in banging my head against the wall anymore.

Online banking and bill pay are two of the conveniences of the information age. I have memories of seeing my mom paying the monthly bills late at night: filling out checks, opening and addressing envelopes, putting on stamps, etc. She then of course had to manually balance her checkbook with a calculator. I cringe at the thought of having to spend several hours a month in such a manner; with online banking and bill pay it usually takes me 30 minutes to send out the money with little or no writing or mailing.

As with many conveniences, however, it comes with a risk of potential abuse. Individuals certainly aren’t the only ones being victimized; especially vulnerable are small to medium-sized businesses, which have relatively a lot of money compared to the average Joe Schmo like me, and unfortunately for them they are not covered by the FDIC or the bank’s policy. Here’s a Bloomberg article from August of last year that goes into several examples of theft and how the banks don’t always reimburse those companies the way they reimburse you when your CC is stolen. Bloomberg article

So without further ado, here is what I humbly ask from my bank: three fairly quick wins that should be implemented as soon as possible. I’ll let you know if I hear anything back from them. I plan on hitting some of the bigger ones up on Twitter and LinkedIn to see if any of that social media mumbo jumbo works for me and my little blog.

Continue reading

Detecting the Bad from the Good…

UPDATE 4/3/12: I worked with Joel Esler on the Snort-sigs mailing list to develop the signature below. However, there’s been some concern around the system-resource cost of regexing every GET request to the internet.  I’m thinking I might have to adjust the rule to exempt the .com and .net TLDs.  Less effective, I know, but at least it won’t kill the sensor. This technique is probably better for offline static analysis of logs than realtime IDS.  Damballa has two good papers on their work around detecting DGAs (domain generation algorithms) and how they haven’t gone away now that Conficker is out of the news.  Links
alert tcp $HOME_NET any -> $EXTERNAL_NET $HTTP_PORTS (msg:"WEB-MISC http header with 9 or more consonants"; flow:to_server,established; content:"GET"; http_method; content:"Host: "; http_header; pcre:"/^Host:\s[tnrshdlfcmgpwbvkxjyqz0-9]{9,}$/Hi"; metadata:service http; classtype:bad-unknown;)
UPDATE 10/28/12: The above sig does not match unless there are NO vowels in the host header.  Please see an updated post here.
_________________________________________________________________________________
In security monitoring it’s your job to use your creativity to design rules and dashboards so you can identify evidence of malicious activity.  In general there are two strategic ways to do this: detect the bad (blacklisting) or detect the good (whitelisting).  It doesn’t take a genius to realize that whitelisting is the more effective strategy overall, although it’s much harder to implement.  Even so, blacklisting is still useful for increasing defense in depth.  Speaking of getting creative to detect the bad guys, a recent thought I had is to look for one of the tactics they use to avoid being taken offline: namely, registering and using a large number of domains so that content filtering and whitehat organizations can’t keep up.  You could see this during the Conficker worm battles, where early versions were programmed to connect to 250 domains a day; when the Conficker Cabal launched an effort to pre-register those names, later worm versions came out in direct response with algorithms generating 50,000 domains per day.
The flaw I see in the “I can register more domains faster than you can” tactic is often the makeup of those domains. By their nature they are frequently a collection of pseudorandom letters, not valid words strung together in your language like a normal domain.
For example (consonants in a row in red):
jxnrxlwmulpefpjt.org
qqwfddgtgfbafgnhnusmz.cx.cc
fhgis7afg7s6d7fgs76odf.ws
khetttttttt.coom.in
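To put numbers on the idea, here is a quick Python sketch of my own (an illustration, not part of the Snort work above) that measures the longest consonant run in each sample domain; digits are counted as consonants, matching the rule's character class, since DGAs mix them in freely.  Note that with a cutoff of 9 some of these samples would slip under, so the threshold is a tuning knob:

```python
import re

# Same character class as the Snort rule above: consonants plus digits.
CONSONANT_RUN = re.compile(r"[tnrshdlfcmgpwbvkxjyqz0-9]+")

def longest_run(domain: str) -> int:
    """Length of the longest consonant/digit run in a domain."""
    runs = CONSONANT_RUN.findall(domain.lower())
    return max((len(r) for r in runs), default=0)

for domain in [
    "jxnrxlwmulpefpjt.org",
    "qqwfddgtgfbafgnhnusmz.cx.cc",
    "fhgis7afg7s6d7fgs76odf.ws",
    "khetttttttt.coom.in",
    "google.com",  # a normal domain for contrast
]:
    print(f"{domain}: longest run = {longest_run(domain)}")
```

A legitimate English-word domain rarely breaks a run of four or five, which is what makes this signal usable at all.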

Detecting emails spoofing your domain using Outlook Rules…

The concern over targeted emails from sophisticated attackers has only increased after the Aurora attacks, not to mention the latest Google vs. China row and RSA’s compromise. One technique attackers use to make their targeted email look more legitimate is to fake, or spoof, the email’s from address using the recipient’s own domain, so it is more likely to be opened. The constant messaging from security about the dangers of email can make opening a spoofed message cause much alarm for the average user, which of course usually only becomes evident after the message has been opened. That is quickly followed by the usual questions: why am I getting this email, Joe said he didn’t send it? It says I sent it! It’s not in my sent items, am I infected? Spoofing email addresses has a long history of being allowed by the SMTP protocol, and many companies currently rely on that “feature” to deliver mail legitimately; think surveys, send-this-link-to-a-friend, web page forms, mailing lists, etc. So it’s not as simple as just blocking everything with an internal domain that comes from the Internet, because it’s not all bad. It’s also made more difficult by how SMTP works, where the MAIL FROM (aka SMTP from, RFC 2821, envelope from) is often what is evaluated at the mail gateway/server level, but the user will just see the FROM (aka RFC 2822, message from).
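To make the RFC 2821 vs. RFC 2822 distinction concrete, here is a small Python sketch using the standard library; every address and hostname in it is a placeholder of my own, not anything from a real incident:

```python
from email.message import EmailMessage

# The RFC 2822 "message from" lives inside the message the user's client shows.
msg = EmailMessage()
msg["From"] = "ceo@victim-corp.example"      # what the recipient sees
msg["To"] = "employee@victim-corp.example"
msg["Subject"] = "Urgent: review attached"
msg.set_content("Please review.")

# The RFC 2821 "envelope from" is a separate SMTP parameter, evaluated at the
# gateway.  Nothing forces the two to agree, which is the whole trick:
envelope_from = "bulk@mailer.example"  # e.g. a survey/mailing-list service

# With smtplib the mismatch is explicit (not actually sending here):
#   smtplib.SMTP("gateway.example").send_message(msg, from_addr=envelope_from)

print(msg["From"])      # ceo@victim-corp.example
print(envelope_from)    # bulk@mailer.example
```

The gateway evaluating only `envelope_from` while the user sees only `msg["From"]` is exactly why blocking on either field alone misses legitimate or malicious traffic respectively.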

Continue reading

Countering the new threats…

The SANS incident handling steps are:

  • Preparation
  • Identification
  • Containment
  • Eradication
  • Recovery
  • Lessons Learned

I’m going to talk about Preparation and Identification today. See, I enjoy the challenge of having a network and resources to protect.  It’s funny to see how that translates into my non-work life; for example, I’ve always liked RTS-type games and often take a defensive posture by default.  Called turtling in the gaming world, it’s more likely than not to get you a loss, because most games I’ve played seem to favor the noob-friendly, kamikaze-aggressive style, or rushing. However, that just makes me enjoy defense even more; who wants to take the uneducated, inexperienced, easy way out?  In the real world, all-out, no-fear attack is not a legitimate infosec strategy for obvious reasons.

First, it’s well documented how the “bad guys” have changed over the last two decades: from notoriety-seeking weekend hackers, to “hey, I can make money at this” full-time hackers, to organized criminal gangs.  What I don’t hear enough about is the current migration from gangs to an underground criminal marketplace, and that is just plain frightening.  Organized crime is dangerous and hard to stamp out, but it’s a threat that can be met with equal good-guy organization and cooperation, like when it took Eliot Ness and the federal government to stamp out the mobs and corruption of the ’30s.  Complicated bad can be fought with complicated good; hard, but doable.  But how do you fight a decentralized economy of goods and service providers with specialized skillsets, profiting off loose, dynamic, and temporary connections to others?  That is really hard to do. Now throw in more recent fully state-sponsored agencies targeting small subsets of the internet (Advanced Persistent Threat [APT] attacks on Google) and you’ll see why information security is so much in the news these days.

Continue reading

Cheap Website Monitoring…

A big part of being a security analyst is figuring out why something you administer is blocking what a customer is trying to get to.  I actually like those problems because I look at them as mini-puzzles. The key to these issues, and many others for that matter, is being able to recreate the trouble with your own equipment.  I usually tell people: if it’s blocked for me too, then I will be able to fix it.  The hard problems come when it’s an occasional issue, or only from one part of the network, etc.

Anyway, the latest scenario involved a city employee trying to get to a local high school’s website to look at its layout.  The site was blocked with the category pornography, which seemed like a miscategorization.  After recreating the problem on my desktop I got a hunch, which led me to the reason for this post.  I headed to Google and searched “site:xxxxxx.edu nude”, and it came back with results that would make any webmaster wince.  Pictured below (anonymized to protect the innocent)…

So that was quickly solved by making the school’s webmaster aware of the injected HTML SEO-poisoning keywords and asking our vendor to re-evaluate the site once cleaned.  But more to the point, such Google searches are a really cheap way to do some manual monitoring of websites under your protection. I personally do searches like these every few weeks, on the off chance that one day I will get something other than “no webpages found.”  Don’t forget to submit requests to clear the major search engines’ caches if you’re hit, or these results will stick around for a while.
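If you want to make this periodic habit slightly more systematic, the queries are trivial to generate.  A minimal Python sketch; the site names and keywords below are generic placeholders of my own, not real protected sites:

```python
from urllib.parse import quote_plus

# Hypothetical examples only; substitute the sites you protect and your own
# keyword list.
sites = ["example-school.edu", "example-city.gov"]
keywords = ["keyword1", "keyword2"]

def dork_urls(sites, keywords):
    """Build site:-scoped Google search URLs for manual review."""
    return [
        "https://www.google.com/search?q=" + quote_plus(f"site:{site} {kw}")
        for site in sites
        for kw in keywords
    ]

for url in dork_urls(sites, keywords):
    print(url)
# e.g. https://www.google.com/search?q=site%3Aexample-school.edu+keyword1
```

Open the handful of URLs in a browser every few weeks; any result at all is your cue to dig in.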

PS I’ll leave it up to the reader’s imagination on which keywords to use.

Deleting Files Already in Use Remotely…

I had a bit of a frustrating day the other week: a machine was infected with an IRC bot that AV couldn’t detect or remove, beyond the relatively minor hosts file modifications. The machine was re-imaged and then even replaced, but kept getting reinfected a few hours later.  Of course, 99% of the time that means a PEBCAK (google it).  It turned out to be a user’s personal infected thumb drive that was spreading the bad stuff.

Anyway, since we were getting a large number of email alerts from the IPS on outgoing IRC NICK registration to IPs in Asia, I RDPed to the machine and copied TCPView from a share to see what exe was creating the process sending the IRC traffic.  The malware was able to hide the process from taskmgr, but not from the Sysinternals tool. It turned out to be the file run64dll.exe, so I tried to stop the process and it immediately started back up; not surprising.  I tried to delete the exe, but Windows complained about it being in use.  This is usually where safe mode comes into play, but I was sitting miles away, so I employed a trick that is the reason I’m writing this blog entry.  You will most likely have to be local Admin or have those rights on the target folder/file, but when you need to kill a file in use, try changing the permissions on the file to DENY ALL (right-click the file and go to the Security tab in Windows Explorer).  This will quickly stop the process you couldn’t kill from accessing the file, and will probably then allow you to delete it.  Worked for me and hopefully it will for you.  After that I highly recommend a full re-image; you have proof that malicious software successfully ran on that machine and there is no way to know for sure how deep the infection went.  Don’t risk it, just wipe it.