Archive
If You Are Doing Incident Response, You Are Doing It Wrong
I’d been thinking about this for a while, but conversations with Rob Lee and then a presentation with him really helped me clarify my thinking on this issue. Here goes:
If you are doing incident response, you are psychologically, if not operationally, in a reactive rather than proactive mode. To do it right, incident response needs to be part of your ongoing daily business process. True incident response only occurs during major breaches. As part of your incident management, you proactively – days, months, even years in advance – address the issues that might create a need to respond to an incident.
By managing incidents rather than responding to them, you:
- Reduce the severity of the incidents that do occur.
- Reduce the number of incidents that do occur.
- Shift from responding to incidents to managing incidents as part of your normal operations.
- Reduce unforeseen expenses related to incident investigations.
- Increase your visibility within the business, and thus the support for your organization.
- Strengthen security posture. (Thank you to Corey.)
- Reduce stress on your staff and increase their job satisfaction (unless they are adrenalin junkies).
An incident management mindset depends on accepting a truism:
Compromise Is Inevitable – Something truly malicious has been in, is in, and will be in your environment.
If you accept that compromise is inevitable, why wait for it to happen? Why not get ahead of it, reduce its impact, and increase your resilience?
Which leads me to my second point – traditional emergency management has been doing this for decades. If you do a search for “Emergency management cycle”, you will find many images similar to the following:
Tornados, earthquakes, fires, automobile accidents, heart attacks, and many more emergencies happen daily. Rather than treating these as one-off incidents that require all hands on deck, emergency services plan, recruit, train, and respond in a very calm, businesslike manner because it is their normal business. (I speak from 15 years of emergency management experience. Find a firefighter in your organization and run this past them.) When a fire engine rolls up to a fire, does everyone jump out, run around, and add to the chaos? No, they respond in a very consistent, calm, and methodical manner.
Take a hard look at ICS – Incident Command System. FEMA has several short, online courses to familiarize you with it. Step back and think about how it might apply to your organization. The modular, scalable nature of ICS enables effective response to incidents by multiple agencies. Sounds like something that might apply to a breach investigation? (You don’t need to buy into all the labels. Just think about the core concepts.)
In closing, let me ask you to think on two points:
- Manage incidents, and the entire lifecycle, in a way that enables you to treat incidents as part of your normal operational tempo.
- Pay attention to how traditional emergency management works and learn from it. An enormous amount of thought and effort has been invested in emergency management already. Build on that rather than trying to recreate everything.
SANS DFIR Summit Prague – Blue Team Perspectives slides
I gave a presentation at SANS DFIR Summit in Prague this morning. My presentation was designed to introduce DFIR practitioners to the larger business context that they might be working within. This could help with career progression, avoiding frustration in the workplace, or developing your reputation within your firm to name just a few possibilities.
Any and all feedback on the presentation is welcome.
Patents in the DFIR community space
Good morning,
David Cowen announced that he has submitted a patent application for NTFS TriForce. Let me start off by stating that I admire David quite a bit, I think TriForce is very useful and pushes into new territory, and that I am not angry at anyone, least of all David.
That said, I’m very concerned. The discussion is going on over in G+. Here is my very off-the-top-of-my-head contribution to what may be a very interesting discussion. I hope others chime in.
From the G+ post:
I think my response may have been phrased in overly strong language. I’ve given some thought to why – why I am concerned and why my response was more emotional than the situation warranted.
My perception was that your tool was well supported by the community, and that through your beta program, presentations, and blog posts you were engaging with the community to help develop it. I, rightly or wrongly, mentally lumped it in with other reasonably priced tools that were closely tied to the DFIR community. So when I saw your patent post my immediate thought was “There’s another tool that I’m not going to want to support any more.”
Your post set off warning bells in my hindbrain. My experience with patents has almost always been negative, and in some very personal ways at times. I’ve had colleagues say “I’m doing this for all of us” on more than one occasion. Even when they were being honest, there were negative repercussions.
I think you’re going to run into prior art issues, and quite a few of them. I think that this may be part of my emotional reaction. My perception is that the prior art may be the DFIR community’s work and I’m reacting to the perception that you may be trying, directly or indirectly, to patent the work of many other people.
I fear that you may set off an arms race in the DFIR community. Maybe it is going to happen anyhow and you’re just getting there first. I don’t really want to be a part of that, and I’m not going to be thrilled about watching it happen if it does occur.
Guidance is a very poor example to cite of good behavior with respect to public relations, community engagement, and good business decisions. If they are your model for pretty much anything, you’re elevating my level of concern.
There are other ways to protect and control your intellectual property without patents. Sharing your thoughts on why you’re going with patents rather than licensing would be helpful.
Someone asked me if I am angry. I’m not. I am quite concerned though and I look forward to seeing how this plays out.
-David
IRcollect – collect incident response information via raw disk reads and $MFT parsing
ircollect is a Python tool designed to collect files of interest in an incident response investigation or triage effort. This is very beta code. I’m hacking on it regularly, using it to learn about internal structures, finding minor and major issues, and so on. Use it at your own risk! If you have advice on how to address issues I’ve encountered, please share.
In the process of writing this, I added data run parsing and ADS detection to analyzeMFT so those are now available.
The github site has more details and will be updated much more regularly than this blog.
Running as local admin, it:
- Opens the raw disk.
- Reads the master boot record, collects a copy of it, and uses the MBR to find partition and disk information.
- Uses the MBR information to find the NTFS partitions.
- Works from the start of the NTFS partition to find the $MFT.
- Collects a copy of the $MFT and then builds a list of all the files on the system and their data runs.
- Uses the file list and data runs to collect interesting files through direct reads from the disk, bypassing access controls.
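The partition-discovery step above can be sketched in Python. This is a minimal illustration rather than ircollect’s actual code; it assumes a classic MBR with four 16-byte partition entries starting at offset 0x1BE, and that you already have the first sector in hand.

```python
import struct

def parse_partition_table(sector: bytes):
    """Parse the four primary partition entries from a 512-byte MBR sector."""
    # A valid MBR ends with the 0x55AA boot signature.
    assert len(sector) == 512 and sector[510:512] == b"\x55\xaa", "not a valid MBR"
    partitions = []
    for i in range(4):
        entry = sector[0x1BE + i * 16: 0x1BE + (i + 1) * 16]
        # Layout: status(1), CHS start(3), type(1), CHS end(3), LBA start(4), sectors(4)
        status, ptype, lba_start, num_sectors = struct.unpack("<B3xB3xII", entry)
        if ptype != 0:  # 0x00 marks an unused slot
            partitions.append({
                "bootable": status == 0x80,
                "type": ptype,           # 0x07 = IFS (NTFS/exFAT) candidate
                "lba_start": lba_start,  # first sector of the partition
                "sectors": num_sectors,
            })
    return partitions
```

On a live Windows system you would read sector zero from \\.\PhysicalDrive0 (local admin required) and pass it in; entries with type 0x07 are the NTFS candidates to walk for the $MFT.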
All collected files are stored in a directory specified with the -d option. They are further organized by hostname and the date-time the script was run.
Requirements:
pip install analyzemft
Status:
VERY beta. Active development daily, often hourly.
Currently collects master boot record, $MFT, and live (corrupted) registry hives. User can modify table in ircollect.py to specify any files they desire.
Thank you to:
- Jamie Levy for mbr_parser
- Willi Ballenthin – bit manipulation code, lots of useful tips for analyzeMFT
Adventures in Powershell for IR
So, I wanted to access locked registry hives. Simple enough using F-Response, but after that it devolves into various poorly supported solutions. I came across one solution that was of particular interest from a response side but also from an attack side:
Using PowerShell to Copy NTDS.dit / Registry Hives, Bypass SACL’s / DACL’s / File Locks
In short, it opens a read handle to the C volume, parses the NTFS structures, and reads the files directly thus bypassing all access controls and locks. You do need to be local admin to run it.
This is great for getting locked registry hives, or for remotely copying NTDS.dit without deploying hacker tools on the remote system. Bear in mind that the remote system needs to be running the WS-Management service. This is not running by default on our Windows 7 desktops, but the author mentioned that it is running by default on Windows Server 2012.
There are a number of niggling issues with getting PowerShell scripts to run. This article covers almost all of them nicely: Execution Policy
However, it didn’t cover one issue – what happens when you try to do:
Set-ExecutionPolicy RemoteSigned
and get a registry access error?
This post explains how to edit the registry directly.
Once you’ve worked your way through those issues, you can grab local and remote files to your heart’s content.
DFIR Fiction Reading List
The Digital Forensics and Incident Response fiction reading list, in no particular order:
- Ender’s Game – Orson Scott Card
- Jumper and Reflex – Steven Gould
- Most anything by John Grisham
- Daemon – Daniel Suarez
- Zero Day and Trojan Horse – Mark Russinovich (yes, that Mark)
- Pretty much anything on the Access Data or Guidance Software support web sites
- The Blue Nowhere – Jeffery Deaver
- Halting State and Rule 34 – Charles Stross
- Zero History – William Gibson
- American Gods – Neil Gaiman
- The Magicians – Lev Grossman
- The Night Watch Trilogy – Sergei Lukyanenko
Dissecting a Blackhole 2 PDF (mostly) with peepdf.
I’m fairly new to malware analysis having spent most of the last ten years doing IT consulting, computer forensics, ediscovery, and some related work. I’m now doing a lot of incident response and am taking on some malware analysis responsibilities, at least on a triage and management level.
We got phished the other day, and a rather nice phish it was. Kudos to the mail team for shutting it down quickly and to an alert user who escalated it as well. Some quick dynamic analysis led to a PDF, and there our story starts.
If you open the PDF up you’ll get a rectangle and possibly an error message.
So what do we as analysts do with this? There are a lot of analysis tools out there and I worked my way through quite a few of them, partly just to see what worked, and how they worked. The one I ended up using was peepdf. I barely scratched the surface of its capabilities, but the following features sold me on it for this project:
- It handled the malformed references that many other PDF analysis tools failed to handle.
- It is command line based and scriptable. If you develop a peepdf workflow, put it into a file and execute it each time.
- It can search both raw and decoded objects.
- It has Spidermonkey built in.
So, I loaded my malicious PDF into peepdf and got the following output:
Well, that’s pretty simple, there is only one object with JavaScript in it. Let’s take a look at it:
One thing immediately leaps out at you and another one follows soon after that. First off, this is very ugly code. I mean, rather than just saying “ff = charCode” the construction of ff is broken up over multiple lines. This is a classic obfuscation technique, though a lot easier to detect in JavaScript than in assembly code. If you look through this code you’ll start to see other similar techniques and will eventually be able to see some pretty simple structure. A hint – everything between /* */ pairs is a comment.
I cleaned up the code a bit and rewrote it in pseudocode (because I don’t know JavaScript yet) to try to figure out what was going on. As you run into JavaScript calls just do a Google search for them, just as you would for Windows APIs. You don’t need to be an expert programmer to figure out what the code is doing. In this case, the bulk of the code amounts to this:
s = s + char(str(int(concat(b1, b2), 0x1a)))
It is building a string up by concatenating pairs of bytes, converting that to an integer with radix 0x1a, and converting that to a string and then into a character.
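In Python, that decode loop amounts to something like the following. This is my reconstruction with hypothetical names, not the malware’s literal code; int() with base 26 handles the radix-0x1a conversion.

```python
def decode(blob: str) -> str:
    """Decode a string by treating character pairs as base-26 (radix 0x1a) numbers."""
    out = []
    for i in range(0, len(blob), 2):
        pair = blob[i:i + 2]            # two "digits" in base 26 (0-9, a-p)
        out.append(chr(int(pair, 26)))  # pair -> integer -> character
    return "".join(out)
```

For example, ‘h’ is code point 104, which is 4*26 + 0, i.e. “40” in base 26, so decode("4041") gives back “hi”.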
So that is the second part, a decryption routine, but it needs something to decrypt. I guessed that the first part located the stuff to decrypt and that it found it using the keyword “creation date”. So, how to find that stuff? Back to peepdf:
The search for “creation date” didn’t turn up anything, but searching for “creation” produced hits in object 3 and object 43. We’re already looking at object 43 so let us see what is in object 3. Lo and behold, there is CreationDate and a lot of … stuff. Working on the assumption that the code in object 43 will decode the stuff in object 3, I proceeded as follows. (Yes, there is probably a way to do this all within peepdf, but I’m still learning how to use spidermonkey properly so I took this route.)
First, dump object 3 out.
PPDF> object 3 > object3.txt
Write a Python version of the code in object 43:
Strip the noise off the front of object3.txt (“<< /Title asdasdsad/CreationDate %#^&*%^#@&%#@3”) and then run the Python code against the object 3 stuff we saved earlier:
> jsparse.py -f object3.txt > object3.js
And then jump back into peepdf and clean up the newly created code:
PPDF> js_beautify file object3.js > object3-clean.js
This illustrates one of the things that I love about peepdf – it includes a lot of very useful functionality in the application so you don’t need to jump in and out of the tool all the time. (My foray into Python is due to my own issues and not peepdf’s.)
object3-clean.js now contains the second stage of the malicious PDF.
There is a lot more that can be done with this, such as noticing that the JavaScript coding style looks a lot like the php code used elsewhere in this phishing attack, but I’ll leave that and leave decoding the second stage for another day. Readers interested in carrying on will note that var1 and var2 are awfully similar and may be headers for shellcode.
This was a pretty high level run through of a relatively simple problem done by someone rather new to the subject, but hopefully it left you with the confidence to dive into this sort of thing yourself. There are a lot of good tools out there, lots of examples to work on, and many good people to help you out. (Tip of the hat to Willi and to the folks from rem-alumni.)
It isn’t APT, it is SASPDT – Sometimes Advanced, Sometimes Persistent, Definitely a Threat.
I’m human (thankfully) and I get irked by simple things at times. Today it is due to conversations such as this one:
Them: “That malware wasn’t very advanced, it is just a version of <insert commodity malware here>”
Me: “Interesting. What’d they do with it?”
Them: “Moved laterally to our domain controller, dumped all the hashes, and shipped them out via FTP.”
Me: <silent>
OK, so it isn’t APT, it is SASPDT – Sometimes Advanced, Sometimes Persistent, Definitely a Threat.
“Advanced” isn’t required if they (insert your favorite description of the threat actor) can get into your environment using commodity malware, move laterally and collect sensitive data due to poor security controls, and exfiltrate the data via FTP because you don’t have any DLP in place. Similarly, “Persistent” isn’t required if they can phish their way in at will.
As long as less sophisticated attacks work, there is no need for malicious actors to deploy more advanced tools. Why was Stuxnet used on Iran and why aren’t you seeing Stuxnet in your environment? Because the attackers needed something sophisticated to get into the Iranian nuclear program environment but don’t need the same level of sophistication to get into your environment.
I normally don’t get too hung up on the term “APT”. For me, it is a convenient shorthand for “groups of often well funded malicious threat actors who may or may not be state sponsored but who are definitely capable of breaking into most environments and taking sensitive data.” Dismissing an attack because it wasn’t advanced, or because it didn’t come from China, seems unwise to me. If they pose a significant risk to your business, then they’re DT – definitely a threat.
The ultimate collection kit.
So, there I was …. Or, in other words, once upon a time. Or, …. Anyhow, I’m off doing a really “interesting” collection job. It’s a mix of ediscovery and forensics, with all the typical issues – custodians available only for a day, unexpectedly large hard drives, systems that cannot come down at all, 3 Sony Vaios with just one power cord, etc. And, par for the course, no real idea of what I’m getting into prior to showing up on site, despite efforts to gather information. So, what made this a fun collection rather than a nightmare? The ultimate collection kit:
- WinFE with FTK Imager, IEF, and X-Ways. This successfully imaged a Vaio laptop with dual SSDs in a RAID configuration without a hitch.
- Tableau TD1 – if this thing would write to multiple destination drives simultaneously, I’d kiss it. Even without the dual destinations, it is a rock solid imaging solution. (Bring a USB keyboard to make things a bit easier.)
- FTK Imager CLI – OK, I know how to use dd and its brethren, but FTK is a bit more full-featured, and being able to use one software tool across all the platforms was great.
- FTK Imager – FTK Imager doing logical folder collections made packaging the loose files very easy. And, again, one software tool.
WinFE
- It will boot any Intel system, including Macs.
- It is forensically sound.
- It is (relatively) easy to add your own tools.
> diskpart (to run DiskPart)
> list disk (to see the media connected to the system)
> select disk N (where N is the number of your destination drive)
> online disk (to bring the disk online)
> attributes disk clear readonly (to allow writing to the disk)
> list volume (in order to choose the volume on the destination disk to write to)
> select volume V (where V is the volume number of your destination disk)
> attributes volume clear readonly (to allow writing to the volume)
> assign letter=Z (any letter you choose; your image will be written to this letter)
Of course, there are all sorts of other things in my collection kit – two Pelican cases full of stuff, in fact, but everything mentioned here will fit in one case and will allow you to handle quite a bit of what might be thrown at you.
EnCase + RegRipper + dtSearch + … for incident response
So there I was, working an IR case …. The forensics version of “It was a dark and stormy night.” For all the typical reasons, I cannot share the details, but I can share some “what worked” tips that may be helpful to others. Please bear in mind that these are just small pieces of a much larger IR plan, toolkit, and process.
1) Keep a running log of everything you do. This is SOP for forensics, IR, etc., for well-known reasons. One additional reason that isn’t often mentioned is that it serves as a governor on your pace. If you’re taking the time to collect your thoughts into a log, more of your thoughts have a chance to catch up with you, make it onto the page, and join in building the overall picture of the event. Without this governor, a lot of information is never documented, possible leads aren’t explored, and you will feel less like you’re in control of your response.
2) Dump the registry hives as early as you reasonably can and set up a process for running them through RegRipper. I used a combination of tools to simplify this:
- A batch file to create the default folder structure.
- An EnScript to locate and extract all the registry hives.
- A Python script to walk through all the files in a given directory, run the appropriate RegRipper plug-in on each, and write the output to a dedicated output directory.
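A minimal sketch of that driver script is below. It assumes RegRipper’s rip.pl is on the path and that the hive type can be guessed from the file name – both assumptions for illustration, not the script I actually used.

```python
import os
import subprocess

# Map hive file names to RegRipper profile names (assumed naming convention).
HIVE_PROFILES = {
    "system": "system", "software": "software",
    "sam": "sam", "security": "security", "ntuser.dat": "ntuser",
}

def guess_profile(filename: str):
    """Pick a RegRipper profile based on the hive's file name, or None."""
    return HIVE_PROFILES.get(filename.lower())

def rip_all(hive_dir: str, out_dir: str):
    """Run rip.pl against every recognized hive under hive_dir."""
    os.makedirs(out_dir, exist_ok=True)
    for root, _dirs, files in os.walk(hive_dir):
        for name in files:
            profile = guess_profile(name)
            if profile is None:
                continue  # not a hive we recognize
            out_path = os.path.join(out_dir, name + ".txt")
            with open(out_path, "w") as out:
                # rip.pl -r <hive> -f <profile> writes its report to stdout.
                subprocess.run(
                    ["perl", "rip.pl", "-r", os.path.join(root, name), "-f", profile],
                    stdout=out, stderr=subprocess.DEVNULL, check=False)
```

Keeping the hive-to-profile mapping in one table makes the run repeatable across systems, which is the point: the same hives get the same plug-ins every time.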
This combination of tools allowed me to quickly process a system in a repeatable fashion while minimizing the chances of human error creeping in.
3) With all the RegRipper output organized by system, I pointed dtSearch at the folder containing all the output and indexed it. This allowed me to quickly search the RegRipper output from all of the systems as new information came in.
4) Malware often stomps on the MAC times but may leave the information in the $FILE_NAME attribute alone. There are a lot of ways of getting at this information, but I dumped the $MFT out, ran my analyzeMFT tool against it, and created a spreadsheet I could use to do further analysis. This is a very rough form of timeline analysis but, no matter how you do it, timelines are crucial and should be developed. Further, they should be developed with the most accurate data possible.
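One way to spot candidate timestomping in that kind of output is to compare the $STANDARD_INFORMATION and $FILE_NAME creation times per record. The column names below are hypothetical; adjust them to whatever your $MFT parser emits.

```python
from datetime import datetime

def flag_timestomp_candidates(records):
    """Return file names whose $SI creation time predates their $FN creation time.

    Attackers who stomp $STANDARD_INFORMATION timestamps often leave $FILE_NAME
    untouched, so an $SI time earlier than the $FN time is worth a closer look.
    """
    suspicious = []
    for rec in records:
        si = datetime.fromisoformat(rec["si_create"])  # hypothetical column names
        fn = datetime.fromisoformat(rec["fn_create"])
        if si < fn:
            suspicious.append(rec["filename"])
    return suspicious
```

This is only a triage filter – legitimate operations (installers restoring original timestamps, for example) can trip it too, so treat hits as leads, not findings.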
5) If you’ve got malware and don’t know anything about how it behaves, I suggest you take a look at Lenny Zeltser’s site and in particular at his material on reverse engineering malware. He teaches a superb SANS course on the topic, but he also has a one hour video and a PDF of his slides on the site that will really help you get started.
As I said, this is hardly a definitive list of what to do when working an IR case, but it might give you some ideas to include in your response toolkit.