Archive for the ‘Computer forensics’ Category

Finding funding for computer forensics tools, and eating crow

November 18, 2010

In February, I wrote a post entitled “The High Cost of Computer Forensics Software – Your Tax Dollars not at Work”. While I am still frustrated that the company in question chose not to release the enhanced open source software, I am much more aware of the issues involved in getting funding for the development of computer forensic tools. One source of funding is, of course, the US Government in the form of SBIRs, STTRs, and BAAs.

For a brief primer from a school’s department that assists with submitting proposals, check out this link.

Quoting from that document:

SBIR – one to three announcements per year

Phase I
– $75K – $100K (or more) award + options
– 6 months duration
– Feasibility study
– Can sub-contract up to 33.3%

Phase II
– $750K award (typical)
– 18 – 24 months

Phase III
– Unfunded commercialization

STTR
• Same award value
• Prime must perform at least 40% of the work
• Research partner must perform at least 30% of the work
• A maximum of 60% can be subcontracted
• Small business must submit
• Much smaller funding pool

BAA – Broad Agency Announcements
• A description of needed research and technology
• For projects not supported by current programs
• Initiated by a white paper
• Funding not always available!
• Award amounts typically $600K – $850K

Some relevant points from my own experiences with these mechanisms:

  • Long lag between proposal submission and funding
  • Highly structured proposal format (which is a plus in my book)
  • No commercial restrictions on products developed with the funding
  • Must give product to government for free. (They paid for it with the funding.)

The last bullet point is the source of my crow lunch. With funding come strings, and if you want to get a product to market, you need to make some compromises.

So if you’re looking for funding for computer forensics products, you might want to keep an eye on the SBIRs and BAAs. Go read up on the requirements and proposal formats. Think about possible partners that will add value to your proposal. Plan ahead.

Not a sure thing, but a possibility, and there are other similar programs out there.

New version of analyzeMFT

November 17, 2010

I’ve been awfully busy with real work, but thanks to the gentle prodding of some interested parties, I updated analyzeMFT over the past few weeks.

  • Version 1.5:
    • Fixed date/time reporting. I wasn’t reporting useconds at all.
    • Added anomaly detection, with many thanks to Greg Kelley. Adds two columns:
      • std-fn-shift:  If Y, entry’s FN create time is after the STD create time
      • usec-zero: If Y, entry’s STD create time’s usec value is zero
  • Version 1.6: Various bug fixes
  • Version 1.7: Bodyfile support, with thanks to Dave Hull
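As a sketch of what the two anomaly checks compare (the function and field names here are illustrative, not analyzeMFT's actual internals):

```python
from datetime import datetime  # for the illustrative usage below

def flag_anomalies(std_create, fn_create):
    """Return (std_fn_shift, usec_zero) flags for one MFT record.

    std_create and fn_create are datetimes parsed from the
    $STANDARD_INFORMATION and $FILE_NAME attributes respectively.
    """
    # std-fn-shift: timestomping tools typically rewrite the STD_INFO
    # times but leave the FN times alone, so an FN create time later
    # than the STD create time is suspicious.
    std_fn_shift = fn_create > std_create

    # usec-zero: many timestomping tools write times with whole-second
    # resolution, leaving the microsecond component zero.
    usec_zero = std_create.microsecond == 0

    return std_fn_shift, usec_zero

# e.g. flag_anomalies(datetime(2010, 1, 1), datetime(2010, 1, 2)) == (True, True)
```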

The anomaly detection isn’t perfect by any stretch of the imagination; it simply helps reduce the noise a bit.

  • On the $MFT from a volume on a workstation with 110,593 total records, checking for FN creation times greater than STD creation times resulted in 19,649 flagged records. Pretty significant reduction.
  • On the same file, checking to see if the STD creation time microseconds are zero resulted in 14,571 flagged records.
  • Turning both on resulted in 2,157 flagged records. Most appear to be benign. (I hope they all are!)

That’s still 2,157 (or 19,649, or 14,571) files that you need to check by other means, but it is a lot less than 110,593.

If there’s some feature you’d like to see in analyzeMFT, please, do drop me a note.

You can find the source and more details here….

There’s also a great post on how to install Python and run analyzeMFT’s source code here….

One attempt at copier forensics

August 7, 2010

In April of 2010, CBS News kicked off a bit of a firestorm with an article about the lack of security in digital copiers. Like too many mainstream news articles about security, this one was a bit sensationalistic and lacked a broad perspective. Yes, there certainly are some copiers out there that keep unencrypted digital copies of scanned documents, but based on my own experience and the experiences of other forensic examiners, there are a lot of secure copiers out there as well.

A high-level view of my experience with one Ricoh copier follows. This is one copier that is not susceptible to accidental information leakage and that would require tools beyond those available to a normal forensic examiner to crack.

I was able to determine that:

  1. The copier uses two hard drives that have an identical 193 byte boot (?) sector and are superficially close but not identical after that. They contain large sections of null bytes.
  2. The copier uses two operating systems, one a BSD derivative and one a proprietary OS using a proprietary file system.
  3. The processor is a MIPS processor that is bi-endian, capable of operating in little or big endian mode.

I imaged both drives. None of the following tools recognized a file system, RAID, or any artifacts in the images:

– X-Ways
– FTK 3
– EnCase 6.15
– UFS Explorer
– RAID Reconstructor
– strings

To restate – running strings over the images produced no recognizable strings, none of the file carving tools located any artifacts, and indexing produced no results.

To account for the bi-endian nature of the CPU, I swapped the bytes in both images with dd (‘swab’ option) and applied all the tools to the byte swapped images with the same negative results.
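For reference, dd’s ‘swab’ conversion can be reproduced in a few lines of Python (a minimal sketch for readers without dd handy; dd itself is what I actually used):

```python
def swab(data: bytes) -> bytes:
    """Swap each adjacent pair of bytes, like dd's conv=swab."""
    if len(data) % 2:            # dd leaves a trailing odd byte in place
        data, tail = data[:-1], data[-1:]
    else:
        tail = b""
    out = bytearray(len(data))
    out[0::2] = data[1::2]       # odd-position bytes move to even positions
    out[1::2] = data[0::2]       # and vice versa
    return bytes(out) + tail

# e.g. swab(b"badc") == b"abcd"
```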

I looked at the images with a hex editor and found the 193 byte start to the drives along with the similar but not identical structure after that.

I don’t believe the drives were encrypted per se but it seems likely that they contain a proprietary file system.

I have output from the copier’s printer configuration utility that shows BSD style daemons and logging.

A Ricoh engineer who works in an area other than copiers confirmed that the copier does use two operating systems, and that one of them is proprietary and very tightly guarded.

The high cost of computer forensics software, your tax dollars NOT at work

February 23, 2010

Finding quality tools is tough, particularly if you’re an independent practitioner or a small company. One tool at $1,000 to $2,500 is affordable, but we need an entire toolbox full of tools and they’re all trending towards $1,000 and 20% per year maintenance. Pretty soon you’re out $20,000 up front and then $4,000 per year to stay current. OSS and free tools are awfully welcome.

Thankfully, if you’re a US citizen, your tax dollars paid for the development of an OS X forensics tool called MEGA. (paper) Quoting from the paper: “This project was supported by Award No. 2007-DN-BX-K020 awarded by the National Institute of Justice….” Very cool, right? Alas, MEGA morphed into Mac Marshal and went commercial. (And when did this happen? The MEGA paper includes screenshots of the tool with the label “Mac Marshal” rather than “MEGA”.)

So go to the Mac Marshal web site where you find:

“Because of a special arrangement with the U.S. National Institute of Justice, Mac Marshal is available free of charge to U.S. Law Enforcement personnel. If you qualify, please use the instructions below.

Mac Marshal is available for purchase by the private sector, and law enforcement agencies outside of the United States, from Cyber Security Technologies.”

So, if you’re in law enforcement, you can get a copy of it for free. If you’re not LE, you get to pay $995 to Cyber Security Technologies for it. (order form)

Wait, didn’t I already pay for at least some of this tool through my tax dollars? I can see a private developer deciding to give their product away for free to LE, and corporations discounting the product to the government on GSA schedules. But in this case, the tool was developed using US tax dollars, and the price to the public isn’t just recovering costs, it is making a substantial profit.

It gets more interesting….

I got onto this because I was working on vfcrack (Google Code link, OpenCiphers link), a tool to brute force the encryption on DMGs. It’s a bit out of date, and I thought I’d bring it up to speed. Turns out that this has already been done – as part of Mac Marshal.

“Mac Marshal also include a modified version of vfcrack [11], which enables fast dictionary-based brute-force password cracking of FileVault sparseimage and sparsebundle images, as well as other encrypted Apple disk image formats (the original distribution of vfcrack does not support sparseimage and sparsebundle images).” (citation)

So there is open source code in Mac Marshal that may have been updated at the taxpayer’s expense but not returned to the public domain. The vfcrack license doesn’t explicitly prohibit this, but Mac Marshal’s developers’ refusal to put the updated code back in the public domain certainly seems like bad form.

A couple of suggestions if you accept tax dollars to support the development of your tools:

  1. Price the resulting product so that the independent practitioner can afford to buy it without having to really think about it too much. A range of $200 – $300 I can see, but $995 is getting greedy. $200 covers distribution costs, the web site, answering questions, and the like.
  2. If you use open source code in your tool and update it, put the updated code back in the public domain for the rest of us to use. It costs you nothing to do so, it earns you good will, and we (the taxpayers) paid for some of that development.
  3. Remember that we are all working for the public, not just law enforcement. These tools are obviously used in civil matters, civil matters involving the same taxpayers.

And suggestions to tool vendors in general:

  1. Price your tools so they are affordable. We (small companies) aren’t going to drop $1,000 on a tool without thinking about it, much less $2,500 or $5,000. My gut (biased, I’ll admit) says that if some vendors dropped their prices significantly, they’d get a boost in sales that covers the decreased per-unit profit, and they’d get their product into more peoples’ hands, which would lead to more sales. (Or am I being idealistic?)
  2. Don’t discount the influence of someone who appears “small”. Many of us have clients in larger firms, and all of us talk (a lot) amongst ourselves. Check the CCE and HTCIA lists, look at Forensic Focus, go to the forensics conferences and talk to the smaller companies.
  3. Invest in the long term. The small customers you win over now, and who you help do better work so they can be more profitable in the future, will be your beta testers, promoters, and recurring customers in the future.

None of what I describe here is against the letter, or even the spirit, of the law. It probably even falls under “good business practices”. But charging a premium for a tool that was funded in part by US tax dollars, and taking public domain code without returning the changes to the public, both seem contrary to the spirit of the digital forensics community.

Updated analyzeMFT now with binaries! (And the tools required to get there.)

February 19, 2010

I finally figured out how to build a standalone executable after an Alice in Wonderland run through redistributable libraries, py2exe, and Windows installers. There are still some issues, but it works well for the most part. Check the Download section on

Some tools that helped me turn a Python script into something that can run on any (most?) Windows systems are:

  1. py2exe – Read the Tutorial page for some really good help with the .dlls
  2. Dependency Walker – A great tool for determining what modules your application depends on
  3. Inno Setup – A very simple yet powerful tool to build installation packages

Updated analyzeMFT, $MFT sequence numbers, and NTFS documentation

February 10, 2010

analyzeMFT updates:

At the request of Harlan Carvey and Rob Lee I made some changes to analyzeMFT and fixed a few bugs along the way.

  • Version 1.1: Split parent folder reference and sequence into two fields. I’m still trying to figure out the significance of the parent folder sequence number, but I’m convinced that what some documentation refers to as the parent folder record number is really two values – the parent folder record number and the parent folder sequence number.
  • Version 1.2:
    • Fixed problem with non-printable characters in filenames. Any Unicode character is legal in a filename, including newlines. This presented some problems in my output. Characters that do not render well are now converted to hex and a note is added to the Notes column indicating this.
    • Added “compile time” flag to turn off the inclusion of any GUI related modules and libraries for systems missing tk/tcl support. (Set noGUI to True in the code)
  • Version 1.3: Added new column to hold log entries relating to each record. For example, a note stating that some characters in the filename were converted to hex as they could not be printed
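For context on the Version 1.1 change: an NTFS file reference packs both values into eight bytes, so splitting it is a simple bit operation. A minimal sketch (the function name is mine for illustration, not analyzeMFT’s actual code):

```python
def split_file_reference(ref: int):
    """Split an 8-byte NTFS file reference into its two parts.

    The low 48 bits are the MFT record number; the high 16 bits
    are the sequence number of that record.
    """
    return ref & 0xFFFFFFFFFFFF, ref >> 48

# e.g. a reference to record 5 with sequence number 3:
# split_file_reference((3 << 48) | 5) == (5, 3)
```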

The code and more details are available at

Quick note on $MFT sequence numbers:

Microsoft tells us that each record in the $MFT has a FILE_RECORD_SEGMENT_HEADER Structure. Within this structure is a sequence number, defined as follows:

“This value is incremented each time that a file record segment is freed; it is 0 if the segment is not used.”

Ok, that’s pretty straightforward. At least until you look at the first 16 entries in any $MFT, as all of their sequence numbers match their record numbers. I’ve been told that since these files can never be deleted, repurposing the sequence number adds an additional sanity check and disaster recovery option. However, I’ve found one volume where this behavior continues for 12,000 records or more. Still looking into that one.
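To see the record and sequence numbers side by side, the relevant fields can be pulled from a raw $MFT record with a few struct calls. A sketch based on the published FILE_RECORD_SEGMENT_HEADER offsets (note that the record-number field at 0x2C is only present on newer NTFS versions):

```python
import struct

def mft_record_header_fields(record: bytes):
    """Pull three fields from the start of a raw $MFT record.

    Offsets follow Microsoft's FILE_RECORD_SEGMENT_HEADER layout:
    the 'FILE' signature at 0x00, the sequence number at 0x10
    (little-endian USHORT), and -- on newer NTFS versions -- the
    record's own number at 0x2C (little-endian ULONG).
    """
    signature = record[0:4]
    sequence = struct.unpack_from('<H', record, 0x10)[0]
    record_number = struct.unpack_from('<I', record, 0x2C)[0]
    return signature, sequence, record_number
```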

NTFS Documentation:

One of the best sources for NTFS documentation isn’t Microsoft, it comes from the Linux NTFS developers and is available here.

Categories: Software Tools, Writing code

Duplicating forensic images by splitting a RAID1

January 31, 2010

It is considered very good practice to make two copies of any image collected, particularly in the field. On one very long collection trip we did this by collecting to one set of drives during the day and running Robocopy overnight to duplicate the image set. FTK allows writing to two destinations, and the various versions of dd have always allowed this via one means or another. But these all require either time or precious IO bandwidth.

So, I thought, is there any way to create two images in real time without pushing the data down the pipe twice? Isn’t that what RAID1 is supposed to provide? But are two drives in a hardware RAID1 *really* identical? Turns out that, at least in my test case, they are.

I bought a vAGE220-SAU two-drive, USB 2.0/eSATA, RAID0/1 external enclosure ($275 @ Amazon). It’s fairly well constructed, compact, and easy to use. The instructions weren’t clearly translated but were sufficient unto the task. Once I flipped the DIP switches correctly and waited a few hours for it to do the initial mirroring, I was good to go.

I hooked my source drive up to one port on my field laptop’s eSATA card and the RAID enclosure up to the other one. I fired off FTK (but dd, or EnCase, or whatever would have done just as well), imaged the drive, and it ran at near expected speeds. The process finished and the image was verified.

Now the test. I pulled both drives and hashed them via a writeblocker. The hashes matched. I had two identical, forensically sound images of my source drive. This required less time than imaging to two destinations using the hardware available on my field laptop, and a lot less time than running a copy overnight.
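For anyone reproducing the verification step, the per-drive hashing amounts to a chunked read over the raw device. A minimal Python sketch (the path argument is a placeholder; I used my usual hashing tools behind a writeblocker for the real run):

```python
import hashlib

def hash_device(path, algo="md5", chunk=1 << 20):
    """Hash a raw device node or image file in 1 MB chunks.

    Reading in chunks keeps memory use flat no matter how
    large the drive is.
    """
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()
```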

I need to try this a few more times and do some more performance measurements, but I’m pretty happy with the outcome. I wish there were a drop-in drive dock with RAID1 capability. That would eliminate the need to open the enclosure up when changing disks.

