Archive for November, 2010

Testing acquisition tools

November 18, 2010

Lee was researching software acquisition tools and made some interesting findings. One of my first thoughts was “Why?” No, not why was he doing the sort of research that we all should be doing, but why was there such a big difference between FTK Imager and the other tools? The system? FireWire? The disks (source and target)? Compression? The data on the disk combined with compression? I don’t think I can test all the options, but I’d like to contribute some additional research.

Unfortunately, I do not have a Tableau eSATA writeblocker so I cannot include TIM in this research. (While I understand the benefits of tight integration between software and hardware, particularly in this case, it would be nice if TIM had a “degraded mode” that worked with non-Tableau writeblockers.) So here’s my test environment:

Test components:

  • System: Dell Precision 690, dual 64-bit Xeon CPUs, 8GB RAM, 64-bit Windows 7, internal RAID 5
  • Drives: Western Digital VelociRaptor 160GB drives, one wiped with “00” and one formatted and cloned with real-world data
  • Writeblocker: WiebeTech UltraDock via eSATA interface to drive and to system
  • Tools: FTK Imager and EnCase

To see what the various tools might do with different types of data on the test disks, I wiped two disks, one with a pattern of “00”, and one with standard Windows formatting and then added real world data to it. These are listed as 00 and RW in the results.

All tests were conducted using a WiebeTech UltraDock connected via eSATA to both the drive and the system.

All tests used 1500MB chunks, MD5 hashes, and had verification turned off.

Imaging times:

Drive 00

Tool         dd        E01, no compression   E01, full compression
FTK Imager   0:34:58   0:34:20               0:38:36
EnCase       n/a       0:31:05               0:32:18

Drive RW

Tool         dd        E01, no compression   E01, full compression
FTK Imager   0:30:43   0:33:56               1:48:23
EnCase       n/a       0:31:04               1:18:12
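As a rough sanity check on the times above, here is a quick sketch of effective throughput, assuming the full 160GB drive (taken here as 160,000 MB) was read on every run:

```python
# Effective imaging throughput (MB/s) for selected runs from the tables above.
# Assumes the whole 160GB (~160,000 MB) drive was read each time.

def throughput_mb_s(hms: str, size_mb: int = 160_000) -> float:
    """Convert an elapsed time like '0:30:43' into MB/s for size_mb of data."""
    h, m, s = (int(x) for x in hms.split(":"))
    return size_mb / (h * 3600 + m * 60 + s)

for label, elapsed in [
    ("FTK Imager, dd, 00 drive", "0:34:58"),
    ("FTK Imager, dd, RW drive", "0:30:43"),
    ("EnCase, E01 no compression, RW drive", "0:31:04"),
]:
    print(f"{label}: {throughput_mb_s(elapsed):.0f} MB/s")
```

The uncompressed runs all land in the same neighborhood, which supports the idea that the IO path, not the tool, is the limiting factor there.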

Compression results (size of resulting images):

FTK, full E01 compression, ’00’ drive – 262MB
EnCase, full E01 compression, ’00’ drive – 524MB

FTK, full E01 compression, real world drive – 61.5GB
EnCase, full E01 compression, real world drive – 62.3GB
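Put another way, the compression ratios implied by those image sizes (relative to the 160GB source, with 1GB taken as 1,000 MB for this back-of-the-envelope sketch):

```python
# Compression ratios implied by the image sizes above, relative to a
# 160GB (~160,000 MB) source drive.

drive_mb = 160_000
images = {
    "FTK, full E01, 00 drive": 262,
    "EnCase, full E01, 00 drive": 524,
    "FTK, full E01, RW drive": 61_500,
    "EnCase, full E01, RW drive": 62_300,
}

for label, size_mb in images.items():
    print(f"{label}: {drive_mb / size_mb:.1f}:1")
```

The all-zero drive compresses by orders of magnitude either way; on real-world data both tools end up around 2.6:1.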


Please bear in mind that this is a really limited data set, ok? As I write this, I’m imagining all the comments of the form “But if you did X, then ….” My suggestion to those people is “Why don’t you try doing X and let us know how it turns out?”

Conclusions:

  1. Using compression will add significantly to your imaging times.
  2. EnCase’s E01 compression is faster than FTK Imager’s.
  3. Both tools compress real-world data about equally well.
  4. If compression is turned on, the randomness of the data on the drive affects the time required to image the drive.


Recommendations:

  1. There are a lot of variables that affect imaging speed – drive, drive interface, write blocker, write blocker interface, system IO bus, CPU type and speed, and target drive, to name the big ones. If you’re looking for performance, you can’t control the drive characteristics but you can invest in the other components. If you’re not using compression, the biggest bottleneck will be your IO bus, so go with eSATA whenever possible.
  2. If you’re imaging for archival purposes, compressing while imaging makes sense. Otherwise, consider leaving the image uncompressed until you want to archive it.

Further research:

  1. If I had more time and hardware resources, I’d love to rerun these tests while adjusting each of the variables identified above.
Categories: Computer forensics

Finding funding for computer forensics tools, and eating crow

November 18, 2010

In February, I wrote a post entitled “The High Cost of Computer Forensics Software – Your Tax Dollars not at Work”. While I am still frustrated that the company in question chose not to release the enhanced open source software, I am much more aware of the issues involved in getting funding for the development of computer forensic tools. One source of funding is, of course, the US Government in the form of SBIRs, STTRs, and BAAs.

For a brief primer from a school’s department that assists with submitting proposals, check out this link.

Quoting from that document:

SBIR (from “A Brief SBIR/STTR BAA Primer”)
• Schedule: one to three announcements per year
• Phase I: $75K – $100K (or more) award + options; 6 months duration; feasibility study; can subcontract up to 33.3%
• Phase II: $750K award (typical); 18 – 24 months
• Phase III: unfunded commercialization

STTR
• Same award value
• Prime must perform at least 40% of the work
• Research partner must perform at least 30% of the work
• A maximum of 60% can be subcontracted
• Small business must submit
• Much smaller funding pool

BAA – Broad Agency Announcements
• A description of needed research and technology
• For projects not supported by current programs
• Initiated by a white paper
• Funding not always available!
• Award amounts typically $600K – $850K

Some relevant points from my own experiences with these mechanisms:

  • Long lag between proposal submission and funding
  • Highly structured proposal format (which is a plus in my book)
  • No commercial restrictions on products developed with the funding
  • Must give product to government for free. (They paid for it with the funding.)

The last bullet point is the source of my crow lunch. Funding comes with strings attached, and if you want to get a product to market, you need to make some compromises.

So if you’re looking for funding for computer forensics products, you might want to keep an eye on the SBIRs and BAAs. Go read up on the requirements and proposal formats. Think about possible partners that will add value to your proposal. Plan ahead.

Not a sure thing, but a possibility, and there are other similar programs out there.

Categories: Computer forensics

New version of analyzeMFT

November 17, 2010

I’ve been awfully busy with real work, but thanks to the gentle prodding of some interested parties, I updated analyzeMFT over the past few weeks.

  • Version 1.5:
    • Fixed date/time reporting. I wasn’t reporting microseconds at all.
    • Added anomaly detection, with many thanks to Greg Kelley. Adds two columns:
      • std-fn-shift: If Y, the entry’s $FILE_NAME (FN) create time is after its $STANDARD_INFORMATION (STD) create time
      • usec-zero: If Y, the entry’s STD create time has a microseconds value of zero
  • Version 1.6: Various bug fixes
  • Version 1.7: Bodyfile support, with thanks to Dave Hull
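The two anomaly checks boil down to simple timestamp comparisons. A minimal sketch, where the field names and signatures are illustrative rather than analyzeMFT’s actual code:

```python
from datetime import datetime

# Sketch of the two anomaly checks described above. The functions and
# parameter names here are illustrative, not analyzeMFT's real API.

def std_fn_shift(std_create: datetime, fn_create: datetime) -> str:
    # Flag records where the $FILE_NAME create time is after the
    # $STANDARD_INFORMATION create time.
    return "Y" if fn_create > std_create else "N"

def usec_zero(std_create: datetime) -> str:
    # Flag records where the STD create time's microseconds are exactly
    # zero, which some timestamp-manipulation tools leave behind.
    return "Y" if std_create.microsecond == 0 else "N"
```

Both checks produce false positives on their own, which is why combining them (next section) matters.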

The anomaly detection isn’t perfect by any stretch of the imagination; it simply helps reduce the noise a bit.

  • On the $MFT from a volume on a workstation with 110,593 total records, checking for FN creation times greater than STD creation times resulted in 19,649 flagged records. Pretty significant reduction.
  • On the same file, checking to see if the STD creation time microseconds are zero resulted in 14,571 flagged records.
  • Turning both on resulted in 2,157 flagged records. Most appear to be benign. (I hope they all are!)

That’s still 2,157 (or 19,649, or 14,571) files that you need to check by other means, but it is a lot less than 110,593.
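In percentage terms, the noise reduction from the counts above looks like this (a trivial sketch, using the numbers as reported):

```python
# Noise reduction from the flagged-record counts reported above.
total = 110_593
counts = {"std-fn-shift": 19_649, "usec-zero": 14_571, "both": 2_157}

for label, flagged in counts.items():
    print(f"{label}: {flagged:,} flagged ({flagged / total:.1%} of all records)")
```

Requiring both flags cuts the candidate set to about 2% of the MFT.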

If there’s some feature you’d like to see in analyzeMFT, please, do drop me a note.

You can find the source and more details here….

There’s also a great post on how to install Python and run analyzeMFT’s source code here….

Categories: analyzeMFT