Dcfldd

From ForensicsWiki
{{Infobox_Software |
  name = dcfldd |
  maintainer = [[Nick Harbour]] |
  os = {{Linux}}, {{Windows}} |
  genre = {{Disk imaging}} |
  license = {{GPL}} |
  website = [http://dcfldd.sourceforge.net/ dcfldd.sf.net] |
}}
  

'''dcfldd''' is an enhanced version of [[dd]] developed by the U.S. Department of [[Defense Computer Forensics Lab]]. It has some useful features for forensic [[investigator]]s, such as:
* On-the-fly [[hash]]ing of the transmitted data.
* Progress bar of how much data has already been sent.
* Wiping of disks with known patterns.
* Verification that the image is identical to the original drive, bit-for-bit.
* Simultaneous output to more than one file/disk.
* The output can be split into multiple files.
* Logs and data can be piped into external applications.
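The wipe-and-verify idea from the list above can be sketched with plain GNU coreutils. This is an illustration only, not dcfldd's own invocation: a throwaway file stands in for the target drive so the sketch is safe to run, whereas a real wipe would write to a block device.

```shell
#!/bin/sh
# Sketch of a wipe-and-verify pass. A temporary file stands in for the
# target drive so this is safe to run; a real wipe would target a
# block device and could use dcfldd's known-pattern wiping instead.
tmp=$(mktemp)
dd if=/dev/urandom of="$tmp" bs=1024 count=64 2>/dev/null   # "dirty" data
dd if=/dev/zero    of="$tmp" bs=1024 count=64 2>/dev/null   # wipe with zeros
# Verify the wipe: the target must now contain only 0x00 bytes.
wiped=no
head -c 65536 /dev/zero | cmp -s - "$tmp" && wiped=yes
echo "wipe verified: $wiped"
rm -f "$tmp"
```

The verification step is the point: a wipe is only trustworthy once the target has been read back and compared against the expected pattern.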

The program only produces [[raw image file|raw image files]].

== Example ==

'''Unix/Linux'''

 dcfldd if=/dev/sourcedrive hash=md5,sha256 hashwindow=10G md5log=md5.txt sha256log=sha256.txt \
        hashconv=after bs=512 conv=noerror,sync split=10G splitformat=aa of=driveimage.dd

This command reads the source drive and writes the first ten gigabytes to a file called driveimage.dd.aa, the next ten gigabytes to driveimage.dd.ab, and so on. For each ten-gigabyte chunk it also calculates the [[MD5]] and SHA-256 hashes; the MD5 hashes are stored in a file called md5.txt and the SHA-256 hashes in sha256.txt. The block size for the transfer is set to 512 bytes, and in the event of a read error, dcfldd writes zeros in place of the unreadable block.
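The split output of such an acquisition can be checked afterwards with ordinary coreutils: concatenating the segments in suffix order must reproduce the source data, hash for hash. A minimal sketch, with small throwaway files standing in for the 10 GB segments and the source drive:

```shell
#!/bin/sh
# Sketch: segments produced by a split acquisition reassemble to the
# original, byte for byte. Names and sizes are illustrative; a real
# dcfldd run would produce driveimage.dd.aa, driveimage.dd.ab, ...
tmp=$(mktemp -d)
dd if=/dev/urandom of="$tmp/source.img" bs=1024 count=1024 2>/dev/null
# 256 KiB segments with aa, ab, ... suffixes, as splitformat=aa gives.
split -b 262144 "$tmp/source.img" "$tmp/driveimage.dd."
orig=$(md5sum < "$tmp/source.img" | awk '{print $1}')
reasm=$(cat "$tmp"/driveimage.dd.* | md5sum | awk '{print $1}')
echo "original:    $orig"
echo "reassembled: $reasm"
rm -rf "$tmp"
```

Shell globbing expands `driveimage.dd.*` in lexicographic order, which matches the aa, ab, ... suffix order, so a plain `cat` reassembles the segments correctly.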
 
'''Windows'''

While a Windows executable of dcfldd exists, it can be difficult to use. A PowerShell script that helps newcomers get started is available [https://github.com/Linuxuser437442/powershell-dcfldd here].
 
== Precautions ==

This tool is not suitable for imaging faulty drives:

* dcfldd is based on an extremely old version of [[dd]]: it is known that dcfldd will misalign the data in the image after a faulty sector is encountered on the source drive ([https://www.cyberfetch.org/groups/community/test-results-digital-data-acquisition-tool-dcfldd-134-1 see the NIST report]), and this kind of bug (a wrong offset calculation when seeking over a bad block) was fixed in [[dd]] in 2003 ([http://lists.gnu.org/archive/html/bug-coreutils/2003-10/msg00071.html see the fix in the mailing list]);
* similarly, dcfldd can enter an infinite loop when a faulty sector is encountered on the source drive, thus writing to the image over and over again until there is no free space left.
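The zero-fill mechanism behind `conv=noerror,sync` can be seen with plain GNU dd: a short read is padded with zeros up to the block size, which is the same padding applied in place of an unreadable sector. It is also why a buggy offset calculation after a bad block shifts all subsequent data. A small sketch with a throwaway file:

```shell
#!/bin/sh
# Sketch: conv=sync pads a short read out to the full block size with
# zeros -- the same padding applied (with conv=noerror,sync) in place
# of unreadable sectors during an acquisition.
tmp=$(mktemp)
head -c 1000 /dev/urandom > "$tmp"          # 1000 bytes: not a multiple of 512
dd if="$tmp" of="$tmp.out" bs=512 conv=sync 2>/dev/null
size=$(wc -c < "$tmp.out")                  # 2 blocks of 512 = 1024 bytes
echo "input 1000 bytes -> output $size bytes"
rm -f "$tmp" "$tmp.out"
```

The second (short) input block is padded from 488 to 512 bytes; on a faulty drive every skipped sector must be replaced by exactly one such zero block, or everything after it ends up at the wrong offset in the image.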

== See Also ==

* [[dc3dd]]

Latest revision as of 13:06, 14 March 2015