{{expand}}
  
Disk imaging is the process of making a bit-by-bit copy of a disk. Imaging (in more general terms) can apply to anything that can be considered a bit-stream, e.g. physical or logical volumes, network streams, etc.
  
The most straightforward disk imaging method is reading a disk from start to end and writing the data to a [[:Category:Forensics_File_Formats|Forensics image format]].
This can be a time-consuming process, especially for disks with a large capacity.
  
The process of disk imaging is also referred to as disk duplication.
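As a minimal sketch of the straightforward method described above (not any particular tool's implementation), the following Python fragment reads a source device from start to end in fixed-size chunks and writes them to a raw (dd-style) image file. The device path, image name and chunk size are assumptions made for the example.

<source lang="python">
import sys

CHUNK_SIZE = 1024 * 1024  # read in 1 MiB chunks to keep memory use low


def image_disk(source_device, image_path):
    """Read a disk from start to end and write the bytes to a raw image file."""
    with open(source_device, "rb") as src, open(image_path, "wb") as dst:
        while True:
            chunk = src.read(CHUNK_SIZE)
            if not chunk:  # end of the device reached
                break
            dst.write(chunk)


if __name__ == "__main__":
    # Example invocation; "/dev/sdb" and "evidence.dd" are placeholders.
    image_disk(sys.argv[1] if len(sys.argv) > 1 else "/dev/sdb", "evidence.dd")
</source>

A real imaging tool additionally handles read errors (e.g. bad sectors), records acquisition metadata, and typically computes integrity hashes during the same pass (see the Integrity section below).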
  
== Disk Imaging Solutions ==
See: [[:Category:Disk Imaging|Disk Imaging Solutions]]
  
== Common practice ==
It is common practice to use a [[Write Blockers|Write Blocker]] when imaging a physical disk. The write blocker is an additional measure to prevent write access to the disk.
  
Also see: [[DCO and HPA|Device Configuration Overlay (DCO) and Host Protected Area (HPA)]]
  
== Integrity ==
Often when creating a disk image, a [http://en.wikipedia.org/wiki/Cryptographic_hash_function cryptographic hash] of the entire disk is calculated. Commonly used cryptographic hashes are MD5, SHA1 and/or SHA256.
  
By recalculating the integrity hash at a later time, one can determine if the data in the disk image has been changed. This by itself provides no protection against intentional tampering, but can indicate that the data was altered, e.g. due to corruption. The integrity hash does not indicate where in the data the alteration occurred. Therefore, some imaging tools and/or formats provide additional integrity checks, such as the following (a minimal sketch of combining an overall hash with piecewise hashes follows the list):
* A checksum
* Parity data
* [[Piecewise hashing]]
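As a minimal sketch of the idea (block size and hash algorithms are assumptions for the example), the overall hash and per-block piecewise hashes can be computed in a single pass over the data:

<source lang="python">
import hashlib

BLOCK_SIZE = 1024 * 1024  # granularity of the piecewise hashes (assumption)


def hash_image(image_path):
    """Return the overall MD5 and SHA-256 of the data plus a per-block SHA-256 list."""
    overall_md5 = hashlib.md5()
    overall_sha256 = hashlib.sha256()
    piecewise = []  # one hash per block; lets a later check localize an alteration
    with open(image_path, "rb") as src:
        while True:
            block = src.read(BLOCK_SIZE)
            if not block:
                break
            overall_md5.update(block)
            overall_sha256.update(block)
            piecewise.append(hashlib.sha256(block).hexdigest())
    return overall_md5.hexdigest(), overall_sha256.hexdigest(), piecewise
</source>

Recomputing the per-block hashes later narrows an alteration down to a specific block, which a single hash over the entire image cannot do.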
== Smart imaging ==
Smart imaging is a combination of techniques that make the imaging process more intelligent:
* Compressed storage
* Deduplication
* Selective imaging
* Decryption while imaging
=== Compressed storage ===
A common technique to reduce the size of an image file is to compress the data, using a [http://en.wikipedia.org/wiki/Lossless_data_compression lossless] compression method.
On modern computers with multiple cores, the compression can be done in parallel, reducing the output size without prolonging the imaging process.
Since the write speed of the target disk can be a bottleneck in the imaging process, parallel compression can reduce the total time of the imaging process.
[[Guymager]] was one of the first imaging tools to implement the concept of multi-process compression for the [[Encase image file format]]. This technique is now used by various imaging tools including [http://www.tableau.com/index.php?pageid=products&model=TSW-TIM Tableau Imager (TIM)].
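A minimal sketch of parallel chunk compression using only the Python standard library (zlib and a process pool). The chunk size, compression level and the simple length-prefixed output layout are assumptions for the example; real formats such as the [[Encase image file format]] define their own chunking.

<source lang="python">
import zlib
from concurrent.futures import ProcessPoolExecutor

CHUNK_SIZE = 4 * 1024 * 1024  # per-chunk compression unit (assumption)


def compress_chunk(chunk):
    return zlib.compress(chunk, 6)  # lossless DEFLATE compression


def read_chunks(source_path):
    with open(source_path, "rb") as src:
        while True:
            chunk = src.read(CHUNK_SIZE)
            if not chunk:
                break
            yield chunk


def image_compressed(source_path, image_path):
    """Compress chunks on multiple cores while preserving their original order."""
    with ProcessPoolExecutor() as pool, open(image_path, "wb") as dst:
        # map() yields results in input order, so the image stays sequential.
        # Note: map() also submits all chunks up front; a real tool would bound
        # the number of chunks that are in flight at any one time.
        for compressed in pool.map(compress_chunk, read_chunks(source_path)):
            dst.write(len(compressed).to_bytes(8, "little"))  # simple length prefix
            dst.write(compressed)
</source>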
Other techniques, such as storing the data sparse and using '''empty-block compression''' or '''pattern fill''', can reduce the total time of the imaging process and the resulting image size for new, non-encrypted (0-byte filled) disks.
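The 0-byte filled case can be sketched as follows: blocks consisting entirely of zero bytes are not written but seeked over, which produces a sparse image file on most file systems. The block size is an assumption for the example.

<source lang="python">
BLOCK_SIZE = 64 * 1024
ZERO_BLOCK = bytes(BLOCK_SIZE)  # a block of 0-bytes to compare against


def image_sparse(source_path, image_path):
    """Skip writing all-zero blocks; seeking past them leaves holes in the image."""
    with open(source_path, "rb") as src, open(image_path, "wb") as dst:
        while True:
            block = src.read(BLOCK_SIZE)
            if not block:
                break
            if block == ZERO_BLOCK:
                dst.seek(len(block), 1)  # leave a hole instead of writing zeros
            else:
                dst.write(block)
        dst.truncate()  # extend the file so it ends at the full source size
</source>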
=== Deduplication ===
Deduplication is the process of detecting data that occurs more than once on-disk and storing it only once in the image.
It is even possible to store the data only once for an entire corpus of images, using techniques like hash-based imaging.
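A toy sketch of block-level deduplication: each distinct block is stored only once, keyed by its SHA-256 hash, and the image itself becomes an ordered list of hash references. The block size is an assumption; hash-based formats such as AFF4 define their own layout.

<source lang="python">
import hashlib

BLOCK_SIZE = 64 * 1024  # deduplication granularity (assumption)


def deduplicate(source_path):
    """Split the input into blocks, storing each distinct block only once."""
    block_store = {}  # hash -> block data
    block_map = []    # ordered hash references that reconstruct the byte stream
    with open(source_path, "rb") as src:
        while True:
            block = src.read(BLOCK_SIZE)
            if not block:
                break
            digest = hashlib.sha256(block).hexdigest()
            block_store.setdefault(digest, block)
            block_map.append(digest)
    return block_store, block_map


def reconstruct(block_store, block_map):
    """Rebuild the original byte stream from the deduplicated representation."""
    return b"".join(block_store[digest] for digest in block_map)
</source>

Sharing the block store across several images is the basic idea behind storing data only once for a whole corpus.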
=== Selective imaging ===
Selective imaging is a technique that copies only certain information from a disk, e.g. the $MFT on an [[NTFS]] volume, together with the necessary contextual information.
 
[[EnCase]] Logical Evidence Format (LEF) is an example of a selective image format, although [[EnCase]] only stores file-related contextual information in it.
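As a highly simplified sketch (not the LEF format itself): copy a chosen byte range from the source and record where it came from in a small metadata sidecar, so the extract keeps its context. The metadata fields used here are illustrative assumptions.

<source lang="python">
import hashlib
import json


def selective_copy(source_path, offset, length, data_path, meta_path):
    """Copy one byte range and write a JSON sidecar with its contextual information."""
    with open(source_path, "rb") as src:
        src.seek(offset)
        data = src.read(length)
    with open(data_path, "wb") as out:
        out.write(data)
    metadata = {
        "source": source_path,  # where the data was taken from
        "offset": offset,       # byte offset within the source
        "length": len(data),    # number of bytes actually copied
        "sha256": hashlib.sha256(data).hexdigest(),  # integrity hash of the extract
    }
    with open(meta_path, "w") as meta:
        json.dump(metadata, meta, indent=2)
</source>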
=== Decryption while imaging ===
Encrypted data is a worst-case scenario for compression. Because the encryption process should be deterministic, a solution to reduce the size of an image of an encrypted disk is to decrypt the data while imaging and store it non-encrypted and compressed, re-encrypting it on-the-fly if required. Re-encryption should rarely be needed, since the non-encrypted data is what undergoes analysis.
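The compression penalty on encrypted data is easy to demonstrate; in the snippet below, random bytes stand in for ciphertext and a repetitive buffer stands in for easily compressible plaintext:

<source lang="python">
import os
import zlib

SIZE = 1024 * 1024  # 1 MiB of sample data

ciphertext_like = os.urandom(SIZE)  # high-entropy data, like encrypted sectors
plaintext_like = b"A" * SIZE        # highly repetitive, easily compressible data

print(len(zlib.compress(ciphertext_like)))  # roughly SIZE: no meaningful reduction
print(len(zlib.compress(plaintext_like)))   # a few KiB: compresses extremely well
</source>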
== Also see ==
* [[:Category:Forensics_File_Formats|Forensics File Formats]]
* [[Write Blockers]]
* [[Piecewise hashing]]
* [[Memory Imaging]]
== External Links ==
* [http://www.tableau.com/pdf/en/Tableau_Forensic_Disk_Perf.pdf Benchmarking Hard Disk Duplication Performance in Forensic Applications], by [[Robert Botchek]]
=== Hash based imaging ===
* [http://www.dfrws.org/2010/proceedings/2010-314.pdf Hash based disk imaging using AFF4], by [[Michael Cohen]], [[Bradley Schatz]]
[[Category:Disk Imaging]]
