''Archived past selected research articles''
 
  
<small>2008-July-20</small>
 
The [http://www.utica.edu/academic/institutes/ecii/ijde/ International Journal of Digital Evidence] is one of two publications by the [http://www.utica.edu/academic/institutes/ecii/ Economic Crime Institute (ECI)] at Utica College. Current and previous issues are available online.
 
 
The current Fall 2007 issue has an interesting article, [http://www.utica.edu/academic/institutes/ecii/publications/articles/1C33DF76-D8D3-EFF5-47AE3681FD948D68.pdf Mobile Phone Forensics Tool Testing: A Database Driven Approach], by Baggili, Mislan, and Rogers at Purdue University. Given that phones are increasingly a primary source of forensic information in many cases, we need to be sure that the tools used for forensic analysis present data that is accurate and repeatable. Unfortunately, they frequently do not: there are so many different kinds of phones on the market that the forensic tools lag far behind.
 
 
<bibtex>
@article{baggili2007,
  title="Mobile Phone Forensics Tool Testing: A Database Driven Approach",
  author="Ibrahim M. Baggili and Richard Mislan and Marcus Rogers",
  journal="International Journal of Digital Evidence",
  year=2007,
  volume=6,
  issue=2,
  url="http://www.utica.edu/academic/institutes/ecii/publications/articles/1C33DF76-D8D3-EFF5-47AE3681FD948D68.pdf",
  abstract="The Daubert process used in the admissibility of evidence contains major guidelines applied in assessing forensic procedures, two of which are testing and error rates. The Digital Forensic Science (DFS) community is growing and the error rates for the forensic tools need to be continuously re-evaluated as the technology changes. This becomes more difficult in the case of mobile phone forensics, because they are proprietary. This paper discusses a database driven approach that could be used to store data about the mobile phone evidence acquisition testing process. This data can then be used to calculate tool error rates, which can be published and used to validate or invalidate the mobile phone acquisition tools."
}
</bibtex>
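
To make the idea concrete, here is a minimal sketch of what a database-driven error-rate calculation could look like. The table, column, and tool names are illustrative assumptions, not the schema from the paper; the point is simply that once every acquisition test case is recorded, per-tool error rates fall out of a short query.

<pre>
# Hypothetical sketch of a database-driven tool test (not the paper's schema).
# Each row records one acquisition test case; the query derives an error rate.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE acquisition_test (
        tool     TEXT,   -- forensic tool under test
        phone    TEXT,   -- handset model
        artifact TEXT,   -- e.g. 'SMS', 'contact', 'call log'
        expected TEXT,   -- value seeded on the phone
        acquired TEXT    -- value the tool actually reported
    )""")
conn.executemany(
    "INSERT INTO acquisition_test VALUES (?, ?, ?, ?, ?)",
    [
        ("ToolA", "Nokia 6230", "SMS", "hello", "hello"),
        ("ToolA", "Nokia 6230", "contact", "Bob", None),   # missed artifact
        ("ToolA", "Motorola RAZR", "call log", "555-0100", "555-0100"),
    ],
)

# Error rate = fraction of test cases where the acquired value does not
# match the seeded value (a missing value counts as an error).
for tool, err in conn.execute("""
    SELECT tool,
           AVG(CASE WHEN acquired IS expected THEN 0.0 ELSE 1.0 END)
      FROM acquisition_test
     GROUP BY tool"""):
    print(f"{tool}: observed error rate {err:.2f}")   # ToolA: 0.33
</pre>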
 
 
 
<small>2008-July-12</small>
 
 
<bibtex>
@article{pal2008,
  publisher="DFRWS 2008",
  author="Anandabrata Pal and Taha Sencar and Nasir Memon",
  title="Detecting File Fragmentation Point Using Sequential Hypothesis Testing",
  year=2008,
  abstract="File carving is a technique whereby data files are extracted from a digital device without the assistance of file tables or other disk meta-data. One of the primary challenges in file carving can be found in attempting to recover files that are fragmented. In this paper, we show how detecting the point of fragmentation of a file can benefit fragmented file recovery. We then present a sequential hypothesis testing procedure to identify the fragmentation point of a file by sequentially comparing adjacent pairs of blocks from the starting block of a file until the fragmentation point is reached. By utilizing serial analysis we are able to minimize the errors in detecting the fragmentation points. The performance results obtained from the fragmented test-sets of DFRWS 2006 and 2007 show that the method can be effectively used in recovery of fragmented files.",
  url="http://www.digital-assembly.com/technology/research/pubs/dfrws2008.pdf"
}
</bibtex>
 
 
This DFRWS 2008 article presents an improved approach for carving fragmented JPEGs using sequential hypothesis testing. According to the authors, "The technique begins with a header block identifying the start of a file and then attempts to validate via SHT each subsequent block following the header block. The fragmentation point is identified when SHT identifies a block as not belonging to the file. By utilizing this technique, we are able to correctly and efficiently recover JPEG images from the DFRWS 2006 [1] and 2007 [2] test sets even in the presence of tens of thousands of blocks and files fragmented into 3 or more parts. The bifragment gap carving technique enhanced with SHT allows us to improve the performance result of DFRWS 2006 challenge test-sets, although the technique cannot be used for DFRWS 2007. We then show how Parallel Unique Path enhanced with SHT is able to recover all fragmented JPEGs from DFRWS 2006 and all recoverable JPEGs from 2007 challenge test-sets. As far as we are aware, no other automated technique can recover multi-fragmented JPEGs from the DFRWS 2007 test set."
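
As a rough illustration of the sequential testing idea (not the authors' actual decision statistic, which is derived from block content), here is a sketch built on Wald's sequential probability ratio test. The block matcher, the match probabilities under the two hypotheses, and the toy data are all assumptions for the example.

<pre>
# Sketch of fragmentation-point detection via a sequential probability
# ratio test (SPRT). H0: adjacent blocks belong to the same file;
# H1: the file is fragmented here. Probabilities are illustrative.
import math

def fragmentation_point(blocks, matches, p_joined=0.9, p_fragmented=0.2,
                        alpha=0.05, beta=0.05):
    upper = math.log((1 - beta) / alpha)   # cross this: accept "fragmented"
    lower = math.log(beta / (1 - alpha))   # cross this: accept "still joined"
    llr = 0.0
    for i in range(len(blocks) - 1):
        # Compare each adjacent pair of blocks from the start of the file.
        if matches(blocks[i], blocks[i + 1]):
            llr += math.log(p_fragmented / p_joined)              # evidence for H0
        else:
            llr += math.log((1 - p_fragmented) / (1 - p_joined))  # evidence for H1
        if llr >= upper:
            return i + 1       # fragmentation point detected
        if llr <= lower:
            llr = 0.0          # blocks judged joined; restart the test
    return None                # no fragmentation detected

# Toy data: the "file" breaks between blocks 2 and 3. The crude matcher
# just checks for shared bytes.
blocks = ["aaaa", "aaaa", "aaaa", "zzzz", "qqqq", "mmmm"]
matcher = lambda a, b: bool(set(a) & set(b))
print(fragmentation_point(blocks, matcher))   # prints 4: flagged one pair
                                              # after the true break
</pre>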
 
 
 
 
 
 
<small>2008-July-5</small>
 
 
<bibtex>
@article{dandass2008,
  publisher="Taylor & Francis",
  journal="Journal of Digital Forensic Practice",
  author="Yoginder Singh Dandass and Nathan Joseph Necaise and Sherry Reede Thomas",
  title="An Empirical Analysis of Disk Sector Hashes for Data Carving",
  year=2008,
  volume=2,
  issue=2,
  pages="95--106",
  abstract="Discovering known illicit material on digital storage devices is an important component of a digital forensic investigation. Using existing data carving techniques and tools, it is typically difficult to recover remaining fragments of deleted illicit files whose file system metadata and file headers have been overwritten by newer files. In such cases, a sector-based scan can be used to locate those sectors whose content matches those of sectors from known illicit files. However, brute-force sector-by-sector comparison is prohibitive in terms of time required. Techniques that compute and compare hash-based signatures of sectors in order to filter out those sectors that do not produce the same signatures as sectors from known illicit files are required for accelerating the process. This article reports the results of a case study in which the hashes for over 528 million sectors extracted from over 433,000 files of different types were analyzed. The hashes were computed using SHA1, MD5, CRC64, and CRC32 algorithms and hash collisions of sectors from JPEG and WAV files to other sectors were recorded. The analysis of the results shows that although MD5 and SHA1 produce no false-positive indications, the occurrence of false positives is relatively low for CRC32 and especially CRC64. Furthermore, the CRC-based algorithms produce considerably smaller hashes than SHA1 and MD5, thereby requiring smaller storage capacities. CRC64 provides a good compromise between number of collisions and storage capacity required for practical implementations of sector-scanning forensic tools.",
  url="http://www.informaworld.com/10.1080/15567280802050436"
}
</bibtex>
 
 
Authors Dandass ''et al.'' analyzed 528 million sectors from 433,630 unique files. They computed the CRC32, CRC64, MD5, and SHA-1 of each sector. Not surprisingly, they find that the MD5 and SHA-1 hashes of two sectors differ whenever the sectors themselves differ. They find 94 CRC64 collisions and 30 million CRC32 collisions. The conclusion is that, if you are searching for a single sector or building a database of single-sector hashes, you are better off building a database of CRC64s: they are easier to store and dramatically faster to calculate than the traditional hash functions, and they are nearly as accurate.
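
A sketch of the sector-scanning workflow the authors evaluate might look like the following. Python's standard library provides CRC32 but not CRC64, so CRC32 stands in for the cheap pre-filter here; the 512-byte sector size is the usual convention, but the file contents and image layout are made up for the example. Candidate hits from the cheap hash are confirmed with a cryptographic hash, since CRC collisions are expected.

<pre>
# Sketch: hash every 512-byte sector of a known file into a lookup set,
# then scan a target image sector by sector. CRC32 stands in for the
# CRC64 pre-filter studied in the article; MD5 confirms candidate hits.
import hashlib
import zlib

SECTOR = 512

def sectors(data):
    for off in range(0, len(data) - SECTOR + 1, SECTOR):
        yield off, data[off:off + SECTOR]

known = b"JFIF..." * 400          # stand-in for a known illicit file
crc_filter = set()                # small, fast pre-filter hashes
md5_confirm = set()               # larger hashes for confirmation
for _, sec in sectors(known):
    crc_filter.add(zlib.crc32(sec))
    md5_confirm.add(hashlib.md5(sec).digest())

# A toy "disk image" with two sectors of the known file buried inside.
image = b"\x00" * 2048 + known[:1024] + b"\xff" * 2048
for off, sec in sectors(image):
    if zlib.crc32(sec) in crc_filter:                 # cheap filter first
        if hashlib.md5(sec).digest() in md5_confirm:  # then confirm
            print(f"known sector at offset {off}")    # offsets 2048, 2560
</pre>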
 
