= Tools:Visualization =

Although not strictly for forensic purposes, '''visualization tools''' such as the ones discussed here can be very useful for visualizing large data sets. As forensic practitioners need to process more and more data, it is likely that some of the techniques implemented by these tools will need to be adopted.
== Open Source ==
=== Visualization Toolkits and Libraries ===
* [http://csbi.sourceforge.net/index.html Graph Interface Library (GINY)] - Java
* [http://www.gravisto.org/ Gravisto: Graph Visualization Toolkit] - An editor and toolkit for developing graph visualization algorithms.
* [http://ivtk.sourceforge.net/ InfoVis Toolkit] - Java, originally developed at [[INRIA]].
* [http://jgrapht.sourceforge.net/ JGraphT] - A Java graph library designed to be simple and extensible.
* [http://www.softwaresecretweapons.com/jspwiki/Wiki.jsp?page=LinguineMaps Linguine Maps] - An open-source Java-based system for visualizing software call maps.
* [http://prefuse.sourceforge.net/ Prefuse] - A Java-based toolkit for building interactive information visualization applications.
* [http://www.gnu.frb.br:8080/rox Rox Graph Theory Framework] - An open-source plug-in framework for graph theory visualization.
* [http://touchgraph.sourceforge.net/ TouchGraph] - Library for building graph-based interfaces.
* [http://www.ssec.wisc.edu/~billh/visad.html#intro VisAD] - A Java component library for interactive and collaborative visualization.
* [http://public.kitware.com/VTK/ The Visualization Toolkit (VTK)] - C++, multi-platform, with interfaces available for Tcl/Tk, Java, and Python. Professional support provided by [http://www.kitware.com/ Kitware].
* [http://zvtm.sourceforge.net/index.html Zoomable Visual Transformation Machine (ZVTM)] - Java. Originally started at Xerox Research Centre Europe.{{nocite}}

===Graph Drawing Applications===
; [http://www.graphviz.org/ Graphviz]
: Originally developed by the [http://public.research.att.com/areas/visualization/ AT&T Information Visualization Group], designed for drawing connected graphs of nodes and edges. Neato is a similar system but does layout based on a spring model. Can produce output as [[PostScript]], [[PNG]], [[GIF]], or as an annotated graph file with the locations of all of the objects, which is ideal for drawing in a GUI. Runs from the command line on [[Unix]], [[Windows]] and [[Mac]], although there is also a [http://www.pixelglow.com/graphviz/ MacOS GUI version].
; [http://graphexploration.cond.org/ Guess: The Graph Exploration System]
: Originally developed at HP, this is a large Jython/Java-based system that you can use for building your own applications. Distributed under the GPL.
; [http://hypergraph.sourceforge.net/ HyperGraph]
: Hyperbolic trees, in Java. Check out the home page. Try clicking on the logo...
; [http://sourceforge.net/projects/ivc/ InfoVis Cyberinfrastructure]
: Another graph drawing system written in Java.
; [https://jdigraph.dev.java.net/ Jdigraph]
: Java Directed Graphs.
; [http://bioinformatics.icmb.utexas.edu/lgl/ Large Graph Layout (LGL)]
: A bioinformatics system from University of Texas. They really mean Large.
; [http://www.opendx.org/ OpenDX]
: Based on [[IBM]]'s Visualization Data Explorer, runs on [[Unix]]/X11/Motif.
; [http://jung.sourceforge.net/ Java Universal Network/Graph Framework (JUNG)]
: Graphing, [[data mining]], [[social network]] analysis, and other stuff.
; [http://web.mit.edu/bshi/Public/nv2d/ NetVis 2D]
: Another graph visualization and layout tool written in Java.
; [http://sourceforge.net/projects/sonia/ Social Network Image Animator (SoNIA)]
: Originally developed at Stanford. Written in Java.
  
; [http://www.informatik.uni-bremen.de/uDrawGraph/en/uDrawGraph/uDrawGraph.html uDrawGraph]
; [http://www.wilmascope.org/ WilmaScope]
: Real-time animations of dynamic graph structures. Written in Java. Sophisticated force model with strings and attraction.
; [http://www.caida.org/tools/visualization/walrus/ Walrus]
: A 3-d graph network exploration tool. Employs 3D hyperbolic displays and layout based on a user-supplied spanning tree.
 
== Geographical Drawing Programs ==

; [http://openmap.bbn.com/ OpenMap]
: From [[BBN]].

== Commercial Tools ==

; [http://www.aisee.com/ aiSee Graph Layout Software]
: Supports 15 layout algorithms, recursive graph nesting, and easy printing. Runs on [[Windows]], [[Linux]], [[Solaris]], [[NetBSD]], and [[MacOS]]. 30-day trial and free registered versions available. Academic pricing available.

; [http://www.geomantics.com/ Geomantics]
: Geographical visualization and graphics software. Runs on [[Windows]].

; [http://www.kylebank.com/ Graphis 2D and 3D graphing software]
: Runs on [[Windows]]. Free 30-day evaluation copy available.

; [http://www.openviz.com/ OpenViz] and [http://www.powerviz.com/ PowerViz]
: Both from Advanced Visual Systems, super high-end visualization toolkits. $$$$

; [http://www.tomsawyer.com/ Tom Sawyer Software] - Analysis, Visualization, and Layout programs.
: Heavy support for drawing graphs. Beautiful gallery. ActiveX, Java, C++ and .NET editions.

== Other Resources ==

; [http://www.palgrave-journals.com/ivs/index.html Information Visualization Journal]

; [http://www-static.cc.gatech.edu/gvu/ii/resources/infovis.html GVU's Information Visualization Resources link farm]

; [http://www.msi.umn.edu/user_support/scivis/scivis-list.html Scientific Visualization at the Supercomputing Institute]

; [http://directory.google.com/Top/Science/Math/Combinatorics/Software/Graph_Drawing/ Google Directory of Graph Drawing Software]

; [http://rw4.cs.uni-sb.de/~diehl/softvis/seminar/index.php?goto=seminar ACM Symposium on Software Visualization]
: May give you some ideas.

; [http://directory.fsf.org/science/visual/ GNU Free Software directory of scientific visualization software]

; [http://www.cs.brown.edu/people/rt/gd.html Roberto Tamassia's resources on Graph Drawing]

; [http://www.manageability.org/blog/stuff/open-source-graph-network-visualization-in-java/view Open Source Graph Network Visualization in Java]

= Bulk extractor =

== Overview ==

'''bulk_extractor''' is a computer forensics tool that scans a disk image, a file, or a directory of files and extracts useful information without parsing the file system or file system structures. The results can be easily inspected, parsed, or processed with automated tools. bulk_extractor also creates histograms of the features that it finds, because features that are more common tend to be more important. The program can be used for law enforcement, defense, intelligence, and cyber-investigation applications.

bulk_extractor is distinguished from other forensic tools by its speed and thoroughness. Because it ignores file system structure, bulk_extractor can process different parts of the disk in parallel. In practice, the program splits the disk into 16 MiB pages and processes one page on each available core, so a 24-core machine processes a disk roughly 24 times faster than a single-core machine. bulk_extractor is also thorough: it automatically detects, decompresses, and recursively re-processes data that has been compressed with a variety of algorithms. Our testing has shown that the unallocated regions of file systems contain a significant amount of compressed data that is missed by most forensic tools in common use today.
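The page-parallel design can be sketched in a few lines of Python. This is only an illustration of the idea described above, not bulk_extractor's actual C++ implementation: the 16 MiB page size comes from the text, the email regular expression stands in for the real scanners, and features that straddle a page boundary are simply missed in this sketch.

<pre>
import re
from multiprocessing import Pool

PAGE_SIZE = 16 * 1024 * 1024  # 16 MiB pages, as described above

# A simplified stand-in for bulk_extractor's scanners: find email addresses.
EMAIL_RE = re.compile(rb'[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}')

def scan_page(page):
    """Scan one page of raw bytes; return (absolute offset, feature) pairs."""
    offset, data = page
    return [(offset + m.start(), m.group().decode('ascii', 'replace'))
            for m in EMAIL_RE.finditer(data)]

def pages(path):
    """Yield (offset, data) pages of the image without parsing any file system."""
    with open(path, 'rb') as f:
        offset = 0
        while True:
            data = f.read(PAGE_SIZE)
            if not data:
                break
            yield offset, data
            offset += len(data)

if __name__ == '__main__':
    with Pool() as pool:                      # one worker per available core
        for found in pool.imap(scan_page, pages('image.raw')):
            for offset, feature in found:
                print(f'{offset}\t{feature}')
</pre>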

Another advantage of ignoring file systems is that bulk_extractor can be used to process any digital media. We have used the program to process hard drives, SSDs, optical media, camera cards, cell phones, network packet dumps, and other kinds of digital information.

== Output Feature Files ==

bulk_extractor now creates an output directory that includes:

* '''ccn.txt''' -- Credit card numbers
* '''ccn_track2.txt''' -- Credit card "track 2" information
* '''domain.txt''' -- Internet domains found on the drive, including dotted-quad addresses found in text.
* '''email.txt''' -- Email addresses
* '''ether.txt''' -- Ethernet MAC addresses found through IP packet carving of swap files, compressed system hibernation files, and file fragments.
* '''exif.txt''' -- EXIF records from JPEGs and video segments. This feature file contains all of the EXIF fields, expanded as XML records.
* '''find.txt''' -- The results of specific regular expression search requests.
* '''ip.txt''' -- IP addresses found through IP packet carving.
* '''telephone.txt''' -- US and international telephone numbers.
* '''url.txt''' -- URLs, typically found in browser caches, email messages, and pre-compiled into executables.
* '''url_searches.txt''' -- A histogram of terms used in Internet searches from services such as Google, Bing, and Yahoo.
* '''wordlist.txt''' -- A list of all "words" extracted from the disk, useful for password cracking.
* '''wordlist_*.txt''' -- The wordlist with duplicates removed, formatted so that it can be easily imported into a popular password-cracking program.
* '''zip.txt''' -- Information about every ZIP file component found on the media. This is exceptionally useful because ZIP files contain internal structure, and ZIP is increasingly the compound file format of choice for products such as Microsoft Office.

For each of the above, two additional files may be created:

* '''*_stopped.txt''' -- bulk_extractor supports a stop list: a list of items that do not need to be brought to the user's attention. However, rather than simply suppressing this information, which might cause something critical to be hidden, stopped entries are stored in the stopped files.
* '''*_histogram.txt''' -- bulk_extractor can also create histograms of features. This is important, as experience has shown that email addresses, domain names, URLs, and other information that appear more frequently on a hard drive or in a cell phone's memory can be used to rapidly create a pattern-of-life report.

bulk_extractor also creates a file that captures the provenance of the run:

; report.xml
: A Digital Forensics XML report that includes information about the source media, how the bulk_extractor program was compiled and run, the time to process the digital evidence, and a meta report of the information that was found.
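Feature files are intended to be easy to process with scripts. The following minimal Python sketch assumes the usual feature-file layout of one tab-separated record per line (byte offset, feature, context) with comment lines beginning with '#'; the file name email.txt is just an example. It rebuilds a simple histogram of the kind written to the *_histogram.txt files.

<pre>
from collections import Counter

def read_features(path):
    """Yield (offset, feature) pairs from a bulk_extractor feature file."""
    with open(path, encoding='utf-8', errors='replace') as f:
        for line in f:
            if not line.strip() or line.startswith('#'):
                continue                     # skip blank lines and comments
            fields = line.rstrip('\n').split('\t')
            if len(fields) >= 2:
                yield fields[0], fields[1]   # offset as written, then feature

# Count how often each feature occurs, most common first.
counts = Counter(feature for _, feature in read_features('email.txt'))
for feature, n in counts.most_common(10):
    print(f'n={n}\t{feature}')
</pre>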

== Post-Processing ==

We have developed four programs for post-processing the bulk_extractor output:

; bulk_diff.py
: This program reports the differences between two bulk_extractor runs. The intent is to image a computer, run bulk_extractor on the disk image, let the computer run for a period of time, re-image the computer, run bulk_extractor on the second image, and then report the differences. This can be used to infer the user's activities within a time period.
; cda_tool.py
: This tool, currently under development, reads multiple bulk_extractor reports from runs against multiple drives and performs a multi-drive correlation using Garfinkel's Cross Drive Analysis technique. This can be used to automatically identify new social networks or to identify new members of existing networks.
; identify_filenames.py
: In a bulk_extractor feature file, each feature is annotated with the byte offset from the beginning of the image at which it was found. This program takes as input a bulk_extractor feature file and a DFXML file containing the locations of each file on the drive (produced with Garfinkel's fiwalk program) and produces an annotated feature file that contains the offset, the feature, and the file in which the feature was found.
; make_context_stop_list.py
: Although forensic analysts frequently make "stop lists" (for example, a list of email addresses that appear in the operating system and should therefore be ignored), such lists have a significant problem. Because it is relatively easy to get an email address into the binary of an open source application, ignoring all of these email addresses may make it possible to cloak email addresses from forensic analysis. Our solution is to create context-sensitive stop lists, in which the feature to be stopped is presented with the context in which it occurs. The make_context_stop_list.py program takes the results of multiple bulk_extractor runs and creates a single context-sensitive stop list that can then be used to suppress features when they are found in a specific context. One such stop list, constructed from Windows and Linux operating systems, is available on the bulk_extractor website.
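The core idea behind bulk_diff.py can be reduced to a set difference over matching feature files from two runs. The sketch below is not the actual implementation; it reuses the read_features helper from the sketch above, and the run directories are hypothetical.

<pre>
def feature_set(path):
    """Collect the distinct features from one feature file."""
    return {feature for _, feature in read_features(path)}

before = feature_set('run1/email.txt')    # first image of the machine
after  = feature_set('run2/email.txt')    # image taken after a period of use

for feature in sorted(after - before):    # features that appeared in between
    print('NEW:', feature)
</pre>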

== Download ==

The current version of '''bulk_extractor''' is 1.4.4.

* Downloads are available at: http://digitalcorpora.org/downloads/bulk_extractor/
* A Windows installer with the GUI can be downloaded from: http://www.digitalcorpora.org/downloads/bulk_extractor/bulk_extractor-1.4.1-windowsinstaller.exe
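bulk_extractor can also be driven from scripts. The fragment below is a minimal wrapper, assuming the usual ''bulk_extractor -o output_directory image'' command-line form; the image and output-directory names are placeholders.

<pre>
import subprocess

def run_bulk_extractor(image, output_dir, binary='bulk_extractor'):
    """Run bulk_extractor on an image and raise if it exits with an error."""
    subprocess.run([binary, '-o', output_dir, image], check=True)

run_bulk_extractor('image.raw', 'be_output')
</pre>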

== Bibliography ==

=== Academic Publications ===

# Garfinkel, Simson, [http://simson.net/clips/academic/2013.COSE.bulk_extractor.pdf Digital media triage with bulk data analysis and bulk_extractor], ''Computers and Security'' 32: 56-72 (2013).
# Beverly, Robert, Simson Garfinkel, and Greg Cardwell, [http://simson.net/clips/academic/2011.DFRWS.ipcarving.pdf "Forensic Carving of Network Packets and Associated Data Structures"], DFRWS 2011, Aug. 1-3, 2011, New Orleans, LA. Best Paper Award. (Acceptance rate: 23%, 14/62)
# Garfinkel, Simson, [http://simson.net/clips/academic/2006.DFRWS.pdf Forensic Feature Extraction and Cross-Drive Analysis], The 6th Annual Digital Forensic Research Workshop, Lafayette, Indiana, August 14-16, 2006. (Acceptance rate: 43%, 16/37)

=== YouTube ===

'''[http://www.youtube.com/results?search_query=bulk_extractor Search YouTube] for bulk_extractor videos'''

* [http://www.youtube.com/watch?v=odvDTGA7rYI Simson Garfinkel speaking at CERIAS about bulk_extractor]
* [http://www.youtube.com/watch?v=wTBHM9DeLq4 BackTrack 5 with bulk_extractor]
* [http://www.youtube.com/watch?v=QVfYOvhrugg Ubuntu 12.04 forensics with bulk_extractor]
* [http://www.youtube.com/watch?v=57RWdYhNvq8 Social Network forensics with bulk_extractor]

=== Tutorials ===

# [http://simson.net/ref/2012/2012-08-08%20bulk_extractor%20Tutorial.pdf Using bulk_extractor for digital forensics triage and cross-drive analysis], DFRWS 2012