A Common Scheme For Evaluation of Forensic Software
I. MOTIVATION
Many different applications are used in digital forensics. Even in traditional forensic sciences, such as dactyloscopy, software is used to support the investigation or to enhance the results. In [1], different contact-less latent fingerprint sensors and algorithms, including pre-processing algorithms, are benchmarked for this particular purpose. The validation
and verification of software for safety-relevant applications (as opposed to security-relevant forensic software) during the
development is described by the V-Model [2]. A particular evaluation for forensic software is supported by the NIST Forensic
Software Testing Support Tools in [3]. Therein applications for the acquisition of forensic duplicates of data storage media
are tested. However, only the quality of the copy itself is assessed; neither requirements of the forensic process (see [4]) nor legal requirements (see [5], [6]) are regarded. Additionally, forensic software itself might be subject to attacks that tamper with or steal possible evidence.
Since, to our knowledge, no particular common evaluation approach for forensic software currently exists, we suggest intro-
ducing a common scheme for the evaluation of forensic software. It is designed to address technical and legal aspects of the
utilization of applications used in forensic investigations. Our goal is to ease the selection of an appropriate tool and to support the judge, in his function as a gatekeeper within a Daubert challenge (see [5]), in the decision about the admission of evidence gathered with those applications in court.
Therefore, we introduce a first approach towards a common evaluation scheme for forensic software. It consists of hard-criteria, i.e. fixed properties of a particular version of a particular tool, and soft-criteria, which might change over time.
This paper is structured into six sections. Section II summarizes existing evaluation approaches as well as legal and technical requirements for forensic software. In Section III, threats to forensic software and its usage are enumerated; possible attacker models are derived from them. Section IV introduces the common scheme for the evaluation of forensic software and its formalization. It is utilized in Section V for the exemplary evaluation of existing applications for digital forensics. Subsequently, Section VI provides conclusions and outlines future work.
II. STATE OF THE ART AND FUNDAMENTALS FOR FORENSIC SOFTWARE
This section summarizes current approaches towards the evaluation of forensic (and therefore security-relevant) and safety-
relevant software. Additionally, the legal fundamentals for the US jurisdiction are analyzed to show tendencies for other
countries. Subsequently, a model of the forensic process is summarized, which is used to derive technical requirements for
forensic software.
A. State of the Art
The state of the art of the evaluation of forensic software includes the NIST Software Testing Support Tools [3], which
are used to evaluate forensic duplication tools. They include a set of tools to prepare and analyze tests of such applications.
However, those tools are only used to test the quality of the acquired duplicate from various hard disks or other storage media.
They are used to determine whether the theory or methodology is scientifically valid. However, the judge is not bound to assess
all Daubert factors in every case. Additionally, other factors can be taken into account by the judge. Within the time-frame investigated in [5], the factors shown in Table I are used in different trials by different judges.
TABLE I
FACTORS THAT ARE USED TO ASSESS THE RELIABILITY OF EVIDENCE ACCORDING TO [5]; FACTORS THAT WE CONSIDER AS RELEVANT FOR IT-FORENSICS ARE MARKED IN ITALIC
Table I shows the possible factors identified in [5]. The factors we consider relevant for IT-forensics are marked
in italic. In particular, the factors 1, 2, 4, 5 and 16 are directly applicable for a particular IT-forensic tool. The factors 3, 8, 9,
10, 11 and 15 are implicitly connected to the usage of a particular IT-forensic tool. However, other factors, such as Factor 7
or 17, may additionally apply to the whole investigation.
C. Model of the Forensic Process
The model of the forensic process from Kiltz et al. [4] defines different phases of the investigation. Additionally, forensic
methods are divided into different classes. Furthermore, different types of forensically relevant data are defined. We suggest
using this model for the classification of the evaluated software.
Kiltz et al. define six mutually exclusive phases of the forensic process:
• Strategic preparation SP : This phase includes the preparation of the IT-system for a possible incident by the operator of
an IT-system.
• Operational preparation OP : This phase addresses the preparation for a forensic investigation after a suspected incident.
• Data gathering DG: This phase comprises all steps of securing digital evidence.
• Data investigation DI: During this phase the collected data is evaluated and data is extracted for further investigation
(also excluding data not needed).
• Data analysis DA: During this phase the relevant data is analyzed in detail and correlated.
• Documentation DO: This phase addresses all kinds of documentation during the investigation and the final documentation.
The phase of the documentation is divided into the process accompanying documentation and the final documentation. We
suggest that the process accompanying documentation is supported or entirely performed by the forensic software.
Forensic methods are divided into six mutually exclusive classes by Kiltz et al.:
• Operating system OS: This class contains methods provided by the operating system.
• File system FS: This class contains methods provided by the file system.
• Explicit means of intrusion detection EMID: This class contains methods provided by additional software intended for
the logging and alerting of possible incidents. They are usually installed during the strategic preparation.
• IT-Application ITA: This class contains methods that might support forensic investigations, provided by IT-Applications
that are operated by the user and not intended for forensic investigations.
• Scaling of methods for evidence gathering SMG: This class contains methods to further collect evidence. Such methods can be used if a suspicion is raised; they are not suitable for continuous usage in a production environment.
• Data processing and evaluation DPE: This class contains methods which support the displaying and processing of
gathered data and the documentation.
The data types are layered depending on their level of abstraction from the hardware. However, they are not mutually exclusive. Kiltz et al. [4] define the following data types:
• Hardware data DT1,
• Raw data DT2,
• Details about data DT3,
• Configuration data DT4,
• Communication protocol data DT5,
• Process data DT6,
• Session data DT7,
• User data DT8.
The hardware data contains data that is stored within the hardware and cannot be influenced (or only to a limited extent) by the operating system or applications. Raw data is any data that has not been interpreted yet. The details about data contain any meta-data
that is stored within the data or externally, for example time stamps that are stored for every file on the file-system. All data
that influences the behavior of a system is addressed by the configuration data. If such data influences the communication
behavior, it is classified as communication protocol data. The process data contains any data of running processes, such as
the owner or assigned areas of the system’s memory. All data related to a specific session of a user or process is defined as
session data. Subsequently, all data that is intentionally consumed, modified or created by the user of a particular system is
described as user data.
We use this model in Section IV for the formalization of our evaluation scheme of forensic software.
III. THREATS FOR FORENSIC SOFTWARE
Recently it has been reported that forensic software is vulnerable to potential attacks [12]. Therefore, we define preliminary attacker models for forensic software, which are addressed within our framework in Section IV.
A. Possible attacks on forensic software
The objective of hiding data from forensic investigations is known as Anti-Forensics (see e.g. [13]). Thereby, an attacker tries to prevent compromising data from being found during the investigation. Common methods of Anti-Forensics are the usage of cryptography or steganography or the overwriting of some meta-data. However, these methods are usually passive attacks.
If particular vulnerabilities of common forensic software are known and exploited, a corruption of the evidence or an interruption of the investigation might be caused (see [12]). The authors discover some out-of-bound reading errors within The Sleuthkit1. Within EnCase2, they identify errors that prevent the acquisition of hard disk images if the MBR partition table is corrupt, crashes during the acquisition of corrupted NTFS file-systems, during string searches on corrupted Exchange databases and when viewing deeply nested directories, memory allocation errors caused by corrupted NTFS file-systems, as well as the possibility to hide data if many partitions (more than 25) are present.
B. Attacker models for forensic software
From the described potential attacks we suggest the following preliminary attacker models by integrating the attacker groups
from [14] and the objectives from [15]:
• Corruption of data,
• Stealing of data,
• Corruption of processes.
1 http://www.sleuthkit.org/
2 http://www.guidancesoftware.com/forensic.htm
The corruption of data addresses the alteration and hiding of data. This can occur during both live and post-mortem forensics. Attackers can be hackers, spies, professional criminals or even the investigator (intentionally or unintentionally). The
budget of those attackers varies from a very small budget (e.g. if the attacker is a hacker, trying to break a system just for
fun) to very high (e.g. if the attacker is a spy, who tries to steal valuable data without attracting attention). The particular
knowledge and the objective of the attacker vary, too.
The stealing of data is a possible attacker model if the investigated systems may contain confidential data and the investigation
is performed e.g. by an external service provider. In this case the investigator might steal this data. If the forensic workstation
is connected to the internet, vulnerabilities of the utilized forensic software might allow for a misuse (e.g. by spies). We assume
that the objective is financial or political gain in most cases. Thus, the attacker likely has a high budget and extensive knowledge.
Finally, we suggest the attacker model corruption of processes; it addresses in particular the corruption of processes of the forensic software. This might allow for the execution of malware on the forensic workstation, compromising the results of the investigation. There might be various attackers: hackers, spies, professional criminals or corporate raiders. The objective varies between thrill and actions to cover up a previous attack. The particular knowledge and budget of the attacker vary, too. Such a scenario is possible during live and post-mortem forensics.
IV. A COMMON EVALUATION SCHEME FOR FORENSIC SOFTWARE
In this section we introduce our suggested common approach towards the evaluation of forensic software. We intend to
simplify the decision of the judge in his role as a gatekeeper about the admission of evidence that has been found with a
particular forensic tool. This common scheme for the evaluation of forensic software (COSEFOS) is intended to fulfill different
tasks. The first objective is to investigate the appropriateness of a particular tool for forensic investigations, in particular, its
effectiveness, quality of the result and potential errors. This is basically the objective of prior work. The second objective is
to investigate the integration into the forensic process. This also includes a possible court approval of the gathered data. Table II below illustrates our suggested framework.
TABLE II
COMMON SCHEME FOR EVALUATION OF FORENSIC SOFTWARE (COSEFOS); THE HARD-CRITERIA ARE DISCUSSED IN SECTIONS IV-A TO IV-C, THE SOFT-CRITERIA ARE DISCUSSED IN SECTION IV-D
Hard-Criteria:
• Must-Criteria: Core functionality
• Should-Criteria: Logging; Protection of the integrity of the gathered data; Protection of the authenticity of the gathered data; Protection of the confidentiality of the gathered data; Access restriction for the gathered data; Protection of the integrity of the source data
• Can-Criteria: System heterogeneity; Minimality of required system rights; Open source
Soft-Criteria:
• General acceptance within the expert community; Publication of the method; Standards for the usage of the application; Intention of the investigation; Personal familiarity with the application
Table II shows our suggested scheme for the evaluation of forensic software. We suggest discriminating between hard-
criteria and soft-criteria. The hard-criteria can be determined with particular tests; they are unchangeable for a particular
version of a particular forensic tool. The soft-criteria are subjective; they are not directly measurable. In the case of a trial at
court it is not possible to know which factors the judge evaluates and how he weights them.
We suggest to further divide the hard-criteria into must-criteria, should-criteria and can-criteria. The must-criteria must
be addressed sufficiently by the particular tool. If it does not fulfill the requirements, it is unsuitable for the investigation.
The should-criteria can be fulfilled also by external tools. However, those criteria are required for the forensic investigation.
Subsequently, the can-criteria are motivated by the potential threats for forensic software. In the following Sections IV-A to
IV-C we describe our suggested hard-criteria in detail, the soft-criteria are discussed in Section IV-D.
A. Must-Criteria
We suggest that a tool intended for forensic investigation must fulfill these criteria. It is necessary that the gathered data is comprehensible in order to be able to detect possible errors. The must-criteria address the core functionality of a particular
forensic application. This is the benefit that should be achieved by using this application. In contrast to the other criteria those
tests are unique for every type of tool (see e.g. [3] for forensic duplication applications). According to [7], applications can be evaluated in either a functionality-oriented or a tool-oriented manner.
The possibility of actual testing already fulfills some Daubert factors (see Table I). Those include the factors Peer Review
and Publication and Potential for testing or actual testing. We suggest inspecting the following properties of a particular
forensic tool during the evaluation of its core functionality:
• Reproducibility of the results,
• Possible errors that can occur,
• Frequency of those errors,
• Significance of the errors.
The reproducibility of the results is the most important property of a forensic application. If a particular tool behaves non-deterministically, it is unsuitable for forensic investigations, because it is not possible to determine whether the result is valid
or not. This applies especially for results of tools that should be admitted under rule 901 of the Federal Rules of Evidence
(see [6]). The knowledge about possible errors and their particular frequency is also required due to the Daubert factors, in
particular, known or potential errors and the control for or consideration of confounding factors. We suggest that both false negative and false positive rates are determined during the test of the error rates.
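As a minimal illustration of how such error rates could be tabulated during testing, the following Python sketch computes false positive and false negative rates from recorded test outcomes; the outcome counts are purely hypothetical and only serve to show the calculation.

# Sketch: derive false positive and false negative rates from test outcomes.
# An outcome is a pair (trace_present, trace_reported); all counts below are
# hypothetical illustration values, not measured results.
def error_rates(outcomes):
    fp = sum(1 for present, reported in outcomes if not present and reported)
    fn = sum(1 for present, reported in outcomes if present and not reported)
    negatives = sum(1 for present, _ in outcomes if not present)
    positives = sum(1 for present, _ in outcomes if present)
    return (fp / negatives if negatives else 0.0,
            fn / positives if positives else 0.0)

# hypothetical test run: 2 spurious hits in 100 clean samples,
# 1 missed trace in 50 samples that contain a trace
outcomes = ([(False, True)] * 2 + [(False, False)] * 98
            + [(True, False)] * 1 + [(True, True)] * 49)
fp_rate, fn_rate = error_rates(outcomes)
print("false positive rate: {:.2%}, false negative rate: {:.2%}".format(fp_rate, fn_rate))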
B. Should-Criteria
As mentioned before, we suggest that the should-criteria must be addressed either by the forensic application or externally
to support the forensic process. However, it is preferable that the application itself addresses those criteria sufficiently to avoid
potential errors or the omission of some criteria altogether.
1) Logging: The logging is, in our opinion, the most important criterion of the forensic process. If every step of the investigation is documented properly, the investigation is comprehensible for uninvolved people, such as auditors, and can
be repeated identically to test the reproducibility of the results. All logging functions of forensic applications are used for the
process accompanying documentation. The particular information that such a logging function captures can also be divided into
must, should and can information according to their particular priority. We suggest that additional information is provided by the investigator. This includes the name of the investigator, the names of witnesses, the case ID or the search warrant number.
Some of the aspects are also suggested in [3] for the logging of the evaluation of the forensic tools. In the following we
enumerate the information that must be logged:
• Date and time of the beginning of the investigation with this particular tool,
• Date and time of finishing the investigation with this particular tool,
• Name of the tool,
• Used parameters of the tool,
• With graphical tools: selected options, executed menu options,
• With interactive tools: course of the investigation,
• Reference to the produced results,
• Data about the data source from which the data is extracted,
• Occurrence, type and number of errors.
With this data uninvolved people can comprehend and reproduce the investigation. It logs which results were produced when
and in which way. The information about errors is very important to be able to decide whether the collected data is complete.
However, this does not describe the quality of the result of this particular method. Hence, additional information should be
logged:
• Version of the tool; CVS/SVN revision of developer versions,
• Name and version of used libraries; CVS/SVN revision of developer versions,
• Information about the system that is used for the investigation.
We suggest that the versions of the tool and the used libraries are logged to determine whether they are affected by bugs that become known prior to or after the investigation. This allows for an exclusion of possibly inaccurate or incomplete results. This might be necessary if a major error is found within the utilized tool. This also applies to the utilized system, since it might lead to an
alteration of the data source, which affects the evidentiary value. To support the analysis of such potential errors the following
information might be logged, too:
• Cryptographic hash values of source files, date and time of the compilation of the tool,
• Cryptographic hash values of source files, date and time of the compilation of the utilized libraries,
• Cryptographic hash value of the compiled tool,
• Cryptographic hash value of the compiled libraries.
Since some cryptographic hash algorithms are prone to intended collisions, only algorithms that represent the latest state of the
art should be used. It is possible to determine the exact versions of the utilized tool using this information about the forensic
tool itself and the libraries. However, logging the hash values of all source files might result in a large amount of data.
Hence, it is only reasonable to log such information in some special cases, such as the utilization of experimental tools or
libraries.
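The information listed above can be captured in a simple structured log record. The following Python sketch assumes a JSON-based layout; COSEFOS does not prescribe a concrete format, and all field names and values shown here are hypothetical.

# Sketch of a process accompanying log record; field names, values and the
# JSON layout are assumptions for illustration only.
import json, hashlib, platform

record = {
    "must": {
        "start": "2011-03-01T09:15:00Z",                 # begin of the tool run
        "end": "2011-03-01T10:02:30Z",                   # end of the tool run
        "tool": "dcfldd",
        "parameters": ["if=/dev/sdb", "of=image.dd", "hash=sha256"],
        "results": ["image.dd", "image.hashlog"],
        "source": "hard disk, serial number ABC123",     # hypothetical
        "errors": [],
    },
    "should": {
        "tool_version": "1.3.4-1",
        "libraries": {"libc": "2.11"},                   # hypothetical
        "system": platform.platform(),
    },
    "investigator": {"name": "J. Doe", "case_id": "2011-042",
                     "search_warrant": "4711"},          # hypothetical
}
entry = json.dumps(record, sort_keys=True)
# hash value of the log entry, to be noted separately (cf. Section IV-B2)
print(hashlib.sha256(entry.encode("utf-8")).hexdigest())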
2) Protection of the integrity of the gathered data: We consider the protection of the integrity of the gathered data as very
important. It is necessary to be able to detect any alteration of the data. We suggest using cryptographic hash functions to be
able to detect any corruption of the data. According to [16], it should not be feasible to find collisions of the hash functions
using computers (see e.g. [17] for current recommendations regarding secure hash functions). We suggest that the forensic
application calculates hash values for the results of the investigations. These should be displayed and stored within the log file.
The hash value should be noted separately to hinder an intentional alteration of the collected data. However, the primary goal
of such a hash value is to be able to detect random or unintended alterations of the data, e.g. which might occur during the
transmission or storing of the data. We suggest calculating and storing a hash value for the log file and all of its entries, too.
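A minimal sketch of this suggestion, assuming hypothetical file names and SHA-256 as a state-of-the-art hash function, could look as follows:

# Sketch: compute SHA-256 values of a result file and of the log file so that
# later alterations can be detected; the file names are hypothetical.
import hashlib

def sha256_of(path, chunk_size=1024 * 1024):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

for path in ("image.dd", "investigation.log"):
    # displayed to the investigator and noted separately from the data
    print(path, "SHA-256:", sha256_of(path))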
3) Protection of the authenticity of the gathered data: We suggest protecting the authenticity of the gathered data, too.
This is necessary because it allows for the determination of whether the data originated from the given source. Hence, a
forensic application should provide functions to support the protection of the authenticity. We suggest that this applies for all
results of the investigation and the log file. The authenticity can be achieved using different techniques, e.g. HMACs, which
combine a key-less hash function with a key ([16]) or digital signatures, such as DSA3 . However, in most cases this requires
a key management. We cannot determine which approach is the best option for a particular tool; it is highly dependent on the
particular use-case. However, we suggest using digital signatures that are supported by a public key infrastructure. Additionally,
a strong password should be used to complicate forging the signature. Ideally, the required keys are stored on external tokens,
such as smart cards, that cannot be duplicated easily. We suggest, for the prevention of manipulations by the investigator (see
attacker models in Section III-B), that only the forensic application is able to access the necessary keys. However, the particular
laws regulating the usage of digital signatures must be regarded to ensure a court approval of such methods.
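As one of the options named above, an HMAC over a result file can be computed with standard library means; the following Python sketch uses HMAC-SHA256 and a key read from a file, which merely stands in for the key management (e.g. keys on a smart card) discussed in the text. The file names are hypothetical.

# Sketch: HMAC-SHA256 over a result file; the key handling is simplified and
# the file names are hypothetical.
import hmac, hashlib

def hmac_of_file(path, key, chunk_size=1024 * 1024):
    mac = hmac.new(key, digestmod=hashlib.sha256)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            mac.update(chunk)
    return mac.hexdigest()

with open("authenticity.key", "rb") as key_file:      # hypothetical key file
    key = key_file.read()
tag = hmac_of_file("image.dd", key)
print("HMAC-SHA256:", tag)
# later verification: recompute and compare in constant time
assert hmac.compare_digest(tag, hmac_of_file("image.dd", key))

A digital signature would replace the shared key with a private key held only by the forensic application, along the lines suggested above.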
4) Protection of the confidentiality of the gathered data: We suggest that a forensic application protects the confidentiality of
the gathered data. However, the necessity of such methods depends on the processed data; if they contain confidential or person
related data, methods for the protection of the confidentiality are mandatory. This applies in particular, if the investigation is not
performed by law enforcement authorities. For the protection of the confidentiality either symmetric or asymmetric encryption
can be utilized (see e.g. [16]). However, we suggest using at least one asymmetric part due to the attacker models in Section III-B. If the investigator has access only to the public key and the gathered data is automatically encrypted by the forensic application, stealing or modifying data is much more complicated. We suggest that the source data should also be encrypted, if that is possible in the particular use-case. If the source data is encrypted, the forensic application should enforce encrypting the gathered data, too. Otherwise the confidential data could be stolen easily. However, this requires a suitable key management, too. If such techniques are used, the data can still be stolen by transcription or by taking a picture. Therefore, we suggest
additional organizational regulations.
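The following sketch illustrates the suggested asymmetric part, assuming the third-party Python package cryptography: a random symmetric key encrypts the gathered data and a public key encrypts that symmetric key, so an investigator holding only the public key cannot decrypt the result. The file names and the ad-hoc key generation are hypothetical.

# Sketch of hybrid encryption of gathered data: symmetric encryption of the
# data, asymmetric encryption of the symmetric key. Reads the whole file into
# memory for brevity; a real tool would work block-wise.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# in practice the key pair comes from the key management of the case owner
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

data_key = Fernet.generate_key()                       # random symmetric key
with open("image.dd", "rb") as f:                      # hypothetical image
    ciphertext = Fernet(data_key).encrypt(f.read())
wrapped_key = public_key.encrypt(
    data_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None))

with open("image.dd.enc", "wb") as f:
    f.write(ciphertext)
with open("image.key.enc", "wb") as f:                 # only the private key holder can unwrap
    f.write(wrapped_key)

Decryption then requires the private key, which, following the attacker models in Section III-B, should not be available on the forensic workstation.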
5) Access restriction for the gathered data: The access restriction for the gathered data is a special form of the confidentiality
protection. We suggest regarding such functions separately, since some tools, such as EnCase, provide a simple password within the evidence file header to restrict the access to the data. Because it does not involve any encryption, third party tools can read
the contents of the file without knowing the password [18]. We suggest an access restriction to address the attacker models
from Section III-B. This is especially necessary if traces are investigated by external service providers. Thus, the forensic
application and its utilized data format should support the definition of access restrictions. We suggest that such restrictions
include the enforcement of the encryption of the gathered data. Additionally, a restriction to a particular forensic workstation
might be useful.
6) Protection of the integrity of the source data: The utilization of write-blocking devices is considered as best practice
in IT-forensics. However, we suggest that a forensic application provides additional methods to support the protection of the
integrity of the source data, since, in some cases, write-blocking devices are not available or cannot be used at all. Even if
a particular tool uses copies of the original data, the results of the investigation might be compromised due to the alteration
of the source data. However, such a protection is not always possible, especially during a live analysis or incident response.
Thus, we suggest defining an order of the data to collect (e.g. according to the volatility of the data); this is required if some data might be compromised during the data gathering.
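A minimal sketch of such an ordering, loosely following the common order-of-volatility guidance, is shown below; the concrete entries and the gathering function are placeholders that have to be fixed per use-case.

# Sketch: acquire data sources in decreasing order of volatility; the list
# entries and the gather callback are hypothetical placeholders.
ACQUISITION_ORDER = [
    "CPU registers and caches",
    "main memory",
    "network state and open connections",
    "running processes",
    "mass storage",
    "remote logging and configuration data",
    "archival media",
]

def acquire_in_order(gather):
    for source in ACQUISITION_ORDER:
        gather(source)          # acquire and log one data source

acquire_in_order(lambda source: print("acquiring:", source))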
C. Can-criteria
The can-criteria are optional criteria. They might be addressed during the investigation, e.g. to allow for a more compre-
hensible investigation.
3 Digital Signature Standard http://www.itl.nist.gov/fipspubs/fip186.htm
98
1) System heterogeneity: We suggest system heterogeneity between the investigated system and the investigation system to minimize the risk of the execution of malware on the forensic workstation. This applies to the heterogeneity of both software and hardware. It is motivated by the layered architecture of security mechanisms in [16]; here the security must be ensured on all layers. However, this is very complicated since it requires special hardware. Since we assume that the forensic workstation might be subject to attacks (see Section III), system heterogeneity might provide a limited preventive effect. This
is obvious if an investigated system is infected by malware; without any countermeasures the forensic workstation might be
infected during the investigation, too. However, if the software and hardware differs significantly, such an infection is very
unlikely.
2) Minimalism of required system rights: We suggest the criterion minimalism of required system rights to assess what
particular rights are required by a forensic application. It is motivated by the Principle of Least Privilege, which is described
in [16]. Thus, the necessary access rights of a tool should be as limited as possible. This avoids compromising of the entire
forensic workstation (or networks thereof) if a vulnerability within the forensic application is exploited by an attacker (see Section III). Such access restrictions should apply to data and resources. However, in most cases such access restriction has to be
provided externally. We suggest that the forensic application uses only the necessary resources and data.
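On a POSIX system, a tool could enforce this itself by dropping elevated privileges as soon as the resources that need them are open. The following Python sketch illustrates the idea; the device path and the unprivileged account name are hypothetical.

# Sketch: open the only resource that needs elevated rights, then drop
# privileges irreversibly before any further processing (POSIX only).
import os, pwd

source = open("/dev/sdb", "rb")               # requires elevated rights
target = open("image.dd", "wb")

if os.geteuid() == 0:
    account = pwd.getpwnam("forensic")        # hypothetical service account
    os.setgid(account.pw_gid)
    os.setuid(account.pw_uid)

# from here on the tool runs with minimal rights
for chunk in iter(lambda: source.read(1024 * 1024), b""):
    target.write(chunk)
source.close()
target.close()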
3) Open source: In [19] the role of open source software in IT-forensics is discussed. The availability of the source code of the utilized tools allows for a detailed evaluation of those tools by a large audience. Hence, possible errors that might cause false positives or false negatives might be discovered. With the help of the source code of different versions of the same tool, possible bug fixes can be identified. Hence, we suggest logging the versions of the forensic applications that are used
during the forensic investigation. The public availability of the source code also addresses the Daubert factor peer review and
publication. However, we suggest, in contrast to ordinary open source software, limiting the number of persons that are allowed to change the source code. Additionally, all updates of the source code should be validated.
In [19] it is suggested to provide at least the design specifications of closed source software publicly.
D. Soft-criteria
We define soft-criteria as criteria that cannot be determined by an analysis of a particular tool and its results.
However, they might have an impact on the evidentiary value of the results of the investigation. Additionally, we assume that
soft-criteria might change over time. Thus, the particular evaluation results of the soft-criteria represent only a snapshot.
1) General acceptance within the expert community: The general acceptance within the expert community is a Daubert
factor. According to [5], the importance of this particular factor has grown since the Daubert decision in 1993. In over 96% of the cases where the general acceptance of the challenged evidence was rated unreliable, the evidence was excluded from the case. However, the general acceptance of a method alone is not sufficient: 10% of the challenged evidence was excluded even though it was declared generally accepted. It is additionally reported that the general acceptance was addressed in only 50 out of 375 investigated cases. However, since the general acceptance is obviously required if the evidence is challenged, we suggest evaluating it for all utilized methods and tools.
2) Publication of the method: The publication of the method is a very important Daubert factor, too. Since the publication
of the method is required for a peer review, which might increase the general acceptance of the particular method, we suggest
evaluating the public availability. However, in contrast to the hard-criterion open source, the publication does not necessarily
require the source code. It still allows for evaluations by independent third parties. Hence, we suggest evaluating the publication
of the method.
3) Standards for the usage of the application: The existence of standards for the usage of the application is also derived
from a Daubert factor. Especially for very complex applications or in cases where errors are very likely, we suggest defining such standards. However, standards might range from best practices to very strict guidelines. The utilization of write-blocking devices and hash value calculation during the forensic duplication are such standards in the form of best practices. In some cases
such standards might provide a special order of data that should be gathered.
4) Intention of the investigation: We assume that the particular intention of the investigation is not specific to most forensic
applications. However, this criterion is important for the evaluation of a tool. If such a tool is designed to gather data for
the accusation of a suspect exclusively, but no particular data that might prove his innocence, this tool is unfairly prejudicial.
Hence, we suggest that a forensic application should always alert the investigator if the traces that are gathered are potentially
incomplete; such warnings should be logged, too.
5) Personal familiarity with the application: The personal familiarity with the application is our last suggested soft-criterion.
Especially during incident response and live forensics an experienced operation of the forensic application is necessary.
Additionally, a better familiarity with the tool reduces the time necessary for the investigation, because particular functions do not have to be searched for. Besides the individual familiarity with the tool, we suggest, e.g. for computer incident response teams, to evaluate the familiarity of each team member with the particular tool. The Federal Rules of Evidence (see [6]) address this criterion within Rule 702, which requires that the expert is qualified by knowledge, skill, experience, training or education.
E. Formalization of COSEFOS
We suggest the following formalization of COSEFOS to be able to describe the properties of a particular forensic application.
It contains the results of the evaluation of all criteria and the classification of the particular tool within the forensic process
(see Section II-C). We suggest the following formalization for COSEFOS:
Definition IV-E.1: The evaluation consists of the following properties, which are introduced in detail in the remainder of
this section:
f = ({PP, CM, DT}, {CF, FL, IP, AP, CP, AR, SP, SH, MR, OS}, {GA, PM, SU, II, PF}).
The properties {PP, CM, DT} describe the classification of the application according to the model of the forensic process.
The hard-criteria are represented by the properties {CF, FL, IP, AP, CP, AR, SP, SH, MR, OS}.
The soft-criteria are represented by the properties {GA, PM, SU, II, PF}.
We suggest that every sub-property addresses one criterion of COSEFOS, or one part of the classification according to the
model of the forensic process. The sub-properties are shown below. For the classification of the application according to the
model of the forensic process we suggest three sub-properties:
• PP: phase of the forensic process in which the particular tool is used,
• CM: class of method of the forensic tool,
• DT: processed data types.
We suggest that multiple phases and data types can be selected for the classification of one particular tool, since it might be
used in different phases and process different data types. However, for the class of method only one particular element should
be selected during the evaluation. We suggest splitting up forensic toolkits into multiple methods and evaluating them separately; thus a single class of method can be selected.
The hard-criteria are described by the following sub-properties:
• CF: core functionality of the forensic application,
• FL: functions for logging,
• IP: functions for integrity protection of the gathered data,
• AP: functions for authenticity protection of the gathered data,
• CP: functions for the confidentiality protection of the gathered data,
• AR: possibility for access restriction,
• SP: functions for the integrity protection of the source data,
• SH: system heterogeneity,
• MR: minimalism of required system rights,
• OS: open source.
We suggest that for each of those sub-properties multiple elements might be selected. This also applies for the following
soft-criteria:
• GA: general acceptance within the expert community,
• PM: publication of the method,
• SU: standards for the usage of the application,
• II: intention of the investigation,
• PF: personal familiarity with the application.
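To make the formalization tangible, the tuple f of Definition IV-E.1 can be written down as a plain data structure. The following Python sketch is only an illustration and not part of COSEFOS; the element symbols are taken from the dcfldd evaluation presented later in Section V-A.

# Sketch: the evaluation f = (classification, hard-criteria, soft-criteria) as
# a tuple of dictionaries mapping each sub-property to a set of elements.
f_dcfldd = (
    {"PP": {"pp3"}, "CM": {"cm6"}, "DT": {"dt2"}},                   # classification
    {"CF": {"cf1"}, "FL": {"fl1.0"},
     "IP": {"ip2.0", "ip2.1", "ip2.3", "ip2.4", "ip2.5"},
     "AP": {"ap0"}, "CP": {"cp0"}, "AR": {"ar0"}, "SP": {"sp0"},
     "SH": {"sh3"}, "MR": {"mr1"}, "OS": {"os1"}},                   # hard-criteria
    {"GA": {"ga1"}, "PM": {"pm1"}, "SU": {"su0"}, "II": {"ii1"},
     "PF": set()},                                                    # soft-criteria
)
classification, hard, soft = f_dcfldd
print("integrity protection elements:", sorted(hard["IP"]))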
We suggest combining this formalization with an evaluation scale for the benchmarking of forensic applications. In the following
section we evaluate one exemplary forensic application.
V. RESULTS: EXEMPLARY UTILIZATION OF COSEFOS
In this section we exemplarily evaluate the command line tool dcfldd4, which is used to gather forensic duplicates of various storage devices, and the forensic toolkit EnCase Forensic [11]. However, only the imaging part is regarded for the preliminary evaluation of the core functionality of EnCase. Furthermore, we show how COSEFOS might improve the development of forensic tools by introducing a preliminary framework for the creation of forensic applications (libopenforensic).
A. Evaluation of dcfldd using COSEFOS
In contrast to dd5 , dcfldd is a purpose-built tool for forensic investigations. Hence, it should fulfill the requirements that we
have defined within COSEFOS in Section IV. We use version 1.3.4-1 of dcfldd for our exemplary evaluation.
4 http://dcfldd.sourceforge.net/
5 http://www.opengroup.org/onlinepubs/9699919799/utilities/dd.html
1) Core functionality of dcfldd: Because dcfldd is an enhanced version of dd, we use the evaluation of the core functionality
from [20]. Therein it is reported that dd does not acquire the last sector of a disk if the sector count is odd; if the sector count
is even, all sectors are acquired. However, according to [21] this happens only if a Linux Kernel up to version 2.4 is used;
this error is not observed with Linux Kernels of the current 2.6 branch. Furthermore, additional sectors are marked defective
and filled with zero-bytes if the source disk contains errors. The count of additionally zeroed sectors depends on the interface
that is used to connect the hard disk. The particular results in [21] are identical for dcfldd and dd from the Helix-CD6 . If a
host protected area (HPA) or device configuration overlay (DCO) is configured for the source disk, only a duplicate of the
accessible area of the disk is acquired. Hence, they must be deactivated prior to the acquisition. If such an HPA is present, current Linux Kernels report this, as shown in Listing 1:
[ 3.932027] ata6: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
[ 3.932800] ata6.00: HPA detected: current 5860000, native 78140160
[ 3.932809] ata6.00: ATA-6: ST9402115AS, 3.01, max UDMA/100
[ 3.932811] ata6.00: 5860000 sectors, multi 0: LBA48 NCQ (depth 1)
[ 3.933663] ata6.00: configured for UDMA/100
Listing 1. Excerpt of the boot protocol with a configured HPA
Listing 1 shows an excerpt of the boot protocol of a Linux Kernel (version 2.6.28) reporting a detected HPA. If it is not manually deactivated, any duplicate consists only of the visible 5860000 sectors. However, some Linux systems intended for forensic investigations automatically deactivate present HPAs, as shown in Listing 2:
[ 44.988878] ata6: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
[ 44.990466] ata6.00: HPA unlocked: 5860000 -> 78140160, native 78140160
[ 44.990472] ata6.00: ATA-6: ST9402115AS, 3.01, max UDMA/100
[ 44.990476] ata6.00: 78140160 sectors, multi 0: LBA48 NCQ (depth 1)
[ 44.991294] ata6.00: configured for UDMA/100
Listing 2. Excerpt of the boot protocol of the Linux Kernel of the Helix-CD
Listing 2 shows that the HPA is automatically deactivated by the Kernel of the Helix-CD. This allows for the acquisition
of a forensic duplicate of the whole disk. Hence, we suggest that such a specialized forensic environment is used for the
acquisition.
The results of the data acquisition with this particular version of dcfldd might be incomplete depending on the utilized environment; thus, we suggest "tool works mostly correctly; however, some errors might occur under special circumstances (cf1)" as the evaluation result for the core functionality.
2) Logging functions of dcfldd: The forensic application dcfldd has some limited logging functions. Using the parameter
hashlog=Filename a simple log file containing hash values is created; additionally, the parameter errlog=Filename defines
a log file which records all occurred errors. Both logging functions must be activated manually. Because dcfldd stores the
gathered data as raw data, all logs must be stored in separate files. Because dcfldd does not have any further logging functions, we suggest logging externally the program execution with all parameters, the version used, the date and time of the start and the end of the investigation, as well as references to the source and the gathered data. The logging functions of dcfldd are obviously limited; thus, the evaluation result is "tool supports partial logging (fl1.0)".
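Since this external logging has to be performed anyway, it can be scripted around the tool. The following Python sketch wraps a dcfldd invocation using the parameters described above and records the externally required information; the device, the output paths and the manually recorded version string are hypothetical.

# Sketch: wrap a dcfldd run and record externally what dcfldd does not log
# itself (command line, version, start/end time, references to source and
# gathered data). All paths and the version string are hypothetical.
import json, subprocess, datetime

cmd = ["dcfldd", "if=/dev/sdb", "of=/evidence/image.dd", "hash=sha256",
       "hashlog=/evidence/image.hashlog", "errlog=/evidence/image.errlog"]

start = datetime.datetime.utcnow().isoformat() + "Z"
completed = subprocess.run(cmd)
end = datetime.datetime.utcnow().isoformat() + "Z"

with open("/evidence/external.log", "a") as log:
    log.write(json.dumps({
        "tool": "dcfldd", "version": "1.3.4-1",        # recorded manually
        "command": cmd, "start": start, "end": end,
        "source": "/dev/sdb", "results": ["/evidence/image.dd"],
        "exit_code": completed.returncode,
    }) + "\n")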
3) Functions for integrity protection of the gathered data of dcfldd: Several functions for the protection of the integrity of
the gathered data are supported by dcfldd. Our evaluated version is able to calculate different hash values: MD5 (ip2.0 ), SHA-1
(ip2.1 ), SHA-256 (ip2.3 ), SHA-384 (ip2.4 ) and SHA-512 (ip2.5 ). However, since MD5 is used as default hash function, another
function should be selected by providing the parameter hash=Hashalgorithm at the command line, because of the known
weakness of that algorithm, especially against intended collisions. Besides the hash value of the whole file, additional hash
values for segments of the data can be calculated. Therefore, the parameter hashwindow=BYTES is used.
4) Functions for authenticity protection of the gathered data of dcfldd: The application dcfldd does not support the protection
of the authenticity (ap0 ). Hence, we suggest external methods for the protection of the authenticity.
5) Functions for confidentiality protection of the gathered data of dcfldd: The forensic application dcfldd does not support the protection of the confidentiality (cp0). The gathered data is stored as raw data; without external methods the confidentiality
of the data cannot be assured. Hence, we suggest encrypting the files, e.g. within a cryptographic-container.
6) Functions for access restriction of dcfldd: The gathered raw data of dcfldd does not support any kind of access restriction
(ar0). Thus, we suggest using access restriction methods of the file-system or storing the data within a cryptographic-container.
6 http://www.e-fense.com/products.php
7) Functions for integrity protection of source data of dcfldd: During the acquisition dcfldd does not write to the source
disk if the parameters are given correctly. However, no particular warning message is displayed if the investigator attempts
to overwrite the source data. Additionally, no countermeasures are available to block write access of other applications to the
source data. Thus, no particular functions for the integrity protection of the source data are provided (sp0 ) by this particular
version of dcfldd. Hence, we suggest using a hardware write-blocking device and training the investigator in the usage of dcfldd and the investigation environment.
8) System heterogeneity of dcfldd: The application dcfldd allows for the usage on different platforms and operating systems
(sh3). It is available for Microsoft Windows and different UNIX derivatives. This allows for choosing a forensic workstation that
significantly differs from the investigated system.
9) Required system rights for the usage of dcfldd: The forensic application dcfldd requires read access to the source data
and write access to a directory for storing the gathered data. No particular administrator privileges are necessary (mr1 ).
10) Availability of the source code of dcfldd: Dcfldd is open source software (os1); the source code is available to everyone7 and it is distributed under the terms and conditions of the GPL8 license.
11) General acceptance of dcfldd within the expert community: The application dcfldd is used in multiple forensic
investigations. It has been developed by the Department of Defense Cyber Crime Center9 for this purpose. Hence, we assume
that dcfldd is generally accepted (ga1 ).
12) Publication of the method dcfldd: Because the source code of dcfldd is available for everyone, the method is published
(pm1 ). This allows for analyzing the source code and peer review.
13) Standards for the usage of dcfldd: To our knowledge, no particular standard for using dcfldd exists (su0 ). However,
there are best practices for the forensic duplication. They include e.g. the utilization of hardware write-blocking devices to
avoid any corruption of the source data. Furthermore, hash values should be calculated by the investigator.
14) Intention of the investigation with dcfldd: We cannot determine a special intention of the usage of dcfldd (ii1 ). It is
usually used to acquire complete duplicates of hard disks or partitions. Thus, no particular intention of the investigation can
be determined. We suggest logging the acquisition to ensure that no possible evidence is excluded from the investigation.
15) Formalization of the evaluation of dcfldd: The formalization of the evaluation of dcfldd according to Section IV-E is
shown in the following:
f = ({pp3 , cm6 , dt2 }, {cf1 , fl1.0 , {ip2.0 , ip2.1 , ip2.3 , ip2.4 , ip2.5 }, ap0 , cp0 , ar0 , sp0 , sh3 , mr1 , os1 }, {ga1 , pm1 , su0 , ii1 , {}})
This particular tool is used during the data gathering phase (pp3 ). It is a method for data processing and evaluation (cm6 )
and operates with raw data (dt2 ).
The personal familiarity with the particular tool is investigator-dependent; thus, it cannot be determined in general; therefore an empty symbol ({}) is used.
B. Evaluation of EnCase Forensic using COSEFOS
EnCase Forensic [11] is a widely used commercial forensic toolkit developed by Guidance Software. It is used, for example, by many law-enforcement agencies for forensic investigations. The breadth of functions covers the majority of typical challenges of forensic investigations, from the data gathering to the final documentation. We use version 6.1.0.17 of EnCase Forensic for our exemplary evaluation.
1) Core functionality of EnCase Forensic: Our exemplary evaluation of the core functionality of EnCase Forensic is limited
to the forensic duplication mechanism due to the breadth of its functionality. Furthermore, the integrated scripting language
allows for an extension of the functionality of EnCase. The forensic duplication mechanism is evaluated in [22] using the NIST Forensic Software Testing Support Tools [3]. Here, EnCase produces exact duplicates in most test cases. However, some
particular circumstances cause errors during the data gathering:
• When imaging an NTFS partition, a small number of sectors is not duplicated correctly; those sectors are filled with the content of other sectors.
• When imaging an NTFS partition, the last sector of the partition is not acquired.
• If some sectors of the source disk are faulty, additional sectors might be zeroed within the duplicate, depending on the actual setting of the error granularity. Using the default settings, a block of 64 sectors is zeroed ([23]). Only with an error granularity of 1 are no additional sectors zeroed.
• Configured Host Protected Areas or Device Configuration Overlays are acquired when utilizing the FastBloc SE software write-blocking device; if other hardware write-blocking devices are used, such as FastBloc FE, those particular areas are not acquired.
7 http://dcfldd.sourceforge.net/
8 http://www.gnu.org/licenses/gpl.html
9 http://www.dc3.mil/home.php
• During the restoration of some file-systems, such as FAT32 or NTFS, some meta-data is altered. However, this could be
avoided by cutting the power-supply after restoring the image, instead of performing a standard shutdown procedure of
the operating system.
• During the restoration of images on USB-devices, some meta-data is changed, too.
In our opinion the last two errors are not relevant during most forensic investigations but must be regarded if the duplicates are
written to other hard disks. However, those errors are caused by the required underlying Windows operating system, because no write-blocking device can be used during the recovery to a physical drive.
Areas protected by HPA or DCO are usually inaccessible, if they are not deactivated by the utilized write-blocking device.
The error granularity should be additionally set to 1 to reduce the amount of zeroed sectors within forensic duplicates of faulty
disks. The first two errors can be avoided by acquiring images of the entire disk, instead of multiple partition images. Thus,
we suggest "tool works mostly correctly; however, some errors might occur under special circumstances (cf1)" as the evaluation result for the core functionality.
2) Logging functions of EnCase Forensic: EnCase Forensic has some limited logging functions for the process accompanying
documentation (fl1.1) and final documentation (fl2). For the latter, an integrated report generator can be utilized to create a preliminary final report. It creates the report using the results of special functions (e.g. search for strings and phone or credit
card numbers) and manually bookmarked traces.
For the process accompanying documentation, various data is logged; it includes the case ID, the name of the investigator and optional notes. For some particular actions, a status as well as the starting and ending time is recorded automatically within the log.
Additionally, besides an MD5-Hash a globally unique identifier (GUID) is generated and stored for each acquired duplicate;
this information is optionally recorded in the log, too. Additionally, the meta-data of bookmarked files is recorded within the
log. However, how and when a file was bookmarked is not recorded.
A detailed log of the course of the investigation is not recorded automatically; thus, we suggest that the investigator performs a manual logging.
3) Functions for integrity protection of the gathered data of EnCase Forensic: The integrity of the acquired data should be protected by the EnCase Evidence File Format (see e.g. [18]): it calculates a CRC checksum (ip1.0) for each block of data; additionally, an MD5 hash (ip2.0) is computed for the entire forensic duplicate and saved within the evidence file. However,
various vulnerabilities are reported for the MD5 hash algorithm (see e.g. [24]). Thus, more secure hash algorithms are necessary
to detect intentional manipulations of the acquired data.
4) Functions for authenticity protection of the gathered data of EnCase Forensic: EnCase Forensic itself does not support the protection of the authenticity of the gathered data (ap0). However, it is optionally possible to store the data within EnCase
SAFE, which should be able to ensure the authenticity according to [23]; if EnCase SAFE is not available, the authenticity of
the data must be ensured using external methods.
5) Functions for confidentiality protection of the gathered data of EnCase Forensic: EnCase Forensic has an option to define passwords for evidence files; if such a protected file is opened with EnCase, the correct password must be entered. However,
the password is just a flag within the header of the evidence file (see e.g. [18]) and does not involve any encryption of the
data. Thus, alternative implementations, such as libewf 10 or GetData MountImage Pro11 , are able to open those files without
the password. Hence, in our opinion EnCase does not support any techniques for the protection of the confidentiality of the
gathered data (cp0 ).
6) Functions for access restriction of EnCase Forensic: Techniques to restrict the access to the gathered data are not
supported by EnCase Forensic. However, by defining a password during the acquisition of the data it is possible to restrict the
access within EnCase (ar0 ).
7) Functions for integrity protection of the source data of EnCase Forensic: EnCase Forensic optionally uses the FastBloc SE12 software write-blocking mechanism to protect the integrity of the source data (sp1). Additionally, particular precautions are necessary to avoid any alteration of the source data by the utilized Windows operating system; a separate hardware write-
blocking device can provide a suitable protection against unwanted alterations.
8) System heterogeneity of EnCase Forensic: EnCase Forensic is available exclusively for Microsoft Windows13 operating
systems (sh0 ). However, separate duplication utilities are provided for Linux and DOS.
9) Required system rights for the usage of EnCase: EnCase Forensic requires administrator privileges for the acquisition of forensic duplicates (mr0); without them, only a small portion of storage devices, such as floppy disks or compact disks, can be acquired.
10) Availability of the source code of EnCase Forensic: EnCase Forensic is a commercial product; the source code is not
publicly available (os0 ). Thus, a detailed verification or validation of the implemented functions is not possible for everybody.
10 http://sourceforge.net/projects/libewf/
11 http://www.mountimage.com/de/index.php
12 http://www.encaseenterprise.com/products/ef modules.asp#se
13 http://www.microsoft.com/windows/default.aspx
Fig. 1. Structure of libopenforensic
11) General acceptance of EnCase Forensic within the expert community: Since EnCase Forensic is used by several law-
enforcement agencies for forensic investigations and because the gathered evidence has been admitted in various courts (see
e.g. [10]), we assume that this particular software is generally accepted (ga1 ).
12) Publication of the method EnCase Forensic: EnCase Forensic is commercially available; thus, the functionality can be evaluated with various scenarios (pm1). However, no formal evaluation is possible due to the lack of publicly available source code.
13) Standards for the usage of the toolkit EnCase Forensic: Several standards and certifications exist for the usage of
EnCase Forensic (su1 ), e.g. [23] addresses the certification of investigators using EnCase.
14) Intention of the investigation with EnCase Forensic: We cannot determine any particular intention of the investigation due to the breadth of functionality provided by EnCase Forensic (ii1).
15) Formalization of the evaluation of EnCase with COSEFOS: The formalization of the evaluation of EnCase according
to Section IV-E is shown in the following:
f = ({pp3 , cm6 , dt2 }, {cf1 , {fl1.1 , fl2 }, {ip1.0 , ip2.0 }, ap0 , cp0 , ar0 , sp1 , sh0 , mr0 , os0 }, {ga1 , pm1 , su1 , ii1 , {}})
EnCase is used during the data gathering phase (pp3 ). It is a method for data processing and evaluation (cm6 ) and operates
with raw data (dt2).
The personal familiarity with the particular tool is investigator-dependent; thus, it cannot be determined in general; therefore we use an empty symbol ({}).
C. Using COSEFOS to derive requirements during the development of forensic software
Our suggested evaluation scheme COSEFOS is also suitable for improving the development process of forensic software by providing a list of requirements. Therefore, we introduce the preliminary framework for the development of forensic software, "libopenforensic". It is designed to fulfill as many hard-criteria of COSEFOS as possible.
1) Design of libopenforensic: Libopenforensic is designed to use various data formats. Therefore, a generic interface is
created, which provides access to the data, as well as functions for logging, integrity protection, authenticity protection and
confidentiality protection. Figure 1 illustrates the structure of our framework: libopenforensic provides the mentioned
interface for the particular forensic application for data access and protection, the two data modules dciraw (for raw input data)
and dcioaff3 (for the Advanced Forensic Format, AFF [18]) provide access to particular data structures and lfutility
provides hash and logging functions within libopenforensic. The particular input and output files are abstracted for the forensic
application to complicate the access with functions other than libopenforensic. If a particular file is opened as input file, no
particular functions exist to write any data to the particular file handle. Additionally, a software write lock is enabled for
raw input files to hinder other applications from altering the file's content. Furthermore, the cryptographic hash sum for the file is
computed automatically after opening the file; thus, its integrity can be verified at any time.
Before an application can use any function of libopenforensic, it must be initialized. During this initialization all command-line
parameters, application and libopenforensic versions, system user, date and time, system environment (CPU-architecture and
operating system version), as well as the hash value of the application are logged automatically. All further actions during the investigation, such as opening and closing of files or copying data, are logged automatically with timestamps, too. This process
accompanying documentation is stored in each output-file and, if enabled, within each processed AFF file as an additional
chain-of-custody bill-of-materials (see [18]).
The integrity is ensured by default using the SHA-256 algorithm. However, other hash algorithms can be selected. Additionally,
the AFF function for calculating hash values for each block is used. Here, SHA-256 is used by default, too.
Since all output files are stored through AFFLIB14 , all particular functions of AFFLIB for the protection of the authenticity
and confidentiality are available, too. To preserve the meta-data (e.g. MAC-times) of input files, such data is collected prior to reading the file and is recorded within the log.
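The initialization behaviour described above can be illustrated with a small sketch. The following Python fragment is not the libopenforensic interface itself; it only mirrors, under assumed names, the kind of information that is captured automatically when the library is initialized.

# Sketch (not the actual libopenforensic API): collect at initialization the
# command line, versions, system user, date/time, system environment and the
# hash of the running application; version strings are hypothetical.
import sys, json, getpass, hashlib, platform, datetime

def init_log(app_version, lib_version):
    with open(sys.argv[0], "rb") as f:                  # hash of the application
        app_hash = hashlib.sha256(f.read()).hexdigest()
    return {
        "command_line": sys.argv,
        "application_version": app_version,
        "library_version": lib_version,
        "system_user": getpass.getuser(),
        "timestamp": datetime.datetime.utcnow().isoformat() + "Z",
        "environment": {"machine": platform.machine(), "os": platform.platform()},
        "application_sha256": app_hash,
        "actions": [],                                  # filled during the run
    }

log = init_log(app_version="0.1", lib_version="0.1")
log["actions"].append({"time": datetime.datetime.utcnow().isoformat() + "Z",
                       "action": "open input file", "file": "source.raw"})
print(json.dumps(log, indent=2))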
2) Criteria of COSEFOS addressed in libopenforensic: Our preliminary implementation of libopenforensic addresses multiple
hard-criteria of COSEFOS. The most important criterion is the logging. As mentioned in Section V-C1, any application that
uses libopenforensic must initialize it at first. Thereby, the log is initialized and all following actions using libopenforensic are
logged automatically with an additional timestamp. The particular forensic application might additionally record custom log
messages within the process accompanying documentation.
Furthermore, the integrity of the gathered data and of the collected meta-data is ensured with cryptographic hashes. Optionally,
authenticity and confidentiality can be ensured with digital signatures or encryption, respectively. Libopenforensic tries to
preserve the integrity of the source data by applying file-system-based write locks and by prohibiting, within libopenforensic,
any write access to input files by the forensic application.
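As an illustration of such a file-system-based write protection, the hedged sketch below opens an input file read-only and places an advisory lock on it. The concrete locking mechanism used by libopenforensic is not described in the paper; advisory locks only hinder other processes that honour them.

#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>

/* Opens an input file without write access and applies an advisory lock. */
int open_input_readonly(const char *path)
{
    int fd = open(path, O_RDONLY);            /* no write access for the caller */
    if (fd < 0)
        return -1;
    if (flock(fd, LOCK_SH | LOCK_NB) != 0) {  /* advisory, non-blocking lock    */
        close(fd);
        return -1;
    }
    return fd;                                /* the caller may only read       */
}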
Libopenforensic is intended to be a platform-independent, open-source library (fulfilling the criterion of system heterogeneity).
The library itself does not require any special system rights.
3) Differences between libopenforensic and AFFLIB: AFFLIB is used by libopenforensic as the data storage layer. Unlike
AFFLIB, libopenforensic is intended to enforce a forensically sound course of investigation by limiting access to the processed
data and by extensive logging. It additionally allows for the automatic generation of AFF bill-of-materials for any forensic
application that utilizes libopenforensic, which is not possible using plain AFFLIB because it does not provide this functionality
to third-party tools. Additionally, fail-safe defaults (see e.g. [16]) are used by libopenforensic; this includes the automatic
logging and the application of the SHA-256 hash algorithm. However, during the verification of hash values, a legacy hash
algorithm such as MD5 can also be used. This allows a fast adoption of new hash standards if vulnerabilities are found in the
default hash.
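This fail-safe default for the hash algorithm could be realized as sketched below (hypothetical names): SHA-256 is enforced whenever new hash values are created, while legacy algorithms such as MD5 remain selectable only for the verification of existing values.

#include <stdbool.h>
#include <stddef.h>

typedef enum { LOF_HASH_SHA256, LOF_HASH_SHA512, LOF_HASH_MD5_LEGACY } lof_hash_alg;

/* Creating new hashes: an unset or legacy request falls back to SHA-256. */
lof_hash_alg lof_hash_for_creation(const lof_hash_alg *requested)
{
    if (requested == NULL || *requested == LOF_HASH_MD5_LEGACY)
        return LOF_HASH_SHA256;               /* fail-safe default */
    return *requested;
}

/* Verifying existing hashes: legacy algorithms stay usable so that older
 * evidence containers remain verifiable after the default has changed. */
bool lof_hash_allowed_for_verification(lof_hash_alg alg)
{
    (void)alg;                                /* every known algorithm is accepted */
    return true;
}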
VI. CONCLUSIONS AND FUTURE WORK
We provide a first approach towards a common evaluation scheme for forensic software, addressing both technical and legal
aspects. We use the Federal Rules of Evidence and the Daubert challenge as exemplary legal fundamentals from the U.S.
jurisdiction to show tendencies for other countries. However, some particular assumptions, such as the automatic authentication
of evidence that has been gathered using an automated process, do not exist in many other countries. Based upon potential
attacks on forensic software, we suggest different preliminary attacker models, which are addressed within our common
evaluation scheme.
We show an exemplary evaluation of the forensic duplication application dcfldd and the forensic toolkit EnCase Forensic.
Here, dcfldd provides different algorithms for protecting the integrity of the gathered data as well as some basic logging
functions. EnCase additionally provides a final report generator; however, the version we evaluated uses the outdated MD5
hash algorithm to ensure the integrity of the gathered data. We additionally introduce libopenforensic, a preliminary framework
for the development of forensic software, whose requirements are derived from COSEFOS. Future work should extend our
preliminary attacker models and address the additional requirements within the evaluation scheme. Furthermore, other
forensic applications should be evaluated. Extending the formalization of COSEFOS with an evaluation scale might enable
the benchmarking of forensic applications.
ACKNOWLEDGMENTS
The work in this paper has been funded in part by the German Federal Ministry of Education and Research (BMBF) through
the Research Programme under Contract No. FKZ: 13N10818.
REFERENCES
[1] S. Kiltz, M. Leich, J. Dittmann, C. Vielhauer, and M. Ulrich, “Revised Benchmarking of contact-less fingerprint scanners for forensic fingerprint detection:
Challenges and Results for Chromatic White Light Scanners (CWL),” in Proceedings of SPIE Vol. 7881, Multimedia on Mobile Devices 2011; and
Multimedia Content Access: Algorithms and Systems V, 2011.
[2] G. Klotz-Engmann, “Funktionale Sicherheit- Integraler Bestandteil der Betriebssicherheit, Schutzeinrichtungen nach IEC 61508/61511, Funktionale
Sicherheit und SIL,” 2007. [Online]. Available: http://www.sdv-ev.de/fileadmin/pdf/Klotz-Engmann.pdf
[3] National Institute of Standards and Technology, FS-TST: Forensic Software Testing Support Tools - Requirements, Design Notes and User Manual, 2005.
[4] S. Kiltz, T. Hoppe, J. Dittmann, and C. Vielhauer, “Video surveillance: A new forensic model for the forensically sound retrieval of picture content off a
memory dump,” in Proceedings of Informatik 2009 - Digitale Multimedia-Forensik, S. Fischer, E. Maehle, and R. Reischuk, Eds., 2009, pp. 1619–1633.
[5] L. Dixon and B. Gill, Changes in the Standards for Admitting Expert Evidence in Federal Civil Cases Since the Daubert Decision. RAND Institute
for Civil Justice, 2001, ISBN: 0-8330-3088-4.
[6] Federal Evidence Review, “Federal Rules of Evidence 2011,” 2011. [Online]. Available: http://federalevidence.com/downloads/rules.of.evidence.pdf
[7] Y. Guo, J. Slay, and J. Beckett, “Validation and verification of computer forensic software tools - Searching Function,” Digital Forensic Research
Workshop, 2009.
[8] Scientific Working Group on Digital Evidence, “SWGDE Recommended Guidelines for Validation Testing Version 1.1,” 2009. [Online]. Available:
http://www.swgde.org/documents/swgde2009/SWGDE Validation Guidelines 01-09.pdf
[9] “Common Criteria for Information Technology Security Evaluation,” 2009, version 3.1, Revision 3, Final. [Online]. Available:
http://www.commoncriteriaportal.org/files/ccfiles/CCPART1V3.1R3.pdf
[10] Guidance Software, “EnCase Legal Journal,” September 2009. [Online]. Available:
http://www.guidancesoftware.com/WorkArea/DownloadAsset.aspx?id=2525
[11] “EnCase Forensic - Digital Data Collection & Analysis Application, Guidance Software,” 2011. [Online]. Available:
http://www.guidancesoftware.com/computer-forensics-ediscovery-software-digital-evidence.htm
[12] T. Newsham, C. Palmer, A. Stamos, and J. Burns, “Breaking Forensics Software: Weaknesses in Critical Evidence Collection,” 2007.
[13] S. Garfinkel, “Anti-Forensics: Techniques, Detection and Countermeasures,” 2007.
[14] R. Anderson and M. Kuhn, “Low cost attacks on tamper resistant devices,” in Security Protocols, ser. Lecture Notes in Computer Science,
B. Christianson, B. Crispo, M. Lomas, and M. Roe, Eds. Springer Berlin / Heidelberg, 1998, vol. 1361, pp. 125–136, 10.1007/BFb0028165. [Online].
Available: http://dx.doi.org/10.1007/BFb0028165
[15] J. D. Howard and T. A. Longstaff, “A common language for computer security incidents (sand98-8667),” Sandia National Laboratories, Tech. Rep. ISBN
0-201-63346-9, 1998.
[16] M. Bishop, Computer Security: Art and Science. Addison-Wesley, 2003, ISBN: 0201440997.
[17] E. Barker and A. Roginsky, “Transitions: Recommendation for Transitioning the Use of Cryptographic Algorithms and Key Lengths,” National Institute
of Standards and Technology, NIST Special Publication 800-131A, 2011. [Online]. Available: http://csrc.nist.gov/publications/nistpubs/800-131A/sp800-131A.pdf
[18] S. L. Garfinkel, “Providing Cryptographic Security and Evidentiary Chain-of-Custody with the Advanced Forensic Format, Library, and Tools,” The
International Journal of Digital Crime and Forensics, vol. 1, no. 1, 2009.
[19] B. Carrier, “Open Source Digital Forensics Tools, The Legal Argument,” @stake Research Report, 2002. [Online]. Available: http://www.digital-evidence.org/papers/opensrc legal.pdf
[20] “NIJ Special Report: Test Results for Disk Imaging Tools: dd GNU fileutils 4.0.36, Provided with Red Hat Linux 7.1,” U.S. Department of Justice,
Office of Justice Programs, National Institute of Justice, Tech. Rep. NCJ 196352, 2002.
[21] J. Lyle, “Quirks Uncovered While Testing Forensic Tool,” 2008. [Online]. Available: http://www.cftt.nist.gov/presentations/ENFSC-Lyle-Oct-08.ppt
[22] “NIJ Special Report: Test Results for Digital Data Acquisition Tool: EnCase 6.5,” U.S. Department of Justice, Office of Justice Programs, National
Institute of Justice, Tech. Rep. NCJ 228226, 2009. [Online]. Available: http://www.ncjrs.gov/pdffiles1/nij/228226.pdf
[23] S. Bunting and W. Wei, The Official EnCE: EnCase Certified Examiner Study Guide. Wiley Publishing, Inc., 2006.
[24] X. Wang, D. Feng, X. Lai, and H. Yu, “Collisions for hash functions MD4, MD5, HAVAL-128 and RIPEMD,” 2004. [Online]. Available:
http://eprint.iacr.org/2004/199.pdf