Argumentative Agents for Justifying Decisions in Audit

Ioan Alfred Letia, Adrian Groza, Radu Balaj
Computer Science Department
Technical University of Cluj Napoca, Romania
[email protected], [email protected], [email protected]
Abstract—This research focuses on justifications in an audit context. An argumentation framework based on the formal model of justification logic is applied to audit dialogues in the fish industry. Different forms of justification used in audit, such as net persuasive evidence, breadth of issues, and framing evidence, are formalised in the proposed framework.
I. Introduction
Statements on Auditing Standards (SAS) provide guidance to external auditors on generally accepted auditing standards (GAAS). For instance, SAS 31 requires the auditors to obtain evidence supporting or attacking the management assertions, whilst SAS 56 forces the auditor to obtain evidence regarding the management explanations of significant fluctuations. If management is unable to provide an acceptable explanation, the auditor should perform additional procedures to investigate those fluctuations further [1].
Auditors use different forms of justification, depending on context, such as: i) net persuasive evidence, ii) breadth of issues, or iii) evidence framing. Auditors with both high managerial and technical knowledge tend to enumerate more supporting versus attacking justifications and to consider a wider breadth of issues if they know that the reviewers of their reports have dissimilar task preferences. If the preferences of the reviewers are anticipated to be similar, an evidence framing approach is employed, stressing the consistent evidence over the inconsistent one [2]. In many situations, the justifications stressed by the auditors are intended to persuade the reviewers of the audit report about the appropriateness of the reached conclusions [3]. Auditors receiving corroborating evidence documented the fewest justifications, while those who received inconsistent evidence, or even no evidence at all, needed more justifications [4].
The agent being audited has the obligation to provide supporting evidence regarding its business activity. Auditors determine the appropriate amount of this evidence. There are several categories of evidence from which the auditor can choose: physical examination, confirmations, documentation, analytical evidence, written representations, mathematical evidence, oral evidence, or electronic evidence. An auditor needs a preponderance of persuasive evidence for each assertion to have a reasonable basis for a conclusion. When reasonable support exists, an unqualified, qualified, or adverse opinion will be issued. Otherwise, a disclaimer of opinion will be issued.
Argumentation is an adequate way of resolving contradictions in Multi-Agent Systems (MAS) and helps agents to better understand the environment they live in and the information they rely on. Besides knowing how to argue, argumentative agents need evidence, the building blocks of any argument. The justification logic that we employ here is aimed specifically at evidence, or justifications, by enabling agents to constantly ask for, challenge, and evaluate proofs. During the process of auditing, two or more parties exchange justifications or explanations detailing whether different norms are complied with or not.
The remainder of this paper is structured as follows: Section II introduces distributed justification logic and the argumentation framework built on top of it. Section III details the system architecture and norm representation. Sections IV and V illustrate the arguments conveyed during the quality audit and within the working papers, respectively. Finally, related work is presented and conclusions are drawn.
II. Arguing in Distributed Justification Logic
A. Distributed Justification Logic
Justification logic provides an evidence-based foundation for the logic of knowledge [5], where "F is known" is replaced by "F has an adequate justification". Although still in its infancy, justification logic seems to be the adequate technical instrument to respond to the observations raised by Walton [6] regarding the use of knowledge in argumentation. The notation t : F means "F has justification t", while the notation t :i F is used in the multi-agent version of justification logic to mean that "agent i has justification t for believing F is true". The distributed justification logic (DJL) [7] is exploited in our approach.
Definition 1: The language L0 contains proof terms t ∈
T and formulas ϕ ∈ F
t ::= c | x | t • t | t + t | !i t | ?i t | t ≻ t
ϕ ::= γ | ϕ ⋆ ϕ | ¬ϕ | t :i ϕ
A0 : classical propositional axioms
A′1 : t :ε F → F  (e-reflexivity)
A′2 : s :i (F → G) → (t :j F → (s • t) :k G)  (distributed application)
A′4 : t :i F → !j t :i (t :i F)  (positive proof checker)
A′5 : ¬t :i F → ?j t :i (¬t :i F)  (negative proof checker)
A′6 : s :i F ∧ t :j F → (s + t) :i F, s + t ≻ t  (accrual)
A′7 : F → t :i F  (internalisation)
C1 : r :j (t :i F ∧ s :i ¬F)  (justified inconsistency)
C2 : s :i F → (s + t) :i F
C3 : t :i F → t :j F  (transferring evidence)

Fig. 1. The axioms of distributed justification logic.
The axioms in figure 1 have the following informal semantics. According to e-reflexivity, only if all the agents in the system have a justification for F is F taken to be a fact. Based on axiom A′2, an agent k can create its own compound justification s • t from the evidence t brought by agent j and the implication F → G validated by agent i based on the piece of evidence s. A′4 allows agent i's justification t for the sentence F to be verified by another agent j. When j = i, positive introspection occurs. If agent i does not accept t as sufficient for proving F, agent j can request a justification for this preference (axiom A′5). According to A′6, if two agents i and j have different justifications for proving F, then the union of the two pieces of evidence (s + t) represents stronger evidence for F, where ≻ represents a preference order over justifications. The internalisation axiom says that if formula F is true, then at least one agent i has accepted it as a fact, based on evidence t. From the argumentation viewpoint, every argument should have a justification in order to be supported. Consequently, self-defending arguments are not allowed in distributed justification logic.
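As an illustration of the accrual axiom A′6, evidence terms can be modelled as simple records; the following Python sketch (our own illustrative encoding, not part of the paper's system) builds the joint term s + t and checks the preference s + t ≻ t:

```python
# Illustrative sketch (our own encoding, not part of the paper's system) of
# DJL evidence terms, showing the accrual axiom A′6: from s :i F and t :j F
# the joint term (s + t) :i F is formed, and s + t ≻ t holds.
from dataclasses import dataclass

@dataclass(frozen=True)
class Evidence:
    term: str     # textual proof term, e.g. "s", "t", "(s+t)"
    agent: str    # the agent holding the justification
    formula: str  # the justified formula F

def accrue(e1, e2):
    """Accrual (A′6): two justifications of the same formula combine into
    stronger joint evidence, here attributed to the first agent."""
    assert e1.formula == e2.formula, "accrual needs a common formula"
    return Evidence(f"({e1.term}+{e2.term})", e1.agent, e1.formula)

def preferred(joint, part):
    """s + t ≻ t: the joint term is preferred over either of its components."""
    return part.term in joint.term and joint.term != part.term

s = Evidence("s", "i", "F")
t = Evidence("t", "j", "F")
st = accrue(s, t)
print(st.term)           # (s+t)
print(preferred(st, t))  # True
```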
B. Argumentation Framework
Agents can communicate their justifications, allowing for complex scenarios. For instance, the C1 pattern, r :j (t :i F ∧ s :i ¬F), says that agent j has evidence r that justifies agent i's inconsistency: agent i has accepted two pieces of evidence supporting opposite conclusions. In the C2 communicative pattern, the evidence t is not strong enough to convince agent i that his evidence s is not sufficient to prove F. For the C3 case, agent j accepts agent i's evidence, thus accepting formula F.
An argument A is consistent with respect to evidence t if A does not contradict any evidence in t. We say that a piece of evidence t does not defeat evidence s of an agent i if s :i F → (s + t) :i F.
Definition 2 (Undercutting defeater): The evidence t is
an undercutting defeater for F justified by s if the joint
evidence s + t does not support F any more. Formally:
s :i F → ¬(s + t) :i F
Corollary 1 (Justified undercutting defeater): Note that the undercutting defeater is an implication, which is a formula in justification logic. So, based on the internalisation axiom A′7, it should have a justification: q :i [s :i F → ¬(s + t) :i F]. Informally, q is agent i's justification why the piece of evidence t attacks evidence s in the context of formula F.

Fig. 2. System architecture.
Definition 3 (Rebutting defeater): The evidence t is a
rebutting defeater for F if it is accepted as a justification
for ¬F .
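A toy check of the two defeater notions can be sketched as follows (the supports table and names are our illustrative assumptions, not the authors' implementation):

```python
# A toy check of Definitions 2 and 3 (the supports table and names are our
# illustrative assumptions, not the authors' implementation): the table records
# which evidence terms are accepted as justifying which formulas.
supports = {
    ("s", "F"): True,        # s :i F
    ("s+t", "F"): False,     # the joint evidence no longer supports F
    ("u", "not F"): True,    # u justifies the negation of F
}

def undercuts(t, s, formula):
    """t undercuts s for F when s justifies F but the joint s+t does not."""
    return supports.get((s, formula), False) and \
        not supports.get((f"{s}+{t}", formula), False)

def rebuts(t, formula):
    """t rebuts F when t is accepted as a justification for not-F."""
    return supports.get((t, f"not {formula}"), False)

print(undercuts("t", "s", "F"))  # True: s+t drops F
print(rebuts("u", "F"))          # True: u directly justifies not F
```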
III. Verifying Norm Compliance
The definition of audit that is of interest to us regards
the evaluation of a process or product, commonly known
as quality audit. According to the ISO 10011-1:1990 standard, a quality auditor is responsible of determining if a
process complies with regulatory requirements and it is in
conformity to recognised quality standards.
The scenario is taken from the food industry, where the
level of scrutiny has increased impressively. Not only does
it matter if the final product is suitable for human consumption, but also the processes involved in production.
The running scenario considers an external quality auditor
e, who is responsible for the legal monitoring of norms,
and an internal auditor i who provides specific evidence
supporting the conformance of the business entity with
the active norms. After the quality audit, the external
auditor prepares his working papers, which are usually
double checked by a reviewer.
A. System Architecture
Figure 2 illustrates the top view architecture of the
system. The ontologies of both agents are created in Protege and using the OntologyBeanGenerator plug-in, these
ontologies were exported as Java classes for use within
the JADE environment. Drools tool is used to develop
rules according to the justification logic model (figure 4).
Agents are developed in JADE and use ACL messages to
communicate their justifications.
Drools is a Business Logic integration platform that allows for creating and executing complex business processes [8]. It includes a rule engine, Drools Expert, and a language for describing rules. Drools comes with its own graphical representation of a process, similar to BPMN, but the user is left with the option of creating an equivalent BPMN visualisation.

Tuna size (Kg) | On-board preparation | Time constraints (hours) | Temperature constraints (°C)
< 9            | none                 | ≤ 9                      | ≤ 10
< 9            | none                 | ≤ 12                     | ≤ 4.4
≥ 9            | evisceration         | ≤ 6                      | ≤ 4.4
≥ 9            | none                 | ≤ 6                      | ≤ 10

TABLE I. Temperature regulations for on-board storing of fresh tuna fish.

Fig. 4. Drools rules for 'StoringNorms' component.
In the fish industry, business processes present specific tasks that show how different business entities work together in order to provide the customer with the needed services. Norms represent food safety regulations imposed in order to make sure that the final product is suitable for human consumption. Norms are represented as rules in Drools, aiming to be automatically verified while a business process is executing. One advantage of using rules is the ability to change them dynamically at runtime and to separate data from business logic. Figure 3 models a business process that deals with the fishing of tuna, the storage on vessel, and the unloading of fishery products on land.
Norms are identified by RuleFlowGroups like Location, Storing, or Unloading norms. The system checks whether or not a norm was complied with while the process is still executing. That is, the norms for storing fishery products are verified right after the storing step is executed, and before the norms on unloading are verified. The active norms regarding the storage constraints are exemplified in figure 4. A rule in Drools can be written either in a Java dialect (the default approach) or in "mvel" (see rule "Temperature9" in figure 4).

After the execution of the process, a log file is generated, comprising the data gathered during the execution and the norms that failed. This allows the auditor to evaluate the process at any time and, in case a norm was breached, to start the communication with the internal auditor. Table I captures specific values regarding the storing temperature for tuna fish on vessel, values that are used in the next sections to provide an adequate debate scenario.
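The Table I constraints can be rendered as executable checks; the sketch below is an illustrative Python stand-in for the Drools rules of figure 4 (the 10 kg fish size used in the example is an assumed input):

```python
# A sketch of the Table I storing norms as executable checks: an illustrative
# Python stand-in for the Drools rules of figure 4 (the 10 kg fish size below
# is an assumed input; the thresholds are taken from Table I).
def max_temperature(size_kg, eviscerated, hours):
    """Maximum allowed on-board storing temperature (°C) per Table I,
    or None if no row covers the recorded time."""
    if size_kg < 9 and not eviscerated:
        if hours <= 9:
            return 10.0
        if hours <= 12:
            return 4.4
    if size_kg >= 9 and hours <= 6:
        return 4.4 if eviscerated else 10.0
    return None

def norm_breached(size_kg, eviscerated, hours, recorded_temp):
    limit = max_temperature(size_kg, eviscerated, hours)
    return limit is not None and recorded_temp > limit

# The debated case: on-board eviscerated tuna at 5 °C, six hours after catch.
print(norm_breached(10, True, 6, 5.0))   # True:  the limit is 4.4 °C
print(norm_breached(10, False, 6, 5.0))  # False: the limit is 10 °C
```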
B. Agents background knowledge
Figure 5 presents a partial tree view of EA’s ontology.
At the top of ontologies is the class Concept which is
the class that should be sub-classed in order for OntologyBeanGenerator to properly create corresponding Java
classes. On the second level of the ontology tree, there
are the classes that represent the norms. The exception is
Tuna, which is part of all norms, but since you can not subclass multiple classes, it is conceptually linked to norms
through class properties (e.g. Tuna hasStorageTemperature StorageTemperature). From the third level on, more
detailed norm concepts are presented.
The classes and properties are instantiated, which completes the knowledge base of an agent with specific data
that can be used in the argumentation process (instances
are extracted from the ontology through the Protege
API). Agent EA captures in his ontology the right values
regarding the norms he is monitoring, while agent IA has
slightly different values to allow the possibility of a debate.
C. Communication protocol
The agents modeled in JADE implement the AchieveRE (Achieve Rational Effect) communication protocol, a FIPA-compliant protocol. The EA agent implements the AchieveREInitiator role, because he is the one who initiates the dialogue once a norm in the business process is breached, by informing the other agent of the fact that a norm was violated. Consequently, the IA agent implements the AchieveREResponder role.

Firstly, the EA agent sends an inform ACL message that specifies the norm being violated; in the presented scenario we can call it TemperatureNorm. Upon receiving the message, the IA agent searches in his ontology for a concept named TemperatureNorm. In case the concept is not in the ontology, IA sends back a not-understood response. This highlights the role of ontologies in sharing common knowledge among agents: if the IA agent has no concepts related to a norm, even if EA does, they cannot communicate effectively. There is another undesired response, called refuse, in which agent IA refuses to take part in the communicative act. Refuse takes into account the autonomous nature of agents, which are free to refuse to perform an action if they want. Both responses, not-understood and refuse, are undesired for the IA agent because in both cases EA overrules him, EA being the greater legal authority to whom IA must respond. A
third possible response is failure, in case the agent fails to perform an action. Since an action in the system is the act of argumentation (evaluating different rules and responding appropriately), failure is unlikely to occur unless a Java exception is thrown. The fourth possibility is the agree response: the agent agrees to take part in the debate and performs his action.

Fig. 3. Interleaving norms and processes in Drools.

Fig. 5. Ontology tree view.
Once the exchange of arguments starts, both agents can ask for justifications or give justifications for their conclusions. Asking for a justification from the other agent corresponds to axiom A′4 (positive proof checker) in figure 1. Axiom A′5, presented in the same figure, corresponds to justifying why the response given is not sufficient to prove the other agent's formula.
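The choice among the FIPA responses described above can be sketched as follows (illustrative plain Python, not the JADE API; the agree/result exchange is collapsed into a single call for brevity):

```python
# Illustrative sketch (plain Python, not the JADE API) of how the responder
# chooses among the FIPA responses described above; the agree/result exchange
# is collapsed into a single call for brevity.
def respond(norm, ontology, willing=True, action=lambda: None):
    if norm not in ontology:
        return "not-understood"  # the norm concept is missing from the ontology
    if not willing:
        return "refuse"          # agents are autonomous and may decline
    try:
        action()                 # the argumentation action (rule evaluation etc.)
        return "agree"
    except Exception:
        return "failure"         # only a thrown exception yields failure

ia_ontology = {"TemperatureNorm", "StorageTemperature"}
print(respond("TemperatureNorm", ia_ontology))         # agree
print(respond("UnknownNorm", ia_ontology))             # not-understood
print(respond("TemperatureNorm", ia_ontology, False))  # refuse
```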
Figure 6 shows the action an agent performs when a message is received. Note that * signals a line that only the agent EA can perform.
IV. Technical Audit
Each agent has a knowledge base represented by an
ontology. The axioms in the ontology refer to elements
of business processes that are essential to the normative
side of the business process like the temperature the
fish have to be stored at different times and locations
like fishing vessels transportation trucks or in-plant (see
table II), the adequate equipment that has to be used while
handling fishery products or the chemicals involved in the
preparation of fish.
Agents have different, yet similar ontologies in order to make disagreement possible; moreover, different knowledge bases for different agents are the norm in real-life scenarios. Still, the ontologies have to entail the same concepts so that the agents can communicate effectively with one another. What can differ are the instances of classes, the attributes, and the relations. Table II presents a simple example of each.
Normally, the external auditor agent has the right knowledge about the norms he is monitoring. That does not make him prevail in the entire debate by default; the internal auditor can still find loopholes in order to overrule his opponent. The external auditor also has access to information regarding the execution of the process he is monitoring. This is not part of his ontology, but is provided by the system at the time a business process is executed.
The dialogue proceeds as follows: first, the external auditor e informs the internal auditor i that a specific norm was violated (figure 7). Agent i then asks for a justification; the e
if message == ”Give justification for F” then
Extract ontology instances
*Extract business process data
Reply with justification
end if
if message == ”Give more information for norm N” then
Reply with information from ontology or
*business process data
end if
if message == ”Here is my justification for F” then
Search own knowledge base for the terms in justifications
end if
if received information == own information then
Search knowledge base for other concepts related to norm N
if no additional information then
Reply ”I give up!”
else
Apply axiom A′5 on the new information
Send reply ”Here is my justification F”
end if
else
Apply axiom A′5
Send reply ”Here is my justification 6 F ”
end if
if message == ”I give up!” then
stop
end if
Fig. 6. Agent action upon receiving a message.

agent justifies this by referring to the evidence available to him from his ontology and from the data collected during process execution. Agent i can either accept the information, offer a justification of why he thinks the information does not make e's case, or ask for more information. The communication ends when the external auditor considers he has gathered enough evidence and explanations to prepare his reports and the internal auditor has run out of justifications that can save his case.

Ontology component | Component details | External auditor | Internal auditor
Class instance | Instance for class PermittedAdditive | VitaminC | CalciumOxide
Class attribute | Temperature value at 6 hours after catch for on-board eviscerated tuna | 4.4°C | 5.3°C
Class property | TunaFish ⊑ ∀hasBiologicalHazard. | Bacteria | Pathogens

TABLE II. Knowledge and expertise differs among auditors.

(1) e: The norm on temperature was violated.
(2) i: Why do you say the norm was violated?
(3) e: The fish were stored at 5 degrees, instead of 4.4 as regulated.
(4) i: But 5.3 is the right temperature, therefore the process is compliant. Do you refer to the temperature storage on vessel, not in-plant?
(5) e: Yes, the norm in question is referring to the temperature on vessel.
(6) i: At what time was the temperature recorded?
(7) e: 5 hours after storing the tuna.
(8) i: I know that at 9 hours, the temperature has to be under 10°C only. Since tuna was stored at 5°C at 6 hours, the norm wasn't breached.
(9) e: I refute your justification: the tuna was eviscerated on board, which requires a storage temperature of 4.4°C in 6 hours.

Fig. 7. Requesting explanations during the technical audit.

The internal auditor finally runs out of justifications or information requests that can save him; therefore the external auditor, having justified his formula (NormViolated), wins the debate. A scenario where i could overrule would be one that stops at step (8): e takes into consideration i's justification, verifies its validity against his information about how the process was executed and against his ontology, and accepts agent i's explanation. In this case, the information about the storing temperature checks out with his own ontology, and the fish were not eviscerated on board according to the information about the executed process.

(1) T3_breached → t :e T3_breached
(2) !i t :e (t :e T3_breached)
(3) (regulated.LessThan_4.4 ∧ recorded_temp.5) :e T3_breached
(4) recorded_temp.5 :i ¬T3_breached
(7) recorded_Time.5hours :e T3_breached
(8) rule_temp9 • (recorded_Time.6hours ∧ recorded_temp.5) :i ¬T3_breached
(9) eviscerated : ¬[rule_temp9 • (recorded_Time.6hours ∧ recorded_temp.5)] :e ¬T3_breached

Fig. 8. Conveying arguments in distributed justification logic.
The dialogue is modeled according to the formal justification logic in figure 8. By conveying that the norm T3 was breached, according to axiom A′7 the auditor e should have a justification for his statement (1). This explanation is requested by the internal auditor i (2). In (3), two pieces of evidence say that the recorded temperature points towards the value 5°C, while the acceptable temperature should be below 4.4°C. According to i's ontology, the evidence recorded_temp.5 represents a rebutting defeater for the conclusion at hand (4). Enthymemes are present in this dialogue: the justification regulated_temp.LessThan_4.4 is not explicitly expressed by the agent i in the defeater above. Similarly, the dialogue so far has referred to the storage temperature. At the technical level, the agents are aware of two types of temperatures, VesselStorageTemp ⊑ StorageTemp and PlantStorageTemp ⊑ StorageTemp, respectively. In (8) the norm temp9 in Drools is mentioned as a justification by applying the given premises. In (9) the external auditor concludes that, given the fact that the tuna was eviscerated on board, the explanations provided by the other party are not strong enough to support the non-breach conclusion.
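Step (8) applies the distributed application operator •; a minimal sketch of that composition (the term names mirror figure 8, but the encoding itself is our illustration) is:

```python
# A minimal sketch of the application operator "•" used at step (8) of figure 8:
# from s : (F → G) and t : F one forms the compound justification (s • t) : G.
# The term names mirror figure 8, but the encoding itself is our illustration.
def apply_just(s, implication, t, antecedent):
    """Distributed application (A′2): combine a justified implication with a
    justified antecedent into a compound justification for the conclusion."""
    premise, conclusion = implication
    assert antecedent == premise, "the antecedent must match the implication"
    return (f"({s} • {t})", conclusion)

# rule_temp9 encodes: 6 hours after catch and 5 °C imply the norm is not breached.
rule = ("recorded_Time.6hours ∧ recorded_temp.5", "¬T3_breached")
term, conclusion = apply_just("rule_temp9", rule,
                              "measurements",
                              "recorded_Time.6hours ∧ recorded_temp.5")
print(term)        # (rule_temp9 • measurements)
print(conclusion)  # ¬T3_breached
```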
Preparers' Conclusion | Pro arguments | Con arguments
Require Additional Allowance for Doubtful Debts | Comparing the ageing report of 1999 and 2000, 2000 sees an increase in the debts between 0-90 days. | Hired two experienced "credit agents" to monitor collectability.
Footnote Disclosure is Sufficient | Client has implemented additional controls to deal with the change in credit policy. | The credential of new debtors has decreased since they are smaller companies with lower credit ratings. Thus, risk of unrecoverability increases correspondingly.

TABLE III. Net Persuasive Evidence.

1  (debts_2010 > debts_2009) : additional_allowance
2  (two_experienced_credit_agents) : ¬additional_allowance
3  (additional_intern_control) : footnote_disclosure
4  [(smaller_companies ⇒ lower_credit_ratings) ∧ (new_debtors_are_small_companies)] :
   credential_of_new_debtors_decreased :
   risk_of_unrecoverability_increases : ¬footnote_disclosure

Fig. 9. Net Persuasive Evidence in Justification Logic.
V. Justifying Working Papers
After performing the technical audit, the auditor should prepare the working papers. The information within represents the main record of the work the auditor has done, the conclusions he has reached, and their justifications (SAS 41). These justifications should be sufficient and appropriate to support a documented conclusion (SAS 96).
The doubtful debts scenario is adapted from [2], in which a business agent requests an increase of its credit line from its partner bank. Consider that the fish processing business entity has been the client of the audit firm for the past five years. Aiming at meeting its sales target, during the second quarter of the past year the company adopted a more liberal credit policy. Two experienced "credit agents" have been hired in order to manage the newly introduced risk. The financial representative of the company considers that no additional allowance for doubtful debts is needed and that a detailed footnote on the above matter suffices. When the company requests additional financing to meet its operational needs, the bank performs an audit of the financial statements to decide between allowing a footnote disclosure and asking for additional allowance. An audit manager reviews the memos produced by the auditors.
The three specific forms of justification (net persuasive evidence, breadth of issues, and framing evidence) are formalised in DJL. When net persuasive evidence is chosen, arguments both supporting and attacking the possible decisions are enumerated (table III). The corresponding theory in distributed justification logic is illustrated in figure 9, where the agent index is omitted, the auditor being the only agent. Formula 4 here says that because smaller companies imply lower credit ratings and the new debtors are indeed small companies, by applying the application operator one obtains an argument for the decreasing credential of the new debtors, which in turn justifies the increased risk. The increased risk term is an acceptable rebutting defeater against the decision of footnote disclosure.
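A net persuasive evidence decision can be sketched as a comparison of the weights of pro and con justifications (the numeric weights are our own assumption; the paper only enumerates the arguments):

```python
# Illustrative sketch of "net persuasive evidence": the weights are our own
# assumption (the paper only enumerates the arguments of table III / figure 9);
# the decision compares total pro weight against total con weight.
def net_persuasion(pro, con):
    """Net support = total weight of supporting minus attacking justifications."""
    return sum(pro.values()) - sum(con.values())

support = net_persuasion(
    pro={"debts_increase": 2.0},                 # figure 9, formula 1
    con={"two_experienced_credit_agents": 1.0})  # figure 9, formula 2
print(support)      # 1.0
print(support > 0)  # net evidence favours additional allowance
```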
In the breadth of issues justification strategy, arguments from different areas are aggregated to support a specific consequent (table IV). The accrual operator is exploited to formalise this, as figure 10 shows.

Internal control | Company hired two experienced "credit agents" whose task is to monitor accounts receivable collections
Financial statement analysis | Company's ageing accounts receivable for 91-120 days and greater than 120 days has decreased (2%)
Materiality | Sales made to new customers were not material to the total sales for the year
GAAP | Based on the principle of prudence, suggest additional allowance
Client-related factors | Historically a good client for the last 5 years, hence, may have comfort with management representation
Industry-related factors | Volatile nature of the industry.

TABLE IV. Breadth of Issues.
In the case of the framing evidence approach, the justifications are composed differently in order to more strongly support one decision or the other, as exemplified in table V. The corresponding formalisation is depicted in figure 11.
VI. Discussion and Related Work
Recent work in Multi-Agent Systems concerns different
communication protocols and strategies that enable agents
to negotiate and argue. Research, such as that presented
in [9] and [10], uses argumentation as a form of negotiation
for reaching the desired outcome, while [11] introduces
value-based argumentation for justifying compliance to
specific regulated norms. Norms are seen as a form of
constraining agent behaviour in [12], where argumentation
is used to reason about the norms one should or one may
not comply with.
One common approach is inspection using HACCP (Hazard Analysis and Critical Control Points) plans [13], which address different potential hazards during production, including physical, chemical, or biological
Preparers' Conclusion | Framing evidence
Require Additional Allowance for Doubtful Debts | The company is very confident about its new strategy and feels that no additional accrual is necessary and merely a footnote is enough. However, based on the principle of prudence, suggest additional allowance.
Footnote Disclosure is Sufficient | Although management adopted a more liberal credit policy by selling to smaller companies with lower credit ratings, management hired experienced credit agents to monitor accounts receivable collections, and as such, no additional provision is required.

TABLE V. Framing Evidence to be Consistent with the Conclusion.
[two_credit_agents + aging_account_decreasing + (¬material_sales) + (prudence ⇒ additional_allowance) + (credible_client ⇒ comfort_with_management • credible_client)] : additional_allowance

Fig. 10. Breadth of Issues in Justification Logic.
(principle_of_prudence : additional_allowance) : ¬(company_confidence ∧ ¬additional_accrual ∧ ¬footnote_enough) : footnote_disclosure

[(two_credit_agents ⇒ ¬additional) • two_credit_agents] : ¬[(smaller_company ⇒ lower_credit_rating) • smaller_company • (lower_credit_rating ⇒ liberal_credit_policy)] : ¬footnote_disclosure

Fig. 11. Framing Evidence in Justification Logic.
hazards. The approach presented in this paper is similar to a HACCP audit: business processes comprise, at their critical points, sets of rules that verify whether or not different norms at different stages of the process are being complied with. Each decision in a HACCP plan should be justified. Similarly, the auditors should include in their workpapers adequate justifications supporting the conclusion.
Since in justification logic an agent is able to justify evidence on its own, corroborating evidence obviously translates to "no need for justification from the other side". That is not the case when the external auditor receives inconsistent evidence: he needs a justification of why he should believe the evidence provided by the internal auditor.
Another issue that occurs during an argumentative dispute is that of standards of proof. Considering the nature of our application, which entails a legal dispute, both agents have the same standard of proof. That is because they both have to subscribe to an external standard (both being auditors), provided as a benchmark by legal entities [14]. Distributed justification logic allows modelling different standards of proof for each agent: t :i F ∧ ¬(t :j F). Here, the same evidence t is accepted by agent i to justify F, but it is not strong enough, from agent j's viewpoint, to accept F.
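Different standards of proof can be sketched as per-agent acceptance thresholds (the numeric strengths below are illustrative assumptions):

```python
# Illustrative sketch of per-agent standards of proof: the same evidence
# strength (0.6, an assumed value) meets agent i's standard but not agent j's,
# modelling t :i F ∧ ¬(t :j F).
def accepts(strength, standard):
    """An agent accepts evidence for F only if it meets its own standard."""
    return strength >= standard

t_strength = 0.6
print(accepts(t_strength, standard=0.5))  # True:  t :i F holds for agent i
print(accepts(t_strength, standard=0.8))  # False: ¬(t :j F) for agent j
```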
Business processes are a collection of tasks that produce a specific service or product. They ensure that each business entity knows what it has to do, and what inputs and outputs it works with. Business processes are modeled as flowcharts, which makes them suitable for computer-based representation. The most common approaches include the BPMN (Business Process Modeling Notation) standard [15], process ontologies [16], Petri nets [17], and even UML activity diagrams [18]. In this study, the Drools tool was used for checking norm compliance.
VII. Conclusion
Given the short history of justification logic and its
theoretically roots, there is a lack of running scenarios
modeled using justification logic. Our study presents such
a practical scenario in the auditing domain, advocating
that the expressivity of justification logic [19], [20] is
adequate for modelling types of justification occurring in a
common audit processes, such as net persuasive evidence,
breath of issues, or framing evidence.
Acknowledgment
We are grateful to the anonymous reviewers for their
useful comments. This work was supported by the grant ID
160/672 from the National Research Council of Romanian
Ministry of Education and Research: ARGNET - Support
System for Structured Argumentation in the Web Context,
and POSDRU/89/1.5/S/62557/EXCEL.
References
[1] American Institute of Certified Public Accountants, Auditing Standards Executive Committee, "Analytical review procedures," 1978.
[2] P. G. Shankar and H.-T. Tan, “Determinants of audit preparers’
workpaper justifications,” Nanyang Technological University,
Tech. Rep., 2005.
[3] J. S. Rich, I. Solomon, and K. T. Trotman, “The audit review
process: A characterization from the persuasion perspective,”
Accounting, Organizations and Society, vol. 22, no. 5, pp. 481–
505, July 1997.
[4] L. Koonce, U. Anderson, and G. Marchant, "Justification of decisions in auditing," Journal of Accounting Research, vol. 33, pp. 369–384, 1995.
[5] S. N. Artëmov, “Justification logic,” in JELIA, 2008, pp. 1–4.
[6] D. Walton and D. M. Godden, “Redefining knowledge in a way
suitable for argumentation theory,” in Dissensus and the Search
for Common Ground, H. Hansen, Ed., 2007, pp. 1–13.
[7] I. A. Letia and A. Groza, “Arguing with justifications between
collaborating agents,” in Argumentation in Multi-Agent Systems, Taipei, Taiwan, 2011.
[8] P. Browne, JBoss Drools Business Rules. Packt Publishing,
2009.
[9] T. Skylogiannis, G. Antoniou, N. Bassiliades, and
G. Governatori, “Dr-negotiate - a system for automated
agent negotiation with defeasible logic-based strategies,” in
Proceedings of the 2005 IEEE International Conference
on e-Technology, e-Commerce and e-Service (EEE’05) on
e-Technology, e-Commerce and e-Service, ser. EEE ’05.
Washington, DC, USA: IEEE Computer Society, 2005, pp. 44–
49. [Online]. Available: http://dx.doi.org/10.1109/EEE.2005.61
[10] S.-L. Huang and C.-Y. Lin, “The search for potentially
interesting products in an e-marketplace: An agent-to-agent
argumentation approach,” Expert Syst. Appl., vol. 37, pp.
4468–4478, June 2010. [Online]. Available: http://dx.doi.org/
10.1016/j.eswa.2009.12.064
[11] B. Burgemeestre, J. Hulstijn, and Y.-H. Tan, “Value-based
argumentation for justifying compliance,” in Proceedings of the
10th international conference on Deontic logic in computer
science, ser. DEON’10. Berlin, Heidelberg: Springer-Verlag,
2010, pp. 214–228. [Online]. Available: http://portal.acm.org/
citation.cfm?id=1876014.1876030
[12] F. Lopez y Lopez, M. Luck, and M. d’Inverno, “Normative agent
reasoning in dynamic societies,” in Proceedings of the Third
International Joint Conference on Autonomous Agents and
Multiagent Systems - Volume 2, ser. AAMAS ’04. Washington,
DC, USA: IEEE Computer Society, 2004, pp. 732–739. [Online].
Available: http://dx.doi.org/10.1109/AAMAS.2004.197
[13] W. H. Sperber, “Auditing and verification of food safety and
haccp,” Food Control, vol. 9, no. 2-3, pp. 157 – 162, 1998,
first International Food Safety and HACCP Conferrence. [Online]. Available: http://www.sciencedirect.com/science/article/
B6T6S-3TGN8TF-K/2/2ee1830d21bd4ede390400c97e56b7f2
[14] R. Schwartz, “Auditors’ liability, vague due care, and auditing
standards,” Review of Quantitative Finance and Accounting,
vol. 11, pp. 183–207, 1998, 10.1023/A:1008220317852. [Online].
Available: http://dx.doi.org/10.1023/A:1008220317852
[15] P. Wohed, W. M. P. van der Aalst, M. Dumas, A. H. M.
ter Hofstede, and N. Russell, “On the suitability of bpmn for
business process modelling,” in Business Process Management,
2006, pp. 161–176.
[16] M. Hepp and D. Roman, “An ontology framework for semantic
business process management,” in Wirtschaftsinformatik (1),
2007, pp. 423–440.
[17] N. Lohmann, E. Verbeek, and R. M. Dijkman, “Petri net transformations for business processes - a survey,” T. Petri Nets and
Other Models of Concurrency, vol. 2, pp. 46–63, 2009.
[18] K. B. Akhlaki, J. L. Garrido, M. Noguera, M. V. Hurtado, and
L. Chung, “Extending and formalizing uml 2.0 activity diagrams
for the specification of time-constrained business processes,” in
RCIS, 2010, pp. 93–100.
[19] M. Fitting, “Reasoning with justifications,” in Towards Mathematical Philosophy, Papers from the Studia Logica conference
Trends in Logic IV, ser. Trends in Logic, D. Makinson, J. Malinowski, and H. Wansing, Eds. Springer, 2009, vol. 28, ch. 6,
pp. 107–123, published online November 2008.
[20] M. Horridge, B. Parsia, and U. Sattler, “Justification oriented
proofs in OWL,” in International Semantic Web Conference,
2010.