Integrating Multiple Paradigms
within the Blackboard Framework
Sanja Vraneš, Member, IEEE, and Mladen Stanojević
Abstract- While early knowledge-based systems suffered the frequent criticism of having little relevance to the real world, an increasing number of current applications deal with complex, real-world problems. Due to the complexity of real-world situations, no one general software technique can produce adequate results in different problem domains, and artificial intelligence usually needs to be integrated with conventional paradigms for efficient solutions. The complexity and diversity of real-world applications have also forced researchers in the AI field to focus more on the integration of diverse knowledge representation and reasoning techniques for solving challenging, real-world problems. Our development environment, BEST (Blackboard-based Expert Systems Toolkit), is aimed at providing the ability to produce large-scale, evolvable, heterogeneous intelligent systems. BEST incorporates the best of multiple programming paradigms in order to avoid restricting users to a single way of expressing either knowledge or data. It combines rule-based programming, object-oriented programming, logic programming, procedural programming and blackboard modelling in a single architecture for knowledge engineering, so that the user can tailor a style of programming to his application, using any or arbitrary combinations of methods to provide a complete solution. The deep integration of all these techniques yields a toolkit more effective, even for a specific single application, than any technique in isolation or a collection of multiple techniques less fully integrated. Within the basic, knowledge-based programming paradigm, BEST offers a multiparadigm language for representing complex knowledge, including incomplete and uncertain knowledge. Its problem solving facilities include truth maintenance, inheritance over arbitrary relations, temporal and hypothetical reasoning, opportunistic control, automatic partitioning and scheduling, and both blackboard and distributed problem-solving paradigms.
Index Terms- Artificial intelligence, blackboard model, expert systems, logic programming, model-based reasoning, multiparadigm approach, object-oriented programming, procedural programming, rule-based programming.
I. INTRODUCTION

EXPERIENCE from building knowledge-based systems
for real-world applications has shown that their power
is most apparent when the problem considered is sufficiently
complex. As the complexity of knowledge-based systems
approaches the complexity of the real world, modularization
becomes imperative.

Manuscript received October, 1992; revised May, 1993. Recommended by D. Wile. The authors are with the Mihajlo Pupin Institute, Computer Systems Department, Belgrade 11060, Yugoslavia. IEEE Log Number 9407079.

Due to its inherent modularity, the blackboard architecture appears to have the greatest potential for
knowledge-based system modularization. The modularity of
the knowledge sources makes them ideal candidates even for
distribution among multiple processors. But, modularization
alone is not the answer. It makes a system manageable.
To make it programmable, a proper paradigm has to be
chosen. A programming paradigm can be thought of as a
basis for a class of programming languages, as an underlying
computational model, as a primitive set of execution facilities,
or as a powerful way of thinking about a computer system.
Although each among the best known programming paradigms
(procedural, logic, object-oriented, rule-based) offers some
advantages in comparison with the others, the corresponding
disadvantage is that any of them is too narrowly focused to
describe all aspects of a large, complex system. Therefore,
we adopted a multiparadigm approach to complex system
programming.
A. Utility of Multiparadigm Systems
The purpose of multiparadigm programming is to let us
build a system using as many paradigms as we need, each
paradigm handling those aspects of the system for which
it is best suited. A special part of the Fifth International
Symposium on AI held in Cancun, Mexico, was Marvin
Minsky’s keynote address on “The Near Future of AI”, that
set the tone of the conference in encouraging AI researchers
to use multiparadigm approaches to solving problems. Minsky stressed that during the next decade, AI researchers
need to move towards a hybrid approach of handling knowledge representation and solving problems. For example, one
piece of the problem may be best solved by a rule-based
expert system, another part by a decision support system,
and so on. As Pamela Zave said [47], multiparadigm programming addresses one of the essential problems in software engineering: the drastic difference among aspects of a complex system, i.e. its heterogeneity. Jay Liebowitz [28]
also pointed out that integration is the key trend in expert
systems applications. The stand-alone expert system is becoming obsolete, as operational, real-world expert systems
need to be integrated with existing databases, information
systems, spreadsheets, optimization packages, simulation software, or other software necessary for solving a specific problem.
A lot of recent research has been concerned with multiparadigm languages and environments that allow the programmer to use more than one mode of thinking (paradigm)
for complex problems, but the approaches vary widely from
project to project. Let us mention some of them here.
Arctic from Carnegie-Mellon University is a functional
and real-time programming language. While in conventional
languages concurrency is expressed by creating multiple processes that change state in parallel, in Arctic, however, values
have a time dimension. Many functions can be defined on
overlapping time intervals to express concurrency. Concurrency is a natural byproduct of the functional properties of the
language, rather than a language control construct.
The C++ programming language (AT&T Bell Laboratories)
supports data abstraction and object-oriented programming
in addition to traditional programming techniques. A similar
combination of imperative and object-oriented paradigms is
provided by Smallword (IBM Research), while Lore (CGE
Research Center France) combines object-oriented and set-based programming.
Two modern paradigms, logic programming and functional programming, are also an active area of research in
multiparadigm languages and environments. Their great similarities include applicative nature, reliance on recursion for
program modularity, and providing execution parallelism in a
natural manner. Their differences include radically different
variable concepts, availability of higher order program entities
and fundamental support for nondeterministic execution. In
a successful combination, one might hope for the notational
leverage and execution directness of functional programming,
wedded with search guided through constraint accumulation.
Many proposals have been offered on how to combine these
two paradigms in a tasteful and powerful manner. One of the
most interesting proposals came from the University of Utah
[29], [30].
CaseDe from Case Western Reserve University merges
imperative and specification paradigms and focuses mainly on
the process of design and notations for describing a design. It
deals not only with the purely functional design, but also with
such design criteria as modifiability, performance, resource
usage, and reusability.
Tablog (IBM Research, Stanford University, Weizmann
Institute Israel and SRI International) is a logic programming
language that combines functional and relational programming
into a unified framework. It incorporates advantages of two
of the leading programming languages for symbolic manipulation-Prolog
and Lisp, by including both relations and
functions and adding the power of unification and binding
mechanisms.
An interesting extension of the Lucid language toward
a high-level, real-time software-oriented tool with formal mathematical semantics is reported in [14].
The programming language Nial [23] supports several styles
of programming including imperative, procedural, applicative,
and lambda-free. Nial tools can be built to illustrate relational
and object-oriented styles of programming.
Researchers in the artificial intelligence field have recently
been taking an increased interest in multiparadigm representation and reasoning systems. The first generation artificial
intelligence systems were mostly monolithic, isolated and
stand-alone, which prevented them from adequately addressing
the complexity, diversity and performance challenges of complex, heterogeneous, large-scale applications. However, most
real world problems and situations are sufficiently complex to
demand more than one reasoning technique to be used for
solution, each technique attacking one characteristic of the
problem domain. There is also a need to integrate AI solution
techniques with conventional techniques. AI researchers have
for most of the early years been guilty of ignoring a significant
amount of research done in traditional fields like decision
sciences and operations research. Furthermore, just as there
is no one omnipotent general reasoning technique, there is
little consensus on a general architecture for integrating diverse
reasoning techniques.
B. Characterization of Multiparadigm Systems
Though a number of multiparadigm AI systems have been
designed and studied, little effort has been devoted to comparing the systems or searching for common principles underlying
them. A characterization of the multiparadigm AI systems
along two dimensions has been suggested [17]: systems that employ multiple representations and those with multiple reasoning paradigms. We hereby introduce a third dimension: multiple control paradigms.
Along the representation dimension, a system can represent
the same knowledge in different media, as in the VIVID reasoning systems presented by Brachman and Etherington [13] and the multiple reasoners at a single layer of the CAKE architecture [35], or it can have different representations for different kinds of knowledge, as in KRYPTON [2] and many sorted logic [5], [16], [46].
An interesting example of a language with multiple
paradigms in the object framework is Orient/84 [40], which
has been designed to describe both knowledge systems and
systems of more general application. It has the metaclassclass-instance hierarchy and multiple inheritance from multiple
superclasses. A knowledge object consists of a behavior part,
a knowledge-base part, and a monitor part.
The behavior part is a collection of procedures (or methods)
that describe actions and attributes of the object in Smalltalk-like syntax and semantics. The knowledge-base part is the
local knowledge base of the object, containing rules and
facts. Prolog-like predicate logic is employed to describe the
knowledge base. The monitor part is the guardian and demon
for the object and is described in a declarative manner.
Along the reasoning dimension, a system can have different
reasoners for the same representations, as in KRYPTON, KL-TWO [41], theory resolution [38] and many sorted logic, or the same reasoner for different kinds of knowledge, as in Patel-Schneider's hybrid logic [33]. RAL (Rule-extended
Algorithmic Language) from Production Systems Technology
(developer of OPS5 and OPS83) adds rule-based and object-oriented capabilities to C programs.
The Loom knowledge representation language, developed
by Robert MacGregor and John Jen at the University of
Southern California’s Information Sciences Institute combines
object-oriented programming, rules, and logic programming
in a set of tools for constructing and debugging declarative
models [31]. Loom uses a description classifier to enhance
knowledge representation and to extend the class of useful
inference beyond simple inheritance (found in most frame
systems). It supports the description language (the frame
component) and a rule language, and uses its classifier to
bridge the gap between the two. The classifier gives Loom the
additional deductive power to provide inference capabilities
not often found in current knowledge-representation tools.
A new version of Loom uses a RETE matcher for efficient
classification, as does our multiparadigm language, Prolog/Rex
[44] (used in BEST as a common substrate for multiple
paradigm implementation). However, Prolog/Rex possesses
capabilities not found in Loom, such as hypothetical, temporal and approximate reasoning, multiple inheritance over arbitrary relations, and assumption-based truth maintenance, while our
multiparadigm environment BEST (Blackboard-based Expert
Systems Toolkit) [43] as a whole offers opportunistic control, automatic explanation, data-base gateway, the inclusion
of operational research techniques, multicriteria optimization,
distributed problem-solving framework, etc. Moreover, BEST
is capable of integrating with diverse tools from universities
and industry, combining knowledge-processing and conventional subsystems and customizing itself according to the
domain-specific needs. In this it is similar to ABE (A Better
Environment) from Cimflex [20].
Both BEST and ABE provide a very general and flexible
base for integrating a variety of programming paradigms.
However, BEST is a more fine-grained integration framework,
where knowledge sources are single-paradigm programs and
the blackboard itself is the integration platform, while ABE
offers a variety of frameworks for designing applications
(one of them is the blackboard framework), and even generic
solutions to classes of problems (skeletal systems) at the higher
level.
The Loops [37] knowledge programming environment integrates function-oriented, object-oriented, rule-oriented and
access-oriented programming. Another rich amalgamation of
multiple programming paradigms is KEE [22], which integrates object-oriented and rule-oriented programming with a
database management system. ART [21] successfully combines rules and frames (schemata) and provides the means for
hypothetical and time-state reasoning, but lacks the facilities
for complex knowledge base modularization, uncertainty management and explanation of its reasoning process (all provided
by BEST).
BEST represents one possible approach to integrating diverse knowledge representation and reasoning techniques (internal integration), and to integrating the basic knowledge-based paradigm with other programming paradigms (external
integration), within the blackboard framework. It incorporates
the best of traditional, procedural programming, rule-based
programming, frames, logic programming, object-oriented programming and blackboard modelling in a single architecture
for knowledge engineering.
The user can tailor a style of programming to his application, using any or arbitrary combinations of methods to
provide a complete solution. The deep integration of all these
techniques yields an environment more effective even for
a specific single application than any technique in isolation
or collections of multiple techniques less fully integrated.
Due to its flexibility and generality, the blackboard paradigm
provides an excellent integration framework for composing the
mentioned programming paradigms, without sacrificing either
execution performance or validation capabilities of any of
the participating paradigms.
When a complex problem is carefully and appropriately
decomposed into manageable subproblems, the best suited paradigm is chosen for each particular subproblem encapsulated
within separate knowledge sources. Each knowledge source,
which is now a single paradigm program, can be validated a
great deal in isolation from the rest of the system. When the
local correctness of every knowledge source is established, it
is much easier to validate the whole system.
The interaction between paradigms is provided through
common data structures at the blackboard, accessed through
the protocol defined at the conceptual level of the participating
paradigms. The only possibility for one knowledge source to
influence another knowledge source’s computation is through
blackboard structures, and there is no way for one paradigm to
disturb another paradigm’s syntax or semantics. A knowledge
source that posts its partial solution to the blackboard need
not even know which knowledge source uses it and when.
The consumer remains unknown, which means that paradigms
communicate opportunistically.
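To make this anonymous, data-driven style of coupling concrete, the following Python sketch (purely illustrative; the Blackboard class and the knowledge source names are hypothetical, not part of BEST) shows a knowledge source posting a partial solution to the blackboard without knowing which, if any, knowledge source will consume it:

# Illustrative sketch only: a producer posts to the blackboard and never
# names its consumers; consumers are triggered by the data they subscribe to.
class Blackboard:
    def __init__(self):
        self.items = {}          # label -> value (partial solutions)
        self.interests = {}      # label -> list of interested knowledge sources

    def subscribe(self, label, knowledge_source):
        self.interests.setdefault(label, []).append(knowledge_source)

    def post(self, label, value):
        self.items[label] = value
        for ks in self.interests.get(label, []):   # opportunistic triggering
            ks(self, value)

def planner_ks(bb, route):       # consumer; the producer does not know it exists
    bb.post("plan", f"plan based on {route}")

bb = Blackboard()
bb.subscribe("route", planner_ks)
bb.post("route", "A-B-C")        # producer of the partial solution
print(bb.items["plan"])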
Another less important possibility for some paradigm composition is a well-recognized procedure or function call. For
instance, the Prolog compiler, through which the logic programming paradigm is incorporated, has built-in C, GKS,
and SQL interfaces, which can provide escape to procedural
languages, graphic packages or database management systems.
Of course, in this kind of communication between paradigms,
both involved paradigms are aware of the communication,
since a message sender waits for the answer.
Within the basic knowledge-based programming paradigm,
our development environment offers a multiparadigm language for representing complex and heterogeneous knowledge, including incomplete and uncertain knowledge, and
multiparadigm reasoning (backward- and forward-chaining
and hybrid initiative) and control (hard-wired procedural,
and flexible rule-based control). BEST provides techniques
for hypothetical and time-state reasoning, assumption-based
truth maintenance and facilities for building “explainability”
into a system (saved descriptions and justifications for derived results), and a new system architecture that provides
the efficiency and flexibility required for real-time applications.
Moreover, the modularization provided by BEST is the
key to achieving reusability and extensibility. The coherently structured chunks of knowledge encapsulated within
the knowledge sources could be reused in various similar
applications. The autonomy and loose cooperation between
knowledge sources offer the possibility of modelling and
testing integrations: multi-agents, multi-experts, and multiple programming paradigms. It allows for the most natural problem/paradigm mapping and therefore for solving the problem
in the manner similar to human experts.
To program a specific knowledge source, the user can select the following:
• rule-based programming with a hybrid rule language and multiple reasoning paradigms (forward-chaining, backward-chaining and the combination of the two, adaptive control and search strategy, which can significantly reduce the amount of problem solving activities performed in an application, pattern matching through an indexing scheme or the RETE algorithm, and other possibilities for performance tuning, etc.). BEST's facilities, not often found within commercial frameworks, provide the following powerful reasoning paradigms:
  - hypothetical reasoning,
  - nonmonotonic reasoning,
  - approximate reasoning,
  - reasoning under uncertainty,
  - truth maintenance system, using logical dependencies to maintain logical soundness,
  - explanatory reasoning, etc.;
• intelligent database and spreadsheet management;
• procedural programming, usually through the C language or any other conventional, imperative language, accessible through the C interface;
• a frame language: the core of the BEST knowledge representation language (Prolog/Rex) is its basic data structure, a concept, which can be used to represent domain objects, situations, events and processes;
• logic programming through the standard Prolog environment;
• data-driven and access-oriented programming through the use of demons and data-driven pattern matching (the RETE algorithm and indexing schemes are provided);
• object-oriented programming through the use of abstract data typing, inheritance, methods and message passing;
• model-based reasoning provided by the combination of object-oriented and access-oriented programming;
• distributed artificial intelligence.
Before describing how different paradigms are implemented
and integrated with each other (Section IV) we will pay attention to the key paradigm integration problems (Section II), and
to the way our blackboard framework, used as an integration
platform, solves these problems (Section III). Finally, we will
use a simplified version of one of the prototype systems built
by BEST to illustrate how a complex, heterogeneous problem
could be solved successfully when a multiparadigm approach
is adopted (Section V).
II. KEY INTEGRATION PROBLEMS
What are the main problems in the design of a multiparadigm system to solve a particular problem? One design
issue is how the complex system is to be divided into single
paradigm components. The most natural approach is to start
with some distinction between problem aspects and then to
seek the most appropriate paradigm for each. A paradigm for
each aspect is chosen by looking simultaneously at several factors-expressiveness, runtime features, validation capabilities
and the features that a paradigm offers for free that mirror the
structure of the main portion of the problem and most easily
encapsulate the problem-solving procedure.
Another, more complex design issue involves the integration
of the paradigms involved. The key integration problems
concerned are as follows:
• communication and cooperation between the components of a multiparadigm system;
• control and synchronization of the multiplicity of components;
• validation of a multiparadigm system.
Any multiparadigm system needs to have a way of representing the goals of the different single paradigm components
and their returned results (partial results of the overall problem). Therefore, something like public language, common
substrate, or teleological vocabulary, needs to be defined in
which to express these goals and partial results.
The control structures may be of various kinds; the way in
which resources are allocated may be heuristic or algorithmic
and may be based on either syntactic or semantic criteria.
The communication protocol may be directed between subsystems or broadcast generally. Control may be centralized or
distributed.
Zave [47] identified three modes of paradigm synchronization: call, stream, and event synchronization. Because each synchronization mode provides a different conceptual structure for integration, it is important to choose the synchronization mode that fits the communication needs of both programs. The first mode of paradigm synchronization is the only widely recognized conceptual structure, the familiar interlanguage procedure or function call. The common assumptions
shared by programs that communicate with call synchronization are the signatures of the calls. In stream synchronization,
two programs communicate via messages sent on a one-way
buffered communication channel (a stream), assuming that
the stream producer and consumer agree upon message type
and format. In the third mode, event synchronization, the
common assumption shared by two programs includes a set
of shared events. The programs constrain each other's behavior
because a shared event cannot occur unless it is simultaneously
enabled in both programs. Using a blackboard framework as
an integration platform, we introduce here a fourth mode of paradigm synchronization, opportunistic scheduling, which
will be further elaborated in Section III.
The final important issue raised is how to validate a multiparadigm system. Validating a single-paradigm program in
isolation from the other programs in the system often achieves
significant reduction in the complexity of validation. Isolated validation is especially powerful when one program
contains all the information relevant to a property of the
system as a whole. The problem with isolated validation is
that a single-paradigm program generally contains multiple
nondeterministic choices. When the whole system executes,
the other programs may make these choices in a correlated
fashion, but in isolated validation that external information is
lost. When correlated external decisions are needed for proper
validation, then an isolated validation is useful only when
such a decision can be simulated for testing purposes. If this
simulation is too complicated it might be better to test the
program in combination with other programs whose partial
results are so critical to it.
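As a minimal illustration of isolated validation with a simulated external decision, the following Python sketch (hypothetical names, not tied to BEST or any particular toolkit) tests one single-paradigm component against stubbed blackboard data:

# Illustrative sketch only: validating one knowledge source in isolation by
# simulating the correlated external decision (here, a stubbed partial result
# that another paradigm would normally post on the blackboard).
def diagnosis_ks(blackboard):
    # hypothetical single-paradigm knowledge source under test
    if blackboard.get("temperature", 0) > 80 and blackboard.get("load") == "high":
        blackboard["alarm"] = "overheat"
    return blackboard

def test_diagnosis_in_isolation():
    stub = {"temperature": 95, "load": "high"}   # simulated external input
    result = diagnosis_ks(dict(stub))
    assert result["alarm"] == "overheat"

test_diagnosis_in_isolation()
print("isolated validation passed")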
III. INTEGRATION USING THE BLACKBOARD FRAMEWORK
A blackboard model is a particular kind of problem-solving
model, i.e., a scheme for organizing reasoning steps and
domain knowledge to construct a solution to a problem [12].
The knowledge needed to solve the problem is partitioned into
knowledge sources (KS’s), which are kept separate and independent. The problem-solving state data are kept in a global
database, the blackboard. Although the blackboard model has
been widely recognized as the most general and flexible
knowledge system architecture, it is only just beginning to
be appreciated outside the walls of academic research, which
may be due in part to a lack of commercially available generic
frameworks for blackboard systems development. Therefore,
we built our own framework, called BEST (Blackboard-based
Expert Systems Toolkit), with which we aimed not only to
provide the infrastructure for building blackboard systems, but
also to construct a platform for the integration of multiple
programming paradigms. In this section we will describe the
organizational principles used in our framework, and the ways
it solves the main integration problems mentioned in the
previous section.
The architecture of the multiparadigm system developed
by BEST (Fig. 1 gives a schematic of it) consists of two
groups of individual computation agents, known as domain
knowledge sources (DKS’s) and control knowledge sources
(CKS’s), and two shared databases, known as the global domain blackboard (GDBB) and the control blackboard (CBB).
The shell permits a heterogeneous knowledge source structure, where each knowledge source can be programmed within the most suitable paradigm. However, no matter what representation is used, every knowledge source has a corresponding Domain Knowledge Source Activation Rule (DKSAR) with an if-then (precondition-action) format: the if-part (Domain Knowledge Source Relevance Test, DKSRT) describes situations in which the knowledge source can contribute to the problem solving process, while the then-part (Domain Knowledge Source Invocation, DKSI) initiates the knowledge
source execution. In the if-part, an event or a set of significant
blackboard events in which the corresponding knowledge
source is interested, are declared. An event may occur whenever a specified blackboard item (concept, fact, hypothesis,
see Section III-A) is written, modified or deleted. The if-part
can contain arbitrarily complex tests, for instance a boolean
combination of atomic tests for the presence (absence) of
certain slot values, facts, hypotheses (or their combination).
In the then-part, the computation (traditional procedure, logic
program, object-oriented program, a set of SQL primitives,
and so on) or rule interpretation (whether a simple RBS, or
a more sophisticated hypothetical or nonmonotonic reasoning
system) to determine a desired effect (to either create a new
object or change an old one), is initiated.
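As an illustration of this format, the following Python sketch (hypothetical names only, not BEST's actual DKSAR syntax) pairs a relevance test over declared blackboard events with an invocation that starts the knowledge source body when the test succeeds:

# Illustrative sketch only (hypothetical names): a DKSAR-like activation rule
# pairs a relevance test over declared blackboard events with an invocation
# that starts the knowledge source body when the test succeeds.
class ActivationRule:
    def __init__(self, name, events, relevance_test, invoke, priority=0):
        self.name = name
        self.events = set(events)              # blackboard items the rule listens to
        self.relevance_test = relevance_test   # if-part (DKSRT analogue)
        self.invoke = invoke                   # then-part (DKSI analogue)
        self.priority = priority

    def on_event(self, item, blackboard):
        if item in self.events and self.relevance_test(blackboard):
            self.invoke(blackboard)

rule = ActivationRule(
    name="schedule_repair",
    events={"fault"},
    relevance_test=lambda bb: bb.get("fault") == "pump",
    invoke=lambda bb: bb.update({"repair_order": "replace pump seal"}),
)
bb = {"fault": "pump"}
rule.on_event("fault", bb)       # a write to "fault" is the significant event
print(bb["repair_order"])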
Fig. 1. BEST's architecture.

The framework organization allows these various independent and diverse sources of knowledge to be specified and their interactions coordinated so that they might cooperate with one another (through the global blackboard) to effect a problem solution. Knowledge sources react opportunistically to the global blackboard changes produced by other knowledge sources' executions and, as a result of their computation/cognition, produce new changes.
The global domain blackboard (GDBB) is the central communication and paradigm integration medium, used as a repository for global data, partial solutions or pending knowledge
sources. When one knowledge source is active, it posts on
the blackboard all the information that it concludes is of
importance in solving the overall problem. This information
may trigger other knowledge sources.
Using different relations (system defined inheritance relations and/or arbitrary user defined relations) we can divide
the global blackboard into several panels with many levels
of abstraction. Panels can be described as the collections of
objects that share some common properties. For instance, if we
are playing a war game [42], one panel may represent one side,
and the other panel the other side. In this example we can also
distinguish different levels of abstraction, if we are tracing the
decision making process from the supreme command to the
commands at the lowest level. For every command we know
to which level of abstraction it belongs.
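A minimal Python sketch of this organization (illustrative only; the panel and level names are hypothetical) indexes blackboard entries by panel and abstraction level, following the war-game example:

# Illustrative sketch only: one possible way to index a global blackboard by
# panel and abstraction level, mirroring the war-game example (one panel per
# side, one level per command echelon). All names are hypothetical.
from collections import defaultdict

gdbb = defaultdict(dict)    # (panel, level) -> {item: value}

def post(panel, level, item, value):
    gdbb[(panel, level)][item] = value

post("blue_side", "supreme_command", "objective", "hold bridge")
post("blue_side", "battalion", "order", "dig in at sector 4")
post("red_side", "supreme_command", "objective", "take bridge")

# retrieve everything a given side decided at a given level of abstraction
print(gdbb[("blue_side", "battalion")])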
The global blackboard does not have a predetermined structure. A user can easily tailor the structure of the global
blackboard to perfectly fit a problem of interest. The global
blackboard is implemented using a common substrate, declarative data carriers of Prolog/Rex (facts, hypotheses, concepts,
relations, demons, and contexts, see III-A), and the user is
free to use them in the way that is most convenient in solving
a particular problem. Since each knowledge source contains
many internal hypotheses and partial solutions which need not
(or even should not) be visible to other knowledge sources,
each knowledge source can contain its own local domain
blackboard (LDBB). A local blackboard can also be partitioned
into panels containing different types of data and can have
three other dimensions (internal abstraction level, solution
alternative and/or interval). All information that belongs to
the local blackboard is local to that knowledge source, and
cannot be used in other knowledge sources. We implement
the local blackboards in order to avoid undesirable interference
between knowledge sources, and to make the mapping to the
distributed or multiprocessor platform much easier and more
beneficial. Since BEST was conceived as a potentially distributed/multiexpert development environment from the very
beginning, we have gone to some lengths to avoid (or at
least reduce) the blackboard information bottleneck. One way
of achieving this is just by providing a local blackboard,
shared by the interrelated rules or procedures within a single
knowledge source. Two knowledge sources can share the same
local blackboard in two cases, when the names of the domains
are declared to be the same (the names of the local blackboards
are the same), or when the domain name of the activated
knowledge source is shared (the activated knowledge source
works on the local blackboard of the previous knowledge
source).
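The following Python sketch (illustrative only; all names are hypothetical, not BEST's interfaces) shows one way such domain-keyed local blackboards could work, with two knowledge sources sharing a local blackboard because they declare the same domain name:

# Illustrative sketch only (hypothetical names): local blackboards are keyed by
# a declared domain name, so two knowledge sources that declare the same
# domain, or an activated source that inherits its caller's domain, see the
# same local data while all other sources remain isolated.
local_blackboards = {}

def get_ldbb(domain):
    return local_blackboards.setdefault(domain, {})

def ks_collect(domain):
    get_ldbb(domain)["readings"] = [71, 73, 70]

def ks_average(domain):                    # shares the LDBB via the same domain
    ldbb = get_ldbb(domain)
    ldbb["mean"] = sum(ldbb["readings"]) / len(ldbb["readings"])

ks_collect("engine_monitoring")
ks_average("engine_monitoring")
print(local_blackboards["engine_monitoring"]["mean"])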
Contention on the common memory containing global
blackboard information is minimized, since a great percentage
of computation and inference is done locally (about 90% of
all blackboard references during the interpretations are to a
local blackboard rather than to the global blackboard, so the
possible source of contention in a distributed environment is
significantly reduced).
Knowledge sources can also manipulate the data on the
global blackboard, and this is, at the same time, the only
permitted way of communication between knowledge sources.
Our blackboard framework for distributed problem-solving
provides a tool for finding the cooperative solution of cognitive
problems by a centralized and loosely coupled collection of
knowledge sources located in a number of distinct processor
nodes. The knowledge sources cooperate in the sense that
none of them has sufficient information to solve the entire
problem. By centralized we mean that both control knowledge
sources and the common database (blackboard) are centralized
in order to avoid a coherence problem and to simplify the
overall problem-solving process supervision. Mutual sharing
of information representing nonlocal aspects of the problems is
performed in a highly controlled and constrained way. Loosely
coupled means that individual knowledge sources spend a
great percentage of their time in cognition, computation or
decision making rather than communication. The so-called
“result sharing” paradigm is inherent to blackboard architecture, where individual knowledge sources assist each other by
sharing partial results, based on different expertise, different
dimensions of the problem or somewhat different perspectives
on the same overall problem.
BEST can deal with a situation that changes through time
by making chains of global blackboards snapshots and storing
them in a historical blackboard (HBB), that can be used
for long term analysis, debugging, tracing, replay, etc. The
frequency of memorization may vary according to the current
context and the relative importance of changes on the global
blackboard. A context mechanism is used for modelling hypothetical alternatives, pursuing automatically and efficiently the
analysis of branching possibilities. It does this by building a
tree-like structure of related contexts.
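A small, purely illustrative Python sketch of these two facilities (hypothetical names, not BEST's data structures) keeps periodic snapshots of the global blackboard and builds a tree of related contexts for branching hypothetical alternatives:

# Illustrative sketch only: periodic snapshots of the global blackboard kept in
# a historical blackboard, and a tree of contexts for branching hypothetical
# alternatives. All names are hypothetical, not BEST's actual interfaces.
import copy

gdbb = {"threat_level": "low"}
hbb = []                                   # historical blackboard: list of snapshots

def snapshot():
    hbb.append(copy.deepcopy(gdbb))        # snapshot frequency is up to the caller

class Context:
    def __init__(self, assumption, parent=None):
        self.assumption = assumption
        self.parent = parent
        self.children = []
        if parent:
            parent.children.append(self)

root = Context("baseline")
attack = Context("enemy attacks at dawn", parent=root)   # one branch of the tree
retreat = Context("enemy withdraws", parent=root)        # a competing alternative

snapshot()
gdbb["threat_level"] = "high"
snapshot()
print(len(hbb), [c.assumption for c in root.children])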
A system baseline (SB) knowledge source captures knowledge relevant for system configuration and initialization.
The global domain blackboard monitor (GDBBM), conflict
resolution (CR), meta plan (MP), domain knowledge source
activation (DKSA) and global domain blackboard indexes
(GDBBI’s) will be described later in greater detail (see Section
III-B).
Let us now examine our knowledge representation language,
Prolog/Rex, that serves as a common substrate for the implementation of different programming paradigms (see Section
IV), while its declarative part represents a public language
used for paradigm communication via the global blackboard.
A. An Overview of Prolog/Rex
Knowledge representation is central to knowledge-based
systems design and their ability to integrate with other programming paradigms and software technologies. Therefore,
the success of the system depends a great deal on the extent
to which domain knowledge and data can be represented
within a particular paradigm. The optimal knowledge representation scheme depends on the application. If a single
isolated paradigm is adopted, it might diminish individual
performance and erode customized advantages and efficiencies
in a heterogeneous, complex domain. Therefore, we offer
a hybrid, multiparadigm knowledge representation system,
built on top of Prolog (Prolog/Rex) [44], that can accommodate natural heterogeneity in complex knowledge bases.
Prolog/Rex provides a substrate on which the user can easily
implement many single-paradigm programs. The main benefits
of multiparadigm representation and reasoning systems include
the following:
• increased expressive power, because a single-paradigm knowledge representation language may not easily represent everything,
• increased reasoning power, that is, the ability to make more inferences out of the same knowledge,
• increased efficiency, because a specialized reasoner can make optimizations that a general purpose reasoner cannot.
Therefore we developed Prolog/Rex, a hybrid language,
which integrates several knowledge representation paradigms
and offers remarkable reasoning and control flexibility, not
found in other Prolog extensions.
The declarative knowledge representation facilities of Prolog/Rex (DK, Fig. 2) are used to implement the structure
of the global and local domain blackboards (GDBB and
LDBBs, Fig. 1). Global blackboard constructs represent not
only the problem-solving
state data. They also contain a
teleological vocabulary, representing the goals of the different
single-paradigm knowledge sources, their triggering events
and returned results.
All the knowledge used in the reasoning process is organized in data abstractions called contexts. Concepts, facts and hypotheses comprise the content of one context. Contexts are used as an element of a truth maintenance system implemented in Prolog/Rex, similar to a viewpoint mechanism in ART [21]. The context mechanism provides Prolog/Rex with the ability to pursue hypothetical alternative pathways to a goal and to represent the situation that changes in time.
Fig. 2. Prolog/Rex knowledge representation facilities.
The main portion of declarative knowledge we want to
handle in the system is brought together in a common category
called concept which is a frame-based data abstraction that
provides the knowledge engineer with an easy means of
describing the types of domain objects that the system must
model. Information about the object described in terms of the
concept is stored in the slots. A slot value can be inherited, i.e.
passed down the object hierarchy. A restriction on slot (value and type) provides automatic consistency checking. Each slot declaration consists of two parts: a name, and contents that
may change under the action of the rules or procedures.
Concepts may belong to many classes simultaneously,
thereby giving rise to multiple inheritance hierarchies. The
inheritance network is declared by means of the two standard
inheritance relations, is-a and instance-of. The is-a relation links
one general concept to another, while instance-of represents
one of something, i.e., a particular object that, of course,
reflects all the properties which have been defined for their
parent concept, but also has the properties that vary between
individuals with the same parent. Prolog/Rex automatically
creates the inverse relations (has-a, has-instances), which is
a labor-saving feature. Furthermore, the is-a relationship is
transitive, which means that declaration of is-a relationships
resulting from the transitive composition of others can be
omitted. It is obvious that not all relations between concepts
should produce inheritance. Moreover, not all the possible
inheritance links among concepts can be expressed with
standard, system-defined relations is-a and instance-of, and
their inverse relations has-a and has-instances. Therefore, the
potential to create user-defined inheritance and other relations
is provided by Prolog/Rex.
Demons represent some means of noticing and acting upon
changes in slot values, necessary for significant event detection. Thus, the execution of a specified function can be
triggered whenever a slot value is referenced, or whenever a new value is stored in a slot. Such demons can automatically and invisibly compute values to be placed in a slot
(Prolog/Rex’s intrinsic inference). Demons are also needed
when values must be looked up in databases. They also
participate in the expert system reasoning activities by providing “automatic” inference as part of each assertion and
retrieval operation. They also provide a way of associating
domain-dependent behavior with concepts, which, together
with inheritance and data abstraction, qualifies concepts as an
object-oriented programming facility.
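To make these facilities concrete, here is a minimal, purely illustrative Python sketch (not Prolog/Rex syntax; all identifiers are hypothetical) of a frame-like concept with slots, value inheritance over an is-a link, and a demon fired when a slot is written:

# Illustrative sketch only (hypothetical names): a frame-like "concept" with
# slots, inheritance of slot values from parents linked by is-a/instance-of,
# and a demon fired when a slot is written, as described above.
class Concept:
    def __init__(self, name, parents=()):
        self.name = name
        self.parents = list(parents)       # multiple inheritance is allowed
        self.slots = {}
        self.demons = {}                   # slot name -> callback

    def set_slot(self, slot, value):
        self.slots[slot] = value
        if slot in self.demons:
            self.demons[slot](self, value) # demon: automatic, invisible inference

    def get_slot(self, slot):
        if slot in self.slots:
            return self.slots[slot]
        for parent in self.parents:        # inherit over the is-a hierarchy
            value = parent.get_slot(slot)
            if value is not None:
                return value
        return None

vehicle = Concept("vehicle")
vehicle.set_slot("wheels", 4)
truck = Concept("truck", parents=[vehicle])            # truck is-a vehicle
truck.demons["load_kg"] = lambda c, v: c.set_slot("overloaded", v > 10000)
truck.set_slot("load_kg", 12000)
print(truck.get_slot("wheels"), truck.get_slot("overloaded"))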
Facts are used to represent simple relational statements.
A hypothesis is used to define the assumption that invokes
the alternative pathway to the solution, i.e. that changes the
problem state, through context generation.
Procedural knowledge (PK, Fig. 2), i.e., the behavior associated with domain objects, is expressed in terms of production
rules or classical procedures (C language, SQL, graphic primitives, etc.). Although Prolog itself is a rule-based system that
uses stored facts and rules to deduce solutions, experience
has shown that the Prolog built-in rule interpreter is not
always efficient enough, especially when the knowledge base
is large. Therefore, we built a new rule language and its
interpreter. In our rule language we distinguish two basic types
of rules-domain
rules (forward-chaining rules, backwardchaining rules, constraint rules, Fig. 2), that generate hypotheses and conclusions about a problem area, and control rules (meta-rules, domain knowledge source activation
rules-DKSARs, set-control rules) that permit flexible control
of reasoning, adjustable to the processing context. The DKSAR
type of control rule has a special role in solving the key
paradigm integration problem-control and synchronization.
An escape to the underlying standard Prolog system is
provided from both sides of any kind of Prolog/Rex rule (using
the @ operator), which can be useful for building complex
tests in the precondition part of a rule, for writing messages
from the action part, for providing procedural attachment
(accomplished through a built-in C interface) or for any other
action that logic programming is suitable for.
B. Opportunistic Control and Synchronization

We distinguish two levels of built-in control in BEST. The first level of control affects knowledge source scheduling and activation (Fig. 1), while the second level of control affects rule scheduling and interpretation. Whenever a concept, fact or hypothesis on the global blackboard is asserted, retracted, or modified, a global domain blackboard monitor (GDBBM) is activated. It receives information about the performed operation, and then tries to find a Global Domain Blackboard Index (GDBBI) that matches the given concept, fact, or hypothesis. The global index contains a list of all DKSAR's whose if-parts are affected. After the first DKSAR is picked from the list, the control is transferred to its if-part. If all the preconditions in the if-part are fulfilled, the conflict resolution (CR) mechanism gains control. Its task is to put the DKSAR instance into the knowledge source agenda (KSA) using the DKSAR's priority, and the value of a criterion (if one is defined), such as the efficiency, reliability, or a value returned by a knowledge source or a heuristic function which combines these values. The agenda is always kept sorted in decreasing order of priorities. When the conflict is resolved, control is returned to the if-part in an attempt to satisfy the preconditions using other data. If it succeeds, control is transferred again to the conflict resolution mechanism, otherwise control is returned to the global monitor and the next DKSAR from the list is tried.

A domain knowledge source activation (DKSA) takes the first DKSAR instance from the agenda and transfers control and the data collected in the DKSRT to the domain knowledge source invocation (DKSI), the then-part of the corresponding DKSAR. In cases when there are many DKSAR instances with the same highest priority, meta-rules can be used to determine the sequence of their invocation. Meta-rules perform this task using the information from the global blackboard and agenda. Within the then-part, data either on the global blackboard or on the local blackboard can be changed. If the data are changed on the global blackboard, control is transferred to the global monitor, and if they are changed on the local blackboard, control is transferred to the local domain blackboard monitor (LDBBM), thus initiating the knowledge source execution. When either the global monitor or the knowledge source finishes its task, control is returned to the knowledge source activation, and the control loop repeats. The control loop terminates when the agenda is empty or on the user's request.

When a change on the local blackboard occurs, the local monitor gains control. It uses the Local Domain Blackboard Indexes (LDBBI's) to find the names of affected forward-chaining, constraint and set-control rules. The first name is taken and control is transferred to the if-part of the corresponding rule. If all the preconditions are fulfilled, then the conflict resolution mechanism takes control. The order of rule firing is influenced by the current strategy (depth-first, breadth-first, hill-climbing, beam-search, A*, etc.). Initially the depth-first strategy with the use of rule priorities is set, but it can be changed later using the set-control rules. When the conflict is resolved according to the selected strategy, a rule instance is put into the rule agenda, and control is returned to the if-part. If there exists another set of data that satisfies the preconditions, the conflict resolution is performed again, otherwise control is returned to the local monitor, and the next rule from the list is tried.

The rule activation mechanism invokes the then-parts of forward-chaining, constraint, and set-control rules according to the selected strategy. The order of rule firing can be changed using the meta-rules. Meta-rules are sensitive to the state of the global blackboard, local blackboard and the rule agenda. When a then-part of a selected rule instance is fired, it can perform changes on the global blackboard and local blackboard, thus transferring control to the global or local monitor. When they finish their tasks, control is returned to the rule activation mechanism, and this control loop repeats. The loop terminates when the rule agenda is empty, or on the user's request.

Since the choice of a control structure strongly affects efficiency, we have found it extremely useful to offer the choice of switching to the most appropriate control strategy for every single subproblem we solve. Prolog, with its "hard-wired" backtracking strategy, lacks this kind of flexibility. The Prolog/Rex rule manager supports both forward and backward chaining (where forward-chaining is the basic strategy), while the backward-chaining mechanism is activated whenever the pattern from the precondition part of the forward-chaining rule cannot be matched with the fact from the knowledge base, but there exists a backward-chaining rule that can prove that fact. By allowing the interleaving of forward- and backward-chaining inference at the rule level, the facilities can cooperate, not compete. Moreover, no matter which control strategy has been selected, it is sometimes useful to control the selection of the next rule, instead of following the knowledge base order of rules. A human expert, for instance, albeit always having free access to his own knowledge, identifies and applies exactly those rules that are relevant to the problem at hand. To imitate that focused manner of applying rules, i.e., to structure and order the unstructured world of rules, our agenda-based rule manager offers the possibility of assigning a priority to a rule in its original definition (regardless of whether the rules are preferred because they deal with the most serious issue, or are the fastest to execute, or because they were written by the most knowledgeable human, etc.). Apart from priorities, heuristics expressed in terms of meta-rules or set-control rules can be used to control the rule firing order. Meta-rules enable the user to intervene in the inference process, to define his own strategy of applying rules and to dynamically change that strategy according to the current context. Unlike the similar approaches [10], [19], [9], which incorporate metaknowledge capabilities in the standard Horn clause interpreter, our meta-rule describes an action to be undertaken by our own flexible rule manager whenever it focuses on a rule from a conflict set involved in the meta-rule precondition.
Due to the flexibility of Prolog/Rex inference mechanisms,
it is much faster with those problems where Prolog’s blind,
depth-first search strategy does especially badly, i.e., with
problems where combinatorial explosion creates a seemingly
infinite number of possible answers (such as the possible
configurations of a machine).
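The first control level described above can be summarized by the following Python sketch (an illustration under assumed simplifications, not BEST's implementation; Controller, register and assert_item are hypothetical names):

# Illustrative sketch only (hypothetical names): a change on the global
# blackboard activates the monitor, which looks up the affected activation
# rules in an index; satisfied instances go through conflict resolution into a
# priority agenda, and the activation step runs the best-rated knowledge source.
import heapq

class Controller:
    def __init__(self):
        self.gdbb = {}
        self.index = {}      # GDBBI analogue: item name -> activation rules
        self.agenda = []     # KSA analogue: heap ordered by decreasing priority
        self._seq = 0        # tie-breaker so equal priorities stay orderable

    def register(self, item, priority, test, invoke):
        self.index.setdefault(item, []).append((priority, test, invoke))

    def assert_item(self, item, value):
        self.gdbb[item] = value
        self.monitor(item)                        # GDBBM analogue

    def monitor(self, item):
        for priority, test, invoke in self.index.get(item, []):
            if test(self.gdbb):                   # DKSAR if-part (relevance test)
                heapq.heappush(self.agenda, (-priority, self._seq, invoke))
                self._seq += 1                    # conflict resolution by priority

    def run(self):
        while self.agenda:                        # loop ends when the agenda is empty
            _, _, invoke = heapq.heappop(self.agenda)
            invoke(self)                          # DKSI: may assert new items

ctl = Controller()
ctl.register("symptom", 10,
             test=lambda bb: bb.get("symptom") == "vibration",
             invoke=lambda c: c.assert_item("diagnosis", "imbalanced rotor"))
ctl.register("diagnosis", 5,
             test=lambda bb: "diagnosis" in bb,
             invoke=lambda c: c.assert_item("action", "rebalance and retest"))
ctl.assert_item("symptom", "vibration")
ctl.run()
print(ctl.gdbb["action"])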
C. Paradigm Cooperation and Integration
No matter which paradigm is used for knowledge source programming, it is invoked using the DKSAR rule. The knowledge source is initialized when a premise of the corresponding DKSAR is fulfilled. A bridge to the active body of the knowledge source, represented within the most appropriate programming paradigm, is provided through the action part of the DKSAR. The knowledge sources communicate with each other using the global blackboard. Each knowledge source can be implemented using the paradigm that solves a given subproblem in the easiest way. Those differently implemented knowledge sources can be integrated by posting the results of each knowledge source on the global blackboard in a uniform manner, using Prolog/Rex declarative data carriers (facts, hypotheses, and concepts).

All the paradigms supported by BEST can be divided into five groups based upon their implementation (see Section IV). The first group contains paradigms implemented predominantly using concepts, relations, and demons: access-oriented, object-oriented programming, and model-based reasoning. The second group comprises paradigms implemented using rules. This group can be divided further into two subgroups containing paradigms that use the context mechanism, and paradigms that do not use it. The first subgroup contains the hypothetical, nonmonotonic, and time-state reasoning, while the second one contains paradigms based on forward- and backward-chaining reasoning without the use of the context mechanism. The approximate-reasoning paradigm is a member of both subgroups. The third group represents the intelligent database management (IDBM), the fourth, logic programming, and the fifth group, the procedural programming paradigm. In this section we will describe the invocation of the knowledge sources implemented using different programming paradigms, and the way these knowledge sources assert their results on the global blackboard (Table I).

TABLE I
GDBB INTEGRATION

Paradigm | Invocation | Partial result on GDBB
Rule-based paradigms | by asserting facts or concepts on the LDBB | using a global structure in a forward-chaining rule
Concept-based paradigms | by referencing the initial slot of a concept | using built-in predicates
Logic programming | using the @ structure in the then-part of the DKSAR | using built-in predicates or structures in the then-part of the DKSAR
Procedural programming | using embedded C in the @ structure in the then-part of the DKSAR | using the out parameters of the procedure call
Intelligent DBMS | using queries in the if-part of the DKSAR | by asserting the retrieved data in the then-part of the DKSAR

If a knowledge source is implemented using one of the paradigms based on concepts, it is activated by referencing (asserting, or modifying) a slot value of a certain concept in the then-part of the DKSAR, i.e., by sending the initializing message. If we want to work on a local blackboard, then it is necessary to assert all the initial concepts on the local blackboard and to reference the slot value of a certain concept. Partial results can be written onto the global blackboard using some built-in predicates.

When a knowledge source is implemented using one of the paradigms based on rules, then it is necessary to assert a new fact or to reference a slot value of a concept on a local blackboard (in the then-part of the corresponding DKSAR) in order to initialize the reasoning process. The partial results can be written directly from the then-part of domain rules onto the global blackboard.

The IDBM paradigm is integrated in the BEST environment by making queries of a database in the if-part of a DKSAR, and asserting the wanted data on the global blackboard in the then-part of the DKSAR.

The logic programming paradigm in BEST is provided by using one predefined structure in the if- or then-part of the DKSAR. This structure allows calls to Prolog predicates. The results can be written onto the global blackboard either by using out arguments of the predicate call, or by using built-in predicates in the Prolog code.

The procedural-programming paradigm can be invoked from the then-part of the corresponding DKSAR. The results can be written onto the global blackboard (in the then-part of the DKSAR) by using the out parameters of the procedure call.

Apart from paradigm integration via the global blackboard, BEST provides some means of direct paradigm communications. Rule-based paradigms can invoke any other paradigm supported by BEST (Table II). The logic- and procedural-programming, as well as the IDBM paradigms, can be invoked using structures in the if- or then-part of a domain rule. The paradigms based on the use of concepts can be invoked by referencing a slot value of a concept in the if- or then-part of a rule.

Paradigms based on the use of concepts can also invoke any other paradigm. The rule-based paradigms and the IDBM paradigm can be invoked using some built-in predicates. The logic programming paradigm is directly accessible, because methods are implemented in Prolog. C can be invoked from Prolog, therefore the integration with the procedural programming paradigm is provided.
The logic programming paradigm is directly integrated with all the other paradigms using built-in predicates, because the BEST environment is implemented in Prolog and escape to the underlying Prolog is possible from any point in BEST. The procedural programming and IDBM paradigms cannot directly invoke other paradigms.

TABLE II
DIRECT COMMUNICATION (rows: called paradigm; columns: calling paradigm)

Called paradigm | Rule-based paradigms | Concept-based paradigms | Logic programming
Rule-based paradigms | - | using built-in predicates | using built-in predicates
Concept-based paradigms | using concept structures | - | using built-in predicates
Logic programming | using @ structures | direct | -
Procedural programming | using embedded C | using embedded C | using embedded C
Intelligent DBMS | using SQL structures | using built-in predicates | using built-in predicates

IV. IMPLEMENTATION OF THE SYSTEM

Let us now see how different programming paradigms are implemented using Prolog/Rex (BEST's knowledge representation language) as a common substrate.

A. Rule-Based Programming

Modern knowledge-based systems use a large number of rules about a problem area or domain. Even if the knowledge base is modularized around the blackboard framework, efficiency of the rule manager can enormously affect the success of the system. The runtime performance in knowledge-based systems depends on two factors: the appropriateness of available control structures for the domain (that is, adaptability) and the ability to run the system with minimal levels of intervening language interpretation (that is, compilation). With respect to the former, BEST provides many reasoning methods, control structures and rule application strategies, while with respect to the latter, it provides two rule expansion methods (an indexing scheme and a heavily-modified RETE expansion method). The RETE algorithm replaces repeated testing of condition clauses with the maintenance of each rule's current applicability as working memory changes.

The best contemporary environments for building knowledge-based systems sometimes provide a rich amalgamation of knowledge representation formalisms, and/or offer a powerful multiparadigm inference mechanism, but severely limit control specification or external control. We introduce multiparadigm control by providing both built-in procedural control and rule-based control, based on meta-rules. The first provides a mechanism for different search strategies (depth-first, breadth-first, A*, hill-climbing, beam-search, depth-first-iteratively-deepening, etc.). The rule-based control paradigm lets the user define heuristics (expressed in terms of meta-rules) to choose the rule firing and context expansion schedule. An advantage of the rule-based control is the ability to provide an external control of the reasoning system, and to change that control dynamically. It makes internal structures developed by the inference engine accessible for inspection and modification by meta-rules.

To enhance Prolog's inherent pattern matching (syntactic unification) and resolution techniques, we used the simple indexing scheme first [36] (storing for each pattern a list of rules in whose left sides it appears). By indexing we buy speed at the expense of space. This has significantly improved Prolog inferencing, especially when the forward-chaining strategy is used. Due to its simplicity, the indexing scheme is efficient when the number of facts that are to be matched with appropriate patterns is small, but it shows rather poor performance in the case when a large number of facts correspond to each pattern. Therefore, we implemented the heavily-modified RETE pattern matching algorithm [15], which shows better performance in these cases. This is caused by the fact that the order of the complexity function for the RETE algorithm is lower than the order of the complexity function of the indexing scheme. The conclusion is obvious: when we have a large number of facts that match a particular pattern, the RETE algorithm is recommended. Otherwise, it is better to use the simple indexing scheme. This observation leads to the conclusion that, due to the performance tradeoff between different pattern matching algorithms, the best solution is to offer the user the possibility to choose the technique most suitable for a particular knowledge base, or even to combine dynamically two different techniques. Our shell offers this kind of flexibility.
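The trade-off between the two matchers can be illustrated with a short Python sketch (illustrative only; the rules, patterns and threshold are hypothetical, and the RETE network itself is not shown):

# Illustrative sketch only: the simple indexing scheme stores, for each
# pattern, the rules whose left-hand sides mention it, so an asserted fact
# only wakes up those rules; a RETE network (not shown) would additionally
# cache partial matches, which pays off when many facts match each pattern.
from collections import defaultdict

rule_index = defaultdict(list)     # pattern -> rules mentioning it in their if-part

def add_rule(name, patterns):
    for p in patterns:
        rule_index[p].append(name)

add_rule("r1", ["temperature(high)", "pump(running)"])
add_rule("r2", ["pump(running)"])

def candidate_rules(asserted_fact):
    # only these rules need their preconditions re-tested after the assertion
    return rule_index.get(asserted_fact, [])

print(candidate_rules("pump(running)"))          # ['r1', 'r2']

def choose_matcher(facts_per_pattern, threshold=50):
    # hypothetical heuristic: switch to RETE when many facts match a pattern
    return "rete" if facts_per_pattern > threshold else "indexing"

print(choose_matcher(5), choose_matcher(500))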
function for the RETE algorithm is lower than the order of
the underlying Prolog is possible from any point in BEST.
The procedural programming and IDBM paradigms cannot complexity function of the indexing scheme. The conclusion
is obvious-when
we have a large number of facts that
directly invoke other paradigms.
match a particular pattern, the RETE algorithm is recommended. Otherwise, it is better to use the simple indexIV. IMPLEMENTATIONOF THE SYSTEM
ing scheme. This observation leads to the conclusion that,
Let us now see how different programming paradigms are due to the performance tradeoff between different pattern
implemented using Prolog/Rex (BEST’s knowledge represen- matching algorithms, the best solution is to offer the user
tation language) as a common substrate.
the possibility to choose the technique most suitable for a
particular knowledge base, or even to combine dynamically
A. Rule-Based Programming
two different techniques. Our shell offers this kind of flexiModern knowledge-based systems use a large number of bility.
Let us now describe how different, powerful reasoning
rules about a problem area or domain. Even if the knowledge
base is modularized around the blackboard framework, effi- paradigms are supported by BEST.
ciency of the rule manager can enormously affect the success
1) Truth Maintenance: An additional important feature of
of the system. The runtime performance in knowledge-based BEST is the ATMS-like [6], [7], [8] truth maintenance capasystems depends on two factors: the appropriateness of avail- bility implemented using the context mechanism (the correable control structures for the domain (that is, adaptability) and spondence between terms used in ATMS and BEST is shown
the ability to run the system with minimal levels of intervening in Table III). In BEST, constraint rules are used to detect a
language interpretation (that is, compilation). With respect to contradiction in a context, handled by the poison function. The
the former, BEST provides many reasoning methods, control believe function contradicts all competing solutions. When a
structures and rule application strategies, while with respect to new datum is asserted in a context, its label is updated, and
the latter, it provides two rule expansion methods (an indexing data that can be inferred subsequently will be asserted when
scheme and a heavily-modified RBTE expansion method). The the corresponding rules fire. If the asserted datum causes a
RETE algorithm replaces repeated testing of condition clauses contradiction, a constraint rule will fire, thus poisoning the
with the maintenance of each rule’s current applicability as current context and all of its descendants. The labels of all
working memory changes.
facts, hypotheses, and concepts contained in the poisoned
The best contemporary environments for building
contexts will be changed, and the rules whose precondition
knowledge-based systems sometimes provide a rich parts were satisfied in the poisoned contexts will be removed
amalgamation of knowledge representation formalisms,
from the agenda. Dependency-directed backtracking is avoided
and/or offer a powerful multiparadigm inference mechanism, since there is no need to search for the assumptions underlying
but severely limit control specification or external control. a contradiction, then to select a culprit in order to resolve
TABLE II
DIRECT COMMUNICATION
254
IEEE TRANSACTIONS
TABLE III
ATMS-BEST CORRESPONDENCE
ATMS
nodes
assumptions
contexts
labels
BEST
facts,concepts
hypotheses
contexts
values of
slots
fact-in,
hypothesis-in
environment
values of the
slot
has_hypothesis
nogood database
no-megegairs
fact
the contradiction by performing a context switch. After a
contradiction is detected in BEST, and the inconsistent context
poisoned, the context switch is performed simply by transferring the control to the rule that operates on some other
context.
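The poisoning step itself can be pictured as a small recursive walk over the context tree. The sketch below is only our reading of the mechanism, written in plain Prolog; the predicate names context_child/2, agenda_entry/2 and poisoned/1 are assumptions introduced for the example (only the poison operation itself is named in the text) and do not reproduce the actual Prolog/Rex internals.

    :- dynamic poisoned/1, agenda_entry/2, context_child/2.

    % poison(+Context): mark Context and all of its descendant contexts as
    % inconsistent and drop their pending rule activations from the agenda.
    poison(Context) :-
        ( poisoned(Context) -> true ; assertz(poisoned(Context)) ),
        retractall(agenda_entry(_, Context)),
        forall(context_child(Context, Child), poison(Child)).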
Moreover, TMS-like nonmonotonic justifications [11], which are not supported by ATMS, can be encoded in BEST using forward- or backward-chaining rules and constraint rules. In BEST we assert a new datum (using forward- or backward-chaining rules) only if all nodes of an inlist [11] are in the current context and if the nodes of an outlist [11] are out. The constraint rule is used to detect a contradiction if any of the nodes from the outlist becomes in, or vice versa.
Hypothetical and nonmonotonic reasoning are facilitated
using the BEST context mechanism.
2) Hypothetical Reasoning: The availability of a hypothetical reasoning paradigm for commercial applications is quite recent [21], [22], although it is very helpful when there is uncertainty in determining a solution to a problem. The BEST context mechanism (see Section III.A), similar to the ART
viewpoint mechanism, represents a set of facts that are entailed
by the assumption expressed in terms of the BEST hypothesis
facility, which helps in differentiating between actual facts and
those that have only been presumed to be true. To reduce the
number of contexts created, BEST has a facility to poison the
contexts within which some predefined constraints (expressed
in terms of constraint rules) are violated, and prevent the
exploration of forbidden contexts. A context can generate
descendant contexts that inherit all the facts true in the parent
context. Offspring may selectively delete the inherited facts if
desired. To keep the context structure as simple as possible,
BEST has the possibility of automatically merging two or more
existing contexts.
3) Nonmonotonic Reasoning: The same context mechanism
(described above) is used for representing the situation that
changes over time, i.e. to provide time-state nonmonotonic reasoning. Unlike the monotonic hypothetical reasoning, where
the number of facts is constantly growing, in time-state reasoning the facts are retracted frequently, i.e. the number of facts
pulsates, which makes this kind of reasoning nonmonotonic.
BEST can construct a chain of contexts, reflecting the state
space that changes through time.
4) Approximate Reasoning: The approximate or satisficing
approach can be viewed as an attempt to construct a problem-solving system that produces the best possible answer in a
given amount of time. An answer may be more or less certain,
precise or imprecise, and complete or partial. This method
allows the system to trade-off solution completeness, precision
or certainty to satisfy time constraints. Thus, by ignoring some
aspects of the solutions, or by not determining exactly some solution parameters, or by not considering all supporting or refuting evidence, these methods can produce the answers at a given
deadline, but might not have an answer if interrupted before
the deadline. BEST, by contrast, provides the ability to commit
to an action almost instantaneously, but allows the quality of
that decision to improve as long as time is available. Once the
deadline is reached, the best decision arrived at is provided.
BEST now supports a great variety of search strategies and
their combinations to be employed in the reasoning process
[39]. By using these strategies the process of finding the
satisfactorily "good" solution could be implemented simply by employing the heuristic functions that guide the search process. These adjustments can be implemented so as to enable the end-user of the specific application to reach the requested state as
quickly as required. BEST offers the following methods of
relaxing the requirement for the optimal solution:
By adjusting the weights of the g and h functions, as suggested in [34], we get the following:
• dynamic weighting [34]: this algorithm finds a solution whose cost is not greater than (1 + ε)C*, where ε is the given satisficing threshold, and C* is the cost of the optimal solution;
• Aε*: an algorithm using search effort estimates. This algorithm is ε-admissible, i.e. it always finds a solution whose cost does not exceed (1 + ε)C* ([34], Theorem 13, p. 89);
• Rδ*: a limited-risk algorithm that uses information about the uncertainty of h.
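As an illustration of the first of these methods, dynamic weighting (in the formulation usually credited to Pohl and analyzed in [34]) inflates the heuristic term near the root of the search and relaxes it with depth; one common form of the evaluation function is

    f(n) = g(n) + h(n) + \varepsilon \left[ 1 - \frac{d(n)}{N} \right] h(n)

where d(n) is the depth of node n and N is the anticipated depth of the solution. Expanding nodes by this inflated estimate yields a solution whose cost is bounded by (1 + ε)C*. The exact weighting used inside BEST is not spelled out here, so the formula should be read as the textbook variant rather than as the implemented one.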
Another approach to solving the satisficing problem in knowledge-based systems, which is also implemented in BEST with respect to its specific properties, is given in [32]. As stated
there, “the purpose of satisficing is to resolve as few initial
condition and goal states as can be managed consistent with
reaching a satisfactory answer”. This system fulfills absolutely
the requirements given above, and, by applying the most
important rule at each stage in the decision process, minimizes
the effort for finding the most satisfactory answer.
To fulfill the mentioned requirements, the following steps
are performed:
• a certainty value is given to the rule premise;
• each rule has its evidence value, i.e. the attenuation value between 0 and 1;
• for each rule the satisficing thresholds are defined: this is the evidence value over which the rule is considered to be true;
• for the sake of simplicity it is required that each rule have only or, and, or not logical couplings between premises.
The basic heuristic is to try to resolve the most promising
state first, i.e., the state that will most likely have the largest
final value. The values are computed bottom-up (from the
initial set of facts to the conclusions upwards) by applying uncertainty management (certainty factors, fuzzy logic,
probabilistic approach) to the values listed above.
Uncertainty management using certainty factors in BEST
is similar to the management of certainty factors in MYCIN
[4]. To every piece of evidence in the premise of a rule, i.e.,
hypothesis, fact, or slot value, a certainty factor can be attached
(a value between -1 and 1). In the rule premise, evidence
can be and-ed, or-ed, or negated. A rule itself can have a
certainty factor that describes the relationship between the
premise and the conclusions of the rule (cf = -1 means that if
the rule premise is true, then the conclusions are false and vice
versa). The certainty factors of rule conclusions are evaluated
using the resulting certainty factor of the rule premise and the
certainty factor of the rule.
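A minimal sketch of this propagation, assuming the usual MYCIN-style combination [4] (min for and, max for or, sign change for negation, and multiplication of the premise and rule factors); the term representation and the 0.2 activation threshold are our assumptions, not the Prolog/Rex code:

    % cf_premise(+Premise, -CF): combine the certainty factors of evidence.
    cf_premise(cf(C), C).
    cf_premise(and(P, Q), C) :- cf_premise(P, CP), cf_premise(Q, CQ), C is min(CP, CQ).
    cf_premise(or(P, Q), C)  :- cf_premise(P, CP), cf_premise(Q, CQ), C is max(CP, CQ).
    cf_premise(not(P), C)    :- cf_premise(P, CP), C is -CP.

    % cf_conclusion(+PremiseCF, +RuleCF, -CF): the rule certainty factor is
    % attenuated by a sufficiently positive premise certainty factor
    % (0.2 is MYCIN's conventional activation threshold, assumed here).
    cf_conclusion(PremiseCF, RuleCF, CF) :-
        PremiseCF > 0.2,
        CF is PremiseCF * RuleCF.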
Fuzzy reasoning [25] is a well-defined reasoning system
based on fuzzy set theory and the theory of fuzzy relations.
We have implemented different formulae for the fuzzy set
complement of the Yager and Sugeno class, for set intersection,
and for union of the Schweizer & Sklar, Hamacher, Frank,
Yager, Dubois & Prade, and Dombi classes [25]. We use the set
complement to evaluate the membership grade for the negated
evidence, set union for the or combination of evidence, and
set intersection for the and combination of evidence.
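As one concrete member of these parameterized families, the Yager class operators [25] can be written directly as Prolog arithmetic; this is a self-contained sketch with parameter W, not an excerpt of the BEST implementation:

    % Yager class fuzzy operators with parameter W > 0 [25].
    yager_complement(W, A, C)      :- C is (1 - A**W)**(1/W).
    yager_union(W, A, B, U)        :- U is min(1, (A**W + B**W)**(1/W)).
    yager_intersection(W, A, B, I) :-
        I is 1 - min(1, ((1 - A)**W + (1 - B)**W)**(1/W)).

For W = 1 these reduce to the bounded-sum and bounded-difference operators, and as W grows they approach the standard max/min pair, which is one reason for letting the knowledge engineer choose the class and the parameter per application.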
We use fuzzy relations to describe the relationships between
the evidence in the precondition part of the rule, and the
conclusion in the action part of the same rule. It is necessary
to define one rule for each row of the two-dimensional array.
BEST also allows the use of fuzzy functions. The incorporation of fuzzy logic qualifies BEST for the design of rule-based
fuzzy industrial controllers. The degree of control achieved
using the fuzzy rule-based approach could be superior to a
strictly analytical approach, where there is a lot of imprecision
in the controlled system no matter whether the reason is
a degraded sensor, nonlinearity in the system dynamics, or
delays in control signals.
BEST provides the use of probabilistic theory in uncertainty
management [31]. To every piece of evidence in the rule
premise a probability value can be attached. Those values can
be combined using the independence assumption, conservative, or liberal approach. The probability value attached to a
rule describes the probability that conclusions are valid if the
premise is valid. The resulting probability is evaluated using
the Bayesian formula, the probability of the rule premise and
the probability of the rule.
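One plausible reading of these three combination modes, sketched in Prolog; the paper does not give the exact formulas, so the usual independence product and the Fréchet lower and upper bounds are assumed here, and the final step is simplified to a product of the premise and rule probabilities:

    % Probability of a conjunction of two pieces of evidence under three policies.
    p_and(independent,  P, Q, R) :- R is P * Q.
    p_and(conservative, P, Q, R) :- R is max(0, P + Q - 1).   % lower bound
    p_and(liberal,      P, Q, R) :- R is min(P, Q).           % upper bound

    % Conclusion probability from the premise probability and the probability
    % attached to the rule (a simplified stand-in for the Bayesian update
    % mentioned in the text).
    p_conclusion(PremiseP, RuleP, C) :- C is PremiseP * RuleP.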
Both fuzzy logic and probabilistic reasoning are used for
the implementation of the BEST-based investment advisory
expert system (INVEX), described in Section V.
5) Temporal Reasoning: Our development environment has
several different built-in generic facilities for temporal information representation and manipulation, that can be adapted to
the needs of individual application. Information about domain
objects is kept in the concept database, where attributes of the
object, represented by concept’s slots can be given a time
dimension, i.e., can be represented as a function of time.
An absolute time line is implemented, which is the same
for all attributes that have a time dimension. The functions
that describe a character of attribute time dependency are
implemented in terms of demons attached to the slot. The
demon itself can be realized as a procedure in C or in any other
imperative language, or as a set of rules defining the attribute
value’s validity in time, or a character of time dependency.
When the slot with a time tag is accessed, the demon is
activated which checks the current time against the value of
the time validity descriptor, and retrieves the attribute value
only if it is valid at that particular moment, or computes the
attribute value if it is defined as a function of time of any
kind. Moreover, if the value is defined as a discrete function
of time, several ways of covering unspecified time values could
be defined (the value is null at time not specified or the value
is interpolated if it falls between two time tags, or the value
corresponding to the preceding time tag is retrieved, etc.). The
values of time-varying attributes are computed on a "when-needed" basis, which saves time and space significantly, in
comparison with computing and memorizing the values for
each time step.
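For illustration only, a when-needed lookup over a discrete, time-tagged slot value might be organized as below; the list representation (Time-Value pairs in increasing time order) and the explicit policy argument are assumptions made for the example, since the demon machinery itself is internal to Prolog/Rex:

    :- use_module(library(lists)).

    % value_at(+Policy, +Pairs, +Time, -Value): Pairs = [T1-V1, T2-V2, ...].
    value_at(previous, Pairs, T, V) :-              % value of the preceding time tag
        append(_, [T1-V1, T2-_|_], Pairs), T1 =< T, T < T2, !,
        V = V1.
    value_at(previous, Pairs, T, V) :-              % beyond the last tag
        last(Pairs, T1-V1), T1 =< T, V = V1.
    value_at(interpolated, Pairs, T, V) :-          % linear interpolation
        append(_, [T1-V1, T2-V2|_], Pairs), T1 =< T, T =< T2, !,
        V is V1 + (V2 - V1) * (T - T1) / (T2 - T1).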
Often, a large amount of information cannot be linked to a
precise time, but the duration of data or facts is well known,
or these facts can be related to one another. Using BEST
we can handle these situations in two different ways. One
possibility is to define the time boundaries symbolically (a
start-date, an end-date, a duration, a symbolic constraint) and
to maintain the symbolic time line which defines the order
of symbolic points. While slot inheritance is provided along
the concept taxonomy in BEST, time constraint propagation
can be obtained automatically. Another way of establishing
the time relationships between the objects is by “user defined”
relations. An arbitrary number of time relations, such as those
introduced by Ladkin [27] can be defined:
• always, sometimes,
• equals, disjoint from,
• precedes, follows,
• meets, met by,
• overlaps, overlapped by,
• contains, contained by,
• begins, ends,
• begins with, begins before, begins after, ends with, ends before, ends after, etc.
When a new interval relation is entered, all consequences are
computed. This is done by computing the transitive closure of
the temporal relations as in [1].
B. Data-Driven Processing Paradigm
Through the RETE pattern matching algorithm, a data-driven processing paradigm is introduced in BEST. The RETE
match algorithm [15] is a method for comparing a collection
of patterns to a collection of objects in order to determine
all possible matches, without iterating over the patterns. The
iteration is avoided using a tree-structured sorting network for
the patterns.
Data-driven processing has become the standard in high-performance reasoning architectures and inference engines.
NASA has benchmarked many inference engines and determined that data-driven engines are dramatically more efficient than procedural-flow inference engines. Empirical evidence concerning commercially deployed expert systems also indicates
that data-driven architectures are better choices for solving
real-world problems.
To implement the RETE algorithm we build two networks-a pattern- and join-network, for each forward-chaining
and backward-chaining rule. In the pattern-network we try
to match patterns with facts, and in the join-network we
examine whether all preconditions of one forward-chaining,
or backward-chaining rule, are satisfied or not.
Our RETE algorithm is much more complex than the
original one [15]. The original algorithm was developed for
use in OPS5, a first generation expert system language which
supports only object-attribute pairs, forward-chaining rules for
knowledge representation, and the best-first search strategy.
Providing a variety of formalisms to represent domain objects
over which a Prolog/Rex rule operates, and a variety of inference paradigms (forward- and backward-chaining, hypothetical and time-state reasoning, monotonic and nonmonotonic
reasoning, negations, different search strategies, uncertainty
management, etc.) required a significant extension to the
original RETE algorithm.
Similar pattern- and join-networks are built in the LISP
implementation of the RETE algorithm in ART [21]. Although
we do not know ART in detail, it seems that implementations
differ a lot in network topology, the process of variable
bindings, the partial matches manipulation in hypothetical
reasoning, etc. An additional major difference induced by
certainty management in BEST is its inability to build separate
pattern-networks for or-ed patterns. This splitting rather simplifies the network topology in ART. While a rule precondition
part is represented in terms of its corresponding pattern- and
join-network, an action part is realized in exactly the same
way as in the indexing scheme.
C. Object-Oriented Paradigm
Combining object processing with knowledge-based technology is a recent and powerful innovation in data processing
[26]. Individually, each of these techniques has many advantages over conventional programming technology, and
when used together the advantages are enhanced even further. Rule processing provides support for representing expertise, inference, hypothetical reasoning, and truth maintenance,
while object-oriented processing provides support for data
abstraction, knowledge encapsulation, inheritance, reusability,
and extensibility. The message passing capability of object-oriented processing allows the system to keep knowledge
about data separate from knowledge about reasoning, which
is critical for good data abstraction and the encapsulation
of knowledge, while pattern-matching rules and data-driven
reasoning, accomplished through the RETE algorithm, enable
a clear and concise specification of the algorithm.
In “object-oriented” programming the central entities are
data, not procedures or rules. The data objects invoke one
another through messages to each other. Collections of objects
that share some properties are implemented through classes.
A class describes the methods that capture behavior of its
instances. Classes implement a very fundamental concept in
object-orientation, namely, abstract data typing. The second
powerful object-oriented concept is inheritance. Inheriting
behavior enables code sharing and reusability, while inheriting
representation enables structure sharing among objects. The
combination of these two types of inheritance provides a
powerful modelling and software development strategy. The
third powerful object-oriented concept is object identity, i.e.,
the property of an object that distinguishes one object from
another.
1) Data Abstraction: An object in BEST, expressed as a
concept instance, is a self-contained entity consisting of its
own private data contained in slots, and a set of operations,
implemented through demons. These operations constitute a
protocol that provides the object’s external interface to the
rest of the system. The algorithm used to implement each
of the demons attached to the particular concept’s slots is
encapsulated within the concept, and is hidden from the users.
All interaction with an object occurs through messages sent
to it requesting an operation in its protocol. A message is
sent when a slot (to which a demon that realizes the method
is attached) is named. Methods are implemented through
the procedural parts of demons, while the declarative parts
of demons are used in the process of dynamic binding. A
method has the exclusive capability to manipulate a particular
object’s private property contained in the slot (performing the
computation and returning a value, sending the message, and
so on). The exact mechanism used by the object to respond to
the messages, implemented in demon is encapsulated within
and is not visible outside. Users need know only how to send
the appropriate message, which is rather simple in BEST (slot
naming). Classes, as well as objects, in BEST are represented
by concepts. Every concept has a unique name and an arbitrary
number of slots. Slots in concepts representing classes describe
their structure, and play the role of instance variables. The slot
values can be the default slot values for the objects-members
of the corresponding class. Strong typing can be provided
using demons that will examine the slot value whenever it
is instantiated, or modified.
The new concepts in BEST can be created either statically
during concept taxonomy creation and initialization, or dynamically, using rules or demons. A new concept can be created
using the addc predicate with the name of this concept as the
only argument. If this new concept represents an object or a
class, a slot defining a relation with the inheritance property
must be added (the insert predicate, described later, performs
this task). Using the inheritance mechanism, the new concept
inherits the prescribed structure. The predicate remc is used to
delete a concept with a given name, and the predicate changec
changes an old name of a concept to a new one.
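Taken together with the three-argument convention of the slot predicates described below, a dynamic creation might read as the following query; the concept name hydro_plant, the class concept project and the relation includes are invented purely for the example:

    % Create a concept and register it as a member of the (assumed) class
    % 'project' by adding it to the values of the relation slot 'includes'.
    ?- addc(hydro_plant),
       insert(project, includes, [hydro_plant]).

    % Later the concept could be renamed or removed again:
    % ?- changec(hydro_plant, hydro_plant_a).
    % ?- remc(hydro_plant_a).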
BEST explicitly facilitates class extensions. To define inheritance between concepts, a user must define corresponding
relationships between them by adding slots that represent
relations with the inheritance property. Every subclass, or
every member of a class, is mentioned as the slot value
(representing a particular relation) of the corresponding class.
Methods (procedural parts of demons) in BEST are implemented as Prolog clauses. The selector, or name of the method,
represents the functor of the head, the first argument is the
name of the class or object-the owner of the method, the
second argument represents the name of the target object, and
the third argument of the head is a list that contains values
(as formal parameters). The body of the clause performs the
desired operation of the method.
Methods can be invoked implicitly, by accessing a slot value
of a concept, using a mechanism for dynamic binding, or
explicitly, by calling the corresponding method.
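Under that convention, a hypothetical method owned by a class account that updates a balance slot of the target object could be written and invoked explicitly as follows; the selector deposit, the class and slot names are invented for the example, and hgetconc/hmodify are the slot-access predicates introduced just below:

    % deposit(Owner, Target, Values): selector as functor, owner concept,
    % target concept, and a list of actual parameter values.
    deposit(account, Target, [Amount]) :-
        hgetconc(Target, balance, [Old]),
        New is Old + Amount,
        hmodify(Target, balance, [New]).

    % Explicit invocation on a (hypothetical) object my_account:
    % ?- deposit(account, my_account, [100]).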
There are four basic operations that can be performed on
concepts:
- get a slot value,
- insert a new slot value,
- modify a slot value, and,
- delete a slot value.
To each of those basic operations corresponds one predicate
in Prolog (hgetconc, hinsert, hmodify, and hdelete). These
predicates check first if there exists a demon that performs the
requested operation (using the message predicate) and invoke
it in the case of existence; otherwise the default operation is
performed (defined by predicates getconc, insert, modify, and
delete). Each of these predicates has three arguments where
the first argument represents a name of a concept, the second
one-a slot name, and the third one-a list of slot values.
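The dispatch just described amounts to a try-the-demon-first clause; written out (as a reading of the text, not the actual BEST source, and using the message predicate whose arguments are explained next):

    % hgetconc(+Concept, +Slot, ?Values): let an attached demon handle the
    % get operation if one exists, otherwise fall back to the default access.
    hgetconc(Concept, Slot, Values) :-
        (   message(Concept, Slot, Values, get)
        ->  true
        ;   getconc(Concept, Slot, Values)
        ).

hinsert, hmodify and hdelete follow the same pattern, with the corresponding operation name in the last argument of message.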
The predicate message is used to implement the dynamic
binding of message selectors to the methods that implement
them. The message predicate has four arguments. The first
argument represents a name of a target concept, the second
one stands for a slot name (to which a demon is attached), the
third argument is a list that contains the slot values, and the last
one describes the desired operation (get, insert, modify, delete).
This predicate checks first to see if there is a demon attached
to the slot of the target concept. The demon can belong to
the target concept, or it can be defined in a class concept that
is linked with the target concept by one of the relations with
the inheritance property. This relation must be mentioned as
the value of the slot relation in the declarative part of this
demon, whereas class concept must be a value of the slot
concept. The operation specified in the call of the message
predicate must correspond to the value of the demon’s slot
operation. If all these preconditions are met, then the clause
that implements the procedural part of the demon is called.
This call is created using the functor which is the value of
the slot method, and the three arguments (the name of an
owner concept, the name of a target concept, and the list of
the values).
The third way to invoke a method is to explicitly call the
predicate that implements it. A user must specify the functor,
and the three arguments of the call that match the head of the
predicate.
The names of demons can be overloaded, which means that
different concepts can invoke the demon using the same name,
but with different semantics and implementations. The demon
name can be overloaded even within the same concept, in the
sense that for every slot operation there can exist a different
method. Method names can also be overloaded; i.e. different
owner concepts can share the same method name.
Demons can be used to implement constraints that test
the correctness or completeness of the abstract data types
represented by concepts. They can test the slot values, or have
some pre- and postconditions that must be satisfied before a
particular method is executed.
2) Inheritance: Another property that makes a language
“object-oriented” is inheritance. In BEST, objects (implemented in terms of concepts) may be defined as members
of one or more classes. The class defines the internal form
of every object of the class and the methods associated
with objects of the class. New classes can inherit both representation (attributes, instance variables, and so on) and
behavior (demons that realize operations, methods, messages,
and so on). Inheriting representation enables structure sharing
among data objects, while inheriting behavior enables code
sharing. Inheritance also provides a very natural mechanism
for “taxonomizing” concepts into well-defined inheritance
hierarchies, which is especially suitable for blackboard object
organization. Moreover, inheritance introduces a semantic
network representation paradigm in BEST. In the semantic
network representation, nodes represent concepts and links
represent inheritance relationships or any other user-defined
relationship which might exist among objects (includes, works
in, supervises, consists of, and so on). A concept may belong
to more than one class, i.e. multiple inheritance is provided.
Using inheritance in concept definitions assists in defining
object protocol in a relatively standardized way. The inheritance provides a facility of polymorphism, which enables
uniform treatment of objects from different classes. Using
classes and inheritance provides a simple and expressive model
for the relationship of various parts of system definition and
assists in making components reusable or extensible in system
construction.
The getconc predicate, that searches the slot values of a
concept, checks first whether the slot name is one of the
defined relations, then tries to find the slot in the given concept,
and if that fails, it looks for the slot in the ancestor concepts.
This implementation has the following consequences:
- slots that represent relations cannot be inherited;
- the slot values of the ancestor concepts are overridden,
if the slot is defined within the given concept;
- if on the path from the given concept to the root of the
hierarchy tree there exist several definitions of the same
slot, then the definition that is the closest to the given
concept is taken.
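Read literally, this lookup order corresponds to a clause ordering roughly like the one below; defined_relation/1, local_slot/3 and parent/2 are stand-ins assumed for the sketch, and the real getconc additionally works over the hash-table representation and stays backtrackable across multiply-inherited values:

    % getconc(+Concept, +Slot, ?Values)
    getconc(Concept, Slot, Values) :-          % relation slots are not inherited
        defined_relation(Slot), !,
        local_slot(Concept, Slot, Values).
    getconc(Concept, Slot, Values) :-          % a local definition overrides
        local_slot(Concept, Slot, Values), !.
    getconc(Concept, Slot, Values) :-          % otherwise climb to the ancestors
        parent(Concept, Ancestor),
        getconc(Ancestor, Slot, Values).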
It is interesting to note that BEST provides flexibility in
overriding the type of a slot value. In BEST, demons are used
to provide strong typing. If, for one slot, the same demon is
used for all descendant concepts, then no redefining of the
slot value type is allowed. The arbitrary redefining of type
is also provided by definition of overriding demons attached
to the same slot in the descendant concepts. The inheritance
property of a relation can be limited to allow the inheritance
of a predefined set of slots. If we omit a slot in this set, then
this slot will not be inherited. In the same hierarchy tree we
can have different definitions of the same slot with different
demons attached to each of them, thus providing the hidden
definitions.
The specialization in the hierarchy tree can be expressed
by defining the new slots specific for the given concept. If
so specified, these new slots can be inherited further in the
hierarchy tree.
The demons and the corresponding methods can also be
inherited. The demon inheritance utilizes the mechanism of
dynamic binding. The declarative part of a demon contains the
slot relation with a value that represents a name of a relation
with the inheritance property, and the slot concept that contains
the names of concepts in the hierarchy tree ordered by the
given relation. In order to apply a method on a slot value of a
given concept, the values of the slot concept are searched in an
attempt to find the name of the given concept or, if that fails,
to find a concept which is an ancestor of the given concept.
If the demon can be inherited from any ancestor concept to
any descendant concept, then the corresponding relation must
be declared transitive.
In order to provide demon overriding, a rule is defined
and must be followed when defining the sequence of the slot
concept values. The concepts that lie on the level closest to the
root of a hierarchy tree must be defined first, then the concepts
on the next level and so on. These values are searched in
the inverse order, and the rule allows us to find the method
belonging to the closest ancestor of the given concept. If the
demon is defined for the given concept then it overrides all
inherited demons, otherwise, the demon of the closest ancestor
is inherited. An inherited demon can be excluded using the
same mechanism of demon overriding.
In fact, concepts, relations, and demons comprise a prototype system, because the concepts can be individually created,
no matter whether they represent a class or an object. The
relations provide a means for a more flexible form of inheritance (as defined in object-oriented programming paradigm)
called delegation, where arbitrary concepts can inherit from
one another. The inheritance relationship can be established
dynamically, while the inheritance relationship of class-based
languages is established and fixed when a class is created.
Multiple inheritance is another feature provided by BEST.
The possible multiple inheritance conflicts are resolved using
a linearization strategy, or qualified slots and methods [24].
Using the getconc predicate, an inherited slot value can be
obtained. The getconc predicate is backtrackable, and so all
multiple inherited slot values can be retrieved. This sequence
is defined implicitly by the order of relation definitions. The
slot value defined in the concept in a hierarchy tree determined
by a relation that is most recently defined will be found first.
The same predicate can be used to directly access the slot
value in the concept where it is actually defined. The third
way to resolve the multiple inheritance conflict is to attach a
demon (that explicitly defines a resolution strategy) to the slot.
The linearization strategy for the multiple inherited demons
is implemented in the process of dynamic binding. The most
recently defined demon will be activated first. Another approach is to explicitly call the specific method with the known
owner concept.
3) Object Identity: In BEST object identity is assured
through the unique concept names. Each concept is represented
by a separate hash table.
There are three equality predicates that can be used in
the object-oriented programming paradigm: identity predicate,
shallow equality predicate, and deep equality predicate. The
identity predicate is Prolog’s built-in predicate, while the other
two can be implemented using the getconc predicate.
Shallow and deep copy are not explicitly provided, but can
be programmed using the getconc, addc, and insert predicates.
4) Integration of Rule-Based and Object-Oriented Programming Paradigms: The rule-based and object-oriented programming paradigms are fully integrated. That means that slot
values of a concept can be retrieved, inserted, deleted, or modified by rules (concept, assert, retract, and modify structures,
respectively), and that rules can be invoked from any method.
When concepts are used in rules, then the multiple
inheritance problem can arise. One solution to this problem
is qualified slots. The slot name can be qualified by the
name of the concept which is the owner of the slot value
or demon (owner_concept_name owns_slot slot_name or owner_concept_name owns_demon slot_name). If the slot name
is not qualified, then the linearization strategy is applied. For
the multiply-inherited slot values, a demon can be defined to
resolve the conflict.
There are some interface predicates that allow methods
to invoke rules. The getgoal predicate invokes backward-chaining rules. This predicate has three arguments, the first
one stands for the first term of a goal, the second one for the
second term, and the third one is a list that contains other
terms in the inverse order. The predicates cassert, cdelete,
and cmodify perform the specified operation on a slot value
(assertion, deletion, modification), and activate the pattern
matching mechanism. The precondition parts of DKSAR,
forward-chaining, constraint, or control rules can be satisfied,
and later fired performing the desired action.
D. Access-Oriented Programming Paradigm
The access-oriented programming paradigm is fully integrated in BEST. An access-oriented paradigm is based on
annotated values that associate annotations with data. Fetching
or storing data can cause a procedure to be invoked. It
has historical roots in languages like Simula and Interlisp-D,
which provide ways of converting access to a record into
computations for all records of a given type. More immediate
predecessors are the ideas of procedural attachment from frame
languages like KRL or KL-One. Unlike the object-oriented
paradigm where the object that receives a message changes
its data as a side effect, in access-oriented programming,
when one object changes its data, a message may be sent as
a side effect [37]. BEST demons attached to concept slots
are the basic computational mechanisms of access-oriented
programming. Demons are invisible to programs that are not
looking for them. Moreover, since a demon itself is a frame-like construct, which has its own slots for saving the state
and can have a demon attached to its slot, the demons are
recursive. Demons can be organized in an inheritance lattice,
due to the possibility of attaching inheritance relations and
user-defined relations to their slots. Demons have very low
computational overhead, and can be used for both property
annotations and active values. Property annotations provide
a way of attaching extra descriptions to data for guiding
its interpretation, for instance, for checking data types and
constraints. Before storing new data into the slot, they are
checked against the constraints (e.g., data type) through a
demon attached to the slot. Property annotations can be nested,
supporting the notion of descriptions of descriptions. Active
values convert a variable reference to a method invocation.
When a slot is accessed, computation is triggered, which
eliminates the need for conventional functional interfaces for
changing the monitored slot. Like properties, active values can
be nested (which comes for free due to the frame structure
of the demon), providing an automatic, invisible propagation
of side effects, which makes a solid basis for model-based
reasoning.
Obviously, all basic concepts of access-oriented programming can be found in BEST. The same features used to enable
object- and access-oriented programming paradigms, provide
the model-based reasoning paradigm [18] in BEST.
V. AN ILLUSTRATIVE EXAMPLE
We have used BEST to build two prototype multiparadigm
applications: the tactical decision making system for Air
Forces [42], and the INVEX-Investment
Advisory Expert
System [45].
To build our tactical decision making aid we used
knowledge-based programming as the basic paradigm, while
appreciating some old methodological friends, like linear
programming which was used for programming the knowledge
sources dealing with route planning and different tactical
manoeuvers estimation. Similarly, purely numeric methods are
used in a knowledge source that calculates damage expectancy
against a target, etc. This system is admittedly somewhat
too complex to be used for a case study, and therefore we
will take a heavily simplified version of INVEX in order to
describe how a multiparadigm system could be constructed
using BEST as a basis.
The user sees INVEX as a monolithic data processing
program because the only connection with it is the well known
Microsoft Excel spreadsheet. He is not aware of the embedded
rule-based system, fuzzy logic, or multicriteria analysis (an operational research technique). BEST glues all these paradigms
together (spreadsheet, rule-based programming, fuzzy-logic,
probabilistic reasoning, operational research) and takes care of
their communication. To establish local correctness, functional
testing is applied to every knowledge source in isolation. During the testing of the integrated system, only minor correction
and tuning of the participating single-paradigm knowledge
sources have been done.
The primary motivation for the development of the BEST-based investment advisory expert system (INVEX) was the
lack of a satisfactory aid to guide the project analyst and
investment decisionmaker in their choice among alternative
projects and project designs. The approach adopted here is
to decompose the decision process into stages (strategies),
specify a particular methodology for each stage, and choose the
most appropriate paradigm for programming the corresponding
knowledge source. For the purpose of project analysis, we
have adopted a stage-by-stage integrated graphical approach
which uses standard financial tables based on an integrated
documentation system. This stage is fully implemented using
the Microsoft Excel spreadsheet. Moreover, we use the Excel
spreadsheet as a front-end interface, being aware of the fact
that the spreadsheet metaphor is a very good automation
of what people would usually do with business data, paper,
pencil and calculator, and that it has a well-known and well-accepted form. Both BEST and Excel run under the Microsoft Windows 3.1 environment, with intrinsic gateways among the
applications (using DDE, the Dynamic Data Exchange protocol).
The summary tables for the alternative projects and project
designs are communicated from Excel (through DDE) to the
global blackboard of the BEST-based intelligent server in the
background that provides value beyond automation. It uses
human expert knowledge for heuristic investment classification
and ranking, blended with conventional multicriteria analysis
and risk assessment methods. Different methodologies are used
for different analyses and different programming paradigms
are respected for the corresponding knowledge sources implementation. Intermediate data and partial results and decisions
are communicated among knowledge sources through a shared
data repository built out of the common substrate (Prolog/Rex
declarative data carriers) and located on the global blackboard.
Standard financial tables, combined with graphical analysis supported by the Excel spreadsheet, lead the analyst
in logical stages from a standard financial analysis to a
complete economic evaluation of a project and its quantifiable
impacts. Various static and dynamic measures of the project
effectiveness are computed in order to shed light on the
project’s desirability from a different angle. No single measure
can by itself provide sufficient information for judging the
merit of a project, for each measures this merit from a
different point of view. The values of all these measures
are communicated to the knowledge source that performs
multicriteria analysis (through the project summary table on
the global blackboard loaded from Excel through the DDE
interface), so that all the impacts can be summarized by using
some weighting scheme. Of course, since there is no universal
agreement on the weights assigned to the different measures
of the project desirability, a dialog between the project analyst
(and/or decisionmaker) and INVEX is needed in order to
establish the weights that properly reflect the decisionmaker’s
wishes and intentions. During a consultation, INVEX first
asks about a customer’s wishes and requirements, then builds
up a customer profile, where the information asked from the
customer depends heavily on his intentions and the course of
the consultation. These wishes and intentions are translated
using production rules (another rule-based knowledge source)
into the weights assigned to the different objectives in the
multicriteria analysis knowledge source.
INVEX is fed with data through the interface that most
users already know-the spreadsheet. The Microsoft Excel
spreadsheet plays the role of a user-friendly, well-known
and well-accepted front-end for data entry, standard financial
table generator and translator, and as a client of the BEST-based intelligent server, performing background intelligent
decisionmaking activities. This client-server structure seems
to be the solution for future decision support systems. Excel is
also used for the presentation of the results, since it is highly
graphic, with very good presentation capabilities.
As a client, BEST is fully responsible for conversation flow
and synchronization. This task can be successfully fulfilled
through the rules only. The expert system shell is therefore
expanded with a set of additional predicates to support DDE.
Data being taken by BEST appear as Prolog variables instantiated after the call to a DDE communication predicate.
Similarly, data being sent from BEST to a server appear as input parameters of the corresponding communication predicates
and must be instantiated before the call. The rule that makes
use of these predicates converts gathered data to a suitable
knowledge representation form, i.e. writes them into the slots
of a concept or asserts them as simple facts on the global
blackboard.
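Schematically, such a rule body might look like the following; the predicate dde_request/3 and its argument layout are purely hypothetical stand-ins for the unnamed DDE predicates added to the shell, and the item and slot names are invented:

    % Hypothetical sketch: fetch one spreadsheet item over DDE and store it
    % as a slot value of a concept on the global blackboard.
    load_npv(Project) :-
        dde_request('Excel|Sheet1', npv_cell, Value),   % Value is instantiated by the call
        hinsert(Project, net_present_value, [Value]).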
As an intelligent server, BEST makes some of the knowledge base elements visible to other applications. Prolog/Rex
declarative data carriers can be easily interpreted by other
applications: concepts and slots have close analogies in records
and fields of an ordinary database. Therefore, we decided
to make some of the concepts visible to other applications.
This visibility is fully transparent for the inference engine.
Transparency is achieved in two ways: organization of the
shell guarantees that an explicit query from the client will
be serviced instantly, no matter what the current activity
within the shell. Second, every slot taking part in a DDE
conversation has an associated demon. The demon is activated
whenever a slot value changes. Activation of the demon
invokes a communication predicate which in turn notifies
client applications (linked to BEST with “warm” links) that
a slot value has been changed.
In our multiparadigm decisionmaking system we rely on
expert economics judgmental heuristics for a simple test for
the financial acceptability of a project and classification and
ranking of alternative projects and project designs. Using
four dynamic measures of project desirability, all projects
are divided into five groups. Investments from the group
VERY GOOD are accepted for the multicriteria decisionmaking (MCDM), while investments from the group VERY
BAD are rejected. For the investments from the groups GOOD,
MEDIUM, and BAD, a sensitivity analysis is performed, and
then the user is asked whether to accept or reject each of
these investments. This classification reduces the number of
projects that will take part in MCDM by rejecting the bad
choices. If specified, a risk analysis is performed on the
accepted investments, and then the MCDM gives the optimal
combination of investments for the given resources. Total
preorder, used in MCDM, suggests the best investments from
the set of accepted investments.
Since the dynamic criteria used for classification are not
precisely determined, we rely on fuzzy-set theory to describe
the criteria and classify the projects. Each criterion is represented by one fuzzy set, as well as each group. An investment
can belong to many groups simultaneously, but with different membership grade values, thus making boundaries between the groups fuzzy (see Fig. 3).

Fig. 3. Membership grade functions (relative net present value of investment, return on investment, payback period, critical point).
In Fig. 3, ci stands for the compound interest, p0 for the referent payback period, and c0 for the referent period of achieving the breakeven point.
We determine the membership grade values for the five
groups using the composition operation:
I ∘ R = O

where I represents the input vector, R represents the fuzzy relation, and O represents the output vector. The composition operation is similar to the matrix product, where each product is replaced by the min function and each addition by the max function.
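In Prolog terms, with the input grades as a list and the fuzzy relation as one row per input criterion, the composition computes every output grade as a maximum of pairwise minima; a self-contained sketch (the list representation is assumed for the example):

    % maxmin_composition(+Input, +Relation, -Output):
    % Input = [I1,...,In], Relation = n rows of m entries, Oj = max_i min(Ii, Rij).
    maxmin_composition([I|Is], [Row|Rows], Output) :-
        min_row(I, Row, Acc0),
        maxmin_rest(Is, Rows, Acc0, Output).

    maxmin_rest([], [], Output, Output).
    maxmin_rest([I|Is], [Row|Rows], Acc0, Output) :-
        min_row(I, Row, MinRow),
        max_rows(Acc0, MinRow, Acc1),
        maxmin_rest(Is, Rows, Acc1, Output).

    min_row(_, [], []).
    min_row(I, [R|Rs], [M|Ms]) :- M is min(I, R), min_row(I, Rs, Ms).

    max_rows([], [], []).
    max_rows([A|As], [B|Bs], [C|Cs]) :- C is max(A, B), max_rows(As, Bs, Cs).

For example, with two criteria and three groups, maxmin_composition([0.7, 0.4], [[0.9, 0.3, 0.1], [0.2, 0.8, 0.5]], O) yields O = [0.7, 0.4, 0.4].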
Another perspective of investment decision making relates
to the issue of future uncertainty and its consequences for
planning and decision making. However, high returns are often
associated with high risks. An important role of INVEX is
to aid managers in assessing various future alternatives and
the levels of risk and return associated with each of them. A
complete knowledge source, using the probabilistic reasoning
paradigm, is dedicated to the risk-bearing attitude.
After the consultation, selected risk measures are evaluated
and posted to the global blackboard (where they are asserted as
simple Prolog/Rex facts) to be used by the knowledge source
performing MCDM (operational research paradigm).
The explicit consideration of multiple, even conflicting,
objectives in a decision model has made the area of Multiple
Criteria Decision Making (MCDM) very popular among researchers during the last two decades. In our work, we adopted
a special outranking method, proposed by Brans [3], based on
extensions of the notion of criterion. In INVEX, this method
is implemented in a knowledge source using the procedural-programming paradigm (C language). Its results are posted to
the global blackboard, and further passed to Excel (through
the DDE) to be presented graphically to the end user.
A multiparadigm method adopted in INVEX is somewhat
complex, but so is the problem with which it is designed
to deal. The major objective of INVEX is accomplished,
by making this complex task easier for the user, while still
incorporating all the relevant factors that are critical for the
decisionmaker in the real world. INVEX has confirmed the
validity of the assumption [20] that knowledge-based computing cannot stand apart from conventional data-processing
techniques and concerns, and most future complex applications
will be only partially knowledge-based, with the remainder
of the system being built out of conventional technological
components.
VI. CONCLUSION
This paper reports one possible way of integrating diverse
knowledge representation and reasoning techniques within
knowledge-based programming paradigms (internal integration) and the integration with other programming paradigms
(external integration), using the global blackboard as an integration platform. The BEST-based multiparadigm system
comprises several single-paradigm knowledge sources, each
performing the function for which it is best suited, communicating via abstract data types representing the common
substrates on the global blackboard.
The combination of multiple programming paradigms supplies a firm foundation for addressing complex problems
such as process control, command and control, planning and
resource allocation. A multiparadigm approach to complex
problem solving is much more adaptable and thus less brittle.
Moreover, a multiparadigm approach solves the problem in a
more natural way, increases programmer productivity during
application development, reduces maintenance costs, and significantly improves the overall efficiency of the system, since
every particular subproblem is solved using the most appropriate paradigm. The deep integration of paradigms accomplished
through BEST provides all the advantages of all technologies, giving the programmer the right tool at the right time.
With our multiparadigm framework we have expanded the
range of applications that could exploit intelligent reasoning
and knowledge processing, including complex, heterogeneous,
evolvable and scalable applications. Moreover, BEST allows
the importing of commercial tools (Excel in INVEX for
example), letting users develop each subsystem with the best
available technique. The system is fully implemented, and has
proven to be very suitable for building complex applications,
such as those mentioned in Section V.
REFERENCES
[1] J. F. Allen, "Maintaining knowledge about temporal intervals," Commun. ACM, vol. 26, no. 11, pp. 832-844, Nov. 1983.
[2] R. J. Brachman, R. E. Fikes, and H. J. Levesque, "KRYPTON: A functional approach to knowledge representation," IEEE Comput., Special Issue on Knowledge Representation, vol. 16, pp. 67-73, Oct. 1983.
[3] J. P. Brans and Ph. Vincke, "A preference ranking organization method," Management Sci., vol. 31, no. 6, pp. 647-656, 1985.
[4] B. G. Buchanan and E. H. Shortliffe, Rule-Based Expert Systems. Reading, MA: Addison-Wesley, 1985.
[5] A. G. Cohn, "A more expressive formulation of many sorted logic," J. Automated Reasoning, vol. 3, no. 2, pp. 113-200, 1987.
[6] J. de Kleer, "An assumption-based truth maintenance system," Artificial Intell., vol. 28, pp. 127-162, 1986.
[7] J. de Kleer, "Extending the ATMS," Artificial Intell., vol. 28, pp. 163-196, 1986.
[8] J. de Kleer, "Problem solving with the ATMS," Artificial Intell., vol. 28, pp. 197-224, 1986.
[9] P. Devanbu, M. Freeland, and S. Naqvi, "A procedural approach to search control in Prolog," in Proc. ECAI'86, 1986.
[10] M. Dincbas and J.-P. Le Pape, "Metacontrol of logic programs in METALOG," in Proc. Int. Conf. Fifth Generation Comput. Syst., 1984.
[11] J. Doyle, "A truth maintenance system," Artificial Intell., vol. 12, pp. 231-272, 1979.
[12] R. Engelmore and T. Morgan, Blackboard Systems. Reading, MA: Addison-Wesley, 1988.
[13] D. W. Etherington, A. Borgida, R. J. Brachman, and H. Kautz, "Vivid knowledge and tractable reasoning: Preliminary report," in Proc. 11th Int. Joint Conf. Artificial Intell., Aug. 1989, pp. 1146-1152.
[14] A. A. Faustini and E. B. Lewis, "Toward a real-time dataflow language," IEEE Software, pp. 29-35, Jan. 1986.
[15] C. L. Forgy, "Rete: A fast algorithm for the many pattern/many object pattern match problem," Artificial Intell., vol. 19, pp. 17-37, 1982.
[16] A. M. Frisch, "A general framework for sorted deduction: Fundamental results on hybrid reasoning," in Proc. 1st Int. Conf. Principles of Knowledge Representation and Reasoning, R. J. Brachman, H. J. Levesque, and R. Reiter, Eds., Toronto, May 1989.
[17] A. Frisch and A. Cohn, "1988 Workshop on principles of hybrid reasoning," AI Mag., vol. 11, no. 5, pp. 77-84, Jan. 1991.
[18] S. L. Fulton and C. O. Pepe, "An introduction to model-based reasoning," AI Expert, pp. 48-55, Jan. 1990.
[19] H. Gallaire and C. Lasserre, "Metalevel control for logic programs," in Logic Programming, K. Clark and S.-A. Tärnlund, Eds. New York: Academic, 1982.
[20] F. Hayes-Roth, J. E. Davidson, L. D. Erman, and J. S. Lark, "Frameworks for developing intelligent systems: The ABE systems engineering environment," IEEE Expert, vol. 6, pp. 30-41, June 1991.
[21] ART: Automated Reasoning Tool User's Manual, version 3.0, Inference Corp., Los Angeles, CA, 1987.
[22] KEE Software Development System User's Manual, IntelliCorp Inc., Mountain View, CA, 1986.
[23] M. A. Jenkins, J. I. Glasgow, and C. D. McCrosky, "Programming styles in Nial," IEEE Software, pp. 46-55, Jan. 1986.
[24] S. Khoshafian and R. Abnous, Object Orientation. New York: Wiley, 1990.
[25] G. J. Klir and T. A. Folger, Fuzzy Sets, Uncertainty, and Information. Englewood Cliffs, NJ: Prentice-Hall, 1988.
[26] B. Kowalski and L. Stipp, "Object processing for knowledge based systems," AI Expert, Oct. 1990.
[27] P. B. Ladkin, "The completeness of a natural system for reasoning with time intervals," in Proc. 10th Int. Joint Conf. Artificial Intell., Milan, Italy, 1987, pp. 462-467.
[28] J. Liebowitz, "Integrating intelligent systems," Intell. Syst. Rev., vol. 10, no. 1, pp. 7-8, 1993.
[29] G. Lindstrom and P. Panangaden, "Stream-based execution of logic programs," in Proc. 1984 Int. Symp. Logic Programming, 1984, pp. 168-176.
[30] G. Lindstrom, "Functional programming and the logical variable," in Proc. Symp. Principles of Programming Languages, Jan. 1985, pp. 266-280.
[31] R. MacGregor and M. H. Burstein, "Using a description classifier to enhance knowledge representation," IEEE Expert, vol. 6, pp. 41-46, 1991.
[32] L. J. Mazlack, "Satisficing in knowledge-based systems," Data & Knowledge Eng., vol. 5, pp. 139-166, 1990.
[33] P. Patel-Schneider, "Decidable first-order logic for knowledge representation," Ph.D. dissertation, Univ. of Toronto, May 1987.
[34] J. Pearl, Heuristics. Reading, MA: Addison-Wesley, 1984.
[35] C. Rich and Y. A. Feldman, "Seven layers of knowledge representation and reasoning in support of software development," IEEE Trans. Software Eng., vol. 18, pp. 451-469, June 1992.
[36] N. C. Rowe, Artificial Intelligence Through Prolog. Englewood Cliffs, NJ: Prentice-Hall, 1988.
[37] M. J. Stefik, D. G. Bobrow, and K. M. Kahn, "Integrating access-oriented programming into a multiparadigm environment," IEEE Software, vol. 3, Jan. 1986.
[38] M. E. Stickel, "Automated deduction by theory resolution," in Proc. 9th Int. Joint Conf. Artificial Intell., Los Angeles, CA, Aug. 1985, pp. 455-458.
[39] P. Subašić, M. Stanojević, and S. Vraneš, "Adaptive control in rule based systems," in Proc. IMACS Symp. Modeling and Control of Technological Syst., Lille, France, May 1991.
[40] M. Tokoro and Y. Ishikawa, "Orient/84: A language with multiple paradigms in the object framework," in Proc. Hawaii Int. Conf. Syst. Sci., Honolulu, HI, Jan. 1986.
[41] M. Vilain, "The restricted language architecture of a hybrid representation system," in Proc. Int. Joint Conf. Artificial Intell., Los Altos, CA: Morgan Kaufmann, 1985, pp. 547-551.
[42] S. Vraneš, M. Stanojević, M. Lučin, and V. Stevanović, "Blackboard metaphor in tactical decisionmaking," Europ. J. Operational Res., vol. 61, no. 1-2, Aug. 25, 1992.
[43] S. Vraneš, M. Stanojević, M. Lučin, V. Stevanović, and P. Subašić, "A blackboard framework on top of Prolog," Int. J. Expert Syst. Applications, vol. 7, no. 1, Dec. 1993/Jan. 1994.
[44] S. Vraneš and M. Stanojević, "PROLOG/REX: A way to extend Prolog for better knowledge representation," IEEE Trans. Knowledge Data Eng., vol. 6, pp. 22-38.
[45] S. Vraneš, M. Stanojević, V. Stevanović, and M. Lučin, "INVEX: Investment advisory expert system," submitted to Intelligent Systems in Accounting, Finance and Management.
[46] C. Walther, A Many-Sorted Calculus Based on Resolution and Paramodulation. Los Altos, CA: Morgan Kaufmann, 1987.
[47] P. Zave, "A compositional approach to multiparadigm programming," IEEE Software, vol. 6, pp. 15-25.
Sanja VraneS (M’90) received the Dipl. Ing., MSc.,
and Ph.D. degrees in electrical engineering from the
University of Belgrade, Yugoslavia.
She has worked as a Research Scientist at the Mihajlo Pupin Institute for Automation and Telecommunications, Belgrade, since 1980. She took a oneyear sabbatical leave from 1993 to 1994 working
at the Advanced Manufacturing and Automation
Research Centre (AMARC) at the University of
Bristol, England. Her primary research interests
are in the fields of multiparadigm programming,
knowledge-based systems design, blackboard systems, knowledge representation, truth maintenance, and machine learning, in which she has published
more than 40 papers.
Dr. VraneS is a member of the IEEE Computer Society and the AAAI.
Mladen Stanojević received the Dipl. Ing. degree in
computer science (cum laude) from the University
of Belgrade, Yugoslavia, in 1989. He is working
towards the MSc. degree at the same university.
He is currently a Graduate Research Assistant in
the Computer Systems Department of the Mihajlo
Pupin Institute, Belgrade. His research interests include knowledge representation, knowledge-based
systems, blackboard systems, truth maintenance systems, object-oriented and logic programming.