Creativity, Cognitive Mechanisms, and Logic
Ahmed Abdel-Fattah, Tarek Besold, and Kai-Uwe Kühnberger
University of Osnabrück, Albrechtstr. 28, Germany,
{ahabdelfatta,tbesold,kkuehnbe}@uos.de
Abstract. Creativity is usually not considered to be a major issue in current AI and AGI research. In this paper, we consider creativity as an important means to distinguish human-level intelligence from other forms
of intelligence (be it natural or artificial). We claim that creativity can be
reduced in many interesting cases to cognitive mechanisms like analogy-making and concept blending. These mechanisms can best be modeled
using (non-classical) logical approaches. The paper argues for the usage
of logical approaches for the modeling of manifestations of creativity in
order to step further towards the goal of building an artificial general intelligence.
Keywords: Logic, Creativity, Analogy, Concept Blending, Cognitive Mechanisms.
1 Introduction
During the last decades many cognitive abilities of humans have been modeled
with computational approaches trying to formally describe such abilities, to develop algorithmic solutions for concrete implementations, and to build robust
systems that are of practical use in application domains. Whereas in the beginnings of AI as a scientific discipline the focus was mainly on higher cognitive abilities, like reasoning, solving puzzles, playing chess, or proving mathematical statements, this has changed during the last decades: in recent
years, many researchers in AI focus more on lower cognitive abilities, such as
perception tasks modeled by techniques of computer vision, motor abilities in
robotic applications, text understanding tasks requiring the whole breadth of
human-like world knowledge etc.
Due to the undeniable success of these endeavors, the following question
can be raised: what is a cognitive ability that makes human cognition unique in
comparison to animal cognition on the one hand and artificial cognition on the
other? At the beginning of AI most researchers would probably have said “higher
cognitive abilities” (see the above examples), because only humans are able to
reason in abstract domains. In current (classical) AI research, many researchers
would, on the contrary, perhaps say that "lower cognitive abilities", such as performing motor actions in a real-world environment, perceiving natural (context-dependent) scenes, integrating multi-modal types of sensory input, or exercising the social capabilities of humans, are still the basis for all cognition as a whole and therefore also the key features for human-level intelligence. Finally,
an AGI researcher would probably stress the combination and integration of both
aspects of cognition: a successful model of artificial general intelligence should
be able to integrate higher and lower types of cognition in one architecture.
Besides these possibilities, there is nevertheless an important cognitive ability that can serve as a rather clear feature to distinguish human intelligence from all other forms of animal or artificial intelligence: creativity. Although we ascribe creativity to many human actions, we would hardly say that a certain animal shows creative behavior or that a machine solves a problem creatively. Even in the case of IBM's Watson, probably the most advanced massive knowledge-based system that exists so far, most people would not ascribe general creative abilities to the system. At most, certain particular solutions of the system seem creative, because they would be extremely hard for humans to achieve.
This conceptual paper discusses some aspects of creativity, as well as the possibility to explain creativity with cognitive principles and to subsequently model
creativity with logical means. The underlying main idea is not to model creativity
directly with classical logic, but to reduce many forms of creativity to cognitive
mechanisms like analogy-making and concept blending. Such mechanisms in
turn can be modeled with (non-)classical logical formalisms.
The paper has the following structure: In Section 2, we sketch some forms
and manifestations of creativity. Section 3 discusses the possibility to describe
creative acts by cognitive mechanisms, such as analogy-making and concept
blending. It is explained that this can be done not only for examples of creativity from highly structured domains but also for a broad variety of different domains.
Section 4 proposes the logical framework Heuristic-Driven Theory Projection
(HDTP) for analogy-making and concept blending in order to model creativity.
Section 5 concludes the paper.
2 Forms of Creativity
Creativity describes a general cognitive capacity that is involved, to different degrees, in any process of generating an invention or innovation.1 The concepts
invention and innovation describe properties of concrete products, services, or
ideas. From a more engineering- and business-oriented perspective, an invention
is usually considered as the manifestation of the creative mental act, resulting
in a new artifact (prototype), a new type of service, a new concept, or even
the mental concretization of a conception. An innovation standardly requires the acceptance of the invention by the market, where the market is not exclusively restricted to business aspects. In this paper we consider creativity as a cognitive ability, but we have to refer to inventions, innovations, new concepts, new findings, etc. in order to exemplify creativity in a concrete setting.
Creativity appears in various forms and characteristics. Creativity can be found in science, in art, in business processes, and in daily life, i.e. creative acts can occur in highly structured and clearly defined domains (like mathematics), in less structured domains (like business processes), or even in relatively unstructured domains (like a marketing department of a company having, for instance, the task to design a new advertisement for a certain product).

1 The following distinction is based on [5].

Table 1. Examples for creative acts: some domains, areas, and examples of manifestations of creativity. Clearly, the table is not intended to give a complete overview of domains in which creative inventions of humans can occur.

Domain    Area         Example
Science   Mathematics  Argand's geometric interpretation of complex numbers [3]
          Linguistics  Chomsky's recursive analysis of natural language syntax [6]
          Physics      Einstein's theories of special and general relativity
Art       Music        Invention of twelve-tone music by Arnold Schönberg
          Poetry       The invention of the novel (as a genre of poetry)
          Visual arts  Usage of iconographic and symbolic elements in paintings (Eyck)
Other     Daily life   Fixing a household problem
          Business     Nested doll principle for product design
We summarize different types of creativity in Table 1. Taking into account
the various domains in which creativity can occur, it seems hard to specify a domain in which creativity does not play a role. Rather, certain aspects of creativity can appear in nearly all environments and situations. This is one reason why the specification of common properties and features of creativity is a
non-trivial task. For example, some attempts have been made to specify certain
phases in the creative process (cf. [23]). Unfortunately, such phases, as for example a “preparation phase”, are quite general and hard to specify in detail. It is
doubtful whether any interesting consequences for a computational model can
be derived from such properties.
3 Creativity and Cognition
There seems to be an opposition between creativity and logical frameworks. Certain creative insights, inventions, and findings do seem to be creative, precisely
because the inventor did not apply a deterministic, strictly regimented form of
formal reasoning (the prototypical example being classical logical reasoning),
but departed from the strict corset of logic. Therefore, a natural clash and opposition between logical modeling and creativity are often perceived. We think that this claim should be rejected. On the contrary, we advocate that the natural way to start is to model creativity with logical means, at least in highly structured domains like science, business applications, or classical problem-solving tasks. The reason for this lies in the hypothesis that creativity is to a large extent based on certain cognitive mechanisms like analogy-making and concept blending. Since analogy-making and concept blending essentially amount to the identification and association of structural commonalities, logic-based frameworks are in turn a natural way to model these mechanisms.
Fig. 1. Two design examples (one from the engineering domain and one from product
design) that are based on the same principle, namely the nested doll principle. Objects
are contained in similar other objects in order to satisfy certain constraints.
Although creativity seems to be an omnipresent aspect of human cognition
(compare Table 1), not much is known about its psychological foundation, the
neurobiological basis, or the cognitive mechanisms underlying creative acts. One
reason might be that examples for creativity cover rather different domains,
where completely different mechanisms could play important roles. Nevertheless, we hypothesize that many classical examples for creativity can be reduced
to two important cognitive mechanisms, namely analogy-making on the one
hand and concept blending on the other. We mention some examples in order to
make this hypothesis more plausible:
– Conceptually, the use of analogy-making is rather clear in cases where one applies a general principle in a new domain, e.g. the nested doll principle in design processes (compare Figure 1): creativity can be considered as a transfer of a structure from one domain (e.g. the structure of a planetary gearing, namely gears that revolve about a central gear) to another domain (e.g. the design of nesting bowls containing each other). This transfer of structural properties is best described as an analogy; a small formal sketch of such a transfer is given after this list.
– In science, analogies and blend spaces do appear quite regularly. For example, in [10] it is shown how analogies can be used to learn a rudimentary number concept and how concept blending can be used to compute
new mathematical structures. Furthermore, in [16] it is shown that concept
blending can lead to a geometric interpretation of complex numbers, inspired by the historically important findings of Argand mentioned above in
Table 1.
– Also the interpretation of certain visual inputs can easily be described by
analogy-making (visual metaphor). Figure 2 gives an example, depicting
an advertisement. In order to understand this advertisement, a mapping between tongue and sock as well as a transfer of properties of socks needs to be performed.
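To make the structural character of these examples concrete, the following minimal Python sketch (our own illustration under simplifying assumptions, not part of any existing system) represents the nested doll example as two small sets of logical facts and transfers source structure to the target via a simple symbol mapping; all predicate and object names are hypothetical.

# Minimal illustration of analogical transfer: domains as sets of facts,
# an analogy as a symbol-to-symbol mapping, and transfer as rewriting
# source knowledge with the mapped symbols. All names are hypothetical.

SOURCE = {  # planetary gearing
    ("contains", "ring_gear", "planet_gear"),
    ("contains", "planet_gear", "sun_gear"),
    ("purpose", "containment", "save_space"),
}

TARGET = {  # nesting bowls (initially, only one containment fact is known)
    ("contains", "large_bowl", "medium_bowl"),
}

# Analogical relation obtained from the structural commonality:
MAPPING = {"ring_gear": "large_bowl",
           "planet_gear": "medium_bowl",
           "sun_gear": "small_bowl"}

def transfer(facts, mapping):
    """Rewrite source facts using the mapped target symbols."""
    return {tuple(mapping.get(sym, sym) for sym in fact) for fact in facts}

# Analogical transfer enriches the target with structure from the source:
enriched_target = TARGET | transfer(SOURCE, MAPPING)
for fact in sorted(enriched_target):
    print(fact)

The transferred facts (e.g. that the medium bowl should in turn contain a small bowl, and that the containment serves to save space) illustrate the kind of structure enrichment that is made precise in Section 4.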
Fig. 2. Advertisement on the left side, depicting an association between a tongue and a sock. In order to understand this advertisement (as a marketing tool for hard candy), a mapping between tongue and sock has to be established. Then, hard candy can be understood as a means against bad breath. In [21], a formal modeling is specified.

The examples which show that analogy-making and concept blending can be used to explain manifestations of creativity are numerous. If
it is true that several characteristics of creativity can be modeled by analogies
and concept blending, a computational approach towards creativity can naturally be based on an algorithmic theory of analogy and concept blending. Due
to the fact that analogy-making is the identification of structural commonalities
and concept blending is the (partial) merger of structures, the natural way for
an algorithmic approach is to use logic as the methodological basis. Whereas
for concept blending, a symbolic approach for modeling is quite undisputed,
the situation in analogy-making is more complicated: concerning the modeling of analogies, several neurally inspired and hybrid models have also been proposed. Nevertheless, on closer inspection it turns out that the most important subsymbolic aspects of such models are activation-spreading properties or synchronization issues in a (localist) network, whereas the basic computational units of the network quite often still are symbolic (or quasi-symbolic) entities (cf. [12] or [13] for two of the best-known neurally inspired analogy models). Additionally, logic-based models of analogy-making have a wider application domain in comparison to neurally inspired or hybrid models. Therefore, all in all, it seems a natural choice to apply logical means in modeling these two cognitive mechanisms.
4 A Logical Framework for Modeling Creativity

4.1 HDTP and Analogy-Making
In what follows, we will use Heuristic-Driven Theory Projection (HDTP) [20] as
the underlying modeling framework. HDTP is a mathematically sound framework for analogy-making, together with the corresponding implementation of an
analogy engine for computing analogical relations between two logical theories,
representing two domains (domain theories are represented in HDTP as sets of
axioms formulated in a many-sorted, first-order logic language). HDTP applies
restricted higher-order anti-unification [14] to find generalizations of formulas
and to subsequently propose analogical relations between source and target domain (cf. Figure 3), which can later be used as the basis for an analogy-based transfer
of knowledge between the two domains (see [1, 10, 16, 20] for more details
about HDTP and an expanded elaboration of recent application domains).
Fig. 3. HDTP's overall approach to creating analogies (cf. [20]): a generalization (G) of the source (S) and the target (T) is computed, and knowledge is analogically transferred from S to T.
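To give an informal impression of the generalization step, the following Python sketch performs plain first-order anti-unification of two terms, which is a strong didactic simplification of HDTP's restricted higher-order anti-unification [14]; the term representation and all names are our own assumptions and do not reflect HDTP's actual interface.

# First-order anti-unification (least general generalization) of two terms.
# Terms are nested tuples ('f', arg1, ..., argN) or atomic strings.

def anti_unify(s, t, store):
    """Return a generalization of s and t; record substitutions in store."""
    if s == t:                 # identical subterms generalize to themselves
        return s
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        # Same function symbol and arity: recurse over the arguments.
        return (s[0],) + tuple(anti_unify(a, b, store)
                               for a, b in zip(s[1:], t[1:]))
    # Differing subterms are replaced by a (possibly reused) variable.
    if (s, t) not in store:
        store[(s, t)] = "X" + str(len(store))
    return store[(s, t)]

# Toy example in the spirit of the classic Rutherford analogy:
source = ("attracts", "sun", "planet")        # solar-system domain
target = ("attracts", "nucleus", "electron")  # atom domain

subst = {}
print(anti_unify(source, target, subst))  # ('attracts', 'X0', 'X1')
print(subst)  # {('sun', 'nucleus'): 'X0', ('planet', 'electron'): 'X1'}

The computed generalization plays the role of G in Figure 3, and the recorded substitutions induce the analogical relation between source and target that is used for the subsequent transfer.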
Analogical transfer results in structure enrichment of the target side, which
usually corresponds to the addition of new axioms to the target theory, but may
also involve the addition of new first-order symbols. There are application cases in which the two conceptual spaces (in our case the input theories source and target) should not merely be (partially) mapped onto each other, but rather be partially merged in order to create a new conceptual space. In such cases, HDTP uses the computed
generalization, the given source and target theories, and the analogical relation
between source and target to compute a new conceptual space which is called a
blend space.
4.2 Concept Blending and HDTP
Concept blending (CB) has been proposed as a powerful mechanism that facilitates the creation of new concepts by a constrained integration2 of available
knowledge. CB operates by merging two input knowledge domains to form a
new domain that crucially depends on and is constrained by structural commonalities between the original input domains. The new domain is called the
blend, maintaining partial structures from both input domains and presumably
adding an emergent structure of its own.
In cognitive models, three (not necessarily ordered) steps are usually assumed to take place in order to generate a blend. The first step is the composition (or fusion) step, which pairs selected constituents from the input spaces into the
blend. In the second step, the completion (or emergence), a pattern in the blend
is filled when structure projection matches long-term memory information. The
actual functioning of the blend comes in the third step, the elaboration step, in
which a performance of cognitive work within the blend is simulated according
to its logic (cf. [8, 19]).
2 Whence, CB is sometimes referred to as 'conceptual integration'.

Fig. 4. The four-space model of CB: common parts of the SOURCE and TARGET concepts are identified, defining a GENERIC SPACE and a BLEND. The connecting curves within a concept reflect an internal structure.

Figure 4 illustrates the four-space model of CB, in which two concepts, SOURCE and TARGET, represent two input spaces (the mental spaces). Common
parts of the input spaces are matched by identifying their structural commonalities, where the matched parts may be seen as constituting a GENERIC SPACE. The BLEND space has an emergent structure that arises from the blending process and consists of some matched and possibly some of the unmatched parts of the input spaces (cf. Figure 4). Famous blending examples are Goguen's HOUSEBOAT and BOATHOUSE blends, which result, among others, from blending the two input spaces representing the words HOUSE and BOAT (cf. [9]).
Only a few accounts have been proposed that formalize CB or its principles in the first place, and those that have been proposed are unfortunately not broad enough to suit generic computational accounts of CB (cf. [2, 9, 19, 22]). CB itself noticeably still suffers from the lack of a formally precise model integrating its many aspects. The well-known optimality principles of CB, for instance, pose a challenge for developing such formalizations: these principles are the guiding pressures that are assumed to govern the generation of a feasible blend and to distinguish good blends from bad ones [8, 18].
In fact, CB has already shown its importance as a substantial part of cognition
and a means of constructing new conceptions. It has been used extensively in the literature in attempts at expressing and explaining cognitive phenomena, such as the invention of new concepts and the meaning of natural language metaphors, and it has proven useful in the expansion, reorganization, and creation of mathematical thoughts and theories ([1, 2, 8, 9, 10]).
The ideas of CB are very much related to the properties of a creative process, since a creative process can result in new insights through a ladder-ascending procedure that steps through "background knowledge" and subsequently refines the insights further and further until an innovation is spelled out (cf. Section 2). Undoubtedly, creative agents must have (enough) background knowledge before a creative process can take place, but mere knowledge is most likely not sufficient: for example, simply having knowledge about Maxwell's equations, the principles of semi-conductors, and the principles of graph theory is almost surely not enough by itself to devise the ideas of very-large-scale integration (i.e., the creation of integrated circuits by combining thousands of transistors into one single chip). We claim that this is exactly where CB comes into play.

Fig. 5. HDTP's view of concept blending. S and T are source and target input theories, m represents the analogical relation between S and T, and G is the generalization computed by anti-unifying S and T. The dashed arrows S → B and T → B describe the injections of facts and rules from source and target into the blend space B. Since the input theories may contain inconsistent information, the injections are in general partial.
HDTP now provides a framework for a CB-based computation of novel concepts given a source and target domain: Assume two input theories S and T
are given. The computation of an analogical relation between S and T by HDTP
outputs (besides other things) a shared generalization G of S and T obtained by the anti-unification process. This generalized theory G functions in the further process as the generic space of CB mentioned above. The blend space is then constructed by, first, collecting the facts and rules from S and T that are associated by the analogical relation between S and T and, second, projecting unmatched facts and rules from both domains into the blend space. This second step can result in clashes and inconsistencies. Furthermore, the degree to which the blend space covers S and T can vary. Taking additionally into account that for every given S and T HDTP can compute different analogical relations, there can be many possible blend spaces for a given input. Figure 5 diagrammatically depicts the overall structure of concept blending
using HDTP.
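The construction just described can be mirrored in a small Python sketch (again our own illustration under strong simplifications, not HDTP's implementation): facts associated by the analogical relation are carried over once, and unmatched facts from both sides are projected into the blend unless they clash with what is already there. The input theories and the clash test are purely hypothetical toy examples.

# Sketch of the blend-space construction described above (not HDTP itself).
# Facts are tuples; 'mapping' renames source symbols to their target
# counterparts as induced by the analogical relation.

def rename(fact, mapping):
    return tuple(mapping.get(sym, sym) for sym in fact)

def clashes(fact, blend):
    # Illustrative clash test: two different values for the same
    # (predicate, first argument) pair count as inconsistent.
    return any(f[:2] == fact[:2] and f != fact for f in blend)

def blend_space(source, target, mapping):
    blend = set()
    # 1) Matched part: source facts whose renamed version also holds in the target.
    for fact in source:
        if rename(fact, mapping) in target:
            blend.add(rename(fact, mapping))
    # 2) Project unmatched facts from both inputs, skipping clashes
    #    (hence the injections into the blend are in general partial).
    for fact in sorted(target) + sorted(rename(f, mapping) for f in source):
        if fact not in blend and not clashes(fact, blend):
            blend.add(fact)
    return blend

# Toy input theories in the spirit of Goguen's HOUSE/BOAT example:
HOUSE = {("medium", "house", "land"),
         ("lives_in", "person", "house"),
         ("made_of", "house", "brick")}
BOAT = {("medium", "boat", "water"),
        ("propelled_by", "boat", "motor"),
        ("made_of", "boat", "wood")}
MAPPING = {"house": "boat", "land": "water"}

for fact in sorted(blend_space(HOUSE, BOAT, MAPPING)):
    print(fact)

The resulting blend keeps the boat on the water, lets the person live in it, and resolves the clash between "brick" and "wood" in favor of one of the inputs, i.e. it yields a simple HOUSEBOAT-like concept. Depending on the order of projection and on which analogical relation is chosen, different blends result, which mirrors the observation that many blend spaces can exist for a given input.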
HDTP has successfully been used to compute concept blends in complex domains like mathematics. In [10], Lakoff and Núñez's mathematical grounding metaphors [15] are modeled, which are intended to explain how children can learn a rudimentary concept of numbers based on simple real-world actions in their environment. These metaphors and the emergence of an abstract number concept can be explained by analogy-making and concept blending. In [16], the
invention of a geometrical interpretation of complex numbers (i.e., the complex
plane) was computationally modeled by concept blending. This example shows
that even for rather formal and complex theories the creative generation of a
new concept can be computed using a logical approach.
5 Conclusions
In particular for AGI systems, creative problem-solving abilities and the finding of novel solutions in unknown situations seem to be crucial. We consider creativity a crucial step towards building a general form of AI. From a cognitive perspective, creativity can often be reduced to cognitive mechanisms such
as analogy-making and concept blending, which in turn can neatly be modeled
using logic-based approaches. Therefore, the apparent tension between creative
abilities of agents and a logical basis for their modeling disappears.
In fact, we are not the first to investigate the computational modeling of creativity as a cognitive capacity. Going back at least to work by Newell, Shaw, and Simon [17], researchers in AI and related fields have over the decades repeatedly addressed different issues and aspects of creative thought. The
results of these investigations range from contributions on the more conceptual
side (as, e.g., Boden’s theory of P- and H-creativity [4]), to concrete implementations of allegedly “creative systems” (as, e.g., The Painting Fool [7]). And also
in the computational analogy-making domain there already is relevant work on
the relation between creativity and analogy, most prominently exemplified by
Hofstadter’s contributions related to the Copycat system [11]. Still, on the one
hand, work on issues of creativity within human-style intelligent systems has thus far not gained wide attention in an AGI context. On the other hand, even
within the more general setting of computational creativity research, only very
few approaches try to integrate models of different cognitive capacities into a
system aiming for general creativity capacities, instead of limiting the focus to
modeling one specific kind of creative act.
This paper sketches the necessity of tackling the hard problem of creativity
in AGI systems. Although the described HDTP framework has been applied to
show that the computation of interesting blend spaces can be achieved in certain
rather complex (but highly specific) domains, no generalizations of such specific
examples exist so far. This remains a task for future work, besides a further
formally sound and complete characterization of concept blending on a syntactic
and semantic level.
References
[1] Ahmed Abdel-Fattah, Tarek R. Besold, Helmar Gust, Ulf Krumnack, Martin
Schmidt, Kai-Uwe Kühnberger, and Pei Wang. Rationality-Guided AGI as Cognitive Systems. In Proc. of the 34th annual meeting of the Cognitive Science Society,
2012.
[2] James Alexander. Blending in Mathematics. Semiotica, 2011(187):1–48, 2011.
[3] J.-R. Argand. Philosophie mathématique. Essai sur une manière de représenter les quantités imaginaires, dans les constructions géométriques. Annales de Mathématiques pures et appliquées, 4:133–146, 1813.
[4] M. Boden. The Creative Mind: Myths and Mechanisms. Taylor & Francis, 2003.
[5] L. Burki and D. Cavalluci. Measuring the results of creative acts in R&D: Literature review and perspectives. In G. Cascini, D. Cavalluci, and R. de Guio, editors, Building Innovation Pipelines through Computer-Aided Innovation, CAI 2011, pages 163–177. Heidelberg: Springer, 2011.
[6] N. Chomsky. Syntactic Structures. The Hague/Paris: Mouton, 1957.
[7] S. Colton. The Painting Fool in new dimensions. In Proceedings of the 2nd International Conference on Computational Creativity, 2011.
[8] Gilles Fauconnier and Mark Turner. The Way We Think: Conceptual Blending and the Mind's Hidden Complexities. Basic Books, New York, 2002.
[9] Joseph Goguen. Mathematical models of cognitive space and time. In D. Andler, Y. Ogawa, M. Okada, and S. Watanabe, editors, Reasoning and Cognition: Proc. of the Interdisciplinary Conference on Reasoning and Cognition, pages 125–128. Keio University Press, 2006.
[10] Markus Guhe, Alison Pease, Alan Smaill, Maricarmen Martínez, Martin Schmidt, Helmar Gust, Kai-Uwe Kühnberger, and Ulf Krumnack. A computational account of conceptual blending in basic mathematics. Cognitive Systems Research, 12(3–4):249–265, 2011.
[11] D. R. Hofstadter. The Copycat Project: An Experiment in Nondeterminism and Creative Analogies. AI Memo, Massachusetts Institute of Technology, Artificial Intelligence Laboratory, 1984.
[12] J. E. Hummel and K. J. Holyoak. A symbolic-connectionist theory of relational inference and generalization. Psychological Review, 110:220–264, 2003.
[13] B. Kokinov and A. Petrov. Integration of memory and reasoning in analogy-making: The AMBR model. In D. Gentner, K. Holyoak, and B. Kokinov, editors, The Analogical Mind: Perspectives from Cognitive Science. Cambridge, MA: MIT Press, 2001.
[14] U. Krumnack, A. Schwering, H. Gust, and K.-U. Kühnberger. Restricted higher-order anti-unification for analogy making. In Twentieth Australian Joint Conference on Artificial Intelligence, pages 273–282. Springer, 2007.
[15] G. Lakoff and R. Núñez. Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being. Basic Books, New York, 2000.
[16] M. Martinez, T. R. Besold, A. Abdel-Fattah, K.-U. Kühnberger, H. Gust, M. Schmidt, and U. Krumnack. Towards a domain-independent computational framework for theory blending. In AAAI Technical Report of the AAAI Fall 2011 Symposium on Advances in Cognitive Systems, pages 210–217, 2011.
[17] A. Newell, J. Shaw, and H. Simon. The process of creative thinking. In H. Gruber, G. Terrell, and M. Wertheimer, editors, Contemporary Approaches to Creative Thinking, pages 63–119. Atherton, New York, 1963.
[18] Francisco C. Pereira and Amílcar Cardoso. Optimality principles for conceptual blending: A first computational approach. AISB Journal, 1, 2003.
[19] Francisco Câmara Pereira. Creativity and AI: A Conceptual Blending Approach. Applications of Cognitive Linguistics (ACL). Mouton de Gruyter, Berlin, December 2007.
[20] A. Schwering, U. Krumnack, K.-U. Kühnberger, and H. Gust. Syntactic principles of heuristic-driven theory projection. Cognitive Systems Research, 10(3):251–269, 2009.
[21] A. Schwering, K.-U. Kühnberger, U. Krumnack, H. Gust, and T. Wandmacher. A computational model for visual metaphors: Interpreting creative visual advertisements. In B. Indurkhya and A. Ojha, editors, Proceedings of the International Conference on Agents and Artificial Intelligence (ICAART 2009), 2009.
[22] Tony Veale and Diarmuid O'Donoghue. Computation and Blending. Cognitive Linguistics, 11(3–4):253–282, 2000. Special Issue on Conceptual Blending.
[23] G. Wallas. The Art of Thought. C. A. Watts & Co. Ltd, London, 1926.