Experience and Abstraction:
the Arts and the Logic of Machines
Simon Penny
Presented Digital Arts and Culture 2007,
Perth Australia
Published Fibreculture Online Journal,
#11, Jan 2008
ABSTRACT
This paper is concerned with the nature of traditions
of Arts practice with respect to computational
practices and related value systems. At root, it
concerns the relationship between the specificities of
embodied materiality and aspirations to universality
inherent in symbolic abstraction. This tension
structures the contemporary academy, where
embodied arts practices interface with traditions of
logical, numerical and textual abstraction in the
humanities and the sciences.
The hardware/software binarism itself, and all that it
entails, is nothing if not an implementation of the
Cartesian dual. Inasmuch as these technologies reify
that worldview, these values permeate their very
fabric. Social and cultural practices, modes of
production and consumption, inasmuch as they are
situated and embodied, proclaim validities of
specificity, situation and embodiment contrary to
this order. Due to the economic and rhetorical force
of the computer, the academic and popular
discourses related to it are persuasive.
Where computational technologies are engaged by
social and cultural practices, there exists an implicit
but fundamental theoretical crisis. An artist,
engaging such technologies in the realization of a
work, invites the very real possibility that the
technology, like the Trojan Horse, introduces values
inimical to the basic qualities for which the artist
strives. The very process of engaging the technology
quite possibly undermines the qualities the work
strives for. This situation demands the development
of a ‘critical technical practice’ (Agre).
This paper seeks to elaborate on this basic thesis. It
is written not from the perspective of the antagonistic luddite, but from that of a dedicated practitioner with twenty-five years’ experience in the
design and development of custom electronic and
digital artworks.
Note and Disclaimer: This paper, inevitably,
focuses on issues which arise as a result of the
peculiarities of western cultural and technical
history, and reflects discourses conducted in the
English language. As discussed, some of the forces
influencing those historical flows relate to the
traditions of western philosophy, itself strongly
influenced by Christian doctrine. The question of
what form automated computation might have taken
if it had arisen in a culture with a different religious and philosophical history is a fascinating one.
Likewise, the way that such a culture might
negotiate the relation between technology and
culture might be very different from that which has
occurred in the West, and might offer important and
useful qualities.
Keywords
Generality, Abstraction, Instrumentality, Computer Science, Digital Cultural Practices, Media Art, Media Art Theory, Media Theory, Interdisciplinarity, Science and Technology Studies.
1. INTRODUCTION
“Art: no experience necessary”. This slogan, seen on a T-shirt in Singapore, evoked in me both amusement and a poignant sadness. While the intent may have been directed to the (questionable) notion that ‘anyone can be an artist’ – ie, that being an artist requires no professional training or special acumen – it also, perhaps unwittingly, asserts the
Cartesianism which has had such a withering effect
on Arts discourses in the modern period, and more
recently has been reinforced by the influence of
computational discourses.
Much of my writing has grappled with issues which
I find fundamental to the formation of art-practices
which exploit the capabilities of emerging
technologies (often but not always, involving real
time digital computation). [1][2]. These theoretical
inquiries arise out of pragmatic attempts to apply
these technologies to artistic practice. I have been
developing custom electronic and digital
technologies for cultural practices for twenty-five
years. Throughout that time, I have felt an abiding
disquiet regarding implicit disjunctions between
technological and cultural practices, at a
fundamental level. This paper is an attempt to make
explicit a set of issues which I feel are fundamental
to the contemporary socio-technological context, and
crucially relevant to questions of interdisciplinarity,
interdisciplinary digital arts practices, and to the
question of the role of the arts on campuses and in
the world at large [3].
The presence of arts practices on contemporary
campuses is fraught with complexity. While
contrasts are commonly drawn between the science
and humanities ‘sides’ of campuses, these practices
share a common commitment to abstraction,
symbolic notation and some notion of the power of
general applicability. The academy as a whole is a culture of symbolic notation, of the book and the text. The arts, at their core, bypass this translation from worldly experience of materiality to symbolic representation as alphanumeric characters. The arts are largely concerned with the way objects, forms, materials, and bodily actions can mean. The arts focus on immediate sensorial experience, unmediated by alphanumeric translation. I make this
generalization quite aware that it is full of holes, but
I make it in order to set such practices in stark
contrast to practices of alphanumeric abstraction,
and specifically to the act of coding and the
functioning of code as an alphanumeric machine.
I am at pains to emphasise that although I will
identify problematic aspects of theoretical, technical
and cultural practices, I am not antagonistic to any of
these. Indeed, I am an active and longtime
practitioner in them all, and in their combinations.
My goal is to help to establish a rigorous
interdisciplinary critical foundation from which
well-informed digital cultural practices can proceed.
This inquiry maps out a project of radically
interdisciplinary intellectual research and
artistic/technical production concerning the relation
of embodied practices to the current state of digital
technologies and the underlying values reified in the
technology. As such it entails:
1. a rigorously interdisciplinary agenda,
2. a tightly integrated relation between theory
and practice, where practice initiates
theorising and theory informs practice,
3. the recognition of the need for the
development of a theory of practice and of
the ‘aesthetics of behavior’.
In my opinion, the full force of some of these
realities is felt most clearly by the practitioner in the
complex process of realisation of cultural artifacts
employing these technologies. Contemporary digital
arts practice is shaped, in large part, by the
ramifications of the disjunctions discovered in a
process where technological components formulated
for instrumental ends are applied to goals which
exceed these instrumental conceptions. Michael
Mateas observed that ‘you push against the materials
and the materials push back’. But in the case of
digital arts, one might well assert that it is the
ideology pushing the materials back.
These concerns can be differentiated into subcategories. The most superficial (but nonetheless
challenging), have to do with the pragmatics of the
technical capabilities of these devices and the
development of a design and development process
for cultural applications which incorporates them.
Another layer of concern is about how employment
of these devices changes the kind of art which is
made. Basic to any theoretico-historical study of
emerging digital art practices is the recognition that
such practices are the confluence of two streams.
These are the traditions of industrial automation
rooted in rationalist science (and consumer
commodity economics); and the traditions of
artisanal arts practices and their related institutions
and philosophical contexts.
Digital Cultural Practice is a heterogeneous field, and distinctions can be drawn along diverse axes. A
fundamental distinction is between practices which
are the emulation in digital technologies of preexisting practices, as compared with those which are
novel and ‘native’ to the new medium, for which one
must struggle to find precedents in pre-digital
practices. 2D image treatment rooted in
photography, graphic design and painting, and
digital video, are prime examples of the former. On
the other hand are practices which are in some sense
native to the context of computational technologies,
and could not exist via backwards-emulation: 3D
modeling and animation, hypertextual and sensor
based interaction, interactive and multi-player
networked gaming. (While this distinction is
fundamental, it is not always clear, and as practices
evolve, these distinctions tend to become aspects)
[4]. Any practices that exhibit dynamic real time
behavior, or responsiveness to their environment and
require real time computation and/or networking fall
into the class of practices for which, I believe, a
wholly new branch of aesthetics is demanded: the
aesthetics of behavior.
A deeper level of inquiry concerns the negotiation
of the values of the professional culture which gave
rise to the machine with respect to the values and
traditions of the arts. What flows from this is a
recognition of how the values of the discipline of
engineering insinuate themselves into art practice
and art consumption, changing the practice. At root,
one is drawn into a deeply interdisciplinary
consideration of the fundamental values and worldviews of these two kinds of practice, these two
cultures.[5]
What is called for, then, is a simultaneous assessment of these values and their implications for contexts outside their ‘native’ territory, and simultaneously, a reassessment, with respect to these issues, of the core values, methodologies and sensibilities of the arts. We need, then, to find new, relevant and compelling arguments for the arts in the
new techno-social context. This will necessarily
lead, I believe, to a re-evaluation and re-valuing of
aspects of the traditions of the arts which have been
or are in the process of being occluded or lost, due
to the authority of technological rhetorics
perceivable on every campus, a certain kind of
cowering in the face of such rhetorics, and an
articulation deficit on the part of arts practitioners.
2. FRAMING THE ENFRAMING
Embodied in the machine there is an idea of what
the mind is and how it works. The idea is there
because scientists who purport to understand
cognition and intelligence put it there. No other
teaching tool has ever brought intellectual baggage
of so consequential a kind to it. Theodore Roszak.
[6]
“..most tools produce effects on a wider world of
which they are only a part, the computer contains its
own worlds in miniature…” Paul Edwards [7]
A computer…does not simply have an instrumental
use in a given site of practice; the computer is
frequently about that site in its very design. In this
sense computing has been constituted as a kind of
imperialism; it aims to reinvent virtually every other
site of practice in its own image. Philip Agre [8]
Heidegger proposed that the essence of technology
is a project of ‘enframing’. We make the world
amenable to manipulation and exploitation through
instrumental science. We dominate nature through
knowledge. The last few centuries can be
characterised by an ever-growing assemblage of
knowledge and power, science and technology,
which enframes the world and ourselves within it. At
issue is not whether instrumentality is a good thing,
but whether the machinery, the hardware and
software, is imbued with the ethos of
instrumentality. And whether, in applying this technology to human pursuits not previously embraced by such technology, these practices are thus perturbed in a way which might be deemed unfortunate.
Technologies do not pop into the world fully
formed; they emerge from specific cultures with
specific traditions. In order to understand what the
computer, at root, is, we must look at its history, its
precedents, the goals set for the research, the
interests of the funders, and the intellectual traditions
of the generating contexts. The question then
becomes: can we assume that the computer is a
neutral tool, or do specific notions about information, knowledge, and representation (etc) inhere in it?
This would not be a huge issue if the technology
remained located in its original application zones.
But the contemporary socio-technological situation
is one in which this technology is constantly moving
out across society and culture, engaging various sorts
of established practices for which the instrumental
paradigm (and related values) has dubious relevance.
This then is a call for an active critical engagement.
The following questions must be asked:
1. Are fundamental philosophical values reified in the technology? If so, how and where, and how are they expressed?
2. Are these values supportive or destructive of each of the new zones into which the technology moves?
3. These questions then imply an assessment of
the core values of the practices which are
affected.
4. If the results of the inquiry reveal areas of serious concern, then one must ask: is the situation salvageable, ie can the tools and the practices be adapted together to produce a positive situation? If not, must the entire project of technological art be abandoned, or is it possible to imagine a technology which would have such positive qualities?
5. If so, we must discuss how to build
technologies which are more supportive of
the core values of these practices. An entire
interdisciplinary technological research
program is thus implied, one that moves
towards technics from real human needs,
rather than moving out from motivations of
profit and manufacturing efficiencies,
through advertising campaigns into the
market.
The Cartesianism of the academy and its emphasis
on abstraction; the construction of Generality as a
virtue in computer science (which is itself entirely in
sympathy with the logic of economy of scale in
industrial production); and the emergence of the
digital commodity and its associated culture: these
three form an unholy alliance which demands
interrogation in the interests of more critically sound
digital cultural practice. The rationalism of the
academy is characterized by the valorization of
symbolic forms of representation: textuality, logico-mathematical symbol systems, and symbolic
representation more generally. Computer code is
entirely consistent with this environment. (The
paradox of code is that it implements and reifies
academic textuality as an operational machine).
Philip Agre makes a similar argument when he
observes that these fields ”concentrate on the aspects
of representation that writing normally captures. As
a result, theories will naturally tend to lean on
distinctions that writing captures and not on the
many distinctions that it doesn’t.” [9].
Cultural practices are traditionally often concerned
with specificities of history, personality and context.
They have not, in the past, been subject to evaluation
on the basis of instrumental criteria such as
efficiency, productivity, generality and optimality.
With the emergence of computing as a commercial
and a cultural force, these values have insinuated
themselves into areas of cultural practice. When the
values of computer science piggyback on
commercial technologies as they travel out of one
socio-cultural niche into another, they can cause
havoc. The computer is, in this sense, a Trojan Horse
which carries these ideas, hidden, through the gates.
Agre describes the application domains of
computational systems as a frontier: “Each of these
borderlands is a complicated place: everyone who
resides in them is, at different times, both an object
and an agent of technical representation, both a
novice and an expert…every resident of them is a
translator between languages and worldviews: the
formalisms of computing and the craft culture of the
“application domain””. [10]
We cannot dispute that computers and computation
constitute the paradigmatic technology of our day.
[11] As in Descartes’ day, when human physiology was described in terms of cogs and springs, so today even thought is susceptible to computational metaphors. It is reasonable to be deeply suspicious of any theorisation that adopts such metaphors unreflexively. Yet, by the same token, it is easy to understand why computational explanations are unreflexively adopted: they are the intellectual waters in which we swim, a constitutive part of our
world view. As a result, many fundamental qualities
of our culture evidence a drift as a result of the
ubiquity of computation and computational
metaphors.
Take for instance the astonishing changes in the
notion of play over the last couple of decades. When
stripped of its colorful monsters and futuristic
weapons, game-play in the paradigmatic first person
shooter is indistinguishable from the worst qualities
of industrialized labor: constrained and highly
repetitive tasks executed in social isolation, a tight
harnessing of user and machine, rewards linked to
high rates of production, to say nothing of the covert
inculcation of military skills. In this way, pleasure
has been instrumentalised and commodified. Any gaming partisan can take me to task (and several have), saying ‘but game X is not like that’, or ‘such an
analysis ignores aspects PQR which are culturally
good’. It will also be asserted that such games are
shaped by the market as well as by the makers who
themselves are the product of a larger and older
culture. I do not, in principle, dispute those
objections, but the fact remains that, for better or
worse, such game-play is colored and constrained by
the history of industrial labor and the development
of sciences of man-machine integration for military
applications.
To reiterate: the purpose of this paper is to explore
dimensions of the fundamental problematic
encountered when machines for abstract
mathematico-logical procedures are interfaced with
cultural practices whose first commitment is to the
engineering of persuasive perceptual immediacy and
affect, employing sensibilities and modalities alien
to the technology and possibly incompatible with its
structuring precepts. I must necessarily paint in
broad strokes, in order to broadly describe a class of
issues. Inevitably exceptions can be found. My
concern is not so much to persuade as to make
explicit a set of issues which must be engaged if
critically coherent practice is to occur in the field.
This paper is thus a call to a Critical Technical
Practice in Digital Cultural Practices. I want to draw
a distinction between Agre’s use of his term and my
use here. He called for such practice as a corrective
for the difficulties he recognized in a discipline (AI)
with a substantial history behind it. I want to argue
that in digital arts, we need a critical technical
practice in order to build a critical/theoretical
apparatus adequate and appropriate to an emerging
range of practices.
This conversation, is, at root, concerned with the
power of scientifico-technical rhetorics and their
relevance to fields which have come to be on their
margins, whether that is because of an imperializing
on the part of those discourses, or due to their attractive power to previously non-scientifico-technical fields in the current technophilic climate. (I do not mean to universalize regarding scientific practice; there are many ‘sciences’. I refer to the power of those discourses as presented in their oversimplified form for ideological or commercial purposes.) As Friedrich Kittler noted, following Nietzsche, “If the 19th century…was a victory of the
scientific method over science, then our century will
be one which saw the victory of scientific technology
over science.” [12]
There could be little argument that the computer is
the most complex appliance in common use at home
and in the workplace, so any discussion of its use
must be general, or must differentiate between
diverse aspects: the interface, the operating system,
various applications, the fundamental procedures
which define the von Neumann machine, theoretical
paradigms, the status of the device in contemporary
cultures and in the past, the various modes of use, as
research tool, as office tool, as pleasure tool, the
integration of the machine into a global network and
all the dimensions of networked computer use. To
address all these aspects would exceed by far the
time and space available here. Here I focus mainly
on the history, cultural placement, computational
fundamentals and the interface.
3. ACADEMIC CARTESIANISM AND ARTISANAL CRAFT
All the art projects I have worked on have at least
one thing in common... From an engineer’s point of
view, they are ridiculous. Billy Klüver
There is, in the western academy and other aspects
of western culture, a deep value which ascribes
greater worth to more abstract and ‘mental’ work,
and implicitly or explicitly denigrates work which
involves manual labor and skill, and therefore
devalues the people who do that work. This is a
dangerous and foolish belief system. Manual work is
not inherently stupid, and where it is, it has been made that way through the de-skilling of labor in industrial contexts, and prior to that in the proto-industrial context of slave-labor-driven Caribbean
sugar plantations. [13] It is necessary to distinguish
between aggressively de-skilled industrial labor, and
artisanal labor. Technical labor, crafts and trades,
bodily training in sports, dance and martial arts,
often require high intelligence (think of virtuoso
musicianship). Intelligence and manual skills are not
mutually opposed.
Computer science, as a technical discipline, reifies
philosophical notions which, oddly, were already
under interrogation in other disciplines prior to its
formation. Among these are the Cartesian dualism, an implicit and unproblematised Objectivism, and a simplistic notion of Intelligence, inflected by a paranoid militarism. The conception of intelligence in computational discourses is rooted in an early-to-mid C20th approach valorizing mathematico-symbolic
problem solving – precisely the same functions that
the first generation of AI researchers sought to
simulate in their systems (famously Newell, Simon
and Shaw’s Logic Theorist). This monolithic
conception of intelligence has been largely
abandoned by the psychological community,
replaced with an idea of intelligence as individually
varying aptitudes in 20 or more aspects. It is surprising that mathematical logic should be
unilaterally hailed as the hallmark and epitome of
intelligence in humans, and yet the process is utterly
consistent with a logic of isomorphism (Maslow)
ubiquitous in computer science. Boolean logical
operations are implemented as a machine – then the
machine demonstrates (via applications such as
Logic Theorist) that human intelligence is logico-mathematical in nature. Here then is a prime
example of the representational nature of computer
science, in which an automated system is built to
emulate a certain description of a human capacity,
and this system and the rhetoric around it then goes
on to form an entire school of thought about human
thinking – computationalist cognitive science.
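To make the circularity concrete, consider a minimal sketch (in Python, my own illustration and in no sense a reconstruction of Logic Theorist) of what ‘the logical manipulation of symbolic tokens’ amounts to. The machine matches the shapes of strings and rewrites a set accordingly; it knows nothing of what the tokens might mean, yet it is precisely this kind of derivation that was offered back as a model of thought.

# Toy sketch (illustrative only): derivation as manipulation of symbolic tokens.
# The names 'p', 'q', 'r' are arbitrary strings; the machine only matches shapes.

def forward_chain(facts, implications):
    """Repeatedly apply rules of the form 'if A then B' to a set of known tokens."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in implications:
            if antecedent in known and consequent not in known:
                known.add(consequent)
                changed = True
    return known

facts = {"p"}
rules = [("p", "q"), ("q", "r")]
print(forward_chain(facts, rules))  # {'p', 'q', 'r'}: 'r' is derived, but nothing is 'understood'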
This issue is of great significance in the current
discussion, as the kinds of intelligences which
enable the arts and cultural practices are among
those excluded from the mathematico-symbolic conception. Handwork can involve high intelligence and sensibility. But that kind of intelligence – embodied, kinesthetic and multi-modally sensorial intelligence – tends to be irreconcilable with textual, alphanumeric logico-symbolic forms of work. Contrarily, the process of translation from the abstract to the concrete is an exercise of high intelligence, and valuable knowledge and insight are drawn from actual manipulation of matter, as
opposed to talking about it or using pre-constructed
simulations.
Conventionally, artists are ‘not very clever but they
are good with their hands’. The implication is that
artists are stupid, but it also reinforces the mode of
bastardized Cartesianism which infects our
campuses that asserts that manipulating matter and
intelligent thought are mutually exclusive. [14] An
artist must have a deep sensitivity to their tools and
their medium. There is a tension between the
academicism required of the university, and the
traditions of bodily training and kinesthetic and
proprioceptive sensitivity development so crucial to
virtuosity. As programmable technologies have
become increasingly usable, coding as a practice has
become increasingly pervasive and basic mechanical
and electronic skills have seemed less relevant. In
many fields, computer technology is causing a
problematic drift away from embodied and material
intelligences. [15]
To a generation naturalized to commodity digital
technologies since childhood, three related
assumptions seem to qualify their relation with that
technology: an assumption that all possible digital
commodities already exist; that they are value-neutral; and that all that is required in making a
project is to plug them together and provide
necessary software glue. None of these could be
further from the truth. All commodity technologies
come with constraints as well as affordances. These
constraints are often only revealed in the process of
working with them and attempting to make them do
something they were not designed explicitly to do.
Poor choice of high level components can make
tasks more complex and more difficult than
necessary. Such reality is consistent with the general
principles of knowledge representation; indeed, such
artifacts embody and reify certain modes of
knowledge representation.
The notion of information having the possibility of
existing in a disembodied form is, we must remind
ourselves, axiomatic and rhetorical and without
evidence. All information is materially instantiated,
and the idea that information can be migrated from
one material to another does not assert the
independent immaterial existence of information as a
thing. The entire computational dualism of
immaterial information inhabiting a material
substrate is nothing but a recapitulation of
Descartes’ peculiar and tenuous dualism, conjured
up to resolve his own crisis concerning the relation
between the immaterial soul and the material body.
It is odd that computer science would take as so
fundamentally formative an hypothesis that has not a
shred of scientific evidence to support it.
Contrary to the idea from symbolic AI that
‘intelligence’ was the logical manipulation of
symbolic tokens in an abstract reasoning space
unconnected to the world, it is equally easily
asserted that interaction with the physical and social
world constitutes intelligence, and that historically,
AI took its position because the necessary sensing
and interpretation tasks were technically
challenging, if not intractable. (see ‘the matter with
matter’, below).
4. MAN-MACHINE INTERACTION AND TECHNOPHILE RHETORICS OF LIBERATION
“Our computers retain traces of earlier
technologies, from telephones and mechanical analogs to directorscopes and tracking radars.”
David Mindell [16]
As David Mindell reminds us, the physical
conformation and functionality of the machine we
use is determined by the history of technologies
from which it arose. It is a skeuomorphic
assemblage. The history is military, bureaucratic and
commercial, to varying degrees (depending on who
you read). Interactive multimedia, we must recall, is
the child of Cold War computing research. The ur-HCI project was the SAGE system, which put
soldiers with keyboards and lightpens in front of
monitors, to accomplish the complex pattern
recognition functions which the system could not
autonomously manage. This constellation of
technologies was the model for the keyboard-mouse-monitor paradigm. The fact that this harnessing of
flesh to machine was later clad in the rhetoric of
liberation in the heyday of interactive multimedia
remains deeply ironic.
Why do I sit at a desk to use a computer? The
unavoidably historical answer is that the device was
developed as a replacement for a component of a
preexisting organisational and architectural order, in
this case the business office. The desktop computer
is, or was, an enhanced typewriter and calculator
with added filing-cabinet functionality. It follows
then that it is particularly useful and relevant for
activities which resemble office desk activities, and
is decreasingly appropriate for activities whose
social and architectural placement diverges from that
scenario. Most cultural and artmaking activities do
not resemble office work in their physical contexts,
methodologies or goals.
While various pioneers in computer art have been
and are being inserted into a retroactively compiled
pre-history of the field, the fact remains that in the
formulation of the fundamental aspects of the
machine, hardware and software, and their
relationship, serial processing and operating
systems, networking and interface, artistic needs,
goals and methods were never considered and no
artist was ever consulted. It seems surprising from
this perspective that any artist would imagine that
computer art might be possible. (That, I suppose, is
the genius of art.) By the same token, it is no
surprise that in attempting to utilize the machines,
artists have experienced repeated frustrations. (In the
past I have likened the situation to sending a SWAT team into battle with excellent hair-dryers and
toaster-ovens.) And seldom does this frustration
reach a level of analysis where a distinction can be
made between a technical fault (a bug) and a
limitation in principle.
There is no end to the accolades we hear offered for
the triumphs of computer animation, or scientific
visualization, or hypertext, or the web, or multi-user gaming – new cultural practices which are more or
less compatible with the various constraints of
conventional computing and computer use. It is
much more difficult to ask – if the basic
conformation of the device and its peripherals were
different, what kinds of socio-cultural practices
might be accommodated, assisted or afforded? This
very acceptance of the hardware conformation of the
machine constrains the kind of practices which can
occur. Here then is a research agenda which begins
from rigorous intellectual inquiry and offers the
prospect of unimagined realms of technical and
aesthetic development.
5. EMBODIED AND SITUATED PRACTICES AND THE DRIVE TO FORMAL ABSTRACTION
As a longtime practitioner of practices of embodied
intelligence, I remain alarmed that we are prepared
to accept as generally useful, a machine system
which is only capable of interpreting, as input, linear
strings of alphanumeric characters. The machine
knows nothing of the world, except that which a
human predigests and feeds to the machine as
alphanumeric strings. Such a system is excellent for
doing arithmetic and accountancy, calculating tide
and firing tables, storing and retrieving textual
records (the kinds of practices which the technology
was originally designed for) because these practices
have already been abstracted into formal
mathematico-logical representations and
organizational and cataloging systems generations
before the machine existed. (Implementation of
algorithms for sorting by date and alphabetically clearly depends on the prior development of
calendars and alphabets, and the construction of a
more or less universal literacy with regard to them.)
Indeed, the machine is well attuned to these
practices because the formalisms upon which the
machine is based, and the formalisation of those
organizational practices arise from a common root.
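A trivial sketch (Python, with invented data, purely illustrative) makes this dependence explicit: the machine sorts names and dates only because a collation order for letters and a formalized calendar notation already exist; the intellectual work lies in the prior formalisation, not in the sort.

# Illustrative only: sorting presupposes prior formalisms.
from datetime import date

names = ["Turing", "Babbage", "Lovelace"]
print(sorted(names))   # depends on an agreed encoding and ordering of letters

events = [date(1945, 6, 30), date(1936, 11, 12), date(1843, 10, 1)]
print(sorted(events))  # depends on a calendar formalized long before the machine existed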
Reflect on the larger historical arc, beginning, as AI
practitioners like to do, with Descartes and the
establishment of rationalism. Here, in broad terms,
we see the success of attempts to categorise and
organize the world according to mathematico-logical
ordering systems. Subsequently, we see the
development and increasing sophistication and
elaboration of techniques for designing and building
machines: engineering and the industrial revolution.
This paradigm gains momentum as electricity, radio,
telegraph and related technologies arise, and
coalesce as electrical engineering and electronics,
during a time when engineering itself is being
reconfigured as an increasingly analytic and
mathematical discipline [17]. From this technical
base arises electronic computing. Now while many
of the founders of AI were psychologists, the
technology they employed had a different
provenance. The implementation of Boolean logic as an electronic machine was the foundation upon which programs like Logic Theorist ran. So it should be no surprise then that such technology was found to be
highly amenable to the automation of mathematical
logic, and by the same token, it explains why
problems outside that realm have been found so
intractable. Again, Philip Agre concurs: “a theory of
cognition based on formal reason works best with
objects of cognition whose attributes and
relationships can be completely characterized in
formal terms.” [18].
Our world is replete with complex cultural and
social practices in which the calculation, storage and
retrieval of data play a vanishingly small part, and in
which spatial awareness, texture, gaze, gesture, tone
of voice, perceptual integration, active sensing,
kinesthetics and proprioception (all sensibilities
outside the ken of the computer) play key roles.
What this means, in effect, is that the technology which we are encouraged to apply to these functions
is incapable of sensing or measuring these qualities
(I hesitate to even call them variables). In effect, the
conventional PC is a filter which filters out all
aspects of our complex embodied intelligence except
that small part which can be encoded as strings of
alphanumeric characters. Rhetorics of computing,
both marketing rhetorics and the more complex and
subtle characterisations of the computational in
literature and film, commonly contain extropian and
anti-corporeal sentiments which imply that human
experiences which are not amenable to serial
Boolean logical expression are somehow irrelevant.
Surely this should be an issue of greatest concern to
practitioners and theorists of embodied practices, yet
there is an almost entire absence of informed critical
assessment of the relevance of such a technological
paradigm to activities like, for instance,
choreography, painting, cooking, sailing, clinical
diagnosis or physical therapy.
At root, this is the danger of the implicit acceptance
of the von Neumann machine as the paradigmatic
technology of our day, as is the case in
computationalist cognitive science. By taking the
functioning of the serial processing Boolean
computer as an acceptable analogy to the
functioning of mind, we thereby afford the
development of a specific range of ideas and
research programs and close off the possibility of
many others. There is thus, an underlying and
seldom acknowledged conflict between the values
reified in the hardware and software of computer
technology, and the purposes to which these
technologies are put. The simple fact is that media
arts employ technologies designed for instrumental
purposes – automation, accountancy, archiving. It
cannot be asserted that artistic needs and purposes
were ever considered in the design of the basic
technologies. It follows then that existing computer
technologies are unlikely to be optimally appropriate
for such applications. This is unlike, for example,
the evolution of the medium of oil paint, which was
developed over generations specifically for the task
of painting pictures.
A machine designed for manipulating strings of
alphanumeric characters may simply not be relevant
to certain human tasks – why should we assume it
should be? Why should we be at such pains to deny
the obvious fact that our intelligence and our
embodiment are precisely attuned to each other,
through childhood development as well as through
evolutionary process? Our intelligence is expressed
in all modes and all combinations of modes of our
lived physical being. Yet we are increasingly
naturalized to the idea that we should be ready to
translate any sort of human notion or practice into keystrokes, in order to make it acceptable to this
cloth-eared device. Not only is it absurd that such an
expectation be attached to such a purportedly
marvelous technology, but it relegates any human
quality not amenable to such processing to oblivion
or irrelevance.
All too often, digital culture workers seem to think
in terms of ‘how can I (change my behavior in order
to) exploit this (available, commodified)
technology’. This assumes that the currently
available range of commodified hardware products is adequate and sufficient. I find this preposterous.
Vast new areas of research and practice will open up
if we instead ask: ‘what sort of technology would be
an asset in the prosecution of my chosen task?’
We are conditioned to imagine that the output (and
input) of an interactive system will be symbolic,
textual and graphical, probably on a monitor: a
technologically arbitrary arrangement determined
only by historical factors. Even though the
hegemony of the desktop appears to be fissuring, the
new portable, locative and wearable technologies
generally simply miniaturise and otherwise replicate
this paradigm, as was the case for many of the
interfaces developed for immersive stereoscopic
environments (VR) in the 90’s: devices such as the
‘wand’, which absurdly ported the pointing-device-with-buttons idea (originated to compensate for the lack of spatiality and tangibility of the desktop) into the realm of embodied interaction.
The machine which has trickled down to artists is a
machine for the quasi-arithmetic manipulation of
abstract alpha-numeric symbols. [19] It is very good
at that. But if digital arts practices are to develop in
a well theorized way, we must ask: is art practice,
always, primarily or ever, about the logical
manipulation of symbolic entities? Indeed, to ask
this question would be to open a range of important
inquiries. Occasionally, exploratory work in the
media arts explores the range of possible practices
less constrained by paradigms of data-entry and
command-and-control. It is worth noting that while
such practices were more common in mid twentieth
century art+technology experimentation, they were
less common in late twentieth century work, after
the consolidation of the desktop computer
paradigm. It may be that such projects are now
more confounding to audiences due to the
naturalization of that audience to the desktop and
related paradigms.
6. ART AND AI
Art and AI are remarkable foils for each other.
While AI saw logical problem solving as the
defining pinnacle of intelligence, that capacity does
not rank high in any conception of intelligence in the
arts. Whereas AI came to grief in the complexity of
everyday life, art would come to grief in attempting
logical generalism. While CS takes generality as a
virtue, one might propose that Art takes specificity
as a virtue. While reductivism is part of the very
fabric of CS, art is holistic. Artificial Intelligence
found its initial successes in the automated solution
of mathematico-logical problem solving activities,
the Logic Theorist and GPS of Newell, Simon and
Shaw, chess programs, toy and micro-worlds and the
like. These were heralded as heights of intellectual
achievement but they were consistent and
constrained, local logical domains. AI stumbled on
the realities Kurt Goedel articulated, as it attempted
to extrapolate these successes to the real world,
spoken language and the like: untidy, heterogeneous and illogical domains in which artists are trained to
operate.
The drive toward abstraction and generality came
into computer science from the mathematical side.
Abstraction is beguiling in its promise of
transcendent clarity. Abstraction affords a certain
kind of power, yet it also forgoes any power that
specificity and the particular can bring. As Wendy
Chun notes, “Programming languages inscribe the
absence of both the programmer and the machine in
its so-called writing.” [20]. Indeed the march to ever
‘higher level’ languages creates increasing
abstraction in which both hardware specifics and
stored data are increasingly effaced. Instrumentality
entered from another side, linked to digital
technologies as they arose as a form of industrial
production. Against these, as it were, are arrayed
situated and embodied sensibilities native to the arts,
and a commitment to material specificities.
In the histories of the plastic arts, in the modernist
period, there was a notion that the appearance of an
artifact should betray the nature of its materials and
methods of manufacture. Hence the Bauhaus dicta of
‘form follows function’ and ‘truth to materials’.
Computing, contrarily, hews to a postmodern
aesthetic of surface and superficiality: the function
of the interface is to obscure the true nature of the
machine. To protect the machine from the user
and/or vice versa is the motivation of HCI.
In terms of effective HCI, a tool or package is
successful to the degree that it is intuitive. That is,
that it recedes from conscious awareness, that it
facilitates an illusion that there is no mediating
technology between the user and the work object or
process. Contrarily, that an artwork should contrive
to obscure its own artifice is almost unconscionable
in the modern and postmodern periods. Works often
exist to bring to attention the artifice of the medium,
the qualities of the technology or the way they
perturb the situation or object of attention.
Illusionism is constructed only to be broken, or
intentionally problematised. In these terms, (naïve) HCI and (critical) media art practice are entirely opposed. If HCI aspires to be
‘ready to hand’, media art aspires to be ‘present at
hand’. In my own work Fugitive, an illusion of
immersion was facilitated, only to be abruptly
disrupted, in an attempt to bring the user to an
awareness of their own trajectory of embodiment (as
opposed to their subject position as an actor of
limited agency in a prestructured world) and their
own willing suspension of disbelief. The function of
the project, then, was intentionally reflexive and ‘meta’. It was conceived, as most of my works are,
as an intervention into a discourse, in the form of an
artifactual system which is directly experienced
rather than read.
A significant difference between computer science
research and media arts practice lies in the
ontological status of the artifact. As discussed above,
for an artwork, the effectiveness of the immediate
sensorial effect of the artifact is the primary criterion
for success. It is engaging, it is communicative, it is
taken to be coherent, or it is a failure. The criterion
for success is performative. Most if not all effort is
focused on the persuasiveness of the experience.
Backstage may be a mess, a kluge. In computer
science the situation is reversed. If the physical
presentation is a little rough around the edges, or
even missing entire pieces, this can be overlooked
with a little handwaving, because the artifact
functions as a ‘proof of concept’ which points to the
real work, which is inherently abstract and
theoretical.
7. INFORMATION, COMMUNICATION, MEANING
Fundamental to CS is the idea of information, and
the idea that information exists, or can exist, in some
abstract non-material realm, separate from, and independent of, its material substrate. This is an (inherently Cartesian) assertion and not a self-evident truth. As a structuring assumption it is ripe
for critique. As such, it has permitted the sorts of
advances compatible with the paradigm, but,
equally, has excluded entire avenues of research.
Due to the elaboration of this paradigm, an
ontological drift in the term ‘information’ has
occurred over the past half-century under the
influence of the development of techniques which
utilize Boolean operations in von Neumann architectures. Expressions such as Information
Economy and new disciplines such as Informatics
attest to this drift. The range of common contemporary uses of the term indicates that, like many expressions in common language to which technical definitions and uses have been retroactively applied, the word possesses a
hazy cloud of meanings. I suggest that the discipline
is structured by an informal working definition
which is not unproblematic because it confuses
‘information’ with ‘computability’. ‘Information’
has been formalized as quantifiable and logically
manipulable (Shannon), and hence, information
which is not quantifiable and logically manipulable
is no longer information. Now it may be that it is not
logically manipulable because there has been no
compelling (commercial) reason to render it
manipulable, or it may be that it is inherently not
amenable to logic or quantification in that sense. We
must therefore examine the value structure thus
created: if logical manipulability is valorized, then
vast realms of human practices are hence
devalorised. [21].
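To be concrete about what that formalisation consists of: in Shannon’s framework the information of a source is its entropy, a quantity defined over the probabilities with which its symbols occur,

H(X) = -\sum_{i} p(x_i) \log_2 p(x_i)

a definition which is silent about meaning, context and embodiment. Whatever cannot be cast as a distribution over discrete symbols simply falls outside the measure.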
As Ronald Day notes: “Within the context of
information theory’s operational and statistical
understanding of language and affect, all human
actions are subject to statistical and predictive
prediction and design. Needless to say, such
prescriptions have dire consequences for any
statistically marginal dialects, forms, genres, or
identities that are not socially dominant, as well as
for activities of language (such as poetry, art, and
even, sometimes, critical theory) in which
language’s formal and social functions precede and
ground their more, so called, “communicational”
functions.” The conduit metaphor “not so much
plays the role of describing an empirical event, but
rather, of transmitting and prescribing a certain
model of language and society. That model is an
utopian one of a formally closed communicational
society, similar to that which is found in the “closed
world” of the Cold War (see Edwards, 1996).” [22]
In effect, these operational and statistical
understandings construct a hegemonistic order
which changes a landscape of plurality and diversity
into an oppressive order, marking certain practices
as deviant and forcing them underground.
Interestingly, it is into this subterranean well that
mainstream culture then dips for novelty. One way
to understand the artistic avant-garde is as the
provider of this mechanism to reintroduce
(memetic?) variety from the cultural ‘wilderness
park’ or ‘biodiversity preserve’ which is thereby
constructed – a protected zone of (named and
tolerated) deviant behavior which is simultaneously
nurtured and marginalized. The mechanisms of the
art world – small semi-commercial galleries and
performance venues, small presses, low budget
media production, and marginal public media (ie Pacifica) provide the ‘conduit’ by which this
diversity is sucked back into mainstream culture in
metered doses to revivify it. [23]
8. OBJECTIVITY AND ENACTION
One of the large trends in western thought over the
last century, felt equally in the sciences, in the
humanities and the arts, has been the challenges to
the presumed authority, validity or even possibility
of objective knowledge or a detached objective
viewpoint. This trend is perceived in the crisis
Heisenberg and Schroedinger brought to modern
physics as it is in the problematising of authorial
status and the authority of texts (Barthes, Derrida
etc). In the sixties and seventies, both second-order
cybernetic theory and autopoietic theory addressed
the condition of the observer directly. As Heinz von
Foerster remarked “Objectivity is a subject's
delusion that observing can be done without him.”
The culture around computer science, like any other
academic discipline, has its inconsistencies and
oddities. These include subscription to an
unreconstructed Cartesianism and unreconstructed
Objectivism, explicit in the ‘god’s-eye view’ often encountered in software and systems. [24]
Enactive and situated theories of cognition and
phenomenological critique of AI (Dreyfus,
Suchman, Varela, Lakoff and Johnson, et al)
exposed a platonic and top-down spirit in that
enterprise and the school of cognitive science
associated with it, and led to a recognition of the
relevance of theories of situated and embodied
cognition. This opened a way for more subjective
and less autocratic modes of technical practice
(Brooks, Maes, Agre, Horswill and Chapman et al).
David Marr begins his well-known 1982 book on
vision with the statement that "vision is the process
of discovering from images what is present in the
world, and where it is". [25] But in human and
animal biology, the study of perception as a one-way process, and of methods which are clinically isolated from lived experience, has given way to the
conceptualization of active sensing, which asserts
the importance of examining the kinesthetically
engaged, temporal coupling of sensing and action.
In the plastic arts we see an ongoing challenge to the
single, detached, privileged viewpoint reified in
perspective, first in modernist image making
(Cubism) in which the conventional perspectival
view was perturbed and multiple viewpoints were
combined, thereby problematising the unique and
authoritative viewpoint of the observer. By the mid
60’s, the authority/authoriality of the artist was
actively under critique by artists themselves, as was
the divide between critic and artist, and between text
and the plastic arts (Conceptual art). This process
generated a profusion of new genres in which the
reliable stasis and formal relationship between
viewer and work, as well as between artist and work,
were broken down. In such cases the spatial and
temporal subjectivity of experience was emphasized.
Such works were thus often disorienting to their
audiences.
As I have previously observed, the theoretical
agendas of (at least the first generation of) media
artists were established in this period. In hindsight,
one can view the radical work of the 60s and 70s as
prefiguring and modeling the challenges of digitally
based art forms. (This would be consistent with the
idea that one of the functions of art in our culture is
as a cultural ‘early warning system’.) With the
availability of computational tools, the arts have
engaged in the design of (automated) behavior and
interaction. Recognition of this paradigm shift
demands the abandonment of old aesthetics of
passive contemplation and calls for the formation of
an aesthetics of dynamic engagement by and with
cultural artifacts [26]. This trend gives rise to modes
of cultural practice in which the user takes some
active and constructive role in the creation of her
experience. This trend is clear in the transition from
the authority of the cinematic eye/screen to the
distributed contingencies of multi-user gaming in
hybrid environments combining the agencies of
remote players and semi-autonomous software
agents or ‘bots’.
As instrumentality is natural to the realm of
machines, so autopoiesis and symbiotic relationships
are natural to biological organisms and systems
thereof. In biological (as well as social) systems,
cybernetics notwithstanding, identification of
discrete inputs and outputs depends on a tenuous and
strained contrivance. A critically motivated practice
might work towards technological projects in which
organization is based on an autopoietic or ecological
metaphor, where none of the entities or parts
produce ‘output’ but, in the spirit of Actor Network
Theory, all entities – humans, animals, instruments, networks and institutions – are conceived as agents linked in a hybrid, heterogeneous and mutually
enhancing circulation. New paradigms for
understanding and making interactive cultural
pursuits may be theoretically enhanced by reference
to contemporary Cognitive Science, Neurophysiology, Ecology and Social Theory.
9. GENERALITY AND SPECIFICITY
A fundamental commitment of computer science is
that of the General Purpose Machine. From the
outset, generality was taken to be desirable, for
reasons which are unassailable in formal terms. The
principle of the ‘general purpose machine’, is an
elaboration of Alan Turing’s fundamental notion of
the ‘Universal Machine’ (known latterly as the
Turing Machine). The virtue of generality was
reinforced with the GPS (General Problem Solver)
of Newell, Simon and Shaw. It is basic to the
concept of the digital computer, (this is textbook
computer science history). The unquestioned
axiomatic acceptance of the concept of generality as
being a virtue in computational practice demands
interrogation, especially when that axiomatic
assumption is unquestioningly applied in realms
where it may not be relevant. Indeed, the fact that the question of the universal relevance and validity of the concept of generality is rarely asked itself suggests fertile ground for interrogation. [27]
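The formal claim itself is easily illustrated. A minimal sketch (Python, my own example, not drawn from any of the sources cited here) shows the sense in which a single fixed mechanism is ‘general purpose’: its behavior is supplied to it as data, a rule table, and the same mechanism will execute any other table. The elision discussed in the following paragraph is the transfer of this strictly formal property onto a particular commodity object.

# Illustrative sketch of 'general purpose': one fixed mechanism, behavior as data.
def run(table, tape, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = table[(state, symbol)]   # look up the supplied rule
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# One particular rule table: append a '1' to a unary numeral.
increment = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(run(increment, "111"))  # '1111'; the same run() executes any other table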
Historically one can identify a two-stage process of
elision and reification, related to the economic
principles of the computer industry and the rapid
uptake of the computer in diverse socio-cultural
contexts far from the original applications of the
machine. The first stage was the transfer of the
notion of ‘general purpose’ to the beige colored box
and its big vacuum tube appendage. This was quite possibly a result of the odd combination of ignorance,
mendacity and pecuniary interest so particularly
characteristic of the advertising (so-called) industry.
The idea of generality, entirely substantiable in
formal mathematical terms, became thus attached to
a physical commodity. The notion of generality thus
offered justification for highly profitable strategies
of consumer commodity economics. The casualties
of this capitalist sortie are seldom discussed. But if
all uses for the computer could be contained by
alpha-numeric desk-work, the other sorts of human
practices which were not compatible with that
particular work culture, or not identified as
profitable enough sectors to justify the investment in
software tool development, had to reshape
themselves or suffer the stigma of remaining
uncomputerised.
The world was thus divided into the computerized
and non-computerised realms, and cachet and
advantages flowed to the computerized practices, in
popular culture, which was itself increasingly
defined by and located in digital practices; as in the
academic and research worlds, where
computerized/computerizable disciplines were able
to access comparatively huge funds (much of which
flowed directly back to the computer hardware and
software industry). The result of this trend was that
all sorts of human practices for which the computer,
as formulated by the industry, was not ideally
conformed, often then bent and reconfigured
themselves to adapt, often at a significant cost to the
integrity of the practice and its sensibilities and
knowledge base.
This process is observable in diverse fields and
disciplines over the last quarter of the C20th, from
engineering to the arts, but it is in the arts that such
trends are particularly stark. This is because, as
argued, the arts practices rest on such profoundly
different foundations, both historically and
theoretically. This then is the core of my argument.
Artworks are made by individuals of particular
physical conformations, with particular perceptual
and physical skills, immersed in specific cultural and
historical contexts.
10. EMBODIMENT, SITUATION AND TOOLS
Pataphysics will be, above all, the science of the
particular, despite the common opinion that the only
science is that of the general. Alfred Jarry, [28]
Tools are specific to functions. There is no such
thing as a general purpose tool. Every craft has a
range of specialised tools. The skilled craftsman is
highly discerning about matching a task to a tool.
The notion that generality is a virtue is opposed to a
generally accepted notion that there is a tool for
every job and a job for every tool. Contrarily, informed by the dual evil motivations of user-friendliness and generality, software tools seek to
reduce the diversity and specificity of individual and
cultural motivations and world-views: user friendly
software tools make easy (generalisable) tasks easier
and difficult (more specific) tasks more difficult.
In opposition to the ideology of generality, one
might propose that art is naturally Pataphysical. An
artwork is deemed to be excellent if it addresses a
particular situation with persuasive precision. That
is, by a subtle combination of the signifying
potential of spatial organisation, materials, sounds,
images and user dynamics; a coherent experience is
generated which leads the audience/user into a
particular realm of interpretation. An artwork is
successful to the extent that it is specific. Generality
is not a virtue in the Arts. Generality and affective
power seem to be mutually exclusive. It’s hard to
imagine what a general purpose artwork would be
like, unless it was one of those generic and vacuous
hotel room pictures, whose work is to proclaim a
respect for art on behalf of their owners, while
safely avoiding the danger inherent in actually
being art. This is the fatuous conundrum at the root
of the myriad of techno-cultural projects which
attend to and intend to automatically generate
cultural artifacts. The notion of the general purpose
machine has indisputable power and relevance in its
place. But we must be wary of the drift of axiomatic
assumptions which can flow from a paradigmatic
technology of both rhetorical and economic power.
Over the latter part of the C20th, computer-based image-making became increasingly sophisticated as
the technology became more affordable and
dispersed across culture. As such image processing
engaged the realm of painting, we can observe a
degeneration of the bodily and material culture of
painting. Painting as a tradition of practice has
honed its tools and techniques over hundreds of
years, such that the painter trained in and practised a diverse and integrated range of proprioceptive skills, kinesthetic sensibilities and perceptual procedures which, taken together, resulted in a practice of infinite diversity, expressiveness and subtlety. All this then went out with the bathwater
when painters were enticed to sit with a fixed focal
length at a small scintillating image while pushing a
little plastic box around on a small space of desktop
nearby.
The interface and tools used some of the language
of painting, but the actual physical interface was
utterly unlike the performative context of the
painter: the display was small and of low resolution,
the complexity and subtlety of physical skill was
completely absent, and the ‘output’ product was
(usually) of small scale. How neatly the rhetorical
power of the paradigm of disembodied information
dispatched the unnecessary and encumbering bodily
knowledge and liberated the abstract and pure idea
content of the practice. In the face of six hundred
years of refinement, the desktop computer painting
emulator had barely sixteen. The technology did,
and does, afford all sorts of capabilities which
painting did not: actions could be reversed, multiple
versions could be kept, product could be sent over a network to a remote location – all remarkable and
wonderful qualities. But, like the tea produced by
the nutri-matic machine, it was almost entirely
unlike painting. [29] It is strange to observe that amongst practitioners, teachers and theorists in such contexts (and there are many, not just with respect to painting but to a wide range of other skilled professions), critical assessment of the value and quality of the traditional practices vis-à-vis the new technologies is rare. Ironically, software developers are more likely to undertake a study of the traditional practices than the purported guardians and partisans of those practices are to undertake a study of software tools.
11. THE MIDI INSTRUMENT: PERILS OF
GENERALITY
Electronic music interfaces tend to hew to two
different paradigms. Some adapt or augment an
existing instrument. This approach exploits the
richness and specificity of the sensibility developed
by the musician to the artifact. In musical
performance, the bodily/artifactual cultures of
virtuosity compare to similar practices in the visual
and plastic arts, and each can be read in terms of
interface and interaction design. Here one might
consider the assumptions underlying the term ‘interface’. For, as the face is conceived as the sensory front end of the brain, as the windscreen through which the driver of the bodily bus peers, so the notion endorses an archaic notion of perception as a one-way sensory information flow into the brain,
and simultaneously denies any reality to an
‘interbody’. [30]
Traditionally, the facility of the musician's bodily skill with his instrument is regarded as a measure of virtuosity. The sensitivity and specificity of the bodily actions of the musician are integrated, by dint
of long training, with the trained ear and the mental
characterization of acoustic quality. This is truly
embodied interaction in a most refined and virtuosic
sense. The alternate scenario is that of the patchable
multi-function electronic musical instrument
interface device. Such devices are, in HCI
terminology, ‘controllers’ [31]. They afford the
performer the possibility of mapping any variable of
the computer music system to any perturbable aspect
of the device. Such devices therefore import the
‘virtue’ of ‘mappability’ or ‘assignability’ from the
purportedly ‘general purpose’ physical incarnation
of the general purpose machine across to the musical
instrument. But the musical instrument is a
paradigmatic example of the specificity of tools
argument. What makes a Stradivarius much more of
a violin than a cigar box with a rubber band
stretched over it? A history of increasingly refined
attunement between the material specificities of the
artifact and the embodied intelligences and skills of
the player.
The special quality of any instrument is, it would
seem to me, its integration with a long-standing
culture of training and playing, and these things
combined permit the subtlety of virtuosity. When
played by a trained player, subtle and complex
effects are produced. Specific kinds of modulation
are associated with specific kinds of physical
actions in specific locations on the instrument. The
multi-function electronic musical instrument
forgoes such possibilities. The range of possible
variables can be void of common qualities. The
same manipulation might address amplitude, or key,
or access different samples on the hard drive. The
assignment of any control function to any input
sensor, and thus to any bodily modality, is variable
and arbitrary. With such flexibility and diversity, a
fluent bodily relation to the material artifact cannot
be developed.
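As a concrete illustration of this arbitrariness (a minimal sketch of my own in Python, with hypothetical parameter names rather than any particular music system's API): the binding between a physical control and a musical parameter is typically just an entry in a lookup table, and can be reassigned at will.

    # Sketch: the same physical gesture can be bound to utterly different
    # musical parameters simply by editing a mapping table.

    def handle_control_change(mapping, control_id, value):
        """Route a raw controller value (0-127) to whatever parameter
        the current mapping happens to assign it to."""
        parameter = mapping.get(control_id)
        if parameter is None:
            return None
        # Normalise the 7-bit MIDI range to 0.0-1.0 before applying.
        return parameter, value / 127.0

    # Patch A: knob 21 controls amplitude.
    patch_a = {21: "amplitude", 22: "filter_cutoff"}

    # Patch B: the identical gesture now selects a sample from disk.
    patch_b = {21: "sample_index", 22: "reverb_mix"}

    print(handle_control_change(patch_a, 21, 64))  # ('amplitude', 0.5039...)
    print(handle_control_change(patch_b, 21, 64))  # ('sample_index', 0.5039...)

Nothing in the gesture itself corresponds intrinsically to the parameter it perturbs; the correspondence lives entirely in the table.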
12. THE MATTER WITH MATTER:
SPECIFICITY AND SIMULATION
The difference between theory and practice is
greater in practice than in theory (anon)
From now on, lessons in rice planting will occur in
the paddy fields. (Notice posted on blackboard in
Chinese Cultural Revolution film Breaking with Old
Ideas).
In computer science, consistent with the dogma of the general purpose machine and the platform-independent technologies which follow from it,
hardware is usually taken as a given, and assumed to
be adequate or even optimal to the task. The
machinations of code can proceed without reference
to the real physical world. But in fact, such hardware
substrates always come with their specific
affordances and constraints, and their interface to the
physical world is delimited. In a world of networked
databases, required data is (paradigmatically) always
unproblematically available in a form which does
not require interpretation. Contrarily, the real,
biophysical world is a dirty, complex and
unpredictable place. Among the robotics community in the 1990s, the remark ‘fix it in software’ was often heard, and it was almost always tongue-in-cheek. The
remark signalled a recognition that many problems
could not be ‘fixed in software’. Data is generated
by the digitisation of signals from sensors which
exploit electrophysical phenomena. Specific
physically tangible electronic and mechanical
technologies have to be designed and tested with
respect to specific environments in order to create a
context in which code can usefully work. If any part
of that ‘front-end’ process is faulty (wrong
alignment or calibration, bad optics, unreliable
power supply, unexpected response to environmental
factors such as humidity, etc.) or if the scaling and parametrisation of the A/D process is inappropriate, then the data representation of the real-world
phenomenon is forever flawed. In the spirit of GIGO
(Garbage In, Garbage Out), no amount of software
downstream can create more accurate and higher
resolution representations of the world than that
supplied by the interface with that world. At best it
can retrofit a simulation of it, based on accurate
measurement of the kinds of errors inherent in the
faulty sensor. This, of course, can add a second cycle
of inaccurate representation.
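A toy numerical illustration of this point (my own sketch in Python, with arbitrary values, not drawn from any cited source): if the analogue signal is badly scaled before digitisation, the quantised record clips and coarsens, and no downstream processing can restore the detail that was never captured.

    # Sketch: a badly scaled analogue-to-digital conversion discards
    # information that no amount of downstream software can recover.
    import math

    def digitise(signal, gain, bits=8):
        """Quantise floats in [-1, 1] after applying a gain, clipping
        anything that exceeds the converter's input range."""
        levels = 2 ** (bits - 1) - 1
        out = []
        for x in signal:
            scaled = max(-1.0, min(1.0, x * gain))  # clipping: detail lost here
            out.append(round(scaled * levels))
        return out

    signal = [math.sin(2 * math.pi * t / 32) for t in range(32)]

    well_scaled = digitise(signal, gain=1.0)
    badly_scaled = digitise(signal, gain=5.0)  # over-driven front end

    print(len(set(well_scaled)), "distinct levels versus", len(set(badly_scaled)))

The clipped record reaches the software with far fewer distinct levels, and nothing computed from it can reconstitute the waveform that the front end failed to capture.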
This tension between the power afforded by
abstraction, and the simultaneous loss of precision,
is explicit in the case of simulation. By the same ‘fix it in software’ reasoning, computer simulation of real-world contexts must be regarded with some
reservation. As Eugene Ferguson, among others, has
observed, any simulation tool is itself a design
artifact, and depends for its representational
accuracy on several factors. First, that the designer
correctly identified all the relevant physical effects.
Second, that such physical effects are amenable to
algorithmic representation. Third, that these representations are accurate and of adequate resolution. Fourth, that all possible interactions of these relevant factors were appropriately calculated and represented. Certain kinds of physical phenomena, particularly those manufactured to reliably embody and express a mathematically simple physical process, are simpler to simulate. The behavior of a tree in a storm, or the turbulence of water on a ship's hull, demands more complex computation, or may be inherently uncomputable.
Here the isomorphic loop of industrialism and
engineering stands out in stark relief. A gear train or
resistor-capacitor network is easily simulated
because these things are themselves produced to
embody behavior easily described in Newtonian terms. One is inevitably reminded of the Borgesian conceit of the map in ‘Of Exactitude in Science’.
The fundamental requirement that a simulation be computable seems often to be forgotten. Inevitably, there are more
factors at play in the real world than in the
simulation. Thus many practitioners, particularly
those trained in the computer science disciplines, are
deeply shocked when the real world does not
conform to simulation. As Hamlet noted, ‘There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy.’ [32]
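To make the asymmetry concrete (a sketch of my own, in Python, with arbitrary component values): a resistor-capacitor network is trivially simulated precisely because it is manufactured to obey a simple first-order law; a few lines of explicit Euler integration track its discharge curve closely, whereas no comparably compact law is available for the tree in the storm.

    # Sketch: an RC discharge is easy to simulate because the components are
    # manufactured to obey a simple law, dV/dt = -V / (R * C).
    import math

    R = 10_000.0   # ohms (arbitrary illustrative value)
    C = 100e-6     # farads
    dt = 0.001     # seconds per simulation step
    v = 5.0        # initial capacitor voltage

    for step in range(1, 1001):
        v += dt * (-v / (R * C))  # explicit Euler update
        if step % 250 == 0:
            t = step * dt
            exact = 5.0 * math.exp(-t / (R * C))
            print(f"t={t:.2f}s  simulated={v:.3f}V  analytic={exact:.3f}V")

The simulation is faithful here only because the artifact was engineered to be faithful to the equation in the first place.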
13. CONCLUSION
“A critical technical practice will, for the foreseeable future, require a split identity – one foot
planted in the craft work of design and the other foot
planted in the reflexive work of critique.” Philip
Agre [33]
If the thesis of this paper is taken to be valid, at least
in part, then several paths of action are called for.
The first is a thoroughgoing assessment of the
effects of the computational paradigms on cultural
practices; this is both a theoretical inquiry and a
context for historical work and case studies.
Practitioners are duty-bound to assess the values
inherent in the technological tools they employ, lest
they sabotage their enterprise. If and when these
ontological booby traps are identified, a new mode
of technology development is called for: the
imagining, design and development of tools
consistent with the values which underlie and shore
up the practice itself; always allowing for the
possibility that any form of technology could be
antithetical to, or destructive of, the cultural
enterprise.
The alternative, parallel path is the
negotiation of new cultural practices native to the
new technologies, in a process which intelligently
and attentively assesses the potential disharmonies
between the artistic goals and the qualities of the
technologies.
Both these kinds of design processes must address
the technologies at a plethora of different levels,
from the smallest component level to entire devices,
from implicit entailments of programming languages
to dynamics of the interface and interaction, and
everything between.
14. REFERENCES / ENDNOTES
[1] These writings include Simulation Digitisation
Interaction: The impact of computing in the Arts,
Artlink, 1987, Consumer Culture and the
Technological Imperative, in Critical Issues in
Electronic Media, SUNY Press 1995, Ed S.Penny;
The Virtualisation of Art Practice: Body Knowledge
and the Engineering World View. CAA Art Journal
Fall 1997, etc.
[2] Although ubiquitous, I do my best to avoid the
descriptor ‘new media art’. In my opinion, all three
terms are dubious. It is facile to observe the
transience of ‘new’. Less often questioned is the assertion that this practice can be described under any of the concurrent definitions of media. From my
point of view, that these practices comprise ‘art’ in a
sense that is compatible with conventional notions of
art is also, at least, an assertion worthy of discussion.
Though awkward, I prefer Digital Cultural Practices,
or Computationally Automated Cultural Artifacts
(CACA).
[3] Inasmuch as institutions of higher learning are
hosts to the pedagogical environments where these
practices are developed and taught, this inquiry has
direct relevance for institutions with programs which
address such areas of practice, and specifically to the challenges of interdisciplinarity. Articulation of the details of such contexts goes beyond the scope of this paper, but has been addressed by the author previously in Adequate Pedagogy: the missing piece in Digital Culture, in A Guide to Good Practice in Collaborative Working Methods and New Media Tools Creation (by and for artists and the cultural sector), eds. Lizbeth Goodman and Katherine Milton (Fall 2003), AHDS (Arts and Humanities Data Service), and in forthcoming papers.
[4] Such practices imply the development of an Aesthetics of Behavior. Elsewhere I have argued for the recognition that such a modality of aesthetics is not only fundamental to such practices but unprecedented in the history of the plastic arts.
[5] See Snow, C.P. The Two Cultures, 1959, reprinted Cambridge University Press 1998, etc.
[6] Theodore Roszak. The Cult of Information. Pantheon, 1986. p217
[7] The Army and the Microworld: Computers and
the Politics of Gender Identity. Paul N. Edwards.
Signs, Vol. 16, No. 1, From Hard Drive to Software:
Gender, Computers, and Difference (Autumn, 1990),
pp. 108-09
[8] Towards a critical technical practice: Lessons
learned in trying to reform AI. in Bowker, Gasser,
Star and Turner Bridging the Great Divide, Social
Science, Technical Systems and Cooperative Work.
Erlbaum 1997. I make full acknowledgment of the insightful work of Philip Agre in his analysis of the cultures of AI and CS; he is quoted more than once in this paper.
[9] Agre, Philip. Writing and Representation.
Michael Mateas and Phoebe Sengers, eds, Narrative
Intelligence, Amsterdam: John Benjamins, 2003
[10] Agre, Philip. Towards a Critical Technical
Practice. op cit
[11] This useful term was coined by JD Bolter in his
pioneering work of digital cultural studies, Turing’s
Man. University of North Carolina Press, 1984.
[12] Friedrich Kittler. On the Implementation of
Knowledge – Toward a Theory of Hardware. Found
at
http://www.hydra.umn.edu/kittler/implement.html,
etc
[13] Mintz, Sidney. Sweetness and Power. Viking
Penguin, 1985.
[14] This is silly of course, but the staff-faculty class
structure of the (American) university is based on
this. This is another dimension of academic life
which reinforces the hardware-software dualism, and
the attendant notion that knowledge-work or creative
work occurs exclusively in the abstract mental realm
of text and code.
[15] My class "Hardware Intelligence" argues
against the dualistic academic dogma which
proposes that the more engaged with the physical
world a practice is, the less intellectual or intelligent
it is. Far from being just a remedial skill building
class, this class brings students who have been
alienated from the physical world by software, back
into a rich engagement with it. The ACE program
has a pedagogical commitment to a holistic approach
to technologies and the intelligent manipulation of
matter and the production of material product.
[16] Mindell, David. Between Human and Machine:
Feedback, Control, and Computing before
Cybernetics. Johns Hopkins Studies in the History of
Technology, 2004. p321
[17] Ferguson, Eugene. Engineering and the Mind's Eye. MIT Press, 1994.
[18] Agre, Philip. Towards a Critical Technical
Practice, op cit
[19] I use the term ‘trickle down’ with full
recognition of its origin in discourses of military to
civilian technology transfer.
[20] Chun, Wendy Hui Kyong. On Software, or the
Persistence of Visual Knowledge. Grey Room, 18.
[21] Ronald Day, discussing Shannon’s formulation of information theory, similarly asserts: “‘information’ has, among other qualities, that of being quantifiably measurable and ‘factual’ in the sense of being clear and distinct semantic units.” This conception of information as susceptible to manipulation presumes the separability and independence of information from materiality. If, as a disciplinary partisan, one embraces such assertions (and clearly career success within the discipline depends on it), then a certain kind of process is prescribed: an information-oriented process in which hardware is taken to be generic and software is where the intellectual innovation takes place.
[22] Ronald Day. The “Conduit Metaphor” and the Nature and Politics of Information Studies. Journal of the American Society for Information Science 51(9): 805-811, 2000.
[23] Tiziana Terranova, in her exemplarily self-reflexive consideration of information theory, proposes that for a critical apparatus to effectively address contemporary communication and information issues, it must combine the
poststructural critiques of meaning rooted in
semiotics and deconstruction with an understanding
of mechanisms of transmission which such
poststructural approaches ignore, and this
supplement must be based in information theory.
She notes: “Information is not simply the name for a
kind of form meant to survive the attack of noise,
but more a quasi cause or catalyst for an active
power of constitution and transformation that it does
not contain in itself.” [Tiziana Terranova,
Communication Beyond Meaning: on the cultural
politics of information. Social Text 80, Vol. 22, No. 3, Fall 2004. Duke University Press.] The image she
conjures, to extend her employment of tropes from
complexity theory (elsewhere in the paper), is the
image of an agent poised at an energy maximum, for
whom the injection of information creates a
movement characterized by a “sensitive dependence
on initial conditions”.
[24] Philip Agre draws attention to another such
anomaly, the utilization of introspection as a method
in AI. Towards a Critical Technical Practice, op cit
[25] This is what active vision researcher Andrew
Blake called "a prescription for the seeing couch
potato" (1995). In contrast, in the active sensing
view, behavior is tightly coupled to sensing, and
behavioral programs operate on minimalist
representations of the world that are computed from
changes in the sensory information reaching the
animal as it manipulates its body, and thus its
biological sensor arrays, through space”
http://www.cnse.caltech.edu/Research02/reports/ma
cIver1full.htm
[26] Penny, Simon. From A to D and back again:
The emerging aesthetics of Interactive Art, First
published in Leonardo Electronic Almanac April
1996.
[27] Phoebe Sengers undertook a similar inquiry
with respect to another computer science tenet,
modularity, in her PhD thesis, Anti-Boxology.
Carnegie Mellon University, 1998.
[28] If physics is the study of what is and
metaphysics is the study of what "what is " is, then
pataphysics is the study of what "what 'what is' is"
is. Pataphysics...is the science of that which is
superinduced upon metaphysics, whether within or
beyond the latter's limitations, extending as far
beyond metaphysics as the latter extends beyond
physics... Pataphysics will be, above all, the science
of the particular, despite the common opinion that
the only science is that of the general. Pataphysics
will examine the laws governing exceptions, and
will explain the universe supplementary to this one...
Pataphysics is the science of imaginary solutions... –
Jarry, Exploits And Opinions of Dr. Faustroll,
Pataphysician.
[29] The Nutrimatic machine, in The Hitch Hiker's
Guide to the Galaxy by Douglas Adams, produced a
liquid which was ‘almost entirely unlike tea’.
[30] Music became electronic long before imagery.
The act of composition was abstracted from the act
of performance and music was resolved to symbolic
notation long before computing machines dealt in
such notation as currency. This may well be due to
the amenability of music to the symbolic realms of
computing.
Computer music also seamlessly mapped onto the
precursor and parallel technological cultures of
audio amplification, transmission and recording. It
may be surmised that this history itself led circumstantially to the fundamental separation of sound and image in digital media, i.e. it may be an entirely unintentional historical accident rather than being intentional according to some project of theoretical justification.
[31] Eric Singer’s Sonic Banana is one of many
examples.
http://www.ericsinger.com/workprojects.html
[32] Shakespeare, William. Hamlet.
[33] Agre, Philip. Towards a Critical Technical
Practice, op cit