Audiovisual Translation
Language Transfer on Screen
Edited by
Jorge Díaz Cintas
Imperial College London
and
Gunilla Anderman
© Jorge Díaz Cintas and Gunilla Anderman 2009
Chapters © their individual authors 2009
All rights reserved. No reproduction, copy or transmission of this
publication may be made without written permission.
No portion of this publication may be reproduced, copied or transmitted
save with written permission or in accordance with the provisions of the
Copyright, Designs and Patents Act 1988, or under the terms of any licence
permitting limited copying issued by the Copyright Licensing Agency,
Saffron House, 6-10 Kirby Street, London EC1N 8TS.
Any person who does any unauthorized act in relation to this publication
may be liable to criminal prosecution and civil claims for damages.
The authors have asserted their rights to be identified as the authors of
this work in accordance with the Copyright, Designs and Patents Act 1988.
First published 2009 by
PALGRAVE MACMILLAN
Palgrave Macmillan in the UK is an imprint of Macmillan Publishers Limited,
registered in England, company number 785998, of Houndmills, Basingstoke,
Hampshire RG21 6XS.
Palgrave Macmillan in the US is a division of St Martin's Press LLC,
175 Fifth Avenue, New York, NY 10010.
Palgrave Macmillan is the global academic imprint of the above companies
and has companies and representatives throughout the world.
Palgrave® and Macmillan® are registered trademarks in the United States,
the United Kingdom, Europe and other countries.
ISBN-13: 978–0–230–01996–6 hardback
ISBN-10: 0–230–01996–X hardback
This book is printed on paper suitable for recycling and made from fully
managed and sustained forest sources. Logging, pulping and manufacturing
processes are expected to conform to the environmental regulations of the
country of origin.
A catalogue record for this book is available from the British Library.
Library of Congress Cataloging-in-Publication Data
Audiovisual translation : language transfer on screen / edited by
Jorge Díaz Cintas and Gunilla Anderman.
p. cm.
Includes bibliographical references and index.
ISBN 978–0–230–01996–6 (alk. paper)
1. Dubbing of motion pictures. 2. Motion pictures – Titling. I. Díaz
Cintas, Jorge. II. Anderman, Gunilla M.
TR886.7.A9285 2009
778.535—dc22                                                  2008029937
10 9 8 7 6 5 4 3 2 1
18 17 16 15 14 13 12 11 10 09
Printed and bound in Great Britain by
CPI Antony Rowe, Chippenham and Eastbourne
Gunilla Anderman
In Memoriam
It was with great personal sadness that we learnt of the unexpected death of
Professor Gunilla Anderman, on 21 April 2007. Gunilla had been ill for
some time but chose to keep her illness very private and continued doing
what she loved: writing about and teaching Translation Studies.
Gunilla was co-founder of the University of Surrey’s Centre for Translation
Studies (CTS) and remained its Director for over 20 years. She nurtured the
CTS from very small beginnings to create the internationally respected Centre
that we know today. As one-time Chair of the ITI Education & Training
Committee, she was also very keen on actively fostering links between the
profession and academia. Gunilla herself was a distinguished translator of
drama between Swedish and English, as well as an inspirational teacher and
scholar of international repute.
We had been working together on this volume for some time and know how
pleased Gunilla was to see the manuscript submitted to the publishers. This
volume is dedicated to her memory.
Gunilla was a gifted communicator, full of natural charisma, with a
wonderful warmth of character and generosity of spirit. She will be
remembered with affection and respect by all who knew her.
We all feel privileged to have worked with Gunilla and will miss her dearly.
Jorge Díaz Cintas
Gillian James
Margaret Rogers
Contents

Acknowledgements                                                       ix
Notes on Contributors                                                   x

 1  Introduction                                                        1
    Jorge Díaz Cintas and Gunilla Anderman

Part I   Subtitling and Surtitling

 2  Subtitling for the DVD Industry                                    21
    Panayota Georgakopoulou
 3  Subtitling Norms in Greece and Spain                               36
    Stavroula Sokoli
 4  Amateur Subtitling on the Internet                                 49
    Łukasz Bogucki
 5  The Art and Craft of Opera Surtitling                              58
    Jonathan Burton
 6  Challenges and Rewards of Libretto Adaptation                      71
    Lucile Desblache

Part II   Revoicing

 7  Dubbing versus Subtitling: Old Battleground Revisited              85
    Jan-Emil Tveit
 8  The Perception of Dubbing by Italian Audiences                     97
    Rachele Antonini and Delia Chiaro
 9  Transfer Norms for Film Adaptations in the
    Spanish–German Context                                            115
    Susana Cañuelo Sarrión
10  Voice-over in Audiovisual Translation                             130
    Pilar Orero
11  Broadcasting Interpreting: A Comparison between
    Japan and the UK                                                  140
    Tomoyuki Shibahara

Part III   Accessibility to the Media

12  Interlingual Subtitling for the Deaf and Hard-of-Hearing          151
    Josélia Neves
13  Audio Description in the Theatre and the Visual
    Arts: Images into Words                                           170
    Andrew Holland
14  Usability and Website Localisation                                186
    Mario De Bortoli and Jesús Maroto Ortiz-Sotomayor

Part IV   Education and Training

15  Teaching Screen Translation: The Role of Pragmatics
    in Subtitling                                                     197
    Erik Skuggevik
16  Pedagogical Tools for the Training of Subtitlers                  214
    Christopher Taylor
17  Teaching Subtitling in a Virtual Environment                      229
    Francesca Bartrina
18  Subtitling: Language Learners’ Needs vs. Audiovisual
    Market Needs                                                      240
    Annamaria Caimi

Index                                                                 252
Acknowledgements
The appearance of this volume owes much to the hard work and patience
of many people. First of all we would like to thank the contributors for
waiting patiently while we co-ordinated and edited the many essays on
different aspects of audiovisual translation from all over the world. Our
thanks also go to Jill Lake and Palgrave Macmillan for being supportive
from inception through to production. We are also very happy to
acknowledge our debt to Gillian James who has worked with us with
diligence and attentiveness at all stages on the manuscript and its
preparation.
GUNILLA ANDERMAN
University of Surrey
JORGE DÍAZ CINTAS
Imperial College London
Notes on Contributors
Gunilla Anderman was Professor of Translation Studies at the Centre
for Translation Studies, University of Surrey. She lectured on the
undergraduate and postgraduate programmes of translation including
the MA in Audiovisual Translation. A professional translator of drama
with translations of plays staged in the UK, USA and South Africa,
she was the author of Europe on Stage: Translation and Theatre (2005).
Rachele Antonini is a researcher at the University of Bologna, Italy,
where she is concerned with the perception of dubbing and subtitled
humour, the teaching of spoken language mediation, and child
language. She has been working as a freelance interpreter, translator
and subtitler for the past ten years.
Francesca Bartrina has a PhD in Literary Theory and Comparative
Literature and is a lecturer in Translation Studies at the University of Vic,
Spain. Publications include Caterina Albert/Víctor Català. La voluptuositat
de l’escriptura (2001), Violència de gènere (2002, editor) and articles and
chapters on Audiovisual Translation, Translation Theory and Gender
Studies. She is series editor for the book series Capsa de Pandora from
Eumo Editoral.
Łukasz Bogucki is Associate Professor and Head of Translation Studies
in the Department of English Language of Łódź University. He obtained
his doctorate in 1997 and published A Relevance Framework for Constraints
on Cinema Subtitling in 2004. Łukasz teaches interpreting, translation
and supervises MA theses in both fields. He is also the Polish editor of
The Journal of Specialised Translation.
Jonathan Burton is a surtitler at the Royal Opera House, Covent
Garden, in London, UK. He has written English surtitles for over 90
operas, and provides subtitles for operas on DVD and the BBC’s Cardiff
Singer of the World competition. He has taught Twentieth-century
Music History, gives lectures on musical and operatic topics, and writes
programme notes for operas and concerts. He studied Music at
Cambridge and Birmingham Universities.
Annamaria Caimi is Associate Professor of English Language and
Linguistics at Pavia University, Italy. Her research interests include ESP
teaching methodology, text linguistics, pragmatics, corpus linguistics
and subtitling. Her present research is concerned with screen
translation and different forms of subtitling in language learning.
She is the editor of Cinema: Paradiso delle Lingue. I Sottotitoli
nell’Apprendimento Linguistico (2002).
Susana Cañuelo Sarrión is completing her doctoral work on
intercultural transfers between Spain and Germany through literature
and film translation. Thanks to a scholarship from DAAD, she has been
involved in research in Leipzig, Germany since 2002. She also works as a
freelance translator and university teacher. Publications
include Historia universal de la literatura (2002) and translations of
Elfriede Jelinek’s novels Las amantes (2004) and Obsesión (2005).
Delia Chiaro is Professor of English Linguistics at the University of
Bologna’s Advanced School in Modern Languages for Interpreters and
Translators at Forlì, Italy. She is best known for her work as a scholar in
humour but she is also the author of numerous publications on dubbing
and subtitling as well as on translation quality. She has lectured across
Europe, in the USA and in Asia.
Mario De Bortoli joined Euro RSCG 4D in 2000 and has been involved
in the localisation of international online and offline cross-cultural
campaigns for clients in a large number of industries. In 1998 he left
Venice for London with a BA in Simultaneous Interpreting and a
postgraduate degree in Corporate Communication. He is also involved
in the organisation of the annual Best Global Website Award, promoting excellence in localisation.
Lucile Desblache is Reader in French and Translation at Roehampton
University, London, UK. She holds two doctorates, from Paris VII and
Clermont-Ferrand. Her research interests span translation and
comparative literature. She is the editor of The Journal of Specialised
Translation and has worked as a consultant and translator for a range of
companies including British Rail. Her adaptation of Albert Herring’s
libretto into French was performed for Opéra de Chambre de France.
Jorge Díaz Cintas is Senior Lecturer in Audiovisual Translation at
Imperial College London. He is the author of several books and articles
on subtitling and has recently published Audiovisual Translation:
Subtitling (2007), co-written with Aline Remael. Since 2002, he has been
the president of the European Association for Studies in Screen
Translation.
Panayota Georgakopoulou has a PhD in Subtitling from the University
of Surrey, UK. She has set up subtitling modules and taught specialised
translation courses at university level, while at the same time working
in the subtitling industry. She is now the Managing Director of the
European Captioning Institute, a subtitling company specialising in
multi-language subtitling for DVD.
Andrew Holland trained as an audio describer in 1993 at the National
Theatre, UK and has been part of a team of describers there ever since. In
1998 he helped to set up VocalEyes as a professional service to make touring theatre accessible to the blind and partially sighted. As current Head
of Description, he has also helped in the creation of a visually-impaired
tour on the British Museum ‘Compass’ website, and over the years, has
trained a large number of professional and volunteer describers. He has
also had a number of his own plays produced.
Jesús Maroto Ortiz-Sotomayor attended the universities of Granada,
St. Petersburg, Washington and London, where he studied Translation,
IT and e-Marketing. Currently he is working on a doctorate in
International Digital Marketing at the Universitat Rovira i Virgili,
Tarragona, Spain. He has worked for the translation companies STAR,
Lionbridge and SDL, as well as for the advertising agency Euro RSCG.
He is a member of the IoL, TILP and the Intercultural Studies Group.
Josélia Neves holds a PhD in Subtitling for the Deaf and Hard-of-Hearing
from Roehampton University, UK. She teaches Audiovisual
Translation at the Polytechnic Institute of Leiria and at the University
of Coimbra, in Portugal. Her main interests lie in teaching AVT and in
developing action research projects in the field of accessibility to the
media.
Pilar Orero holds an MA in Translation from the Universitat Autònoma de
Barcelona (UAB), Spain, and a PhD in Translation from UMIST, UK. She
lectures at UAB where she coordinates the Online Masters course in
Audiovisual Translation. She is the co-editor of The Translator’s Dialogue
(1997) and the editor of Topics in Audiovisual Translation (2004). She is
the leader of the networks CEPACC and RIID-LLSS, consisting of 18
Spanish universities devoted to media accessibility research and quality
training.
Tomoyuki Shibahara obtained his MA in Interpreting and Translating
from the University of Bath, UK in 1999. He is a freelance interpreter,
translator and lecturer who, having worked as a broadcasting interpreter for BBC World TV in London, is now a part-time lecturer at
Aoyamagakuin University and Dokkyo University in Tokyo. He also
provides broadcasting interpreting for NHK (Japan Broadcasting
Corporation) and translation for Discovery Channel.
Erik Skuggevik is a lecturer in Audiovisual Translation at the University
of Surrey, UK and also teaches on the MA programme in Intercultural
Communication. He also lectures on the University of Westminster’s
MA in Translation. He has subtitled over 500 films and translated books,
short stories and cartoons. Erik is working on his doctorate in
Intercultural Theory and Translation at University College, London.
Stavroula Sokoli is a tutor in Spanish at the Hellenic Open University,
Greece and is a professional subtitler. She holds a BA in English Language
and Literature from the Aristotle University of Thessaloniki and an MA
in the Theory of Translation from the Universitat Autònoma de
Barcelona. She is a member of Tradumàtica (Translation and New
Technologies).
Christopher Taylor is Professor of English Language and Translation at
the University of Trieste, Italy, where he is Head of the Language Centre.
He has written extensively on aspects of translation, and in recent years,
has concentrated on the analysis of multimodal texts for translation
purposes and on the language of film (Tradurre il Cinema, 2000). He is
also the author of Language to Language (1998).
Jan-Emil Tveit is Associate Professor at the Norwegian School of
Economics and Business Administration in Bergen. He was the first
Director of Translation and Subtitling at TV2, Norway’s biggest
commercial television company. Jan-Emil has lectured at New York
University and given presentations at conferences in Canada, USA,
Spain and Britain. He has written several articles and published
Translating for Television: A Handbook in Screen Translation (2005).
1
Introduction
Jorge Díaz Cintas and Gunilla Anderman
The Wealth and Scope of Audiovisual Translation
In the twenty-first century, the media is omnipresent: to inform,
arguably sometimes to misinform, to sell, to entertain and to educate. A
quick perusal of traditional television programmes or cinema guides
will testify to the growth and importance of the media and the need for
audiovisual translation (AVT) in most countries. The reasons are manifold: a larger number of television channels at all levels, international,
national, regional and local, means a sharp increase in the quantity and
range of programmes required to meet the needs of broadcasting
schedules. With the steady decline of analogue technology, the arrival
of the digital era has also contributed to the diversification of offerings
provided by television. In a very short time, corporations such as the
BBC and ITV in the UK have more than doubled their number of channels, and similar developments have also seen a record boom in new
television channels at European level, with 277 new channels launched
in Europe since 2004 and in excess of 200 in 2005 (Hamilton and
Stevenson, 2005). As for the cinema, the film industry seems to have
emerged from the lean years when the video appeared to pose a serious
threat to its continued existence, and now the number of cinema-goers
again seems healthy. The flourishing celebration of film festivals, with
hundreds of them taking place in any given year in all corners of the
globe, also testifies to this positive outlook. Add to this the advent of the
DVD and the fact that the Internet is firmly established in our society
and the picture is virtually complete.
There is also the theatre, the opera and other live events where
translation may be required in the form of surtitles; and the rapid developments we are witnessing in the field of accessibility to the media for
people with sensory impairments. Traditionally ignored in academic
exchanges, subtitling for the deaf and hard-of-hearing (SDH) and audio
description for the blind and the partially sighted (AD) are becoming
part of our daily audiovisual landscape and attracting the interest of
many scholars and practitioners.
Given the many ways in which viewers can access audiovisual
material – DVD, television, cinema, Internet – it is difficult to quantify with precision the percentage of foreign-language programmes
translated and screened in any given country. Statistics available tend
to be concerned with the number of films exported and imported for
cinema release only, forgetting crucially any other films or audiovisual
programmes (sitcoms, documentaries, TV series, musical concerts, cartoons, etc.) that are broadcast by private and public television channels
and distributed on DVD and the Internet. Taken in isolation, the sparse
figures available are likely to give a somewhat skewed overview of the
real situation. At the expense of European productions, and according
to Yvane (1995), an extremely high percentage of audiovisual programmes originate in the USA: 90% in Denmark, 90% in France, 90%
in Germany, 94% in Greece, 75% in Ireland, 80% in Italy, 92% in
Luxembourg, 90% in the Netherlands, 70% in Portugal, 95% in Spain,
and 88% in the United Kingdom. Now a decade old, these figures are
likely today to be the same or slightly higher as much has happened
since the mid-1990s, notably the exponential growth of television stations at international, national, regional and local levels.
There has been, however, a trend in the opposite direction. New low
production cost audiovisual genres have emerged that, emulating the
format of similar programmes designed in other countries and for other
audiences, can be produced in the language of other communities without the need for translation. Examples abound and British reality television programmes such as Big Brother; quizzes in the vein of Who Wants
to Be a Millionaire and The Weakest Link, soap operas depicting the daily
routines of next-door neighbours, talk shows and contests for ‘wannabes’
have proved popular in most European countries and beyond. In most
cases, the only translation required is that of the name of the programme. However, these developments do not necessarily mean that
the overall need for translation is lower since, as already mentioned,
there are many more television channels broadcasting many
more hours. All these changes are mainly concerned with television
productions only and more studies in the transnational trading of
audiovisual programmes at all levels are needed in order for an overall
picture to emerge of the real expansion of the field.
Nevertheless, despite the fact that the number of programmes produced
in national languages would seem to be on the increase, the situation in
countries where English is not the official language is such that a large
volume of audiovisual programmes still needs to be translated. While traditionally feature films, television series, cartoons, sitcoms, soap operas
and documentaries have been ideal candidates for this intercultural journey, the current growth in the need to provide and supply more audiovisual material for new channels has made broadcasters re-examine and
broaden the range of programmes suitable for interlingual transfer.
Subjects ranging from cookery, travel, DIY, fashion, interviews, gardening
and awards ceremonies to political speeches, have started to find their
way, via translation, to television sets in the living rooms of other countries. To a large extent, these developments help to account for the so-called
revolution experienced in the field of audiovisual translation during the
last couple of decades (Díaz Cintas, 2003).
The move from analogue to digital technology and the potential
afforded by the digitisation of images has also opened up new avenues,
radically changing the essence of the industry. Together with the
ubiquitous presence of the computer and the Internet, the arrival of the
DVD can be hailed as one of the most exciting and revolutionary developments in recent decades. In just a few years, the DVD has become the
favoured mode for distribution and consumption of audiovisual programmes. Its increased memory capacity, when compared to the CD; its
superior image definition compared to the traditional VHS tape; and its
greater flexibility, allowing the viewer to watch the programme on the
TV set, the computer screen or a portable DVD player, constitute some
of the main features that make it a favourite with producers as well as
distributors and viewers. This has, in turn, resulted in new working
practices, had an impact on the design of dedicated software programs,
facilitated the work of scholars researching the field, and altered the
consumer’s view of audiovisual products. The rate at which some of
these changes in working practice are taking place is perhaps most
striking in the field of subtitling. In a relatively short period of time,
the process of subtitling has gone through a substantial transformation; what a decade ago was common practice is now viewed as out of
date. Changes are happening at all levels – technological, working routines, audience reception, emergence of new translation modes and
approaches – all bringing in their wake advantages and disadvantages
that are fully discussed in this volume.
Also increasing in volume is the amount of translation required in
the field of AVT. Films that in ‘dubbing countries’ have traditionally
been dubbed for both cinema and VHS releases as well as television
broadcasting are now also being subtitled for distribution on DVD; and
classic movies that were only dubbed when first released are nowadays
also available in subtitled versions on DVD. In addition, the increased
memory capacity of the DVD has made it possible to include more
material also in need of translation: known in the industry as ‘VAM’,
value added material, producer’s edits, false takes, interviews and other
related bonus material often takes up more time and space than the
actual film itself. TV series, sitcoms and cartoons that are normally
dubbed when broadcast on television also end up on DVDs with subtitles. The music industry, too, seems to be slowly moving away from the
CD to the DVD in order to promote its live concerts or video clips,
which tend to be subtitled. In addition, individual films and other
audiovisual programmes are now released in different formats such as
cinema, television, DVD, CD-ROM and Internet, which combined with
the increasing number of media companies operating in the field, has
resulted in many films and programmes being translated more than
once to meet consumer demand.
Since the early days of the cinema, in order to make these audiovisual
programmes comprehensible to audiences unfamiliar with the language
of the original, different forms of language transfer on the screen have
been required. In the main, there are two basic approaches to the translation of the spoken language of the original programme: to retain it as
spoken or to change it into written text. In the first instance the original
dialogue is replaced by a new soundtrack in the target language in a
process generally known as revoicing. The replacement may be total,
whereby we do not hear the original, as in lip sync dubbing and narration, or partial, when the original soundtrack can still be heard in the
background, as in voice-over and interpreting. All these modes are
available to the profession and some of them are more suited to particular audiovisual genres than others. Lip sync dubbing, for instance, is
mainly used in the translation of films and TV series and sitcoms,
whereas narration and voice-over tend to be more used in the case of
documentaries, interviews and programmes on current affairs.
When the decision has been taken to keep the original soundtrack
and to switch from the spoken to the written mode, by adding text to
the screen, the technique is known as subtitling. Quicker and a lot
cheaper than dubbing, it has more recently become the favoured translation mode in the media world and comes hand in hand with
globalisation. Despite the historically strong polarisation between
advocates and detractors of the two different approaches, nowadays it is
generally accepted that different translation approaches make their
own individual demands while remaining equally acceptable. The
choice of one method in preference to another will simply depend on
factors such as habit and custom, financial constraints, programme
genre, distribution format and audience profile – to mention just a few.
In this volume the full range of approaches is discussed as applied to a
number of different languages.
For several years full access to audiovisual media for minority social
groups such as the deaf and the blind has been an issue. Recent developments and studies show that the needs of these groups are increasingly
being catered for and this field of expertise now holds an established
position within audiovisual translation. Accessibility is a new key concept; an umbrella term that encompasses all associated new modes of
translation.
According to statistics (Neves, 2005: 79), between 1% and 5% of the
population of any country are deaf or hearing-impaired. The number
of people in these categories is growing as more people live into
old age, and they account for significant numbers in Europe and
North America. According to Hay (1994: 55), around 30% of
all Americans over 65 years of age have some degree of hearing loss. As
for Europe, figures presented at the 2003 international conference
Accessibility for All projected that by 2015 there will be over 90 million
adults affected by a hearing loss (Neves, 2005: 79).
These figures clearly call for a more consistent and systematic approach
to making it possible for viewers with sensory impairment to gain access
to television and other media. Since the mid 1970s, subtitling for the
deaf and hard-of-hearing (SDH), also known as (closed) captioning in
American English, has formed part of the audiovisual landscape in
order to ensure greater democratic access to audiovisual programming.
On television, these subtitles are broadcast by means of an independent
signal, activated only by those interested, by accessing pages 888 or 777
of teletext in most European countries. In North America they are transmitted on what is known as line 21. The oral content of the actors’ dialogue is converted into written speech, which is presented in subtitles of
up to three, or occasionally four, lines. They generally change colour on
television depending on the person who is talking or the emphasis given
to certain words within the same subtitle. Besides the dialogue, they also
incorporate all paralinguistic information that contributes to the development of the plot or to the creation of atmosphere, which a deaf person
cannot access from the soundtrack, e.g. a telephone ringing, laughter,
applause, a knock on the door, and the like.
Visual impairment is one of the most age-related of disabilities and
the vast majority of blind and partially sighted people are elderly.
According to the European Blind Union (EBU, online), 7.4 million
Europeans – close to 2% of the total population – have a significant
visual disability. As demographic trends show that the number of elderly people is on the increase, one can only expect that the percentage
of people with visual impairment will also increase in the near future.
In the field of accessibility, a more recent development has been audio
description for the blind and the partially sighted (AD), a service which
is rapidly gaining momentum and visibility. It can be defined as an
additional narration that fits in the silences between dialogue and
describes action, body language, facial expressions and anything that
will help people with visual impairment follow what is happening on
screen or on stage. Audio describing television programmes, films,
plays, sporting events and even an exhibition of paintings is now technically relatively easy, especially with digital technology (Marriott and
Vale, 2002).
In the field of audio description, as well as in SDH, English-speaking
countries such as the UK, the USA, Canada and Australia seem to be
leading the rest of the world. However, coinciding with the European
Year of People with Disabilities in 2003 (www.eypd2003.org) various
actions were taken by the EU at international level and by individual
European countries at national level, to raise awareness and to foster
changes aimed at improving the lives of people with disabilities. Taking
advantage of the situation, a number of countries in the European
Union took the opportunity to launch initiatives leading to widening
accessibility to the audiovisual media for all their citizens. In addition
to increased general awareness, efforts already undertaken have also
resulted in actions such as the passing of legislation making it compulsory to broadcast a minimum number of hours with SDH and AD,1 and
the creation of national bodies responsible for monitoring developments in the field.2 A welcome step forward towards further increase in
accessibility is the arrival of digital television, a development that is
bound to change prevailing notions and attitudes.
Although television is likely to remain a favoured medium for subtitling
for the deaf and hard-of-hearing and for audio description for the blind
and partially sighted, the future of these two services goes beyond the
scope of television. SDH and AD are already gaining major importance in
the DVD market and are also in regular use in the cinema. Intralingual
subtitling is being complemented by interlingual subtitling for the deaf
and hard-of-hearing and new modes of translation are emerging, such as
audio subtitling, in order to make subtitled programmes accessible to the
blind in countries where a large percentage of the programmes are commercialised in a foreign language with subtitles.
This flourish of activities in the field of AVT at industrial level has
had a positive knock-on effect in the university world where AVT is now
emerging as a thriving academic discipline for teaching and research.
For many years, the skills of the trade were acquired in the workplace,
away from educational establishments. Despite the importance of the
role of AVT in our daily lives, universities have on the whole been slow to design curricula and develop new courses. This situation is, however, rapidly changing and there are now many different specialist courses in AVT on offer at universities worldwide.
Audiovisual translation also has an important role to play in the
classroom. Material and exercises may be drawn from the area of AVT
and used in the process of second language acquisition. Subtitling
can be a powerful training and teaching tool in the foreign language
learning class. Hearing the original language while reading the dialogue in context provides a stimulating environment in which students can consolidate what they are learning and enrich their vocabulary while becoming familiar with, and absorbing, the culture of a foreign language in an authentic setting. However, subtitles are also a potent force in language acquisition outside the classroom. In Singapore, subtitling is a significant factor in the lives of young Singaporeans who have been brought up speaking English but, in the presence of older generations, find themselves watching programmes in Mandarin.
Subtitled in English, Chinese drama on television is of great help in
facilitating the understanding of the Mandarin they hear; the importance of the subtitles to the language acquisition process is considerable. And in some parts of Europe subtitled programmes serve as the
most important means of acquiring mastery of English. As reported
in Gottlieb (2004: 96), when asked why they wanted to learn how to
read, 72 out of 75 first-grade Danish students declared that they
wanted to be able to read the subtitles on television; the ability to
read books no longer appeared to be a major incentive. For the citizens of a nation who reportedly listen to English on television and
video for an average of one hour per day, this would hardly come as a
surprise (Gottlieb, 1994: 153–7 and Gottlieb, 1997: 151–3). Reading
subtitles in the foreign language while watching a foreign language
programme also appears to be beneficial to vocabulary acquisition, as
shown by Belgian studies (Van de Poel and d’Ydewalle, 2001). Further
advantageous uses of subtitles include the learning of minority
languages through subtitled film sequences (Baldry, 2001, reported in Gottlieb, 2004: 88).
More recently, audiovisual translation has evolved to the point where,
as a discipline, it is now one of the most vibrant and vigorous fields
within Translation Studies. With the holding of several specialised international conferences and the publication of edited volumes and monographs on topics in AVT, research in the field has also gained visibility in a comparatively short period of time.
Although at present, audiovisual translation is experiencing an
unprecedented boom of interest and activity at all levels, a number of
problematic issues remain to be addressed. The changes taking place in
the profession are fast, not always allowing sufficient time for full
adjustment. Old methods tend to compete with new techniques, and
consistency is not always maintained. Subtitle styles tend to vary from
country to country, even from company to company. In recent years,
calls for a ‘Code of Best Practice in Audiovisual Translation’ have been
recurrent, but would such a document be workable? Is it necessary or
even feasible for all aspects to be harmonised? If so, should it be done at
local, national or international level?
In the audiovisual field, the global dominance of the English language in many spheres becomes even more of an issue. Production and
distribution companies, run in most cases with the help of American
capital, have US interests at their heart. Hollywood, the powerhouse of
the western film industry, mirrors and exports cultural patterns that
have an immediate impact on other languages and cultures. New developments in the industry mean that in addition to Los Angeles, London
is also becoming a world nerve centre for AVT. Given this situation, can
we really talk of an ‘exchange’ of cultural products? Can the term ‘intercultural’ be used to describe this process? Or are we simply faced with a
one-dimensional flow of information and ideas from the metropolis to
the ‘territories’, as countries are known in the profession’s jargon?
Structure of the volume
Audiovisual Translation: Language Transfer on Screen attempts to provide
answers to some of these questions. Others, however, will remain open to debate in an expanding field that is rapidly generating increasing interest at professional, educational and research levels.
The seventeen contributions contained in this volume
offer a detailed overview of most of the translation modes used in the
audiovisual media. Part I of the book deals with Subtitling and
Surtitling, two techniques that respect the original soundtrack and add
the translation in the form of short written texts. In Subtitling for the
DVD Industry, Panayota Georgakopoulou takes a close look at the recent
growth in the DVD market and the rise of subtitling as a rapidly expanding international industry. Georgakopoulou discusses some of the constraints (and possibilities) inherent in subtitling in general, then
proceeds to examine how these are dealt with in the DVD subtitling
industry. The focus is on subtitling from English into other languages,
and all examples used come from the database of the European
Captioning Institute (ECI), which is a UK subtitling company and one
of the world leaders in multilingual DVD subtitling. Georgakopoulou’s
contribution concludes by analysing the impact that centralisation is
having on the profession and discussing the potential offered by the
‘template’, a file containing the master (sub)titles in English and used as
the basis for translation into all languages required in a given project.
In emphasising the value of norms as a heuristic tool in the field of
Translation Studies and their major contribution to the evolution of
Descriptive Translation Studies, Stavroula Sokoli proposes bringing in
an evaluative element. In Subtitling Norms in Greece and Spain she
explains that the mere description of translation behaviour for its own
sake may not provide useful results, whereas the study of norms is
bound to give insight into the intersubjective sense of what is ‘proper’,
‘correct’, or ‘appropriate’. Since norms are not directly observable, a possible approach to them is through their manifestations, whether textual
or extra-textual. The focus in her contribution is on textual sources of
norms, that is, regularities in the choices made by subtitlers, as manifested in the translated films themselves. The texts under study are the
subtitled versions in Spanish and Greek of the films The English Patient
(Anthony Minghella, 1996) and Notting Hill (Roger Michell, 1999). Given
the paper’s qualitative and descriptive nature, rather than laws or absolute truths, possible explanations are proposed and more hypotheses
are generated, which in turn become the basis for future research.
Sokoli’s proposals can be used not only to explain and predict the way
subtitles are done, but also to train future subtitlers.
Łukasz Bogucki’s contribution, Amateur Subtitling on the Internet, looks
at Polish subtitles of the film The Fellowship of the Ring in order to assess
the criteria for quality assessment and their application to audiovisual
translation. As a voice-over country, Poland has had little subtitling
tradition to date, and only feature films are subtitled for their cinema
release. However, recently a new kind of subtitling has developed.
With the help of freeware computer programs and the Internet, amateur
subtitling is undertaken by non-professionals and governed by constraints dramatically different from those of professional subtitling. Often,
the Source Text (ST) is digitised from a low-quality recording, and the
end result – that is the subtitles in the Target Language (TL) – is conditioned by how much the subtitle producer has heard and understood
from the original dialogue. As amply illustrated in Bogucki’s contribution, this approach is likely to result in a multitude of mistakes and
misinterpretations.
While subtitles are translated texts usually displayed below the image,
as on a cinema or television screen, surtitles are most often displayed
above the stage, in live opera and theatre performances. In his contribution entitled The Art and Craft of Opera Surtitling, Jonathan Burton highlights the fact that subtitling and surtitling differ substantially in their
requirements and techniques, and proceeds to discuss surtitling as the
most common approach to the translation of operatic texts. He offers a
detailed analysis of the process, without forgetting the problems and
pitfalls facing translators in this specialised field.
Lucile Desblache’s contribution, Challenges and Rewards of Libretto
Adaptation, also has music at its core. Translating for the stage means
being torn between remaining faithful to the author’s intentions and
providing a version of the text which takes into account a range of
other factors. The original message, with its cultural references and
context, must be communicated to the audience, but the transfer of
non-semantic aspects of the ST, as well as some extra-textual aspects, is
of vital importance to successful adaptation. These factors must all
be taken into consideration in libretto adaptation, but in addition, the
special nature of opera imposes musical restrictions on the text, as the
music cannot be changed and dictates, to a certain degree, which words
can be used. Surtitling has been in existence for over two decades and
has now been adopted by most opera houses. Yet, in certain cases, the
decision is still made to adapt libretti for live performances. Drawing
from her own experience of writing the French version of Albert Herring,
a comic chamber opera by Benjamin Britten, the author discusses the
problems encountered in translation work of this kind. Three orders of
difficulty are identified relating to cultural equivalence, humour and
rhyming.
Part II, Revoicing, is centred on the technique of replacing the
soundtrack of the original audiovisual programme. Seen by many as
the opposing strategy to subtitling, the arguments in favour or against
each of these two approaches have been commonly debated in the
existing literature. In Dubbing versus Subtitling: Old Battleground Revisited,
Jan-Emil Tveit attempts to assess the pros and cons of dubbing and
subtitling, and to determine if either of the two is the ‘better’ option.
Included among the disadvantages of subtitling are loss of information
due to the transition from spoken to written mode as well as the
frequently occurring failure to convey the dialectal and sociolectal
features of spoken language. In addition, problems resulting from
visual, spatial and decoding constraints may cause further difficulties
if subtitling is the chosen method. However, considering the loss of
authenticity and trans-national voice qualities in dubbing, not to mention the fact that it is more costly and time-consuming, the author
concludes that, on the whole, the subtitling approach to audiovisual
translation is to be preferred to dubbing.
As mentioned above, revoicing can be carried out in two different
ways: by completely erasing the voices of the source programme
(dubbing) or by juxtaposing a new soundtrack to the original one
(voice-over, interpreting, audio description). As is well known, Italy is
commonly labelled a ‘dubbing country’ which, together with Austria,
France, Germany and Spain, has adopted a tradition of dubbing rather
than subtitling, the preferred mode of audiovisual translation in Greece,
Portugal, Scandinavia and the UK. However, the situation in dubbing countries is rapidly changing as a choice between dubbing and
subtitling is increasingly becoming available. In their contribution
concerned with the level of audience tolerance to dubbing in Italy, The
Perception of Dubbing by Italian Audiences, Rachele Antonini and Delia
Chiaro discuss the results of a large-scale research project which set out
to explore and assess the quality of dubbed television programmes.
Based on a corpus of over three hundred hours of dubbed television
programmes and by means of web technology, over five thousand
Italian viewers were tested on their perception of dubbed Italian. After
conducting an ANOVA multivariate analysis of the raw data, an index
was created, accounting for the perception of linguistic naturalness in
dubbed Italian by viewers. After analysing the responses of a robust
sample of randomly selected viewers, the results seem to point towards
a certain tolerance of what viewers recognise as being unnatural, an
acceptance of linguistic features adopted on screen which are virtually
non-existent both in autochthonous Italian fictional programmes and
in established electronic corpora of naturally occurring Italian.
The double transfer of a literary work to the screen and of the resulting film into a different culture generates a series of highly interesting
and mutually influential relationships between two cultures, involving
three different types of transfer, namely: film adaptation, literary
translation and audiovisual translation. The aim of Susana Cañuelo
Sarrión’s contribution, Transfer Norms for Film Adaptations in the Spanish–
German Context, is two-fold. First she identifies the works from a corpus
of Spanish films based on literary works and produced between 1975
and 2000 which have reached the German market and sets out to analyse them from a Polysystem Theory perspective, taking into account
the characteristics of German cinematography. Second, a number of
transfer norms for film adaptations are proposed on the basis of recurring
patterns detected in adaptations of literary works in Spain and their
subsequent distribution and reception in Germany. In an audiovisual
market dominated by English-language productions, Cañuelo Sarrión’s
contribution has the virtue of widening the scope of languages involved
to German and Spanish.
In Voice-over in Audiovisual Translation, Pilar Orero examines the many
definitions and descriptions from the field of Translation Studies for the
audiovisual translation mode known as voice-over, in which two
soundtracks in two different languages are broadcast at the same time.
She then describes the technique of translation for voice-over taking
into consideration a distinction which, suggested by Luyken et al. (1991),
has never been properly developed, namely the difference in the process
depending on whether the translation takes place during the production
or during the post-production phases.
As an audiovisual translation mode, interpreting has received very little attention from academics or professionals in the past. In Broadcasting
Interpreting: A Comparison between Japan and the UK, Tomoyuki Shibahara
addresses the question of conformity to different house styles. The
definition of the term ‘broadcasting interpreter’ and its history are
discussed as are the many differences in broadcasting style between
Japan and the UK. For example, broadcasting interpreting at the BBC
Japanese Unit emphasises the quality of Japanese, while accuracy is the
top priority at Japan Broadcasting Corporation (NHK). However, both
broadcasting organisations require their interpreters to edit the
information. Based on his own experience, the author describes the
interpreting style, policy and quality control systems of the above two
organisations. In addition, he provides a comparison of the employment style of both the BBC and NHK as well as an analysis of the pros
and cons of working as a full-time member of staff and as a freelance
interpreter.
Part III of the volume contains contributions discussing Accessibility
to the Media. In countries such as the UK and the USA, subtitling as a
concept is usually understood as intralingual (English into English)
subtitling for the deaf and hard-of-hearing (SDH). However, in
Portugal, Greece and the Scandinavian countries, for instance, it will
be interlingual (English into Portuguese, Greek or any of the
Scandinavian languages) open subtitling that comes to mind. In these
countries, interlingual subtitling for the deaf and hard-of-hearing is
rarely seen as a specific kind of subtitling. Even though some DVDs
now carry the option of interlingual SDH, this is still not common
practice, perhaps under the assumption that standard interlingual
subtitles offer enough information for all, regardless of the fact that
some viewers might have a hearing impairment. As discussed by
Josélia Neves in Interlingual Subtitling for the Deaf and Hard-of-Hearing
this particular group of viewers has special needs when it comes to
gaining true access to audiovisual material. In some respects, standard
interlingual subtitles offer more information than they are able to
absorb; in others, an extra input needs to be added for the full semiotic message to be conveyed adequately. Looking into the future and
taking advantage of television turning digital and interactive, Neves
advocates the production of multiple solutions (dubbing, interlingual
subtitles, intralingual and interlingual SDH, adapted subtitling) for
each audiovisual programme so that they can best suit the specific
needs of different audiences.
Audio description is the other main technique aimed at widening
accessibility to the media for viewers with sensory disabilities. After giving a brief introduction to what audio description is, Andrew Holland,
in Audio Description in the Theatre and the Visual Arts: Images into Words,
offers an informative account of the role of audio describers, professionals who act as the eyes of the visually impaired and transfer what they
see into words. They express and bring alive, verbally, what they see at
the same time as the action takes place, on stage or on the screen.
Focusing on audio description for the theatre, Holland equates the
importance of the audio describer’s sensitivity with the task of relating
the plot and action of a theatre or screen performance. Providing an
audience with the subtext or the unspoken message of a production is
often as important as conveying facts, if not more so. Audio description
represents a form of audiovisual translation that is rapidly gaining
increasing attention as the needs of the visually impaired are beginning to receive the degree of concern more frequently granted to the
hard-of-hearing in the past.
The Internet has meant a revolution in our lives in general, and in the
field of translation and business in particular. In their contribution,
Usability and Website Localisation, Mario de Bortoli and Jesús Maroto
Ortiz-Sotomayor discuss the limited success as well as the problems
encountered by big international companies when translating their
English websites into the languages of the main foreign markets.
According to the authors, for a site to be well received and successful in
the twenty-first century it has to address the unconscious hidden
aspects of what constitutes a community. Localisation needs to build in
an in-depth knowledge of the local culture which in turn means that a
multilingual website cannot be researched and developed in English
and then simply sent off to be translated; rather, every aspect needs to
be discussed and studied prior to development and subsequent implementation. Addressing these cultural differences requires the early
attention of professionals with expertise in a variety of fields. Only
through a synthesis of interdisciplinary expertise will there be a guarantee of the enormous benefits that globalisation can offer to truly
international business.
Part IV, Education and Training, is devoted to the teaching and
training of future experts and professionals in the field of subtitling. The
most distinctive feature of subtitling is the need for economy of translation. There is rarely enough space and time to fit all potentially transferable material in an audiovisual programme onto the stipulated number
of lines and characters. This is where the subtitler’s sensitivity to plot,
character and film narrative becomes crucial in order to determine the
information to keep in and to leave out. But how can this sensitivity be
defined? And how does it actually work?
In his contribution, Teaching Screen Translation: The Role of Pragmatics
in Subtitling, Erik Skuggevik looks at the issue of the sensitivity of the
subtitler as linked to several translation strategies, providing students
and trainees with underused tools to break down the constituents of
both the dialogue of the original and the subtitles of the translated programme. Drawing on the theory of conversational implicature developed by Grice (1975) and the six functions of communication identified by Jakobson
(1960), the author proposes an analysis of communication that enables
us to quantify the constituents of subtitling more closely.
In his search for useful tools for the subtitler, Christopher Taylor draws
from the potential offered by the multimodal transcription as devised
by Thibault and Baldry (2000). In Pedagogical Tools for the Training of
Subtitlers, he discusses how this methodology allows for the minute
description of film texts, thereby enabling subtitlers to base their translation choices on the meaning already provided by other semiotic
modalities contained in the text such as visual elements, music, colour,
and camera positioning. This process, though time-consuming, has proved to be a highly effective pedagogical instrument. A second stage
in this research has led to the creation of transcriptions based on phasal
analysis, following the ideas of Gregory (2002). Film texts are divided
into phases and sub-phases based on the identification of coherent and
harmonious sets of semiotic modalities working together to create meaning in recognisable chunks, rather in the manner of written text. Such
texts are analysed in terms of their phasal construction, and also in
terms of the transitions separating the phases. This further extension of
the original tool provides the basis for a thorough analysis of any film
text, and provides a useful addition to the pedagogy of film translation.
As mentioned before, audiovisual translation is closely linked to technology and any new advances are bound to have a knock-on effect on
the discipline, particularly on subtitling. The Internet has been one such advance. Tutors face new challenges in the global era and perhaps
the most revolutionary is the proliferation of online multimedia courses
in higher education. As discussed in Francesca Bartrina’s contribution,
Teaching Subtitling in a Virtual Environment, tutors need to transfer teaching skills from the face-to-face classroom to the virtual environment.
This in turn gives rise to new trends in teaching methods and calls
for the acquisition of new skills in the use of Information and
Communication Technology. More specifically, in the case of online
subtitling courses, it means that instructors need to stay abreast of
innovations in digital subtitling technology. Bartrina’s contribution
deals with the challenge posed by virtual courses to the teaching of
subtitling and with the way skills of future professionals may be
developed by online learning.
The final contribution in this volume, by Annamaria Caimi,
Subtitling: Language Learners’ Needs vs. Audiovisual Market Needs,
highlights the importance of subtitling in language learning and suggests possible convergences between linguistic, educational and economic goals. Such an approach requires meeting different expectations
and interests, which can be accomplished by encouraging the distribution of audiovisual programmes with subtitles, in order to introduce
foreign languages through entertainment. The educational objectives,
which consider subtitling as a linguistic product, are quality-oriented,
whereas the economic objectives, whose main concern is the distribution of subtitled audiovisual programmes, are quantity-oriented. In
order to satisfy both needs, the author tackles the issue from three
different perspectives: (1) the linguistic perspective, which focuses on
the accuracy and appropriateness of language transfer through the
application of principles of translation theory; (2) the foreign language
teacher’s perspective, whose priority is to single out the most natural
methods and techniques to facilitate foreign language acquisition;
(3) the audiovisual marketer’s perspective, whose task is to meet the
requirements of consumers in order to reach and maintain full demand
for the product.
It is hoped that these seventeen studies will together constitute a
rounded vision of the many different ways in which audiovisual
programmes cross linguistic barriers and frontiers and, in particular, of
the importance of audiovisual translation at the present time.
Notes
1. Many countries, for example, Belgium, Canada, the Netherlands, the UK and
USA, have established targets to provide SDH for 100% of their television
programmes (Remael, 2007). As for AD and sign language interpreting, the
targets are lower, at between 10% and 20% (Orero, 2007).
2. The creation of the Centro Español de Subtitulado y Audiodescripción (CESyA) is
a good example of the type of initiatives taken by some countries. More information on this centre can be found at www.cesya.es.
References
Baldry, A. (2001) Promoting Computerised Subtitling: A Multimodal Approach
to the Learning of Minority Languages. Unpublished manuscript. University
of Pavia.
Díaz Cintas, J. (2003) ‘Audiovisual translation in the third millennium’. In
G. Anderman and M. Rogers (eds) Translation Today. Trends and Perspectives
(pp. 192–204). Clevedon: Multilingual Matters.
EBU (online) The Future of Access to Television for Blind and Partially Sighted People
in Europe. Paris: European Blind Union. www.euroblind.org/fichiersGB/accessTV.html#intro
Gottlieb, H. (1994) Tekstning – synkron billedmedieoversættelse. Copenhagen:
University of Copenhagen.
Gottlieb, H. (1997) Subtitles, Translation and Idioms. Copenhagen: University of
Copenhagen.
Gottlieb, H. (2004) ‘Language-political implications of subtitling’. In P. Orero
(ed.) Topics in Audiovisual Translation (pp. 83–100). Philadelphia and
Amsterdam: John Benjamins.
Gregory, M. (2002) ‘Phasal analysis within communication linguistics: two
contrastive discourses’. In P. Fries et al. (eds) Relations and Functions within and
around Language (pp. 316–45). London: Continuum.
Grice, H.P. (1975) ‘Logic and conversation’. In L. Cole and J.L. Morgan (eds)
Syntax and Semantics (pp. 41–58). New York: Academic Press.
Hamilton, F. and Stevenson, D. (2005) ‘Sex and shopping drive new television channel launches as Europe sees a record boom in new TV channels’. Screen Digest, 13
September. www.screendigest.com/press/releases/FHAN-6G7BXGpressRelease.pdf
Hay, J. (1994) Hearing Loss: Questions You Have ... Answers You Need. Pennsylvania:
People’s Medical Society.
Jakobson, R. (1960) ‘Closing statement: linguistics and poetics’. In R. DeGeorge
and F. DeGeorge (eds) (1972) The Structuralists: From Marx to Levi-Strauss
(pp. 85–122). New York: Anchor Books.
Luyken, G.M., Herbst, T., Langham-Brown, J., Reid, H. and Spinhof, H. (1991)
Overcoming Language Barriers in Television: Dubbing and Subtitling for the
European Audience. Manchester: European Institute for the Media.
Marriott, J. and Vale, D. (2002) Get the Picture: Making Television Accessible to Blind
and Partially Sighted People. London: Royal National Institute of the Blind,
Campaign report 19.
Neves, J. (2005) Audiovisual Translation: Subtitling for the Deaf and Hard-of-Hearing. PhD Thesis. London: Roehampton University. http://rrp.roehampton.ac.uk/artstheses/1
Orero, P. (2007) ‘Sampling audio description in Europe’. In J. Díaz Cintas, P. Orero
and A. Remael (eds) Media for All: Subtitling for the Deaf, Audio Description and
Sign Language (pp. 111–25). Amsterdam: Rodopi.
Remael, A. (2007) ‘Sampling subtitling for the deaf and the hard-of-hearing in
Europe’. In J. Díaz Cintas, P. Orero and A. Remael (eds) Media for All: Subtitling
for the Deaf, Audio Description, and Sign Language (pp. 23–52). Amsterdam:
Rodopi.
Thibault, P. and Baldry, A. (2000) ‘The multimodal transcription of a television
advertisement: theory and practice’. In A. Baldry (ed.) Multimodality and
Multimediality in the Distance Learning Age (pp. 311–85). Campobasso: Palladino
Editore.
Van de Poel, M. and d’Ydewalle, G. (2001) ‘Incidental foreign-language
acquisition by children watching subtitled television programs’. In
Y. Gambier and H. Gottlieb (eds) (Multi)Media Translation. Concepts, Practices
and Research (pp. 259–273). Amsterdam and Philadelphia: John Benjamins.
Yvane, J. (1995) ‘Babel: un soutien actif aux transferts linguistiques’. Translatio,
Nouvelles de la FIT-FIT Newsletter XIV (3–4): 451–60.
Part I
Subtitling and Surtitling
2
Subtitling for the DVD Industry
Panayota Georgakopoulou
We all know what subtitles are. Luyken et al. (1991: 31) define
them as:
... condensed written translations of original dialogue which appear
as lines of text, usually positioned towards the foot of the screen.
Subtitles appear and disappear to coincide in time with the corresponding portion of the original dialogue and are almost always
added to the screen image at a later date as a post-production
activity.
Interlingual subtitling is a type of language transfer in which the translation, that is the subtitles, does not replace the original Source Text (ST); rather, both are present in synchrony in the subtitled
version. Subtitles are said to be most successful when not noticed by
the viewer. For this to be achieved, they need to comply with certain
levels of readability and be as concise as necessary in order not to
distract the viewer’s attention from the programme. So, what are the
techniques used to make subtitles unobtrusive? And what is the subtitler’s role? The answers to these questions can be found if we take a
closer look at the technical, textual and linguistic constraints of
subtitling.
Technical constraints
The technical spatial and temporal constraints of audiovisual programmes
relate directly to the format of subtitles.
● Space. In the limited space allowed for a subtitle there is no room for
long explanations. Two lines of text are usually the norm, and the
number of characters per line depends on a number of factors, including the subtitling workstation used. Since readability of
the text is of paramount importance, it has been suggested that
an ideal subtitle is a sentence long, with the clauses of which
it consists placed on separate lines (Díaz Cintas and Remael,
2007: 172–80).
● Time. The length of a subtitle is directly related to its on-air time.
Accurate in and out timing is very important and the text in the subtitles should always be in balance with the appropriate reading time
setting. No matter how perfect a subtitle is in terms of format and
content, it will always fail to be successful if viewers do not have
enough time to read it. A lower word per minute (wpm) or character
per minute (cpm) setting is applied, for example, when subtitling
children’s programmes, as children cannot reach adult reading
speeds.1
● Presentation. Subtitles can take up to 20% of screen space. Important
factors for their legibility are the size of the characters, their position
on screen, as well as the technology used for the projection of subtitles in the cinema (DTS or Dolby), TV broadcast, DVD emulation, etc.,
as it affects their definition. In our digital age, most of these problems
have been solved. In DVD subtitling, for instance, the choice of any
font and font size supported by Windows is possible, unlike teletext
subtitling for television, where this is not the case. These technical
constraints determine subtitlers’ work practice and their linguistic
choices.
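The interplay of the time and space constraints above can be illustrated with a small calculation. It assumes the reading-speed figures cited in the note to this chapter (750 characters per minute for adults) and a line length of 37 characters, which is a common television convention used here purely as an illustrative default; the function itself is a sketch, not an industry tool:

```python
def max_subtitle_chars(duration_seconds, cpm=750, chars_per_line=37):
    """Rough number of characters a viewer can read while a subtitle
    stays on screen, at a given reading speed (cpm = characters per
    minute). 750 cpm is the adult rate cited in the chapter's note;
    37 characters per line is an assumed, illustrative value."""
    budget = int(duration_seconds * cpm / 60)
    # Cap at two lines of text, the usual norm mentioned above.
    return min(budget, 2 * chars_per_line)

print(max_subtitle_chars(4))            # 4-second subtitle, adult speed: 50
print(max_subtitle_chars(4, cpm=540))   # ~130 wpm for children: 36
```

A lower cpm setting, as for children's programmes, shrinks the character budget well below what the two-line space limit alone would allow.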
Textual constraints
In subtitling, language transfer operates across two modes, from
speech to writing, from the soundtrack to the written subtitles. This
shift of mode creates a number of processing and cohesion issues that
make it difficult to maintain the filmic illusion in the target
product.
Oral–aural processing
Since in subtitling both source and target texts are present
simultaneously (see Gottlieb, 1994: 265 for details on the four channels that compose the audiovisual text), the viewer of a subtitled
programme has at least two different types of information on which
to concentrate: the action on the screen, and the translation of the
dialogue, that is the subtitles. This adds to the verbal information
that might appear in the original programme in the form of inserts
and which the viewers have to process through the visual channel,
making it more difficult for them to relax and enjoy the programme.
The situation becomes more difficult when the timing of the subtitles
is not satisfactorily done. When a subtitle is continued over a shot
change, for example, the viewer may think that it is a new subtitle
and re-read it, losing precious viewing time. Also, the temporal succession of subtitles is quite different from the linear succession of
sentences in a book; it does not allow the eye to move backwards or
forwards to clarify misunderstandings, recapitulate the basic facts
or see what will happen next. This can be done, however, when
watching a film or a programme on video or DVD by using the
rewind function.
As a result, in order for the subtitled text to be successful, it needs
to preserve the ‘sequence of speech acts [ ... ] in such a way as to relay the
dynamics of communication’ (Mason, 1989: 15). A few rough rules
are usually observed by subtitlers to help minimise the potentially
negative effects of these extra processing demands made by the
viewer:
(a) When the visual dimension is crucial for the comprehension of a
particular scene, subtitlers should offer only the most basic linguistic information, leaving the eyes of the viewers free to follow the
images and the action.
(b) Conversely, when important information is not in the images but in
the soundtrack, subtitlers should produce the fullest subtitles possible, to ensure that the viewers are not left behind.
(c) The presentation of the subtitles, the way in which the words of
each subtitle are arranged on the screen, and on each subtitle line,
can help enhance readability.
This last point relates to the role of grammar and word order in subtitling. The simpler and more commonly used the syntactic structure of a subtitle, the less effort is needed to decipher its meaning. For
example, in all the subtitled versions of the following text from
Hitchcock’s Psycho, the main and subordinate clauses of the sentence
are placed in separate subtitle lines and the syntax is simplified
through a re-arrangement of the original phrase as shown in the
English subtitle:
Example 1

Original dialogue (01.19.21.04 – 01.19.24.22):
One thing people never ought to be when they’re
buying used cars is in a hurry.

English subtitle:
People never ought to be in a hurry
when buying used cars.

French subtitle:
On ne devrait pas être pressé
quand on achète une voiture d’occasion.

Italian subtitle:
Non si dovrebbe mai andare di fretta
quando si compra una macchina.

German subtitle:
Beim Gebrauchtwagenkauf
sollte man es nie eilig haben.
Even appropriate line breaks within a single subtitle can facilitate
comprehension and increase reading speed if segmentation is done into
noun or verb phrases, rather than smaller units of a sentence or clause.
In the example below from Weird Science it is possible, within the time
allowed, to read the text of both subtitles, which are identical from a
semantic point of view. However, of the two solutions in Example 2, the
subtitle format in Option 2 is easier to read as the text is broken up after
the noun phrase ‘your parents’.
Example 2

Original dialogue (01.03.53.01 – 01.03.57.11):
I don’t understand. How come your parents trust you all of a sudden?

Option 1:
I don’t understand why your
parents trust you all of a sudden.

Option 2:
I don’t understand why your parents
trust you all of a sudden.
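The preference for breaking at phrase boundaries can be sketched as a toy heuristic: candidate splits that fit the line length are scored, with a heavy penalty for breaking immediately after a possessive, article or preposition. The word list and scoring are invented for illustration and are not an actual subtitling algorithm:

```python
# Words after which a line break is awkward because they introduce a
# phrase whose head follows (possessives, articles, prepositions).
# This set is an illustrative assumption, not an exhaustive rule.
BAD_BREAK_AFTER = {"a", "an", "the", "your", "my", "his", "her",
                   "our", "their", "its", "to", "of", "in", "on"}

def break_subtitle(text, max_line=37):
    """Pick a two-line split that fits max_line characters per line
    and avoids separating a possessive/article/preposition from the
    word it modifies. Illustrative heuristic only."""
    words = text.split()
    best = None
    for i in range(1, len(words)):
        top, bottom = " ".join(words[:i]), " ".join(words[i:])
        if len(top) > max_line or len(bottom) > max_line:
            continue
        penalty = 100 if words[i - 1].lower().strip(".,?!") in BAD_BREAK_AFTER else 0
        score = penalty + abs(len(top) - len(bottom))
        if best is None or score < best[0]:
            best = (score, top, bottom)
    return (best[1], best[2]) if best else (text, "")

top, bottom = break_subtitle(
    "I don't understand why your parents trust you all of a sudden.")
# top == "I don't understand why your parents"
# bottom == "trust you all of a sudden."
```

On the Weird Science line, the heuristic picks Option 2’s break after the noun phrase ‘your parents’, even though a balance-only score would favour Option 1.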
Finally, in this example from the audio-commentary of Hitchcock’s
Vertigo, we can see how long sentences are split up into shorter ones in
an obvious attempt to increase reading speed. As in this case,
conjunctions usually provide a natural split for subtitle breaks.
Example 3

Original dialogue:
I met with him, talked to him, I’ve told him the whole story and he
started the painting and one day he said he must meet Vera Miles to
keep on going.

English subtitles:

01.26.31.07 – 01.26.34.08
Met with him, told him the whole story
and he started the painting.

01.26.35.06 – 01.26.38.23
One day he said he must meet Vera Miles
to keep on going.
Textuality issues
Because of the limited space generally available for subtitles, certain elements of the soundtrack have to be omitted, and the obvious solution is
to do away with redundant elements of speech. Redundancy helps participants in a conversation grasp its intended meaning more easily and
its elimination from film dialogue may, therefore, weaken cohesion in
the subtitled text. The question then is to what extent the predictability
of discourse is affected by the systematic deletion of redundant features
and the impact this may have on the viewers’ understanding of the
narrative. There is, however, a strong link between the dialogue of a
film and the context in which it takes place. Apart from linguistic
redundancy in audiovisual programmes, there is also situational redundancy that usually works in favour of the translator.
The visual information often helps viewers process the subtitles, and
to a certain extent this compensates for the limited verbal information
they contain. For example, aspects of interpersonal communication
may be found in intonation, rhythm and the facial and kinesic movements that accompany the dialogue which are, to an extent, universal
(Nir, 1984: 90). When, in Braveheart, Mel Gibson cries out ‘Hold! Hold!
Hold! Now!’ at the moment when the British cavalry is about to attack
the frontline of the Scottish army, the viewers can easily understand
what is happening, even without a translation. This is also known as
the ‘feedback effect’ of films (Nedergaard-Larsen, 1993: 214).
Change in mode
The shift of mode from speech to writing presents the subtitler with
yet more challenges. Characteristics of spontaneous speech, such as
slips of the tongue, pauses, false starts, unfinished sentences, ungrammatical constructions, etc., are difficult to reproduce in writing. The
same goes for dialectal, idiolectal and pronunciation features
that contribute to the moulding of screen characters. The use of a
pseudo-phonetic transcription to reproduce a regional or social dialect in the subtitles, for instance, would not be helpful as it would
hinder the readability of the text by adding to the reading time of the
subtitle, and also hinder the comprehension of the message by obscuring the style.
What then is the subtitler to do? Certain spoken features may need to
be rendered in the subtitles if their function is to promote the plot. But
rather than reproducing mistakes in an uneducated character’s speech,
a subtitler can make use of appropriate, usually simpler, vocabulary in
order to indicate education, regional dialect or social class of the character. Or the decision may be taken not to reproduce the stuttering in a
character’s speech since viewers have recourse to this information
already from the feedback effect of the soundtrack. For the subtitler, it
is a matter of deciding in each case what priority needs to be given to
certain features of each sequence of speech.
Linguistic constraints
The space and time constraints inherent in the subtitling process
usually enhance traditional translation challenges, such as grammar
and word order, as well as problems related to cross-cultural shifts.
With an average 30% to 40% expansion rate when translating from
English into most other European languages, reduction is obviously
the most important strategy in subtitling. But what, then, are the
elements of speech that are commonly omitted or edited in subtitles?
According to Kovačič (1991: 409), there is a three-level hierarchy of
discourse elements in subtitling:
● The indispensable elements (that must be translated).
● The partly dispensable elements (that can be condensed).
● The dispensable elements (that can be omitted).
The indispensable elements are all the plot-carrying elements of a film;
they carry the experiential meaning without which the viewers would
not be able to follow the action. In the two examples below of the Dutch
and German subtitles of Hitchcock’s Vertigo, only the plot-carrying
elements have been translated.
Example 4

Original dialogue (01.07.37.13 – 01.07.41.23, duration: 4.10):
He had no problem getting Harry Cohn to agree a loan-out.

Dutch subtitle:
Hij wist Harry Cohn
over te halen tot een ruil.

English back-translation:
He got Harry Cohn
to agree a loan-out.
Example 5

Original dialogue (01.08.12.06 – 01.08.17.03, duration: 4.22):
Samuel Taylor was the one who introduced the character of Midge.

German subtitle:
Samuel Taylor
führte die Figur der Midge ein.

English back-translation:
Samuel Taylor
introduced the character of Midge.
There are also a number of linguistic elements that many subtitlers in
the profession would omit even if the spatio-temporal constraints of
subtitling did not apply, such as:
(a) Repetitions.
(b) Names in appellative constructions.
(c) False starts and ungrammatical constructions.
(d) Internationally known words, such as ‘yes’, ‘no’, ‘OK’.
(e) Expressions followed by gestures to denote salutation, politeness,
affirmation, negation, surprise, telephone responses, etc.
(f) Exclamations, such as ‘oh’, ‘ah’, ‘wow’ and the like.
(g) Instances of phatic communion and ‘padding’, often empty of
semantic load, their presence being mostly functional speech
embellishment aimed at maintaining the desired speech-flow.
Among these, we can find expressions such as ‘you know’, ‘well’,
‘naturally’, ‘of course’, ‘understandably’; prepositional phrases (‘in
view of the fact that’); rhetorical flourishes; and phrases used for
sound effect (‘ways and means’).
Many of these linguistic elements are commonly deleted because they
can be retrieved from the soundtrack (b, c, d, and f). Were they to be
transcribed or translated, we would have a case of duplication, as the
same information would be found both in the subtitles and in the
soundtrack. English words such as ‘yes’, ‘no’ and ‘OK’ are known virtually throughout the world. This has partly to do with the global position of the English language today and its prominence in the media
industry. The degree to which certain words or expressions can be easily
recognised by target viewers also has to do with how closely the two
languages involved are related, usually English and the particular target
language into which a programme is subtitled.
Elements such as repetitions, padding expressions or even ungrammatical constructions may at times be condensed rather than
omitted, as they may contribute to the textuality of the programme and
the character development of the actors. In the following example, we
find repetitive items in the second part of this dialogue from Weird
Science. The three questions ‘Didn’t throw up?’, ‘No?’, and ‘Nothing?’
could easily be omitted from the subtitles, as the meaning is still obvious
from the translation of only the last of the questions, ‘You didn’t see
anything?’. The purpose of the repetition, however, is to show the character’s exasperation and sense of urgency in getting an answer. Thus,
some repetition is still built into the subtitles in the form of ‘No?’ or
‘Nothing?’, the two most concise of the three phrases, as limited time is
available. In the subtitles aimed at deaf and hard-of-hearing viewers, yet
further repetition is built in through the addition of the phrase ‘I didn’t
throw up?’ and modification of the timings appropriately.
Example 6

Original dialogue:
01.27.07.22 – 01.27.12.06 (duration: 4.14)
All right, in your dream, did I get up in the middle
of the night and yak in your sink?

01.27.13.05 – 01.27.15.17 (duration: 2.12)
Didn’t throw up? No? Nothing? You didn’t see anything?
Danish subtitles:
I din drøm, stod jeg så op midt om
natten og knækkede mig i vasken?

Nej? Så du ikke noget?

German subtitles:
Stand ich in deinem Traum mitten in der
Nacht auf und kotzte ins Waschbecken?

Nein? Du hast nichts gesehen?

Greek subtitles:
[not legible in this copy]

Hungarian subtitles:
Felkeltem az álmodban, hogy
belerókázzak a mosdóba?

Semmi?
Nem láttál semmit?

Spanish subtitles:
En tu sueño, ¿me levanté
por la noche a vomitar en el lavabo?

¿Nada? ¿No viste nada?

Russian subtitles:
В твоем сне я вставал посреди ночи,
чтобы поблевать в раковину?

Нет? Ты ничего не видел?

English back-translation:
In your dream, did I get up in the middle
of the night and yak in your sink?

No?/Nothing?
You didn’t see anything?

English SDH subtitles:
In your dream, did I get up in the middle
of the night and yak in your sink?

I didn’t throw up? No?
You didn’t see anything?
Some observations on translation strategies
There are numerous constraints in subtitling, and there is no systematic
recipe to be followed, no prêt-à-porter solutions. To decide on the best
translation strategy, a thorough analysis of each translation issue has to
be made based on:
● Function (relevance to the plot).
● Connotation (implied information, if applicable).
● Target audience’s assumed knowledge of the language and culture of the source language programme.
● Feedback effect.
● Media-related constraints.
Reduction, which is the most important and frequently used strategy in
subtitling, is generally applied on the basis of the three functions of
language as suggested by Halliday (1973). Whereas experiential meaning needs to be translated, aspects of interpersonal and textual meaning can be omitted, especially when these may be retrieved directly
from the picture or the soundtrack. The task of the subtitler involves
constant decision-making to ensure that the audiovisual programme is not bereft of its style, personality and clarity, and that its rhythm and dramatic progression are not hindered. The final aim is to retain and reflect
in the subtitles the equilibrium between the image, sound and text of
the original.
DVD subtitling
With the appearance of DVDs on the market in the last part of the twentieth century, the subtitling industry has faced a new challenge: the demand for multilingual subtitling into a large number of languages with very tight turnaround times. The
response from the market has come in the form of centralisation,
whereby large subtitling companies offer services for all languages
required, minimising the costs of Hollywood studios since there is no
longer the need to send master copies all over the world. But how does
centralisation work? And what are the advantages and problems?
The template
In the past, the two main problems that subtitling companies had to
address were:
(a) The cost of subtitle production: in order to make their services
attractive to large Hollywood production companies, prices needed
to be controlled.
(b) The management of subtitle files: numerous files are involved and
their management had to be made easier in order to speed up the
process and minimise the number of mistakes that might be made.
The solution to both these problems has come in the form of universal
template subtitle files in English (also referred to in the profession as the
genesis file or the transfile), to be used as the basis for translation into
all languages. In other words, the subtitling process has been split into
two distinct tasks. The timing of a film or audiovisual programme is
made by English native speakers who produce a unique timed subtitle
file in English, that is, a file where all the in and out times have been
decided. This file is then used as the basis for translation into other
languages as required, with the translation carried out by native speakers of these languages.
The advantages of this new working method are obvious. The costs
involved in the timing of a film are reduced to producing one subtitle
file only, rather than a different one for each of the languages required.
To put it in financial terms, what the template has introduced into the
subtitling industry is an economy of scale, whereby the greater the
number of languages involved in a project, the larger the cost-savings to
be made.
From a linguistic point of view, certain problems can also be eliminated. When spotting and subtitling are carried out directly by a native
speaker living in the target country and whose native language is not
English, mis-hearings of the language of the original soundtrack,
which is usually English, tend to be more frequent. When the mother
tongue of the subtitler is not English, there can be potential problems
in understanding the source language, especially slang and colloquialisms, which require an affinity with the spoken language that can only
really be acquired by living in the country where the language is
spoken. But if the subtitler used is one who has been living in an
English-speaking country for some time, with near-native fluency of
English, s/he might run the risk of not being 100% up-to-date with the
changes in his/her own native tongue, languages being living organisms in constant change. The use of a template file makes it possible,
therefore, to gain the best of both worlds: English native speakers are
used to produce the original file in English, and the native speakers of
the languages required in each project are then used to translate into
their own mother tongues.
Above all, the recruitment needs for subtitling companies catering
for many languages can now be dealt with instantly as the pool of
freelancers – in all the languages with which a company works – has
similarly been expanded. It is no longer necessary to recruit subtitlers
in all the languages required who are also able to do the technical
task of spotting, a rare type of professional that, for some languages,
may be hard to find. Translators with no subtitling training can be
given a chance. As the issue of timing and cueing is removed from the
equation, translators without subtitling experience may be given
training in recognising mistakes and good practice in subtitling and
in developing respect for its principles. Timing issues that emerge
can be reported and dealt with by the project managers, leaving the
translators to concentrate on the translation of the dialogue.
As all the subtitle files required for any given project or film are based
on a single template in English, the resulting target files in all languages
have now become identical in terms of timings and subtitle number.
This facilitates both their management and the checks they have to undergo before they are ready for DVD emulation. And this last part
of the process does not require specific linguistic skills, other than
knowledge of English, from the person performing checks, yet another
cost-saving factor.
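Since target files share the template’s timings and subtitle count, the final checks reduce largely to a mechanical comparison. A sketch of such a check follows; the tuple-based file representation is an assumption for illustration, not an actual industry format:

```python
def check_against_template(template, target):
    """Verify that a translated subtitle file mirrors its template:
    same number of subtitles, identical in/out times. Each file is
    represented as a list of (time_in, time_out, text) tuples, an
    illustrative format assumed for this sketch."""
    problems = []
    if len(template) != len(target):
        problems.append(f"subtitle count differs: "
                        f"{len(template)} vs {len(target)}")
    for n, (src, tgt) in enumerate(zip(template, target), start=1):
        if src[:2] != tgt[:2]:  # compare (time_in, time_out) only
            problems.append(f"subtitle {n}: timings differ")
    return problems

template = [("01.07.37.13", "01.07.41.23", "He got Harry Cohn...")]
dutch    = [("01.07.37.13", "01.07.41.23", "Hij wist Harry Cohn...")]
assert check_against_template(template, dutch) == []
```

Because the check compares only timings and counts, it needs no knowledge of the target language, which is precisely the cost-saving point made above.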
The problems of centralisation
When catering for worldwide subtitling needs and styles, the complexities
of subtitling described above multiply. Decisions relating to issues such
as when to use italics in the subtitles or whether to give priority to
horizontal versus vertical subtitling (as in Japanese) are country-specific.
The question has been raised as to how these different conventions can
possibly be tackled within a single subtitle file that has to conform to
each country’s conventions, while ensuring that the end-result is readily
acceptable in the countries where the finished product (that is the DVD)
is to be sold.
These requirements clearly have to be kept in mind when creating
a template file, making the effort required on the part of the subtitler
greater. But the main principle that underlies the subtitling of a template file is much the same as the subtitling of a film or any other
audiovisual programme into any language: there is one constant that
never changes, which is the source file. The main difference between
subtitling and other forms of translation is the fact that the translated version of the text does not replace the original: in the subtitled
version both are present concurrently. In other words, what for subtitlers is one of the major translation challenges in their work – the
original always remains present alongside their translation, limiting
their choices and making their solutions the focus of criticism of
audiences worldwide – can now be used as an advantage. It is true
that over the years different countries have developed their own subtitling styles, a fact probably more true of, for example, Scandinavian
countries with their longstanding tradition in subtitling, in contrast
to primarily dubbing countries, such as Italy or Spain, where subtitling is more of a novelty. But as regards the first part of the process,
that is the timing of a subtitle file, the choices are limited, as the
actors on the screen will always open their mouths to utter a phrase
at a certain point in a film or programme and close it at another
point, and the timings of any subtitle file would need to reflect this.
This may explain why the process of templating is not new to
Scandinavian countries, where it was used long before the appearance of DVD, with companies creating subtitle files for VHS releases
for all Scandinavian countries out of a single office (Georgakopoulou,
2003: 24).
If template files are to be used successfully in the profession, greater
effort needs to go into their creation and more information must be included.
At the European Captioning Institute, a template file always contains
information such as:
(a) Translation notes for unfamiliar or culturally-bound expressions,
meant to help non-English native speakers produce a more accurate
translation. Cases of irony, wordplay and related aspects are also
brought to the attention of subtitlers.
(b) Notes as to the points of the film or programme in which burnt-in
text appears, that is written inserts on screen, in order to cater for
information that, in certain languages, may or may not need to be
translated. These notes help ensure accurate positioning of the subtitles on screen, so that the burnt-in text on the image is not
obscured by the subtitles.
(c) Further notes as to the choice of font, the use of italics, the treatment of songs, the treatment of titles, etc., in each language file,
depending on the conventions of each language and/or the demands
of the client.
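Purely as an illustration of the kind of information listed above, a single entry in such a template might be modelled like this (all field names are invented for the sketch, not an actual template file format):

```python
from dataclasses import dataclass, field

@dataclass
class TemplateSubtitle:
    """One entry in an English template ('genesis') file: fixed
    timings shared by every language, plus the ancillary notes
    described in points (a)-(c) above. Illustrative only."""
    number: int
    time_in: str                  # e.g. "01.07.37.13"
    time_out: str                 # e.g. "01.07.41.23"
    text: str                     # the English source subtitle
    translation_notes: list = field(default_factory=list)   # (a)
    burnt_in_text: bool = False                              # (b)
    formatting_notes: list = field(default_factory=list)     # (c)

sub = TemplateSubtitle(
    number=1,
    time_in="01.07.37.13",
    time_out="01.07.41.23",
    text="He had no problem getting Harry Cohn\nto agree a loan-out.",
    translation_notes=["'loan-out': studio-era contract practice"],
)
```

Translators into each language would then replace only the `text` field, leaving the timings and notes untouched.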
On the whole, the creation of a template file is far more complicated
than the creation of any other subtitle file and the process involves an
even greater amount of work because of the number of ancillary files
produced. However, the rules and lists of notes necessary for the creation of template files are also the means whereby the checking of the subsequent language files is facilitated at a later stage, ensuring that they all adhere to the conventions necessary for acceptance in the target country and to clients’ specifications.
Conclusion
The advantages of template files and the flexibility that they offer to
subtitling vendors cannot be overestimated; they form the basis of the
DVD subtitling industry. As this is a relatively young industry, it goes
without saying that the underlying rules and additional constraints for
creating template files make this subject wide open to further research
on a global scale in order to refine current practices and produce even
better quality subtitle files. Such research would inevitably require practitioners from all over the world to play a central role in order to ensure
that the language files produced will conform to the appropriate
subtitling style of each country.
A prominent Greek subtitler, with over 40 years’ experience in the
field, told me in an interview that a good subtitler can edit anything,
even a book, down to one subtitle. In line with my true passion for the
job as subtitler, and echoing the statement that ‘[t]he attempt to achieve
perfect subtitling has some affinity to the search for the Holy Grail’
(Baker et al., 1984: 6), I cannot resist the temptation of editing down
this chapter into a single subtitle:
The template IS the Holy Grail
of the DVD subtitling industry today.
(Duration: 04.15)
Note
1. Adult reading speed is calculated at ECI at 750 characters per minute, or approximately 180 words per minute for non double-byte languages, whereas children’s reading speed is set considerably lower, hovering between 120 and 140 words per minute.
References
Baker, R.G., Lambourne, A.D. and Rowston, G. (1984) Handbook for Television Subtitlers. England: IBA.
Díaz Cintas, J. and Remael, A. (2007) Audiovisual Translation: Subtitling. Manchester:
St Jerome.
Georgakopoulou, P. (2003) Redundancy Levels in Subtitling. DVD Subtitling: A
Compromise of Trends. Guildford: University of Surrey. PhD Dissertation.
Gottlieb, H. (1994) ‘Subtitling: People translating people’. In C. Dollerup and
A. Lindegaard (eds) Teaching Translation and Interpreting 2: Insights, Aims, Visions
(pp. 261–74). Amsterdam and Philadelphia: John Benjamins.
Halliday, M.A.K. (1973) Explorations in the Functions of Language. London: Edward
Arnold.
Kovačič, I. (1991) ‘Subtitling and contemporary linguistic theories’. In M. Jovanovič
(ed.) Translation, A Creative Profession: Proceedings XIIth World Congress of FIT –
Belgrade 1990 (pp. 407–17). Beograd: Prevodilac.
Luyken, G.M., Herbst, T., Langham-Brown, J., Reid, H. and Spinhof, H. (1991)
Overcoming Language Barriers in Television: Dubbing and Subtitling for the European
Audience. Manchester: European Institute for the Media.
Mason, I. (1989) ‘Speaker meaning and reader meaning: Preserving coherence in
screen translating’. In R. Kölmer and J. Payne (eds) Babel: The Cultural and
Linguistic Barriers Between Nations (pp. 13–24). Aberdeen: Aberdeen University
Press.
Nedergaard-Larsen, B. (1993) ‘Culture-bound problems in subtitling’. Perspectives:
Studies in Translatology 2: 207–41.
Nir, R. (1984) ‘Linguistic and sociolinguistic problems in the translation of
imported TV films in Israel’. The International Journal of the Sociology of Language
48: 81–97.
3
Subtitling Norms in Greece and Spain
Stavroula Sokoli
Introduction
By simply watching subtitled films in Greece, traditionally considered a
subtitling country, and Spain, a dubbing country, certain differences in
this professional practice of audiovisual translation may easily be
observed. In this chapter the aim is twofold: first, to look into the factors
that guide translators’ decisions in the process of subtitling in these two
countries, by finding indications of norms; second, closely related to the
first, is to explore the relationship between the subtitles and the other
elements of the audiovisual text. The main premises are that the use of
descriptive tools can overcome the limitations posed by prescriptive
approaches and that there are regularities in the practice of subtitling.
Norm theory has contributed in a major way to the evolution of
Descriptive Translation Studies by introducing an evaluative element.
Indeed, the mere description of translation behaviour for its own sake
may not provide useful results, whereas the study of norms is bound to
give insight into the intersubjective sense of what is ‘proper’, ‘correct’,
or ‘appropriate’, in other words, the content of norms (Hermans, 1999:
82). According to Toury (1995: 54), translators have to acquire a certain
set of norms which will lead them towards adopting a suitable behaviour and help them manoeuvre among all the factors which may constrain it. In the case of subtitling, these factors concern mainly time
and space constraints, which, until recently, have attracted primary
attention in the discussions about subtitling and AVT. Here it is argued
that a shift from a discussion of the constraints themselves to the factors
that guide the translators in their work may prove useful.
Since norms are not directly observable, a possible approach is a study
of their manifestation, whether textual or extra-textual – two major
sources of reconstruction of translational norms suggested by Toury
(1995). The focus here is on textual sources of norms, that is, regularities
in the choices made by subtitlers, as manifested in translated films. In
addition, the findings will be compared to the ones encountered in
extra-textual sources.
The theoretical starting point of the discussion is based on the
categorisations of norms made by Toury (1995) – initial, preliminary
and operational (subdivided into matricial and textual) norms – and by
Chesterman (1997) – expectancy and professional norms (subdivided
into the accountability, the communication and the relation norms).
The reason for the choice of both categorisations is that they cover the
same areas but from a different perspective.
In this chapter the focus is on matricial and relation norms. The
former affect the textual segmentation of the linguistic material and
its distribution, or, expressed in the terminology of the practice of
subtitling, the spotting of the original script, that is its division into
‘chunks’ to be translated, and the cueing of the subtitles, that is the
designation of their in and out times. Relation norms on the other
hand stipulate that an appropriate relation of relevant similarity should
be established and maintained between the source and the target text,
where ‘equivalence’ or ‘optimal similarity’ is only one of the possible
kinds of relation. Other parameters covered include the addition or
omission of information, as well as the relation to accompanying channels, for example, synchronisation between speech and the emergence
of subtitles on screen.
Another aspect to be discussed concerns expectancy norms. Adapting
Chesterman’s (1997) definition to the case of subtitling, they reflect the
expectations that viewers of subtitled audiovisual programmes have with
regard to what the subtitled product should be like. They are formed by
the prevalent subtitling tradition in the target culture and by the previous
viewing of subtitled films. In this sense, it can be argued that the
expectancy norms in Spain and Greece are bound to differ, since the first
country is characterised by a tradition in dubbing and not subtitling.
The nature of audiovisual texts
Before proceeding to an analysis of the texts, it may be useful to examine
the nature of the audiovisual text, and the features that distinguish it
from other kinds of text.1 Characterised by its reception through two
channels, the acoustic and the visual, its other distinctive feature is the
importance of the nonverbal element. As pointed out by Zabalbeascoa
(1997) all texts contain some nonverbal elements, since the message
cannot be delivered without some sort of physical support. In a film,
however, the nonverbal elements, acoustic in the form of noises and
music, or visual such as images, appear to a much greater extent than in
written texts.
Still, not all texts containing these characteristics are under
discussion here, and more parameters have to be considered in order
for any study to be defined with precision. One of these parameters is
the medium: audiovisual texts appear on a screen whether big or
small. The fact that nowadays films can be viewed not only at the
cinema or on a television set but also on a computer screen, calls for
another delimiting parameter which will define audiovisual texts as
opposed to hypertexts, received through the same medium. The
images in a hypertext, such as a web page, can be static or moving,
whereas the audiovisual text always includes moving images. The difference between the two kinds of text when they include moving
images is that the latter contains a predetermined succession of
non-repetitive images in absolute synchronisation with the verbal
elements. Another differentiating factor is interactivity: in the case of
hypertexts the receivers decide the sequence of the elements, according to their needs, whereas the audiovisual text cannot be altered. The
only possibility of intervention on the part of the receiver is the case
of a videotape or a DVD where the viewer can backtrack or ‘move’
within the film. The features that distinguish the audiovisual text can
be recapitulated as follows:
● Reception through two channels: acoustic and visual.
● Significant presence of nonverbal elements.
● Synchronisation between verbal and nonverbal elements.
● Appearance on screen – reproducible material.
● Predetermined succession of moving images – recorded material.
These features condition the translation of the audiovisual text, and, as
a result, their consideration is fundamental for its study.
The combination of the acoustic and the visual channels, together
with the verbal and the nonverbal elements, results in four basic
elements of the audiovisual text: the acoustic verbal (dialogue), the
acoustic nonverbal (score, sounds), the visual nonverbal (image) and the
visual verbal element (subtitles).2 In this light, and for the purposes of
the present analysis, the set of subtitles is viewed as forming part of the
translated text, not as constituting a translation product by itself. In
Subtitling Norms in Greece and Spain
39
Figure 3.1 Spatial and temporal relationships between the basic elements of the
subtitled audiovisual text: the acoustic verbal (dialogue), the visual verbal
(subtitles), the acoustic nonverbal (music, sounds) and the visual nonverbal
(image)
other words, the original, untranslated film is considered to be the
source text and the subtitled film the target text.
The spatio-temporal relationships between the four above-mentioned
elements may be seen in Figure 3.1, where the solid arrows represent
existing relationships in an audiovisual text and the dashed arrows
represent the relationships established by the subtitler.3
In the source text, the temporal relationship that exists between the
images, the sounds and the dialogue is characterised by an inherent
synchronisation. Logically, the same quality is bound to be required
between the subtitles and the rest of the elements of the target text;
subtitles will be expected to appear more or less when a character begins
to speak and to disappear when s/he stops.
In search of norms
The texts under study comprise the subtitled versions in Spanish and
Greek of the films The English Patient,4 the 162-minute film directed by
Anthony Minghella in 1996, and Notting Hill,5 the 124-minute comedy
directed by Roger Michell in 1999. A first quantitative and comparative
analysis between the two sets of subtitles shows a significant difference
in the number of subtitles, as shown in Table 3.1.
Even though the above are the only films analysed in this chapter,
there is further evidence regarding the higher number of subtitles in the
Spanish versions compared to the Greek versions, as can be seen in the
analysis of the movies The Perfect Storm,6 the 129-minute adventure film
Table 3.1 Comparison between the numbers of subtitles in the films
The English Patient and Notting Hill

                       Greek sub/s   Spanish sub/s   Additional sub/s in Spanish
The English Patient        955           1358                 42.1%
Notting Hill              1052           1754                 66.7%
Table 3.2 Comparison between the numbers of subtitles in The
Perfect Storm and Manhattan Murder Mystery

                            Greek sub/s   Spanish sub/s   Additional sub/s in Spanish
The Perfect Storm               915           1396                 52.5%
Manhattan Murder Mystery       1449           1944                 34.1%
directed by Wolfgang Petersen in 2000, and Manhattan Murder Mystery,7
the 104-minute comedy directed by Woody Allen in 1993. This difference
in the number of subtitles can be attributed to two factors:
a. Difference in dialogue omission: where the Greek version omits the
translation of an acoustic verbal element, the Spanish version provides
a subtitle.
b. Difference in distribution: one Greek subtitle consisting of two lines
corresponds to two Spanish subtitles consisting of one line each.
The question to be answered at this point is: why are there 403
‘additional’ subtitles in the Spanish version of The English Patient and
702 in Notting Hill? In the first film, 48.9% (197 subtitles) of this
difference is due to the first factor mentioned above, that is, these
subtitles are present in Spanish but omitted in Greek. The remaining
51.1% of the difference (206 subtitles) is caused by the second factor,
that is, by the fact that two Spanish one-liners correspond to one Greek
two-liner. Accordingly, in Notting Hill, 64.9% (456 subtitles) of the
difference is attributed to the first factor and the remaining 35.1%
(246 subtitles) to the second.
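The arithmetic behind these figures can be reproduced in a short script (a sketch of my own, not part of the original study; the subtitle totals are those of Table 3.1, and the percentage is computed against the Greek total, which matches the printed figures except that The English Patient comes out at 42.2% rather than the chapter's 42.1%):

```python
# Subtitle totals per film, taken from Table 3.1: (Greek, Spanish).
totals = {
    "The English Patient": (955, 1358),
    "Notting Hill": (1052, 1754),
}

for film, (greek, spanish) in totals.items():
    extra = spanish - greek  # 'additional' subtitles in the Spanish version
    pct = 100 * extra / greek  # expressed relative to the Greek total
    print(f"{film}: {extra} additional Spanish subtitles ({pct:.1f}%)")
```

The same subtraction yields the 403 and 702 'additional' subtitles discussed in the next paragraph.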
Indeed, the analysis of the number of subtitles consisting of one or
two lines shows that there is a preference for two-liners in the Greek
and for one-liners in the Spanish versions, as shown in Figure 3.2.
Figure 3.2 Percentage of subtitles consisting of one-liners and two-liners.
Greek subtitles: The English Patient 32% one-liners, 68% two-liners; Notting
Hill 33% one-liners, 67% two-liners. Spanish subtitles: The English Patient
63% one-liners, 37% two-liners; Notting Hill 59% one-liners, 41% two-liners
The next step in the analysis is to find out the factors that determine
the choices related to the distribution and the omission of subtitles in
each language. Let us look at a couple of examples where a two-liner in
Greek corresponds to two one-liners in Spanish. The first one is from
The English Patient:
Example 1
Caravaggio: Apparently, we’re neighbours. My house is
two blocks from yours in Montreal.

Spanish:
Somos vecinos. Vivimos ...
a 2 manzanas de Montreal.

Greek: [Greek subtitle not reproduced]
It can be observed that in the Spanish version, a second subtitle is
introduced, even though both lines could fit in one subtitle. Assuming
that this choice is not random, what seems to determine the specific
decision by the subtitler is the existence of a cut, that is, a sudden
change from one image (a close-up of Hanna) to another (showing both
Hanna and Caravaggio). This norm operating in Spanish can also be
found in extra-textual sources, more specifically in articles written by
Spanish professionals like Castro Roig (2001: 280) who states that
‘whenever there is a cut, there must be a new subtitle’ (my translation),
a normative statement that is repeated by Leboreiro Enríquez and Poza
Yagüe (2001).
This is also the case in the example below from Notting Hill, where the
second speaker follows almost immediately after the first one. In Greek
there is one subtitle consisting of two lines, whereas the Spanish subtitler
has chosen to create two separate subtitles in order to respect the visual
nonverbal element, that is, the shot change:
Example 2
Will: Right, that’s it. Sorry.
Anna: No, there’s really no point.

Spanish:
Vale, se acabó.
No, no vale la pena.

Greek: [Greek subtitle not reproduced]
In many cases, the difference in lexical distribution is determined by
the influence (or absence of influence) of the acoustic nonverbal
elements, namely pauses in the characters’ speech. This can be seen in
the following example from Notting Hill where the dialogue is
characterised not only by many pauses in speech, but also by a high
number of false starts:
Example 3
Anna: Tempting, but ... no. Thank you.

Spanish (four subtitles):
Tentador, ...
... pero ...
... no.
Gracias.

Greek: [Greek subtitle not reproduced]
In the above example, we can see that there are four Spanish subtitles
when the Greek subtitler has decided to use only one. The distribution
of subtitles in Spanish seems to be determined by the pauses in Anna’s
speech, whereas in Greek it appears to be the direct consequence of the
norm which stipulates that, where possible, each subtitle must have a
complete meaning in itself. The extra-textual source for this norm
consists of a questionnaire directed to Greek professional subtitlers
(Sokoli, 2000). One of the outcomes of the analysis of the questionnaires
indicates that completeness of meaning in each subtitle is considered
one of the main characteristics of good subtitling. A technique often
used in order to indicate the existence of pauses is the use of triple dots,
as in the above example.
An observation that has to be made concerning the Spanish versions
is that there are quite a few examples where there is both a pause and
a cut and the subtitler has to choose between cueing the subtitle
according to one of the two. It has been found that in all these cases,
priority is given to the norm which requires synchronisation with the
acoustic element; in other words, the cueing in of the subtitle will be
done when the next part of the utterance begins and not when there
is a cut.
The next step in the analysis is to examine the difference in the choice
of omissions, more specifically, the cases where an acoustic verbal
element has an equivalent subtitle in the Spanish but no subtitle in the
Greek version. The following example is from Notting Hill:
Example 4
Anna: Hi.
William: Hello.
Anna: You disappeared.
William: Yeah ... yeah. I had to leave. I didn’t want to disturb you.

Spanish:
- Hola.
- Hola.
Desapareciste.
Sí, sí.
Tuve que irme,
no quería molestar.

Greek: [Greek subtitles not reproduced]
In the Greek version, even though there are no time or space constraints,
the greeting between the two characters and the almost inaudible
‘yeah ... yeah’ are omitted in the subtitles. These choices do not seem
to be random and the norm which possibly governs them can be found
in the results of the previously mentioned questionnaire: a high percentage of the Greek subtitlers involved stated that they omit utterances that they consider either easily recognisable by the Greek
audience (such as ‘OK’, ‘hello’, ‘yes’, ‘no’, etc., names, repetitions), or
not relevant to the plot (hospital or airport announcements,
songs, etc.). Moreover, it was pointed out that, despite the absence of
time or space constraints, such utterances are often omitted in order
for the viewers to have time to enjoy the image. These are utterances
which can be recovered by other elements of the audiovisual text:
● The acoustic verbal (recognisable utterances, names, etc.).
● The acoustic nonverbal (phatic elements, exclamations, etc.).
● The visual verbal (repetition found in other subtitles, etc.).
● The visual nonverbal (objects in the image, etc.).
● A combination of all or some of the above elements.
It is clearly difficult, if not impossible, to mark a clear-cut distinction
between the elements of the audiovisual text, since these are closely
interrelated. The argument here is that the omitted utterances are
mainly and not only recoverable from these elements. Another example
comes from The English Patient:
Example 5
Katharine: Stop! Here! Over here! Stop! Madox!
Almasy: Madox! Madox!

Spanish:
¡Estamos aquí!
¡Aquí!
¡Pare!
¡Madox!
¡Madox!
¡Madox!

Greek: [Greek subtitles not reproduced]
In the Spanish version, there seems to be a norm stipulating that there
must be as few omissions as possible, which is also extra-textually corroborated by Díaz Cintas (1997: 281). In an analysis of the subtitled
version of Manhattan Murder Mystery in Spanish he finds that there is a
tendency for what he calls sobretraducción [over-translation]. According
to this scholar, the specific phenomenon is explained by ‘the intention
that the viewer can have the feeling of not being cheated, and of having all the information contained in the original version’ (my translation). He provides examples similar to the ones above – such as Sí, sí,
¿Helen?, ¡Jack! – and considers these subtitles unnecessary for the
comprehension of the plot since they are of a purely phatic or vocative
nature. Moreover, according to the answers given by Spanish subtitlers
to the questionnaire discussed above, the spectator must not be left to
feel that there is any missing information, and consequently a subtitle
must appear every time an utterance is heard.
In order to study the relation between the acoustic and the visual
verbal elements, I have borrowed a relevant categorisation from the
discipline of Artificial Intelligence, relating to the basic possible
relationships between two intervals. According to Allen (1983) there are
13 such relationships, as indicated in Table 3.3.

Table 3.3 The 13 possible relationships (Allen, 1983)

Relation        Symbol   Symbol for inverse   Pictorial example
x before y      <        >                    xxxx    yyyy
x equal y       =        =                    xxxx
                                              yyyy
x meets y       m        mi                   xxxxyyyy
x overlaps y    o        oi                   xxxx
                                                 yyyy
x during y      d        di                     xxxx
                                              yyyyyyyy
x starts y      s        si                   xxxx
                                              yyyyyy
x finishes y    f        fi                     xxxx
                                              yyyyyy
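Allen's categorisation can be illustrated with a small classifier (a sketch of my own, not part of the chapter; intervals are (start, end) pairs in seconds, and the relation labels, including the inverse forms such as 'after' and 'met-by', follow Allen's terminology):

```python
def allen_relation(x, y):
    """Return the Allen (1983) relation holding between intervals x and y,
    each given as a (start, end) pair with start < end."""
    xs, xe = x
    ys, ye = y
    if xe < ys: return "x before y"
    if ye < xs: return "x after y"
    if xe == ys: return "x meets y"
    if ye == xs: return "x met-by y"
    if xs == ys and xe == ye: return "x equal y"
    if xs == ys: return "x starts y" if xe < ye else "x started-by y"
    if xe == ye: return "x finishes y" if xs > ys else "x finished-by y"
    if ys < xs and xe < ye: return "x during y"
    if xs < ys and ye < xe: return "x contains y"
    # Remaining cases: the intervals cross without sharing an endpoint.
    return "x overlaps y" if xs < ys else "x overlapped-by y"
```

With x the utterance and y the subtitle, an utterance of (0.0, 2.0) paired with a subtitle cued in at 0.3 and held to 2.5 is classified as 'x overlaps y', the pattern found below for the Greek versions.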
Let us now see which are the most frequent temporal relationships
between the utterances heard and the subtitles. In the Greek versions of
the films under analysis it has been found that the subtitles appear
some deciseconds (tenths of a second) after the start of the utterance.
In the Spanish versions, on the other hand, the subtitles appear exactly
when the utterance begins and finish either when the utterance finishes
or some deciseconds after it has finished. If Allen’s categorisation is
used, where x represents the duration of the utterance and y the duration
of the subtitle, we have the following most frequent temporal relations
between the two:
● Greek: ‘x overlaps y’ (o)
● Spanish: ‘x equal y’ (=) and ‘x starts y’ (s)
As far as the Spanish versions are concerned, the choice for the cueing
of the subtitles seems to be governed by the requirement for
synchronisation. Naturally, even though it is almost always possible to
cue the subtitle in at the beginning of the utterance, the same does not
hold for the cueing out, because the specific duration might not be
enough for the subtitle to be read. This may explain why both relationships are frequent in Spanish.
A similar assumption of the norm cannot be made for the Greek
versions. This is because I have not encountered any extra-textual
sources verifying the existence of a norm requiring that subtitles must
be inserted some frames after the utterance has begun, despite the fact
that the specific relationship is the one most frequently met. The
regularity of this relationship could be attributed to certain technical
limitations that existed in the past, such as the necessity for manual
insertion of subtitles (Sokoli, 2000). Despite the fact that these technical
limitations have now been overcome, the lack of a norm requiring
absolute synchronisation seems to have led to a confirmation of this
practice. Thus, it could possibly be included among the cases that
Chesterman (1993: 4) describes as ‘behavioural regularities [which] are
accepted (in a given community) as being models or standards of
desired behaviour’.
Conclusion
The above quantitative and qualitative analysis shows that there are
indications of the presence of norms in the subtitling practices in
Greece, as well as Spain. In particular, there are indications of the operation of the following norms:
● Matricial norms: in Spain, the distribution of subtitles is determined
by the acoustic nonverbal (pauses) and the visual nonverbal (cuts)
elements. When there is a conflict between the requirements for
synchronisation with the acoustic and the synchronisation with the
visual element, synchronisation with the acoustic element prevails. In
Greece, the distribution of subtitles is determined by the requirement
for completeness of meaning within the same subtitle and the preference
for subtitles consisting of two lines rather than one.
● Relation norms: in Spain, the choice dictating omission is determined
by the requirement of ‘equality’ between the acoustic verbal and the
visual verbal element. As a result, the omitted utterances are as few as
possible. Moreover, there is an effort to avoid disproportion between
the duration of the dialogue and the subtitles. In Greece, the choice
for omission is determined by the utterance’s recoverability from the
other elements of the audiovisual text. Recoverable elements are often
omitted even in the absence of time and space constraints.
Again, it has to be stressed that these are only indications of the presence
of norms and that more films have to be analysed in order for them to
be verified. Future research could include the comparison of each set of
subtitles with the original dialogue, with the aim to find the degree of
language compression, since the only comparison made in the present
study has been between the two subtitled versions of each film.
Moreover, other films to be studied could belong to different genres (for
example action films, documentaries) or have stricter time and space
constraints (for example films with fast dialogue), with the aim to
explore the way the above norms are adapted. Another possible line of
research involves investigation of the reception of subtitled audiovisual
programmes. For example, a subtitled film translated according to the
Spanish norms could be shown to a Greek audience, or vice versa, and
their response could be examined in order for expectancy norms to be
discovered.
Given the qualitative and descriptive nature of the present discussion,
and in the absence of laws or absolute truths, possible explanations
have been proposed and more hypotheses have been put forward,
which in turn may serve as a basis for future research. The results
could be used not only for the explanation and prediction of the way
subtitles are manifested, but also in a programme of training for
subtitlers.
Notes
1. The definition of text used here is the one established by de Beaugrande and
Dressler (1981), according to which a text has to meet seven standards of
textuality: cohesion, coherence, intentionality, acceptability, informativity,
situationality and intertextuality. Hence, texts are not only written utterances, in spite of the connotation this word has in everyday language. This
definition also accounts for spoken utterances, as well as for television and
cinema programmes, our present object of study.
2. Delabastita (1990: 101–2) refers to these as ‘four types of film sign: verbal
signs transmitted acoustically (dialogue), nonverbal signs transmitted acoustically (background noise, music), verbal signs transmitted visually (credits,
letters, documents shown on the screen), nonverbal signs transmitted
visually’.
3. The term ‘subtitler’, as used here, does not necessarily refer to one person but
may represent the team of people involved in the subtitling process.
Depending on the practice, the same person is sometimes responsible for all
the steps, whereas in other cases, the spotting and the translation are done by
different people (see Georgakopoulou, this volume).
4. The set of Greek subtitles broadcast on TV is the same as the one appearing
in the VHS version (no information was available about the cinema or DVD
version). As for the Spanish version, only the VHS was available.
5. In Greece, this film had the same set of subtitles for TV, VHS and DVD (no
information was available about the cinema version). In Spain, interestingly
enough, there are four different sets of subtitles for each medium and the one
used for this analysis is the TV version.
6. The TV version has been used for both languages.
7. The VHS version has been used for both languages. See Díaz Cintas (1997) for
more details about the subtitling of this film into Spanish.
References
Allen, J. (1983) ‘Maintaining knowledge about temporal intervals’. Communications
of the ACM 26(11): 832–43.
de Beaugrande, R. and Dressler, W. (1981) Introduction to Text Linguistics. London:
Longman.
Castro Roig, X. (2001) ‘El traductor de películas’. In M. Duro (ed.) La traducción
para el doblaje y la subtitulación (pp. 267–98). Madrid: Cátedra.
Chesterman, A. (1993) ‘From “is” to “ought”: Laws, norms and strategies in
Translation Studies’. Target 5(1): 1–20.
Chesterman, A. (1997) Memes of Translation. The Spread of Ideas in Translation
Theory. Amsterdam and Philadelphia: John Benjamins.
Delabastita, D. (1990) ‘Translation and the mass media’. In S. Bassnett and
A. Lefevere (eds) Translation, History and Culture (pp. 97–109). London: Pinter.
Díaz Cintas, J. (1997) El subtitulado en tanto que modalidad de traducción
fílmica dentro del marco teórico de los Estudios sobre Traducción (Misterioso
asesinato en Manhattan, Woody Allen, 1993). Valencia: Universidad de
Valencia. PhD Thesis.
Hermans, T. (1999) Translation in Systems: Descriptive and Systemic Approaches
Explained. Manchester: St Jerome.
Leboreiro Enríquez, F. and Poza Yagüe, J. (2001) ‘Subtitular: toda una ciencia ...
y todo un arte’. In M. Duro (ed.) La traducción para el doblaje y la subtitulación
(pp. 315–23). Madrid: Cátedra.
Sokoli, S. (2000) Research Issues in Audiovisual Translation: Aspects of Subtitling
in Greece. Barcelona: Universitat Autònoma de Barcelona. MA Dissertation.
Toury, G. (1995) Descriptive Translation Studies and Beyond. Amsterdam and
Philadelphia: John Benjamins.
Zabalbeascoa, P. (1997) ‘Dubbing and the nonverbal dimension of translation’.
In F. Poyatos (ed.) Nonverbal Communication and Translation (pp. 327–42).
Amsterdam and Philadelphia: John Benjamins.
4
Amateur Subtitling on the Internet
Łukasz Bogucki
Introduction
The traditional understanding of the audiovisual translation mode
known as subtitling is that it is intended primarily for cinema and
television use, with the help of a visual component in the form of a
(video) recording and the final programme script of the original, except
perhaps for instantaneous, live subtitles (Kilborn, 1993). The ubiquity
of the Internet, however, has given rise to a new kind of AVT which I
refer to here as ‘amateur subtitling’. Amateur subtitling is not unrelated
to fansubs (www.fansubs.net/fsw/general), subtitles of various Japanese
anime productions made unofficially by fans for non-Japanese viewers.
Despite their dubious legal status, fansubs have been in existence since
the late 1980s (O’Hagan, 2003). The rationale behind the decision to
undertake the translation in the form of fansubs and amateur subtitling
is largely the same: to make a contribution in an area of particular interest and to popularise it in other countries, making it accessible to a
broader range of viewers/readers, who belong to different linguistic
communities.
Amateur subtitle producers, a term used here in place of ‘translator’ or
‘subtitler’ as the product under discussion does not qualify as fully-fledged subtitling, typically work with a recording of the original, but
they have no access to the post-production script. They rarely work with
classics, as the intention behind their work is to make local viewers
familiar with recent film productions. The quality of their product is
thus conditioned by how much they understand of the original.
Comprehension is essential in any kind of translation, yet understanding the original for amateur subtitling purposes differs from the kind of
source text analysis performed in ‘paper’ translation. The common
ground is that the original can be listened to as many times as the
subtitle producer deems necessary, but certain fragments may be unclear,
made unintelligible by background noises, or otherwise irretrievable as
in, for example, interpreting.
Amateur subtitlers have at their disposal a range of freeware or shareware computer programs to create subtitles (e.g. Sub Station Alpha,
www.topdownloads.net/software/view.php?id=9740) and to superimpose them on the film (e.g. Virtual Dub, www.virtualdub.org). Inserting
subtitles in the film or programme is very easy even for relatively inexperienced users. In amateur subtitling there is no strict limit as to the
number of lines per subtitle, but experienced subtitle producers realise
that human perception is not boundless; as cinema-viewers themselves,
they subconsciously tend to apply the conventional limit of two lines
per subtitle. Three-line subtitles are used in rare cases where the pace of
the dialogue is so fast that two-liners would have to be displayed very
quickly one after the other. Nor are there any strict restrictions as
regards the number of characters per line.1 There are programs that trim
the number of characters per line to a certain limit, but we have yet to
see a standard emerge.2 As font size is a user-defined variable, longer
subtitles sometimes occupy too much screen space if large fonts are
selected. As a result, subtitle producers will adjust their work to the individual requirements of a given computer application. Amateur subtitling
is also subject to technical constraints, but it has to be stressed that the
limitations at work as regards unprofessional amateur subtitling are
unlike the ones that traditionally apply to professional subtitling (see
Karamitroglou, 1998 for a description of subtitling standards). There is
far more freedom and, as a consequence, technical constraints do not
normally impinge on the resulting product to the extent that strategies
of curtailment such as decimation, condensation, etc. (Gottlieb, 1992;
Bogucki, 2004) have to be resorted to. The problem with amateur
subtitling lies not so much in squeezing the gist of what the original
characters say into 30 or so characters per line to enable the audience to
appreciate the filmic message without too much effort; the problem, it
seems, lies mostly in the quality of the source material and the competence and expertise of the translator.
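A line-trimming routine of the kind mentioned above can be sketched as follows (an illustration of my own, not any of the actual programs; the 37-character and two-line limits are conventional professional values assumed here for the example, not amateur standards):

```python
import textwrap

def distribute_subtitle(text, max_chars=37, max_lines=2):
    """Break a stretch of dialogue into subtitle lines of at most
    max_chars characters each.  Returns (lines, fits): fits is False
    when the text spills over max_lines, i.e. when the wording would
    have to be condensed rather than simply re-broken."""
    lines = textwrap.wrap(text, width=max_chars)
    return lines, len(lines) <= max_lines

# Example (A) below breaks comfortably into a conventional two-liner.
lines, fits = distribute_subtitle(
    "You know Bilbo - he's got the whole place in an uproar.")
```

Amateur tools differ mainly in that max_chars and max_lines are left to the user (or ignored altogether), which is why no standard has yet emerged.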
Text files with amateur subtitles for many recent cinema releases are
relatively easy to find on the Internet.3 The examples in this section
come from an amateur translation made by Mirosław Jaworski, with
permission from the subtitle producer. The film under discussion is
Peter Jackson’s The Fellowship of the Ring (2001). Jaworski made great
efforts to become the first amateur to subtitle this particular release.
He posted his translation in the form of a text file on the WWW about
six weeks before the film was actually shown in cinemas in Poland,
and approximately at the same time as it was released in the USA. Not
only was he pressed for time not to be outdone, but also his ‘source
text’ (the inverted commas being very much in place here) was what
he had managed to take down from a camcorder recording of the original cinema release. As a result, he had to cope with very poor quality
and the translation required the meticulous effort of listening to the
original scores of times. He was left to his own devices, as most
amateur subtitle producers are, and had to rely on his own language
competence.
Let us now look at some illustrative excerpts to see how comprehension problems can impinge negatively on translation quality.
Error analysis
As we shall see presently, the term ‘error analysis’, generally avoided
in the literature (quality assessment, after all, is not merely about
error-hunting), can be used to discuss amateur subtitling quality.
(A) ST: You know Bilbo – he’s got the whole place in an uproar.
TT: -Znasz Bilba, całe miejsce zamienił w altanę.
[You know Bilbo. He’s turned the whole place into an arbour.]
The visual image in this instance is a rural, peaceful landscape.
(B) ST: All I did was give your uncle a little nudge out of the door.
TT: Wszystko, co zrobiłem, to dałem twojemu wujowi parę porad u
drzwi.
[All I did was give your uncle some advice at the door.]
The problematic words in these two examples are ‘uproar’ and ‘nudge’.
From the point of view of English learners (and Jaworski is a confessed
self-learner), these terms are normally learned at later stages of
linguistic development, so they may have been new to the subtitle
producer. Had he had the opportunity of seeing them in writing, he
would no doubt have looked them up and may have come up with a more
accurate version. Let us look at another example:
(C) ST: The Shadow does not hold sway yet.
TT: Cień nie ustabilizował się jeszcze.
[The Shadow has not set in yet.]
The expression ‘to hold sway’ may be said to have similar features to
‘nudge’ or ‘uproar’ in that it is likely to be unfamiliar to a speaker of
English as a second language with a limited lexicon; these words do
not typically occur in textbooks for beginners or intermediate students
of English. However, in this case it can actually be seen rather than
heard, as the original sentence appears as an on-screen insert in Elvish,
and the example quoted above is the translation into English that
appears as a subtitle. In all probability, the subtitle producer looked up
the unfamiliar word in a dictionary and was able to offer an accurate
translation.
Another type of error results from inadequate linguistic competence
coupled with the specificity of cross-medium translation, that is,
having to go from the spoken mode of the original to the written mode of the
translation:
(D) ST: ... friends of old.
TT: ... przyjaciele Starych.
[ ... friends of (the) Old.]
This short excerpt comes from the Council of Elrond where noble
representatives of different races meet to discuss a serious issue. The
original is merely a polite form of address suggesting that the characters
have known each other for some time. The Polish translation may be
implying that the speaker is addressing the friends of the old race of
Elves. The mistake might never have been made but for the change in
medium from spoken to written. If the subtitle producer had seen the
dialogue in writing, he would have noticed the lower case where he was
expecting a capital letter, and would have thought twice about the
appropriateness of his rendition. Similarly:
(E) ST: ... festering, stinking marshlands as far as the eye can see.
TT: -Obrzydliwe, śmierdzące ... Tam, dokąd sięga Oko!
[ ... disgusting, stinking ... Where the Eye reaches!]
‘The Eye’ is used throughout the film as a metonym for Sauron, the
Great Enemy. It would make sense in the context to interpret this fragment as ‘as far as the Eye can see’, as Sauron could scan the territory
around his tower, looking for enemies. Again, had the subtitle producer had access to the written script, he might not have made the
mistake.
In (F) the producer may have heard only the key word (‘arrangements’)
and produced a subtitle where the message is not entirely off the mark,
but the version nevertheless remains little more than a lexical addition
in terms of translation.
(F) ST: All the arrangements are made.
TT: Tylko kilka umówionych spotkań.
[Just a few appointments.]
In natural speech the beginnings of sentences are often unstressed,
sometimes hardly heard. In the following example, had the subtitle
producer heard the subject of the sentence and not, apparently, mistaken ‘he’s’ for ‘it’s’, the subtitle might have been nearer the mark. As it
is, the meaning is completely changed:
(G)
ST: (He’s) very fond of you.
TT: To bardzo naiwne z twojej strony.
[It’s very silly of you.]
Interestingly, Jaworski may have been aware of this kind of ellipsis. In
the example below, he clearly appears to have thought Gandalf said ‘a
few’ rather than ‘few’; since in natural speech short words can easily be
missed, he may have assumed that the article was simply inaudible:
(H) ST: (- I can’t read it.) - There are few who can.
TT: Są tacy którzy umieją.
[There are some who can.]
On more than one occasion, the subtitle producer must have misheard
the original, due to a combination of inadequate language skills and
poor recording quality. As a result, his translations are often wide of
the mark:
(I)
ST: You will keep an eye on Frodo, won’t you?
Two eyes. As often as I can spare them.
TT: Będziesz miał oko na Froda?
Nawet parę. Tak często jak tylko będę je mógł wyczarować.
[( ... ) as often as I can conjure them up.]
54
Łukasz Bogucki
Here Jaworski must have misheard the verb ‘spare’ thinking it was ‘spell’,
and given that the sender of the message is a wizard, interpreted ‘spell’
as wyczarować [conjure up]. The same situation can be found in the
three examples below:
(J)
ST: Your love of the Halflings’ leaf has clearly slowed your mind.
TT: Twoje uwielbienie życia hobbitów spowolniło twój umysł.
[Your love of the Halflings’ lifestyle has slowed your mind – ‘leaf’
understood as ‘life’.]
(K) ST: The friendship of Saruman is not lightly thrown aside.
TT: Przyjaźń Sarumana nie jest już po naszej stronie.
[The friendship of Saruman is no longer on our side – ‘aside’ for
‘side’.]
(L) ST: One ill turn deserves another. It is over! (understood as ‘the
ring returns to serve another’.)
TT: Kiedy Pierścień powróci by służyć innemu, wszystko będzie
skończone!
[When the Ring returns to serve another, everything will be over!]
Below is another interesting error. During the conversation that takes
place between Gandalf and Saruman, the subtitle producer may have
misheard the name uttered (‘Sauron’) thinking the wizard was addressing his interlocutor:
(M) ST: But we still have time, time enough to counter Sauron if we act
quickly.
TT: Mamy jeszcze czas, Sarumanie, jeżeli będziemy działać szybko.
[We still have time, Saruman, if we act quickly.]
This also shows that amateur subtitle producers may be unfamiliar with
subtitling standards developed for professional translation purposes
(Karamitroglou, 1998). When time and space limitations are stringent,
if a character addresses another one by name in the original dialogue
(and both are seen on screen), the subtitler leaves out the name, as the
context makes it clear who is being addressed. Had Jaworski known this
rule of thumb, he would probably have ignored the name (we are assuming here that he thought it was ‘Saruman’, not ‘Sauron’).
However, unwarranted reliance on visual stimuli or excessive
context-dependence may result in translations which are impossible to
understand. In the following example, Galadriel, Queen of the Elves,
greets Frodo at her abode with the following words:
(N) ST: Welcome, Frodo of Shire – one who has seen The Eye.
Before she mentions ‘The Eye’, which in the context refers to Sauron,
the Great Enemy depicted throughout the film as an evil eye, the camera shows her own eyes in close-up. This visual context may have misled the translator:
TT: Witaj Frodo z Shire’u ... Co widzisz w moich oczach?
[Welcome Frodo of Shire ... What do you see in my eyes?]
On another occasion, two characters are talking about Gandalf, the fellowship’s leader, who has fallen to his death in the mines of Moria:
(O) ST: For me the grief is still too near.
TT: Nie powinniśmy go jeszcze opłakiwać.
[We shouldn’t mourn him yet.]
The subtitle producer might have opted for another translation had he
not been over-influenced by the book. In Part Two (which Jaworski
could not have seen on screen at the time of the TT production, but
might have read in book form) Gandalf returns; it turns out that he has
survived his misadventure. This is what the translation implies but in
fact it is inappropriate, as the original makes no such implication. The
intended message of the original is merely that Gandalf’s fall is still a
fresh memory for the fellowship.
On balance, it can be deduced that the process of non-professional
amateur subtitling by someone with a limited knowledge of the source
language tends to result in erroneous, infelicitous or, at best, chance
decisions, due to the following factors:
● Inability to identify less commonly used words that fall outside the
  domain of basic vocabulary (examples A and B).
● Failure to comprehend complete utterances; taking translational
  decisions on the basis of minimally heard and understood information
  (example F).
● Misinterpreting ellipsis (examples G and H).
● Misunderstanding single lexemes (examples I, J, K) and longer
  stretches of connected speech (example L).
● Excessive reliance on context (examples N and O).
Conclusion
As illustrated by this discussion the quality of the translation of films
differs greatly between amateur and professional subtitling. While in
both cases the outcome, to a substantial extent, is conditioned by other
elements of the filmic message, the differences are too significant to
attempt to apply whatever conclusions seem valid for one type to the
other, especially as the product of amateur subtitling tends to be marred
by translational error due to the translator’s lack of linguistic competence in the SL, incomplete source texts, or both.
Is there then a future for this type of home-made subtitling? As freeware computer programs are widely available, technically speaking anyone owning a PC and having access to the Internet and a copy of a film
can make their own subtitles and post them on the web. But why should
they do so in the first place? In the case of the translation discussed here,
the rationale behind the decision to translate the film was two-fold: to
make a name for himself by being second to none in subtitling a release as popular as
The Lord of the Rings, and to enable the Internet community to watch
(illegally copied) films in a language other than English. Leaving aside
the – otherwise important – question of the legal aspect, the subject is
discussed here with an eye to the future; if Internet subtitlers were to get
access to official scripts rather than what they manage to take down
from poor quality recordings, and if the subtitled films were to be made
legally and officially available (possibly for a fee), the resulting data could
be compared and contrasted with professional cinema productions. It
might then be interesting to research the linguistic implications of in-house regulations issued by film and translation companies vis-à-vis the
technical restrictions of computer video players.
Do amateur subtitlers pose a threat to professional subtitlers? This
question cannot be fully addressed until we have a working answer to
the question of whether movies available on the Internet can compete
against the big screen and DVDs. It seems, however, that there will
always be a market for both cinema productions and digitally recorded
copies of films for home use. Hence, it would seem justified to assume
that cinema/DVD professional subtitling will continue to exist alongside ‘home-made’ captions. The latter, however, will have to become
more professional in every respect. If amateur subtitling ceases to be
‘amateur’, in other words if the source text is not taken down from a
poor quality recording but made available to the subtitle producer prior
to producing the subtitles, the resulting target text can be subject to
translation quality assessment and compared to professional cinema
subtitling. Then – and only then – can it be studied by academics and
scholars. If, however, amateur subtitling continues to be done on the
basis of incomplete information, it will necessarily also be imperfect
and not available to academic study due to its high degree of unpredictability. As a result, this contribution is merely a prolegomenon to a more
thorough study of amateur subtitling, which can only be undertaken if
amateur captions are produced under conditions comparable with those
of professional subtitling.
Notes
1. Some computer programs (movie players capable of showing subtitles and
amateur subtitling software) limit the number of characters per line of subtitle, but as the limit can be 60,000 or more characters, for the purpose of this
discussion it can safely be ignored, as it does not impinge on the translation
process, no quantitative constraints being applicable.
2. Mirosław Jaworski, personal communication.
3. For Polish subtitles, see www.napisy.prv.pl. Films cited in my examples are
The Lord of the Rings: The Fellowship of the Ring, Peter Jackson, New Zealand/
United States, Warner Bros 2001, and Władca Pierścieni: Drużyna Pierścienia,
Warner Bros Polska, 2002, Polish subtitles by Elżbieta Gałązka-Salamon.
References
Bogucki, Ł. (2004) A Relevance Framework for Constraints on Cinema Subtitling.
Łódź: Wydawnictwo Uniwersytetu Łódzkiego.
Gottlieb, H. (1992) ‘Subtitling – a new university discipline’. In C. Dollerup and
A. Loddegaard (eds) Teaching Translation and Interpreting (pp. 161–70).
Amsterdam and Philadelphia: John Benjamins.
Karamitroglou, F. (1998) ‘A proposed set of subtitling standards in Europe’.
Translation Journal 2 (2). http://accurapid.com/journal/04stndrd.htm
Kilborn, R. (1993) ‘Speak my language: Current attitudes to television subtitling
and dubbing’. Media, Culture & Society 15(4): 641–60.
O’Hagan, M. (2003) ‘Middle Earth poses challenges to Japanese subtitling’.
LISA Newsletter XII. www.lisa.org/archive_domain/newsletters/2003/1.5/
ohagan.html
5
The Art and Craft of
Opera Surtitling
Jonathan Burton
Introduction
While subtitles are translated text displayed below the image, as on a
cinema or television screen, surtitles are displayed above the stage, in
live opera or theatre performances (some opera companies refer to these
as ‘supertitles’). Subtitling and surtitling involve differing requirements
and techniques. In this chapter both approaches to the translation of
operatic texts will be discussed.
History of subtitling and surtitling for opera
Subtitles for opera on film have been around almost as long as cinema
itself, since the early years of the twentieth century (Ivarsson, 2002).
On television, the first subtitles for opera in the early 1970s consisted of
a series of caption boards placed in front of a camera and superimposed
on the television picture. This cumbersome arrangement was superseded by experiments with automated electric typewriters, and eventually by the familiar electronic systems in use today, including teletext,
which was used on the short-lived LaserDisc format. For video and DVD,
titles can be cued to timecode with an accuracy of a single frame (1/25
of a second), and software can provide useful information, such as
whether a title is flashed up too quickly to be read at a specified reading
speed; this is important in opera, as the pace of sung text can be much
faster, or slower, than the speed of normal conversation.
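The kind of reading-speed check described above can be sketched in a few lines. This is an illustrative sketch only: the 25 fps rate matches the 1/25-second frame accuracy mentioned in the text, but the reading speed of 12 characters per second and the function names are assumed example values, not taken from any particular titling package.

```python
FPS = 25  # frames per second; one frame = 1/25 of a second

def frames_to_seconds(frames: int) -> float:
    """Convert a cue point expressed in frames to seconds."""
    return frames / FPS

def flashed_too_quickly(text: str, in_frame: int, out_frame: int,
                        chars_per_second: float = 12.0) -> bool:
    """Return True if the title disappears before it can be read
    at the specified reading speed."""
    duration = frames_to_seconds(out_frame - in_frame)
    return len(text) > duration * chars_per_second

title = "My heart is breaking"           # 20 characters
print(flashed_too_quickly(title, 0, 50))  # 2 seconds on screen: readable
print(flashed_too_quickly(title, 0, 25))  # 1 second on screen: too quick
```

A real titling package would also account for line breaks and shot changes; the point here is simply that the warning described in the text reduces to comparing title length against display duration times reading speed.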
The history of surtitles is not well documented. Reputedly, the first
live titles in an opera house were in Hong Kong in the early 1980s but
did not fit the definition of either sub- or surtitles, as they were in
Chinese and therefore displayed vertically at the side of the stage.
English surtitles were first used in Canada in about 1984. In Britain,
they appeared experimentally at the Royal Opera House (ROH), Covent
Garden, in 1986, and were soon taken up by other organisations such as
the Welsh National Opera, Glyndebourne Festival Opera, Scottish
Opera and various touring companies, Opera North in Leeds being a
recent convert. English National Opera resisted the use of surtitles for
many years but decided to introduce them during the 2005–6 season in
an attempt to bring opera to the widest audience.1
Hardware and software for surtitles
Various systems for surtitling are currently in use, with no widely or
internationally agreed ‘standard’ system; each type of surtitling package
has advantages and disadvantages. Titles can be projected on a screen,
either from slides or electronically via a digital projector, or with presentation software such as PowerPoint. There are various types of LED
screens, that is, a matrix of illuminated dots, some of which have refinements such as variable brightness, fading, and colour options. Most
recently there are ‘seatback’ screens, like those found in airliners, which
display the titles on a small screen below eye-level on the back of the seat
in front of the user. These were pioneered at the Metropolitan Opera in
New York, and are now used in Vienna, Barcelona and several other opera
houses around the world. They have the advantages of multi-language
possibility, directionality so that the titles can be visible only to the individual user, and, perhaps most importantly, the option to be switched off
if the audience member does not wish to have titles at all.
The Royal Opera House in London has used a variety of systems, starting with Kodak carousels each containing a limited number of 35mm
slides. Some of the disadvantages of this system are that the slides are
prepared in advance by an outside supplier, so re-editing of the text is
impossible, and that even with three projectors and three carousels, only
a limited number of titles can be used in each act of an opera, before the
carousels have to be changed. Another drawback is that with such a cumbersome system, the possibility of something going wrong in performance is considerable. This system was superseded by a FOCON LED
matrix screen, which meant a vast improvement in simplicity and reliability of use, flexibility of editing, and the capacity for an unlimited
number of titles, though font style and text layout are restricted. FOCON
screens are no longer manufactured, but they are still widely used, particularly by touring companies. The Royal Opera House now uses a custom-tailored digital system (Diamond Credit ROH, by Courtyard
Electronics Limited), which allows the projection of electronically generated
titles via a digital video projector on to a suspended screen above the
proscenium. This is backed up by a limited number of seatback screens,
relaying the titles to areas at the back and sides of the auditorium which
do not have a view of the main surtitle screen.
Texts are prepared and edited on PCs in the Surtitle Office elsewhere
in the building, and then downloaded via the company’s internal IT
network to the computer in the surtitle operators’ box, situated behind
a window at the back of the balcony. During the performance the
surtitle operator cues the titles from a marked-up musical score. The
output signal is relayed to the digital projector, high up at the back of
the amphitheatre, and via a separate computerised system to the
seatback screens. The Diamond Credit software has great flexibility in
the programming of parameters such as luminance and speed of fade
(in and out) for each title, and individual letter spacing (kerning) for
legibility – an important consideration when the projected letters are
30 cm high. However, at present, there is no facility for coloured text
and the system can only show white lettering on a dark screen. There is
of course no multi-language capability and only one language can be
projected. If the seatback installation were extended to cover all the
seats in the auditorium, this would become a possibility, although
the logistical problems would be great.
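The per-title parameters described in this section might be modelled along the following lines. This is a hypothetical sketch for illustration only; the class name, field names and default values are invented and do not reflect the actual Diamond Credit software or its data format.

```python
from dataclasses import dataclass

# Hypothetical model of per-title display parameters of the kind
# discussed above (luminance, speed of fade in and out, kerning).
# All names and values are invented for illustration.

@dataclass
class SurtitleDisplay:
    text: str
    luminance: float = 0.8    # relative brightness, 0.0 to 1.0
    fade_in_ms: int = 300     # speed of fade in, in milliseconds
    fade_out_ms: int = 300    # speed of fade out, in milliseconds
    kerning_pt: float = 0.0   # extra letter spacing, for legibility

# A dimmer title with a slow fade-in and wider letter spacing,
# to keep the 30 cm projected letters legible:
title = SurtitleDisplay("My heart is breaking",
                        luminance=0.6, fade_in_ms=800, kerning_pt=1.5)
print(title.fade_in_ms)
```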
Why surtitles?
In the early days of surtitling, there was much debate as to whether
surtitles were necessary or desirable. Opera critics and stage directors
tended to be opposed to them, with audiences mostly in favour. That
battle has now largely been won, with only a few critics and directors
resolutely against the idea. In the eighteenth century – before electric
lighting with dimmers – the more well-heeled members of the opera
audience would have invested in a printed libretto with translation,
which they would follow during the performance with the house lights
up; this is perhaps the nearest equivalent to modern surtitling. An early
argument in favour of some form of translation as an aid to comprehension was amusingly set out by Joseph Addison, writing in The Spectator
around 1712 (quoted in Marek, 1957: 567–9):
It is my design in this paper to deliver down to posterity a faithful
account of the Italian Opera, and of the gradual progress which it has
made upon the English stage: For there is no question but our
great-grandchildren will be very curious to know the reason why
their forefathers used to sit together like an audience of foreigners in
their own country, and to hear whole plays acted before them in a
tongue which they did not understand [ ... ]. We no longer understand the language of our own stage; insomuch that I have often
been afraid, when I have seen our Italian performers chattering in
the vehemence of action, that they have been calling us names, and
abusing us among themselves; but I hope, since we do put such an
entire confidence in them, they will not talk against us before our
faces, though they may do it with the same safety as if it were behind
our backs. In the meantime I cannot forbear thinking how naturally
an historian, who writes two or three hundred years hence, and does
not know the taste of his wise forefathers, will make the following
reflection, ‘In the beginning of the eighteenth century, the Italian
tongue was so well understood in England, that operas were acted on
the public stage in that language’.
Our approach to watching opera has changed in recent decades and we
are no longer content just to appreciate the lovely sound of the voices
and let the opera wash over us. We are now a text-dominated society,
and audiences expect to know in detail what words are being sung, as
they would for the dialogue in a subtitled foreign film. No longer do we
sit in the dark for hours at a time, listening to whole acts of Wagner or
Richard Strauss with only the dimmest idea of what is actually going
on. Surtitles are now largely a necessity and there are likely to be complaints if they are not provided. The approach of opera directors has
changed too. With surtitles, directors know that we will be aware of the
meaning of what is being sung, and they will no longer be tempted to
fill the stage with superfluous action or comic elements just to keep our
attention from wandering.
David Pountney, former Director of Productions at English National
Opera, one notable director still opposed to surtitling, has famously
compared surtitles to ‘a prophylactic between the opera and the audience’. But what are the alternatives? We cannot now sit in the dark and
try to follow a printed libretto. Critics hope that we will do our homework, and study the opera text in advance before coming to the theatre;
but it would hardly be possible to remember all the words in detail at
each point in the action. One alternative is to sing the opera in
translation, in the language of the audience, a viable policy followed at
English National Opera and until recently at least in many German and
Italian opera houses. However, it raises many problems, specifically the
difficulty of arriving at a singing translation which follows the composer’s
musical line and is comfortable to sing with, for instance, appropriate
vowels on high notes, while remaining an accurate rendering of the
original text. These questions are fascinating, but beyond the scope of
this chapter.
One point of contention is whether or not to surtitle an opera in the
language in which it is being sung. The Royal Opera, as a matter of
company policy, now provides English surtitles for operas sung in
English, for the benefit of members of the audience who rely on surtitles
because of hearing problems, or for whom English is a foreign language;
but there is still some opposition to this, particularly from singers and
directors, who feel that it is an insult to the singers and their clarity of
diction. In the case of Britten’s The Turn of the Screw, performed at the
ROH in 2002, Deborah Warner, the director, requested that English
surtitles should not be used at the first few performances. On the
opening night there were 47 complaints from the audience about the
lack of surtitles, but none at all when these were subsequently reinstated. Conversely, some composers and directors specifically ask for
English surtitles and two examples are Harrison Birtwistle for his opera
Gawain and John Adams for El Niño, both performed at the ROH in
2000. Even for older works in English, such as Handel’s Semele, it may
be a good idea to provide surtitles, since the poetic, archaic and
sometimes convoluted language of the libretto may be difficult to
disentangle by ear but comprehensible when laid out on the screen.
The aim of surtitles
The aim of surtitles is to convey the meaning of what is being sung, not
necessarily the manner in which it is being sung. Interjections such as
‘Oh!’, ‘Ah!’ or ‘Ye Gods!’, and musical repetitions of words and phrases,
need not be included. Flowery or poetic turns of phrase may be
simplified. Punctuation can be kept simple or omitted, in particular
exclamation marks; we can see the singers ‘emoting’ on stage, so the
titles do not have to do it for us. For example, a fictitious line of Italian
operatic text might be literally translated as:2
Ah! How this poor heart is bursting within my bosom!
For surtitles, a simplified rendering is preferable:
My heart is breaking
It may be necessary to clarify the plot, for example tactfully adding a
character’s name if it is not clear who is referred to. Sometimes it may
help to expand details of the action that may not be clear to a watching
audience such as ‘I am putting poison into this jug of water’. However,
the surtitler must remember that the audience has come to see the
opera, not the surtitles, and the titles should be discreet and not distracting. Titles may be omitted where the action is self-explanatory:
Beloved Alfredo! Oh joy! – My Violetta! Oh joy!
Take this letter which I am giving you
What happened next? – Listen and I shall explain to you – Speak
then, I am listening ... (etc.)
There is nothing worse than sitting through an entire scene of an opera
and realising at the end that one does not know what the singers looked
like, as one has been too busy reading the titles. This problem is difficult
to avoid in ‘wordy’ operas with a great deal of text and little repetition,
for example Wagner’s Tristan und Isolde and Parsifal, Richard Strauss’s
Die Frau ohne Schatten and Capriccio. The surtitler’s aim should be transparency, or even invisibility. As with a subtitled film, the ideal critical
reaction should be along the following lines: ‘Did you go to the opera
last night? What did you think of the surtitles?’ – ‘Surtitles? I didn’t
notice there were any’.
Many opera libretti are based on existing plays or novels, so it may
help to consult the original source works – Beaumarchais, Pushkin,
Shakespeare, Schiller – though there may be little of the original left.
For instance, Ambroise Thomas’s opera Hamlet does contain the line
Être ou ne pas être? but almost nothing else from Shakespeare’s text.
Sometimes the motivations or relationships of the characters may be
clearer in the source work than in the libretto as set by the composer,
thus aiding the surtitler’s task.
It is also vital to obtain the correct version of the musical score in use
for a particular production. Many operas exist in more than one
original version, or in conflicting modern editions; Bizet’s Carmen is
notorious in this respect. Sometimes a director or conductor may
exhume an unfamiliar version, or an extra aria, even for a well-known
Mozart or Verdi opera. There may be variant readings in different
versions of the score, in which the alteration of even a single word may
change the sense of a line; or some directors may have taken things
into their own hands, and changed the sung text where it conflicts
with the production ‘concept’.
Each opera company evolves its own ‘house style’ for surtitles, but
there is general agreement on some points.
a. Each caption contains either one or two lines of text, to a maximum
of about 40 characters per line, as any more would be too much information for the viewer to take in. The text will normally be centred,
and in an unobtrusive font such as Arial or Helvetica.
b. Dashes are used if two speakers appear on the same caption, either
singing simultaneously, in duet, or in rapid dialogue:
– Dear Figaro, look at my hat
– Yes, my darling, it’s lovely
However, care must be taken not to ‘give away’ dramatic punch-lines
too early by putting two lines on the same caption, for example,
‘Who am I?’ [pause ...] ‘You are my long-lost son’. In opera there may
be trios, quartets or larger ensembles and given that surtitle systems
can only display a maximum of two lines of text, some judicious
selection is necessary.
c. Italics signify that the voice is offstage, or that the text is a letter
or a ‘song’, sometimes a difficult concept in opera, when everything is sung. Italics may also be used in normal fashion, for
emphasis.
d. Quotation marks signify reported speech. The convention at The
Royal Opera is to place quotes in double quotation marks and italics,
for clarity:
She said “I am his mother”
e. Brackets or parentheses signify an ‘aside’, i.e. a line that is not
intended to be heard by other characters on stage:
(I mustn’t let him see me hiding here)
f. Left and right: some systems allow the possibility of splitting titles
on either side of the screen, which will help to indicate which
character is singing in a large ensemble. This is effective only with
short lines of text, and can be distracting. It is therefore more suitable to comedy than to serious opera. Left/right titles may be
successive:
He thinks I cannot
see him
She thinks
I cannot see her
or even simultaneous, although in this case they must be used with
due care so as not to confuse the audience:
I’m hiding
in this alcove
Figaro is
spying on me
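Conventions (a) and (b) above lend themselves to a simple formatting check. The sketch below assumes the approximate 40-character limit and the dash convention described in the list; the function name and error handling are invented for illustration.

```python
import textwrap

MAX_CHARS = 40  # approximate house-style limit from convention (a)
MAX_LINES = 2   # one or two lines per caption

def make_caption(*speakers: str) -> str:
    """Build one caption: a dash before each line when two speakers
    share the caption (convention b); otherwise wrap a single
    speaker's text to at most two lines of about 40 characters."""
    if len(speakers) == 2:
        lines = ["– " + s for s in speakers]
    else:
        lines = textwrap.wrap(speakers[0], MAX_CHARS)
    if len(lines) > MAX_LINES:
        raise ValueError("too much text for one caption")
    return "\n".join(lines)

print(make_caption("Dear Figaro, look at my hat",
                   "Yes, my darling, it's lovely"))
```

In practice the decision to combine two speakers on one caption is editorial rather than mechanical, as the warning about giving away punch-lines makes clear; a checker like this can only flag overruns, not judge dramatic timing.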
Pacing and linguistic flavour
For live surtitles in the theatre, the pacing should be kept slow and simple, to avoid distracting the audience. If a text is much repeated, the
title can be left up for a long period if necessary. Fast exchanges should
be combined, simplified, or omitted, for if they are too quick to cue,
they will be too quick to read.
For the subtitling of a live television broadcast, the pacing can be
faster. It will be necessary to consult the TV director’s camera script and
edit the titles to fit the shots he or she has chosen, avoiding if possible
going over a cut. Close-ups may give the opportunity to title lines which
would be impossible to catch at the slower pace of surtitling. Generally,
a title should not be held on the TV screen for longer than it takes to
read, provided that the effect does not jar musically, and a maximum of
about 10 seconds is acceptable. For video and DVD, ‘fine’ editing is possible, so that tiny details of text may perhaps be titled, although anything less than one and a half seconds is probably too short to read.
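The timing rules of thumb in this paragraph (a maximum of about 10 seconds on the TV screen, a minimum of roughly one and a half seconds for fine DVD editing) can be expressed as a simple check. The function name, messages and exact thresholds are illustrative assumptions, not quoted standards.

```python
MIN_SECONDS = 1.5   # anything shorter is probably too short to read
MAX_SECONDS = 10.0  # a title held longer than this outstays its welcome

def check_title_timing(in_time: float, out_time: float) -> str:
    """Classify a title's display duration against the rough
    lower and upper bounds mentioned in the text."""
    duration = out_time - in_time
    if duration < MIN_SECONDS:
        return "too short to read"
    if duration > MAX_SECONDS:
        return "held too long"
    return "ok"

print(check_title_timing(60.0, 61.0))   # one second
print(check_title_timing(60.0, 63.5))   # three and a half seconds
print(check_title_timing(60.0, 75.0))   # fifteen seconds
```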
As already noted, opera libretti tend towards the flowery, archaic and
poetic in their vocabulary and grammatical formulations. This is particularly a problem in the operas of Richard Wagner, with their long
and convoluted German sentence structures. The titler should try to
reduce subordinate clauses to simple sentences, and simplify grammar
and vocabulary, keeping to clear, modern vernacular unless there are
exceptional circumstances. Slang, expletives and colourful language
should be treated with care; for example, nineteenth-century Italian
opera is notoriously well endowed with words for ‘bad man’ or ‘transgressor’ (empio, traditore, infido, barbaro, misero, cattivo, rio, mostro,
sciagurato ... ), for which English is hard pressed for equivalents. It is better to play safe and just use ‘he’, rather than alarm or puzzle the spectator with variations on ‘wretch’, ‘villain’ or even ‘bastard’.
Problems of translation: two tiny examples
Despite all these considerations, the actual translation of the text should
aim to be as accurate as possible, but this may not be so easy. To take a
famous example from Puccini’s La Bohème, Rodolfo’s aria in Act I,
addressed to his next-door neighbour, Mimì:
Che gelida manina! Se la lasci riscaldar.
Cercar che giova? Al buio non si trova.
Anyone who has seen the opera will know roughly what he is saying:
‘Your hand’s cold, let me warm it up. It’s no use looking for your key in
the dark’. And any English opera-goer of the last hundred years will know
the familiar sung translation in English: ‘Your tiny hand is frozen’. But
what exactly do these Italian words mean? Che is ‘what’; gelida is not
precisely ‘cold’ (which would be fredda), or ‘frozen’ (gelata), but ‘icy’ or
‘freezing’ (the archaic English word ‘gelid’ could hardly be used in a
subtitle!); and manina is not simply ‘hand’ (la mano) but a diminutive,
‘little hand’. Interestingly, Rodolfo does not say ‘your little hand’ – la tua
manina – as this would have committed him to using the intimate tu
form to a girl he has just met. In fact they have been addressing each
other with verbs in the familiar second person singular ever since she
came in, while carefully avoiding actually using the pronouns tu or te.
Therefore an accurate translation would be something like ‘What an icy
little hand’, or even ‘Your tiny hand is frozen’, which turns out to be not
so far off the mark after all. She is suffering from tuberculosis, one of the
symptoms of which is coldness of the extremities due to poor circulation;
but we cannot have explanatory footnotes in surtitling. So we can see
that even with a familiar text we cannot take a single word for granted.
After Rodolfo has told Mimì all about himself, she replies:
Mi chiamano Mimì,
ma il mio nome è Lucia ...
‘They call me Mimì, but my name is ... ’ – what? Lucia? But we are translating into English: shouldn’t she be Lucy? And we are in a Parisian garret, so
surely she should have a French name: Lucile? If we look at the original
novel on which Puccini’s librettists, Giuseppe Giacosa and Luigi Illica,
based their text, Henri Murger’s Scènes de la vie de Bohème (1947), we discover that she is indeed called Lucile – except that the librettists have
altered the story, and the girl with the cold hands is not Lucile but Francine,
since Giacosa and Illica combined features of various characters in Murger’s
tale to arrive at the operatic Mimì. Even more confusingly, if we research
the life of Murger himself, we will discover that he had a predilection for
a succession of pale, doomed tubercular girls, each of whom he would
nickname ‘Mimì’. As it happens, the translation of the name ‘Lucia’ is not
a serious problem, since it is never referred to in the opera again; but what
do we do about the other characters? Should Rodolfo be ‘Rodolphe’, as in
the novel? Or ‘Rudolph’, as our translation is in English? The standard
convention, or compromise, is to stick to the version of the names as they
appear in the opera, illogical as this may sometimes seem: thus ‘Lucia’ and
‘Rodolfo’. The same problem arises in many other operas, for example
Verdi’s La traviata – also with a Parisian setting – in which the hero and
heroine are Italianised as Alfredo and Violetta rather than Alfred and
Violette or, in English, Alfred and Violet – a most unromantic-sounding
couple. In Dumas’ original autobiographical novel and play (1986), they
are not in fact called Violette and Alfred but Marguerite and Armand;
names that have been reinstated in some opera productions as well as a
famous ballet. One could even go back to the true story behind Dumas’
re-telling and be tempted to call them Alexandre Dumas and Marie
Duplessis, although this may be stretching a point.
Pitfalls
A problem with live surtitles is that an audience of perhaps two thousand
opera-goers will be quick to laugh at anything they find amusing, whether
intentional or not. Solitary TV or video/DVD viewers similarly should not
be distracted by inappropriate reactions. One must guard against ‘howlers’.
For example, in Rossini’s Guglielmo Tell (William Tell), the line La gioia egli
vedria d’Elvezia intera was initially translated as:
How happy he would have been
to see Switzerland united
was pointed out as suggesting a football team (‘Switzerland United’),
and had to be hastily changed to ‘a united Switzerland’.
Another notorious case is provided by the surtitles for Puccini’s Tosca
at an American opera house. When the jealous Tosca is asking Cavaradossi
to change the colour of the eyes in a portrait he is painting, Ma fa gli
occhi neri, the request is usually tactfully translated as ‘But make the eyes
dark’. On this occasion, however, it was rendered too literally as ‘Give
her black eyes’, to the great amusement of the audience.
When jokes occur in the original text and therefore are intentional,
the cueing of the surtitles must be timed so that the audience does not
laugh too soon; nothing upsets singers more than a laugh which comes
before they have sung the relevant punch-line. The surtitler may attempt
68
Jonathan Burton
to split the line into two titles, to delay the laugh. This, however, might
distract the audience’s attention, resulting in the stage action being
missed altogether.
The addition of jokes not found in the original is not advisable, although
most titlers are at times severely tempted. The ideal is a joke which is suggested by the libretto and as a result does not contradict the spirit of the
original. A classic example is found in Rossini’s comedy La Cenerentola.
Dictating a decree, Don Magnifico turns to his eager scribes with the
words ‘No, put it in capitals’. So, the next surtitle obligingly appears as:
I, DON MAGNIFICO, DUKE,
BARON, ETC., ETC., HEREBY DECREE ...
which never fails to raise a huge laugh, even though the joke is only implied in the
libretto.
The need for further research
Some operas have exotic or obscure settings which require a certain
amount of historical research in order to establish the correct spelling
of unfamiliar names and places, or even to determine what exactly is
going on; the titler needs to understand the text to be translated, or the
audience certainly will not. A classic example is Adriana Lecouvreur, an
opera by the late nineteenth-century Italian composer Francesco Cilèa,
which has a libretto derived from a French play originally written for
Sarah Bernhardt. The characters are based on historical figures: the
actress Adrienne Lecouvreur, and her lover, a Polish nobleman named
Maurice, Count of Saxony (or Maurice de Saxe), who appears in the opera
as Maurizio, Conte di Sassonia. At one point in the opera, Maurizio is
asked to recount his exploits in Curlandia and ‘the attack on Mitau’. In
pre-Internet days, many hours spent sitting in libraries delving into
encyclopaedias finally revealed that Courland, Curlandia, lies in present-day
Latvia, and that its capital was indeed Mitau, today’s Jelgava. The
original Italian text begins as follows:
Il russo Mèncikoff
riceve l’ordine di côrmi in trappola
nel mio palagio ... Era un esercito
contro un manipolo, un contro quindici ...
Ma, come a Bèndera Carlo duodecimo,
nemici o soci contar non so ...
Already, in the four allusions we have a sheaf of problems: who was
Mèncikoff, and how is he spelt in English? Is un esercito contro un manipolo
some kind of proverbial saying? Who was Carlo duodecimo and where
was Bèndera, and what was he doing there? Charles XII turns out to be
a Swedish king (1682–1718), held captive in exile at Bender or Benderey
in Bessarabian Turkey in 1708. He defeated not only a Russian army but
also Augustus II of Poland and Saxony, so Maurizio, the ‘Count of
Saxony’, would have known what he was talking about. Is nemici o soci
contar non so another famous historical quotation? And so it goes on.
The final subtitle rendition of Maurizio’s recitation goes as follows:
The Russian, Menshikov,
was ordered to trap me in my palace
It was an army against a handful,
fifteen to one
But like Charles XII at Benderey,
I could count neither friends nor enemies ...
All these problems are very minor, and should not be evident to the
audience. However, failure to deal with them may result in inaccuracies
or ‘howlers’ in the titles.
Conclusion
The subtitling of opera on television, video and DVD, and latterly the
surtitling of live opera in the theatre, are disciplines which have come
into being only in the last few decades and they have developed rapidly
both in sophistication of hardware and software, and in subtlety of
application. A correct and sensitive approach to the translating of operatic texts will involve considerations and problems not found in other
forms of translation. Finding successful solutions can be hard work,
but can also be satisfying. The titling of opera is not only a craft, but
also an art.
Notes
1. More information is available at: www.eno.org/src/eno_introduce_surtitles.pdf
2. All surtitle translations quoted are either fictitious examples, for the purposes
of illustration, or are from titles by the author.
References
Dumas, A. fils (1986 [1848]) La Dame aux Camélias, D. Coward (trans.). Oxford: OUP.
Ivarsson, J. (2002) A Short Technical History of Subtitles in Europe. www.transedit.st/
history.htm
Marek, G.R. (ed.) (1957) The World Treasury of Grand Opera: Its Triumphs, Trials
and Great Personalities. New York: Harper & Brothers.
Murger, H. (1947 [1847–49]) Scènes de la vie de Bohème. Paris: SEPE.
6
Challenges and Rewards
of Libretto Adaptation
Lucile Desblache
Introduction
During the interval of one of his recitals the Irish tenor John McCormack
was told that W. B. Yeats was attending his performance. The singer was
keen to meet the poet, but the latter was disappointed with last-minute
changes in the programme, where popular ballads had replaced some of the
more highbrow art songs originally announced, and is reported to have
said quite abruptly that he had not enjoyed the concert. Desperately trying
to win the poet over, John McCormack fumbled:1
‘But what about my diction? People usually comment on the clarity
of my words.’
‘Ah!’ Yeats is reported to have replied, disappointed not to have heard
poems set to music, ‘the clarity of the words, the damnable clarity of
the words!’
In this chapter, I am first going to discuss libretti in general, historically
and linguistically. This is followed by a case study in which I was personally involved as a translator: the transfer from English to French of
Benjamin Britten’s and Eric Crozier’s Albert Herring.
Libretti – past and present
In opera, far from complaining about the clarity of the words, the
public has only been able to relax and understand the plot since surtitling was adopted by opera houses in the late 1980s. Before then, only
those who were familiar with the repertoire or took time to read the
story ahead of the performance could understand what was happening
on stage (see Burton, this volume). As far as content is concerned, there
rarely seems to be a happy medium in opera. Texts tend to be either
over-simplistic and repetitive, or based on fiendishly complicated
historical plots. In either case, jealous baritones generally plot the
murder of amorous tenors while (not always seemingly) feeble sopranos
die of unrequited love singing top Cs.
In operatic music and art songs, attitudes to languages have always
been different from those in other performance art forms. In straight
theatre, one of the priorities of the translator is to convey the specificities of the source culture to the target audience. As a quintessential
product of the Renaissance, opera, at least originally, was fuelled by a
desire to convey a universal message, beyond local or national significance. Although this trend is not always followed in contemporary
compositions, reinterpretation of myths has remained widespread
throughout the last century. Italian was adopted as the universal
musical language at the start, in the seventeenth century. French opera
was soon also established as a school (paradoxically, initially by an
Italian, Jean-Baptiste Lully, then by a German, Giacomo Meyerbeer)
and composers often wrote operas in collaboration with Italian or
French librettists to be performed for audiences, usually aristocratic
audiences, whose native tongue was not Italian: Gluck, Handel and
Mozart, the founders of modern opera, all composed using texts not
written in their native tongue.
Opera and art songs, as artistic expressions of high culture, were
written for an educated audience who expected to hear Italian, French
and, to a lesser degree, German. Handel’s Vauxhall audiences were
handed English translations of the Italian libretto being performed, but
nevertheless expected operas in a foreign language as an unquestioned tradition. Several versions of operas, such as Gluck’s Orfeo, were
composed to please audiences – and patrons – of different nationalities.
This not only resulted in linguistic transfer but also in substantial
musical or production modifications. Yet, while opera was intended for
the aristocracy, it appeared only in French, Italian and German.
Intercultural awareness spoke the language of the rich and powerful.
The situation changed in the nineteenth century. The main operatic
and song repertoire was mostly adapted to the language of the country
in which it was performed from the mid-nineteenth to the mid-twentieth centuries. This was due to an expansion in popularity of
opera and comic opera as well as a growth of compositions in a wide
range of languages. Major operas were composed in Russian, Czech,
Hungarian, Norwegian and other ‘minor’ languages. A two-tier level
of opera performance emerged, of which traces are still visible today,
particularly in Britain, where Sadler’s Wells Opera, which evolved into
the English National Opera, has a tradition of opera performance in
English established in 1931. World-class opera houses, such as the New
York Metropolitan Opera, always performed works in their original
language, whereas provincial and less prestigious houses adapted
original pieces. While world-class theatres and opera festivals relied
on international audiences, smaller houses, in a spirit of popularisation, adapted operas to entice a local public. Resentment from the music
intelligentsia about such transfers was not uncommon. Many musicians
have defended the position of orchestral and instrumental music as a
pure signifier, as they feel that music expresses a message other than
semantic. While operatic and vocal music played with the dual use of
signified and signifier, words could transcend their semantic meaning
in a musical or poetic context and be pure sound, removed from their
linguistic sense. In 1919, Gustav Kobbé, the American opera historian,
was already writing:
Any speaker before an English-speaking audience can always elicit
prolonged applause by maintaining that in English-speaking countries, opera should be sung in English. But, in point of fact, and even
disregarding the atrocities that masquerade as translations of opera
into English, opera should be sung in the language in which it is
written. For language unconsciously affects, I might even say determines, the structure of the melody. (Kobbé, 1933: 2 [1st edn 1919])
The advent of surtitling, with its relatively unobtrusive ways of conveying the semantic message of operatic works, made it possible to watch
and hear operas in their original language while still following the libretto’s
message. In most cases, this method of transfer works extremely well
and surtitles allow the audience to understand the gist of the plot. In the
case of light opera though, direct transfers may still be preferred.
In music terminology, comic opera refers to lyrical works which integrate spoken dialogue. However, it can also be used for works written in
a lighter vein and this is how I refer to it here. Tragic opera, opera seria,
is often based on mythology to some degree known to the audience
(Orpheus, Bluebeard, Faust, Don Giovanni), or structured on a narrative
made obvious by the action taking place on stage (Carmen, Madam
Butterfly, Tosca). Complex historical plots, which at times would obscure
intelligibility, can now be summarised very efficiently with surtitles,
and often are, even in the case of operas performed in the native tongue
of the audience, as opera singers are notoriously difficult to understand.
In light opera, words may be dependent on action, their meaning may
be enhanced by the production, but they can also express humour
through puns and purely semantic means. Besides, they have often
been composed with a more intimate setting in mind than grand opera
and require full contact with the audience.
The statement made by Kobbé nearly a century ago that words are not
only semantically indispensable to the understanding of an opera but also
vital to its musical texture and structure, still stands. Yet in light opera,
the need to have direct contact with an audience through words is also
vital. Summarising text, as must be done in surtitling, and conveying the
message through an intermediary screen may damage the immediacy of
communication needed in humour, where timing is supremely important
and semantic subtleties are not always transferable. Light opera does not
have the high status of grand opera, is usually not conceived on the same
scale (using a smaller number of musicians and singers) and perhaps also
for these reasons, is often more acceptably transferred into another language. For example, in some ways, the idea of staging a performance of
Aida in English is old-fashioned, whereas an adaptation of a French operetta such as Orpheus in the Underworld, especially if peppered with contemporary references and completely reset in a British context, can be very
attractive to the public. This corresponds to the tradition of drawing the topics of opere serie from mythology and history, whilst
comic operas and operettas are conceived in a contemporary context.
Benjamin Britten’s and Eric Crozier’s Albert Herring
The reasons why comic opera is less highly regarded than grand opera
fall beyond the scope of this discussion, but from a translator’s point of
view, lighter music is more challenging, because of the type of musical
forms favoured (many recitatives where each note corresponds to a syllable, fewer lengthy arias, where long passages can be sung on one
vowel, hence the term ‘vocalise’) and because of the frequent presence
of humour. As with any score, restrictions are imposed on the text by
the music. Strong beats always correspond to stressed syllables in a
word, and for a transfer into a language like French, which has no
lexical stress, this poses many problems and requires varied strategies. To a
certain degree, the music dictates which words should be used. In light
music the purpose of the translation is slightly different in that the plot
is often deliberately ridiculous: the informative function of the text is
therefore played down in favour of the appellative and persuasive functions. Textual parody is very frequent (for example, there are many
instances of legal discourse being sent up in comic opera, from Mozart
to Donizetti). Register and intonation are also vital elements. Composers
emphasise certain words through musical means, be they repeated or
not, and it is important not to miss their prominence.
Libretto translators usually work from a vocal score, showing piano
and vocal parts. They may also have access to the orchestral score and
to a published libretto. A libretto is extremely useful, as some additional
information (foreword, stage directions, capitalisation of words for
emphasis) may be included. It also shows clearly the text forms used,
line rhymes and breaks and any structural aspect of the text which may
be easily overlooked when considered in its musical context. In the case
of Albert Herring, all three forms of publication were available to me. The
libretto proved very useful, as rhyming is used with subtlety and might
have been missed if not laid out as verse. The example below, an excerpt
from Lady Billows’ first aria, is a good illustration of how original
rhymes and structures may be overlooked when looking exclusively at a
musical score, as they are not textually emphasised (Crozier, 1981: 9):
Is this the town where I
Have lived and toiled?
A Sodom and Gomorrah
Ripe to be despoiled?
O spawning ground of horror!
Shame to Loxford: - sty
The female sex has soiled!
Eric Crozier started to work with Benjamin Britten as a stage producer
and Albert Herring was the first libretto he wrote for Britten. The whole
opera was composed between October 1946 and February 1947 ‘to
launch the first independent season of the English Opera Group, in
summer 1947’ (Crozier, 1947d: 5). Britten’s and Crozier’s collaboration
was very close and Britten even asked Crozier to stay with him for a few
weeks, so that they could work together, sometimes at the same desk
(quoted in Newman, 1947):
One of the secrets of writing a good opera is the working together of
poet and composer. [ ... T]he musician will have many ideas that may
stimulate and influence the poet. Similarly when the libretto is written and the composer is working on the music, possible alterations
may be suggested by the flow of the music and the libretto altered
accordingly [ ... ] The composer and poet should at all stages be working in the closest contact.
Britten’s intimate relationship with the singer Peter Pears also made
him very aware of issues regarding text and ‘singability’ in opera and
songs. It may also have intensified his need to write for the voice and to
set words to music. His collaboration with Peter Pears was in fact
criticised at the time, as Peter Gammond’s humorous remark shows: ‘He
wrote two kinds of works: vocal, which all sound as if they were written
for Peter Pears to sing (and were), and non vocal, which all sound as if
they were written for Peter Pears to sing (and may have been, but he was
busy at the time)’ (quoted in Surrans, 1988: 40).
The collaboration with Crozier was very successful and after the completion of Albert Herring, the librettist notes in a letter to his wife-to-be:
‘The opera is all finished except for rewrites. [ ... ] We celebrated with a
glass of Genever [ ... ] and Ben said, would I devote the rest of my life to
writing libretti for him?’ (Crozier, 1947b). This collaboration was not as
fruitful as it promised to be, but the two artists did work together on
several pieces, operatic and sacred.
Albert Herring is an adaptation of a de Maupassant short story, Le Rosier
de Madame Husson, originally written in 1887. Crozier (1981: iii) explains
in the foreword to the second edition of the libretto how the story was
chosen:
[W]e settled on Guy de Maupassant’s brilliant short story Le Rosier de
Madame Husson [ ... ]. We transferred the scene of the story to Suffolk,
and centered it upon a small market-town familiar to both of us. This
brought immediate changes in the treatment of the French story and
its characters. Working from our knowledge of English people on
one hand and from the qualities of particular singers on the other,
we made our first sketch for a lyrical comedy with twelve characters,
in five scenes and three acts.
As translator, I therefore found myself in the unusual situation of
having to operate a cultural transfer into a French context, since the
opera was going to be performed for a French audience, but having to
keep the English essence that Britten and Crozier had imparted to the
work, even though the original story was French. Many original allusions to English social behaviour were necessarily going to be lost. The
purpose of translating the libretto was mainly to allow this gentle satire
to be understood, in order to allow the French audience to laugh as well
as enjoy the music. The question of the (in)visibility of the translator is
particularly acute in such work, as the translator must be as unobtrusive to the text as the original librettist was to the music. In Crozier’s words
(1947d: 9), the librettist’s ‘highest ambition is to serve the composer’s
intention sincerely, neatly and well’. The validity of the concept of the
invisible translator has long been called into question (Venuti, 1995), but
with libretto transfer, one has to be not only faithful to the original text
but also to the music. The translator of such a work may not be required
to be invisible, particularly in light genres where s/he may be expected
to provide a contemporary version of the libretto, but s/he certainly has to
write words that not only serve their purpose in communicating a message, but also serve the music. I was very privileged to have met and developed a relationship with Crozier such that I was able to discuss any issue I
felt uncomfortable with. This is a rare privilege indeed. Clarification on
some minor points of the original libretto was very useful. But his general
advice was not to be too close to the source text. The first edition of the
score was originally published bilingually in English and German.
Crozier’s advice (1947c) to the German translator was the same:
I am especially pleased that you would like to make the German
translation. [ ... ] Of course, I needn’t tell you that from my point of
view as a librettist, I should wish you to have an entirely free hand
and to adapt the text into a German equivalent – it can’t be translated in any literal way.2
The opera is set in 1902 in East Suffolk and relates the coming of age of
a simple young working-class boy, Albert Herring, who is dominated by
his mother. As no girl pure enough to be elected May Queen can be found,
the well-to-do society of Loxford, an imaginary town remarkably similar to Aldeburgh, decides to crown a May King. Albert is chosen but
against all expectations rebels and walks off after the ceremony to lead
an independent life.
I was faced with many problems; three in particular are noteworthy:
cultural equivalence, humour and rhyming. The first two issues were
often linked, although not always. Although I read the well-known de
Maupassant story again before embarking upon the translation, it was
more interesting than useful. Albert Herring had become an English
story and had to remain one. Another main issue was the general tone
and register of the piece. The opera was a satire, but stressing the humour
too heavily would destroy the spirit of the piece. It was also difficult to
convey the obsession of the English with their social system while keeping the lightness of the piece. One of Crozier’s letters (1947a) to Nancy
Evans points out very clearly that the light vein of the opera must be
preserved:
A long letter from Carl Ebert3 tonight – quite stupid – talking about
Act 1 of Albert and how the production must express the ‘social
criticism’ of the comedy and the ‘mendacious prudery’ of the characters. O God ... ! All so off the point ... Now I must write a long letter
back in words of one syllable, explaining that this isn’t an Expressionist
or Trotskyist attack on the upper classes of a decadent England, but a
simple lyrical comedy.
Finally, there was the problem of rhyming. Crozier (1964) took a very
clear stand about rhyming:
One difficulty that faces anyone who writes a comic opera in
English is the question of rhyme. Some form of rhyming is almost
indispensable:
The common cormorant or shag
Lays eggs inside a paper bag.
Take away the rhyme, substitute ‘box’ for ‘bag’ and the whole effect
is destroyed. Yet this device has been so fully and finally exploited by
W. S. Gilbert in the Savoy Operas that exact rhyming (especially if
the rhymes are double or triple ones) is liable to sound as obsolete
nowadays as the Victorian pun. I tried to avoid this danger in my
libretto by using a form of near-rhyming.
Near-rhyming is not used as readily in French as in English, but the
square model of French rhyming used in opérettes and other light
works was hardly suited to the music of Benjamin Britten, with its
many disjointed effects and syncopations. Besides, it was desirable to
mirror the diversity of strategies used by Crozier. A consistent decision
would have to be made, which would allow a contrast between free
prose, free rhyming and strict rhyming, as this contrast is essential to
the original work.
Cultural and linguistic solutions
To illustrate solutions chosen as regards cultural equivalence, humour
and rhyming, some examples have been selected from the vocal score
(Britten, 1948a). In Albert Herring, cultural references are mostly linked to
social representations (allusion to social class issues, ways of celebrating
events). The problem of cultural transfer was not so much that of finding
an exact equivalent, since the translator is not tied to finding a literal
semantic counterpart. The difficulty, given the music constraints, consisted in finding an equally English concept which a French audience
would understand. The example chosen here is taken from the beginning of Act II. The schoolteacher, Miss Wordsworth, has organised the
children to sing a festive song in honour of Albert’s crowning. As they
gather for a last rehearsal, the children cannot contain their excitement,
faced with the food display laid out for the ceremony. An enumeration of
different English dishes and sweets, interspersed with the schoolteacher’s
last recommendations follows. Delicacies include jelly, pink blancmange,
seedy cake with icing on, treacle tart, sausagey rolls (sic) and trifle. Most
of these had to be transferred with French equivalents as few English
sweets and dishes are well known in France. Musically, this passage is
very fast, mirroring the excitement of the children. Short and snappy
words have to be used as the musical quality of the language is as
important as the equivalent meaning. Party food names were therefore
chosen for their snappiness: tartes, bonbons, biscuits, petits fours
fitted in well musically, even if the English reference was lost.
Cultural dimensions are always more challenging to the translator
when they are linked to humour. In the case of Albert Herring, the
original English text could sometimes be kept. I decided to leave
untouched the exchange of ‘Good mornings’ taking place at the beginning of the opera; it helps to assert the conventional relationships of a
provincial town and the Englishness of the opera. Equally, all names
were pronounced with an English accent in the French version. Yet,
neither satire nor Englishness is always so easily conveyed to a
French audience. In her congratulatory speech to Albert, Lady Billows,
Albert’s patron, uses more and more clichés and set phrases and her
vocalisations become quite incoherent. The parody of a stiff upper lip
and conventional speech is created through the build-up of standard
English phrases which are funny through the semi-incoherence of their
accumulation. They could not all be transferred into French as it was
important to keep to set phrases. Some expressions such as ‘Britons,
Rule the deep’ could be transferred adequately (Anglais, Régnons sur les
flots), but concise phrases such as ‘King and country’, transferred as
Honneur et patrie, lost some of their flavour.
The problem of rhyming and near-rhyming was also a recurrent issue
in this comic opera. In many ways, this was the most difficult problem
to solve. In very structured musical pieces which mirror established
forms (threnody, hymns), rhyming was relatively easy to transfer. Yet,
near-rhyming was used in many passages which were written in recitative form, needed to keep a flexible musical line and to convey semantic
ideas. In some cases, rhyming/near-rhyming could not be achieved
because of these restrictions: writing a text which gave a clear message
and which flowed with the music were the two priorities. For example,
in the first act of the opera, each member of the ‘May Queen committee’ suggests a young girl as a possible candidate. Singers are very exposed
at this stage, as the priority is to communicate the text to the audience.
Through the speech and the music they express their social character,
their personality as well as the basic message that has been composed in
verse to be sung. In spite of many attempts to keep all these elements, I
sometimes decided not to use verse in French, as the fluidity of the text
would suffer, particularly in the less structured musical passages. I have
thus rendered the vicar’s suggestions as follows:
The first suggestion on my list
Is a charming local girl
Who takes Communion
And never missed
A Sunday – Jennifer Searl
La première sur ma liste
Est une charmante jeune fille.
Elle communie et
Vient toujours à l’office:
Jennifer Searl.
Conclusion
In her discussion of Reiss’s definition of audio-medial and multimedial
texts, Snell-Hornby (1997: 280) questions the general principle according to which text-types dictate the translation method used:
As a general principle, this [the fact that text-types determine the
translation method] may be debatable, as most texts are in fact hybrid
forms, multidimensional structures with a blend of sometimes seemingly conflicting features, but in the case of the audio-medial text, it
is certainly arguable that the translation strategy should depend on
possibilities of expression inherent in the human voice. This can
develop into an issue where the principles of rhetoric and speakability conflict with culture-specific expectations and language-specific
text type conventions.
Although Snell-Hornby does not address the specific problem of libretto
transfer, she emphasises here the two key elements in libretti which
determine their translation: their hybrid textual nature and the absolute priority of making them singable.
So many transfer problems are inherent in libretto adaptation that
some translators find it a more frustrating than rewarding experience.
Indeed, some of the worst translations published have been texts linked
to music and we have all laughed at some dated nineteenth-century
translations of art songs. In this sense, the screen translation offered
through surtitling may be more satisfactory. Yet, in such works as Albert
Herring, based on humour and direct communication with an audience, and which are generally performed in small theatres where sung
words can be understood much more easily than in average opera
houses, direct contact with the text is, in my view, preferable. It is
always difficult to make a judgement of one’s work, and many of my
own solutions were not fully to my satisfaction. Nevertheless, it is
ultimately very gratifying to know that you are collaborating with a
first-class musician and to hear your audience laugh. It was also an
unsurpassable experience to meet and get to know Eric Crozier and his
wife Nancy Evans, who were so generous in sharing their experience
and providing encouragement. I would like this contribution to be a
tribute to them.
Notes
1. This anecdote was reported to me by an opera singer.
2. In the end, the opera was not translated by Hans Oppenheim, to whom this
letter was addressed, but by Fritz Schröder.
3. Carl Ebert was the artistic director of the Glyndebourne opera at that time.
References
Britten, B. (1948a) Albert Herring. Vocal score. London: Boosey and Hawkes.
Britten, B. (1948b) Albert Herring. Orchestral score. London: Boosey and
Hawkes.
Crozier, E. (1947a) Unpublished letter of Eric Crozier to Nancy Evans,
10th February.
Crozier, E. (1947b) Unpublished letter of Eric Crozier to Nancy Evans,
12th February.
Crozier, E. (1947c) Unpublished letter to Hans Oppenheim, 21st June.
Crozier, E. (1947d) Foreword to the Libretto Albert Herring. London: Boosey and
Hawkes.
Crozier, E. (1964) Foreword to the Decca recording of Albert Herring. London:
Decca.
Crozier, E. (1981) Albert Herring, Libretto (2nd edn). London: Boosey and
Hawkes.
82 Lucile Desblache
Kobbé, G. (1933 [1919]) Complete Opera Book. London and New York: G. P. Putnam’s Sons.
Newman, E. (1947) Albert Herring. The Sunday Times, 6th July.
Snell-Hornby, M. (1997) ‘Written to be spoken: The audio-medial text in
translation’. In A. Trosborg (ed.) Text Typology and Translation (pp. 277–90).
Amsterdam and Philadelphia: John Benjamins.
Surrans, A. (ed.) (1988) 100 compositeurs de A à Z, Jeux de massacre. Paris: Editions
Bernard Coutaz.
Venuti, L. (1995) The Translator’s Invisibility. A History of Translation. London and
New York: Routledge.
Part II
Revoicing
7
Dubbing versus Subtitling:
Old Battleground Revisited
Jan-Emil Tveit
Introduction
Early in the twentieth century the new film medium transcended all
national and cultural borders, but with the arrival of the talkies, the
film industry faced a translation problem since only a small percentage
of the world’s population understood English. As a result, there was a
growing need to find appropriate screen translation approaches.
In Europe, France became a forerunner experimenting with both
dubbing and subtitling. It did not take long, however, to find out that
both approaches had their disadvantages: it was even claimed that
translating a film ruined it. To solve the problem a third approach was
tried out in the form of multiple versions, which meant that films were
shot in several languages instead of one. But the different versions were
not on an equal footing, and it did not take long before it became obvious that the TL-versions were suffering. A main problem was that their
linguistic quality was not up to par.
As time went by, however, French audiences grew increasingly
dissatisfied with subtitling and dubbing gained considerable territory.
Along with other countries like Italy, Spain and Germany, France gradually developed into a dubbing stronghold, while the Scandinavian
countries and the Netherlands, on the other hand, opted for subtitling.
During the first few years, this was largely a question of money. But
much has changed since the arrival of the talkies and the advent of
television. Which one of these two translation solutions is the best
option for today? In order to answer this question, I will compare the
two approaches paying special attention to their constraining factors.
Constraining factors of subtitling
Whereas it may be argued that subtitling is the only intelligent solution
(Reid, 1978), it has also been claimed that this approach is only the
lesser of two evils (Marleau, 1982). And there is no doubt that subtitling
has its constraints.
An important aspect of the subtitling process is the potential loss of information it involves: when it comes to expressing nuances, the written word cannot possibly compete with speech. Hence a large number
of lexical items tend to be required in order to match what is conveyed
by stress, rhythm and intonation. Normally, however, the subtitler does
not have room for wordy formulations or complex structures: in order
to enhance readability, brevity is of the essence. And if the subtitles are to
remain on the screen long enough for audiences to read them, contraction is a must, which in turn can result in a regrettable loss of lexical
meaning. Often it is not easy to decide what to leave out. Although
there are redundant linguistic features in speech, sometimes even slight
omissions may bring about significant changes in meaning.
It is difficult to generalise when it comes to reading speed and rate of
standardised presentation. According to Luyken et al. (1991), the reading speed of adult viewers hovers around 150 to 180 words per minute.
This is, however, subject to extensive variation and depends on the
complexity of the linguistic and factual information that the subtitles
contain. If lexical density is high, accessibility to the information tends
to be low, which calls for added subtitle exposure time.
Furthermore, readability is said to be affected by film genre. This is
how Minchinton (1993: 14–15) comments on love stories: ‘Viewers need
not read many of the titles; they know the story, they guess the dialogue, they blink down at the subtitles for information, they photograph them rather than read them’. Crime stories, according to
Minchinton, may give translators and viewers a harder time: if the
action is to be understood the subtitles have to be read.
A further point made by Minchinton concerns reading speed that
may be affected by the viewers’ attitude to the subject matter of films or
programmes. He suggests that if viewers find a story exciting they are
able to read the subtitles faster. On the other hand, it may be argued
that the more interesting audiences find a film, the less inclined they
are to spend the time reading.
As far as television subtitling is concerned, condensation levels vary
between countries. If we compare the Scandinavian countries, stricter
rules have traditionally applied in Sweden than in Norway and Denmark.
In Sweden the duration of a full double-line subtitle is supposed to be
6–7 seconds, compared to 5 seconds in Denmark. The position of the
Norwegian Broadcasting Corporation (NRK) is that a full double-line
subtitle should remain on the screen for at least 6 seconds.
Except for some research carried out in Sweden a couple of decades
ago, the definition of readability is based more on common sense than on research results. By way of obtaining what I felt was necessary
empirical evidence, I decided to test whether the exposure time of
Norwegian subtitles could be cut down without significantly reducing
readability and comprehension. My samples were drawn from pupils/
students at nine Norwegian lower and upper secondary schools. There
were 508 respondents of between 13 and 20 years of age, and the
response rate turned out to be as high as 95%. The samples would appear
to be sufficiently random to constitute a representative cross-section of
Norwegians belonging to these age groups (Tveit, 2005).
My results showed that the retention of textual information was only
marginally reduced when the exposure time of each subtitle was cut by
1 second. Hence readability was not dramatically affected when the
duration of a full double-line subtitle decreased from 6 to 5 seconds.
When the exposure time was cut by a further second, however, the
situation changed significantly, most respondents losing out on a
considerable amount of information.
Cohesive devices are often considered omittable. But although they
may not have obvious semantic functions, these still play an important
role in making relationships and events explicit. A text that does not
contain words of this kind may be difficult to access, and omitting
cohesive devices in order to boost readability can therefore prove counterproductive. It may, indeed, reduce readability.
Admittedly, Gottlieb (1997) has a point when he emphasises the fact
that subtitling is additive by nature, that is verbal material is added to
the original programme and nothing is removed from it. The usefulness
of this addition, however, depends on the viewers’ comprehension of
the original dialogue. It is true that tone of voice, stress and intonation
may contribute to conveying information across language barriers, but
if source and target languages are poles apart in terms of lexis, the value
of keeping the original soundtrack may be rather limited. Very few
Scandinavian viewers benefit much from dialogue in Russian or Greek
being retained in order to further understanding of lexical meaning.
They may benefit greatly, however, if the dialogue is in a language
similar to their own, and therefore, comprehension is markedly facilitated for Norwegian television audiences when Swedish interviewees
appear in, for instance, news bulletins as, in addition to being subtitled
into Norwegian, they continue to speak in Swedish.
Although my research indicates that it is possible to reduce exposure
time by a second, it is difficult to generalise in terms of reading speed
and rates of presentation of information. As mentioned, the complexity of the linguistic and factual information has to be taken into
account.
Traditionally, a distinction has been made between television and
video on the one hand and cinema on the other. It has been assumed
that identical subtitles are easier to read on the cinema than on the
television screen. The reason for this has never been satisfactorily
investigated, but it is assumed to be related to the size of the screen,
the font used for the letters, and the better image resolution of the
cinema.
Spoken versus written language
Another constraining factor of subtitling results from the spoken word
containing dialectal and sociolectal features which are extremely difficult to account for in writing. Whereas spoken language tends to contain unfinished sentences along with redundant speech and interruptions,
writing has a higher lexical density and a greater economy of expression.
In addition, written translations of spoken language often display a tendency toward nominalisation, whereby verbal elements are turned into
nouns. Hence it is difficult to retain the flavour of the spoken mode in
subtitles. When it comes to keeping the register and appropriateness of
the SL-version, dubbing can undoubtedly be at an advantage.
Previously it was rather common for informal stylistic features of
speech to be replaced by more formal and inappropriate language in
subtitles, but in this respect a marked improvement can be observed
in recent years. It could well be that the orientation away from a
traditional ‘bookish’ approach is related to the increasing number of
American productions that are shown on television in Norway. This
reflects a trend away from a formal register of language in American
culture. Sitcoms like Friends, Frasier, Seinfeld and Ally McBeal are far
from easy to handle for screen translators and seem to have contributed
to making subtitlers more aware of the need to handle register at both
the lexical and syntactic level. The translation of such programmes
into other languages requires a high degree of communicative
competence.
Traditionally, Norwegian television has handled four-letter words
with conspicuous caution. Since the advent of television, taboo words
have been used increasingly in foreign language programmes that the
Norwegian Broadcasting Corporation has bought from abroad, but they
have not always been translated into equally strong language. This has
to do with the assumption that the effect of swear words is reinforced
when they are printed. Although swearing is not uniformly accepted in a country like Norway, with its deep-rooted puritan traditions, it now seems strange that it took Norwegian translators such a long time to challenge the assumption that the printed swear word is stronger than the spoken one. If anything, it ought to be the other way around; a printed representation is probably only a weaker version of a word pronounced with
stress and intensity. Literature is full of four-letter words and it seems
hardly likely that their force or effect would be reduced in films based
on the books.
As for the handling of four-letter words in translation, the most difficult task I have encountered was the translation of a Channel Four
production portraying Graham Taylor, who had been sacked as England manager following the team’s failure to qualify for the 1994 soccer World Cup. The programme provided me with a number of challenges, not
least because of its continuous context-dependent changes of style
and register. It ranged from the elevated and analytical language of
Channel Four reporters to Graham Taylor giving his players a dressing down for being unable to produce their best results. The most difficult part to transfer to Norwegian was Taylor’s verbal behaviour
during matches. As the ball did not roll England’s way that season,
four-letter words were fired in rapid succession. Pretty much in line
with Norwegian subtitling tradition and what I had been trained to
do, I tried my best to tone down the expressive force of the four-letter
words, in addition to leaving many of them out. But on the day that
the programme was to be broadcast, Norwegian morning newspapers
focused on the reception the programme had got in the UK, where
Graham Taylor’s strong language had created a stir. As a result, I felt I
had to change a number of my translation solutions, the neutralisation strategy having turned out to be inappropriate. Fortunately, I still
had a couple of hours to bring the register of the target version in line
with the rather colourful original. When the programme was aired in
the late evening, the subtitled Norwegian version contained a rather
high number of four-letter words; there were no viewer reactions in
the days and weeks to follow.
Visual, spatial and decoding restraints
In addition to linguistic restraints there is the visual factor. Trying their
best to read everything that has been translated, viewers are often
unable to concentrate adequately on other important visual information and sometimes also on oral information. This is regrettable since
audiovisual programmes combine words and images, and the translation should observe the interrelation between the way a plot is told and
the manner in which it is shown. Subtitles should synchronise not only
with speech, but also with image.
Twenty years ago, cinema and television films were cut at a slower pace than they tend to be now. This is undoubtedly an important reason why the
history of subtitling does not contain much discussion of editing and
camera manipulation techniques. Visual aspects are now more an issue,
which reflects the rapid development of the last couple of decades in
camera manipulation, sequence construction and programme editing.
This is not only due to the number of cameras involved and the way
they are used, but also to what takes place when the filming of a programme is complete, when the different sections, with their shots,
scenes and sequences, are assembled and edited.
It is increasingly important for the subtitles to be integrated with the
film and to fall in with the rhythm of the visual information on the
screen. This is far from easy to achieve when the editing of the film is
gaining momentum and the flow of the shots is adjusted to the demands
of the narrative and the impact on the audience. With respect to spatial
constraints, in the Nordic countries, one- and two-line subtitles are
used. The length of the line varies, but normally does not exceed 38
characters. The main limiting factors are the size of the television screen
and the size of the font.
When the subtitles follow each other in rapid succession, reading one
two-liner appears to be less strenuous than reading two one-liners containing the same amount of information. Although precise timing has
usually been defined in terms of subtitle–speech synchronisation,
research carried out by Baker (1982) in Great Britain indicates that subtitles overrunning shot changes cause perceptual confusion. Interesting
as these results may appear, they do not seem to have had much of an impact on subtitling practice in the Netherlands and the Scandinavian countries, and the great majority of Norwegian screen
translators still favour two-line subtitles.
Sometimes, however, two lines occupy too much space and interfere
substantially with the visual information and composition of the
picture. A case in point is the animation series South Park, where the
bodies and faces of the characters fill up most of the screen. This series
can be difficult to handle for subtitlers. Also in news bulletins, faces
often dominate the screen as the camera focuses on the so-called ‘talking head’, but since there is limited activity in the picture, the viewers
are able to concentrate on the translation.
News, current affairs and documentary programmes frequently contain captions or inserts. These are texts added to the film after shooting.
They may be names of people taking part, or names of places visited or
shown. As captions often appear at the bottom of the screen, there is
the risk that they may collide with the subtitles, a situation that should
obviously be avoided.
Decoding may present translators with a difficult task due to the presence of ambiguities. Normally a translator has the means of following up
unclear linguistic or factual points; for the subtitler, however, this is not
necessarily the case. One reason may be the lack of a manuscript or dialogue list, often the situation when interviews are made in news programmes. However, as the reporter tends to know enough about the
context, the subtitler may be helped with potential decoding problems.
Frequently the subtitler does not have the time to obtain adequate
knowledge of the context. In addition, the huge number of varieties of
English – and the fact that the interviewees’ command of the language
may be far from satisfactory – often make translation a rather difficult
process. A mistake may occur after the translator has spent considerable
time trying to decode a difficult word. But sometimes we simply misinterpret what somebody is saying without realising it until we are confronted with our mistake at a later stage. The following mistake belongs
to the latter category. It was made by Swedish television in a translation
of Bill Clinton’s 1996 re-election campaign. The words of the president
were: ‘We’ve created nine and a half million jobs, more than half of
them high-wage jobs’. The translator, who does not appear to have had
access to a manuscript, produced the following translation:
Vi har skapat 9 ½ milj. arbetsplatser,
mer än hälften inom motorvägsarbete.
[We have created nine and a half million jobs,
more than half of them highway jobs.]
Mishearing phonemes is a common type of decoding mistake, and seems
to be what occurred here. The translator must have been convinced that
s/he heard ‘highway jobs’ (motorvägsarbete) and not ‘high-wage jobs’. In
this particular case there is no doubt that decoding was extremely
difficult. The two constructions contained very similar sounds.
Nevertheless, common sense combined with adequate knowledge of US
society and politics would probably have saved the translator from dropping this brick. After all, it would have been rather extraordinary if half
of all new jobs had been created in the field of road construction – even
in the homeland of Henry Ford.
Stringent deadlines in combination with decoding problems often
present screen translators with difficulties and lead to misunderstandings,
errors and inaccuracies. They represent considerable decoding constraints
and are bound to have a negative impact on the quality of the translation.
And when the audience has access to the Source Language dialogue, a
high degree of translation accuracy is required.
The constraints of dubbing and
lip synchronisation
When we experience state-of-the-art lip synchronisation, it is not difficult to understand why this method is the favoured screen translation
approach in large parts of the world. However, the constraining factors
of this approach are very obvious.
One important consideration is the loss of authenticity. An essential
part of a character’s personality is their voice, which is closely linked to
facial expressions, gestures and body language. Authenticity is undeniably sacrificed when a character is deprived of their voice and instead
the audience hears the voice of somebody else. At the Cannes Film
Festival in 2003, I interviewed 25 people working in the film industry
about their screen translation preferences. All but two said they favoured
subtitling. When asked why, most of them answered that they regarded
subtitling as the most intelligent and authentic option. For actors to
experience what their contribution has been turned into in different
Target Language versions must be somewhat startling, and it may seem
a bit strange that directors do not put their foot down more often than
they do, for example, in cases where dubbing has been favoured and
subtitling would have been the preferred option.
When such linguistic transplantation takes place, it is not only authenticity that is sacrificed but, in addition, credibility, which may be particularly problematic in news and current-affairs programmes when voice-over
is used. Reporters seem to be increasingly preoccupied with the dramatic
effect subtitled interviews are reputed to have. This probably has to do
with the transnational qualities of the human voice. Even if we do not
understand the words of a foreign language, the voice itself may convey a
great deal of information. Although intonation patterns often vary from
language to language, universal features such as the expression of pain,
grief and joy should not be ignored: linked to pitch, stress, rhythm and
volume, they contribute considerably to conveying information not only
about the speakers, but also about the context of which they form a part.
Voices reflect the mood and atmosphere of a situation, whether it is at
a major sports event, the scene of an accident or the convention of a
political party. The effect of a persuasive speech during a presidential or
parliamentary election campaign is probably significantly reduced in a
voice-over. Since many politicians capitalise on their voices, sound is an
important part of their public image. The Northern Ireland politician
Gerry Adams is a good example of the significance of the impact of a
political voice. For a long period of time the British people did not have
the opportunity to hear his voice on radio or television. Whenever Adams
made a statement, it could only be broadcast when somebody else read
his words, revealing the importance the British authorities attached to
the voice of the charismatic politician. It also underlines the significance
of letting the audience have access to the original soundtrack.
There is little doubt in my mind that subtitling has an important
educational value. Visitors to the Scandinavian countries are often
impressed by the standard of English of the people they meet, most of
whom have never lived in an English-speaking country. Rather than
reflecting superior language teaching standards, it may in fact be the inherent pedagogical value of having access to the original English-language soundtrack that has brought this about. In
1987 I undertook a study that had as its objective to monitor the needs,
preferences and perceptions of the linguistic standard of 4,200 students
of English from the following nine European countries: Norway,
Sweden, Denmark, Finland, Germany, France, Italy, Austria and the
Netherlands (Tveit, 1987). An interesting result of the study was that
listening comprehension was perceived as significantly more difficult
by students from ‘dubbing countries’ than by students from ‘subtitling
countries’, the former category showing a much stronger need for an
increased knowledge of vocabulary when communicating in English.
Dubbing is both expensive and time-consuming
A further factor to be considered is cost; dubbing is a lot more expensive
than subtitling. The fact that figures vary considerably might be linked
to the relationship between supply and demand. Since trained actors
have traditionally been in short supply in small countries like Norway,
the cost of using their services has been high, which has contributed to
making subtitling a much cheaper alternative. Although the difference
in cost has started to even out, dubbing remains 5 to 10 times more
expensive. It might therefore seem odd that an increasing number of
foreign films and television productions are dubbed for the Scandinavian
markets. A case in point would be the family film (children under 7 are
not admitted) Charlie and the Chocolate Factory (Tim Burton, 2005). It
was dubbed into Norwegian, something that would hardly have happened a few years ago. Other productions that have been dubbed for the
Scandinavian audiences in recent years are the animation films Shrek
(Andrew Adamson and Vicky Jenson, 2001) and The Polar Express (Robert
Zemeckis, 2004). In France, cinema audiences have been able to choose
between dubbed and subtitled versions of major productions for a
number of years. Although it is still only the thin end of the wedge, a
similar development can now be seen in Scandinavia.
What is it then that makes dubbing a viable alternative when it is so
expensive? The argument seems to be that costs do not matter too much
if revenues are big enough. If lip synchronisation can attract bigger
audiences, increased translation costs would not present too much of a
problem. The television business is increasingly preoccupied with ratings. If a network is able to get an edge on its competitors, costs are not
likely to hold it back.
In 1997, an interesting screen translation experiment on subtitling
took place in Scandinavia. TV 2, Norway’s biggest commercial television channel, decided to test viewer reactions to lip synchronisation
and the first three episodes of The Gregory Hines Show were dubbed into
Norwegian. Some of the country’s most renowned actors were hired,
and the show was given prime viewing time on Friday evenings. After
three weeks the viewers were asked for their verdict. The result showed
that 85% of the viewers wanted to see subtitled versions of the remaining episodes, a somewhat baffling reaction, considering The Gregory
Hines Show is a family programme, aired at a time when parents tend to
watch television together with their children, many of whom have not
yet learned to read.
Film aficionados looked at the result as proof of subtitling being the
best solution – and far superior to dubbing. But there is clearly a lot
more to it than that, and tradition is a key word. One should not forget
that preferences depend on what viewers are accustomed to – and
Norwegians are used to subtitling. In a country lacking the tradition, it
takes time to develop state-of-the-art dubbing. As for TV 2, the result of
their market research seemed to convince the powers that be that
dubbed versions did not attract big enough audiences.
Normally the dubbing process takes considerable time. In the case of
news bulletins, it is obviously not possible to invite actors to play the
parts of interviewees. Instead, the big US and European networks make
use of voice-over. It is a reasonably speedy process, but it tends to distract viewers, who concentrate on the original voice to the extent that they lose out on parts of the voice-over.
In most cases, the subtitling of news reports can be done swiftly, and
often takes place minutes before the news bulletin starts. The subtitling
of films and series can also be done surprisingly quickly, as illustrated
by the following example dating back to the 1990s. A couple of hours
after US television audiences had been treated to the solution to the
murder mystery, the final episode of Murder One was on its way to
Europe. It arrived in Norway 11 hours before the ending to Steven Bochco’s
drama was to be shown on Norwegian television. Nine hours later the
target version had been brought to completion and the programme was
ready for transmission. In dubbing countries such as Germany and
Spain, the linguistic adaptation process would probably have taken
weeks rather than hours, involving a large number of people. For a start,
the dialogue list has to be translated. Then the chosen actors have to be
given the time to study and rehearse their parts. The recording sessions
also tend to take time. From this it would seem evident that as far as
meeting stringent deadlines is concerned the usefulness of this screen
translation approach is rather limited.
Conclusion
Based on the premises outlined above, my conclusion is that subtitling
is normally a better approach to screen translation than is dubbing.
This does not follow logically from counting the number of constraining factors of the two approaches, but has to do with the fact that some
of the constraining factors are easier to get around or compensate for
than others. In my opinion the loss of authenticity in dubbing, since
important aspects of a character’s personality are revealed through the
use of their voice, is the biggest problem of all.
There are, however, cases where the voice does not form an integral
part of a character but simply belongs to the off-screen commentator.
This tends to be the case in documentary programmes, which normally
lend themselves more easily to revoicing than to subtitling. Here, if the
latter method is chosen, it may lead to extensive loss of information.
Programmes that are cut extremely fast and have rapid speech rates
should not necessarily be subtitled. If readability requirements are to be
met, the sheer levels of condensation needed result in too great a loss of
information. In spite of all its disadvantages, in such cases dubbing may
be the lesser of two evils. Moreover, films and programmes aimed at
small children have to be dubbed for the simple reason that the target
audiences have not yet learned to read.
References
Baker, R. (1982) Monitoring Eye-movements while Watching Subtitled Television
Programmes. London: Independent Broadcasting Authority.
Gottlieb, H. (1997) Subtitles, Translation & Idioms. Copenhagen: University of
Copenhagen. PhD Thesis.
Luyken, G. M., Herbst, T., Langham-Brown, J., Reid, H. and Spinhof, H. (1991)
Overcoming Language Barriers in Television: Dubbing and Subtitling for the European
Audience. Manchester: European Institute for the Media.
Marleau, L. (1982) ‘Les sous-titres ... un mal nécessaire’. Meta 27(3): 271–85.
Minchinton, J. (1993) Sub-Titling. Hertfordshire, England. Manuscript.
Reid, H. (1978) ‘Sub-titling: the intelligent solution’. In P.A. Horguelin (ed.)
Translating: A Profession. Proceedings of the VII World Congress of the International
Federation of Translators (pp. 420–8). Ottawa: Council of Translators and
Interpreters of Ottawa.
Tveit, J.-E. (1987) Orientation Towards the Learner. Stavanger: Stavanger lærerhøgskole.
Tveit, J.-E. (2005) Translating for Television. A Handbook in Screen Translation.
Bergen: JK Publishing.
8
The Perception of Dubbing by
Italian Audiences
Rachele Antonini and Delia Chiaro
Screen translations, and especially those for the small screen, are
consumed by millions of people in Europe, and of course all over the
non-English speaking world, at every moment of the day.1 In this
respect, Italy is representative of the many nations which require ever-growing amounts of interlingual mediation for both big and small
screen. As is well known, Italy is commonly labelled a ‘dubbing
country’ which, together with Austria, France, Germany and Spain,
has adopted a tradition of dubbing rather than subtitling, the preferred mode of screen translation in Greece, Portugal, Scandinavia
and the UK. However, the situation in dubbing countries is rapidly
changing as a choice between dubbing and subtitling is now available
on satellite stations and, owing to the fact that subtitling is the more
cost-effective choice of the two, for the colossal DVD market.2 However,
despite the fact that many younger people who are more proficient in
English than their parents may well prefer subtitling to be more widely
available, dubbing is bound to remain the chief form of linguistic
mediation for some years. Perhaps it is worthwhile remembering that
attitudes and preferences are largely an issue of habit and thus a
person who has been used to almost a lifetime of dubbing is unlikely
to be persuaded to change to a different mode of screen translation:
‘viewers are creatures of habit’ (Ivarsson, 1992: 66) and preferences
depend on ‘what the audience is used to rather than rational
arguments’ (ibid.: 20).
The Eurobarometer survey conducted in 2001 on the language skills
of European citizens and their attitudes towards language learning supports Ivarsson’s claims. In fact, 60% of respondents declared that they
prefer to see foreign films and programmes dubbed into their own language. In Italy, more than 70% of respondents expressed support for dubbing as their preferred form of audiovisual translation (AVT), while the countries comprising the other block confirmed their strong support for subtitled products.

[Figure 8.1 Volume of fiction/films broadcast on television channels in the five principal European markets, 2001. Bars compare US and US co-productions with non-US imports, for films and for fiction, in Germany, Spain, France, Italy and the UK. Source: European Audiovisual Observatory; Press Release 28 January 2003.]
Television programmes which undergo the process of dubbing in Italy
consist of all imported genres of fictional products from films and TV
series to sitcoms and soap operas for a total of over 40 hours airing on
weekdays and, as illustrated in Figure 8.1, most products are translated
from American English.
Of course, while the UK is the chief importer of American programmes, unlike in non-English speaking countries these products require no linguistic mediation and are thus cheaper by default, as they are purchased ‘ready-for-use’. In the other countries reported in Figure 8.1, the same imports require time and investment before they are ready for broadcasting.
With globalisation dictating that films should be premièred
simultaneously on both sides of the Atlantic, many operators working
in Italy’s dubbing sector who are feeling the strain of such a great workload, poor wages and working conditions have begun to take issue with
their employers. In fact, in February 2004, operators working in the
Italian dubbing sector took industrial action in protest at the fact that
they do not possess the fundamental rights of any worker in a democratic country, namely a national contract. From dubbing translators to
actors, workers withdrew their labour thus threatening the substantial
revenues expected at box offices and from TV advertising. In a totally
anarchic market, a film which once required three weeks to dub from
start to finish, in the new millennium necessitates the same task to be
completed in five days or else it runs the risk of being sent to a small
make-do ‘do it yourself’ company willing to comply with a lower fee
and a quick and dirty translation. And what of quality? According to
Toni Biocca, a representative of AIDAC, Italy’s association of dubbing
translators, executives at RAI and MEDIASET are frequently heard to
say della qualità non importa a nessuno [nobody cares about quality] (quoted
in Gallozzi, 2004).
Needless to say, the key function of translation is to allow people to
be privy to texts in languages with which they are not familiar, yet the
emphasis of research in audiovisual translation, and translation studies
in general, has been on actual texts, the translations themselves and
their translators rather than on readers and viewers. In other words, the
people who essentially make use of these translations seem to have been
largely ignored by researchers. The literature on the processes involved
in the work of a screen translator is generally based on studies of a
descriptive and prescriptive nature (Gambier, 1998; Gambier and
Gottlieb, 2001). Case studies concerned with dubbed texts are abundant
and are normally based on contrastive analyses of audiovisual texts in
their original form and their dubbed version(s), and it can safely be said
that the ins and outs of the various compromises involved in the process
of dubbing have been explored in a variety of language combinations
by several scholars (Baccolini et al., 1994; Heiss and Bollettieri Bosinelli,
1996; Bollettieri Bosinelli et al., 2000).
However, while uncovering and analysing all the possible translating techniques and choices applied to AVT and the point of view and
the experience of the translator, such approaches tell us nothing at all
about end-users’ perception of AVT and its quality. In fact, media
translation is an area in which quality control becomes extremely difficult due to the complexity of several semiotic systems functioning
simultaneously. Yet, although interest in this issue is growing both
amongst practitioners and academics, regrettably, it would appear that
attention stops short at interest. Rare exceptions to this state of affairs
are represented by the study on reception of screen translations in
Greece (Karamitroglou, 2000), work on reception of dubbing in Spain
(Fuentes Luque, 2001), and perception of subtitled humour in Italy
(Antonini et al., 2003).
The present investigation is an attempt to overturn the traditional
standpoint of research and look at AVT from the point of view of its
recipients rather than the translator or the text in an attempt to identify their attitudes, opinions and above all the perception of what they
watch. Interlingual translations are textually unique in that they exist
as a version of a pre-existing duplicate in another language. By the same token, it can safely be assumed that recipients of translations, that
is to say, the consumers of these translations, are mainly those who are
not au fait with the Source Language and that such viewers perhaps
would rather not perceive translations as translations. Presumably
most recipients would like to hear a smooth, easily digestible text in
their mother tongue and, in the case of AVT, although what appears on
the screen may be unfamiliar, what is perceived through the ears (in
the case of dubbing) should be identifiable and free of turbulence of
any sort. This research sets out to explore exactly what audiences
perceive when they watch and hear a dubbed product. Thus, by
examining the reaction of Italian audiences to dubbed programmes,
the research shifts its emphasis from the text and translator to the
recipient in an attempt to explore new ground.
This preamble contains an underlying implication. Surely smooth,
Italian-sounding TV products, the contents of which are easily comprehensible to the average viewer, correlate with good quality products in
terms of linguistic mediation? Due to the total inter-dependence of several semiotic systems functioning simultaneously, the verbal elements in
AVT are strongly compromised, thus what we have labelled lingua-cultural
drops in voltage may well be inevitable. Lingua-cultural drops in voltage refer
to viewer perception of lingua-cultural uneasiness or turbulence, such as a
cultural reference which is not completely understood, an unnatural-sounding utterance, an odd-sounding idiom or a joke which falls flat.
Lingua-cultural short circuits or worse still, lingua-cultural power cuts (and
the metaphor is, hopefully, quite clear) which suggest significant faults or
even a total breakdown in which communication fails, are outside the
scope of this research. On the other hand, audience misconceptions,
however slight, are exactly within the bounds of this study.
Several translational choices in Italian AVT contain a number of
compromises that either filter through to audiences unnoticed, or
else are instinctively accepted. Many such translational choices may
well have been norm-driven, however, there is no valid reason why a
norm should not be adjusted if necessary. Thus, in devising the
experimental design, the research questions we asked ourselves were
the following:
● Are viewers aware that many mediated linguistic forms which occur in dubbed products are unusual in naturally occurring Italian?
● Do viewers who are aware of lingua-cultural drops in voltage find them acceptable?
● Who exactly perceives what when watching a linguistically mediated text by means of dubbing, in terms of age, gender, education, etc.?
● If a programme is successful, do such drops in voltage matter anyway?
The focal point of these research questions is viewers and their
perception of what they see and hear, rather than our own perception
as researchers. However, for reasons of space, the results pertaining to
language-specific features alone will be presented in this chapter.
The experimental design of the project
The experimental design of this project was based on a web questionnaire and a corpus made up of 300 hours of Italian-dubbed fictional
programmes. Respondents involved in the study were asked to visit a
website, log in, watch four short video clips from the corpus and answer
one question regarding each clip. Web technology provided the means
to carry out the investigation on a large scale rapidly, cheaply and
accurately. Responses were stored directly in an Access database and subsequently processed statistically. What follows is a brief explanation of the creation of both corpus and questionnaire.
The corpus
Over 300 hours of dubbed television programmes were recorded over a
period of eight weeks during February and April 2002. A total of 30 programmes were selected and recorded in order to form a representative
sample of the choice of dubbed products that Italian viewers had at their
disposal on the three state-owned channels (Raiuno, Raidue, Raitre), on
the three privately owned channels (Canale5, Rete4, Italia1), and a private channel (La7) which first began broadcasting just before recordings
for the corpus commenced in January 2002. Dubbed fictional programmes were recorded on a daily basis between 8am and midnight.
Feature films and TV films were deliberately excluded from the corpus.3
During this period, approximately 133 hours of dubbed programmes
were aired each week during the 8am to midnight time span. Interestingly,
the Mediaset group (Canale5, Rete4, Italia1) accounted for 60% of the
total amount of broadcast TV programmes. Thus, together with the
dubbed programmes aired by the other privately-owned channel, La7,
we obtain a total percentage of 76% (79 hours), while the state owned
RAI group share only averaged a little less than 33 hours (24% of the
total amount).
The selection of the programmes included in the corpus took explicit
criteria into consideration. Care was taken to include all TV genres
which undergo the process of dubbing (series, serials, sitcoms, soap
operas, telenovelas and cartoons). Furthermore, time spans and target
audiences were also identified and recordings reflected all possible
viewing times i.e. mornings when soaps and telenovelas targeted at
housewives are aired; afternoons which comprise mainly soaps, cartoons and series aimed at teenagers; prime time evening TV which airs
telenovelas and series/serials for a more general audience. The corpus
thus included examples of programmes aimed at all specific audiences
and age-groups as well as programmes from all Source Languages aired
in dubbed versions during the two month recording period i.e. US,
UK, Canadian and Australian English; Standard German, Austrian
German, Brazilian Portuguese, and various Latin American varieties
of Spanish.
Editing the corpus
All examples of highly specific cultural references and instances of what
were considered to be examples of Italian dubbese (Alfieri, 1994; D’Amico,
1996; Pavesi, 1994 and 1996) contained in the corpus were identified.
Scores of examples were reduced to what were considered the best in
terms of technical clarity and subsequently packaged into 170 clips (.mpg files of between 8 and 15 seconds in length). These clips were then
grouped according to four categories of cultural and/or linguistic drops in
voltage, classified as follows:
(1) Culture-specific references (e.g. references to food and drink; weights
and measures; place names; institutions, etc.).
(2) ‘Lingua-cultural drops in translational voltage’ (e.g. rhymes; songs;
proverbs; puns, etc.).
(3) Language-specific features.
(4) Purely visual cultural specificity.
Finally, a short introduction was prepared for each clip in html. The
aim of these short texts was to contextualise each clip.
Selecting language-specific features
As this chapter is concerned particularly with viewers’ perception of
language-specific features, what follows is an account of the
methodology adopted for the part of the questionnaire which dealt
expressly with these features. When choosing clips from the complete
recorded corpus which were representative of ‘turbulent’ languagespecific features (Category 3) and that adhered to norms that were
highly discernible to translators and researchers, the final choice fell
on a number of elements. These included the norms for the following
features:
(1) Terms of address. Unlike the other Source Languages in the corpus, English does not express politeness, courtesy and familiarity via its second person pronoun, whereas Italian markedly connotes familiarity and/or social distance. This difference necessitates firm translational strategies regarding not only personal pronoun forms, but also the substitution of a large range of terms of endearment, titles, names, etc. (Pavesi, 1994).
(2) Taboo language. Swear words were explicitly selected from the corpus owing to the fact that Italian dubbing tends both to under-translate such words and to insert fewer of them than occur in the original (Pavesi and Malinverno, 2000).
(3) Words left spoken in the original language.
(4) Written scripts left in the original language: signs, letters, notes,
newspapers, etc.
(5) Exclamations.
(6) Affirmatives. The English affirmatives ‘yes’ and ‘yeah’ and German ja create problems of lip synchronisation if translated with the customary Italian word of assent, sì, and are thus normally translated with the term già.
The questionnaire
A questionnaire-style test, in which questions were based on a random
selection of the video clips, was devised and divided into six sections:
the first four each containing one question referring to one clip from
each of the four categories under investigation, and the last two comprising general questions on dubbing and subtitling and socio-demographic
classification questions respectively. One single question was devised for
each category of clip in which respondents were asked to:
● Rate their understanding of the first clip concerning culture-specific references – Category 1.
● Rate their understanding of the second clip regarding lingua-cultural drops in voltage – Category 2.
● For clips concerning language-specific features, rate the likelihood of such language arising in naturally occurring Italian – Category 3.
● Rate their understanding of the situation for the clips concerning purely visual cultural specificity – Category 4.
Thus each respondent answered four questions regarding four video
clips. All questions were of the closed type. In fact, respondents were
required to assess each video clip according to a 1–10 point graphic rating scale by simply dragging an arrow and dropping it at the exact point
on the scale which most closely corresponded to their evaluation.
Having done that, respondents also had the opportunity of explaining what they had understood in their own words. This extra qualitative information proved extremely valuable in interpreting the results of the investigation.
Both the questionnaire and the clips with their introductory texts
were placed on a website which was developed using ASP and Java
script programming languages.4 The site was programmed in such a
way that each respondent who agreed to complete the questionnaire
was automatically given four clips with their matching question from
the categories (listed on page 103). Although the four questions were
always identical, the questionnaire was programmed in such a way as to
ensure that no respondents would receive the same combination of
clips. In fact, questions and clips were automatically selected at random
from four separate electronic folders. Thus, whenever a visitor entered
the site and agreed to take part in the investigation by completing the
questionnaire, programming was such that:
(1) They were presented with a short introduction which contextualised each clip they were about to watch.
(2) They watched four clips picked at random by the program.
(3) They were invited to rate their appreciation of the clips on a 10-point
graphic rating scale aimed at assessing their self-reported
understanding of the content of the clip.
(4) They were asked to explain in their own words (in an electronic
notepad) what they had understood from each clip.
With regards to the question on language-specific features, Category 3,
respondents randomly received a clip regarding any one of the six features described above (terms of address, exclamations, etc.).
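As a rough illustration of the randomisation logic described above, the assignment of one clip per category with no repeated combinations could be sketched as follows (all clip names and the function name are hypothetical; the original site was built with ASP and JavaScript, not Python):

```python
import random

# Hypothetical clip pools, one folder per category (the real corpus held
# 170 clips distributed over the four categories).
CATEGORY_CLIPS = {
    "culture_specific": ["c1.mpg", "c2.mpg", "c3.mpg"],
    "drops_in_voltage": ["v1.mpg", "v2.mpg", "v3.mpg"],
    "language_specific": ["l1.mpg", "l2.mpg", "l3.mpg"],
    "visual_specificity": ["p1.mpg", "p2.mpg", "p3.mpg"],
}

served = set()  # combinations already shown to earlier respondents

def assign_clips() -> dict:
    """Pick one clip at random from each category folder, retrying until
    the four-clip combination has not been served to anyone before."""
    while True:
        combo = {cat: random.choice(pool)
                 for cat, pool in CATEGORY_CLIPS.items()}
        key = tuple(combo[cat] for cat in sorted(combo))
        if key not in served:
            served.add(key)
            return combo
```

With three clips per category this toy version allows 81 distinct respondents before the retry loop could no longer terminate; the far larger real pools would not approach that limit for the study's 195 valid respondents.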
Processing raw data regarding language-specific features
What follows are the results regarding the sample’s perception of language-specific features. The raw data was processed statistically with SPSS (Statistical Package for the Social Sciences).5 In order to measure the General Perception of Likelihood of Occurrence (GPLO) of the items in naturally occurring Italian, the respondents’ scores were first analysed in
general and subsequently in more detail after collapsing the scores into
a dichotomic variable coded as follows:
(1) Scores from 0 to 5 → generally perceived as more towards unlikely to occur.
(2) Scores from 5.01 to 10 → generally perceived as more towards likely to occur.
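The collapsing rule above amounts to a simple threshold function. A minimal sketch (the function name is ours, not the study's):

```python
def collapse_gplo(score: float) -> str:
    """Collapse a 0-10 graphic-rating score into the dichotomic GPLO
    variable: 0-5 -> 'unlikely to occur', 5.01-10 -> 'likely to occur'."""
    if not 0 <= score <= 10:
        raise ValueError("score must lie on the 0-10 rating scale")
    return "unlikely to occur" if score <= 5 else "likely to occur"
```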
Administering the questionnaire
The questionnaire was promoted on one of the largest and most popular
Italian web providers: Virgilio. An advertisement announcing the
questionnaire and inviting surfers to participate was placed as a pop-under6 which appeared on the provider’s home page and also on pages
dedicated to cinema, TV, music and books.
Between 16th January and 29th February 2004 over 3,000 people
visited the website and 2,478 submitted a questionnaire. From these
submissions, a total of 195 questionnaires were considered complete and hence valid for data processing. So many questionnaires were considered invalid because many people did not complete them with their socio-demographic details. In
fact, blocks eliciting attitude towards dubbing and socio-demographic
information were positioned last on the questionnaire and presumably
many respondents simply watched the clips, answered the questions
and then logged off without giving any personal information. Of
course, this partial information was useless for the purposes of data
elaboration.
Results
The sample
The final block of the questionnaire required respondents to give
socio-demographic information about themselves and their lifestyles.
Respondents were mostly male; 123 males compared to 72 females.
This proportion is predictable if we accept the commonplace that the
typical user of the Internet in Italy is indeed male; 63.1% males compared to 36.9% females. On the other hand, the majority of questionnaires were actually completed during office hours, thus reflecting the
Italian working reality that more men than women are employed outside the home. This too reflects CNEL and EURISKO data which shows
that 39.8% of employed Italians use the Internet at the workplace.7
Furthermore, over half the sample (106 respondents) were under the age of 30. Now, although this data reflects what
we know about Internet use and confirms the researchers’ forecasts
regarding the kind of people who were expected to respond, the sample
also includes 23 senior citizens. So, even if the sample is skewed in terms
of Internet users only, the number of respondents is high enough to be
fairly balanced in terms of age.
As far as social background is concerned, most respondents were educated at least to secondary school level and represented a
wide variety of occupations. Finally, the vast majority of respondents
were familiar with English, having learnt it either at school or college,
or else were self-taught. 150 respondents also claimed to know at least
one other of the languages from which the clips in our corpus were
translated. Forty respondents had visited the USA.
The people in the sample maintained that they watched an average of
4.6 hours of TV a day and that they had a preference for films, sitcoms,
sci-fi and cartoons. In terms of quality of Italian dubbing, 34.9% of the
sample considered dubbed products to be good, 30.8% judged them to
be satisfactory and 11.8% excellent. 15.9% of the sample found their
quality to be mediocre and 6.7% considered them to be poor.
Furthermore, 40.5% of respondents favoured having a choice between
dubbing and subtitling, a choice which nowadays is only available in
Italy on pay TV and some programmes broadcast by satellite channels.
The general perception of language-specific features
Likelihood versus unlikelihood
The distribution of the general score of all the features taken together is approximately normal around the central point of the proposed scale of
measurement (0–10) with a mean of 5.19 and a standard deviation of
2.69. After having collapsed this variable into a dichotomic one, results
show that 43.1% of the sample perceived the language features in question as ‘more towards unlikely to occur’ in natural Italian while 56.9% considered them to be ‘more towards likely to occur’. Crossing this variable
with socio-demographic data, it was found that the former view is
strongly correlated with age and education as can be observed in
Tables 8.1 and 8.2. Respondents above the age of 50 appear to be more
oriented towards unlikelihood and the same pattern was observed for
the more highly educated members of the sample.
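The crosstabulations reported below were produced with SPSS; purely for illustration, the same count-and-row-percentage computation could be sketched in Python (the data here is invented, not the study's):

```python
from collections import Counter

# Invented (age group, GPLO label) pairs standing in for respondents.
responses = [
    ("Below 30", "Likely"), ("Below 30", "Likely"), ("Below 30", "Unlikely"),
    ("Between 30 and 50", "Likely"), ("Above 50", "Unlikely"),
    ("Above 50", "Unlikely"), ("Above 50", "Likely"),
]

counts = Counter(responses)  # cell counts of the crosstab

def row(age: str) -> str:
    """Format one crosstab row: counts plus percentages within the age group."""
    unlikely = counts[(age, "Unlikely")]
    likely = counts[(age, "Likely")]
    total = unlikely + likely
    return (f"{age}: Unlikely {unlikely} ({unlikely / total:.1%}), "
            f"Likely {likely} ({likely / total:.1%}), Total {total}")

for age in ("Below 30", "Between 30 and 50", "Above 50"):
    print(row(age))
```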
Table 8.1 Crosstabulation of Age and GPLO

AGE                      Unlikely      Likely        Total
Below 30                 42 (39.6%)    64 (60.4%)    106 (100%)
Between 30 and 50        26 (39.4%)    40 (60.6%)    66 (100%)
Above 50                 16 (69.6%)    7 (30.4%)     23 (100%)
Total                    84 (43.1%)    111 (56.9%)   195 (100%)

(Counts with percentages within each age group.)

Table 8.2 Crosstabulation of Education and GPLO

EDUCATION                Unlikely      Likely        Total
Non university educated  55 (38.2%)    89 (61.8%)    144 (100%)
University educated      29 (56.9%)    22 (43.1%)    51 (100%)
Total                    84 (43.1%)    111 (56.9%)   195 (100%)

(Counts with percentages within each education group.)

That older people found the language more ‘unlikely’ (69.6%) than younger members of the sample may well be due to the fact that they are less likely to have undergone the process of internationalisation (see Table 8.1). In other words, younger people are more likely to have
been influenced by having grown up in a more globalised context, surrounded by the lyrics of international music and the slogans of global
advertising. Furthermore, they are likely to have learnt at least one foreign language at school. Thus, having been more exposed to foreign
linguistic forms, younger people are likely to be linguistically aware
and may well be able to recognise foreign forms seeping through into
Italian forms.
The fact that the more educated members of the sample find Italian dubbese unlikely to occur is self-evident if we presume that education correlates with greater linguistic competence and awareness.
Perception of specific language features
Elaboration of data revealed that four of the features had a mean which
was greater than 5 (già, exclamations, bad language and utterances left
in the original language) and two features had a mean which was less
than 5 (terms of address and written scripts left in the original language). These findings appear to indicate that the former cluster of items
are judged as leaning more towards likeliness of occurrence, while the latter two features are judged as leaning more towards unlikeliness of occurrence (Figure 8.2).
[Figure 8.2 Comparison of mean scores and standard deviations of viewer perception of lingua-specific translational uneasiness. Bars show the mean score for each feature: già 5.47, utterances left in the original language 5.37, terms of address 4.99, bad language 5.45, exclamations 5.17, non-translated written scripts 4.59 (number of cases per feature: 34, 35, 41, 34, 25, 34). The vertical line inserted within each bar represents the standard deviation.]
The features considered more unlikely to occur are:
Untranslated written scripts: These features obtained the lowest mean
score of all those examined, averaging 4.59. Predictably, signs, letters,
newspapers etc. in a foreign language were not seen as much of a likely
occurrence in everyday Italian life. This information is not always
translated into Italian and comments in the notepads revealed a very
low understanding of instances of writing in the clips. The problem is
that some inserts contain vital information which is denied to viewers
unfamiliar with the Source Language. Thus, with regard to audiences’
rights to be able to be fully privy to a TV product, those who do not
know foreign languages appear to be somewhat short-changed.
Terms of address: The translational norms for adjusting English screen texts for Italian, a language which connotes social rules regarding distance and familiarity through the use of the pronouns tu and Lei, are complex and haphazard (Pavesi, 1994). For example, Italian dubbese
frequently adopts the use of Lei (polite form) with a person’s first name,
thus jarring both socially and linguistically. Questions on clips exemplifying mismatches in social distance between interlocutors and terms
of address achieved a mean score of 4.99 in terms of likelihood of arising in naturally occurring Italian. So, they were perceived as being on
the borderline between likeliness and unlikeliness of occurrence.
However, the qualitative data supplied by respondents in the notepads
provided significant input. For example, several respondents commented on the unusual occurrences of the terms amico [friend] which is
the norm for translating the US ‘guy’; Signore [Sir]; figliuolo [son] and
even cara suocera [dear mother in law]. Thus, the scores reveal that
respondents seemed perfectly aware of the fact that amico, Signore and
figliuolo are not used in naturally occurring Italian.
The features considered more likely to occur are:
Exclamations: Respondents were questioned on their perception of
likeliness of occurrence in Italian of three English exclamations which
remain untranslated in dubbed programmes: ‘wow’, ‘hey’, and ‘oh oh’
(in the sense of ‘Oh no, something’s wrong’). No instances of the expressions were found in home-made fictional TV programmes, although the
term ‘wow’ does indeed occur in institutionalised TV talk such as talk
shows, reality TV, advertisements and so on. Furthermore, the LIP
(Lingua Italiana Parlata) corpus gave us zero occurrences of ‘wow’ and
‘oh oh’ and only five concordances for ‘hey’ (spelt ehi), two of which
occurred in non-standard Italian (Neapolitan dialect). Respondents
gave exclamations a mean score of 5.17 thus deeming them moderately
likely to occur in natural Italian. As far as the exclamation ‘wow’ is concerned, several older respondents informed us that although it was not
part of their own linguistic repertoire, it was certainly used by their
children.
Untranslated spoken forms: These forms achieved a mean score of 5.37.
Terms which remained untranslated (‘condom’, ‘take away’) appear to
have been generally understood by recipients. Several commented on
the use of the term ‘condom’, claiming that they understood it even if
it was not an Italian word and thus did not find the term unusual in any
way. On the other hand acronyms created a certain amount of perplexity amongst respondents. For example, a clip from the sitcom Willy the
Fresh Prince of Bel Air featured Will Smith asking for a BLT sandwich.
Despite plenty of clues from the context, no respondents understood that the acronym refers to ‘bacon, lettuce and tomato’.
Taboo language: Respondents’ scores for the perception of taboo words translated from English resulted in a mean of 5.45. In a
certain sense the score is extremely generous if we consider that none of
the translated taboo words actually exist either in everyday Italian or
‘screen’ Italian; this is because the Italian norm is firstly, mainly to tone
down the force of swear words and vulgarities and secondly, to adopt
invented conventions (Pavesi and Malinverno, 2000). For example, the
stock translation for the insult ‘fucking bastard’ would be the literal fottuto bastardo, a form of abuse which is not part of naturally occurring
Italian usage. However, although respondents gave a somewhat lukewarm response, comments in the notepad clearly leant towards the
unusualness of translational choices.
Già: This feature gained the highest score with a mean of 5.47. In other
words it was fairly well accepted as a naturally occurring form. In Italian
dubbese, già, an adverb normally used to mean ‘already’, substitutes for
UK English ‘yes’, US English ‘yeah’ and German ja. For obvious reasons
of lip synchronisation the use of sì, pronounced with spread lips, would
be out of the question as a substitute for the pronunciation of unrounded,
jaw lowered ‘yeah’ and ja. Thus, già has become the conventional substitution. Such is the force of this norm that già appears to be just as
frequent in programmes dubbed from Hispanic languages in which
‘yes’ is sí. In an examination of a parallel corpus of home-made Italian
programmes, già is used almost exclusively as an intra-sentential adverb
meaning ‘already’ and is rarely used independently. A query on the LIP
corpus resulted in 641 concordances with only five occurrences of già
used alone in assertion of the previous utterance. Furthermore, the word never stands alone: in all five instances it is either preceded by e, as in e già, or by ah, as in ah già. It would appear from our dubbed corpus that
it is just in Italian dubbese that già stands alone. In addition, unquantifiable observation reveals that the form may be slowly seeping through
to the language of home-produced Italian TV fiction. In fact, as this volume was going to print the authors noticed an increasing occurrence of independently used già in affirming function in several Italian-produced series, serials and soaps (Orgoglio, a mini-series screened on RAI1, and Un posto al sole, an Italian soap opera).8
Conclusion
The two features which respondents thought were most likely to occur
in naturally occurring Italian were già and taboo words. Although
these scores were interpreted as an acceptance of likelihood of occurrence in naturally occurring Italian, at the same time they are hardly
awe-inspiring. In fact, the scores in all four more ‘positive’ categories
are rather middle-of-the-road. Had the scores been lower, then translations could be discarded as being unnatural and consequently of poor
quality. Conversely, if scores had been higher they would have pointed
towards excellence. As they stand, however, the scores point towards average acceptance. This automatically raises the question: does this imply average quality? It should be remembered, though, that the terms ‘likelihood’ and ‘unlikelihood’ were preceded by the phrase ‘more towards’, which modulates the strength of perceived likelihood or unlikelihood of
occurrence. Furthermore, the qualitative data revealed a marked awareness of the fact that what respondents were watching was ‘television
language’ and thus acceptable as such even if it was somewhat removed
from the reality of everyday speech.
Although none of the features was actually rejected (in other words, respondents generally gave all elements a pass mark in terms of likelihood of occurrence), they were hardly convinced of their Italianness.
If Italian dubbese is to be judged in terms of excellence, then operators are slightly off target. However, if the numeric data are checked against the qualitative data, what emerges is that a significant number of viewers who are perfectly aware that, for example, già is a convention are willing to accept it on screen but admit to not using it themselves.
Our data reveal that the Italian dubbing industry appears to be
producing a nation of viewers who are suffering from a syndrome of
linguistic bipolarity. On the one hand, they are aware that TV dubbese is unlike real Italian; on the other, they are willing to accept it, as long as it remains on screen. Respondents allege that they do not use these forms, yet there may well be a mismatch between what they think they do and what they actually do. However, the fact that they notice their children using expressions like ‘wow’, coupled with an awareness that they do not and would not use the form themselves, is a clear signal of some kind of language awareness.
Finally, operators in the Italian dubbing industry should pay heed to
these results and consider investing in human resources who are able to
mediate between cultures in such a way as to render Italian dubbese
more similar to naturally occurring Italian. If this were possible, a future repetition of this experiment might then nudge scores upwards so that, in the long run, they genuinely point towards excellence.
Notes
1. The Andersen report prepared on behalf of the European Commission
Directorate-General for Education and Culture entitled Outlook for the
Development of Technologies and Markets for the European Audio-Visual Sector up
to 2010 (June 2002), provides a detailed nation-by-nation analysis of trends
in European viewing habits.
2. The Univideo Executive Summary (June 2003), elaborated by SIMMACO
Management Consulting, provides data regarding purchasing trends in both
the Italian and the European DVD sector as a whole. Data clearly show that
the DVD player and disc market is the fastest growing, and consequently the
most commercially significant, in the European home entertainment sector
(www.univideo.org/dossier/studi.asp).
3. The decision to exclude products for the big screen was quite deliberate, mainly because, at least in Italy, while the dubbing processes are technically similar for both types of programme, many feature films are given more time for the whole process and often involve special actors (Benincà, 1999). Indeed, some directors are extremely concerned with the translational process (Stanley Kubrick and Woody Allen), while some producers go as far as choosing well-known voices (Buena Vista). However, most films are not dubbed with such care and, given that more Italians watch TV than go to the cinema anyway, the researchers deemed other products more typical of what Italians actually watch. Furthermore, although the stressful work involved in dubbing scores of episodes of everyday TV products would by default produce qualitatively poorer material, this is what most Italians watch most of the time.
4. The authors wish to thank Piero Conficoni, webmaster at the University of
Bologna’s Department of Interdisciplinary Studies in Translation, Languages
and Culture (SITLEC), for creating the interactive website and programming
the database (www.sitlec.unibo.it/dubbingquality).
Perception of Dubbing by Italian Audiences
5. Special thanks go to Giuseppe Nocella for supervising the experimental
design and data analysis.
6. A pop-under is a window that appears when surfers exit sites they have just
visited. Unlike a pop-up, which appears as soon as surfers visit a site, thus
running the risk of irritating users by interfering with what they are doing, a
pop-under can only be seen once surfers have abandoned the site they have
just visited. A pop-under, in fact, remains hidden behind the site in use, and
is thus less intrusive than a pop-up.
7. CNEL is the Consiglio Nazionale dell’Economia e del Lavoro (National Council for Economics and Labour) and EURISKO stands for European Research on Consumption, Communication and Social Transformation.
8. Interestingly, the authors noticed a shift from già standing alone in episodes
from the US series Felicity included in the corpus and instances of eh già in
new episodes broadcast in June 2004. Furthermore, in August 2004 a popular
weekly crossword magazine, La settimana enigmistica, featured the following
clue: a three-lettered synonym for ‘yes’.
References
Alfieri, G. (1994) ‘Lingua di consumo e lingua di riuso’. In L. Serianni and
P. Trifone (eds) Storia della lingua italiana. Vol. II, Scritto e parlato (pp. 161–235).
Torino: Einaudi.
Antonini, R., Bucaria, C. and Senzani, A. (2003) ‘It’s a priest thing, you wouldn’t
understand: Father Ted goes to Italy’. Antares VI: 26–30.
Baccolini, R., Bollettieri Bosinelli, R.M. and Gavioli, L. (eds) (1994) Il doppiaggio.
Trasposizioni linguistiche e culturali. Bologna: CLUEB.
Benincà, S. (1999) ‘Il doppiaggio: uno studio quali-quantitativo’. Quaderni del
Doppiaggio II (pp. 53–148). Finale Ligure: Comune di Finale Ligure.
Bollettieri Bosinelli, R.M., Heiss, C., Soffritti, M. and Bernardini, S. (eds) (2000)
La traduzione multimediale. Quale traduzione per quale testo? Bologna: CLUEB.
D’Amico, M. (1996) ‘Dacci un taglio, bastardo! Il doppiaggio dei film in Italia’. In
E. Di Fortunato and M. Paolinelli (eds) Barriere linguistiche e circolazione delle
opere audiovisive: la questione doppiaggio (pp. 209–16). Roma: AIDAC.
Fuentes Luque, A. (2001) ‘Estudio empírico sobre la recepción del humor
audiovisual’. In L. Lorenzo García and A.M. Pereira Rodríguez (eds) Traducción
subordinada (II). El subtitulado (pp. 85–110). Vigo: Universidade de Vigo.
Gallozzi, G. (2004) ‘Il silenzio dei doppiatori’. L’Unità, 18th February.
Gambier, Y. (ed.) (1998) Translating for the Media. Papers from the international
conference ‘Languages and the Media’. Turku: University of Turku.
Gambier, Y. and Gottlieb, H. (eds) (2001) (Multi)Media Translation. Amsterdam
and Philadelphia: John Benjamins.
Heiss, C. and Bollettieri Bosinelli, R.M. (eds) (1996) La traduzione multimediale per
il cinema, la televisione e la scena. Bologna: CLUEB.
Ivarsson, J. (1992) Subtitling for the Media. Stockholm: TransEdit.
Karamitroglou, F. (2000) Towards a Methodology for the Investigation of Norms in
Audiovisual Translation. Amsterdam: Rodopi.
Pavesi, M. (1994) ‘Osservazioni sulla (socio)linguistica del doppiaggio’. In
R. Baccolini, R.M. Bollettieri Bosinelli and L. Gavioli (eds) Il doppiaggio.
Trasposizioni linguistiche e culturali (pp. 51–60). Bologna: CLUEB.
Rachele Antonini & Delia Chiaro
Pavesi, M. (1996) ‘L’allocuzione nel doppiaggio dall’inglese all’italiano’. In
C. Heiss and R.M. Bollettieri Bosinelli (eds) La traduzione multimediale per il
cinema, la televisione e la scena (pp. 117–30). Bologna: CLUEB.
Pavesi, M. and Malinverno, A.L. (2000) ‘Usi del turpiloquio nella traduzione
filmica’. In C. Taylor (ed.) Tradurre il cinema (pp. 75–90). Trieste: University of
Trieste.
9
Transfer Norms for
Film Adaptations in the
Spanish–German Context
Susana Cañuelo Sarrión
Introduction
This chapter summarises some findings on the relationships between
cinema, literature and translation in the Spanish-German context.1
Firstly, I will show how the combination of three transfer forms – film
adaptation, literary translation and audiovisual translation – can be
systematically disentangled by using Polysystem Theory as an epistemological framework (Even-Zohar, 1990). Secondly, I will present a corpus
of Spanish film adaptations and analyse its reception in Germany by
applying a working model that I call systematisation, which takes into
account several selection factors. Finally, I will examine the extent to
which the data allow for the detection of transfer norms.
The systematisation model
The systematisation model I propose in these pages has been designed
to illustrate and explain all possible combinations of the three forms of
transfer that take place when a literary work is filmed and both the
book and the resulting film are translated into another language.
Figure 9.1 illustrates the pattern of relationships.
Figure 9.1 contains six elements: three literary works and three films.
They are located in six systems (three literary systems and three cinematographic systems), which should be defined in each case depending on the works and cultures involved. This diagram not only takes
into account the possible relations established between a given pair of languages, but also considers the possibility of using an intermediate or pivot language to carry out the audiovisual or literary translation.

Figure 9.1 The systematisation model: three literary texts (LT1, LTX, LTY) and three films (FX, FY, FZ) located in six systems (S1, SX, SY, SK, SZ, SQ). S: System; LT: Literary Text; F: Film.
The transfer processes are represented by arrows. The broken arrows
represent the literary translation processes, the solid arrows represent
the audiovisual translation processes, and the dotted arrows represent
the adaptation processes from literary work to screen. There are also
mixed arrows (dash-and-dot), which indicate that the intersemiotic
transfer has been combined with a linguistic one, that is, the film
adaptation has been accompanied by a linguistic translation.
The process always begins in the literary system in which the film
adaptation has been done (LT1). The next steps vary from case to case
and language to language; therefore, instead of using numbers, the other systems are marked with variables (x, y, k). From this diagram we can derive five possible basic combinations, from which other secondary alternatives can also be derived:
Combination I: literary translation before film adaptation.
Combination II: film adaptation before literary translation.
Combination III: film adaptation starting from a translation.
Combination IV: intermediate literary translation.
Combination V: intermediate audiovisual translation.
The diagrams corresponding to these five combinations are included as Figures 9.9–9.13 in the Appendix to this chapter. Some examples are discussed in the section below entitled Combinations, where I present the most common combinations, followed by the Spanish film adaptations that have made it to Germany. These examples help to illustrate how each combination should be interpreted.
Spanish production of film adaptations
The corpus of film adaptations consists of all those Spanish feature
films produced between 1975 and 2000 that are based upon Spanish
literary works. The term ‘film’ refers only to fictional full-length films produced for the cinema; short films, documentaries and TV movies are not included. ‘Spanish’ should be understood as
relating to Spain, not to all Spanish-speaking countries. The corpus
includes 311 film adaptations.
Figure 9.2 shows the relationship between Spanish general film
production and the production of film adaptations. Whereas general film
production has experienced many upheavals, the production of film
adaptations has remained comparatively stable. On average, literary
adaptations represent 22% of the total film production in Spain, but only
15% if we consider exclusively the adaptations of Spanish literary works.
One initial obstacle encountered in this study is the absence of any
central institution from which to gather data on the transfer of
Spanish films to Germany. There is no public institution, company, or
corporation (neither in Spain nor in Germany) that systematically collects data on the reception of Spanish films in Germany.

Figure 9.2 Total Spanish film production compared with Spanish film adaptations of Spanish literary works, 1975–2000.

Figure 9.3 Distribution channels of Spanish films in Germany, 1975–2000: TV 36%; Festivals 25%; Cinema 21%; Video 17%; DVD 1%.

Figure 9.4 Annual distribution of the Spanish film adaptations released in Germany, 1975–2000.

Several
sources, including the Spanish ICAA (Instituto de la Cinematografía y de las Artes Audiovisuales) and the German SPIO (Spitzenorganisation der Filmwirtschaft), have given me information, but none of the lists
provided is exhaustive. However, I have gathered sufficient data on the
presence of Spanish films in Germany, mainly thanks to the Lexikon
des internationalen Films (2000), which provides interesting information
and helps to trace provisional transfer norms.
According to the Lexikon des internationalen Films and its online
database (www.filmevona-z.de), 410 Spanish feature films reached the
German film industry during the period 1975–2000. However, this
figure only refers to works distributed in commercial cinemas, television, video, and DVD. Other films were also shown at film festivals and
Spanish weeks, where approximately 150 Spanish films were screened.
Since 61 of the films were distributed in Germany both commercially and at festivals, the total number of Spanish films that reached the German market between 1975 and 2000 can be set at 499 (410 + 150 − 61). Figure 9.3 shows the different ways in which they were distributed.
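The figure of 499 follows from simple inclusion–exclusion on the two distribution routes. The sketch below merely restates the counts given in the text; Python is used purely as an illustration, not as part of the study’s method:

```python
# Spanish films reaching Germany, 1975-2000 (all counts taken from the text).
commercial = 410   # distributed via cinema, TV, video and DVD (Lexikon des internationalen Films)
festivals = 150    # shown at festivals and Spanish weeks (approximate figure)
both = 61          # films that took both routes, and would otherwise be counted twice

# Inclusion-exclusion: subtract the overlap once.
total = commercial + festivals - both
print(total)  # 499

# The 77 film adaptations as a share of that total.
adaptations = 77
print(round(adaptations / total * 100))  # 15 (per cent)
```

This reproduces the 15% share of adaptations among all Spanish films seen in Germany that is quoted later in the chapter.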
The film adaptations of this corpus represent a small percentage –
15% – of the 499 Spanish films seen in Germany. They number 77 and
their distribution by year is shown in Figure 9.4.
There does not seem to be an obvious pattern of regularity in the annual reception of Spanish adaptations in Germany; perhaps the most striking feature is the constant fluctuation in the number of imports.
In contrast to the distribution of Spanish films in general, most Spanish literary film adaptations did not enter the country via TV but through festivals, where 43% of the productions were seen. A further 34% were shown on television, 13% in cinemas, and the remaining 10% were sold on VHS and DVD, as can be seen in Figure 9.5.
Figure 9.5 Distribution channels of the Spanish film adaptations in Germany, 1975–2000: Festivals 43%; TV 34%; Cinema 13%; Video 9%; DVD 1%.
Audiovisual translation in Germany
Germany is traditionally a dubbing country, which means that films
distributed through commercial channels are mostly dubbed. After the
Second World War, cinema was used by the Allies to denazify the occupied territory and numerous American, French, British and Soviet films
were imported. This activity led to the creation of four large dubbing
centres in the country: France dubbed its films in Munich, Great Britain
in Hamburg, the United States in West Berlin and the Soviet Union in
East Berlin. Nowadays, the German dubbing industry is firmly established and is considered one of the best in the world. The most important
dubbing studios continue to be located in Munich, Hamburg and Berlin.
Films exhibited at festivals, on the other hand, are mostly subtitled, sometimes into German but in most cases into English, since subtitles in this language facilitate the distribution of the film all over the world and are frequently a requirement in the submission instructions of many festivals.
Combinations
Out of the 311 film adaptations of the original corpus, only 77 (25%)
have been distributed in Germany. Of those 77, there are 39 whose corresponding literary work has not been translated into German and cannot
therefore be assigned to any combination of the models discussed here. I
should point out that the systematisation only considers combinations in
which the three transfer processes are present, that is, film adaptation,
audiovisual translation and literary translation. So, there are only 38
films which can be taken into consideration and classified as follows:
Combination I      13
Combination II     22
Combination III     3
Total              38
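As a reading aid only, the dating logic that separates Combinations I and II (and the special case of Combination III) can be sketched as a small classifier. The function and its arguments are hypothetical illustrations of the model, not part of the author’s apparatus; the sample dates are taken from the examples discussed below and in Figures 9.6–9.8:

```python
def classify(translation_year, adaptation_year, filmed_from_translation=False):
    """Assign a transferred film to Combination I, II or III.

    translation_year: year of the first German literary translation (None if none exists).
    adaptation_year: year of the film adaptation.
    filmed_from_translation: True when the film itself was shot in a third
    language, working from a translation of the Spanish source (Combination III).
    """
    if filmed_from_translation:
        return "III"  # film adaptation starting from a translation
    if translation_year is None:
        return None   # no literary translation: outside the systematisation
    # Combination I: translation precedes adaptation; Combination II: it follows.
    return "I" if translation_year < adaptation_year else "II"

# Examples from the corpus:
print(classify(1950, 1987))        # "I"  - La casa de Bernarda Alba (Beck 1950; Camus 1987)
print(classify(1989, 1983))        # "II" - El Sur (Erice 1983; Der Suden 1989)
print(classify(1995, 1999, True))  # "III" - The Ninth Gate (Polanski 1999)
```

The 39 films whose source works were never translated into German correspond to the `None` branch, which is why only 38 films enter the table above.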
A total of 13 film adaptations have followed Combination I, where
the translation of the literary work precedes the film adaptation. A literary text in Spanish is translated into German once or several times.
Later, the work is turned into a film in the Spanish cinematographic
system. Finally, the film is either dubbed or subtitled into German and
distributed in Germany. New translations of the literary work can also
be done after the release of the film adaptation. Figure 9.6 shows the
combination corresponding to the film La casa de Bernarda Alba (Mario
Camus, 1987). Other examples are: Pascual Duarte (Ricardo Franco,
1976), Bodas de sangre (Carlos Saura, 1981), La colmena (Mario Camus,
1982) and El perro del hortelano (Pilar Miró, 1996).
In this group, we find, among others, four adaptations of works
originally written by Federico García Lorca, two by Camilo José Cela,
and one by Ramón Del Valle-Inclán, Miguel de Cervantes, Félix Lope de
Vega and Vicente Blasco Ibáñez. One notable point is that these are among the most celebrated and well-known Spanish authors,
and the works that have been adapted to the screen were written in past
centuries or in the first half of the twentieth century. Therefore, it is not
surprising that literary translations existed prior to the release of the
film adaptations and that, in most cases, new literary translations have
been published after the distribution of the films.
Figure 9.6 La casa de Bernarda Alba (Federico García Lorca, 1936); German translations Bernarda Albas Haus (E. Beck, 1950, 1968, 1978; F.R. Fries, 1987); film La casa de Bernarda Alba (Mario Camus, 1987), dubbed into German, 1991 (TV: West3); later translations Bernarda Albas Haus (E. Beck, 1992; H.M. Enzensberger, 1999).
Figure 9.7 ‘El Sur’, in El Sur (Adelaida García Morales, 1983); film El Sur (Víctor Erice, 1983), subtitled into German as El Sur (Der Süden), 1984 (Spanish Cinema Week in Dortmund), and dubbed into German, 1985 (cinema); German translation Der Süden (A. Sorg-Schumacher & I. Bergmaier, 1989; 2nd ed. 1992).
Twenty-two film adaptations have followed Combination II. First,
a Spanish literary text is adapted to the screen in the Spanish
cinematographic system. Subsequently, the feature film is subtitled or
dubbed into German, and a translation of the literary text is later done in
German. In this case, no translation of the literary work existed prior to
the film adaptation. Examples include: Los santos inocentes (Mario Camus,
1984), Tiempo de silencio (Vicente Aranda, 1986), Las edades de Lulú (José
Juan Bigas Luna, 1990), ¡Ay, Carmela! (Carlos Saura, 1990), El maestro de
esgrima (Pedro Olea, 1992), and Historias del Kronen (Montxo Armendáriz,
1995). In most cases, Combination II encompasses film adaptations made
shortly after the publication of the literary work in Spain, which explains
why there are no prior literary translations in German. Figure 9.7 shows
the combination corresponding to El sur (Víctor Erice, 1983).
Only three of the translated movies followed Combination III: I Picari
(Los alegres pícaros, Mario Monicelli, 1988), Uncovered (La tabla de Flandes,
Jim McBride, 1994) and The Ninth Gate (La novena puerta, Roman
Polanski, 1999). These are all Spanish co-productions based on Spanish
literary works, but filmed in Italian (the first one) and English (the other
two). It is difficult to ascertain which text the screenwriters used to work on the adaptations. However, considering their nationalities, it is most likely that they used translations of the
literary works rather than the originals in Spanish. As for the audiovisual translation, it is also likely that all the productions were translated from the Italian and English versions of the film. Figure 9.8 shows the combination corresponding to The Ninth Gate. It is interesting to note that the title of the second literary translation, made after the release of the film, was changed to match the title of the film, although the translator remains the same.

Figure 9.8 El Club Dumas (Arturo Pérez Reverte, 1993; Spanish literary system); German translation Der Club Dumas (C. Schmitt, 1995, 1997); English translation The Dumas Club (S. Soto, 1996, 1998; British literary system); film The Ninth Gate (Roman Polanski, 1999), released in Spain as La novena puerta, dubbed and subtitled into Spanish, 1999, and in Germany as Die neun Pforten, dubbed into German, 1999 (cinema) and 2000 (video), dubbed and subtitled, 2000 (DVD); retitled German translation Die neun Pforten (C. Schmitt, 2000).
Literary genres
As may be expected, there is a clear predominance of the narrative
genre – both in the original Spanish corpus (77%) and in the corpus of
films translated into German (78%) – since commercial films are heavily reliant on narrative. This suggests that narrative is the genre most easily adapted for the cinema and, possibly, most easily translated into other languages. By contrast, of all the Spanish adaptations distributed in
Germany only one is based on a work of poetry (A un dios desconocido,
Jaime Chávarri, 1977).
Film genres
As for the film genres, the majority of adaptations made in Spain are
dramas (53%), followed by comedies (27%), thrillers (12%), and
adventure/action films (4%). A very similar ranking is found among the adaptations that have made it to Germany: 58% are dramas, 16% comedies, 18% thrillers, and 5% action films. Worth noting is the slight decrease in the number of comedies transferred into German, since humour does not normally travel well abroad. If we consider the total of Spanish films shown in Germany, the number of comedies is also lower, whereas thrillers fare considerably better.
Literary authors
In the case of the original authors of the literary works, there seems to be no evidence whatsoever of a direct relation between the number of works adapted into Spanish films and the number of film adaptations that have been translated into German. For example, six of Fernando Vizcaíno Casas’s literary works have been adapted for the screen in Spain, but none of the resulting films has ever been distributed in Germany. And Miguel Delibes, the Spanish writer with the most adaptations in the original corpus (nine), has seen only one of the films distributed in
Germany. It seems plausible that films based on literary authors who
are well known in Germany are also better represented in German
cinematography. Indeed, this is the case with writers like Federico
García Lorca, Alberto Vázquez Figueroa and Arturo Pérez Reverte, whose
works are very popular among German readers. Thus, of the five Spanish
film adaptations made from works by García Lorca, four have been
released in Germany. In the case of Vázquez Figueroa, out of six films
based on his novels, four have made it to Germany and Pérez Reverte
has had five works adapted to the screen, of which four have been
released in Germany.
Film directors
When we look at the different film directors, there is no clear relation
between the number of adaptations made in Spain and the number of
adaptations that have been exported to Germany. For instance, Francisco Betriú and Vicente Escrivá have each directed five adaptations, but none of them has reached the German market. Similarly, Rafael Gil and
Pedro Lazaga have done eight and six adaptations respectively, and
none of them has been seen in Germany. Vicente Aranda is the director
with the lion’s share in the corpus, but only four of his ten adaptations
have been translated into German. Mario Camus, with eight adaptations in the Spanish corpus, is only represented with four among the
adaptations released in Germany.
Imanol Uribe, Gerardo Herrero, Rafael Romero Marchent and Pilar
Miró, on the contrary, are the directors who have a higher proportional
representation both in the Spanish corpus and among the adaptations
exported to Germany (5–3, 4–3, 5–3 and 4–3 respectively). Although
Carlos Saura is not the director with the highest number of adaptations
in the Spanish corpus (only three), all of his works have been shown in
Germany. That is hardly surprising, since Saura is one of the most
famous Spanish directors in Germany, mainly thanks to his musical trilogy with the dancer Antonio Gades (Bodas de sangre, 1981; Carmen, 1983;
El amor brujo, 1986). Saura is the director with most films (23) distributed
in Germany in the period 1975–2000. He is followed by another of the
most international Spanish directors, Pedro Almodóvar, of whom twelve
productions have made it to Germany in the same period.
Conclusion
In the transfer of Spanish film adaptations from Spain to Germany
there are some recurrent patterns that invite us to think about the
existence of transfer norms; norms that come to the surface when we
consider the three transfer processes discussed above: film adaptation,
literary translation and audiovisual translation.
Some norms are operative in the combinations followed by the film
adaptations on their way to the German market. Of the five potential
patterns outlined, most films followed the first two combinations (I and
II), whereas combinations IV and V were not present at all. These norms
can be called ‘combinatory norms’ and tell us about the steps followed
in the process of transfer from one culture to another and about the
languages involved. These norms do not seem to fit under the category
of ‘preliminary norms’, defined by Toury (1995: 56–61) as those that
operate before the translational process of transfer itself takes place. If
we agree that the selection of films does not happen by chance, these
norms reflect rather the existence and nature of a concrete transfer
strategy, that is, the selection of elements of the Source System and the
systemic mechanisms that govern this selection.
Some regularities have been detected in the case of the preferred
literary genres to be adapted to the screen (narrative, with 78%) and the
film genres more likely to be translated (drama, with 58%). As regards
the selection of directors and literary authors, there is no evidence of a
direct relation between the number of adapted works in Spanish and
the number of transferred adaptations into German, but it is noteworthy
that some of them are more popular than others: Federico García Lorca,
Alberto Vázquez Figueroa, Arturo Pérez Reverte, among the writers; and
Carlos Saura, Imanol Uribe, Pilar Miró, among the filmmakers. However,
further investigation into the mechanisms of export and import in the
Spanish-German film context is needed in order to establish if a transfer policy really exists at this level.
Regarding the reception of the Spanish films in Germany, no evidence
of recurrent patterns has been found in the yearly distribution of adaptations, but there seems to be some regularity in the distribution channels
(43% festivals, 34% TV) and the audiovisual translation modes implemented (dubbing for cinema, video and TV; subtitling for festivals).
Another relevant fact is that Spanish film adaptations represent a small
percentage (15%) of the total of Spanish films shown in Germany between
1975 and 2000. Nevertheless, in-depth research into the particularities of
distribution and exhibition is needed in order to define the position and
function of Spanish film adaptations in the German system.
I have not dealt with operational norms, which govern all the linguistic decisions made during the individual processes of the transfer (Toury,
1995: 56–61). A comprehensive analysis of the three processes – that is
film adaptation, literary translation and audiovisual translation –
involved in the transfer of the 38 Spanish works to Germany is needed.
Such an approach will allow the researcher to discover some of the
operational norms followed and to establish how the ‘combinatory
norms’ may have influenced the different decisions taken during each
of the individual transfers.
Note
1. This is fully discussed in my PhD dissertation supervised by Dr Luis Pegenaute
at the Universitat Pompeu Fabra in Barcelona. Between 2002 and 2004 the
research was carried out at the University of Leipzig thanks to a scholarship
from the foundations La Caixa and DAAD (Deutscher Akademischer Austauschdienst). I am grateful to them for the financial support and to the Institute for Applied Linguistics and Translation Studies in Leipzig, especially
to Prof. Dr Wotjak for his assistance. Thanks also to Dr Patrick Zabalbeascoa
and Dr Aline Remael for their suggestions.
References
Even-Zohar, I. (1990) Polysystem Studies. Special Issue of Poetics Today 11(1). www.
tau.ac.il/~itamarez/works/books/ez-pss1990.pdf
Filmstatistisches Taschenbuch. Wiesbaden: SPIO, 1976–2000.
Filmstatistisches Jahrbuch. (2001) Baden-Baden: Nomos.
Lexikon des internationalen Films. (2000) Munich: Net World Vision. Online database: www.filmevona-z.de
Toury, G. (1995) Descriptive Translation Studies and Beyond. Amsterdam and
Philadelphia: John Benjamins.
Appendix
Figure 9.9 Combination I: literary translation before film adaptation.

Figure 9.10 Combination II: film adaptation before literary translation.

Figure 9.11 Combination III: film adaptation starting from a translation.

Figure 9.12 Combination IV: intermediate literary translation.

Figure 9.13 Combination V: intermediate audiovisual translation.

(In all diagrams, S: System; LT: Literary Text; F: Film.)
10
Voice-over in Audiovisual
Translation
Pilar Orero
Introduction
Nowadays audiovisual translation (AVT) is a thriving field within
Translation Studies. This is, however, a recent development. Although
research in the audiovisual field dates back to 1932 (Franco and Orero,
2005), it remained in the realm of Film or Media Studies and it was
only in the 1980s that it started to be studied from a translation
perspective, within the discipline of Translation Studies. This transition from Film Studies to Translation Studies may account for the
blurred terminology in use, the research guidelines and the somewhat unbalanced interest shown in the many modes within AVT.
While subtitling and dubbing have been attracting interest in both
research and teaching at university level, other techniques such as
voice-over have been left aside or not clearly understood, as pointed
out by Grigaravičiūtė and Gottlieb (1999), Franco (2000), and Gambier
(2003).
Some scholars have analysed this unbalanced situation in the amount
of attention paid to the many communication forms within AVT.
Gambier and Suomela-Salmi (1994: 243) suggested the following
possible explanation:
Up till now, research [in AVT] has mainly been concerned with the
subtitling and dubbing of fictive stories/fiction films. In the light of
the huge variety of audio-visual communication, this may seem
somewhat surprising; in fact, however, it reflects the prevailing
orientation in translation theory, which is still highly dominated by
literary translation.
One could argue that most research carried out in AVT has concentrated
on dubbing and subtitling because these are the modes used to translate
fictive stories and fiction films, objects of study which tend to be
favoured by academics. However, it seems to have been forgotten in
many studies that in some countries like Poland and the Baltic States
voice-over is used as the translation mode for films. The traditional
stance in most academic studies has been to associate voice-over with
the translation of documentaries covering topics like nature and travel
(Luyken et al., 1991). The general trend has been to consider it as a technique suitable for the translation of non-fictional genres, an approach
that Franco (2000: 3) regrets:
Translating reality must inevitably be a straightforward, non-problematic activity. What such a belief implies is that translated foreign material within non-fictional output (e.g. interviews in news and documentaries and sometimes commentaries as well) constitutes uninteresting data for the purposes of research. Traditionally claimed as objective, deprived of the artifices of literary language or cinematic invention, factual programmes would and could not represent any real challenge to the translator or stimulus to the researcher; in sum, the translation of ‘real life’ would constitute a boring field of study.
Still, voice-over is normally included when scholars want to provide
a taxonomy of the many – and not universally used – audiovisual
translation modes. In some cases, voice-over has been classified
within the technique of dubbing by authors such as Luyken et al.
(1991: 80), Baker and Braň o (1998: 75), Shuttleworth and Cowie (1997:
45), and Franco (2001: 290). No doubt, simplification and a lack of
understanding of the medium and its processes have led voice-over to be seen in the same light as dubbing, which is certainly a different
mode, subject to different translation and production processes. As a
result, it comes as no surprise that reference works on AVT have not given voice-over a discrete entry (Luyken et al., 1991; Dries, 1995; Baker and Braňo, 1998; Shuttleworth and Cowie, 1997; Sánchez-Escalonilla, 2003).
The unresolved terminology employed in the wider discipline of AVT
(Orero, 2004: vii) also applies to the field of voice-over. Many definitions
of the term have described voice-over in a misleading or inaccurate way. Thus, it has been referred to as a category of revoicing, lip
synchronisation dubbing, narration and free commentary (Luyken
et al., 1991: 71; Baker and Braňo, 1998: 75; O’Connell, 2003: 66); as a
type of dubbing, either ‘non-synchronized dubbing’ (Dries, 1995: 9), or
its opposite ‘doublage synchrone’ (Kaufmann, 1995: 438). It has also
been described as ‘dubbing-with-voice-over’ (Baranitch, 1995: 309), as a
type of interpreting (Pönniö, 1995: 303; Gambier, 1996: 8) and finally
as ‘half-dubbing’ (Hendrickx, 1984).
The process of voice-over has also been described as ‘the easiest and
most faithful of the audiovisual translation modes’ (Luyken et al.,
1991: 80; Díaz Cintas, 1997: 112). This definition, however, has not
helped towards a further understanding of the technique and bears little resemblance to the real process of translation. A possible reason for this reputed ease and faithfulness of voice-over is its alleged disregard for synchronisation between source and target texts, pointed out
originally by Luyken et al. (1991: 80) and later by Lambert and
Delabastita (1996: 41), Franco (1998: 236), and Grigaravičiūtė and
Gottlieb (1999).
To date, attempts to define voice-over seem to have focused on its
reception. That is, voice-over is viewed as the final product we hear
when watching a programme in which a voice in a language different from that of the original is heard on top of the original soundtrack. This new voice normally starts some seconds into the original utterance – and sometimes finishes before the person on screen does – allowing the viewer to hear part of the
original, although this practice is not universal.
These technical dimensions mentioned by numerous translation
scholars when defining voice-over may have a common starting point
that can be traced back to Luyken et al. (1991: 80), who in their seminal book Overcoming Language Barriers in Television described voice-over not as a complex and highly specialised translation technique
but as:
The faithful translation of original speech. Approximately
synchronous delivery. It is normally used only in the context of a
monologue such as an interview response or series of responses
from a single interviewee. The original sound is either reduced
entirely or to a low level of audibility. A common practice is to
allow the original sound to be heard for several seconds at the
onset of the speech and to have it subsequently reduced so that the
translated speech takes over. This contributes to the sense of
authenticity in the translation and prevents a degree of mistrust
from developing. Alternatively, if the translation is recorded as
part of the original production, it may follow the original speech
exactly.
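The timing practice Luyken et al. describe – letting the original be heard for a few seconds before the translated voice takes over, and ideally letting the original close the utterance – can be expressed as a simple cue-time calculation. The sketch below is purely illustrative; the function name and the delay values are assumptions for exposition, not industry standards:

```python
# Illustrative sketch of voice-over cue timing as described by
# Luyken et al. (1991): the original speech is heard for a few seconds,
# then the translated voice takes over on top of it.
# The delay/lead values are assumptions, not broadcast norms.

def voiceover_cues(utt_start, utt_end, delay=2.0, lead=1.0):
    """Return (cue_in, cue_out) for the translated voice, in seconds.

    delay: seconds of original speech heard before the translation starts.
    lead:  seconds before the end of the utterance at which the
           translation ideally finishes, letting the original be heard again.
    """
    cue_in = utt_start + delay
    cue_out = max(cue_in, utt_end - lead)
    return cue_in, cue_out

print(voiceover_cues(10.0, 25.0))  # (12.0, 24.0)
```

For very short utterances the `max()` guard simply collapses the window rather than producing a cue-out earlier than the cue-in.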
While studying the transfer of commercial videos, Mailhac (1998: 222)
points to the possible source of inconsistency that surrounds the term
voice-over:
It should be pointed out that the term ‘voice-over’ is used with the
meaning it normally has in English where it refers to the voice of an
unseen commentator heard during a film, a television programme or
a video. Therefore, it does not correspond to what is called ‘un voice
over’ in French, since this refers to situations in which a voice giving
a translation is heard simultaneously on top of the original voice (see
Pönniö, 1995). The French equivalent of ‘voice-over’ would be commentaire. I have heard the term ‘over-voicing’ to describe the superimposition of a second voice in the context of an interview in a
commercial video; ‘half dubbing’ also seems possible to refer to this
type of superimposition when applied to a feature film dialogue.
And Franco (2000: 32) concludes by stating that:
The confusion seems to have arisen from the fact that terms adopted
by Audiovisual Translation Studies, such as ‘voice-over’ and others
common to factual translation (e.g. ‘commentary’), have all been
arbitrarily borrowed from its predecessor Film Studies, whose
concepts do not imply any translating activity.
A new academic and research approach to voice-over and to AVT in
general should take into consideration Film and Media Studies without
forgetting traditional research methodology from the field of
Translation Studies. It should focus on the translation process as much
as on the reception of the audiovisual product, and should also determine the terminology which at present is – to say the least – muddled.
In what now follows, I offer a detailed analysis of the two different
types of voice-over translation according to the way it is carried out
professionally.
Voice-over in TV and radio
It is true that voice-over is used both on TV and radio, as for example in
the BBC World Service. In its news or current affairs programmes, what
is said by someone whose mother tongue is not English is translated
and voiced-over into English, in an attempt to convey the feeling of
authenticity of the contents (Luyken et al., 1991: 80), the voice of the
speaker (Pönniö, 1995: 304), and the accent or regional variation
(Fawcett, 1996: 76).1 As Franco (2001: 290) explains:
... the type of delivery we hear in voice-over translation is an important strategic way of reassuring viewers that what they are being told
in their own language is what is being said in the original language,
although it is known that what they will be listening to is in fact
only a representation of the original discourse.
In Spain at least, voice-over is a more common mode of audiovisual
translation than subtitling, and the leading translation mode when
people speaking other languages appear live on TV broadcasts. The programmes range from news and sports to gossip, and even reality shows,
and are broadcast on all Spanish TV stations, local as well as national,
in Spanish or in any of the other three official languages (Basque,
Catalan and Galician); on state-owned stations TV3, 24/3; City TV in
Catalonia; TV1 or TV2 in Spain; and on privately owned national channels such as Tele 5 and Antena 3.
When voice-over is used for the translation of an audiovisual programme, two voices are usually heard: the original speech in the background and the voice of the translation on top. When
there is a speech or sound bite of President Bush or Tony Blair, the translation is usually heard through voice-over. In some exceptional cases
three languages can be heard, as is the case when we hear Osama Bin
Laden speaking, whilst being translated into English and then into
Spanish or Catalan. Cases in which three languages can be heard simultaneously are known as pivot translation (Grigaravičiūtė and Gottlieb,
1999: 46), a field of study that also merits further research.
The compilation of further data on the number and type of programmes, as well as on the broadcast times, in which voice-over production and post-production are used on TV is another much-needed research project. Material of this nature will help us to contextualise and put into perspective voice-over translation versus the two more popular forms of subtitling and dubbing. It would also provide information that could confirm, or refute, the popular association of voice-over with documentaries. A project of compiling data
could prove to be challenging, given the large number of TV channels
now on offer in Catalonia and Spain. The city of Barcelona alone has
over ten TV stations which broadcast in Catalan plus all the TV
stations – digital, satellite and terrestrial – which broadcast to mainland Spain. Still, a methodological approach to the compilation of reliable data would be needed before any rigorous analysis yielding serious conclusions could be undertaken.
Translating for voice-over production
Much translation for voice-over is carried out in the post-production
phase, described by Luyken et al. (1991: 80) as ‘narration or re-voiced
narration’, which so far is the mode most widely studied in academic
works dedicated to AVT. However, there is an important market for
TV and radio programmes with translation undertaken as part of the
production process. This second type I refer to as translation for
production.
A post-production translation is the translation of a programme
which is a finished product by the time it reaches the translator; it usually comes with a complete dialogue list, e.g. the BBC’s The Human Body
(2001). In the case of translation for production, however, the translator
has to work with rough, unedited material which will undergo several
processes before being broadcast. The material to be translated can consist of interviews or of already existing post-production programmes.
When the team of journalists working on a programme are planning its
contents and format, they may decide to use excerpts from existing foreign programmes – regardless of format or genre. These can be documentaries, interviews with sports personalities, sports events (World Swimming Championships, Formula 1 racing, European Champions League football matches) or awards ceremonies (the Oscars, the MTV awards), all of which can be either live or repeats.
In order to develop these programmes the team of journalists, all
working within the production department, will either buy a number
of minutes of an already existing and edited programme or interview,
or produce their own interview. If the first approach is chosen, these
excerpts will be incorporated in the main body of the programme,
either simply to provide images to illustrate the narration, or to provide
authentic comments delivered by the actual speakers. For instance, in
order to make a programme on the war in Iraq, the journalists may
want to use declarations or interviews given in English by Tony Blair or
George W. Bush, bought from a news agency; or they may prefer to use
footage of a documentary in English such as Lost Treasures from Iraq,2
which they will buy from an international TV agency, to which they only have to provide the topic and which will then supply the images. Alternatively, they may want an exclusive interview with the UN chief weapons inspector, Hans Blix, or with Bush’s own weapons inspector, David Kay, who resigned on 23rd January 2004 after reporting that no weapons of mass destruction had been found in Iraq.3
The translator will then be sent the bulk of material that has been
bought (some excerpts of Bush and Blair’s comments to translate, a
30-minute documentary on the Museum of Baghdad, and an in-house
produced 35-minute interview with Hans Blix or David Kay). On some
occasions the translator may be asked to go to the TV station to carry
out the translation in situ. If the physical presence of the translator is
not required at the television station, the material will be sent to the
translator without a dialogue list. While this can be expected from an
unedited home-produced interview, there is no apparent reason why a commercially broadcast programme should come only in audiovisual format, with no transcript. By contrast, the absence of a document with
the written dialogue for post-production translation tends to occur only
when the producing team buys an excerpt from a broadcast programme
from an agency instead of purchasing it directly from the TV producer.
Whatever the length and duration of the excerpt, it invariably reaches
the translator only in audiovisual format. In Spain, the inclusion of
translators in a TV or radio production team may be due to the tight
production schedule of TV programmes or to the journalists’ lack of
foreign language competence; the latter remains a major obstacle, one perhaps likely to change in the near future, although present university education programmes in Spain have not been designed to address and rectify
this particular problem. It should also be pointed out that journalists
working from Spanish for British radio and TV stations usually encounter the same problem.
Here we have seen some of the characteristic features of production
in contrast to post-production voice-over. The material is unedited, it is
always urgent,4 and it rarely comes with the written transcript; hence
it constitutes a challenge to the listening and comprehension skills of
the translator (Orero, 2005). It may be added that the translation will
be edited in order to be synchronous in content with the images which
it will accompany. A final point concerns the translator, who never knows what will eventually be shown in the edited broadcast
programme.
Conclusion
As we have seen in the previous section, the translation for voice-over
production is characterised by several constraints:
● Time: translations have to be done within a very tight schedule.
● Translating from the screen: a written text to support the audiovisual material is not usually available to the translator.
● Working with unedited, rough material, which will be edited after being translated, not by the translator but by a different professional.
In this chapter, I have tried to raise the academic visibility of voice-over in
general, and voice-over for production of TV and radio programmes in
particular, seeing it as an important translation mode in need of much
more scholarly attention, if only to match the popularity it enjoys on TV.
Notes
1. Fawcett (1996) explains how, in recent years, TV stations in the UK have increased the use of voice-over for documentaries, replacing standard English with the accent ‘appropriate’ for the programme.
2. www-oi.uchicago.edu/OI/IRAQ/iraq.html
3. ‘US chief Iraq arms expert quits. The head of the team searching for weapons
of mass destruction in Iraq, David Kay, has resigned. Mr Kay said he did not
believe Iraq possessed large stockpiles of chemical or biological weapons’
(http://news.bbc.co.uk/1/hi/world/americas/3424831.stm).
4. While some research on translation under time pressure has been undertaken in the field of literary translation (Jensen and Jakobsen, 1998; Jensen, 1999), the area remains largely unexplored within AVT.
References
Baker, M. and Braňo, H. (1998) ‘Dubbing’. In M. Baker (ed.) Routledge Encyclopedia
of Translation Studies (pp. 74–6). London and New York: Routledge.
Baranitch, V. (1995) ‘General situation with electronic media in Belarus’.
Translatio, Nouvelles de la FIT-FIT Newsletter XIV(3–4): 308–11.
Díaz Cintas, J. (1997) El subtitulado en tanto que modalidad de traducción
fílmica dentro del marco teórico de los Estudios sobre Traducción. (Misterioso
asesinato en Manhattan, Woody Allen, 1993). Valencia: Universidad de Valencia.
PhD Thesis.
Dries, J. (1995) Dubbing and Subtitling: Guidelines for Production and Distribution.
Düsseldorf: European Institute for the Media.
Fawcett, P. (1996) ‘Translating film’. In G.T. Harris (ed.) On Translating French
Literature and Film (pp. 65–88). Amsterdam: Rodopi.
Franco, E. (1998) ‘Documentary film translation: A specific practice?’. In
A. Chesterman, N. Gallardo San Salvador and Y. Gambier (eds) Translation in
Context (pp. 233–42). Amsterdam and Philadelphia: John Benjamins.
Franco, E. (2000) Revoicing the Alien in Documentaries. Cultural Agency,
Norms and the Translation of Audiovisual Reality. Leuven: KUL. PhD Thesis.
Franco, E. (2001) ‘Voiced-over television documentaries. Terminological and
conceptual issues for their research’. Target 13(2): 289–304.
Franco, J. and Orero, P. (2005) ‘Research on audiovisual translation: some
objective conclusions, or the birth of an academic field’. In J.D. Sanderson
(ed.) Research on Translation for Subtitling in Spain and Italy (pp. 79–92). Alicante:
Universidad de Alicante.
Gambier, Y. and Suomela-Salmi, E. (1994) ‘Subtitling: a type of transfer’. In
F. Eguíluz, R. Merino, V. Olsen, E. Pajares and J.M. Santamaría (eds) Transvases
culturales: Literatura, cine, traducción (pp. 243–52). Vitoria: Universidad del País
Vasco.
Gambier, Y. (ed.) (1996) Les transferts linguistiques dans les médias audiovisuels.
Villeneuve d’Ascq (Nord): Presses Universitaires du Septentrion.
Gambier, Y. (2003) ‘Introduction: screen transadaptation: Perception and reception’. The Translator 9(2): 171–89.
Grigaravičiūtė, I. and Gottlieb, H. (1999) ‘Danish voices, Lithuanian voice-over.
The mechanics of non-synchronous translation’. Perspectives: Studies in
Translatology 7(1): 41–80.
Hendrickx, P. (1984) ‘Partial dubbing’. Meta 29(2): 217–18.
Jensen, A. and Jakobsen, A. (1998) ‘Translating under time pressure: an empirical
investigation of problem-solving activity and translation strategies by nonprofessional and professional translators’. In A. Chesterman, N. Gallardo San
Salvador and Y. Gambier (eds) (2000) Translation in Context (pp. 105–16).
Amsterdam and Philadelphia: John Benjamins.
Jensen, A. (1999) ‘Time pressure in translation’. In G. Hansen (ed.) Probing the
Process in Translation: Methods and Results. Copenhagen Studies in Language 24
(pp. 103–19). Copenhagen: Samfundslitteratur.
Kaufmann, F. (1995) ‘Formation à la traduction et à l’interprétation pour les
médias audiovisuels’. Translatio, Nouvelles de la FIT-FIT Newsletter XIV(3–4):
431–42.
Lambert, J. and Delabastita, D. (1996) ‘La traduction de textes audiovisuels:
modes et enjeux culturels’. In Y. Gambier (ed.) Les Transferts linguistiques dans
les médias audiovisuels (pp. 33–58). Villeneuve d’Ascq (Nord): Presses
Universitaires du Septentrion.
Luyken, G.M., Herbst, T., Langham-Brown, J., Reid, H. and Spinhof, H. (1991)
Overcoming Language Barriers in Television: Dubbing and Subtitling for the European
Audience. Manchester: European Institute for the Media.
Mailhac, J.-P. (1998) ‘Optimising the linguistic transfer in the case of commercial
videos’. In Y. Gambier (ed.) Translating for the Media (pp. 207–23). Turku:
University of Turku.
O’Connell, E.M.T. (2003) Minority Language Dubbing for Children. Screen Translation
from German to Irish. Series XL. Vol. 81. Frankfurt: Peter Lang.
Orero, P. (2004) ‘Audiovisual translation: a new dynamic umbrella’. In P. Orero
(ed.) Topics in Audiovisual Translation (pp. vii–xiii). Amsterdam and Philadelphia:
John Benjamins.
Orero, P. (2005) ‘La traducción de entrevistas para voice-over.’ In P. Zabalbeascoa
Terran, L. Santamaria Guinot and F. Chaume Varela (eds). La traducción
audiovisual: Investigación, enseñanza y profesión (pp. 213–22). Granada:
Comares.
Pönniö, K. (1995) ‘Voice over, narration et commentaire’. Translatio, Nouvelles
de la FIT-FIT Newsletter XIV(3–4): 303–7.
Sánchez-Escalonilla, A. (ed.) (2003) Diccionario de creación cinematográfica.
Barcelona: Ariel.
Shuttleworth, M. and Cowie, M. (1997) Dictionary of Translation Studies.
Manchester: St. Jerome.
11
Broadcasting Interpreting:
A Comparison between
Japan and the UK
Tomoyuki Shibahara
Introduction
Although broadcasting interpreters play a significant role in news programmes today, little is known about their work and their profession
(Mitsufuji, 2002). An analysis of the interpretation style of broadcasting
interpreters could serve as a useful reference for many other interpreters
in terms of improving their delivery. Therefore, I believe it is important
to introduce and compare the work of broadcasting interpreters in Japan
and the UK. Based on my personal experience, I take a look at the role
of these professionals working for the BBC (British Broadcasting
Corporation) and NHK (Japan Broadcasting Corporation).
In this comparison of broadcasting interpreting styles in both countries,
I will attempt to explain the reasons behind some of the differences. At
the same time, I will try to find out whether there is a ‘common interpreting policy’ shared by the two broadcasting organisations.
Definition and history of ‘broadcasting
interpreter’ in Japan
In some countries and languages, the word for translator also encompasses that of the interpreter. However, in Japan, there is a clear-cut
distinction between interpreter and translator, despite the fact that in
general conversation the word translator is used synonymously with
interpreter. The main difference between these two professionals is that the translator works with written material and the interpreter with spoken language. Although my official job title at the
BBC Japanese Unit was ‘broadcasting translator’, I prefer ‘broadcasting
interpreter’.
Japanese people, in general, tend to believe that interpreting is
somewhat more difficult than translating. This may be due to the fact
that for a long time the main purpose of foreign language education in
Japan has been to absorb new information and technology by reading
foreign books. As a result, foreign language education in Japan tends to
put more stress on reading and writing than on listening and speaking.
Many Japanese feel that it is easier to be a translator because translators perform tasks similar to those they did at school. As listening and speaking in English are felt to be more difficult, being an interpreter is considered very hard. When ‘interpreter’ is combined with the word ‘broadcasting’, for many Japanese people the profession takes on star status.
In actual fact, the term ‘broadcasting interpreter’ refers to a professional who performs mainly interpreting for broadcasting, although
the job sometimes includes translating as well. It is very common for
‘ordinary’ interpreters to work only part-time as broadcasting interpreters, because job opportunities are very limited and therefore, few people
can make a living by broadcasting interpreting alone. To the best of my
knowledge, there are no full-time, in-house positions in Japan for broadcasting interpreters and they are employed on a freelance basis. The
only exception is the BBC Japanese Unit, although interpreters then
have to work in London.
Broadcasting interpreters are usually able to watch the material that they are going to interpret beforehand. They can either prepare the Japanese translation or, if there is not enough time to do so, make rough notes that they will use while on air. Live coverage, such as press conferences, is interpreted simultaneously.
In Japan, broadcasting interpreters made their first appearance on
public television in 1969, when Apollo landed on the moon. The event
was covered live on TV, and the Japanese audience listened to the simultaneous interpretation. Since top-notch interpreters were assigned to do
this job, the performance was very impressive. This resulted in the general perception that broadcasting interpreters are somewhat superior to
‘ordinary’ interpreters.
During the 1980s, it became a trend among Japanese TV stations to
make use of sound-multiplex broadcast systems to revoice foreign news in
Japanese. This trend reached a climax when the Gulf War broke out in 1991. During this period, TV stations were forced to increase the number of broadcasting interpreters. This had both positive and negative effects. On the
one hand, it enabled many young and promising interpreters to acquire
experience in broadcasting interpreting. On the other hand, and unlike the case of the Apollo Project, many Japanese viewers were well informed about war issues and jargon. As a result, broadcasting interpreting came to
be criticised as inaccurate, with many viewers stating that it was very hard
to make out what the interpreters were saying.
Broadcasting interpreters at the
BBC Japanese Unit
In the case of the BBC Japanese Unit, interpreters are employed full-time and work in-house. To be more precise, they work as contract
employees of the BBC’s contract company. The length of each contract
is one year, renewed annually. This offers financial stability for the interpreters and enables them to have a fixed schedule, which means that
they do not have to work very late at night or very early in the morning
unless they agree to do so. The fact that interpreters work five days a
week for a prolonged period of time is beneficial to them, enabling
them to accumulate and enhance necessary knowledge and skills. This
is a great advantage in terms of keeping the quality of broadcast high.
However, this employment style can be a double-edged sword. Firstly,
even if the workload increases, there is no change in salary. For example,
when NATO bombed Kosovo in 1999, the interpreters covered the NATO press conferences live every day for more than two months. The
quantity of simultaneous interpretation at least quadrupled during that
period. But since press conferences took place within regular working
hours, there was no extra remuneration. In addition, the pay scale is based on seniority, and there is no merit pay.
The work of the BBC Japanese Unit consists of providing Japanese
translation for BBC World TV, which transmits mainly news and documentary programmes. The broadcast in Japan started in 1994, and it
was introduced as a successor to BBC World Service radio’s Japanese
language service, which ceased its operations in 1991. There are three
major tasks that are assigned to the broadcasting interpreters:
1. Interpreting the news programmes.
2. Translating documentary programmes and checking the translation.
3. Voicing-over of documentary programmes.
I would like to focus on the first task since it is the one most similar to
the activity carried out by broadcasting interpreters working for NHK,
the biggest broadcasting organisation in Japan.
News programmes consist of two parts, the lead and the piece. The
lead is the introduction which is then followed by the piece or the correspondent’s report. The piece is also called ‘TX’.
Transcripts for the lead are usually filed in the computer system
beforehand. It is the responsibility of the interpreter who is in charge of
interpreting the anchor on that day to prepare the Japanese translation.
Since the interpreter reads aloud the translation at the very same time as
the programme goes on air, the translation has to be as long as the original
and fit within the same time frame as the English source text. If the interpretation is too long it will spill over into the TX, and will obstruct the
interpretation of this section. This can have a very negative effect since
the lead and the piece are interpreted by different interpreters.
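Whether a draft translation of a lead fits the original time frame can be estimated from its length and a reading rate. The sketch below is hypothetical; the function name and the rate of roughly 300 Japanese characters per minute are illustrative assumptions, not BBC or NHK figures:

```python
# Hypothetical check that a Japanese translation of a news lead can be
# read aloud within the duration of the English original.
# The reading rate (~300 characters per minute) is an assumed ballpark
# figure for broadcast delivery; real practice varies.

def fits_time_frame(translation, original_secs, chars_per_min=300):
    """Estimate whether reading the translation aloud fits the slot."""
    est_secs = len(translation) * 60 / chars_per_min
    return est_secs <= original_secs

lead = "首相は本日、新しい経済政策を発表しました。"  # 21 characters, ~4.2 s
print(fits_time_frame(lead, original_secs=5.0))  # True
```

A translation failing this check would spill over into the TX and obstruct the next interpreter, so it would need to be shortened before going on air.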
There are almost no transcripts available for interpreters who are in charge of the piece, but correspondents’ reports are usually filed in the
computer system in video format; interpreters can download the footage in order to prepare the Japanese translation. As in the case of the
lead, when read aloud the translation should fit within the time frame
of the original. The translation should also occur in concert with what
is happening on the TV screen so that there is no conflict between the
interpretation and the picture. If there is not enough time to prepare
the Japanese translation, interpretation takes place simultaneously
while on air. Simultaneous interpretation is also called for in the case of
live coverage, when for example the anchor interviews a guest in the
studio, or the correspondent files a live report.
Interpreting policy and quality control
The editor places great emphasis on the quality of the Japanese, stressing that standards should be comparable to those of NHK news. Under the slogan of ‘less is more’, interpreters are asked to edit the information so
that the interpretation is not only accurate but also easy for the viewers
to understand. Delivery is considered important and interpreters are
offered elocution training in Japanese. Interpreters are asked to comply
with the ‘less is more’ policy even when they are interpreting
simultaneously; a rapid and somewhat garrulous delivery, which is
rather common in simultaneous interpretation, is not welcomed.
On BBC World TV, the interpreter’s name does not appear on the
screen, making it difficult for the viewer to identify the interpreter
when there is any misinterpretation. Although editors are in charge of
checking the output of the interpreters, they tend to focus on purely
technical matters such as voice level, paper rustle, etc. rather than on
the linguistic merits of the translation. Besides, given the time pressure,
it is almost impossible for the editors to check the entire output.
As for checking word usage, this is basically left to the interpreters to
cross-check among themselves. Under this system, the translation of
new words tends to cause debate among the interpreters. In order to
solve this problem, there have been attempts at forming a Translation
Standardisation Committee in charge of making the use of certain terms
consistent. However, it is very difficult to create a consensus among the
interpreters and, so far, the attempts have not resulted in any committee
being formed. Nonetheless, a Standard Translation Database System is
now in place, where interpreters take turns to define the translation of
new words and add them to the database. In any case, editors have the
final say in deciding which Japanese term should be used.
Broadcasting interpreters at NHK
Broadcasting interpreters for NHK are provided by NHK Joho Network
Inc., an affiliate company of NHK. The Bilingual Center, one of the
divisions of NHK Joho Network Inc., is in charge of selecting and arranging interpreters and translators. More than one thousand interpreters and
translators in some 60 languages are registered at the Bilingual Center.
Since NHK interpreters work on a freelance basis, they have considerable flexibility and can choose not to work very late or very early hours. When there is a high volume of work, interpreters are called in more often, which means an increase in their income, as happened, for example, during the reporting on the War in Iraq in spring 2003.
However, working on a freelance basis means that there is no income
security, and it can be very difficult for interpreters to rely entirely on
the income from NHK to make a living. Besides, it is rather difficult for
young interpreters to work for NHK in the first place because it is a very
competitive environment and most of the positions have already been
taken by experienced interpreters. Had it not been for the emergency situation presented by the War in Iraq, many interpreters might not have had a chance to work for NHK.
NHK broadcasting interpreters work for two channels: BS-1, a satellite news and sports channel, and General TV, a terrestrial channel. In
both cases, broadcasting interpreters work for the news programmes
only. NHK has many overseas documentary programmes but these are
translated separately by broadcasting translators. Voice-over is also done
separately by voice actors and actresses. Although BS-1 has a larger and
more constant demand for interpreters, because it broadcasts many
foreign news programmes, I would like to limit my explanation below
to the case of General TV.
The tasks of interpreters working for General TV are:
(1) Simultaneous interpretation: this is applied when something
urgent happens and there is no time to prepare a translation beforehand. In the case of the War in Iraq, there were three 8-hour shifts a
day (00:00–08:00; 08:00–16:00; and 16:00–24:00), and two interpreters were always standing by. When the initial air strike took place,
NHK ran the live coverage of ABC News, which was interpreted
simultaneously. Interpreters were also asked to monitor other news
channels such as BBC, CNN, and Fox. If editors decided to run any of
these channels, interpreters were asked to interpret simultaneously. In
addition to simultaneous interpreting, interpreters are asked to provide
whispering interpreting (chuchotage) for NHK editors. For this task,
interpreters covered mainly the news conferences of the White House
or the US Central Command. Editors listened to the chuchotage in
order to decide which part of the news conference should be cut out
and used – usually with subtitles – in the following news bulletin. In this kind of work interpreters are then asked to pinpoint the footage
with the video technician. Having found where the footage begins and
ends, interpreters are asked to interpret the contents. In most cases, the
interpretation is used to make subtitles in Japanese and, when there is
a shortage of manpower, interpreters are also asked to help with the
subtitling.
(2) Monitoring of foreign news channels in case editors decide to use short excerpts of footage from them.
(3) Reading the Japanese translation of short excerpts: sometimes interpreters are asked to voice-over the Japanese translation in the
actual telecast. Professional voice actors and actresses are rarely used for
this task unless it is something very important and there is enough time
to do so.
(4) Providing translations from which subtitles for short excerpts are made.
The last two tasks are assigned to interpreters who have not reached the level of simultaneous interpreting or who have opted out of it. They follow working shifts similar to those of simultaneous interpreters, and two to three interpreters are assigned to each shift. The difference in function between simultaneous interpreters and other interpreters is clearer at NHK than in the BBC Japanese Unit.
Although simultaneous interpreters may help with the above two tasks, they and the other interpreters basically work separately, dealing with different types of tasks.
Interpreting policy and quality control
The person responsible for the selection of interpreters considers accurate
interpretation very important together with a high degree of fluency in
simultaneous interpretation. As in the case of the BBC Japanese Unit,
interpreters are expected to edit and to organise the information. It is
also important to avoid words which are not allowed to be used on air,
such as terms which may be considered discriminatory.
Both on BS-1 and on General TV, the interpreter's name appears on screen. The programme is shown in the NHK news room, where editors can watch it and, in the event of a problem with the interpreting, quickly point it out to the interpreters.
As for the standardisation of certain translation terms, there is at least
one set of guidelines for the General TV team. The BS-1 channel provides a
bulletin board that interpreters can refer to for the translation of new
words and on some occasions they hold study sessions, as happened
before the War in Iraq broke out.
Comparison between BBC Japanese Unit
and NHK (General) TV
Interpreters are employed in-house, full-time at the BBC Japanese Unit,
while interpreters employed by NHK work on a freelance basis. This
may be because interpreters at BBC World TV have to cover an entire
news bulletin, whereas interpreters at NHK have to interpret only parts
of the news bulletin.
Since job security is not guaranteed at NHK, interpreting fees are higher than those paid by the BBC Japanese Unit. Working five days a week, interpreters at NHK can earn roughly twice as much as full-time interpreters in the BBC Japanese Unit.
The work of the BBC Japanese Unit requires broadcasting interpreters
to be all-rounders, while their counterparts at NHK are required to specialise in a certain area. This may be because of the different nature of
the contents of the programmes broadcast by the channels for which
they work. BBC World TV tends to follow a fixed pattern with news and
documentary programmes broadcast every thirty minutes. Interpreters
have to cover both types of programmes, although documentaries are
translated and voiced-over beforehand. On BS-1 and NHK General TV
news programmes are not always run every hour, and they can be
followed by sports or variety programmes, which do not necessarily
require broadcasting interpreters.
Differences in interpreting policy and quality control
The BBC Japanese Unit likes to stress the ‘broadcasting’ dimension of
this profession. In other words, it encourages interpreters to edit the
information in order to make the target text clearer to the viewers.
Although this is also done by interpreters at NHK, more emphasis seems
to be put on the accuracy of the interpreting. This may be due to the
fact that NHK is the biggest authority among TV stations in Japan and
considers it one of its duties to set high standards. However, this does
not mean that BBC World TV sacrifices accuracy, as it has to compete
with other TV channels such as CNN and NHK, and their interpreting
has to be more than ‘just accurate’.
The greatest difference in quality control lies in whether interpreters are under constant monitoring. It can be said that NHK has a very
strict system in this respect, while the BBC Japanese Unit has a more
democratic, if not flexible, system. Some interpreters at the BBC
Japanese Unit felt that the name of the interpreter should appear on
the screen; the reason for this being that it could provide an incentive
for interpreters to take more responsibility for their work and, consequently, boost the overall standard of interpreting. Having personally
worked in both organisations, I have found that adding the name on
the screen does indeed have a great effect on the interpreters. However,
this proposal was rejected at the BBC Japanese Unit because of financial considerations.
Conclusion
Interpreting is a way of communicating and the effectiveness of the
communication may be measured by judging how well the message is
conveyed. No matter how faithful the interpretation is to the original, it
means very little to the viewers if the final output is very hard to understand. In the case of broadcasting interpreting, it may be assumed that
viewers are not knowledgeable about all the subjects discussed. Therefore,
it is not sufficient to ‘convert’ the Source Language into the Target
Language and hope that the viewer will understand the message behind
the facts. Interpreters have to understand the message in the Source
Language and, if necessary, edit the information to make it easier for the
viewers to understand. It may be said that any type of interpreting
implies editing information in one way or another. However, in broadcasting interpreting it is essential that interpreters actively edit the information to ensure that communication is as effective as possible.
This editing, of course, raises yet another question: how free or faithful should an act of interpreting be?
References
BS Hōsō Tsūyaku Gurūpu (1998) Hōsō tsūyaku no sekai (The World of Broadcast Interpretation). Tokyo: ALC.
Mitsufuji, K. (2002) ‘Interpreting as audio-visual translation (AVT)’. Interpretation
Studies 2: 87–98.
Part III
Accessibility to the Media
12
Interlingual Subtitling for
the Deaf and Hard-of-Hearing
Josélia Neves
Introduction
Considerable mileage has been covered since the early days, in the late
1950s, when audiovisual translation (AVT) began to be addressed as
a subject in its own right. Concerns have moved from the dubbing/
subtitling debate to more specific domains such as the analysis of issues
pertaining to discourse analysis, technical constraints and audience
design. Comprehensive lists of audiovisual translation typologies
(Luyken et al., 1991; Gambier, 1996; Díaz Cintas, 2003) have reflected
the gradual change in scope and are making space for newer forms of
language transfer within the audiovisual context. Indeed, the lines once
drawn between language transfer types in the media are becoming less
and less visible. Principles and techniques are merging to give way to
specific offers, directed to specific audiences. This implies that the very
concept of ‘mass’ media is changing; technology is now allowing masses
to be broken down into smaller groups and products are tailor-made to
the expectations and the needs of defined sub-groups. AVT will inevitably need to follow this general trend in the audiovisual market and, rather than aiming to cater for a general audience, it now finds itself focusing on the needs of smaller, distinct audiences in order to respond to them more adequately.
This concern with making translations accessible to the intended
receptor is not new. As a matter of fact, in his seminal work ‘Principles of
correspondence’, Nida (1964) calls our attention to the fact that translations need to be directed towards the different types of audiences. This
scholar takes up the topic later (1991: 20) to specify that translators need
to look at ‘the circumstances in which translations are to be used’. Ever
since, academics and translators have gone to great lengths to find
theories and practices that reflect such concerns for their audiences.
Kovačič (1995: 376) questions the meaning of 'reception' in subtitling, seeing it as a multilayered construct in which factors such as 'the socio-cultural issue of non-TV context influencing the process of receiving
subtitles’, ‘the attitudinal issue of viewers’ preference for subtitling over
dubbing or vice versa’, ‘the perceptual issue of subtitle decoding (reading
and viewing) strategies’, and ‘the psychological or cognitive issue of the
impact of cognitive environment on understanding subtitles’ are of
paramount importance in audience design. Gambier (2003: 185) adds
that ‘these four aspects (socio-cultural, attitudinal, perceptual and
psychological/cognitive) could be used to inform a model for research
on subtitle reception’. These very same aspects have proved to be of
utmost importance in the still ongoing study of SDH in Portugal; and it
is with these in mind that I address a situation that is changing just as
writing and reading are changing. In fact, technological and policy
changes are offering new challenges to all those working within AVT
and it would be interesting to see practices improve by taking advantage of the new products for the benefit of audiences who should not be
seen as minorities but as one of the many parts of a fragmented reality.
It is in this context of fragmentation and plurality that a different
type of subtitling solution, interlingual SDH, has emerged. A new concept to most people, interlingual SDH is gradually gaining visibility,
particularly in the DVD market where, further to intralingual subtitles
for the hearing-impaired, a track is provided with interlingual subtitles
for the hearing-impaired in a language other than that spoken in the
original soundtrack. The analysis of 200 randomly chosen DVDs, made
available in video rental shops in Portugal, showed that close to 38%
contained intralingual SDH in English; 9% contained a track with interlingual SDH (15 titles had interlingual SDH in German; and 3 in Italian);
and none contained SDH in Portuguese. Indeed, these numbers show that not many DVDs offer interlingual SDH, and those that do seem to apply the conventions of traditional subtitling on DVD, which, I believe, do not cater for the needs of people
with hearing impairment.
However unusual and scarce the offer, this ‘new product’, in its hybrid
make-up, lends itself to reflection and allows one to address problems
that are shared with the more common types of subtitling: both interlingual subtitling for hearers and intralingual SDH.
In its complexity, interlingual SDH might be regarded as a perfect
example of transadaptation (Gambier, 2003). On the one hand, as happens with traditional interlingual subtitling, one is faced with an
instance of translation between two different languages. On the other
hand, as happens with intralingual subtitling, further to taking speech
into writing, there is a need for the adaptation of multiple aural messages (speech, sound effects and music) so as to produce a visual (most
often verbal) substitute for the information that cannot be picked up
by people with hearing impairment. In my view, what most strongly
determines the nature of this kind of subtitling is the need to ‘adapt’ a
product to a target audience or addressee, which, according to Nord
(2000: 196), is:
... not a real person but a concept, an abstraction gained from the
sum total of our communicative experience, that is, from the vast
number of characteristics of receivers we have observed in previous
communicative occurrences that bear some analogy with the one
we are confronted with in a particular situation.
In other words, if we are to consider subtitling as ‘translational action’
(Vermeer, 1989: 221), serving a functional end, its skopos needs to be
perfectly understood by all those involved in the commission. Quite
often, the commissioners of SDH, and the subtitlers themselves, are not fully aware of the particular needs of their 'clients', for not much is given to them in terms of audience design or reception analysis. In fact, only
by knowing the distinctive features of the target audience will people
be reasonably aware of the possible effects of their work on the receptor.
Only then can anyone aim at the utopian situation where the ‘new
viewer’s experience of the programme will differ as little as possible
from that of the original audience’ (Luyken et al., 1991: 29).
No distinction appears to be made between being deaf and being hard-of-hearing in terms of intralingual subtitling. Both conditions go hand-in-hand as if they were complementary or even interchangeable. Guidelines for intralingual subtitling assume that their subtitling solutions cater for the needs of all alike and, in so doing, I would suggest that
they are catering for the needs of neither. According to Nord (2000: 195):
... the idea of the addressee the author has in mind, is a very important
(if not the most important) criterion guiding the writer’s stylistic and
linguistic decisions. If a text is to be functional for a certain person or
group of persons, it has to be tailored to their needs and expectations.
An ‘elastic’ text intended to fit all receivers and all sorts of purposes is
bound to be equally unfit for any of them, and a specific purpose is
best achieved by a text specifically designed for this occasion.
If subtitlers are to produce functionally adequate texts for these
particular people, it seems fundamental that, however difficult it may
be, one clarifies what is meant by ‘deaf’ and by ‘hard-of-hearing’, to
which I add another term: ‘Deaf’.
Deaf vs. hard-of-hearing (HoH)
Medically and clinically speaking, one is considered to be ‘deaf’ whenever hearing loss is so severe that one is unable to process linguistic information through hearing alone.1 On the other hand, one is considered to
be ‘hard-of-hearing’ when there is a hearing loss, whether permanent or
fluctuating, which adversely affects an individual’s ability to detect and
decipher some sounds. In other words, despite the hearing loss, there is
still some residual hearing. Rodda and Grove (1987: 43) draw the line
between audiological and social understanding of these terms:
Defining hearing loss is a fairly simple matter of audiological assessment, although the interpretation of the simple pure-tone audiogram
is more difficult. Defining deafness is exceedingly complex; it is as
much, if not more, a sociological phenomenon as an audiological
definition.
This particular reference to the sociological dimension of deafness is of
utmost importance when it comes to knowing our addressees better. In
fact, it leads us to a different concept, the concept of ‘Deaf’ which refers
to the cultural heritage and community of deaf individuals, that is the
Deaf culture or community. In this context, the term applies to those
whose primary receptive channel of communication is visual. It is also
in this context that we refer to the Deaf as a minority group, with a
language of their own (national sign language) and with an identity
that differs from that of hearers and that of the hard-of-hearing. Further
to this, we are dealing with a group of people whose first language is not
that of hearers in their country. The (oral) national language is, even for
those Deaf people who are Bi-Bi (bilingual and bicultural), a second
language with all that this entails.
With all this in mind, I will try to reformulate the traditionally
canonical ‘for the deaf and hard-of-hearing’ by readdressing hearing
and deafness in terms of the two ends of a scale that spans between two
distinct poles: being a ‘hearer’ and being ‘Deaf’. Here too, I prefer to
address deafness in sociological rather than audiological terms. To be
‘hard-of-hearing’ means to be somewhere in the ‘grey zone’, midway
between two distinct realities. Usually, people who are hard-of-hearing
identify themselves with the hearing community. They have acquired
the condition through age or disease but they mainly partake of the
social order of the community in which they were raised. Their mother
tongue is the spoken language of their national group. They have a
notion of the sound systems which inhabit their environment. Even in
cases of severe hearing loss, there is usually some residual hearing and,
above all, sound is still stored in a fairly accessible memory bank, ready
to be retrieved through effective stimuli. Being Deaf, on the other hand,
means belonging to a world in which there is no sound, not necessarily
meaning that there is silence. Emmanuelle Laborit, a French Deaf
actress and author of an autobiography, The Cry of the Gull (1998),2
describes silence as ‘the absence of communication’, adding that ‘I’ve
never lived in complete silence: I have my own noises that are
inexplicable to hearing people. I have my imagination and it has its
noises that are inexplicable to hearing people’ (ibid.: 10). Further to this,
Deaf people communicate among themselves using their natural language: sign language. Laborit (ibid.: 88) clarifies:
That’s right. As far as I am concerned, sign language is my voice and
my eyes are my ears. Frankly, I don’t feel deprived of anything. It’s
society that makes me handicapped, that makes me dependent of
hearing people, that makes it impossible to contact a doctor directly,
that makes me need to have conversations translated, that makes me
have to ask for help to make a phone call or for captioning on TV
(there are so few captioned programs). With more Minitels and captioning, I or rather we, the deaf, could have better access to culture.
There wouldn’t be a handicap, a deadlock, a border between us.
Up until fairly recently, Deaf children were brought up to ‘oralise’. This
meant that they were taught to pronounce words and to make use of lip
reading to understand speech. Communicating through sign language
was not widely accepted and Deaf people were forced to use the national
oral language, regardless of the fact that their hearing and speech apparatuses were not tuned to such a task. The ways in which Deaf children
are educated determine their development and their perception of the
world. Their proficiency in the use of language will be of paramount
importance in their ability to decode messages. Needless to say, Deaf
people’s reading competence will necessarily be different from that of
hearers, and that of the hard-of-hearing as well. And subtitles are all
about reading.
Reading film ... in search of relevance
Watching a film is all about decoding information that is conveyed
through multiple channels (speech, sound, and image). Reading film is
a complex process. It is problematic to say that full access to audiovisual
texts is ever attained, even in the case of people with no impairment. In
accepting the dialectic interaction between the producer and the receiver in the construction of meaning, and in knowing that the audiovisual text is a perceptive whole that does not equal the sum of its parts, it becomes obvious that decoding polysemiotic texts is a demanding task for all.
Speech is usually decoded through cognitive processing, and sound
effects and visual signs are often impressionistic and concurrent.
Image, sound and speech are interrelated, quite often in a redundant
manner. Such redundancy makes access to the message easier and it is
only when there is no direct access to one of the elements (either because of some sensory impairment or because of a lack of knowledge of the Source Language) that subtitles gain importance. Interlingual subtitles usually
bridge the gap between the SL and the language of the target audience;
intralingual subtitles replace more than speech, they are expected to
account for paralinguistic information and for acoustic elements. If
they are to be fully integrated with the visual and auditory channels,
subtitles must also guarantee a certain degree of redundancy in relation to concurrent information, fitting in with the whole so as to ease, rather than add to, the decoding effort. Díaz Cintas (2003: 195) sheds light on the issue by
reminding us that:
Even for those with adequate command of the foreign language,
every audiovisual product brings with it a range of additional
obstacles to comprehension: dialectal and sociolectal variation, lack
of access to explanatory feedback, external and environmental sound
level, overlapping speech, etc., making translation of the product
crucial for the majority of users.
This is all the more pertinent in the case of people with hearing impairment. To an even greater degree than for people who do not understand the SL, for the Deaf and HoH subtitles are essential rather than redundant. They are the visual face of sound. For the HoH they
are a stimulus and a memory exercise; for the Deaf, they are the only
means to gain access to aural information. However redundant, sound
and image tell different stories. Watching images alone will not allow us to grasp the whole, just as listening without viewing will never allow
for a full understanding of the whole audiovisual construct. When subtitling for these specific audiences – the Deaf and the HoH – it is up to
the subtitler to turn into written words both the dialogue that is heard
and the sound effects that are only apprehended. In other words, subtitlers need to be familiar with filmic codes so that they may capture
the different effects in order to transfer sensations and emotions into
words. Deciding what is relevant is a difficult task, for relevance is
determined by the needs of the addressee. Ideally, as mentioned before,
subtitlers should aim at producing effects on their audience equivalent to those produced on the audience of the original. But, as Gutt
(1991: 384) puts it:
... this raises the question of what aspects of the original the receptors would find relevant. Using his knowledge of the audience, the
translator has to make assumptions about its cognitive environment
and about the potential relevance that any aspects of the
interpretation would have in that cognitive environment.
Knowing the addressee’s cognitive environment is relatively easy for
subtitlers who translate into their mother tongue and for an audience
who is believed to belong to their social group. But when it comes to
subtitling for the Deaf, hearing subtitlers rarely have true knowledge of
the cognitive environment of their target audience. This may be due to
the lack of specific training in the area, or even due to the fact that
subtitlers are not aware that their ‘translation action’ is specifically
directed towards addressees that do not share the subtitler’s language or
culture. According to Gutt (ibid.: 386) and his approach to translation
and relevance:
... whatever decision the translator reaches is based on his intuitions
or beliefs about what is relevant to his audience. The translator does
not have direct access to the cognitive environment of his audience,
he does not actually know what it is like – all he can have is some
assumptions or beliefs about it. And, of course, as we have just seen,
these assumptions may be wrong. Thus our account of translation
does not predict that the principle of relevance makes all translation
efforts successful any more than it predicts that ostensive communication in general is successful. In fact, it predicts that failure of communication is likely to arise where the translator’s assumptions about
the cognitive environment of the receptor language audience are
inaccurate.
This alone may account for many of the problems often found with
subtitles for the deaf and HoH, which I think should be rightfully
named ‘subtitling for the Deaf and HoH’. What is relevant? How do
these two addressees (for they are indeed two different communities)
perceive the world? How do they read film? And how do they read
words (subtitles)? How much information do they need for meaning to
be fully understood? When are words too much/too many? When are
they not enough? Only when subtitlers find answers to these questions will they be better equipped to determine their addressees’ profile. Only when that is done will they be able to decide on issues such
as how much and which information is to be presented in the text;
how this information is to be structured; and which linguistic and
stylistic devices are to be used to present the selected information, so
that the translation may yield ‘the intended interpretation without
putting the audience to unnecessary processing effort’ (Gutt,
1991: 377). Further, and keeping the audiences’ needs in mind, I would
posit that subtitlers should ask what supplementary information begs to be added for their benefit.
In fact, when asked to comment upon the open subtitles offered on
Portuguese television programmes, the criticism that was most often
made by Deaf respondents was that subtitles were ‘difficult’. Deeper
probing led to the conclusion that the respondents were referring to the
great effort they put into decoding (any) written text and particularly
into the reading of subtitles. Not many people referred to the need for
supplementary information on sound effects. Most of the respondents
had little notion that such information could be added at all, as at the
time of the survey in early 2003, there was no significant offer of intralingual subtitles for the Deaf in Portugal. They demanded nothing because they expected nothing; they did not know they could get anything different. But they voiced their frustration at not being able to follow subtitles: 'subtitles come and go too rapidly'; 'the vocabulary is too difficult'; 'the sentences don't make sense; they are too long and convoluted', were some of their comments.
These reactions to common, open subtitles call for further reflection
on the different needs of the Deaf audience. One might question what
could be done to attain the minimax effect (Kovačič, 1994: 246) to
improve reception or to guarantee the viewer’s comfort. The answer is
not always straightforward. But further discussion on how the Deaf
read might shed some light on the issue. Laborit (1998: 120) speaks of
her reading experience in the first person:
I was one of the rare students in the Morvan program who read a lot.
As a general rule, deaf people don’t read very much. It’s hard for
them. They mix up the principles of oral and written expression.
They consider written French a language for hearing people. In my
opinion, though, reading is more or less image-based. It’s visual.
If we can extrapolate, what seems important here is the fact that Deaf
people in general do not enjoy reading very much, and whereas the Deaf
have this ‘visual’ reference base for the reading process, hearers complement it with an auditory reference system. Further to this, many have not
developed the tools that allow them to take a step forward from simple
word-processing to processing of a higher order such as inferencing and
predicting (often done subconsciously by the hearer), and planning,
monitoring, self-questioning and summarising (metacognitive techniques that are specific to highly skilled readers). Normative studies on
the reading abilities of Deaf people (Di Francesca, 1972; Conrad, 1979;
Savage et al., 1981; Quigley and Paul, 1984) substantiate this idea that
Deaf people attain very poor standards in reading. The results of the well
known study conducted by the Office of Demographic Studies at
Gallaudet College, USA, in 1971, indicated that ‘although the reading
skills of deaf students increase steadily from 6–20 years, they peak at a
reading level equivalent to Grade 4 in the United States school system
(approximate chronological age 9 years)’ (Rodda and Grove, 1987: 165).
It is widely accepted that subtitles are constrained by time and space
to the point of making language hostage to parameters that will dictate
options when writing subtitles. Such parameters vary in nature. Some
pertain to the programme itself (genre, global rhythm, type of action);
others to the nature of subtitles (spatial/temporal features; position of subtitles on the screen; pauses between subtitles; gap between dialogue and subtitles; density of information); and others to textual and paratextual features (semantic/syntactic coherence; register; role of punctuation). Further to these constraints, many more will derive from factors such as the medium (cinema, VHS/DVD, TV); the country of transmission (AVT policies and practices); producers’, distributors’ and clients’ demands; or, in the case of television broadcasting, the time of transmission and the expected audience.
The problems that hearers find in reading subtitles will obviously be
all the more acute for the Deaf viewer. Luyken et al. (1991) suggest that
160
Josélia Neves
average adult reading speeds hover around 150 to 180 words per minute.
This number varies according to the manner in which the text is presented, to the quantity and complexity of the information, and to the
action on the screen at any given moment (De Linde, 1995: 10). The
6-second rule has been widely accepted as a rule of thumb for ‘readable’
subtitles. On being questioned about the adequacy of such a ‘norm’ for
the specific needs of the Deaf, and with reference to studies on how the
Deaf read subtitles, d’Ydewalle states that ‘the 6-second rule should be
replaced by a 9-second rule as they are typically slow readers’ (personal
email, 17 February 2003).
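The arithmetic behind these reading-speed norms can be made concrete. As a rough sketch (the function name and the sample figures are illustrative, not drawn from any of the studies cited here), the minimum display time for a subtitle follows from its word count and an assumed reading speed in words per minute:

```python
def min_display_seconds(words: int, wpm: float) -> float:
    """Minimum time a subtitle should stay on screen,
    given an assumed reading speed in words per minute."""
    return words * 60.0 / wpm

# A full two-line subtitle of 12 words read at 120 wpm needs
# the classic 6 seconds; at 80 wpm, typical of a slower reader,
# it needs 9 seconds - which mirrors the suggested shift from
# a 6-second to a 9-second rule for Deaf viewers.
print(min_display_seconds(12, 120))  # 6.0
print(min_display_seconds(12, 80))   # 9.0
```

Seen this way, the 9-second rule is simply the 6-second rule recomputed for a reading speed roughly a third slower.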
Reducing the amount of information is not, in my opinion, the solution to this need for extra processing time. Reduction is often achieved
through the omission of information or by sacrificing interpersonal
meaning. Redundancy is a feature of all natural languages and serves to
make messages better understood. Such redundancy – phonetic, lexical,
collocational or grammatical in nature – serves mainly to make up for
possible interference, or noise. In subtitles, losing redundancy for the
sake of economy is common practice, often resulting in a greater processing effort on the part of the reader/viewer. In fact, subtitling goes against
the grain of most translation practices in that rather than extending,
subtitles usually seek to condense as much information as possible in as
little space as possible. This often results in reduction strategies that can
make decoding rather taxing, particularly for those who are reading
subtitles in their second language.
In the case of the Deaf reader/viewer, redundancy is of utmost
importance, for such elements will make reading less demanding. This
obviously adds tension to the difficult equilibrium between economy
and expenditure. Economy cannot be had if it is at the expense of
meaning. The first priority, when subtitling for this particular audience, needs to be the conveyance of meaning as fully as possible in as
‘readable’ a manner as possible, and only then, in as condensed a form
as possible. Quite often, extra reading time might need to be given to
allow for the reading of longer subtitles; however, if drawn out subtitles
mean the use of simpler structures or better-known vocabulary, it may well be worth sacrificing synchronisation and having subtitles appear a little earlier or stay on a little longer, thus adding to reader comfort.
Cohesive devices are also easily sacrificed in the search for economy.
When cohesion is given by the image, subtitles may jeopardise cohesive devices; the problem arises when cohesion needs to be reinforced
through language for lack of visual redundancy. What is conveyed
Interlingual Subtitling for Deaf and Hard-of-Hearing
161
through subtitles is never the message but a possible version of the
message and special care needs to be taken so that nothing can
contradict that particular version. It is easy for the image to add,
confirm or even contradict the verbal message conveyed. For Deaf
audiences, who depend on visual cues to assist language decoding,
special care needs to be taken so that subtitles may be truly complementary and so be seen as part of the whole, never as an obtrusive, or
even cumbersome, add-on.
Language has its own means of guaranteeing internal and external
redundancy. Speech naturally involves linguistic, paralinguistic and
non-linguistic signs. Paralinguistic signs cannot be interpreted except
in relation to the language they are accompanying. On the other hand,
non-linguistic signs are interpretable and can be produced without the
co-existence of language. Non-linguistic signs or natural signs such as
facial expressions, postural and proxemic signs, gestures, and even some
linguistic features ‘are likely to be the most cross-culturally interpretable’
(Cruse, 2000: 8) but also the source of many misunderstandings in
cross-cultural communication. Such kinetic elements are no greater a
problem to the Deaf than they are to hearers.
However, paralinguistic signs are more often hidden from the Deaf
person, for they are only sensed in the tone or colour of voice in each
speech act. There are times when such paralinguistic signs actually
alter the meaning of words; and more often than not, punctuation
cannot translate the full reach of such signs. Whenever such signs
have informative value, there is a need for explicitation, one of the
‘universals of translation’ according to authors like Toury (1980: 60).
In subtitling for the Deaf, explicitation is a fundamental process to
compensate for the aural elements that go missing. In the case of paralinguistic information, there might even be a need to spell out what
can only be perceived in the way words are spoken. Sound effects
(such as ‘phone rings’) are often straightforward to describe, but
expressing what comes with the tone of voice (irony, sadness, irritation) can be difficult. In feature films and series, paralinguistic features are most frequently found in moments when the story is being
pushed forward by emotional interplay or when characters reveal
something in themselves that goes against their words. This could
mean that adding extra information might alter the intended pace or
cut down on the tension. Finding adequate solutions for the problem
is a challenge for those working in the area. The use of smileys to convey such information has proved effective thanks to the economy of
their pictorial nature. Deaf TV viewers in Portugal have reacted most
favourably to the introduction of commonly used smileys in the subtitling of a primetime soap opera.3 Even though the introduction of
information about paralinguistic signs may be considered redundant
for hearers, it is fundamental for the Deaf if they are to get a better
perception of the expressiveness of the intersemiotic whole. A study
conducted by Gielen and d’Ydewalle (1992: 257) concludes that ‘redundancy of information facilitates the processing of subtitles’. If Deaf
viewers are to gain better access to audiovisual texts, such redundant
components of speech will need to be made explicit, and then too, be
redundant in terms of the subtitles that convey speech utterances.
Such redundancy will be a fundamental element to boost the limited
short-term working memory characteristic of Deaf individuals (Quigley
and Paul, 1984).
Short-term working memory is of crucial importance in the reading
process for it allows the reader to understand complex sentences and
linked ideas. Coherence and cohesion are only possible if the reader can
keep different fractions of information active so that, once processed
together, meaning might be achieved. This does not mean that Deaf people are cognitively impaired and unable to process information efficiently; what this means is that they make use of other strategies (strong
visual memory) to process information. As Rodda and Grove (1987: 223)
put it:
Hearing impairment does not incapacitate their central comprehension processes. Provided deaf readers can grasp the semantic context
of a message, they seem to be able to exploit the syntactical redundancy of natural language and to comprehend its contents with surprising degree of efficiency.
Deaf viewers will benefit from subtitles that are syntactically and
semantically structured in ways that will facilitate reading. Long complex sentences will obviously be more demanding on their short-term
working memory. Short direct structures, with phrasal breaks (e.g.
avoiding the separation onto different lines of the article from the
noun), will ease comprehension and make the reading of subtitles far
more profitable. This does not mean that Deaf people cannot cope
with complex vocabulary. In fact, subtitles may be seen as a useful
means to improve the reading skills of Deaf viewers, as well as an
opportunity to increment both their active and passive vocabulary.
However, difficult vocabulary should only be used to some specific
purpose, and only when there is enough available time for the
processing of meaning. In fact, this principle could apply to all sorts
of subtitling for, as Gutt (1991: 380) reminds us when referring to
translation in general:
... rare lexical forms [ ... ] are stored in less accessible places in memory. Hence such unusual forms require more processing effort, and
given that the communicator would have had available a perfectly
ordinary alternative, [ ... ] the audience will rightly expect contextual
effects, special pay-off, from the use of this more costly form.
In the case of intralingual subtitling, where verbatim transcription of
speech is frequently sought, pruning text is particularly difficult. In
interlingual subtitling, functional shifts are less exposed and, therefore,
it may be easier to adapt text to the needs of the Deaf addressee. In the
case of intralingual subtitles, editing might be understood as not giving
the Deaf all that is given to hearers. Writing every single word on the
screen, transcribing every utterance, is not, in my view, serving the
needs of this particular audience. Not having enough time to read subtitles and to process information; not understanding the meaning of certain words; or not being able to follow the flow of speech, cannot be
understood as being given equal opportunities. Paraphrasing, deleting
superfluous information, introducing explanatory notes, making explicit
the implicit, may mean achieving functional relevance for the benefit of
the target audience. Borrowing Reiss’ terminology (1971: 161), in SDH we
need to make ‘intentional changes’ for our readers are definitely not
those intended for the original text. In order to gain accessibility for
those who cannot hear, we need to strive for ‘adequacy of the TL reverbalisation in accordance with the “foreign function” ’ (ibid.) that is being
aimed at. Gutt (1991: 377) also sheds light on this issue:
Thus if we ask in what respects the intended interpretation of the
translation should resemble the original, the answer is: in respects
that make it adequately relevant to the audience – that is, that offer
adequate contextual effects; if we ask how the translation should be
expressed, the answer is: it should be expressed in such a manner
that it yields the intended interpretation without putting the audience to unnecessary processing effort.
Making sound visible
As mentioned before, audiovisual texts are multimodal in their making
and sound plays an important role in their narrative force. Most of the
time, hearers make very little effort to process sound in a film. Unlike speech, which requires cognitive effort to be decoded, sound effects and
instrumental music convey meanings in a discreet way. According to
Monaco (1981: 179), ‘we “read” images by directing our attention; we do
not read sound, at least not in the same conscious way. Sound is not
only omnipresent but omnidirectional. Because it is so pervasive, we
tend to discount it’.
Furthermore, in contrast to speech and to paralinguistic features pertaining to oral expression, sound and music need no translation. A
hearer will quite easily pick up the suggested meanings transmitted
through sound effects that often underline images, sustaining them
and guaranteeing continuity and connectivity. Processing music is
somewhat more difficult. However, hearing viewers have grown to
understand filmic conventions and have come to associate musical
types with certain genres and with particular filmic effects. Quite often,
film music has been taken from its intersemiotic context to grow in the
ear of radio listeners, and to gain a life of its own. Yet, when associated with the image, its meaning is strongly felt even if its existence is subtle and
little more than a suggestion. In fact, ‘it makes no difference whether
we are dealing with speech, music, or environmental sound: all three
are at times variously parallel or contrapuntal, actual or commentative, synchronous or asynchronous’ (Monaco, 1981: 182) and it is in
this interplay that filmic meaning grows beyond the images and the
whole becomes artistically expressive.
One might question the pertinence of describing music and sound
effects to people who have never been able to hear. There is no question
that to the HoH, a comment on sound effects or music may activate the
aural memory and may even guide viewers to consciously direct their
residual hearing capacities towards relevant content. Those who have
lost their hearing later on in life will recall previous experiences and
may find cultural, social and historical references in certain melodies or
rhythms. But even those who were born deaf can, never having heard, make those very same connections, creating their own references, for
even though they cannot hear, they live in a world of sound, which
they perceive through vibrations and they learn to appreciate through
other sources (literature, television, etc.). Once again, a Deaf person’s
testimony may clarify doubts (Laborit, 1998: 17):
I was lucky to have music when I was a child. Some parents of deaf
children think it’s pointless, so they deprive their children of music.
[ ... ] I love it. I feel its vibrations. [ ... ] I feel with my feet, or my whole
body if I’m stretched out on the floor. And I can imagine the sounds.
I’ve always imagined them. I perceive music through my body, with
my bare feet on the floor, latching onto the vibrations. The piano,
electric guitar, African drums, percussion instruments, all have
colors and I sway along with them. That’s how I see it, in color. [ ... ]
Music is a rainbow of vibrant colors. It’s language beyond words. It’s
universal. The most beautiful form of art that exists. It’s capable of
making the human body physically vibrate.
I firmly believe that every effort should be made to convey sound and
music visually. Present subtitling systems do not allow for much more
than alphabetical characters in the form of subtitles. Many teletext systems do not allow for the use of symbols, such as , and a # is often used
to indicate the presence of music. New strategies are already found in the
subtitling of DVDs, where special care is given to identifying music
through the use of symbols; yet the same cannot be said for television for,
at present, technical conditions are still hindering progress in this domain.
I believe that with the advent of digital technology, it will be possible to
convey music visually, thus adding more information for those who cannot hear. Until that day comes, subtitlers will need to be sensitive to sound
and music to be able to decode their inherent messages and to find adequate and expressive solutions to convey such sensations verbally. Even
though it may be difficult to find words that fully convey the expressive
force of sound and music, the translator (subtitler) should try to produce
an analogous aesthetic effect (Nord, 1996: 83), as well as to reproduce the
expressive content of the original in the case of thematic music.
Conclusion
To conclude, it seems appropriate to address the issue of SDH anew in
order to bring to the fore commonly held and often erroneous notions
about SDH: that SDH is intralingual subtitling; that the Deaf and HoH
are one and the same group; and that these specially conceived subtitles
are for hearing impaired viewers alone.
In the first case, and until fairly recently, this might have been a fact.
Indeed, and particularly on television, SDH has mainly been the intralingual transposition of speech from the oral to the written mode.
However, interlingual SDH might now be found on DVD, even if as yet
only in a small number of films and languages. This may lead us to
believe that common interlingual subtitles are sufficient and are used
by hearers and hearing-impaired alike. The latter assumption may be
true, but the former can only be a misconception, for Deaf viewers will
not only be reading subtitles in a second language but will be relating
them to yet a third lingua-culture, that of the original audiovisual text.
It would appear obvious that the future of interlingual subtitling for the
deaf and HoH lies ahead for, as Díaz Cintas (2003: 200) puts it:
... failing to account for this type of subtitling would imply a tacit
acceptance of the fallacy that the deaf and hard of hearing only
watch programmes originally produced in their mother tongue,
when there is no doubt that they also watch programmes originating
in other languages and cultures. This in turn would mean that they
are forced to use the same interlingual subtitles as hearing people,
when those subtitles are, to all intents and purposes, inappropriate
for their needs.
This takes us to the second notion that, regardless of the languages that
might be at stake, SDH serves the needs of Deaf and hard-of-hearing
alike. As mentioned above, it is misleading to consider the needs of each
of these groups similar in nature. It goes without saying that most of the
time neither of the two gets its due in what is presently offered. It would
be excellent to be able to arrive at a compromise that would not be too
taxing to either of the two. However, in an ideal world, these two audiences should be given different subtitles. Whilst that ideal situation is
not yet with us, it seems only natural to me that one should try to
address those at the far end of the spectrum, the Deaf, in the knowledge
that the hard-of-hearing will be the ones left to choose between two not
very adequate solutions. If they choose to follow SDH they will obviously need to make light of the extra care that has been put into the
making of such subtitles without succumbing to the feeling of being
patronised. In fact, such subtitles were made with a different audience
in mind and should, therefore, be read as such.
As for the last notion, that SDH can only be of help to Deaf and HoH viewers, this could not be further from the truth. Indeed, the extra information
about sound in intralingual SDH and the care with reading time and
reading ease, might be felt as quite unnecessary for the majority of people, but it might be of use to those who, for some reason, do not master
the target language. In this group I would include poor readers; non-native speakers trying to learn the language of the subtitles; and a vast
number of people with cognitive impairment.
At this point in time, only DVD can offer differentiated solutions
in up to 32 available tracks, but with television going digital and
interactive, audiences will be further fragmented and will be looking
for a translation solution that best suits their specific needs. This will
mean that subtitles will be viewed on call, and open subtitling, as we
now know it, will no longer make sense in its undifferentiated existence. The option of a translation solution will obviously require that
multiple solutions (dubbing, interlingual subtitles, intralingual and
interlingual SDH, adapted subtitling) be produced for each audiovisual
programme. The question of financial feasibility and cost effectiveness
is clearly pertinent. Language transfer of any kind costs money and it
is difficult to foresee whether providers of audiovisual materials will be
willing to pay for various versions of the same programme to suit all
needs and preferences.
In short, the special care that goes into making SDH easily read would most certainly be welcome in all sorts of subtitling solutions, and particularly helpful to the many viewers who, not being hearing-impaired, are less proficient readers. The extra information about sound, music and paralinguistic features, quite unnecessary for hearing viewers, might be seen to raise aesthetic awareness and to provide competent audiovisual consumers with an additional information channel. I truly believe that, rather than SDH, subtitles for the Deaf (SD) are essential for this particular group, helpful to the HoH, and not necessarily disruptive or annoying to many hearers, provided that they are made with sensitivity, sensibility and coherence.
Notes
1. See Hands & Voices (www.handsandvoices.org/resource_guide/19_definitions.html) for detailed definitions.
2. The original Le Cri de la mouette was published in 1994 and has since been
translated into a great number of languages. It is widely accepted by the Deaf
community as being one of the most illustrative accounts of the difficulties
the Deaf person finds in trying to be part of the hearing society.
3. In the knowledge that the Deaf use smileys in their SMS messages, a set of eight smileys was used in Portuguese subtitles to depict emotions and tone
of voice. After being tested on cards with a restricted group of Deaf people,
they were introduced in the subtitling of the Brazilian soap Mulheres
Apaixonadas, aired daily in Portugal by SIC, a commercial channel which
introduced SDH, via teletext, for the first time on 6 September 2004.
:(     sad
:)     happy
:-/    angry
:-S    surprised
:-&    confused (drunk, dizzy, ...)
;-)    irony / second meanings
;-(    acting out to be sad or angry
:-O    screaming
:-º    speaking softly
References
Conrad, R. (1979) The Deaf School Child. London: Harper & Row.
Cruse, A. (2000) Meaning in Language. An Introduction to Semantics and Pragmatics.
Oxford: Oxford University Press.
De Linde, Z. (1995) ‘“Read my lips”: subtitling principles, practices and problems’. Perspectives: Studies in Translatology 3(1): 9–20.
Díaz Cintas, J. (2003) ‘Audiovisual Translation in the third millennium’. In
G. Anderman and M. Rogers (eds) Translation Today. Trends and Perspectives
(pp.192–204). Clevedon: Multilingual Matters.
Di Francesca, S. (1972) Academic Achievement Test Results of a National Testing
Program for Hearing Impaired Students. Series D, 9. Washington, DC: Office of
Demographic Studies, Gallaudet College.
Gambier, Y. (1996) ‘Introduction: la traduction audiovisuelle, un genre nouveau?’. In Y. Gambier (ed.) Les transferts linguistiques dans les médias audiovisuels (pp. 7–12). Villeneuve d’Ascq (Nord): Presses Universitaires du Septentrion.
Gambier, Y. (2003) ‘Screen transadaptation: an overview’. Génesis – Revista
de Tradução e Interpretação do ISAI 3: 25–34.
Gielen, I. and d’Ydewalle, G. (1992) ‘How do we watch subtitled television programmes?’. In A. Demetriou, A. Efklides, E. Gonida and M. Vakali (eds) Psychological Research in Greece: Vol. 1, Development, Learning and Instruction (pp. 247–59). Thessalonica: Aristotelian University Press.
Gutt, E.A. (1991) ‘Translation as interlingual interpretive use’. In L. Venuti (ed.)
(2000) The Translation Studies Reader (pp. 376–96). London and New York:
Routledge.
Kovačič, I. (1994) ‘Relevance as a factor in subtitling reductions’. In C. Dollerup and A. Lindegaard (eds) Teaching Translation and Interpreting 2 (pp. 245–51). Amsterdam and Philadelphia: John Benjamins.
Kovačič, I. (1995) ‘Reception of subtitles. The non-existent ideal viewer’. Translatio, Nouvelles de la FIT – FIT Newsletter XIV(3–4): 376–83.
Laborit, E. (1993) Le Cri de la mouette. Paris: Robert Laffont.
Laborit, E. (1998) The Cry of the Gull. Washington: Gallaudet University Press.
Luyken, G.M., Herbst, T., Langham-Brown, J., Reid, H. and Spinhof, H. (1991)
Overcoming Language Barriers in Television: Dubbing and Subtitling for the European
Audience. Manchester: European Institute for the Media.
Monaco, J. (1981) How to Read a Film. The Art, Technology, Language, History, and
Theory of Film and Media. New York and Oxford: Oxford University Press.
Nida, E. (1964) ‘Principles of correspondence’. In L. Venuti (ed.) (2000) The
Translation Studies Reader (pp. 126–40). London and New York: Routledge.
Nida, E. (1991) ‘Theories of translation’. TTR – Traduction, Terminologie, Rédaction.
Studies in the Text and its Transformations 4 (1): 19–32.
Nord, C. (1996) ‘Text type and translation method, an objective approach to
translation criticism: Review of Katharina Reiss, Möglichkeiten und Grenzen der Übersetzungskritik’. The Translator 2(1): 81–8.
Nord, C. (2000) ‘What do we know about the target-text receiver?’. In A. Beeby,
D. Ensinger and M. Presas (eds) Investigating Translation (pp. 195–212).
Amsterdam and Philadelphia: John Benjamins.
Quigley, S. and Paul, P.V. (1984) Language and Deafness. California: College-Hill
Press.
Reiss, K. (1971) ‘Type, kind and individuality of text. Decision making in translation’. In L. Venuti (ed.) (2000) The Translation Studies Reader (pp. 160–71).
London and New York: Routledge.
Rodda, M. and Grove, C. (1987) Language, Cognition and Deafness. London:
Lawrence Erlbaum Associates.
Savage, R.D., Evans, L. and Savage, J.F. (1981) Psychology and Communication in
Deaf Children. Toronto: Grune & Stratton.
Toury, G. (1980) In Search of a Theory of Translation. Tel Aviv: The Porter Institute
for Poetics and Semiotics.
Vermeer, H.J. (1989) ‘Skopos and commission in translational action’. In
L. Venuti (ed.) (2000) The Translation Studies Reader (pp. 221–32). London and
New York: Routledge.
13
Audio Description in the
Theatre and the Visual Arts:
Images into Words
Andrew Holland
Introduction
The UK’s Royal National Institute of the Blind (RNIB) has defined audio
description (AD) as ‘an enabling service for blind and partially sighted
audiences [ ... ] describing clearly, vividly and succinctly what is happening
on screen or theatre stage in the silent intervals between programme
commentary or dialogue – in order to convey the principal visual elements of a production’.1 Essentially, it is an attempt to make accessible a
work of theatre – or any other audiovisual production – for an audience
who are either blind or have partial sight by giving in a verbal form some
of the information which a sighted person can easily access.
In broad terms, a description in the theatre consists of two main
elements. The first is a description of the set and costumes – the design
or style of the production. This information is usually given before a
performance, and may be pre-recorded. The second element is a description of the action which takes place during the play. In theatre, this has
to be given live in order to accommodate changes in pace which are an
integral part of live performance. Audience members are present in the
auditorium and wear a headset (which can use infrared or radio technology) to listen to the description. This experience is very similar to
the way in which you might listen to simultaneous interpreting.
The crucial skill for the describer here is to time the description so
that it does not interfere with the words being spoken by the actors
from the stage. For films and recorded TV programmes, of course, the
description of action is timed and recorded to run alongside the
soundtrack.
The process of audio describing – some examples
The simple definition of description as ‘an enabling service’ which
concentrates on the ‘principal visual elements’ of a play, film, or work
of art, does not get us very far in understanding how to describe something. It leaves a number of questions unanswered. In theatre we might
ask the question: What makes a character? Is describing what they wear
and what they look like enough? In the visual arts the same question
pertains: By describing the physical characteristics of a work of art, are
we really describing the art? It was Braque (1952) who noted: ‘One can
explain everything about a painting except the bit that matters’.2
Before going into these questions, I want to consider three examples
of description, all of which have something in common. The first is
from Act 1, Scene V of Twelfth Night by William Shakespeare, and
occurs when Olivia quizzes her steward Malvolio about a young man
at her gate:
‘What kind of man is he?’ Olivia asks.
‘Why, of mankind’, comes Malvolio’s reply.
‘What manner of man?’
‘Of very ill manner; he’ll speak with you, will you or no.’
‘Of what personage and years is he?’ She enquires.
‘Not yet old enough for a man, nor young enough for a boy; as a squash is before ’tis a peascod, or a codling when ’tis almost an apple. ’Tis with him in standing water between boy and man. He is very well favoured, and he speaks very shrewishly. One would think his mother’s milk were scarce out of him.’
The other two examples are from the book The Man Who Mistook His
Wife For A Hat, by Oliver Sacks.3 When examining a patient who
appeared to have problems with his sight, Dr Sacks handed him an
object to identify. The patient described it as (1985:13): ‘A continuous
surface [ ... ] infolded on itself. It appears to have [ ... ] five outpouchings,
if this is the word [ ... ] A container of some sort? [ ... ] It could be a
change-purse for coins of five sizes’. A second object was described like
this: ‘About six inches in length [ ... ] A convoluted red form with a linear green attachment [ ... ] It lacks the simple symmetry of the Platonic
solids, although it may have a higher symmetry of its own’.
What do these three extracts have in common? They certainly give us
details about what or who they are describing, but do they clarify or
confuse?
In the first example, Malvolio is being deliberately obtuse, and his
answer tells us more about him than it tells us of the ‘young man at the
gate’. We could perhaps make a stab at his age, but his height? Build?
Hair colour? Skin colour? Looks? Clothes? Demeanour? Social standing?
None of these details are divulged. In the other two examples the descriptions are certainly detailed and closely observed, but do they communicate anything at all to a third party? How much easier would it have
been for us to understand if we knew that the ‘continuous surface,
infolded on itself, with its five outpouchings’ was, in fact, a glove. Sacks
(1985: 9) tells us about his patient that he could only see parts, not the
whole picture: ‘His eyes would dart from one thing to another, picking
up tiny features, individual features [ ... ] A striking brightness, a colour,
a shape would arrest his attention and elicit comment – but in no case
did he get the scene-as-a-whole’. In contrast, an audio description aims
to provide the details that allow the picture as a whole to be formed.
Working for the theatre
Vocaleyes (www.vocaleyes.co.uk) was set up in 1998, following two
research projects I undertook for the Arts Council of England. It was
constituted as a charity to provide professional audio description services to theatres and to increase the provision and quality of access to
visually impaired people. At the time, although a number of theatres
provided audio description, this was mostly done on a volunteer basis,
and the describers were resident in theatres.
Initially Vocaleyes served the needs of touring theatre. The rationale
behind this was that with the volunteer describers being resident in
theatres, a production which toured to, say, ten of these theatres would
be described by ten different sets of describers. At each theatre the
describers would have to write the description from new and some
would have very little time to prepare. Some descriptions would be good
and some not so good. Some describers would have had training, others
would be self-taught.
The language used would be different; the actions described would be
different too; the ability of the describer to time their script and to
present it in sympathy with the mood of the play would vary from place
to place. No other part of the artistic output would be subject to these
regional variations. The result from the point of view of access is that a
blind or partially sighted person attending each of the ten theatres on
this tour would experience ten totally different productions, whereas
their sighted companion would experience the same production ten
Audio Description in Theatre and Visual Arts
173
times. Access, then, is not neutral. It changes the entire nature of the
artistic experience.
Vocaleyes aimed to create a high quality professional service by
placing the describers with the touring production rather than with the
receiving theatre. Having the describers move around the country with
the production meant that the description could remain consistent
from theatre to theatre, and even develop as the describers became
more and more familiar with the production, and as the production
itself developed over the length of a tour. Placing the describer nearer
the creative process, it also allowed describers to become familiar with
certain directors and the specific challenges presented by their work,
making them more able to describe – in Dr Sacks’ phrase – ‘the picture
as a whole’.
Impartiality
When I trained to be a describer we were constantly reminded to be
‘impartial’ and ‘objective’. Our job was to ‘say what we see’. The more
description I undertake, however, the more impossible I feel that
position to be. We were warned away from talking to creative people: ‘If
you talk to the director or the actors’, we were told, ‘they’ll tell you what
they want you to see’. While there is a certain amount of logic in this
assertion, there is a counter claim that this information – what they
‘want you to see’, their artistic intention if you like – is a valuable thing
to know about.
To my mind, a describer should wish to know as much as possible about the art they are describing – a position which appears more obvious in fields that have nothing to do with theatre or the visual arts; quite literal fields, in fact, such as cricket. There are limited
parallels between audio description and cricket commentary. In some
ways they are the complete opposite: in cricket, not much happens and
the commentators have an awful lot of time to describe it – so they talk
about the weather and the buses and the birds – whereas in theatre,
masses happens and the describer has no time at all. In cricket, the
action is spontaneous whereas in theatre (most theatre at least) the
action is rehearsed and repeated. But a good cricket commentary relies
on the commentator not only knowing the rules – what’s going on –
but also what the two teams are trying to achieve by their various
tactical manoeuvres. A commentary given by someone who knew nothing of the game would make no sense. The listener would not know
what was being attempted, or what achieved. The whole narrative of the
174
Andrew Holland
game would be lost and instead it would be reduced to isolated, abstract
actions with bat and ball – in the same way that the glove was reduced
to isolated, abstract physical details. Now, to some people cricket is just
a whole load of isolated, abstract actions with bat and ball, but not to
followers of the game, and it is these people who the commentary is
there to serve.
The same is true in theatre. The people who come to the theatre have
an interest in it or they would not be there. They will not all have
worked in the theatre, in the same way that cricket followers have not
all played cricket. But unless the person who describes the action of a
play has a good understanding of theatre as a whole the narrative of the
action will be reduced to a string of meaningless details. So describers
should not be afraid to engage with the creative team on a production.
They should find out as much as they can. Once in a position of
knowledge, it is then up to the describer to choose whether to use this
information or not.
Decision-making in audio description
I would like to offer an example from a production of Henrik Ibsen’s Ghosts (see, for example, Meyer, 1980). The set was the garden
room of Mrs Alving’s country house. It was a large, square room, with
walls about thirty feet high. The walls were panelled with wood and
painted a pale, grey-blue. In the wall to the left, a tall door led through
to a dining room. In the wall to the right, a slightly smaller door led to
a hallway. Most of the rear wall of the room was cut away, and the room
extended back to form a glass conservatory, beyond which was a view of
bleak, steep mountains. Placed around the walls were a number of simple, white-painted wooden chairs. In the centre was a white, circular
pedestal table.
I talked to the director about the set and he said to me: ‘It’s a room
with no history’. While I could understand what he meant, I felt that
the phrase was too loaded to use directly. It didn’t actually describe
anything that was there. I only knew what he meant by the phrase ‘It’s
a room with no history’ because I could see the room. But the phrase
did bring out something about the atmosphere the room gave off; how
ordered it was. Something that was an important part of the design.
There was nothing in the room specific to its owner, Mrs Alving.
No pictures or photographs or ornaments on the mantlepiece. No
knick-knacks. No clutter. So while the phrase ‘This is a room with no
history’, would have a limited meaning to a visually impaired person
on its own – and indeed could be interpreted in a number of ways which
might be misleading – a description could express the concept through
the physical details of the room. A description could structure the
details in such a way as to emphasise the ordered sterility, to bring out
how the distant bleak mountains and the grey, flat sky made the room
feel isolated, cut off. This feeling, which is being reinforced to a sighted
audience all the time they are watching the play, needed to be present
in the earlier description so that the set had an artistic force, and was
not just a geographical placing of walls and furniture. In short, so that
it was a ‘set’ and not just a room.
A decision which describers also have to make is where to strike the
balance between the literal truth and imaginative truth of something.
A production of Philip Pullman’s The Firework-Maker’s Daughter, for
instance, used a host of imaginative illusions in its telling of the story;
a physical style of theatre for which the production company Told by an
Idiot is well known. As the heroine Leeyla lights the fuse of a firework,
a woman dressed in red, with red gloves, their fingers ending in red
metallic tassels, comes on to the stage and moves along the fuse, her
tasselled fingers waggling to become sparks. The two levels – one literal
and one imaginative – coexist in the same moment, and there is comic
potential in the fact that the spark is, in fact, a human being.
In another scene, Leeyla is on a journey. She wears a simple Chinese-style costume with a conical straw hat. As she walks, an actor takes the
hat and holds it at a distance so that it resembles the volcano to which
Leeyla is travelling. The actor pulls a party popper underneath so that
streamers fly up through a hole in the hat, like the volcano erupting. As
sighted viewers we are able to appreciate this moment both in terms of
the narrative and how it is technically achieved. We are admiring the
picture that is created as well as the artistry involved in its creation.
Any description of this moment therefore has to do justice to the physicality of the performances as well as supporting the narrative. It needs
to reflect the technical or literal truth as well as the imaginative truth.
Too far in one direction and we miss out on the performance style. Too
far in the other direction and we wonder why someone has stolen
Leeyla’s hat.
But where to strike the balance will vary from play to play. Let us take
a look at two quick examples from a stage production of Peter Pan. Peter and the Lost Boys find themselves at the Mermaids’ Lagoon, a shimmering bay of green-blue sea which laps against crystal cliffs. In the centre
of the bay is a large pock-marked rock which at low tide resembles a
human skull. During the play, characters dive extravagantly off the
rock into the sea, swimming around, with their heads bobbing in the
waves as they tread water. At least this is the illusion. The reality is that
the sea is made from sections of green-blue cloth which stretch out
from the rock in the centre of the bay to the cliffs at the back. Each section of cloth is edged with elastic, and is constantly moving to recreate
the lapping of the sea. When the characters dive, they are actually
jumping down through the gap between two billowing cloth sections
on to a crash mat beneath, then poking their heads up between the
cloth and the crystal cliffs, as though treading water. The question is,
which version of this description should we give? The first suggests the
water is real, which it is not. The second is prosaic and undermines the
illusion. We do not want crash mats in this beautiful lagoon.
Again, a sighted audience is able to sustain both worlds simultaneously, so in the end we went for something of both. The introductory
notes described both the way the sea worked technically and the
impression that was created. During the play itself, we supported the
illusion of it being sea, while slipping in the odd word which reminded
people of the way the illusion was achieved.
The other big question in Peter Pan is how to describe the flying. No
matter how clever the mechanism involved, an audience is always
aware that it is not ‘real’. But to describe this great moment in literal
terms – ‘Peter, Wendy and the boys get hooked up on to wires and are
then hoisted into the air, kicking their legs and waving their arms
about as though flying’ – would be to seriously short-change our
audience. So again, we need to find a way to do both, dispensing with
the technical stuff before the story begins, so that when the time
comes and Peter sprinkles his magic dust, the children do what those
characters do – and fly.
The live nature of theatre
I mentioned earlier that unlike cricket, where the action is spontaneous,
in most theatre the actions are rehearsed and repeated night after night.
But this is not always the case; a fact which again emphasises the advantage of working closely with a company and knowing how they work.
Some directors, for instance, do not block their actors’ moves.
This means that from one performance to the next, the actors will –
externally – do different things. If you only have the opportunity to see
the production once or twice, description can be tricky as describers are
constantly being surprised. They have timed the description of a certain
action to fit within a certain pause. But in the next performance, not
only is the action different but the length of the pause is too. This is an
unusual way of working and may seem confusing to some. But to the
director, it emphasises the live, unfixed nature of theatre which distinguishes it from other art forms. You do not want to reheat the same meal
every night but to use the same ingredients to cook it afresh. Understanding
something of this approach is important if you want to describe more
than just the actions. It means that, as a describer, you can begin to look
at the production differently, focusing on the internal motivations of the
characters rather than their outward behaviour.
An example of what happens when you do not adequately reflect
motivations occurred during one description I heard. In David Halliwell’s
play Little Malcolm and His Struggle against the Eunuchs (1967), art student
Malcolm holds court in his dingy flat, discussing art and revolution
with his college mates. But despite the bitterness of the weather, he does
not light the metered gas fire, telling his friends that he has no money.
He sends them off on an errand and as soon as they leave, Malcolm
takes money from his pocket, crouches by the fire and lights it, warming his hands. While he is doing this a new character, Anne, comes in.
The staging of this production highlighted Malcolm’s selfish
duplicity. As he crouched by the fire with his back to the door, the door
slowly opened, making it appear to us that he might get caught. The
tension was then dispelled when another character – one we had not
met before – entered. But Malcolm himself was clearly rattled by the
possibility that it could have been his friends returning. That he could
have been caught. The description of this moment, though, remained
entirely literal, giving us a sequence of external actions with no underlying message: ‘The friends went out. Malcolm lit the fire. Anne came
in’ – and all the subtleties of the carefully constructed moment were
missed. Not only did the description fail to articulate the drama, it also
failed to give us Malcolm’s reaction, and therefore to extend our
knowledge of his character. The narrative was reduced to a series of
actions, but description needs to be more than this.
Training audio describers
But if we cannot ever be wholly objective in our work, we can try to be
non-judgemental. When training theatre describers I use a number of
exercises where trainees describe photographs or illustrations. This is to
consider a number of issues which affect description generally – how
information is given and received – and to separate them off from those
issues which relate specifically to audio description in the theatre.
A couple of years ago I was training describers in a regional theatre
and we were looking at some Victorian spot illustrations – the kind of sketch, sometimes humorous, used to illustrate books or
pamphlets. In one particular drawing, a man walked along a street. The
trainee’s description went along these lines: ‘He is wearing an overcoat
which is not of the first quality. He carries an umbrella, for effect. And
a briefcase which probably contains little more than his sandwiches’.
The coat, umbrella and briefcase were all there, certainly, but I asked the trainee how she knew, for instance, about the sandwiches in the briefcase. ‘Because I know men like this!’ she said.
In this example the describer is judging the character rather than
describing him, and coming up with a whole lot of detail the picture
itself did not contain. There was no evidence that the overcoat was ‘not
of the first quality’ – no fraying of the collar, no marks of wear, no sagging hem. And the two other details could not possibly be obtained by
looking at this image alone. If the next frame, so to speak, had shown
the man in the rain, would his umbrella have been up? And there was
certainly no way, using sight alone, that we could know the contents of
the man’s briefcase. Of course, were the man a character in a play, we
would be given a whole lot more information about him. He might,
during the course of the action, open his briefcase and reveal ‘little
more than sandwiches’. But this might be intended to surprise an audience. In this case it is the action which reveals something, not the man’s
physical appearance. So a balance has to be struck which recognises
that although we cannot be wholly impartial, we do have a duty to be
non-judgemental. But is this balance the same in every work?
Christopher, the narrator in Mark Haddon’s novel The Curious Incident
of the Dog in the Night, makes many very astute observations about the
difficulties of describing people, especially where emotions are concerned (2003: 19):
I find people confusing.
This is for two main reasons.
The first main reason is that people do a lot of talking without using
any words. Siobhan says that if you raise one eyebrow it can mean lots
of different things. It can mean ‘I want to do sex with you’ and it can
also mean ‘I think that what you just said was very stupid’.
Siobhan also says that if you close your mouth and breathe out loudly
through your nose it can mean that you are relaxed, or that you are
bored, or that you are angry – and it all depends on how much air
comes out of your nose and how fast and what shape your mouth is
when you do it and how you are sitting and what you said just before
and hundreds of other things which are too complicated to work out
in a few seconds.
This seems to me to get to the heart of the problem. Using our sight we
are exposed to hundreds of physical hints at any one time. We are
processing this information to work out attitudes, emotions and relationships. This processing is constant and, for the most part, we are
unaware of it. In the theatre we will be interpreting and re-interpreting
a character all the time she is on stage; all the time she is talking; all the
time she is listening; all the time she seems disengaged from the central
action. Sometimes even by her absence.
A describer has two problems. The first is that the description cannot
be continuous. To fit around the dialogue of a play, a describer may
have only a couple of seconds to sum up a particular emotion. If the
description concentrates purely on the physical details – the raised eyebrow, for instance – this may not tell a visually impaired member of the
audience what the entire facial expression suggests to a sighted audience member. The second problem is that the information is verbalised
and can only be given in a linear way whereas sight, although focusing
on certain details, can also hold other details in the background.
Verbalising something gives it prominence: one thing is mentioned while another is not. This means that it is the describer who is choosing what the audience should focus on.
During an intense dialogue between Henry and Stella, Henry walks
over to the window. Well, how does he walk? Is the move casual or is it
more of a stomp? Is he displaying the same impetuosity we have seen in
him before? Or is this behaviour something new to us? This arrogant
strut, is this a side of his character we have not seen before? And is the
fact that Henry is going towards something important? Is it more
important that he is moving away from Stella? Or that he is turning away
so that she cannot see his face? Or perhaps our focus should be on Stella.
Is her reaction to what Henry does more significant? For each seemingly
simple moment there will be many possible descriptions and, as I said
before, there may be only one or two seconds in which to describe it.
The mere fact that as describers we have to choose to emphasise one
thing rather than another, means that – whether we like it or not – we
are making an artistic decision. We are contributing to how a piece of
art is experienced by a member of the audience. The more we know
about the piece of work, the more likely we are to make the right choice
of what to describe.
Audio description in the visual arts
The issue of ‘interpretation’ becomes more prominent when describing
the visual arts, if for no other reason than the fact that in theatre a
visually impaired member of the audience is integrating the information given by the describer with any residual sight they have and also
with what is coming from the stage aurally. A description will often be
only minimal because the voice of the actor is doing so much. With a
piece of visual art, however, less is likely to come from the work itself.
This means that the description will take on a greater role in interpreting
as well as describing the material.
In 2002, Vocaleyes worked with the RNIB and English Heritage on
the Talking Images project. We undertook three case studies to examine
the use of audio description within the visual arts, working with galleries who were interested in opening up their collections for blind and
visually impaired visitors. We wanted to examine the language of
description, and cover a range of work from historical to contemporary
and from figurative to abstract. An example from one of the case studies
will serve to illustrate something of the process we went through. This
involves the language of description, specifically how ‘interpretative’ a
description should be.
For the first case study we worked on an exhibition of drawings and
late reliefs by the abstract artist Ben Nicholson, curated by Kettle’s Yard
in Cambridge. Working with a group of visually impaired users, we
tried out a number of different descriptions of the same work. This first
version attempted to concentrate on the forms of the work and remain
as much as possible ‘un-interpretative’:
1968 Ramparts (oil on carved board)
A rectangular background, some 19 inches high and 21 inches wide –
that is about 48 by 53 centimetres – is painted a smooth earthy
brown.
Standing proud of it is a slightly smaller rectangle – this one divided
up into a number of smaller, overlapping shapes.
At top and bottom are areas of white. Between them a line of three
differently sized rectangles. The one to the left is brown like the
background. The central one is a darker brown, and the third, a
lighter, orangey brown.
The line created by these three rectangles starts off – to the left – as
horizontal and almost central. But a little way across, the line shifts
so that the two rectangles centre and right slope downwards. To the
far right is a tall rectangle – painted the same brown as the background.
Two other forms seem to float above the relief. Their colour is similar
to the two white sections. Both are similar in shape – a trapezium –
with parallel sides, horizontal tops, but with a bottom edge which
slopes down towards the right.
One is positioned within the top white section and to the right. Its
slanting edge runs along the top edge of the slanting brown line.
Carved within it is a circle – the inner edge painted white.
The other trapezium sits next to it – just left of centre – and a little
lower. In this, another circle has been inscribed rather than cut.
What does this rather literal description tell us? It was greeted by the
focus group with a stunned silence. Then, after a little while one of the
group said: ‘Well, you’re on a hiding to nothing there’. And they were
right. There is a possibility that after listening carefully to this description a number of times we could perhaps make a stab at reconstructing
what it looks like: which shape fits where; what overlaps what; something of its composition. But what is the point of that? As a sighted
person I do not go through that process. I do not look at a work in order
to memorise what goes where. My encounter with the work is on a completely different level. I am looking for signs of intent, of humour. I am
responding to certain elements of composition or colour. I am asking
questions about why I respond in the way I do. As a sighted person I
have some kind of emotional relationship with the piece. I experience
the arrangement of physical details – it is not enough just to know they
are there.
The second description that we tried out with the focus group
allowed for a level of interpretation. Colours, for instance, were given
tactile qualities which reflected the physical process the artist used to
make the piece. So the areas at top and bottom became areas of ‘frosty,
silvery white – scratched and rubbed in places to create an uneven
surface, like snow drifting across dirty ice’. Another area had been
‘roughly scraped so that the grubby brown of the hardboard shows
through like a stain’. With the three differently sized rectangles in
the centre, colour was described in terms of how it is perceived by the
viewer – the visual tricks it plays. So, the central, darker rectangle
with its blacker sheen ‘makes it seem to sink back away from us into
the relief, although in reality it stands proud of the one to the left’.
The line of rectangles to which this belongs ‘progresses in a line across
the work from left to right’. Their shift downwards is ‘as though a
geographical fault has sheared this layer and pushed it bodily down.
Now sloping, these two rectangles seem in danger of sliding out of
the composition – squeezed out from between the frosted white
sections at top and bottom’.
The language creates a narrative within the piece, and makes reference outside the world of the work to our own world to try to capture
some of the work’s dynamic. The sense we get as sighted viewers is of
the shapes somehow acting on each other. Although they are static,
the angles, different depths and contrasting colours of the various
geometric shapes create a sense of slow, powerful movement. There
are internal forces which although dynamic, achieve a kind of balance
or equilibrium. As a sighted viewer I experience both the feeling of
movement and the feeling of stasis. So to describe the piece simply in
terms of the relative position of each individual shape is to reduce it to
pointlessness.
It could be argued that the references to snow, ice and geographical
faults are a personal judgement which cannot be justified. But the
focus group got much more from this description than the earlier, un-interpretative version. They felt they had had some kind of encounter
with the work. They felt that it was worth listening to, whereas the first
version, although accurate, gave them nothing at all.
The interdependence of the senses
Back in 2004 I attended a talk given by experimental psychologist
Dr Charles Spence in which he described a series of experiments
which showed how interdependent our senses are. People’s experience of touch, for instance, is affected by sight and sound. In one
experiment, subjects were asked to touch a number of pieces of sandpaper which they could not see. Wearing headphones, they could
hear the sound their finger made as it rubbed the sandpaper. The
experiment showed that the subjects’ experience of the sandpaper was directly influenced by the sound frequency that was played to them as they touched the paper. ‘Rougher’, they said, and ‘smoother’, even though the sandpaper stayed the same. The fact that the senses
are interdependent is something that artists have used in their work
for centuries, and much figurative art goes to great lengths to transcend the visual.
Berenson (1896: 5), writing about Giotto, stated that ‘Psychology has
ascertained that sight alone gives us no accurate sense of the third
dimension’ and that ‘every time our eyes recognise reality, we are, as a
matter of fact, giving tactile values to retinal impressions’. He argued
further that:
Painting is an art which aims at giving an abiding impression of artistic reality with only two dimensions. The painter must, therefore, do
consciously what we all do unconsciously – construct his third dimension. And he can accomplish his task only as we accomplish ours, by
giving tactile values to retinal impressions. His first business, therefore, is to rouse the tactile sense, for I must have the illusion of being
able to touch a figure ... before I shall take it for granted as real.
Cloth has weight, weave and texture. The simple glove that we talked
about earlier may not have ‘five outpouchings’ but it might, perhaps, be
fine, intricate, old, coarse, supple, warm, frayed, discarded. Clothes may
be rich, delicate, finely woven, roughly stitched, worn formally or with
a casual disregard. Rooms have three dimensions; urns are cracked; a
dog balances precariously; bodies have mass and density; are fragile,
infirm; water has luminosity; bodies sweat; woodland reeks of decay. A
description has to work just as hard to ‘give tactile values’ to the ‘retinal
impressions’ described, to appeal to more than the visual.
Conclusion
This brings me back to why I feel that the definition given at the beginning of this chapter – audio description as ‘an enabling service’ – is inadequate: it picks up on the language of discrimination and disability rather than the language of artistic endeavour and
achievement.
The danger is that we regard audio description as we would a screen
reader – a simple access tool, a kind of functional software, a window to
meaning. With a screen reader there is a direct equivalence between the
text which is written and the text which is voiced. There are few choices
or decisions for the software to make. The relationship between them is
clear and unambiguous. Some things may be a little hard to understand
because of a strange intonation, but we can work out what the meaning
is. But audio description is not like this. It can never be transparent. By
its very nature it will change the experience someone has of the art.
I also wonder whether the term ‘enabling’ would be acceptable in any other area of the arts. When translating a play, for instance, are we simply
‘enabling’ someone with another language to know what the play is
about – its storyline, its characters, its themes? Or do we want our
audience to experience the play directly, with all its complexities? With
no direct equivalence between one language and another, a translator
has to make a series of decisions about which one of several possible
meanings or nuances to go with. This decision will inevitably have a
knock-on effect on the rest of the play, and the translator may even be
accused of skewing the text in some way. But the option not to make
choices, to translate literally and include, perhaps, alternatives within
the text, while it might enable people to know about the text, would create an unwatchable piece of theatre. The audience would be constantly reminded that they were watching a translation of ‘something
else’. They would come out of the theatre more aware of the problems
faced by the translator than of what the playwright wanted to say.
In the same way, audio description is a way of translating artistic
material from one medium to another. There is no direct equivalence
between a moment on stage and the words chosen to describe it. The
exhortation to be ‘impartial’ doesn’t recognise this fact – and too
often has the effect of focussing the audience’s attention on the enabling
tool – forcing them to remain distant from the actual piece. For me, and
for many visually impaired people, that is not enough.
Description should aim to get to the heart of a work of art and to recreate an experience of that work by bringing it to life. It should not be
content with telling someone the physical details of something they
cannot see. When you leave the art gallery, you want to come away
discussing the art, not the description. So in order to be non-intrusive a
description has to make decisions and not pretend it is not there.
I have tried to argue that in order to write a meaningful description,
an audio describer has to do more than ‘say what he sees’, that this
phrase is a nonsense which attempts erroneously to divorce ‘seeing’
from ‘understanding’, and that in this process of understanding the
other senses are involved alongside sight.
The quotation given earlier by Oliver Sacks seems to me to get to the
heart of what a good description should be. Of his patient he said (1985:
9): ‘He failed to see the whole, seeing only details, which he spotted
like blips on a radar screen. He never entered into a relation with the
picture as a whole’. Good description must allow the viewer to enter
into a relation with the object, person or painting being described ‘as a
whole’. This means integrating the description so that it becomes part
of the artistic experience, rather than keeping that experience at arm’s length.
And the third example, the one the patient described as a ‘convoluted
red form with a linear green attachment’? Dr Sacks suggested the patient
smell it (1985: 12–13): ‘Beautiful’, the patient said – ‘An early rose. What
a heavenly smell!’
Notes
1. Quoted from the introduction to A National Standard for Theatre Audio
Description – AUDEST report, 1998.
2. ‘Il n’est en art qu’une chose qui vaille. Celle que l’on ne peut expliquer’. Trans.
Ben Nicholson.
3. The article of the same name was originally published in the London Review
of Books, 1981.
References
Berenson, B. (1896) The Florentine Painters of the Renaissance. New York and
London: G.P. Putnam’s Sons.
Braque, G. (1952) Cahiers 1917–1952, Paris. Translated by Ben Nicholson [Quoted
by Peter Khoroche in his introduction to Ben Nicholson: Drawings and Painted
Reliefs (2002). London: Lund Humphries, Ashgate Publishing].
Haddon, M. (2003) The Curious Incident of the Dog in the Night-Time. London:
Vintage.
Halliwell, D. (1967) Little Malcolm and His Struggle Against the Eunuchs: A Play.
London: Faber.
Meyer, M. (Trans.) (1980) Ibsen Plays: Ghosts, The Wild Duck, The Master Builder.
London: Methuen.
Pullman, P. (1995) The Firework-Maker’s Daughter. London: Corgi Yearling.
Sacks, O. (1985) The Man Who Mistook His Wife For A Hat. London: Gerald
Duckworth.
Shakespeare, W. (1623) Twelfth Night. In S. Wells and G. Taylor (eds) (1986) The
Oxford Shakespeare. The Complete Works. Oxford: Oxford University Press.
14
Usability and Website Localisation1
Mario De Bortoli and Jesús Maroto Ortiz-Sotomayor
Introduction
Established multinationals or small companies expanding into overseas
markets need a well-coordinated international sales and marketing
effort in order to succeed. A website can serve as a company’s premier
marketing tool, a facilitator of direct sales, and a technical support
mechanism; but it can also be used for purposes of public, customer,
investor or employee relations. When users are able to interact successfully with a website, positive impressions and attitudes about both the
site and the associated organisation are created. Hence a web presence, like advertising, can help boost a corporate image. This effect is crucial because the website contributes to branding, especially when it serves as a vehicle for sales.
To this end, websites are often customised, or localised, for foreign
markets, taking into account local language issues, business and social
standards, and aesthetic preferences. Localising a website is a complicated but necessary task. Producing different versions of a site for other cultures shows consumers that the organisation is willing to accommodate their needs. According
to research carried out by Hayward and Tong (2001), users perceive a
company more favourably (that is, as more trustworthy, more likeable)
when they see a version of the company’s website in their mother
tongue, regardless of the user’s proficiency in the English language.
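Serving each visitor a mother-tongue version presupposes some mechanism for choosing it. As a minimal, hypothetical sketch (the function name and its behaviour are our own, not drawn from any framework or study cited here), a multilingual site might rank the browser’s Accept-Language header against the locales it has actually been localised into, falling back to English only when nothing matches:

```python
def pick_locale(accept_language, available, default="en"):
    """Return the best available site locale for an HTTP Accept-Language header.

    accept_language: e.g. "da, en-GB;q=0.8, en;q=0.7"
    available: locales the site has been localised into, e.g. {"en", "es", "zh"}
    """
    ranked = []
    for part in accept_language.split(","):
        piece = part.strip()
        if not piece:
            continue
        lang, _, q = piece.partition(";q=")
        try:
            weight = float(q) if q else 1.0  # no q-value means full preference
        except ValueError:
            weight = 0.0
        ranked.append((weight, lang.strip().lower()))
    # Try the visitor's languages from most to least preferred
    for _, lang in sorted(ranked, reverse=True):
        primary = lang.split("-")[0]  # "en-GB" -> "en"
        if lang in available:
            return lang
        if primary in available:
            return primary
    return default
```

For example, pick_locale("zh-CN, en;q=0.5", {"en", "zh"}) selects the Chinese version even though an English one would also be usable — which is the point of the Hayward and Tong finding.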
Written text plays a crucial role on the web, as most websites –
particularly corporate sites – are content-based. Too many companies
have found themselves in trouble by entrusting their translation to
someone in the company who has travelled the world and is ‘fluent’ in
six languages, or to people who happen to be bilingual but have no
localisation background whatsoever. We have all had the experience of
laughing at copy poorly translated into English. However, failing to
gain market share because of linguistic issues is not a laughing matter.
In any case, successful localisation involves a lot more than simply
translating content. The manner in which people carry out tasks can
differ from culture to culture. For example, approaches such as the
‘shopping trolley’ or ‘shopping cart’ metaphor found on many websites
may not transfer accurately to some cultures, which may dramatically
limit the usability of an e-commerce site, thus reducing potential
revenue in those countries.
Having made large investments, companies should not allow their
websites to be downgraded in their international versions. Fortunately,
there is a growing acknowledgement amongst international businesses
that each of their foreign markets is best served by its own culturally
specific website. Furthermore, there is an important financial case
behind this awareness since non-English-speaking Internet users alone represent over 64% of the world’s online population (Global Reach).2 The
fact is that although many international businesses have had their
English websites translated into the languages of their main foreign
markets, some of these sites have not performed as successfully as their
home versions. For a site to be well received and successful – today more
than ever – it should address those intangible aspects that make a group
of people a community, rather than deal exclusively with customising
the obvious, superficial items such as measurement units or currency.
Failure of localisation
Quite apart from the linguistic dimension, localising the content of a
website is not easy; from a technical point of view, localising web content poses some daunting challenges. Websites come in many shapes
and forms, from a few pages of HTML created in basic text editors to
vast scripted or database-driven sites. Internal company sites, intranets,
are also becoming more popular for the private dissemination of information in a structured manner. Timeliness and up-to-the-minute content are rapidly becoming the key discriminators of a company’s website,
and as the web is a global phenomenon, the speed at which this content
is localised is also becoming an issue for many companies.
In addition to the linguistic and technical issues, website localisation
also faces cultural challenges. According to Hofstede (1991), culture dictates how people from a specific location view and react to images and
messages in relation to their own patterns of acting, feeling and
thinking, often ingrained in them by late childhood. Any differences
in these patterns are displayed in the choice of symbols, rituals and
values of any given cultural community. A culture influences the
perceptions, thoughts and actions of all its members, and it is this
shared influence that defines them as a group.
These various sets of ideas and expectations that culture provides are
all brought to bear when interacting with technology. Dialogue between
humans and computers is constrained not only by the design laws of
the computer, but also by the user’s understanding of the world and its
norms. If the design of a computer system does not match the user’s
understanding of the task in hand then the interaction between the
two will be sub-optimal. Products designed in one culture for use in
another often fall into this category. This is generally the result of the
two common errors in the localisation process discussed below.
Designers do not necessarily know about other cultures
The first stumbling block encountered by most localisation projects is
the limits of human intuition about other people. However much we
believe we know about a group or an individual, as human beings we
tend to be extremely poor at anticipating their requirements. When
considering the needs of users from another country, even well intentioned designers may be unaware of their own bias and ignorance about
the people of that culture. Often they are unable to filter out interface
features, which may handicap users from other cultures (Fernandes,
1995). Some designers attempt to circumvent these problems by enlisting a friend or colleague who has lived in the target culture and can
speak the language. Unless these individuals have spent a considerable
amount of their time in that country working on markedly similar
projects, it is unlikely that their insights will be any more accurate than
those of anyone else. In reality, successful localisation begins with the
recognition that we do not necessarily know the requirements of other
cultures when a project is launched. What is really needed is a systematic approach to gathering information about the users of a particular
product or service in other countries.
Cosmetic changes are not enough
The second most common mistake is to pay attention only to superficial differences between cultures in the belief that this represents an
adequate attempt at localisation. In order to improve the quality of
what is essentially guesswork, designers tend to use guidelines, helping
them address the features which vary superficially across cultures.
These guidelines cover such aspects as basic differences in currency and
calendar.
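The surface layer such guidelines address is easy to automate, which is partly why it gives a false sense of completeness. A minimal sketch (the conventions table below is illustrative, invented for this example rather than taken from any real locale database) shows how mechanically currency and date formats can be swapped per locale:

```python
# Illustrative surface-level conventions only; real localisation goes far deeper.
CONVENTIONS = {
    "en-US": {"currency": "${amount}", "decimal": ".", "thousands": ",",
              "date": "{m}/{d}/{y}"},
    "de-DE": {"currency": "{amount} €", "decimal": ",", "thousands": ".",
              "date": "{d}.{m}.{y}"},
}

def format_price(value, locale):
    """Render a price using the target locale's separators and symbol position."""
    conv = CONVENTIONS[locale]
    whole, frac = f"{value:,.2f}".split(".")
    whole = whole.replace(",", conv["thousands"])
    return conv["currency"].format(amount=whole + conv["decimal"] + frac)

def format_date(day, month, year, locale):
    """Render a date in the target locale's element order and separator."""
    return CONVENTIONS[locale]["date"].format(d=day, m=month, y=year)
```

format_price(1234.5, "de-DE") yields "1.234,50 €" where the US version gives "$1,234.50" — a change of punctuation, not of culture, which is exactly the limitation the next paragraph describes.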
Guidelines often serve to give designers and management the mistaken impression that the website has been localised and effectively
‘fireproofed’ against cross-cultural usability problems. In fact, culture influences interaction with computers at levels significantly deeper
and less observable than the use of local calendars or currencies.
Human Computer Interaction (HCI) research shows that successful
interaction depends on more than just using the correct language. For
authors like Bourges-Waldegg and Scrivener (1998) and French and
Smith (2000), interaction is also dependent upon the culturally
embedded meaning of objects such as icons and metaphors – for
example, the desktop or the shopping cart. Whilst the USA and the UK
share a common language, a website which, for instance, has recourse
to the metaphor of the white pages (the US phone directory) to help
users find individuals’ contact details may not be appropriate for use
in the UK, where yellow pages are the standard. This is despite the fact that, on the surface, the site may not appear to be in need of localisation. What is needed, therefore, is a new definition of effective
localisation and its scope.
Extending the scope of localisation through the inclusion of HCI expertise
As mentioned above, website localisation efforts have traditionally been
concerned mainly with translation and character encoding issues.
However, it is our belief that this alone is not sufficient to meet the
technology needs of users from other cultures. Indeed, there is a considerable amount of evidence detailing the difficulties and failures
experienced by users of culturally inappropriate systems. The extent of
human diversity is such that the mere translation of an interface from
one language to another is not always sufficient to meet the needs of
another culture.
HCI approaches model systems from the user’s perspective and
therefore are well placed to inform localisation requirements for a site if
employed early enough in the design process. During the 1990s
cross-cultural HCI research expanded from issuing guidelines and
importing models from the social sciences (Hall, 1976; Hofstede, 1991)
to developing its own frameworks (Bourges-Waldegg and Scrivener,
1998; French and Smith, 2000). Papers with a global dimension regularly feature in all major HCI conference programmes.
However, despite this explosion of research interest, the number of
designers using HCI support for cross-cultural interactive systems
remains low. Additionally, a worrying number of misinterpreted theories
have been imported piecemeal from other fields such as social science,
linguistics and cognitive psychology. This is not uncommon in interface design where imported theories are often adopted and cited by
designers unaware of the research background. These theories then
gain credibility within the design community at the expense of other
findings (Green et al., 1996).
There is a clear need for culturally sensitive technology, a need which
is currently not being met. What is obvious is that designers should now
be expected to add localisation to their skill-set. The interdisciplinary
nature of localisation means that it does not lend itself to ‘one-size-fits-all’ solutions which can be learnt and applied in identical fashion to all
projects. The background of each applicable theory and the subtleties of
local culture and language must be understood fully.
Whether the product in question is software, a website or a mobile
phone, preparing the user interface for use in an international context
calls for expertise from a variety of fields. Designers are required for
their creative abilities. Equally, properly qualified linguists and translators are necessary not just to translate content, but also to ensure that
the essential meaning of each message is communicated adequately.
However, successful interaction cannot be boiled down to a simple matter of aesthetic preferences and translation. This ignores the behaviour
on the part of the user: people all over the world have different,
culturally rooted responses to stimuli and act accordingly. For example,
Chinese consumers seem to prefer shopping at online stores that offer the possibility of bargaining, even though the price they end up paying is sometimes greater than at other stores (Liang and Doong, 2000). HCI professionals utilise a range of approaches, from
cognitive models to usability testing and user-participatory methods,
which help determine the requirements of different cultures. These can
be applied to make recommendations on the subsequent process of
localisation. If properly implemented at an early enough stage by
professionals with the correct knowledge of localisation issues, these
recommendations can significantly improve the end result, increase
return on investment and improve the relationship of a brand with its
target group of consumers.
The market
In many ways, the choice of one brand over another is increasingly
becoming a political choice by consumers, who express a whole range
of values when buying a product. Therefore, for the product to be successful it must represent and address the consumer’s concerns and
beliefs. And this is where a simple literal translation can never be
enough. Each market has to be scrutinised and culture-specific solutions found. Consumers only respect a brand that respects them. There
are many opportunities for brands that are willing to listen to their
markets and are prepared to go the extra mile in the localisation process.
Brands have to understand that markets are there to be seduced, not
patronised, and the only way to seduce them is to know how they think
and feel, and act accordingly. From a marketing perspective, brands
need to realise that being international means that each section of their
target market is as important as the home one. German or Polish website
users believe that they are entitled to as much attention from a brand as
their American or British counterparts. After all, they may be paying
just as much money for the product. At the end of the day it is for the
seller to make an effort in a transaction, not the buyer.
Generally speaking, the recipe for international success is to convince
people that the product is good and that it was produced with the consumer’s needs in mind. Each linguistic community will need to have this
explained from a slightly different angle, determined by priorities of the
local culture. Problems in this process occur when English-speaking, and
especially American, businesses tend to confuse increasing Internet use
worldwide with increasing Americanisation of other cultures. There
seems to be the misperception that if people in another country have
access to the web, they will already have been exposed to enough Western
influences to be able to use sites of Anglo-American origin. In fact, despite
its history, there is nothing inherently American or even Western about
Internet use, as evidenced by the latest estimates of Internet user growth by language, which indicate that Chinese grew by 622% and Arabic by 2,062% between 2000 and 2008 (Internet World Stats: online).3
All of the above means that an international campaign, or a multilingual website, cannot be researched and developed in English, and
then sent for translation, an attitude that is viewed as arrogant by many.
As stated earlier, superficial approaches to localisation do not adequately
meet the needs of users in different countries, and do not create the
positive awareness that companies crave for their brand. In order for
localised sites to perform effectively, differences in culture must be
reflected in the design of each one. To achieve this, the localisation
process has to be integrated at a much earlier stage of the planning or
creative process than is now done. Every aspect should be discussed and
studied before its development, and only then implemented. This represents a revolution in localisation as we know it at present – but a
necessity for most companies at a time of increasing dependency on
foreign markets.
In fact, a better-integrated localisation approach might mean
significant savings on post-production adaptations via economies of
scale and the pooling of assets and resources via new technologies, as
well as greater branding consistency and, as a result, greater return on
investment. The reduced cost of localisation achieved in this way might,
for example, also allow entry into previously unviable markets.
Conclusion
Products such as websites expose companies to global markets, but few
of them pay adequate attention to the vast audiences outside their own
borders. Among those that do, most are only prepared to pay the absolute minimum for translated versions of their main site. Whilst superficially localised, these sites do not usually fulfil their intended function
as they may remain culturally unsuitable. Localisation must start from
an in-depth knowledge of the local culture and requirements, then
address those requirements within the framework of existing local
structures. Cultural differences affect interaction at levels significantly
deeper than language. Addressing these differences requires the early
attention of professionals with expertise in a variety of fields, including
user-centred design, HCI, psychology and linguistics. Only a synthesis
of this expertise can guarantee the reaping of the enormous benefits
that globalisation can offer to truly international businesses.
Notes
1. This chapter would not have been possible without the assistance and cooperation of Robert Gilham, HCI consultant at Amberlight Partners.
2. An online marketing multilingual website at http://global-reach.biz
3. Information available on www.internetworldstats.com/stats/.htm
References
Bourges-Waldegg, P. and Scrivener, S.A.R. (1998) ‘Meaning, the central issue in
cross-cultural HCI design’. Interacting with Computers 9(3): 287–309.
Fernandes, T. (1995) Global Interface Design: A Guide to Designing International
User Interfaces. Boston, MA: AP Professional.
French, T. and Smith, A. (2000) ‘Semiotically enhanced web interfaces for shared
meanings: Can semiotics help us meet the challenge of cross-cultural HCI
design?’ IWIPS 2000. Baltimore, USA, 23–38.
Green, T.R.G., Davies, S.P. and Gilmore, D.J. (1996) ‘Delivering cognitive
psychology to HCI: the problems of common language and of knowledge
transfer’. Interacting with Computers 8(1): 89–111.
Hall, E.T. (1976) Beyond Culture. Garden City, NY: Anchor Press.
Hayward, W.G. and Tong, K.K. (2001) ‘The influence of language proficiency on
web site usage with bilingual users’. www.psy.cuhk.edu.hk/~usability/
research/HaywardTong.pdf
Hofstede, G. (1991) Cultures and Organizations: Software of the Mind. Maidenhead:
McGraw-Hill.
Liang, T. and Doong, H. (2000) ‘Effect of bargaining in electronic commerce’.
International Journal of Electronic Commerce 4(3): 23–43.
Part IV
Education and Training
15
Teaching Screen Translation: The Role of Pragmatics in Subtitling
Erik Skuggevik
Introduction
It is an inescapable fact that subtitling, as a form of interlingual
translation, must simplify information, if not wherever possible, then
wherever necessary. This need to simplify arises because subtitles do not
replace the original language of the film, they coexist with it, as well as
with the other audio and visual channels of the film, and even – should
the subtitles be too complex – compete with them. Thus, having to
match reading speed to speaking speed invariably leads to choices about
what to prioritise in subtitles, and this forms an equal part of the whole
translation process.
The subtitlers, in many ways, have nowhere to hide. They present
their translated rendition of whatever is spoken at the precise moment
when it is said, and any viewer with a grasp of the original language is
able to make an instant comparison. This is what terrifies any novice,
and even some experienced subtitlers; a fear that can manifest itself in
too rigid an adherence to the original syntax and to the denotative
values of the spoken words, often resulting in – amongst other things –
over-long subtitles and literally-translated metaphors.
The confidence to translate subtitles with a degree of economy might
improve with experience and sensitivity, but as an alternative to the
intuitive skills acquired in this process, we can also develop or demonstrate such awareness through subtitling pragmatics. In this chapter we
shall be looking at the speech act analyses of Grice (1975) and Jakobson
(1960), and consider a number of aspects that, to varying degrees, may
be said to form part of the emerging field of subtitling theory.
The five levels of competence in subtitling
In order for us to delineate the field into which we are about to delve,
we may briefly outline five levels of subtitling competence. Grice’s
cooperative principle and its maxims can be associated with the third,
while Jakobson’s speech act theory can be aligned to the fourth.
1. The first level is technical competence, that is the ability to deal with
the sheer practical demands of the job as it appears to most working
subtitlers: use of software, line breaks, positioning on the screen,
time and space restrictions, use of italics, etc. This is an area where
mistakes are easily quantifiable, as there are clearly identifiable rights
and wrongs.
2. The second level concerns the linguistic skills of the students; the
translators’ expertise and sensitivity to their own and to the Source
Language (SL). These are language skills that students will have to
draw on in any translation work.
3. The third level refers to the translators’ understanding of social and
cultural (non-linguistic) aspects and the awareness of their relative
values. Any number of dictionaries cannot be a substitute for the
hands-on experience of living and breathing the way of life of another
culture, its language use and the hierarchies of social values.
4. The fourth level is possibly the most elusive analytically but also the
most universal: comprehension of the psychological or emotional
dimension inherent in the action that accompanies the spoken
words.
5. There is, in my opinion, a fifth level: the competence that allows the subtitler to take all the previous areas into consideration in a holistic exercise, determining strategies based upon the limitations and possibilities on offer in order to formulate any given subtitle. I shall return to some additional considerations about this fifth level of competence at the end of this chapter.
Within the discipline of pragmatics, many of these dimensions are considered as part of the importance that context, readership, cultural
expectations and implicatures play in communication. However, as we
shall see, subtitling places rather specific demands on the type of
analysis needed since its main concern is the transfer of the communicative act itself, not in its entirety (like literary translation), but
partially, as the visual and audio channels are running concurrently
and are complementary.
Subtitling and implicatures
One problem area characterising subtitling results from violations of
the maxims of the cooperative principle (Grice, 1975) or, put differently, from the difficulty of transferring the operative differences of these maxims between language-cultures without having recourse to the
social or cultural landscape in which they exist.
We should note here that the terms conventional (or standard) implicature versus conversational implicature may be most easily recognised
through reference to denotative and connotative values respectively.
There is a close parallel distinction between stated meaning (as both
denotative value/standard implicature) and implied meaning (as both
connotative value/conversational implicature), and although there are
instances where the two sets of distinctions have their different uses, the
overlap is sufficient to explain why both sets of terms are used at times
interchangeably. For our purposes, the term ‘standard’ (Levinson, 1983)
is used in preference to Grice’s ‘conventional implicature’, to make it
visually (and phonetically) easier to distinguish from conversational
implicature.
In order for us to investigate the usefulness of Grice’s formulations of
the cooperative principle and to what extent it applies to subtitling, it
might be convenient to summarise the initial list of maxims provided
by Grice (quoted in Katan, 1999: 211):
● The maxim of Quantity: give as much information as needed.
● The maxim of Quality: speak truthfully.
● The maxim of Relation: say things that are relevant.
● The maxim of Manner: say things clearly and briefly.
According to Grice (1975), these maxims will inform the choices made
by both speakers and listeners when deciding how to communicate or
interpret information: as standard implicature when no maxim has
been violated, or as conversational implicature when one or more of
them have been flouted.
Looking around, particularly at social interaction in the Western
world which we may identify as problematic (from our own experience
or from observation), it becomes evident that there are a few other possible maxims in operation:
● Be polite.
● Do not embarrass.
● Do not dominate.
● Do not be too subservient.
● Do not exaggerate your status.
The fact that there are maxims other than the ones suggested by Grice is
not surprising. He freely admits that his list is not exhaustive, and hints
that there may be additions. That there may be hierarchies involved in
the importance of the maxims is also something Grice (1975: 46) concedes himself when he states that: ‘It is obvious that the observance of some of these maxims is a matter of less urgency than is the observance of others; a man who has expressed himself with undue prolixity would, in general, be open to milder comments than a man who has said something
he believes to be false’. In other words, it is worse to lie than to talk too
much. But this may not be the order of priorities in every culture, as
clearly pointed out by Baker (1992: 233):
In some cultures “Be polite” indeed seems to override all other
maxims. Loveday (1982: 364) explains that ‘“No” almost constitutes a term of abuse in Japanese and equivocation, exiting or even lying is preferred to its use.’ If this is true, it would suggest that the maxims
of Quality and Manner are easily overridden by considerations of
politeness in some cultures.
Grice’s maxims, and their hierarchical potential, must then be seen as
culture-specific, and although they provide a method of making explicit
the motives and inferences of speakers, they do not assist translation
and are in general better suited to describing communicative failures than to prescribing strategies. Indeed, it is worth registering that the relationship between the cooperative principle and its maxims is analogous to
the relationship between the Saussurean langue and parole, where Grice’s
definitions are firmly rooted in the description of the parole of maxims
and implicatures.
In the field of commercial film translation, the market is dominated
by US (and to some extent British) films, and for many European cultures the Anglo-Saxon hierarchical ordering of Gricean maxims overlaps to a large extent with their own, some more than others.
Below, however, we shall have a quick look at a particular difference in
nuance between Norwegian and English maxims.
Norwegian and English variation
The gap between the operative maxims in American or British societies
and Norwegian society is generally very narrow. However, one
difference lies in the maxim of being institutionally (or minimally)
polite. When subtitling a large number of occurrences of statements
ending in the words ‘please’ or ‘thank you’ into Norwegian (Swedish
and Danish would be much the same) the tendency is often to omit
these words, and this is not just because it is space-saving: to include
the Norwegian semantic units vennligst, takk or er du snill (denotative/
connotative equivalents of ‘please’ or ‘thank you’) might often seem
excessively polite, and could therefore activate the connotative,
Norwegian interpretation (conversational implicature) of irony or sarcasm. A scene at an English restaurant where Mr A says, ‘Could I have
the menu, please?’ presents us with a standard implicature in English,
but a Norwegian Mr B who says, Kan jeg få menyen, takk/er du snill?
runs the risk of invoking the conversational implicature ‘I need a
menu and I am getting impatient’. The omission of ‘please’ therefore
seems logical, and rarely presents a problem in Norwegian subtitles.
Indeed this difference in linguistic code is summed up in what many
Norwegians describe as a British preoccupation with ‘please’ and
‘thank you’.
Occasionally, however, the violation of this (institutional) politeness
maxim becomes foregrounded, as when the withholding of a ‘thanks,’
which would normally have been omitted in a Norwegian subtitle, gives
rise to a conversational implicature in English. The following scene
takes place in a restaurant, where the waiter attending a table finds the
guest quite uncooperative:
ENGLISH                       NORWEGIAN
- Would you like pepper?      - Vil du ha litt pepper?
- No.                         - Nei.
- I was only asking.          - Jeg bare spurte.
- And I said no.              - Og jeg sa nei.
To a British audience it is not difficult to see that the reason for the
waiter’s remark (and he gets our sympathies) was prompted by a feeling
of ‘a void’ in the normal communicative pattern. It is virtually obligatory
to say ‘No, thanks’ when declining an offer in English, whereas in
Norwegian, judged by the subtitles at least, the feeling created is that
the waiter is over-sensitive and that the guest has a valid point. One
might say ‘No, thanks’ in Norwegian as well, but then it is an added
degree of politeness. In Britain it is obligatory. While the English version
violates the maxim of politeness, the Norwegian does not. Since Grice’s
proposal offers no methods of transferring maxims between cultures,
we shall return to this point after we have looked at Jakobson’s
communication functions.
The kind of difference in cooperative hierarchies discussed above could in theory be mapped between any pair of languages and cultures: the closer two cultures are, the less the maxims diverge. The further removed from each other two cultures are, the more important it becomes to use strategies that act as a form of intermediary area in which comparisons of relative value can be conducted with greater ease.
In order to provide a possible Rosetta Stone for the differences in relative value of culturally defined maxims, we can turn to an analysis of communication that is essentially more psychological than cultural, based on the assumption that a high value must at least be attached to the maxim of Relation (be relevant); communication would indeed seem pointless if no maxim addressed the question ‘why are we talking at all?’. Here Jakobson’s (1960) speech act analysis, originally a linguistic model, may provide us with an approach that makes the dynamics explicit. This is important in cases of violation of the maxims of Grice’s cooperative principle, that is, when culturally codified conventions are challenged to the point where the subtitler’s skill will be put to the test.
Communication in subtitling
Jakobson (1960) proposes that there are six necessary components of
any speech act. I will extend these to encompass all forms of
communication, as the functional principles involved are exactly the same.
Jakobson’s components are obligatory in the sense that if any one of
them is removed no communication can take place. They are illustrated
in Figure 15.1.
Sender – Code, Contact, Context, Message – Receiver

Figure 15.1  Jakobson’s six components
Role of Pragmatics in Subtitling 203
We shall briefly list some examples of these six categories:

● Sender (or addresser): Me, the Prime Minister, a person from the
sixteenth century, etc.
● Code: Spanish, mathematical indexes, legalese, cockney, Morse code,
body language, etc.
● Contact: Person to person, a newspaper, Internet, telephone, runes on
a stone, TV, etc.
● Context: Attending a lecture, sitting on the underground, walking the
dog, etc.
● Message: (The lexical/semiotic content) ‘Meet me at 11pm’, ‘There will
be tax rises’, etc.
● Receiver (or addressee): You, the President, the phone operator, a person
in the twenty-third century.
As an example of a situation involving these components we could pick
anything that may be viewed as communication, but the one that you,
in your present role as reader, are experiencing will serve as an illustration. You (the receiver) are reading this paper, the factual content of
which is the message, written by me (the sender). The contact that makes
this possible is the book you have in front of you. The context is the situation you are in right now, sitting at home, at college, on the bus –
maybe you have been asked to read the paper or found it by accident.
The code employed is English, somewhat academic, specialised in the
field of Translation Studies. Remove one of these ingredients and you
will not be able to make any sense, contact or inferences from your
experience (although the removal of context alone is an impossible
proposition, as there can be no context-free experience).
These formal components of the communicative act can be seen as
universal. The recognition and identification of each one of them (albeit
subliminally) lies at the core of any analysis of communication. For
instance, in order to correctly interpret any violation of a Gricean maxim,
it would be necessary to first – and this is an unspoken assumption in
Grice’s proposal – eliminate the possibility that there may be a breakdown in communication, due to faults in the code, mode of contact,
misunderstandings of context, or misidentification of receiver/sender.
Only then can any non-adherence to accepted maxims be given an inference or implicature, and any message interpreted accordingly.
Once the presence of these six components is established, it becomes
clear that at different times any one of them may seem more central
to the tenor of a communicative act. Translation theory scholars have
long recognised that particular linguistic features may be linked to a
corresponding problem or situation – e.g. Bühler (1934: expressive,
appellative, informative); Reiss (1971: informative, expressive,
operative). Each of Jakobson’s constituents may be seen as capable of
providing the focus for a type of communication, either because there
is a semantic problem (code, contact), a particular objective (sender,
receiver), an informative purpose (context) or a particular aesthetic
imperative (message). As acknowledged by Jakobson (1960),
communication displays certain features when it gravitates towards each
of the six constituents, and he ascribes to them the functions in
Figure 15.2.

Code → Metacommunicative
Contact → Phatic
Sender → Emotive
Context → Referential
Receiver → Conative
Message → Performative

Figure 15.2  Jakobson’s six functions of communication1
Jakobson’s functions could easily form the basis of a further
classification of ‘text types’, leading to a more comprehensive version
than Reiss’s model (informative, expressive and operative – in Jakobson’s
terms referential, emotive and conative), although Reiss (1971: 172)
does not believe this is a fruitful approach. Nord (quoted in Munday,
2001: 74)2 added the phatic function to Reiss’s text types, leaving
Jakobson’s performative (poetic) and metacommunicative (metalinguistic) functions to be incorporated into a potentially more complete
system.
However, Jakobson himself never suggested that these functions
should define text types as such. On the contrary, the functions, as a
dominant factor of any communication, are in constant flux and in any
act of communication the focus could shift quickly from one to another,
as different approaches may be employed in order to achieve a particular goal: cajole, tempt, flatter, threaten, reason... For subtitlers this is of
course a most familiar occurrence. Let us briefly look at a summary of
Jakobson’s (1960) definitions:
1. Emotive: When the communication is focused on the sender (addresser),
expressing how s/he feels and thinks: ‘I am totally opposed to this
plan!’, ‘Never mind your husband, what about me?’, ‘I love this place’, etc.
2. Metacommunicative (metalinguistic in Jakobson’s original usage):
Communication about communication (redefinitions, reformulating,
explaining words) or when the code itself is in focus; see the
example below from the film Meet the Parents.
3. Phatic: When language is used just to establish or maintain contact,
as in ‘How are you doing?’, ‘Nice day today, isn’t it?’, etc.
4. Referential: When the communication is appropriate to the context,
e.g. in a meeting it is appropriate to discuss the agenda; when
lecturing, it is appropriate to impart information. Referential
communication is focused on something external: independent facts,
descriptions, relevant information, etc.
5. Performative (poetic in Jakobson’s original usage): Communication
that indulges in the poetic, aesthetic or musical qualities of the
code. It is important to remember that this does not necessarily
imply ‘something beautiful’, just that the communication works
performatively. Most modern adverts do in fact perform visually and
musically by associating aesthetic values with their product
(Barthes, 1977).
6. Conative: Not to be confused with ‘connotative’. Communication
focused on the receiver’s (addressee’s) needs and situation: ‘Don’t
worry, nothing bad will happen, you just take your time’, ‘Is that
something you can live with?’, ‘I won’t forget you’, etc.
Multiple choices
As far as subtitling is concerned, it is of course imperative to recognise
that it is not the words themselves which necessarily determine the
communicative function of a statement, but the way they are given
expression: their relationship to tone of voice, body language and
situation. As an example, we shall look at a phrase subtitlers often
encounter, ‘I’m OK’, heard in many films, of which several examples are
given below.
‘I’m OK’ simply seen on a page will make most people interpret it
(by a kind of ‘cultural collocation’) as a standard implicature: ‘I am
fine, no need to do anything’. But the phrase is often uttered to express
entirely different sentiments. These are indicated, most importantly,
by context, mood and body language, something that becomes clear
when attempting to subtitle a large number of versions of ‘I’m OK’
into Norwegian. In the following six examples, different connotations
of ‘OK’ mean that they cannot be expressed by one and the same
phrase:
1. Referential: factual response to context. Nate asks Lisa whether she wants
another drink. She smiles, shakes her head and replies: ‘I’m OK’ –
Nei, takk [No, thanks].
2. Conative: seeking to reassure. Claire puts her arm around Russell, who
is visibly upset following an emotional ordeal. He is shivering, looks
blankly straight ahead and says quietly: ‘I’m OK’ – Det går bra / Jeg
klarer meg [It goes well / I’ll manage].
3. Performative: the delivery of the message is rhythmically adapted to give
sensual or poetic expression to the words. Asked how he is feeling, John,
sitting in a hot Jacuzzi, sipping champagne, replies in an overly slow,
breathy voice, pausing between each word, lingering for a long time
on the letter ‘o’: ‘I ... am ... ooooo ... kay’ – Jeg ... har det ... deilig! [I have it
lovely].
4. Emotive: statement of feeling. Someone runs jubilantly out from a
doctor’s surgery, having just received news that s/he does not have
some suspected disease, shouting to a friend: ‘I’m OK!’ – Jeg er frisk! /
Alt er bra! [I am well (healthy)! / All is good!].
5. Phatic: confirming that the communication channel works. The chairman of a telephone meeting checks that all the connected participants are hooked up, and asks: ‘Can everyone hear me?’. Several of
the participants answer: ‘I’m OK’ – Alt i orden [All is in order].
6. Metacommunicative: terms used to illuminate other terms. A confused
foreigner asks a friend: ‘What does “I am swell” mean?’ The friend
replies: ‘It means “I’m OK” ’ – Det betyr: Jeg er ok [It means, I am OK].
Although the Norwegian subtitling choices above no doubt might have
been expressed differently by another subtitler, the fact remains that
there is no way of translating the words ‘I’m OK’ into Norwegian (or
any other language) without considering the underlying connotative
message.
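The logic of the six renderings above can be summarised as a lookup from communicative function to candidate subtitle. The following Python sketch is purely illustrative – the names `NORWEGIAN_IM_OK` and `subtitle_im_ok` are this sketch’s own, and the chapter itself proposes no such tool – but it makes the point concrete: the key to the translation is the identified function, never the words themselves.

```python
# The subtitler's choice for 'I'm OK' is keyed to the communicative
# function inferred from context, mood and body language, not to the
# words. The Norwegian renderings are the six proposed above; the
# dictionary and function names are this sketch's own invention.

NORWEGIAN_IM_OK = {
    "referential": "Nei, takk",                     # factual response to an offer
    "conative": "Det går bra",                      # seeking to reassure
    "performative": "Jeg ... har det ... deilig!",  # poetic, lingering delivery
    "emotive": "Jeg er frisk!",                     # statement of feeling
    "phatic": "Alt i orden",                        # confirming the channel works
    "metacommunicative": "Det betyr: Jeg er ok",    # gloss of another term
}

def subtitle_im_ok(function: str) -> str:
    """Return a candidate Norwegian subtitle for 'I'm OK', given the
    communicative function identified in the scene."""
    return NORWEGIAN_IM_OK[function]

print(subtitle_im_ok("phatic"))  # Alt i orden
```

The same English string maps to six different target-language phrases, which is the sense in which equivalence here exists at the level of function rather than at word level.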
Code
In the early stages of the film Meet the Parents we have a situation where
the daughter brings her new boyfriend home to meet her parents.
However, the reunion serves to isolate the boyfriend as he witnesses a
stream of private language, childish nicknames and small ‘games’
between father and daughter:
- Hi, Daddy! Hi!                       ← Daughter runs up to dad.
- I missed you so much, Pamcake.       ← Father hugs.
- I missed you too, Popjack.           ← Daughter.
- Oh, boy! Oh, boy! Oh, boy!           ← Father, as he swirls daughter
                                         around in an embrace.
- Shortstack, shortstack, coming up!   ← Both father and daughter as
                                         they ‘stack’ hands.
- Where’s my widdle girl?              ← Mother, with apron, appearing
                                         in doorway.
- Mommy! My mom!                       ← Daughter, goes to hug mother.
Should we choose to analyse the scene above in terms of Gricean maxims
(even if Grice does not deal with conventions or the cooperative principle
in relation to group dynamics), there are a number of maxims in such
interactions that are clearly relevant to Greg, the ‘forgotten’ guest. There
certainly is a breach of the maxim of relation (relevance), because to Greg,
the whole scene becomes an embarrassment as it does not involve him at
all. The scene also demonstrates a breach of the maxim of politeness. But
Grice’s maxims cannot tell us (for translation purposes) in what way the
language in the scene communicates that sense of exclusion.
Following Jakobson’s categories, what we see in this extract is the
practice of a kind of specialised, somewhat regressive, childish language,
practised by the family only, and in Jakobson’s terminology, this scene
has at its centre its very own code. Greg, the alienated outsider, has no
access to this private language. How do we translate words like
‘pamcake’, ‘popjack’, ‘shortstack’, ‘shortstack coming up’, or ‘my widdle girl’?
The words are confusing, certainly, if all you have is a dictionary.
Although there are referential elements in the dialogue, at its centre,
most importantly, is the use of a private code, a code that constitutes a
secret family bond. This is functionally what we need to hold on to. The
translated subtitles should not ignore the alienating effect of this
private code, because Greg’s alienation and embarrassment are what the
film revolves around from that moment on.
Now imagine you are watching a film sequence where Mr Jones and
Mrs Smith are talking about aeroplanes. But they are both pilots.
Perhaps their conversation is purely phatic, that is, not so much about
aeroplanes as simply about talking to kill time, or to stay in touch.
That can give the subtitler good guidance in what choices to make: to
avoid language that accidentally makes it sound more technical than
necessary, or at least to make sure that the tone of the translation
reflects the phatic function of the scene as a whole.
Or imagine the following scene in a film: a coal miner exhausted
after a terrible ordeal down a burning mine shaft exclaims: ‘I think I
need a cup of tea’. On the face of it, this seems very clear. The statement
appears referential in nature, has a clear outside physical component
and is grammatically straightforward. Indeed, I cannot imagine too
many different ways of translating this line into Norwegian. But imagine
for a second that in your country drinking tea is something that not
many people do and those who do belong to the social elite. So against
this background a coal miner wanting a cup of tea would seem a bit
strange. What do you do? In the UK, having a ‘cuppa’ can be associated
with caring, hospitality, safety and normality. The expression ‘I think I
need a cup of tea’ in this situation is not primarily referential in nature,
it concerns itself with the situation: it is an expression of the speaker’s
feelings. The function, according to Jakobson, is emotive, that is, it has
to do with the feelings of the speaker and is actually expressing a need
(albeit with well-known British understatement) for comfort and
normality. If good knowledge of either the English institution of tea or
of the English language is lacking in the Target Language (TL), there may be a
distinct advantage in translating this request for tea in a more general
way. But how? Given that we identify the function of this statement as
emotive, we should remain within that sphere, and we could maybe
consider: ‘I need something soothing/relaxing to drink’.
The meaning of ‘No’
We have discussed, above, the situation where a waiter might have felt
slightly offended by the guest’s failure to say ‘no, thanks’ when asked
whether he wanted pepper on his food. We recognised that there was a
violation of the Gricean maxim of politeness which caused this offence
in English, whereas in Norwegian the expectation is virtually absent.
Admittedly, this is not a major problem in Norwegian subtitling, but all
the same, it is interesting to analyse further. How should we view the
importance of the flouting of the maxim of politeness in this situation?
According to Jakobson’s functions, ‘thanks’ and ‘please’ are
institutionalised forms of conative behaviour, since politeness (as standard
implicature) usually carries a conative function. Omission of this
obligatory response makes the guest perform an emotive act; his answer
has become self-centred. In Norwegian it is more culturally acceptable
to give an emotive or referential answer to an offer with no conative
element. When subtitling this dialogue, we should endeavour to reflect
the guest’s rudeness, as the waiter interpreted it. How do you
make the emotive ‘no’ more emotive or impolite without adding more
words? One method is simply to add an exclamation mark (more commonly used in Norwegian). This would not interfere with the word
economy, and could almost imperceptibly accentuate the unexpectedness of the guest’s response. This strategy could also work in English:
- Would you like pepper?
- No!
- I was only asking.
- And I said no.
At the beginning of this discussion we mentioned the possibility of a
fifth, holistic, subtitling level of competence and some further remarks
on the subject might seem appropriate at this point. A number of related
observations have been made by Ivarsson and Carroll (1998) and
Gottlieb (2003), but we shall have a look at two which have received
limited attention.
Dynamic coexistence
As language users we have a relationship to words which is not only
technical or semantic, but also sensuous or, at least as far as the speaker
is concerned, performative. In subtitling we need to give consideration
to a certain mirroring of dynamism between what is said and how it
reads. By ‘dynamic coexistence’ I refer to the relationship between the
spoken and the written word; for a subtitle to sit comfortably with what
we hear, we also need to get a reasonable semblance of the aural impact
of the words. What we are referring to here is how we read a written
text. As pointed out by De Linde and Kay (1999: 20), it is important that
we take into account the inner voice that makes people, even when
reading, pronounce the words:
Studies in homophonic reading support the notion that inner speech
is significant in the reading process. Most studies confirm that the
speed and/or accuracy of silent reading is influenced by the sounds
of words. Likewise, sentences containing a series of words that rhyme
or alliterate (tongue-twisters) are more difficult to read both silently
and aloud.
We need to remember that subtitles are a ‘one-way street’ in the sense
that, unlike words on a page, we cannot go back and check or re-read. A
subtitler must therefore ensure the viewer gets the message the first
time around by not overcrowding the subtitles with phrases or words
that require concentrated reading.
Most subtitlers recognise that the performative element inherent in
swearing or slang must be given attention but so must the rhythm of
idiomatic expressions or the poetic nature of amorous communication.
The notion of dynamic coexistence might also rule out a translation
on grounds of plain readability alone, as when letter combinations
and words read with such difficulty that – never mind the correctness
of the semantic and grammatical equivalence – the viewer simply cannot
disentangle the subtitle before it has disappeared.
A consideration of the dynamic coexistence of speech and subtitles
should also ensure that we let short expressions stay short in translation,
or give a semblance of verbosity in the subtitle if the speaker is verbose.
Sometimes, by chance, it happens that an entire sentence of as many as
ten words can be expressed accurately by a short phrase in translation,
but much as subtitles strive for economy of expression, in such cases one
might have to add words simply in order to match the time it takes to
read a subtitle to the time it takes for it to be spoken in the original.
The importance of body language
Although originally presented in order to define poetics, Jakobson’s
functions are far from being just linguistic. In a visual medium such as
a film the viewer will be bombarded with clues as to what is going on.
Communication is also body language and usually coincides functionally with the spoken word. Somebody stroking someone over the head
will most often match that with words fulfilling the same function – in
this case the conative, centred on the receiver. A person wildly waving
their arms about in discussion will normally use words that reflect the
emotive function; someone using their hands to delineate or draw
shapes and spatial relationships will equally be inclined to using referential language; a handshake is the physical equivalent of establishing
verbal contact, both given a phatic function, etc. Sometimes the subtitler may be struggling with a particularly obtuse or unusual phrase,
sentence or sequence to translate, even if they have a general idea of
what the meaning of it is. In such instances it may be a useful strategy
to simply switch off the sound and view communication at the level of
body language: who is relating to whom, in what manner, what are the
power dynamics in the situation, etc. This exercise can often unlock –
or at the very least confirm – some of the psychological relationships
and messages in a particular scene. We do this in our own lives every
day, only then we cannot switch the sound off!
However, if body language does not match words, the effect is of
course entirely different. It is quite impossible to imagine a person with
a referential body language, using his hands in the manner of somebody
making a business presentation, while at the same time declaring his
undying love for his girlfriend. If somebody’s physical action is stroking
another person’s hair (conative), but the concurrent verbal action is
talking about the weather (phatic), or themselves (emotive), then there
is a jarring of functions. As viewers, we may sense that a film sequence
expresses something rather odd without knowing how or why. However,
this way of analysing may give a clue about how to combine different
elements in order to reflect what is odd. These kinds of relational
dynamics may be structured as shown in Figure 15.3.
This is obviously virtual reality and it is not suggested here that such
tables should be drawn up; there are far too many variables in communication to do this. But it is possible to make explicit what functional
relationship the spoken words have to a particular audiovisual sequence
as an aid to solving a particular subtitling problem. In cases where there
is a clash between body and verbal messages, the subtitler must of course
focus only on the expression of the verbal element, as the physical and
verbal actions are intentionally at odds, a priority
that also holds in cases of poor-quality acting.
Verbal function + Physical function = Cumulative effect
Phatic (talking about the weather) + conative (comforting) = Insincerity
Emotive (talking about oneself) + conative (comforting) = Self-centredness
Conative (declaring one’s admiration for someone) + referential (using
hands factually like a politician) = Patronising

Figure 15.3  Cumulative functions
Conclusion
The process of breaking down the communicative functions of filmic
dialogue, with a view to transferring them to subtitles, can be seen as a
largely structuralist activity, and as such amounts to more than just
pragmatic considerations. Indeed, subtitling choices are not simply
modified by the associative functions we have discussed but are, in
fact, defined by them.
In subtitling, the balance between action, meaning and words needs
to be accurate, and it is this balance that the fifth competence
mentioned in our discussion incorporates. When confronted with a subtitling
task, we must ask ourselves: ‘What is most important here?’, and the
various areas outlined as levels of competences need to be considered in
reverse order. First, we need to understand the communicative function,
and consider how to formulate this expression. Secondly, we must try to
align this expression with the cultural associations of the situation,
including any possible violations of Gricean maxims. Thirdly, we must
try to get close to the meaning of the spoken words themselves. But in
our search for the words, or the right cultural associations and potential
implicatures, we must not forget the communicative function, since this
is the most basic element of any equivalence we might be trying to recreate.3
By looking at the different interpretations and translations of the phrase
‘I’m OK’ we have shown that creating equivalence at word level may
not be possible. In all translation acts there will be a compromise, but
what we cannot ignore when subtitling is the communicative function
performed by a phrase or sentence.
Notes
1. Jakobson uses the terms ‘metalinguistic’ and ‘poetic’, which, in this
discussion, have been replaced by ‘metacommunicative’ and ‘performative’ in order
to move from a linguistic to a more general semiotic dimension.
2. No doubt Jakobson (1960) must have been widely read, including by Nord,
but his typology does not seem to have had enough impact to make text type
analysis incorporate all six functions.
3. The existence of equivalence at a more general level, for example between
subtitle and filmic dialogue, is arguable, at least in a conventional way. It
must be remembered that subtitles are not in themselves a complete product
as they are symbiotically tied to the film. They are the result of an interlingual repositioning of selected elements of the original verbal content taken
from, and reintroduced back into, the overall narrative of the film. It is also a
basic premise that for equivalence to be a relevant concept, a replacement of
the original needs to have taken place. Strictly speaking, subtitles live a life
of coexistence, not replacement.
References
Baker, M. (1992) In Other Words. London: Routledge.
Barthes, R. (1977) Image, Music, Text (Transl. Stephen Heath). London: Fontana
Press.
Bühler, K. (1934 [1967]) Sprachtheorie: Die Darstellungsfunktion der Sprache. Stuttgart:
Gustav Fischer.
De Linde, Z. and Kay, N. (1999) The Semiotics of Subtitling. Manchester: St. Jerome.
Gottlieb, H. (2003) Screen Translation: Six Studies in Subtitling, Dubbing and
Voice-Over. Copenhagen: University of Copenhagen.
Grice, H.P. (1975) ‘Logic and conversation’. In P. Cole and J.L. Morgan (eds)
Syntax and Semantics (pp. 41–58). New York: Academic Press.
Ivarsson, J. and Carroll, M. (1998) Subtitling. Simrishamn: TransEdit.
Jakobson, R. (1960) ‘Closing statement: linguistics and poetics’. In R. DeGeorge
and F. DeGeorge (eds) (1972) The Structuralists: From Marx to Lévi-Strauss
(pp. 85–122). New York: Anchor Books.
Katan, D. (1999) Translating Cultures: An Introduction for Translators, Interpreters
and Mediators. Manchester: St. Jerome.
Levinson, S.C. (1983) Pragmatics. Cambridge: Cambridge University Press.
Loveday, L.J. (1982) ‘Conflicting framing patterns: the sociosemiotics of one
component in cross-cultural communication’. Text 2(4): 359–74.
Munday, J. (2001) Introducing Translation Studies: Theories and Applications.
London: Routledge.
Reiss, K. (1971) ‘Decision making in translation’. In L. Venuti (ed.) (2000) The
Translation Studies Reader (pp. 168–79). London: Routledge.
16
Pedagogical Tools for
the Training of Subtitlers
Christopher Taylor
Introduction
Several research projects exploiting the potential of the ‘multimodal
transcription’ as devised originally by Thibault and Baldry (2000) have
pointed to useful tools for the screen translator, with particular reference
to the subtitler. The methodology as adapted here consists in drawing up
a table of rows and columns (see Figure 16.1), illustrated with an extract
from the Italian soap opera Un posto al sole.
The first column indicates time in seconds while the second column
contains the individual frames of a film which may be, as in the
example in Figure 16.1, of one-second duration. The third column
(visual image and kinesic action) describes in some detail what is visible on the screen and what action is taking place. This is effected
through a series of codes (‘D’ stands for distance, ‘CP’ for camera position, ‘VS’ for visual salience, ‘VC’ for visual collocation and ‘VF’ for
visual focus) and features such as perspective, lighting effects, primary
and secondary visual salience, gaze vectors and many other elements
are thus described. The fourth column describes the sounds that are to
be heard in each frame, including musical accompaniments and dialogue. The final column is reserved for the appending of subtitles
where required.
The minute description contained in the middle columns allows for
a thorough analysis of the multimodal text in all its manifestations,
and sensitises the analyst to the complicated meaning-making resources
contained therein. The purpose of this methodology, as far as subtitling
students are concerned, is that it enables them to base their translation
choices on the meaning already provided by the other semiotic
modalities contained in the text (visual elements, music, colour, camera
positioning, gestures). This process, though time-consuming from a
practical point of view, has proved to be an extremely valid pedagogical
instrument.

Frame 8 (1 second)
Visual image and kinesic action: CP: horizontal; D: medium close;
VS: Renato gesticulating, women passively absorbing; VC: apartment
furnishings; Gaze: Renato looks at both women, looking for agreement;
lighting natural. Renato moves far right, hands still forward and
moving frenetically.
Soundtrack: Questo non è un desiderio...
Subtitle: This isn’t just a wish!

Frame 9 (1 second)
Visual image and kinesic action: (as above) VF: Renato looks down at
Giulia (power pose). Hands folded down, palms up.
Soundtrack: Questa è una pazzia!
Subtitle: This is madness!

Frame 10 (1 second)
Visual image and kinesic action: (as above) Giulia opens palms in
gesture of frustration.
Soundtrack: Renato, calmati!
Subtitle: Renato, calm down!

Figure 16.1  Multimodal transcription of three frames from Un posto al sole
Students at Trieste University have been encouraged to carry out this
type of analysis over a whole range of film types (feature films, soap
operas, cartoons, advertisements, television series, news broadcasts,
documentaries, etc.) as material for their graduate dissertations. The
results have shown how the various film types require different treatment when translating for subtitles. An obvious example is that of the
nature documentary where the correspondence between the visual and
the verbal is at its highest, as compared to an animated scene from a
soap opera where the dialogue may refer to many events and people
outside the immediate setting.
However, given the impracticality of the method for professional
purposes with anything longer than a short television advertisement,
a second stage in the research brought in the concept of phasal analysis,
following the ideas of Gregory (2002) who used this term to describe
his analyses of written texts. Thus, rather than individual frames, film
texts can be divided or parsed into phases, sub-phases and ‘phaselets’
based on the identification of coherent and harmonious sets of semiotic modalities working together to create meaning in recognisable
chunks, rather in the manner of written texts. Such texts can be analysed in terms of their phasal construction, which includes the transitions that separate the phases and that behave somewhat like
conjunctions or discourse markers in the written mode; various devices
are employed, from the clear cut from one shot/scene to another, to
fading in or out, to a blurred screen heralding a dream sequence, etc.
This further extension of the original tool provides the basis for a
thoroughgoing analysis of any film text and, it is hoped, will prove a
useful addition to the pedagogy of film translation. This chapter
reports on the progress being made in refining this tool within the
environment of university courses in screen translation. It also closely
adheres to the ideas of Kress and van Leeuwen (1996) relating to multimodal texts and visual grammar.
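The five-column transcription table described above lends itself to a simple data model. The Python sketch below is illustrative only – the class and field names are this sketch’s own assumptions, not part of Thibault and Baldry’s notation – but it shows how each row pairs a time-coded visual description (with the D, CP, VS, VC and VF codes) with the soundtrack and the appended subtitle.

```python
# A minimal, illustrative data model for one row of the multimodal
# transcription described above. Class and field names are assumptions
# of this sketch; the chapter prescribes no software tool.
from dataclasses import dataclass, field

@dataclass
class TranscriptionRow:
    time_s: int                                 # column 1: time in seconds
    visual: dict = field(default_factory=dict)  # column 3: D, CP, VS, VC, VF codes
    kinesic_action: str = ""                    # column 3 (cont.): action described
    soundtrack: str = ""                        # column 4: dialogue, music, effects
    subtitle: str = ""                          # column 5: subtitle, where required

# Frame 8 of the Un posto al sole extract shown in Figure 16.1:
row8 = TranscriptionRow(
    time_s=8,
    visual={"CP": "horizontal", "D": "medium close",
            "VS": "Renato gesticulating, women passively absorbing",
            "VC": "apartment furnishings"},
    kinesic_action="Renato moves far right, hands moving frenetically",
    soundtrack="Questo non è un desiderio...",
    subtitle="This isn't just a wish!",
)
print(row8.visual["D"])  # medium close
```

A sequence of such rows, grouped into phases and sub-phases, would correspond to the phasal analysis discussed in the preceding paragraph; column 2 of the original table (the frame image itself) is omitted here as it cannot usefully be represented in text.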
Discourse, Design, Production, Distribution
and Interpretation (DDPDI)
Kress and van Leeuwen (1996) describe the creation of multimodal
texts in terms of four, not necessarily chronological, stages – discourse,
design, production and distribution. They also add a final stage completing the whole process, that of interpretation on the part of the
receiver of the multimodal text. Firstly they use the term ‘discourse’ to
refer to an abstract level of ‘text’ that transcends even the concept of
genre. In a sense it hangs in the air from where it can be channelled
into the language use associated with identifiable genres and genrelets
through instantiation. For example, there exists a discourse of ‘love’
which may manifest itself in talk between lovers, in written love letters,
on the problem pages of tabloid newspapers or in a Danielle Steel
novel or, what is of particular interest for this chapter, in a film
sequence.
Design involves the use of all the semiotic resources available to the
multimodal text creator (words, images, music, etc.) in instantiating
Pedagogical Tools for Training Subtitlers
217
discourse. It involves, for example, choosing between the verbal and
the visual, or deciding the weighting of various semiotic modalities.
Such design decisions vary from individual to individual and from age
to age. Consider, for instance, commercial advertisements which once
were a purely verbal phenomenon, a written text appearing in a
newspaper, whereas now they are the embodiment of the sophisticated
multimodal text, at times using a vast array of meaning-making
resources. Feature films can also exploit the whole gamut of semiotic
modalities, especially combining them to create meanings that transcend the significance of the individual elements. Film design thus lies
in the hands of scriptwriters, set designers, costume designers, make-up
artists, lighting engineers, etc.
Production can be described as the organisation of the expression,
the putting into practice of the designed discourse. Film production is a
complicated process; film is a complex semiotic event involving many
participant ‘producers’. Actors, of course, produce the spoken discourse.
Cameramen are responsible for the production of the visual discourse,
musicians for the soundtrack and sound engineers for accompanying
effects. It is also true that production is often shared. For example, while
recognising that the camera crews create the visual text, the actors also
contribute in some way by dint of gesture, body language, movement
and the like. The star of the film may sing to a musical accompaniment.
Many other ‘producers’ may be involved and the long list of
acknowledgements that appear at the end of any film is testimony to
this. But then, in the final analysis, it is the director who could be said
to produce the whole multimodal event, or at least exercise control over
that ‘production’.
The distribution stage is that in which the multimodal product is
presented to, potentially, millions of receivers. Music, for example, is
distributed as CDs, on television, or on the stage. Film is distributed in
cinemas, on television, as video cassettes and, increasingly, as DVDs
and through online streaming. Home movies, holiday videos and wedding shoots are examples of home-made multimodal texts where perhaps a small group of people handle the whole process from discourse
through distribution to interpretation, but in the case of film, the
number of people involved may be very large and disparate and the
stages may be handled totally separately.
As all communication depends on interpretation, the result of discourse, design, production and distribution needs to be interpreted
by the end user. The act of interpretation completes the process, and
218 Christopher Taylor
not necessarily in a predictable way. As Kress and van Leeuwen
(1996: 8) point out: ‘Which discourses interpreters or users bring to
bear on a semiotic product or event has everything to do, in turn,
with their place in the social and cultural world’. And this is interpretation at a first remove. The question of interpretation becomes much
more important when the original multimodal text is a translated
product.
DDPDI and film text
We shall now examine Kress and van Leeuwen’s stages as they relate
specifically to film text, and subsequently as they relate to film subtitling. The discourse of film on one level can be seen as the discourse of
the particular theme or subject matter, e.g. Mafia discourse in The
Godfather, or cowboy discourse in John Wayne vehicles, and these
include the nonverbal elements such as Armani suits in the one instance
and guns and holsters in the second. At another level, film language is
a law unto itself and differs from real language use. Starting from the
premise that film scripts are ‘written to be spoken as if not written’
(Gregory and Carroll, 1978), any comparison with real, spontaneous
spoken language should demonstrate similar patterns in both. However,
experiments carried out at the University of Trieste tend instead to point
to the difficulty of reproducing genuine oral discourse in film.
Comparing the scripts of fifty modern films set in contemporary, ‘real’
environments with an equivalent-sized sample of spoken language from
the Cobuild Bank of English corpus, it was discovered that, at least in
terms of a number of chosen variables, the film scripts differed considerably from the corpus sample. The use of six discourse markers (‘now’,
‘well’, ‘right’, ‘yes’, ‘OK’ and ‘so’), all typical of authentic spoken language, is much more marked in the corpus sample than in the film
scripts. Other experiments that tested for the frequency of tag questions, hedges and vague language (‘sort of’, ‘kind of’, etc.) in television
series produced similar results.
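The kind of frequency comparison carried out in these experiments can be sketched in a few lines of code. The word list matches the six markers named above, but the two text samples and the resulting figures are invented for illustration; they are not the Trieste film-script data or the Cobuild corpus.

```python
import re
from collections import Counter

# The six discourse markers tested in the Trieste experiments.
MARKERS = {"now", "well", "right", "yes", "ok", "so"}

def marker_rate(text: str) -> float:
    """Occurrences of the markers per 1,000 words of text."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in MARKERS)
    return 1000 * hits / len(words)

# Invented samples standing in for a film script and a spoken corpus.
film_script = "You must leave tonight. Take the car and do not stop."
spoken_corpus = "Well, yes, OK, so I said, right, now hang on a minute."

print(f"film script:   {marker_rate(film_script):.1f} per 1,000 words")
print(f"spoken corpus: {marker_rate(spoken_corpus):.1f} per 1,000 words")
```

Run over genuinely comparable samples, a consistently lower rate in the scripted material is the kind of result the experiments report.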
In terms of design, the discourse of a film is channelled into something with possibly multiple aims. For example, the
discourse of ‘love’ in the film Notting Hill provides a vehicle for the
classic boy–girl development of the plot, but is also designed to entertain and amuse. It creates feelings of empathy, antagonism or eroticism
by using the semiotic modalities at its disposal. The shift from script to
screen with the skilful use of multimodal resources such as music,
light, camera angle, etc. can create a new semiotic phenomenon. At
this point it is sometimes hard to distinguish design from production.
Just as actors can break out of the scripted strait-jacket and redesign an
original script (transcriptions of film dialogue often show discrepancies in relation to the original script, as the actors begin to ‘feel’ the
part and, as a result, render the dialogue more authentic), they can also
redesign the ‘stage directions’ and produce some semiotic element that
enhances, modifies, even ‘brands’ the original discourse. Consider
Bogart and Bergman’s famous kiss in the film Casablanca, as compared
to the mechanical design of a porno movie.
Finally, distribution occurs after the director has completed the
editing process and presented a finished product. The director’s role
is crucial in the development of the semiotic potential that enables
film to become a recognisable ‘language’ (Kress and van Leeuwen,
1996: 85), to be interpreted by an audience whose size can never be
predicted. The extent and direction of distribution depends partly on
the intrinsic worth bestowed by the first three stages in the filmmaking process and the subsequent popularity or critical acclaim of
the product, but also depends on commercial and political factors
which are instrumental in terms of audience reach, while audience
interpretation, inasmuch as it manifests itself in positive or negative
response, is equally instrumental in maintaining or improving
distribution levels.
Phasal analysis
The focus of the study now shifts back to Gregory’s (2002) concept of
phasal analysis. Language functions to realise social interaction; that is,
it combines the three components of the context of situation, namely
field, tenor and mode (Halliday, 1994), to create discourse. In practically
any text, including multimodal texts, it is possible to identify discrete
phases characterised by factors that bind that particular part of a text
together as a single, homogeneous unit. The three components of the
context of situation (field, tenor and mode) relate directly to the three
(meta)functions of language delineated by Halliday (1994): ideational,
interpersonal and textual functions.
The ideational function of language is that of providing interlocutors
with the means to talk about the world, in the general sense, and their
experiences within the world. The interpersonal function enables relationships to be established, maintained and altered and for roles to
emerge in communication acts. The textual function is what allows
communicators to construct discourse and to present information
cohesively and coherently. In terms of phasal analysis, those
continuous and discontinuous stretches of discourse which can be identified as phases share ideational, interpersonal and textual consistency
and congruity. A clear example is provided by any television soap
opera. A number of parallel plots or stories run simultaneously, but to
maintain the interest of the viewers, they are presented clip by clip in a
seemingly endless succession of short alternating scenes or scene
sequences. The latter can be considered discontinuous phases, often
divisible in turn into subphases and sub-subphases, which retain a
number of features that are intrinsic, even unique, to those phases.
Thibault and Baldry (2000: 320) were the first to apply Gregory’s
phasal analysis to multimodal texts, explaining that phases in this
sense are characterised by ‘a high level of metafunctional consistency [ ... ] among the selections from the various semiotic systems
(music, action, dialogue, etc.)’. Gregory (2002) uses the term ‘communicating community context’ (CCC) as a macro version of the concept
of the context of situation in which an act of communication takes
place. It refers to the dimensions of time, geographical location, ideologico-political situation, etc. The CCC therefore partially reflects and
partially constructs the functioning of language in conversation, but
in the case of film, the CCC surrounds a group of actors being directed
to play a scene in what we shall call an ‘artificially produced situation’
(APS). This APS must relate, like all other communicating community
contexts, to the participants’ knowledge of the world and, naturally, to
the language they are using.
However, the actors’ knowledge of the world (time, place, tradition,
culture, etc.) in the form of sets of schemata variously described as
frames (Minsky, 1975), scripts (Schank and Abelson, 1977), mental models (Johnson-Laird, 1980) or scenarios (Sanford and Garrod, 1981),
is harnessed to an imaginary character’s gnostology. This would provide
some explanation as to why scriptwriters and actors find it difficult to
recreate convincing dialogue but also raises another question. Could it
be that the great actor’s gift (or acquired skill) is that of constructing
and reconstructing schemata to the point of making an APS seem real?
The phasal analysis of a (multimodal) text provides ‘a connection with
conceptual and gnostological analysis – a basis for the investigation of
how speakers construct reality’ (Gregory, 2002: 342). And this applies
equally to the unreal situations that we find in film. Even in film
situations, the participants interact coherently in terms of all the semiotic modalities in play, and viewers understand films (usually) because
they make an effort to ‘refer language to some understandable social
interaction’ (Fries, 2002: 347). This understanding/interpretation
is assisted here by phasal analysis which throws up identifiable
patterns of use. Just as in written texts, within the continuous and
discontinuous stretches of discourse, items re-occur and co-occur to
create recognisable circuits and chains of meaning, similarly, in multimodal texts, nonverbal elements will re-occur and co-occur in the same
way, and in an integrated relationship with the verbal elements, forming clusters of semiotic modalities.
The ‘Bundaberg Beer’ text
Figure 16.2 shows three frames from a short Australian advertisement
for Bundaberg ginger beer. The pictures in the frames best represent
the three macro-phases identified within this multimodal text. In
Visual frame + kinesic action | Soundtrack | Subtitle
1. Man walking purposefully through outback. | Music/no speaking | –
2. Action with bottle. | 'You can't walk away from the truth' / 'Naturally brewed from our own family owned company' | 'Non puoi fuggire dalla verità' / 'Prodotta in modo naturale dalla nostra azienda di famiglia'
3. Attention focussed on product. | TRUE BREW GINGER BEER | GINGER BEER QUELLA VERA
Figure 16.2
macro-phase 1 a man can be seen walking through the outback on a
hot day, passing in front of a mountain, wading through a stream, and
climbing a rocky outcrop. When he gets to the top, macro-phase
2 commences as he spies a bottle of Bundaberg ginger beer nestling in
a bed of ice; he takes the bottle, opens it and drinks it with enormous
gusto. These actions are accompanied by a voice off describing the
product in a broad Australian accent. Finally in macro-phase 3 the
product is shown in the conventional style with the camera moving
backwards and forwards over a picture of the product and the slogan:
‘True brew ginger beer’.
The phases are identified as such in that various semiotic modalities
can be seen to come together to form a unified whole. For example, in
macro-phase 1 the single action of walking through the outback is
accompanied by a particular piece of music which ends at the end of the
phase. The transition then leads us into macro-phase 2 which is of
course connected to 1 but recognised as a distinct unit – that of the
drinking of the beer accompanied by a different musical item. There is
also connection at an intertextual level. This advertisement forms part
of a series, linked very clearly by a number of factors – the same
characters, the same music, the same appeal to the outdoors, very
similar wording, the same slogan, the same voice off and, of course, the
same product.
In Kress and van Leeuwen’s (1996) terms, the multimodal discourse
of all the ads is that of convivial outdoor beer drinking in hot weather
allied to the conventions of drinks advertising. The design is reworked throughout the series of advertisements, which take the verbal and visual discourse of the original and create new, instantly recognisable multimodal texts using mostly the same resources. In terms of
production, different discourses are used in the creation of genrelets
of the subgenre ‘Bundaberg ads’ of the main genre ‘drinks ads’. For
example, the man’s no-nonsense voice and pronounced Australian
accent add texture to the discourse. These advertisements are then
distributed on television to a large audience, originally one anchored in the cultural context of Australia and of beer drinking. Interestingly, the commercial promotes a non-alcoholic product
but within the collective semiotic knowledge regarding serious beer
drinking. Its discourse is not that of Coca Cola. Figure 16.3 shows the
division in macro-phases and micro-phases of the Bundaberg beer
advertisement, indicating how the verbal and nonverbal elements
combine.
MACRO-PHASES:
1. Man walking through outback – musical accompaniment
2. Action with bottle
3. Attention on product

Macro-phase 1:
1. Man walking away through grass
2. Side shot of legs (mountain behind)
3. Passes tree under red sky
4. Back view of man
5. Side shot of legs
6. Walking through stream
7. Close-up of feet
8. Emerging from stream – music changes
9. Walking through undergrowth
10. Climbing rock face
11. Reaches summit

Macro-phase 2:
12. Man sees bottle
13. Reaches for bottle – 'You can't escape from the truth'
14. Opens bottle – 'Naturally brewed from our own family owned company'
15. Drinks from bottle

Macro-phase 3:
16. Close-up of label, panning back
17. Bottle shown and words TRUE BREW GINGER BEER
Figure 16.3
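The phasal breakdown in Figure 16.3 lends itself to a simple data representation. The sketch below is an illustrative structure only, not any published transcription scheme: each micro-phase is recorded with its macro-phase number, a label, and the semiotic modalities active in it.

```python
# Illustrative structure only: each micro-phase carries its
# macro-phase number, a label, and the modalities active in it.
phases = [
    {"macro": 1, "label": "Man walking away through grass",
     "modalities": {"visual", "music"}},
    {"macro": 2, "label": "Reaches for bottle",
     "modalities": {"visual", "speech"},
     "speech": "You can't escape from the truth"},
    {"macro": 3, "label": "Close-up of label, panning back",
     "modalities": {"visual", "text"}},
]

def macro_phase(phases, n):
    """All micro-phases belonging to macro-phase n, in order."""
    return [p["label"] for p in phases if p["macro"] == n]

print(macro_phase(phases, 2))
```

A full transcription would list all seventeen micro-phases; grouping them by macro-phase in this way makes the clustering of modalities within each phase easy to inspect.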
Some suggestions for the subtitler – DDPDI
As mentioned above, the discourse of the Bundaberg advertisements
hanging in the air is that associated with beer drinking and television
advertising, particularly as these relate to the original linguistic and
cultural setting. The ads are archetypically Australian. This poses a
dilemma for the subtitler, who has to tap into both an abstract discourse defined by local cultural mores and translation norms for written subtitles. The dilemma hinges on activating a localisation process (adapting the text to a local audience's expectations) as opposed to foreignising the text and maintaining the original flavour. It is possible that ginger beer is best advertised in its Australian context, as this appeals to the foreign viewer
more than a grafting of a locally connotated (written) text onto the
original. On the other hand, as subtitling is an additive process, nothing is lost of the original including the voice and accent, and this may
be enough to preserve the Anglo-Saxon context, even if the written
text is localised. In fact, it is riskier to foreignise the wording – i.e. give
it an Australian feel – in, say, Italian (the extreme case of this would be
simply to leave the subtitles out altogether) as the discourse of the subtitles has also to adhere to the linguistic games being played. Thus the
play on the key word ‘truth/true’ [verità/vera] needs to be kept; the use
of monosyllabic generic terms, rhyme and word play are all up there in
the abstract as integral parts of the discourse of advertising. While the
discourse of Italian advertising can to a certain extent fit into this slot,
the final slogan poses a question. The translation of ‘True brew ginger
beer’ with Ginger beer quella vera is dictated by the need to translate
(localise) the term ‘brew’ which, unlike ‘true’, is not at all familiar to a
mass audience. Does this matter, if the sound and rhythm and rhyme,
and particularly the picture, convey the message adequately? No answer
will be given to this basically rhetorical question because its purpose is
simply to illustrate the kind of thinking that needs to be activated in
addressing the problem of localising or foreignising the discourse of
the source text.
The design of the visual and oral discourse in the Bundaberg texts
remains unchanged, and in a subtitled version the only alteration to production is
in the verbal text. However, the latter is transferred back from the spoken to the written medium, which is itself complicated by the fact that
the original was ‘written to be spoken as if not written’. Complications
notwithstanding, the actual wording of the titles constitutes a phase of
production and a number of factors come into play, which are not found
in any other type of text production or translation process. The writing
of the subtitles is based firstly on a variegated set of considerations and
strategies of a linguistic, semantic and pragmatic nature. A limited
number of scholars and experts have written on the subject of subtitling strategies but the works of Ivarsson (1992), Ivarsson and Carroll
(1998), Gottlieb (1992, 1997, 2000) and Gambier (1995, 1998) are
sufficient to show that a great many notions exist as to how to best
perform this craft. Secondly, subtitle production is governed by a whole
series of technical constraints ranging from positioning to timing and
designing. The art of spotting (i.e. placing the subtitles in the right place
at the right moment) is a crucial part of this production process.
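As an illustration of the technical side of this production stage, the sketch below checks two common constraints on a single subtitle. The limits used (37 characters per line, 15 characters per second) are widespread rules of thumb assumed here for illustration; they are not values prescribed by the authors cited above.

```python
# Hypothetical spotting check: flags subtitles whose text cannot
# comfortably be read in the time they stay on screen.
# The limits below are common rules of thumb, assumed for illustration.
MAX_LINE_CHARS = 37
MAX_CHARS_PER_SECOND = 15.0

def spotting_problems(text: str, in_time: float, out_time: float) -> list:
    """Return a list of constraint violations for one subtitle."""
    problems = []
    duration = out_time - in_time
    if duration <= 0:
        return ["out-cue must follow in-cue"]
    for line in text.split("\n"):
        if len(line) > MAX_LINE_CHARS:
            problems.append(f"line too long ({len(line)} chars)")
    if len(text.replace("\n", " ")) / duration > MAX_CHARS_PER_SECOND:
        problems.append("reading speed too high")
    return problems

# The Bundaberg subtitle discussed earlier, on screen for two seconds.
print(spotting_problems("Non puoi fuggire dalla verità", 4.0, 6.0))
```

Professional subtitling software performs checks of this kind automatically; the point here is simply that spotting decisions are measurable constraints, not matters of taste alone.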
Turning to distribution, the question of localising/foreignising arises
again. Subtitling a film involves projecting the discourse, design and
production of the original product into a new culture. An example of
the kind of issue that can arise is provided by one of the Bundaberg
advertisements in which a young girl on the beach spills some ginger
beer from its bottle onto a magazine, and then proceeds to drink from
the folded mag. This causes no real concern in Anglo-Saxon cultures
but in some other cultures this would be frowned upon, and showing
the advertisement would not be conducive to marketing the product.
Finally, in terms of interpretation, the words of subtitles represent a
distinct (macro) genre and are not interpreted in the same way as written scripted words, words on the screen in the original language or the
spoken words. Thus, where localisation is required, and this is often the
case in advertising, subtitles can act to soothe sensibilities and avoid
tensions. Their interpretation is, however, bound to other accompanying semiotic features, such as a raucous or a sensuous voice, or a musical
theme.
Lessons for the subtitler – phasal analysis
(1) Phasal analysis allows us to see how (multimodal) texts are
constructed, continuously and discontinuously, and thus where to
look for patterns at a lexico-grammatical and semantic level. These
patterns must be recognised and, if possible, respected.
(2) Within the ‘artificially produced situation’, subtitlers can distinguish
combinations of field, tenor and mode and attempt to relate them to
the kind of situation they wish to create for the target audience.
(3) Field: does the translator’s knowledge of the world and range of
schemata equip him/her to effectively relay the meaning of the text,
or is more care required in foreignising or localising the text?
(4) Tenor: this is the most delicate area for the subtitler. Scholars like
Kovačič (1998) have shown that in subtitling there is a strong tendency to sacrifice the interpersonal element, those items that particularly distinguish spoken language, as exemplified in the
description of the experiment conducted in Trieste (see above). It is
at least advisable to maintain a sufficient number of these features.
Kovačič herself provides the example of the mother figure in the
televised rendering of Eugene O’Neill’s Long Day’s Journey into
Night. The mother, in the midst of feverish discussions of a subversive political nature between earnest young men at table, continues to repeat platitudes about her hair, of no consequence to the
unravelling of the plot. However, the character of the mother is
important as a kind of counterpoint to the serious revolutionary
action, and is highlighted by her inane repetitions. Similarly, the
stuttering of the character Ken, played by Michael Palin, in the
film A Fish Called Wanda, while not adding to the storyline, provides an important angle on the character and contributes to the
humour of the text, thereby deserving of the translator’s attention
through Gottlieb’s strategy of ‘imitation’ (1992: 166), that is, the
phonetic transcribing of the stutter. The dynamic force of repetition, hesitation, discourse markers, and hedging devices should
not be underestimated.
(5) Mode: in a counter-tendency to the omission of interpersonal
elements, the spoken-to-written transformation often results in overexplicitness in the information load. Baker (1995) conducted research
on written texts proving, through a comparison of parallel texts,
that translated material tended to be more explicit than equivalent
texts written in the original language. For example the ‘that’ relative,
which is often expendable, is found significantly more often in
translated texts. Contracted forms of the 'don't', 'wouldn't' kind are significantly more frequent in original texts. Thus the presentation of
information in translations has proved to be slightly different over a
large amount of material. The question of whether the same occurs
in subtitles is inevitably connected to the ‘written to be spoken as if
not written’ conundrum. In theory the writer of the text to be translated will have made efforts to make that text seem natural or
original, even though we have seen that this objective is difficult to
achieve. In any case, subtitlers must be aware of this and also consider that their subtitles are, in a sense, ‘written to be read as if not
written’.
(6) Subtitlers must make every effort to be faithful to the original text
in terms of ‘experience relationships’ or the ideational element,
‘interaction relationships’ or the interpersonal element, and the
medium relationships, and thus recognise where the ideational,
interpersonal and textual features show consistency and congruity.
(7) The experience, interaction and medium relationships are also
present in the non-linguistic modalities (visual, musical, gestural)
and the consistency and congruity must be identified as an integrated whole. Phasal analysis, as adapted by Thibault and Baldry
(2000), performs this task.
(8) Subtitlers should work from the transcribed text of the original film,
not the original script. The way actors react to artificially produced
situations often results in dialogue that moves closer to real language use. As Zuber (1980: 8) states: 'The kind and degree of training
and the sophistication an actor brings to his art are further filters on
the script performed’.
These suggestions, provided as debating points rather than prescriptive
rules, mark the end of an initial phase in theoretical research into screen
translation didactics. Much hard work of a more practical nature is now
required to make sure these ideas are converted into useful tools for the
screen translator.
References
Baker, M. (1995) ‘Corpora in Translation Studies: an overview and some suggestions for future research’. Target 7: 223–43.
Baldry, A. (ed.) (2000) Multimodality and Multimediality in the Distance Learning
Age. Campobasso: Palladino Editore.
Fries, P. (2002) 'Some aspects of coherence in a conversation'. In P. Fries, M. Cummings, D. Lockwood and W. Spruiell (eds) Relations and Functions within and around Language (pp. 346–88). London: Continuum.
Gambier, Y. (ed.) (1995) Audiovisual Communication and Language Transfer.
Translatio, Nouvelles de la FIT – FIT Newsletter XIV(3–4).
Gambier, Y. (ed.) (1998) Translating for the Media. Turku: University of Turku.
Gregory, M. (2002) 'Phasal analysis within communication linguistics: two contrastive discourses'. In P. Fries, M. Cummings, D. Lockwood and W. Spruiell (eds) Relations and Functions within and around Language (pp. 316–45). London: Continuum.
Gregory, M. and Carroll, M. (1978) Language and Situation: Language Varieties and
Their Social Context. London: Routledge and Kegan Paul.
Gottlieb, H. (1992) ‘Subtitling – a new university discipline’. In C. Dollerup and
A. Loddegaard (eds) Teaching Translation and Interpreting: Training, Talent and
Experience (pp. 161–70). Amsterdam and Philadelphia: John Benjamins.
Gottlieb, H. (1997) Subtitles, Translation and Idioms. Copenhagen: University of
Copenhagen.
Gottlieb, H. (2000) Screen Translation: Six Studies in Subtitling, Dubbing and
Voice-over. Copenhagen: University of Copenhagen.
Halliday, M.A.K. (1994) An Introduction to Functional Grammar. London: Edward
Arnold.
Ivarsson, J. (1992) Subtitling for the Media. A Handbook of an Art. Stockholm:
TransEdit.
Ivarsson, J. and Carroll, M. (1998) Subtitling. Simrishamn: TransEdit.
Johnson-Laird, P.N. (1980) 'Mental models in cognitive science'. Cognitive Science 4(1): 71–115.
Kovačič, I. (1998) 'Six subtitling texts'. In L. Bowker, M. Cronin, D. Kenny and J. Pearson (eds) Unity in Diversity? Current Trends in Translation Studies
(pp. 75–82). Manchester: St. Jerome.
Kress, G. and van Leeuwen, T. (1996) Reading Images: the Grammar of Visual
Design. London: Routledge.
Minsky, M. (1975) ‘A framework for representing knowledge’. In P. Winston (ed.)
The Psychology of Computer Vision (pp. 211–77). New York: McGraw Hill.
Sanford, A.J. and Garrod, S.C. (1981) Understanding Written Language. New York:
Wiley.
Schank, R.C. and Abelson, R.P. (1977) Scripts, Plans, Goals and Understanding.
Hillsdale, N.J.: Erlbaum.
Thibault, P. and Baldry, A. (2000) 'The multimodal transcription of a television
advertisement: theory and practice’. In A. Baldry (ed.) Multimodality and
Multimediality in the Distance Learning Age (pp. 311–85). Campobasso:
Palladino Editore.
Zuber, O. (1980) The Languages of Theatre: Problems in the Translation and
Transposition of Drama. Oxford: Pergamon Press.
17
Teaching Subtitling in a
Virtual Environment1
Francesca Bartrina
Introduction
The crucial role played by audiovisual translation (AVT) in contemporary
international communication invites translator trainers to contemplate
the different possibilities available when training translators for the
modern mass-communication market. The acquisition of new skills in
the use of Information and Communication Technology (ICT) is a challenge that instructors of subtitling and trainees must face up to. In
online multimedia courses, the use of digital technology is an imperative, but professional computer programs are expensive and educational
licences are often unavailable. In this situation, how can students get
the hands-on training they need? In response, some academic institutions try to generate their own in-house solutions.
In the Catalan/Spanish context, two universities have answered
these challenges by creating their own programs. The Universitat
Autònoma of Barcelona (UAB) has developed the programs Subtitul@m
for subtitling and REVOice for dubbing and voice-over: ‘Both are pioneering programs that give students the opportunity to simulate real
working conditions and become familiar with the various procedures
involved in the different AVT modes’ (Díaz Cintas and Orero, 2003:
373). In 2001 Richard Samson in the Translation School of the
Universitat de Vic, in Barcelona, created the Poor Technology Group
(PTG) ‘for people who work in the field of education and who seek
innovative low-budget digital solutions to vocational (professional
training) educational problems’ (www.uvic.es/fchtd/especial/en/ptg/
ptg.html). Ever since, the group and its members have been developing
solutions to a variety of training issues, including dubbing and voice-over. They faced the demands of online subtitling instructors who
considered subtitling practitioners a 'highly specialised professional sector that uses exceedingly expensive software'. Samson
has created the Digital Video Subtitling Compilation (DVSC), which can either be distributed to students in CD-ROM format or downloaded as
a demo. The programs included in this project are free and available
on the following websites:
● Subtitle Workshop (subtitle editor): www.urusoft.net/downloads.php
● DirectX (Windows multimedia library files): www.microsoft.com/windows/directx
● Digital Video Subtitling Compilation: www.uvic.es/fchtd/especial/en/ptg/ptg.html
When teaching subtitling online, it is essential to work with digital subtitling programs because they emulate the working conditions of professional subtitlers in the contemporary market. In the real world,
subtitlers are exposed to different subtitling software packages in each
of the companies they work for. We need to train our students to be able
to use as many subtitling programs as are available on the market.2 In
what follows, I discuss some aspects of teaching subtitling in a virtual
environment.
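One point of contact between the different packages is the plain-text SubRip (.srt) file format, which many subtitle editors, Subtitle Workshop among them, can read and write. The sketch below, with invented timecodes, shows the layout of a single cue.

```python
def srt_cue(index: int, start: str, end: str, text: str) -> str:
    """Format one subtitle cue in SubRip (.srt) layout: a cue number,
    a timecode line, then the subtitle text. In a full file, cues are
    separated by a blank line."""
    return f"{index}\n{start} --> {end}\n{text}\n"

# Invented timecodes for the Bundaberg subtitle discussed in Chapter 16.
print(srt_cue(1, "00:00:04,000", "00:00:06,000",
              "Non puoi fuggire dalla verità"))
```

Because the format is plain text, students can inspect and correct their spotting decisions in any editor, which makes it convenient for low-budget training of the kind described above.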
Quality in teaching interlingual subtitling:
the Catalan and Spanish contexts
Subtitling is a way of translating what is being said in an audiovisual
text, with two characteristic features. First, there is a change of medium,
from the oral to the written form. Second, the oral message of the source
audiovisual text is also present in the translated product. The specificity
of subtitling can be dealt with in an online campus by taking into account
the following requisites.
Spain is a dubbing country and the high quality of some Spanish and
Catalan dubbed programmes has been stressed. In his book, Ávila (1997:
37) praises the 1940s to the early 1970s as the Golden Age of Spanish
dubbing and deplores the declining levels of quality nowadays. The
Catalan Channel TV3 (1997: 209–10), in its Criteris lingüístics sobre traducció i doblatge, pays tribute to the translators of the best dubbed films
in the first 12 years of broadcasting of the channel. Unfortunately,
nothing similar can be said in the case of subtitling. Until recently, the poor quality of Spanish and Catalan subtitling led us to pose the
question: do Spanish audiences prefer dubbing because of the low
quality of subtitling or is the low quality of subtitling due to its marginal status? (Díaz Cintas, 2003: 69).
The popularisation of DVD has increased the availability and the
amount of Spanish and Catalan subtitling. This increase does not
necessarily mean a concentration of work in the Spanish industry,
because due to the phenomenon of globalisation not all Spanish and
Catalan subtitling is done in Spain. However, we have to prepare our
students for the global market in order to jump from online campuses
to the virtual professional environment. We have to train our students to be competitive, which means that they have to be able to
offer their services as subtitlers to international subtitling companies
located around the world. The use of digital subtitling software in
online campuses is the best way to prepare for it. At the same time,
the dialogue between universities and subtitling companies must be
constant and fluid.3
The audience
In subtitling, the oral and written messages are received simultaneously,
allowing for comparison between Source Text (ST) and Target Text (TT).
In his seminal work, Ivarsson (1992: 49) states that: ‘It is, moreover,
unnecessarily irritating to those who have some knowledge of the language when a person says something and it is not translated. The natural
reaction is: Why ever did they skip that?’. Such remarks made by the
audience make this kind of translation particularly ‘vulnerable’, as Díaz
Cintas has emphasised in several of his works (2001, 2003). Viewers
who are familiar with the Source Language of an audiovisual text tend
to notice some of the discrepancies between the target subtitles and the
source dialogue and often criticise subtitlers for their inaccuracy. When
subtitling, professionals are extremely conscious of what their audiences expect to be translated. They frequently choose the translation
solutions that are most similar to the ST.
As trainers, we have two ways open to us here. The first one is to consider
that, following audience expectations, similarity to the ST is a quality
criterion in subtitling. We can encourage students to follow the order of
the verbal elements of the ST and to go for the solution which is most similar to the original. The second one is to encourage them to choose the most fluent solution, in terms of accessibility, as we will see below, fostering an experience for the receivers of subtitles that will hopefully change this prejudice. Probably, the best solution is to find a balance between the two positions.
232 Francesca Bartrina
Unquestionably, subtitling directly affects the audience’s perception
of the audiovisual programme. Gambier (2001: 102–3) has stressed the
fact that audience perceptions of subtitling depend on sociological and
audiovisual variables such as age, reading habits, knowledge of
the Source Language (SL) and Target Language (TL), genre of the
audiovisual programme, channel, broadcasting time and rhythm of the
audiovisual product, among others.
We could argue that the impact of DVD has challenged the way in which viewers perceive quality in subtitling. Audience empowerment has increased with the possibility of comparing the dubbed and subtitled translations of the same audiovisual programme. This situation
has already had consequences in the profession with clients demanding
that dubbed and subtitled versions of the same film be similar. It seems
that for DVD subtitling similarity to the dubbed version and literal solutions are demanded.
Another issue to be considered is the channels at the audience’s disposal to express their views about subtitling. In our field, there is a
strong demand for audience studies in order to know what the viewers
really think about AVT. However, we should remember that most audience studies carried out in Spain are done to attract advertisers.
Mayoral (2001: 44–5) discusses the constraints and limitations that
scholars in Translation Studies have to face when embarking on research
of this nature.
The question arises as to what kind of subtitling quality awareness
can be developed in continuous online assessment, and we need to find
a balance between homogeneity and diversity. Students need to be
exposed to a range of different standards that are operative today in the
cinema, TV, video and DVD subtitling markets, but they also need
training in specific standards in order to be prepared for academic grading. We therefore have to study various subtitling practices implemented
in the market, without forgetting to use a unifying code, a unique set of
subtitling standards as proposed by Ivarsson and Carroll (1998) or
Karamitroglou (1998).
Gambier (2001, 2003) has written extensively on the expectation of
quality in screen translation. For him, the key word for quality in AVT
is ‘accessibility’, a concept including many features that can easily be
adapted to the context of subtitling: acceptability (grammar, style, terminology), legibility (position, subtitle rates), readability (reading speed
rates, shot changes), synchronicity (what is read is what is shown),
relevance (what information is to be deleted or added) and translation
strategies when dealing with cultural items. It can be concluded that
these features are the criteria for quality in subtitling, and they can be
considered the basis for fluency in this kind of translation.
In my opinion, the concept of accessibility is extremely useful when
teaching subtitling. It is an eclectic term in the sense that it tries to
cover what has been said in general definitions of good subtitling
practice, such as that of James (1998: 245): ‘to produce subtitles which
are accurate, credible, easy to assimilate and which flow smoothly’.
And it also includes specific demands on subtitling such as: ‘written subtitles should be made to “sound” like their spoken equivalents’
(Brondeel, 1994: 29). Students should be told at the beginning of the
course that this is the quality standard upon which their work will be
assessed.
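These quantitative criteria lend themselves to a simple automated check. The sketch below is purely illustrative and is not part of any program mentioned in this chapter; the numeric limits (35 characters per line, a six-second ceiling, a reading speed of 12 characters per second) are assumptions drawn from commonly cited standards such as Karamitroglou (1998).

```python
# Illustrative sketch only: checking two of Gambier's accessibility
# features -- legibility (line length, number of lines) and readability
# (reading speed). The numeric limits below are assumed values, not
# figures fixed by this chapter.

MAX_CHARS_PER_LINE = 35   # assumed, after Karamitroglou (1998)
MAX_LINES = 2
MAX_DURATION = 6.0        # seconds; the widely cited six-second rule
READING_SPEED = 12.0      # assumed characters per second

def check_subtitle(lines, duration):
    """Return a list of accessibility problems found in one subtitle."""
    problems = []
    if len(lines) > MAX_LINES:
        problems.append("too many lines")
    for line in lines:
        if len(line) > MAX_CHARS_PER_LINE:
            problems.append(f"line too long: {line!r}")
    if duration > MAX_DURATION:
        problems.append("stays on screen too long")
    # Readability: can the viewer read all characters in the time given?
    if sum(len(line) for line in lines) > duration * READING_SPEED:
        problems.append("reading speed exceeded")
    return problems

# A two-line subtitle shown for four seconds passes every check.
print(check_subtitle(["La pianti di comportarti", "come una suocera?"], 4.0))
```

Students could run such a check over their own files before submission; the point is not the particular thresholds but making the standards explicit and testable.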
Teaching subtitling online
When teaching online, a great deal of activity on the virtual platform
becomes crucial. Trainers have to use all the online resources at their
disposal to keep students motivated. When teaching subtitling, a good
method can be the design of a set of activities aimed at familiarising students with the whole process and the spatial, temporal and textual
parameters. These activities must be an integral part of a curriculum that
is learning-centred and that involves high-level cognitive processing.4
These activities can be found within the virtual platform, together
with the audiovisual material. Sometimes it may be more convenient to
send a CD to the students with the audiovisual material, because of its
large volume. Some activities should be aimed at encouraging students
to find their own audiovisual material and to locate scripts using different Internet resources.
In distance learning, the relationship between student and teacher
has to be reinforced with specific tools like forums, chats and frequent
interaction via e-mail.5 A timetable for students’ work should be made
explicit at the beginning of the course. If students carry out activities
frequently (every one or two weeks) and trainers answer promptly with
their remarks, the motivation of the whole virtual group is bound to
increase. It is also very important to work with professional deadlines:
subtitlers need to work fast and accurately.
The literature on the subject stresses the importance of training translators to be able to work on the whole process: timing or spotting, translating, adapting, and editing. If the same professional carries out the
whole process, the quality of the subtitling is more likely to be guaranteed. Using a digital subtitling program, students can work in a simulated
professional context. This experience will boost their employability
because they will be the sort of professional the market needs.
Viewing
Students working with a dialogue list need to check the written script
against the text that is actually spoken in the programme. However,
Gottlieb (1996: 284) has argued for the need to teach subtitling without
scripts, mainly because:
Without scripts as translatory crutches students remain focused on
the fact that subtitling is not just a matter of translating some lines
from a script and shaping them into neat blocks. Subtitling is a craft
in which one recreates the foreign dialogue in one’s own (written)
language, as an integral part of the original film whose visual content co-interprets the meaning of the lines as they are spoken.
Subtitling without scripts can help students appreciate the specificity of
the audiovisual mode and reinforce the fact that the spoken text and
the image are indissoluble. They are encouraged first to understand and
secondly to translate and do the timing simultaneously. Subtitling
without a script can make timing easier. However, in the professional
world, when translators are given a good postproduction script with an
adequate glossary of difficult terms the quality of the audiovisual translation tends to improve considerably. Also, the use of scripts is more
convenient and less time-consuming for teaching purposes.
When viewing the audiovisual text, students may be guided in different activities so that they are aware of all the non-spoken information
that should be translated: signs, notices, etc. They should decide whether
songs need to be translated or not, and they should also have to make
decisions on questions of politeness and the way the characters address
each other, formally or informally (for example, in Spanish Tú/Usted/
Ustedes and Catalan Tu/Vostè/Vostès).
Timing, spotting or cueing
Digital subtitling programs allow students to work with an already-made timing and to practise with strict constraints. In addition, they
also allow students to create the timing themselves and to learn the
technique. In the market, subtitlers have to learn to do both. If they
work for the DVD industry, they often receive a document with an
already-made timing (known in the profession as ‘master list’ or ‘genesis
file’), but students also have to be prepared to do the spotting themselves
and a digital subtitling program allows them to practise this skill.
Activities should also be designed to allow students to change, modify
and improve an already-made timing.
The aim of these activities is to stimulate students to practise spotting, taking into consideration the start and finish of characters’ utterances, the semantic units of the dialogue, the rhythm of speech, the
changes in camera shots and sound bridges.
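The spotting constraints just listed can be sketched algorithmically. The following is a hypothetical illustration of the logic, not the behaviour of any particular subtitling program; the minimum duration, six-second ceiling and inter-subtitle gap are assumed values.

```python
# Hypothetical spotting sketch: derive in/out cues from utterance
# start/end times, enforcing a minimum on-screen duration, a six-second
# ceiling and a small gap between consecutive subtitles. All numeric
# values are illustrative assumptions.

MIN_DURATION = 1.0   # seconds; assumed floor so a subtitle is readable
MAX_DURATION = 6.0   # seconds; widely cited ceiling for a two-liner
MIN_GAP = 0.25       # seconds; assumed pause between subtitles

def spot(utterances):
    """utterances: list of (start, end) times in seconds, in speech order.
    Returns a list of adjusted (in_cue, out_cue) pairs."""
    cues = []
    for start, end in utterances:
        # Clamp the out-cue between the minimum and maximum durations.
        out = min(max(end, start + MIN_DURATION), start + MAX_DURATION)
        if cues and start < cues[-1][1] + MIN_GAP:
            # Push the in-cue back so the previous subtitle can clear.
            start = cues[-1][1] + MIN_GAP
            out = max(out, start + MIN_DURATION)
        cues.append((start, out))
    return cues

# A very short cue is stretched; the next one waits for the gap.
print(spot([(0.0, 0.4), (1.2, 7.9)]))
```

Exercises on an already-made timing could ask students to explain where and why the program's cues depart from the raw utterance times.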
Translating
Timing constraints determine the priorities when subtitling. Students
have to translate what is being said on the screen, taking into account
the information given by the iconic dimension of the image, and taking
advantage of the tools given by the virtual context: online dictionaries,
encyclopaedias and databases, audiovisual translators’ chats and forums, associations’ websites, electronic journals and a multitude of resources such as terminology management systems. As
mentioned by Koby and Baer (2003: 216), when teaching the use of such
tools, our main pedagogical goal must be the acquisition of conceptual
knowledge:
In a computerized translation classroom or lab, the overarching goal
is to enable students to acquire both declarative knowledge (i.e. specific technical skills in an application) and procedural knowledge
(i.e., the ability to recognize and contextualize technical issues
within a conceptual framework) to enable them to work effectively
in an increasingly technologized translation industry, and to deal
with change proactively through their understanding of the nature
and underlying principles of the technology.
Editing
This is an important stage of quality control. Digital subtitling programs make it easy for students to check their work – conventions of
written language and technical corrections, if needed – improving their
impact on the final product since they are able to control the whole
subtitling process. Students should be encouraged to edit both their
own work and the work of other colleagues. Mayoral (1999: 3) reminds
us that: ‘Students must be trained for teamwork, sharing translation
tasks not only with other translators but also with professionals in other
fields (actors, producers, multimedia technicians, editors, etc.)’.
Teamwork is a highly appreciated skill in the professional world. The
teacher may work here as a facilitator, distributing students’ versions
and contributing to group debates.
Priorities in subtitling
In the context of an online campus, we should prepare and develop
activities aimed at practising the following skills.
Synthesis, redundancy and synchrony
Synthetic reductions are inevitable in subtitling. They can be partial
(condensation or concision) or total (elimination or deletion). Synthesis
is related to redundancy, a very important concept in subtitling, because
the information not given by the subtitles may be supplied by other
elements present in the audiovisual text: the image and/or the sound.
Gottlieb (1994: 273) distinguishes between:
(1) Intersemiotic redundancy, which enables the viewer to supplement
the semiotic content of the subtitles with information from other
audiovisual channels – notably the image, and prosodic features in
the dialogue.
(2) Intrasemiotic redundancy, in the dialog. Especially with spontaneous
speech, not only the informative content, but also the verbal style
and characterization of the speaker are better served with some
reductions in the subtitles.
Gottlieb’s proposed distinction is very important as it helps students to
be aware of how synthesis works in subtitling and some activities should
be prepared in order to practise this skill. It is important to insist upon
all decisions being taken in relation to synchrony. Ivarsson (1992:
46–51) establishes two main types of synchrony: between image and
subtitle, and between sound and subtitle content. In his own words
(ibid.: 49): ‘When items are enumerated they should be written in the
same order as in the original so as to confuse the viewers as little as possible. (Strangely enough, viewers are much more irritated by a change in
the order than by the omission of one or two items)’.
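One rough way to make the degree of synthesis tangible for students is to measure the reduction rate between a spoken cue and its subtitle. The character-based metric below is an illustrative assumption, not one proposed by Gottlieb or elsewhere in this chapter.

```python
# Illustrative sketch: quantify synthesis as the share of the original
# cue's characters dropped in the subtitle. The metric is an assumption
# for classroom discussion, not a standard from the literature.

def reduction_rate(dialogue, subtitle):
    """Fraction of the original cue's characters absent from the subtitle."""
    return 1.0 - len(subtitle) / len(dialogue)

# A two-line English cue condensed into a one-line Italian subtitle.
cue = "What are you going to do? Make him a scab for the rest of his life?"
sub = "Sta cercando di farne un crumiro?"
print(f"{reduction_rate(cue, sub):.0%}")
```

Comparing such rates across intersemiotically and intrasemiotically redundant cues can help students see where reduction costs little and where it costs meaning.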
Spatial, temporal and linguistic parameters
It is important to ask students to follow different typographical guidelines to emulate the reality of the professional world. Differences are
visible in programmes commercialised by the very same distributor,
company or TV channel. Spatial and temporal parameters given to
students may follow the international standards of subtitling (Ivarsson
and Carroll, 1998; Karamitroglou, 1998) and the specific guidelines
applied by professional subtitling companies. Students should be familiar with different parameters but we have to make clear to them which
parameters must be used in each activity.
From online campuses to the virtual professional environment
We may have different answers to the question ‘What is quality in subtitling?’ depending on the group of people we want to address: subtitling companies, producers, distributors, broadcasters or viewers. In
this context, how can we train translators to be good professionals?
What sort of skills do they need to have? In my opinion, the following
competences are required:
● Excellent command of the two relevant languages and cultures at work.
● Knowledge of the semiotics of the audiovisual text.6
● Ability to summarise.
● Documentation skills.
● Capacity for self-evaluation.
● Ability to work fast and accurately.7
● Skills to negotiate with other professionals.
● Capacity to co-ordinate a project.
It is unquestionable that the use of digital software can help to develop
the following skills:
● Quick adaptation to changing subtitling software.
● Adaptability to real world constraints and priorities.
● Control of all the stages in the subtitling process.
● Professional documentation skills for online resources.
Conclusion
As trainers, we have a duty to make explicit the quality criteria we follow when assessing and marking students’ work. By working
in a virtual environment with the help of digital subtitling programs,
we are empowering our students and giving them the tools they need
to improve the quality of the subtitled product. In the professional
world, clients have the power to impose their own quality criteria, even
though subtitling companies may have their own in-house guidelines.
Following Gambier (2003), we also advocate accessibility as the main
quality parameter of subtitling, a concept that may include the demands
of clients, subtitling companies, viewers, scholars and subtitlers.
Notes
1. The author wishes to express her thanks for the research grant awarded by
the Generalitat de Catalunya, 2003MQD 00062.
2. Mayoral (1999: 3) considers that: ‘We should also be training students in
‘teletranslation’ (translation at a distance), since the professional market has
been globalised to the extent that a translator can work for a client in any
part of the world. This requires competence in the use of the Internet and
ability to work in teams across distance’.
3. I agree with Díaz Cintas and Orero (2003: 373) when they state that:
‘Universities are always behind as far as technological development is concerned, and maintaining contact with the industry can help to keep lecturers
updated and informed. Conversely, subtitling companies can benefit from
links with university programmes because they can have a greater say in the
design of the academic curriculum and offer advice on skills they most need
when recruiting new staff’.
4. Some guided subtitling activities can be found in Bartrina and Espasa (2003:
30–8).
5. The importance of interaction in e-learning and the experience of running
an Open Distance Learning MA in Translation Studies has been analysed by
Millán-Varela (2001).
6. Future subtitlers should be familiar with Film Studies. At the University of
Vic, students of subtitling can follow courses in the Escola de Cinema i
Audiovisuals de Catalunya thanks to a specific agreement between the two
institutions. Our students also have the opportunity to subtitle into English
the audiovisual programmes made by final-year students of cinema for international festivals.
7. We should encourage students to try and improve the working conditions of
the professional market: educating the clients, belonging to Associations of
Translators. But, at the same time, students have to learn to work under
adverse conditions.
References
Ávila, A. (1997) El doblaje. Madrid: Cátedra.
Bartrina, F. and Espasa, E. (2003) ‘Traducción de textos audiovisuales’. In
M. González Davies (ed.) Secuencias. Tareas para el aprendizaje interactivo de la
traducción especializada (pp. 19–38). Barcelona: Octaedro.
Brondeel, H. (1994) ‘Teaching subtitling routines’. Meta 39(1): 26–33. www.erudit.org/revue/meta/1994/v39/n1/index.html
Díaz Cintas, J. (2001) La traducción audiovisual. El subtitulado. Salamanca:
Almar.
Díaz Cintas, J. (2003) Teoría y práctica de la subtitulación: inglés–español. Barcelona:
Ariel.
Díaz Cintas, J. and Orero, P. (2003) ‘Course profile: postgraduate courses in
Audiovisual Translation’. The Translator 9(2): 371–88.
Gambier, Y. (2001) ‘Les traducteurs face aux écrans: une élite d’experts’. In
F. Chaume and R. Agost (eds) La traducción en los medios audiovisuales (pp.
91–114). Castelló: Universitat Jaume I.
Gambier, Y. (2003) ‘Introduction. Screen transadaptation: perception and reception’. The Translator 9(2): 173–89.
Gottlieb, H. (1994) ‘Subtitling: people translating people’. In C. Dollerup and
A. Lindegaard (eds) Teaching Translation and Interpreting 2 (pp. 261–74).
Amsterdam and Philadelphia: John Benjamins.
Gottlieb, H. (1996) ‘Theory into practice: designing a symbiotic course in subtitling’. In C. Heiss and R.M. Bollettieri Bosinelli (eds) Traduzione multimediale
per il cinema, la televisione e la scena (pp. 281–95). Bologna: CLUEB.
Ivarsson, J. (1992) Subtitling for the Media: A Handbook of an Art. Stockholm:
TransEdit.
Ivarsson, J. and Carroll, M. (1998) ‘Code of good subtitling practice’. Language
Today, April: 1–3. Also found at: www.transedit.st/code.htm
James, H. (1998) ‘Screen translation training and European cooperation’. In
Y. Gambier (ed.) Translating for the Media (pp. 243–58). Turku: University of
Turku.
Karamitroglou, F. (1998) ‘A proposed set of subtitling standards in Europe’.
Translation Journal 2(2). www.accurapid.com/journal/04stndrd.htm
Koby, G.S. and Baer, B.J. (2003) ‘Task-based instruction and the new technology’.
In B.J. Baer and G.S. Koby (eds) Beyond the Ivory Tower. Rethinking Translation
Pedagogy. ATA Scholarly Monograph Series, Vol. XII (pp. 211–29). Amsterdam
and Philadelphia: John Benjamins.
Mayoral, R. (1999) ‘Notes on translator training’. Intercultural Studies Group
Innovation in Translator and Interpreter Training. www.fut.es/~apym/symp/
mayoral.html
Mayoral, R. (2001) ‘El espectador y la traducción audiovisual’. In F. Chaume and
R. Agost (eds) La traducción en los medios audiovisuales (pp. 33–46). Castelló:
Universitat Jaume I.
Millán-Varela, C. (2001) ‘Building bridges: an open distance learning MA
Translation Studies’. In S. Cunico (ed.) Training Translators and Interpreters in the
New Millenium (sic) (pp. 124–35). Portsmouth: University of Portsmouth.
TV3, Televisió de Catalunya (1997) Criteris lingüístics sobre traducció i doblatge.
Barcelona: Edicions 62.
18
Subtitling: Language Learners’ Needs vs. Audiovisual Market Needs
Annamaria Caimi
Introduction
In the context of globalisation, the question of audiovisual translation
(AVT), generally taken to mean dubbing and/or subtitling, is fundamental for linguistic education and responds to new multicultural
needs with reference to Europe and, more extensively, to the
international world. The amazingly powerful role played by TV, cinema,
video and DVD viewing in the twentieth century has brought about an
upsurge of academic interest in multimedia translation, which is moving into the twenty-first century with renewed vigour.
This chapter focuses on the important role that subtitled films and
TV programmes have in the field of language learning, with a view to
bridging the gap between producers’ budgetary constraints and linguists’ demand for quality subtitling, in order to serve the needs both
of the ever growing population of second language learners and of film
marketers.
As far as research on subtitling is concerned, the range of publications
is impressive. We are particularly indebted to Gottlieb, author of an
international annotated bibliography on interlingual subtitling for cinema, TV, video and DVD, covering seven decades, from the invention of
film subtitling in 1929 to 1999. The bibliography is published in the
journal RILA (Caimi, 2002: 215–397), and Gottlieb (2002: 222) writes in
his introduction that it is a ‘reader-friendly guide to the growing literature on interlingual subtitling’.
The linguistic perspective: cognition
and simplification in subtitling
Audiovisual translation researchers know that, linguistically, subtitles
must be concise and easy to understand and the text must be carefully
edited and split: the spoken word directly transcribed usually exceeds
the space and time available, since subtitles should not stay on the
screen for longer than six seconds. Yet, they should convey the relevant
information and linguistic nuances contained in the original dialogue
to satisfy second language audience needs. Viewing subtitled films is a
cross-linguistic communication experience, where the message is simultaneously conveyed by the two most common channels of communication: speech and writing. Consequently, viewers have to practise reading
and listening skills simultaneously, backed up by the visual, animated
input of the storyline of the film.
In an entertaining audiovisual situation, the measurement of listening and reading comprehension abilities may be considered from
different perspectives, as it is the result of integrated multifunctional
activities. The linguistic perspective I have chosen to highlight in this
chapter is restricted to the strategy of simplification, almost regularly
applied in subtitling, due to space and time restrictions.
Subtitlers tend to encode the important information at the expense of details, an omission which may deprive the message of relevant, though not fundamental, clues. There are good reasons for resorting to simplification: it facilitates the transfer from one language to another and from the spoken to the written medium. It also speeds up the viewers’ reading activity. When
resorting to simplification and reduction of the Source Text (ST), translators in general must be aware of the psycholinguistic and sociolinguistic
issues which are likely to influence the target viewers. This awareness
should guide the linguistic choices which must be in keeping with the
socio-cultural background of the target audience (Pavesi, 2002: 128;
Laviosa, 1998; Baker, 1993 and 1996; Blum-Kulka and Levenston, 1983).
Simplification influences all translation processes and is of paramount importance in subtitling, where the transfer from Source
Language (SL) to Target Language (TL) is subsidiary to the dialogue of
the soundtrack. Nowadays, subtitling has achieved general recognition
as a type of AVT. Decoding and recoding subtitles involves cognitive
processes that interact and interfere in the multifunctional action of
translators – who transfer the film dialogue into subtitles – as well as in
the combined effort of the film viewers – who have to read, understand
and interpret them. Subtitling, which is almost always seen as a language
mediation service, is increasingly being perceived as an attractive
educational option (Díaz Cintas and Fernández Cruz, 2008; Pavesi and
Perego, 2008) in the field of language learning and teaching methodology.
In what follows, I analyse some examples of written dialogue and their
corresponding subtitles from a cognitive perspective. My aim is to add
some conceptual notions to the verbal changes brought about by
processes of simplification, highlighted by cognitive insights.
Setting the scene
Let us first set the scene by sketching the plot of the film Billy Elliot, in
order to frame the cues of the dialogue in its geographical, historical and
socio-cultural context. The basic storyline is about an 11-year-old boy in
a miner’s family who discovers that he prefers to join the girls for ballet
classes rather than attend boxing classes. When his father finds out
that the money he regularly saves to send Billy to boxing lessons has been
spent by his son to learn ballet, he is furious. Billy’s father and Tony,
Billy’s older brother, do not think ballet is a proper activity for a boy, and
are prepared to back up their opinion with violence. The small family
(father, two sons and an old grandmother) lives in Everington, a depressed
coal town with ugly brick houses and giant slag heaps, which conveys the
poverty and monotony of the bleak northern British setting. Billy’s father
and older brother, both miners, are on the picket lines during the huge
miners’ strike, which in the early 1980s upset the coal-mining region of
Northern England. They speak Geordie, a dialect that is typical of people
from Tyneside. The expressions they use are often colourful, and all the
examples below are used by male characters. In Example 1, Billy is saying
goodbye to his gay friend Michael. Billy’s father is watching them with
disapproval and Tony, Billy’s brother, reacts by saying to him:
Example 1
Tony: Stop being an old fucking woman!
Italian: La pianti di comportarti come una suocera?
Back translation: (Can you) stop behaving like a mother-in-law?
Tony is a miner like his father and construes his message according to his
sociolinguistic parameters. He does not speak standard English. He speaks
the dialect of the coal-mining region of Northern England, where he lives
and, like most young adults, he uses language that is unrestrained and aggressive.
The subtitler chooses to transfer this cue by replacing the vulgar
expression ‘old fucking woman’ with the term suocera, which preserves
the negative connotation, because the cliché of the mother-in-law, seen
as an annoying person, is well known to Italian speakers. This means
that the perspective of the subtitle, which is the perspective of the
translator, foregrounds the negative connotation of the message, but
removes the vulgarity of the English expression. Two different languages are concerned and the devices that involve conceptualising one
domain of experience in terms of another depend on the knowledge of
the two different cultures. In this example, where Tony speaks a type of
dialectal English, the Italian translator could have resorted to a similar
regional or working-class dialect, but he prefers to avoid dialectal terms
and instead simplifies the expression by using core vocabulary. The language of miners or factory workers is usually rich in metaphorical expressions. The translator identifies the source domain suocera [mother-in-law]
because it is in keeping with the target domain conveying the Italian
concept of suocera as ‘a pain in the neck’.
If we highlight the basic cognitive notions which motivated Tony’s
message and assess to what extent the source conceptualisation may be
transferred into the TL conceptualisation of the same metaphorical pattern, we understand that asymmetry of interpretation in the two languages is motivated by the fact that translation from L2 to L1 (subtitlers
usually translate into their mother tongue) is accomplished directly on
the basis of transferring the cultural conceptualisation of the situation.
The crucial point is to find lexical links between words which convey
the same conceptualisation of the situation in the two languages.
In this subtitle, the translator decides that an expression in Italian
which is compositionally parallel to one in English cannot be found. He
delves into Italian metaphorical devices related to women’s most salient
defects and chooses the one which, according to him, is conceptually near
enough to the phrase pronounced by Tony, although compositionally
incomplete. The aim is to give all types of viewers the possibility of quick understanding, so that they do not lose the thread of the story.
Example 2
George: I haven’t seen hide nor hair of Billy for months.
Italian: Neanche l’ombra del tuo Billy da mesi.
Back translation: I haven’t even seen the shadow of Billy for months.
George, Billy’s boxing trainer, tells Billy’s father that the boy is no
longer attending his lessons. Perspective and foregrounding connect linguistic coding closely to visual perception in the English expression,
which is rendered into Italian with a corresponding idiomatic form. The
verb form ‘I haven’t seen’ has been eliminated because it does not affect
the meaning of the cue.
Example 3
George: He’s like a fanny in a fit.
Italian: Si agita come una femminuccia.
Back translation: He flaps about like a right wet.
The original dialogue and the translation both give prominence to the feminine qualities attributed to Billy in a derogatory way, but the vulgar sexual reference of the English is again absent from the Italian version. The aim is always to facilitate comprehension by using everyday language: the learner is interested in what is meant, not in Geordie dialect expressions, and this linguistic choice guarantees understanding. In the following example, George offers to help Billy's father persuade Billy to give up ballet:
Example 4
George: Send him round to my house. I’ll knock sense into him.
Italian: Digli di passare da me. / Lo rimetto subito in riga.
Back translation: Send him round to my place. / I'll sort him out straight away.
We can observe parallel interpretations in the two versions, but the English foregrounds the language of boxing appropriate to George's idiolect, whereas the Italian subtitle, though sharing the same conceptual perspective, is rendered in standard, idiomatic Italian.
In Example 5, Billy’s father complains about the sacrifice he is making to save money to send Billy to boxing lessons. Interpretation, perspective and foregrounding are crucial to an understanding of colloquial
expressions both in English and in Italian. The difference between the
two versions rests with the vulgar expression used by the miner, which
is chastened by the subtitler who has decided to use mi faccio il mazzo
Subtitling: Language Learners and the AV Market 245
Example 5
Billy’s father: I’m busting my ass for those 50 pence.
Italian
Back translation
Io mi faccio il mazzo
per quei 50 pence.
I’m working my guts out
for those 50p.
[I’m working my guts out] instead of the more vulgar mi faccio un culo
così [I’m busting my ass]. The tendency to avoid transferring vulgar
expressions is typically aimed at conveying everyday colloquial language rather than ‘colourful’ idioms. I conclude this sketchy analysis of
male dialogues with a cue of Billy’s brother, addressing Mrs Wilkinson,
the ballet instructor:
Example 6
Tony: What are you going to do? Make him a scab for the rest of his life?
Italian: Sta cercando di farne un crumiro?
Back translation: Are you trying to make him a black-leg?
Although the verbose original is fitted into a one-liner, the core concept is preserved. Here the subtitler has implemented the strategy of simplification by deleting, in hierarchical order, the redundancies connected with the young miner's rough way of speaking. He reduces the cue by crossing out the reference to 'for the rest of his life', which Tony adds merely to reinforce the core concept of the message. The term 'scab' is an informal, disapproving way of referring to a strike-breaker who, according to Tony's social knowledge of the world, can be associated with a person who follows his artistic aptitude, like his brother: in both cases, a person to be ashamed of.
The dialogue exchanges we have examined above stress the harshness of Billy's environment, based on the miners' macho culture and his family's resistance to dance, as opposed to the redemption of masculinity through the desire to succeed in an artistic job, despite non-conformity. The examples show that simplification is a complex strategy entailing a hierarchy of choices, such as reduction, condensation and decimation, on the basis of psychological, socio-cultural and linguistic priorities. These help the translator to single out the chunks of text which must be kept and those which can be eliminated or condensed.1
The teaching/learning perspective: subtitling
in second language acquisition
According to Krashen (1981), language acquisition consists of the spontaneous process of rule internalisation that results from natural language
use, while language learning is the process of developing conscious L2
knowledge through formal study. I believe Krashen's conceptual distinction is appropriate because, in second language learning, subtitled films,
TV films and TV programmes occupy an intermediate position between
spontaneous exposure in natural settings and formal instruction.
Following Krashen’s double perspective, viewers/learners may be
divided into two types: informal learners (Ellis, 1985: 288), who are
more concerned with communicating, and formal learners, who are
more interested in acquiring grammatical rules to reach proficient communicative standards. The first type is widespread among speakers of
lesser-used languages who tend to live in subtitling countries such as
Greece, Cyprus and Portugal in southern Europe, and Wales, the
Netherlands, Denmark, Finland, Norway and Sweden in the north.
Most of them are unintentional learners because they live in regions
where the established practice is to broadcast original programmes subtitled. The usual explanation for such practice is that subtitling is
cheaper than dubbing, but the situation is somewhat more complex and
includes a number of socio-cultural considerations such as local conventions, audience profile and the like. Informal/unintentional viewers/learners are interested in developing lexical resources and pragmatic
skills, as their main concern is to enjoy films, television programmes or
live events. If by convention these programmes are screened or broadcast in the original, with the support of subtitles in the TL, viewers’
exposure to the second language works to their advantage, because they
can profit from all the beneficial influences arising from the audiovisual
environment. In fact, the effectiveness of subtitles is crucially dependent upon the hidden semiotic connections between text and image,
which affect the meaning of the visual-linguistic message and the way
in which the spoken/written message is ultimately received. Due to the
subsidiary presence of first-language written cues, this type of audiovisual setting may be considered a semi-natural way of second language
acquisition, whose benefit can be evaluated according to the type or
category of viewers.
Both verbal and nonverbal cognition are interwoven throughout
multimedia subtitled products, emphasising the powerful role of simultaneous interdependence of linguistic and non-linguistic knowledge
and mental imagery. In fact, building referential links between accurate
mental representations of the word forms and mental images of relevant
pictures is useful in understanding and remembering words (Sadoski
and Paivio, 2001: 166). Fixing image–form connections is an internal
process, which accounts for how learners handle input data and how
they use L2 resources in the production of L2 messages. The input is
shaped to make it learnable and each individual learner works on
the input according to his/her L2 experience, matched to his/her
knowledge of the world, to turn input into intake (Ellis, 1985: 166). The
reception of subtitled audiovisual programmes is an invaluable device
for internalising and automating L2 knowledge. If a viewer engages in this learning situation every day, the reception process becomes a strategic device that motivates the spontaneous use of existing communicative resources.
If communication strategies through viewing subtitled programmes
are not practised every day, irregular exposure to L2, although facilitated
by AVT, is not enough to reach production ability (speaking/writing). It
triggers passive/receptive skills, whose effectiveness is directly proportional to the mastery of L2 grammatical data suited to the viewer’s stage
of development.
People learn a second language by being exposed to it, but they also have to be taught, or have their mistakes corrected, because linguistic rules that are not shared by their first language cannot develop unconsciously. Second language learning is an active mental process, not simply the forming of linguistic habits. When
intentional or unintentional learners have to tackle a second language
in a multimedia situation, the effectiveness of their response depends
on the way they are able to exploit their cognitive and affective
variables. In other words, it depends on how they combine the mental
processes they make use of (thinking, remembering, perceiving, inferring, recognising, classifying, generalising, monitoring, analysing,
evaluating and memorising) with their personality, attitudes, motivation and emotions.
Learners’ proficiency in a foreign language, as the result of what has
been taught and learnt during a period of instruction, that is language
learners’ achievement, may be contrasted with language aptitude,
which is the natural ability to learn a language, not necessarily including intelligence, motivation or interest. Films have always been popular sources
of foreign language teaching materials, and there is no doubt that they
are valuable tools for general unintentional language acquisition – based
on the gradient of mere language aptitude – as well as for formal
intentional learning goals. In fact, a good standard of subtitling encourages formal students to:
● Understand more about the theme of the film.
● Recognise degrees of information.
● Consider and recognise different registers.
● Assess the complexity of the text.
● Appreciate the storyline.
In the third millennium, language acquisition is pivotal to the ongoing process of globalisation, but, as a collective practice, it always has to come to terms with individual potential or limitations.
The marketing perspective:
cost-effectiveness vs. subtitling accuracy
The quality of AVT often leaves much to be desired, and this is a drawback for both untutored/naturalist acquisition and tutored/classroom acquisition. Audiovisual programmes in a foreign language are in need of language mediation – for example subtitling – which demands continual improvement in quality. In order to boost an international policy
of interlingual and intercultural promotion, the actors of the subtitling
process (professional screen translators, multilingual subtitling
companies, film producers and distributors) should reach a trade-off
between what is considered an extra cost for the audiovisual product,
that is subtitling, and the most cost-effective way of marketing subtitled
programmes. Marketers must comply with the producers’ and distributors’ need for a desired level of transactions with a target market, and
analyse the demand for subtitled films in different areas of the audiovisual landscape. The marketing task is to find ways to connect the benefits of the product with the viewers’ needs and interests, by influencing
the level of demand in a way that will help the company achieve its
objectives. This task is carried out by marketing research and planning.
Within marketing planning, marketers must make decisions on target
markets, market positioning, product development, pricing, channels of
distribution, physical distribution, communication and promotion
(Kotler, 1988).
As we are moving into a more complex society where communication
at intercultural and international level is growing rapidly, it is of paramount importance to safeguard the communicative potential of the languages used in multimedia products. Serious reflection on all the
linguistic constraints that audiovisual translators must overcome should
be shared by linguists and language teachers, as well as by film marketers. Linguists and language teachers are concerned with linguistic and
cultural differences that exist between SL and TL. These differences are
evidenced by a series of variables such as experience of translation practice, TL and target culture competence, general linguistic and extralinguistic knowledge, typological knowledge, age of the translator and
level of pedagogic practice.
Marketers should be aware of the importance of the tasks carried out
by film translators in order to increase users’ options for consumption,
and to value their work, not only by using emotional appeal in marketing and advertising the storyline of subtitled products, but also by
highlighting their cross-cultural and cross-linguistic value. The international community of viewer-learners is offered a dense network of
multilingual, multimedia optional products which are functional only
if the quality of the translation is good. The notion of quality cannot be separated from the notions of risk and uncertainty. Every economic transaction entails a division of work between seller and buyer (you produce a film, I watch it), but this division of work pertains to different types of activity: one is risk-taking, which is predictable and can be planned for, though only on a statistical basis; the other involves uncertainties that cannot be captured by statistics. The risk component of transactions is increasing significantly, but uncertainties are greater and less predictable in our society, requiring an increased capacity for learning and adaptation. The demand for higher quality in film translation can be seen as an attempt by viewer-learners to force producers to improve their products and to manage more consciously the risk and uncertainty of what they offer.
From a political perspective, we should resort to legal constraints to
attain a balance between cultural and linguistic values and economic
profit. For example, the documents of the European Commission, or of
other highly influential international institutions that promote interlingual and intercultural education, should mention the importance of
optimal quality in AVT and set guidelines to safeguard language correctness and appropriateness, in order to guarantee viewers' satisfaction
and keep language accuracy at a high level. This is a fundamental prerequisite because the general laws of the market, no matter what the marketed product may be, are profit-oriented rather than quality-oriented.
Since screen translation is an extra cost, which is often charged to local
distributors, respect for language correctness should be recommended, if not imposed, by law.
A second prerequisite is the development of highly advanced courses
on AVT with the aim of honing the students’ linguistic as well as technical skills (see Bartrina, this volume). In addition to learning all the
translation strategies appropriate to the transfer of dialogue into subtitles, without leaving aside the question of cultural adaptation, students
should learn to match the rhythm of the film, to determine the in and out times for each subtitle, and to maintain synchronisation between text and sound. They should also learn how to edit the subtitles and to correct and polish the language where necessary. In other words, at the end
of the course they must become professional subtitlers, able to work
with sophisticated subtitling equipment and as part of a team.
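The timing skills mentioned above can be illustrated with a small calculation: given a subtitle's in-time and text, a display duration can be derived from a reading speed and clamped to a minimum and maximum on-screen time. The following is a minimal sketch for illustration only; the function names and the parameter values (12 characters per second, a one-to-six-second display window) are hypothetical assumptions, not professional standards or a method described in this chapter:

```python
# Hypothetical sketch: deriving a subtitle's out-time from its in-time
# and text length, using a characters-per-second reading speed.
# The parameter values (12 cps, 1-6 second window) are illustrative
# assumptions only.

def format_timestamp(seconds: float) -> str:
    """Render seconds as an SRT-style timestamp, HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def cue_times(in_time: float, text: str,
              cps: float = 12.0,     # assumed reading speed (chars/second)
              min_dur: float = 1.0,  # assumed minimum display time
              max_dur: float = 6.0) -> tuple:
    """Return (in, out) timestamps so the subtitle stays on screen
    long enough to be read at `cps` characters per second."""
    duration = len(text) / cps
    duration = max(min_dur, min(max_dur, duration))  # clamp to the window
    return format_timestamp(in_time), format_timestamp(in_time + duration)

start, end = cue_times(83.4, "Neanche l'ombra del tuo Billy da mesi.")
print(start, "-->", end)
```

A longer cue thus earns a longer display time, while the clamp keeps very short cues readable and prevents long ones from lingering past the rhythm of the dialogue.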
A third prerequisite concerns training programmes for film producers
and distributors, which should include lessons on the function and the
benefits of subtitling for international sales and distribution. People
involved in the communication business should prepare a subtitled version for all communication formats, including cinema.
Every major advance in subtitling technology, which covers everything from offline PC-based subtitle preparation systems to multilingual, fully automated transmission solutions, should be embraced in the twenty-first century as an unavoidable framework for linguistic and cultural values. This will give due emphasis to the role of languages as the most important means of international exchange and mutual comprehension – the most important means of communication.
Note
1. See Pavesi and Tomasi (2001) for an in-depth study on the translation universal
of simplification.
References
Baker, M. (1993) ‘Corpus linguistics and translation studies: Implications and
applications’. In M. Baker, G. Francis and E. Tognini-Bonelli (eds) Text and
Technology: In Honour of John Sinclair (pp. 233–50). Amsterdam and Philadelphia:
John Benjamins.
Baker, M. (1996) 'Corpus-based translation studies: The challenges that lie
ahead’. In H. Somers (ed.) Terminology, LSP and Translation. Studies in Language
Engineering in Honour of Juan C. Sager (pp. 175–86). Amsterdam and Philadelphia:
John Benjamins.
Baldry, A. (ed.) (2000) Multimodality and Multimediality in the Distance Learning
Age. Campobasso: Palladino Editore.
Blum-Kulka, S. and Levenston, E. (1983) 'Universals of lexical simplification'. In
C. Faerch and G. Kasper (eds) Strategies in Interlanguage Communication
(pp. 119–39). London and New York: Longman.
Caimi, A. (ed.) (2002) Cinema: Paradiso delle Lingue. I Sottotitoli nell’Apprendimento
Linguistico. Special Issue of RILA (Rassegna Italiana di Linguistica Applicata)
34(1–2).
Díaz Cintas, J. and Fernández Cruz, M. (2008) ‘Using subtitled video materials
for foreign language instruction’. In J. Díaz Cintas (ed.) The Didactics of
Audiovisual Translation (pp. 201–14). Amsterdam and Philadelphia: John
Benjamins.
Ellis, R. (1985) Understanding Second Language Acquisition. Oxford: Oxford
University Press.
Gottlieb, H. (2002) ‘Titles on subtitling 1929–1999’. In A. Caimi (ed.) Cinema:
Paradiso delle Lingue. I Sottotitoli nell’Apprendimento Linguistico. Special Issue of
RILA (Rassegna Italiana di Linguistica Applicata) 34(1–2): 215–397.
Kotler, P. (1988) Marketing Management. Englewood Cliffs: Prentice Hall.
Krashen, S. (1981) Second Language Acquisition and Second Language Learning.
Oxford: Pergamon.
Laviosa, S. (1998) ‘Universals of translation’. In M. Baker (ed.) Encyclopedia of
Translation Studies (pp. 288–91). London: Routledge.
Pavesi, M. and Tomasi, A. (2001) ‘Per un’analisi degli universali della traduzione
in testi scientifici: L’esplicitazione e la semplificazione’. In C. Bettoni, D. Calò
and A. Zampolli (eds) Atti del secondo convegno dell’AITLA (pp. 129–49). Perugia:
Guerra.
Pavesi, M. (2002) ‘Sottotitoli: dalla semplificazione nella traduzione
all’apprendimento linguistico’. In A. Caimi (ed.) Cinema: Paradiso delle Lingue.
I Sottotitoli nell’Apprendimento Linguistico. Special Issue of RILA (Rassegna
Italiana di Linguistica Applicata) 34(1–2): 127–42.
Pavesi, M. and Perego, E. (2008) ‘Tailor-made interlingual subtitling as a means
to enhance second language acquisition'. In J. Díaz Cintas (ed.) The Didactics
of Audiovisual Translation (pp. 215–25). Amsterdam and Philadelphia: John
Benjamins.
Sadoski, M. and Paivio, A. (2001) Imagery and Text – A Dual Coding Theory of
Reading and Writing. Mahwah, N.J. and London: Lawrence Erlbaum.
Index
A Fish Called Wanda, 226
accent, 134, 137, 224
accessibility, 1, 5–6, 12–13, 86,
149, 163, 231–3, 238
UK, 6, 12, 16
USA, 6, 12, 16
see also audio description; blind;
deaf; subtitling for the deaf
and hard-of-hearing
adaptation(s), 10–12, 74, 76, 81,
95, 115–17, 119–20, 122–8, 153,
192, 237, 249–50
Spain / Germany, 12, 115ff
Adriana Lecouvreur, 68
Albert Herring, 10, 71, 74–7, 79, 81
American productions, 88, 98, 119
anime, 49
audience design, 151–3
audience reception, 3, 231–3
Italy, 11, 97ff
Norway, 87–9, 94, 208–9
Spain, 100, 232
see also dubbing; subtitling
audio describer, 13, 170, 172–80,
184
audio description (AD), 2, 6, 11, 180,
183–4
decision making, 174–6
impartiality, 173–4
targets, 16
theatre, 13, 170ff
see also blind; partially sighted
audio subtitling, 7
audiovisual market, 12, 15, 30, 112,
151, 200, 240ff
Billy Elliot, 242
blind, 2, 5–7, 170, 172, 180
see also accessibility; audio
description; partially sighted
body language, 6, 92, 203, 205,
210–11, 217
Braveheart, 25
British Broadcasting Corporation
(BBC), 1, 12, 133, 135, 140–3,
145–7
Britten, Benjamin, 10, 62, 71, 74–6,
78–9
see also opera
Bohème, La, 66
broadcast interpreting, see
interpreting
captioning, 56–8, 64, 91, 155
closed, 5
see also subtitling for the deaf and
hard-of-hearing; teletext
Carmen, 63, 73, 124
cartoons, 2–4, 102, 106, 215
Casa de Bernarda Alba, La, 120–1
channel
acoustic, 37–8
visual, 23, 37–8, 154, 197, 198
cinema, 2, 4, 6, 10, 22, 48–9, 58, 88,
105, 115, 118–19, 123, 125, 159,
217, 232, 240, 250
audience(s), 94
see also film
Club Dumas, El, 122
coherence, 47, 159, 162, 167
cohesion, 22, 25, 47, 160, 162
cohesive devices, 87
commentary, 24, 131, 133, 173–4
condensation, 50, 86, 96, 236, 245
constraint(s), 29, 34, 36, 43–4, 47,
57, 79, 92, 159, 232, 237,
240, 249
financial, 5
linguistic, 21, 26, 249
technical, 21–2, 50, 151, 225
textual, 21, 26
see also dubbing; subtitling;
voice-over
convention(s), 32–3, 64, 67, 80,
110–11, 152, 164, 202, 207,
235, 246
cooperative principle, 198–200, 202,
207
see also maxims (Gricean);
pragmatics
cueing, 31, 37, 43, 45–6, 67, 234
see also spotting; timing
cultural reference(s), 10, 79, 100, 102,
188–9, 223
Curious Incident of the Dog in the Night,
The, 178
cut(s), 41, 43, 46, 65, 216
deaf/Deaf, 5, 153–67
see also accessibility; hard-of-hearing; subtitling for the deaf and hard-of-hearing
deletion, 25, 236
Descriptive Translation Studies,
9, 36
dialect(al), 11, 26, 88, 109, 156,
242–4
see also idiolect(al); sociolect(al)
dialogue list, 91, 95, 135–6, 234
digital, 1, 6, 13, 22, 59–60, 135, 166,
229–31, 233–5, 237
technology, 3, 6, 15, 165, 229
discourse, 25–6, 75, 134, 151,
216–26
distributor(s), 3, 159, 236–7, 248–50
documentary, 91, 95, 135–6, 142, 144,
146, 215
dubbese, 102, 108–12
Italy, 102ff
see also audience reception;
dubbing
dubbing, 4, 13, 97ff, 125, 130–4, 167,
229, 240
constraints, 92
countries, 2–3, 11, 32, 36–7, 85, 93,
95, 97–8, 119, 230
industry, 98–9, 111–12, 119
versus subtitling, 10–11, 85ff,
151–2, 246
see also revoicing; voice-over
Dumas, Alexandre, 67
DVD, 1–4, 13, 22–3, 30, 32–3, 38, 48,
56, 58, 65, 67, 69, 112, 118–19,
152, 159, 165–6, 217, 231–2, 240
market, 6, 9, 21, 34, 97, 232, 234
editing, 34, 59, 65, 90, 148, 163, 219,
233, 235
English National Opera, 59,
61, 73
English Patient, The, 9,
39–41, 44
equivalence, 10, 37, 77–8, 183–4, 210,
212
European
Commission, 112, 249
TV channels, 1, 95
Union, 6
European Blind Union, 6
European Captioning Institute (ECI),
9, 33–4
expletive(s), 65
see also swear word(s); taboo
language
explicitation, 161
exposure time, 86–8
fansub(s), 49
see also (amateur) subtitling
feedback effect, 25–6, 29
Fellowship of the Ring, The, 9,
50, 57
film(s), 2–4, 6, 23, 26, 36–8, 85, 90,
95, 99, 106, 117, 131, 156, 161,
164, 170–1, 197, 210–11, 217, 219,
246–8
dialogue, 25, 133, 212, 219,
227, 241
family, 94, 96
festival(s), 1, 92, 118–19
genres, 47, 86, 123, 125, 215
industry, 1, 8, 49, 85, 92, 98, 118,
240, 248–50
Studies, 130, 133, 238
text(s), 14–15, 216, 218
transfer norms, 11–12, 115, 118,
124
translation, 15, 56, 200, 216, 249
see also adaptation(s)
frame(s), 46, 58, 214, 216, 221
globalisation, 4, 14, 98, 142, 231,
240, 248
Gregory Hines’ Show, The, 94
Grice, Paul: see maxims
hard-of-hearing (HoH), 13, 154–8,
164–7
see also accessibility; deaf/Deaf;
subtitling for the deaf and
hard-of-hearing
humour, 10, 74, 77–9, 81, 100, 123,
181, 226
joke(s), 67–8, 100
ICT, 229
idiolect(al), 26, 244
see also dialect(al); sociolect(al)
implicature(s), 198–201, 203, 205,
209, 212
see also pragmatics; Speech Act
Theory
insert(s), 23, 33, 52, 91, 109
Internet, 1–4, 9, 13, 15, 49–50,
56, 68, 106, 187, 191, 203, 233,
238
interpreting, 4, 11–12, 50, 132, 140–1,
145–7
broadcast interpreting, 12, 140–8
Japan / UK, 12, 140–7
simultaneous, 141–3, 145–6, 170
war in Iraq, 144–6
intersemiotic, 116, 162, 164, 236
see also semiotic; polysemiotic
intonation, 25, 75, 86–7, 93, 183
irony, 33, 161, 168, 201
Japan Broadcasting Corporation
(NHK), 12, 140, 142–7
kinesic, 25, 214, 221
language transfer norms, 115ff
see also film transfer norms;
norms
language learning, 7, 15, 97, 240, 242,
246–7
learning, 15, 233, 238, 248–50
legibility, 22, 60, 232
lip synchronisation, see
synchronisation
localisation, 13–14, 186–92, 224–5
Human Computer Interaction
(HCI), 189–90, 192
Long Day’s Journey into Night, 226
Man Who Mistook his Wife for a Hat, The, 171
Manhattan Murder Mystery, 40, 44
maxim(s) (Gricean), 198–203, 207–8,
212
see also implicatures; pragmatics;
Speech Act Theory
Meet the Parents, 205–6
mother tongue, 31, 100, 134, 155, 157,
166, 186, 243
multimedia, 15, 229–30, 235, 240,
246–9
multimodal, 14, 163, 214, 216–22, 225
multilingual, 9, 14, 191–2, 249–50
Murder One, 95
music, 4, 10, 14, 38–9, 47, 71–81, 105,
108, 153, 164–5, 167, 214, 217–18,
220–3
narration, 4, 6, 131, 135
narrative, 14, 25, 73, 90, 123, 125,
163, 173–5, 177, 182, 212
noise(s), 38, 47, 50, 155, 160
norm(s), 9, 12, 22, 36–7, 39, 41–4,
46–7, 100–1, 103, 109–10, 115,
118, 124–6, 160, 188, 224
Norwegian Broadcasting Corporation
(NRK), 87, 89
Notting Hill, 9, 39–43, 218
omission(s), 37, 40–1, 43–4, 46, 86,
160, 201, 209, 226, 236
online, 15, 118, 187, 190, 217, 229–33,
235–7
see also training
opera(s), 1, 10, 58–69, 71–81
French, 72, 74, 76–80
German, 61, 65, 72, 77
Italian, 60–2, 65–6, 68, 72
Kobbé, G., 73–4
libretto adaptation, 71–81
see also surtitling
paralinguistic, 5, 156, 161–2,
164, 167
partially sighted, 2, 6, 170, 172
see also accessibility; audio
description; blind
Perfect Storm, The, 39–40
phasal analysis, 219–23, 225–7
see also multimodality
pivot
language, 116
translation, 134
polysemiotic, 156
see also intersemiotic; semiotic
Polysystem Theory, 12, 115
pragmatics, 14, 197–8
see also cooperative principle;
maxims; Speech Act Theory
Psycho, 23
Pullman, Philip, 175
pun(s), 74, 78, 102
punctuation, 62, 159, 161
quality, 15, 53, 56, 85, 92, 211, 232,
234, 238, 248–9
audio description, 172–3, 178
Catalan, 230–1
dubbing, 11, 99–100, 106, 111
interpreting, 12, 142–3, 146–7
localisation, 189
maxim of, 199–200
subtitling, 9–10, 34, 39, 49–51,
230–3, 237, 240
training, 230–5, 237–8
see also (amateur) subtitling
readability, 21–3, 26, 86–7, 96, 210,
232
reading speed(s), 22, 24–5, 34, 58, 86,
88, 160, 197, 232
redundancy, 25, 156, 160–2, 236
reduction, 26, 30, 160, 236, 241, 245
register, 75, 77, 88–9, 159, 248
relevance, 156–7, 163, 199, 207, 232
revoicing, 4, 10–11, 95, 131
see also dubbing; voice-over
Royal National Institute for the Blind (RNIB), 170, 180
Royal Opera House, The (ROH), 59, 62, 64
Sacks, Oliver, 171–3, 184
second language, 52, 154, 160, 166,
240–1, 247
acquisition, 7, 246
segmentation, 24, 37
semiotic(s), 13–15, 99–100, 203, 212,
214, 216–22, 225, 236–7, 246
see also intersemiotic;
polysemiotic
shot change, 23, 42, 90, 232
sign(s), 47, 103, 109, 156, 161–2, 234
sign language, 16, 154–5
sitcom(s), 2–4, 88, 98, 102, 106, 110
slang, 31, 65, 210
soap opera(s), 2–3, 98, 102, 111, 162,
167, 214–16, 220
sociolect(al), 11, 88, 156
see also dialect(al); idiolect(al)
software, 58–60, 69, 183, 190, 237
song(s), 33, 44, 64, 71–2, 76, 81, 102,
234
sound effect(s), 28, 153, 157–8, 161,
164
soundtrack(s), 4–5, 9–12, 22–3, 25–6,
28, 30–1, 87, 93, 132, 152, 170,
217, 221, 241
Speech Act Theory, 14, 197–8, 202
see also maxims; pragmatics
spotting, 31, 37, 48, 225, 233–5
see also cueing; timing
subtitle(s) / subtitling
accuracy, 248–9
amateur, 9, 49–51, 54–7
audio subtitling, 7
constraints, 9–11, 21–2, 26–7, 29,
36, 86, 90, 234–5
countries, 11, 13, 85, 90,
97, 246
for the deaf and hard-of-hearing
(SDH), 2, 5–6, 13, 16, 28–9,
151–3, 163, 165–7
Greece / Spain, 9, 36ff
guidelines, 237–8, 249
interlingual, 6, 13, 21, 151–2, 156,
163, 165–7, 230, 240
intralingual, 6, 12–13, 152–3, 156,
158, 163, 165–7
live, 49, 58
master, 9, 234
multilingual, 9, 30, 248
Norway, 197ff
software, 3, 57, 198, 230–1, 237
see also training
supertitles, 58
surtitles / surtitling, 1, 9–10, 58–69,
71, 73–4, 81
see also opera
swear word(s), 89, 103, 110
see also expletive(s); taboo language
synchronisation / synchrony / sync,
21, 37–9, 43, 45–6, 90, 132, 160,
236, 250
lip, 4, 92, 94, 103, 110, 131
taboo language, 89, 103, 110–11
teletext, 5, 22, 58, 165, 167
see also captioning; subtitling for
the deaf and hard-of-hearing
television (TV), 1–7, 10–11, 13, 16, 22,
38, 47, 49, 58, 65, 69, 85–91,
93–5, 98, 101, 111, 118–19, 132–3,
136, 141, 158–9, 164–6, 215–18,
220, 222–3, 246
channels, 1, 95, 137
template(s), 9, 30–4
theatre, 1, 10, 13, 58, 61, 65, 69, 72–3, 81
timing, 22–3, 28, 30–3, 74, 90, 225,
233–5
see also cueing; spotting
training, 14, 157, 227, 229, 232–3,
238, 250
audio description, 172,
177–8
interpreting, 143
online, 229ff
subtitling, 7, 14, 31, 47,
214
Catalonia / Spain, 229–31
transadaptation, 152
Twelfth Night, 171
Un posto al sole, 111, 214
Vertigo, 24, 27
VHS, 3–4, 33, 48, 119, 159
voice-over, 4, 9, 11–12,
92–3, 95, 130–7, 144–5,
229
constraints, 137
war in Iraq, 135–7
see also dubbing; revoicing
Weird Science, 24, 28
Willy the Fresh Prince of Bel Air, 110