Article
Driving Style: How Should an Automated Vehicle Behave?
Luis Oliveira 1,*, Karl Proctor 2, Christopher G. Burns 1 and Stewart Birrell 1
1 WMG, University of Warwick, Coventry CV4 7AL, UK; [email protected] (C.G.B.); [email protected] (S.B.)
2 Jaguar Land Rover, Coventry CV4 7AL, UK; [email protected]
* Correspondence: [email protected]; Tel.: +44-24761-50760
Received: 3 May 2019; Accepted: 19 June 2019; Published: 25 June 2019
Abstract: This article reports on a study to investigate how the driving behaviour of autonomous
vehicles influences trust and acceptance. Two different designs were presented to two groups
of participants (n = 22/21), using actual autonomously driving vehicles. The first was a vehicle
programmed to drive similarly to a human, “peeking” when approaching road junctions as if it was
looking before proceeding. The second design had a vehicle programmed to convey the impression
that it was communicating with other vehicles and infrastructure and “knew” if the junction was
clear so could proceed without ever stopping or slowing down. Results showed non-significant
differences in trust between the two vehicle behaviours. However, there were significant increases in
trust scores overall for both designs as the trials progressed. Post-interaction interviews indicated
that there were pros and cons for both driving styles, and participants suggested which aspects of the
driving styles could be improved. This paper presents user information recommendations for the
design and programming of driving systems for autonomous vehicles, with the aim of improving
their users’ trust and acceptance.
Keywords: autonomous vehicles; driving behaviour; user study; qualitative methods; acceptance;
user-centred design
1. Introduction
Technological developments in driving systems are making it possible for automated vehicles
(AVs) to become a reality in the near future. We use here the definition from [1], where AVs refer to
vehicles equipped with any driving automation system capable of performing dynamic driving tasks
on a sustained basis, and which are often labelled as driverless, self-driving or autonomous vehicles.
Manufacturers, tech companies and research centres are investing heavily in AVs and associated
technologies, with early deployments and trials already happening across the globe, e.g., [2,3]. These
vehicles have the potential to revolutionise transportation through increased mobility and safety,
reduced congestion, emissions and travel costs [4–6]. The extent of the potential benefits brought about
by AVs will depend significantly on people’s adoption of these technologies, and thus on their trust in and acceptance of such vehicles.
A substantial body of literature focuses on trust in automation, user expectations and the
development of adequate trust in systems [7]. Particular interest is placed on trust in autonomous
vehicles, given the number of factors influencing trust and its dimensions such as overtrust, undertrust
and mistrust [8,9], and the risks associated with inadequate trust levels [10]. Previous research shows
that system transparency and ease of use help build trust in automation [11]. More recently, studies have focused on applying trust calibration to AVs to make sure drivers understand the limitations of the system [12] and do not overtrust the technology [9]. Having the correct level of trust is especially useful for SAE level 1–3 assisted driving [1], when the vehicle may need to hand control back to
drivers during parts of the journey [13–15]. For level 4–5 AVs [1], however, when vehicles can handle
all traffic situations, there will be no handover process [16]. Nevertheless, user trust will be essential to
guarantee acceptance and adoption.
Trust in technology tends to increase with repeated interactions, as users become more familiar
with the systems in question [11,17,18]. However, there are still reservations [19] and modest long-term
projections of adoption of AVs [20]. Initial user experiences influence levels of trust in and acceptance of technology [21], making it essential that the first interactions are positive. In addition, well-designed and aesthetically pleasing interfaces tend to be perceived as more trustworthy [22]. As with any new technology, it is necessary to obtain a deep understanding of the reasons why people trust it or not, to redesign and adapt AVs to improve their chances of acceptance and “domestication” [23].
AVs can provide a “first mile/last mile” transportation solution and be available on demand [24].
These vehicles have the potential to facilitate access to and from transport hubs and go through
semi-pedestrianised areas such as city centres [25]. The expected benefits of AVs such as less traffic,
fewer emissions and lower costs may require that users share vehicles instead of owning them [26] and
that passengers use ride-sharing schemes [27]. Studies have been experimenting with scheduling and dispatching services to optimise the efficiency of these pods [28]. These vehicles have been used in recent research projects investigating trust in automation, for example, to assess usefulness, ease of use and intention to use on-demand transportation [29,30].
Traffic efficiency and reduced congestion can be obtained through the implementation of
technological features such as communication from one vehicle to another (V2V) and between
vehicles and roadside infrastructure (V2I) [31]. AVs can implement collaborative “perception” or data
sharing about hazards or obstacles [32,33]. There is also the potential for AVs to safely drive more
closely to each other than human-driven cars. With collaborative perception, AVs can negotiate lanes
or junctions faster and more efficiently [34]. Platooning is also a possible feature: if vehicles drive in a
fleet, it can save costs and enable smoother traffic flow [35]. Another expected capability of AVs is
for them to ‘see around corners’ through advanced sensing technologies [36] so they can drive more
assertively even when a human driver would not be able to directly see the environment.
This increasing complexity of systems controlling AVs poses interesting challenges for information
sharing and processing. Occupants of AVs may have difficulty making sense of how its control systems
work [37] and therefore form incorrect mental models, defined as a representation of the world held
in the mind [38,39]. The way a vehicle behaves and the reasons behind its actions have the potential
to affect trust and acceptance. Although studies comparing human vs. machine driving exist, both in simulators [40] and in the real world [30], they have not compared the driving styles of two AVs controlled by different systems. The development of this research was motivated by the need to
understand how people feel when being driven by these complex systems.
2. Literature Review
2.1. AVs vs. Pedestrians
A number of studies have been investigating the communications between AVs and vulnerable
users (e.g., pedestrians and cyclists) to better understand preferred messages and the most effective
methods of delivery [41,42]. Böckle et al. [43] used a VR environment with vehicles driving past a
pedestrian crossing and evaluated the impact of a vehicle’s external lights on the user experience.
A similar study simulated AVs with ‘eyes’ on the headlights that give the impression that the AV can
see pedestrians and indicate intention to stop [44]. One extensive study of users of short-distance AVs
focused on how the vehicle should communicate its intentions to pedestrians and cyclists via external
human-machine interaction [45]. A parallel study evaluated user expectations about the behaviour and
reactions of AVs and what information from vehicles is needed [46]. Vulnerable road users prefer the vehicles to drive slowly and keep their distance [47], and want to have priority over AVs in shared public spaces [41].
Another recent example tested projections on the floor to improve communication from the vehicle
during ambiguous traffic situations [48].
External communication tools on the vehicle can minimise the likelihood of conflict when both
are sharing the same environments. However, Dey and Terken [49] observed hundreds of interactions
between pedestrians and regular vehicles and established that explicit communication is seldom used.
Pedestrians tend to rely more on the motion patterns and behaviours of vehicles to make decisions
during traffic negotiations. One lab study presented a vehicle with different rates of acceleration,
deceleration and stopping distances, and concluded that AVs should present obvious expressions to be
clear about their intent in traffic [50]. Mahadevan et al. [51] suggest that AV’s movement patterns are
key for safe and effective interaction with pedestrians and that this information could be reinforced by
other explicit communication cues.
2.2. Human Driver vs. Human Driver
The study of how people perceive the behaviour of other drivers and other vehicles is important
to guide the programming of automated driving systems. Studies have evaluated how human drivers
interact among themselves on the road and how they indicate intentions when negotiating complex
traffic situations. Drivers typically make sense of the evolution of each traffic scene by observing
and interpreting the behaviours of other vehicles and consider vehicles as whole entities, or ‘animate
human-vehicles’ [52]. Another example of previous research asked participants to drive to intersections
and assessed how they negotiated complex scenarios with other vehicles. Drivers preferred somebody
else to be proactive and felt “more confident if they do not have to be the first driver to cross the
intersection” [53].
2.3. AV vs. Human-Driven Vehicles
Studies have also been conducted in what can be considered a “transition period” consisting of
mixed cooperative traffic situations between AVs and human-driven vehicles. Drivers were asked
to evaluate AVs’ behaviours in diverse traffic situations such as lane changes, with this information
used to inform the design of driving systems with higher chances of acceptance [54,55]. In a recent
example, researchers watched several publicly available videos of AVs to evaluate the interactions on
the road, how cars communicate through their movements, and how other people interpret this [56].
They suggested that movements performed by AVs should be clear and easy to understand by
occupants of the vehicle and other vehicles, and not just part of the mechanical means of travelling
towards a destination.
2.4. AV vs. AV
The communication between two or more vehicles is a subject of growing interest given its
applications for automated driving. A seminal modelling of traffic lane changes suggests that a ‘forced’
behaviour can result in shorter travel time in comparison to a more cooperative negotiation of lane
change [57]. One study simulated cooperative vehicle systems at road intersections and evaluated
diverse scenarios, for example involving emergency vehicles [58]. They concluded that a digital
decision-making system could improve safety at junctions. Furthermore, V2V/V2I may imply that
no visible communication between vehicles is needed anymore, as all traffic negotiations could be
pre-arranged [59].
2.5. AV’s Driving Style
Early studies that set out to develop driving styles for AVs include examples attempting to define the behaviours that would feel natural in a driving simulator [60]. Automated driving styles have gathered attention in recent years [61], generally focusing on occupants’ comfort [62]. Occupants of AVs feel
these vehicles need to control the steering and speed precisely to generate a smooth ride, similarly
to how humans drive [63]. Rapid changes in acceleration or direction can compromise comfort and
Information 2019, 10, 219
4 of 20
cause motion sickness [62], which may impact driver performance, especially important in handout
situations [64]. However, a recent user study shows that preferred AV driving style may not correspond
to the way humans drive, mainly regarding decelerating: when the simulator vehicle decelerated most
in the first part of the manoeuvre, as human drivers do, users tended to feel uncomfortable [65].
2.6. Anthropomorphism
Human-robot interactions are generally preferred if the machines present human-like features or
behaviours [66]. Robots can display these behaviours through motion or gaze, with some arrangements
being perceived by humans as being more natural and competent than others [67]. One literature
review indicates that trust is most influenced by characteristics of the robot such as anthropomorphism,
transparency, politeness, and ease of use [11]. Previous research has examined how driving agents
could increase trust with more human-like appearance and behaviour, and be interpreted intuitively by
the driver [68,69]. Some studies have been trying to make AVs better at reproducing human-like driving
styles to increase safety when interacting with human-driven vehicles [70]. Anthropomorphism has
been shown to evoke feelings of social presence and allow AVs to be perceived as safer, more intelligent
and trustworthy [71]. These examples are in line with the more overarching issues of human-robot
interactions. One extensive literature review analysed studies of the behaviour of robots as they
interacted with humans [72], concluding that humans need to receive intuitive and effective signals
from robots, and that robots should act as intelligent and thoughtful entities during interactions.
3. Aims
Emerging driverless technologies can make transportation safer and more efficient, but there
are concerns from pedestrians, other drivers, and questions about how these vehicles will interact
with each other. The systems governing AVs need to be programmed to behave in specific ways to be
trusted and accepted. For example, AVs can adopt a driving style similar to humans, to rely on the fact
that people tend to trust agents that look or behave similarly to humans [68,69]. Conversely, they can
be more assertive, making use of V2V and V2I communication [31]. Human-like robots may be seen
as less efficient in negotiating junctions. Assertive robots may be perceived as unsafe or unnatural.
It is necessary to increase the safety and efficiency of traffic via the use of AVs, but at the same time
improve trust and guarantee acceptance. However, studies testing these styles using real automated
driving vehicles were not found in the literature. Therefore, we formulated these research questions:
RQ1: How would two different driving styles affect trust for the occupants of AVs?
RQ2: How should an AV drive, and how can the acceptance of this driving behaviour be improved?
This study was designed to answer these questions, through testing different vehicle behaviours
and evaluating user feedback using actual automated driving vehicles. The aim was to assess passengers’ levels of trust and acceptance of different driving styles to understand preferences. Surveys
and interviews were conducted to obtain impressions and opinions from participants after they were
driven by Level 4 SAE AVs [1] which used two types of driving behaviours. We hypothesise that (H1)
the manoeuvres from a human-like style would be preferred, and that (H2) familiar driving behaviour
characteristics should be added to the control systems governing AVs.
4. Methods
This experiment was performed in the Urban Development Lab in Coventry, UK, consisting
of a large warehouse designed to resemble a pedestrianised area in a town centre. It has 2-metre
tall partitions dividing the internal space into junctions and corners where small vehicles can drive
autonomously (Figure 1). Participants (N = 43) were invited to be passengers in SAE level 4 [1] AVs (i.e., the vehicle is capable of handling all driving functions under certain circumstances). There are no
pedals or steering wheel in the test vehicles and the occupant has no control beyond an emergency-stop
button. The vehicles were driving in highly automated mode within a defined area with no safety
driver inside the vehicle, but they were remotely supervised.
Figure 1. Vehicles used during this study, manufactured by RDM/Aurrigo, parked at opposite sides of the arena prior to the start of a user trial.
We used a mixed experimental design with repeated-measures for within- and between-groups comparisons. The intention was to obtain a sample size of 36, to obtain reasonable statistical power and a strong dataset for qualitative analysis [73]. As is customary with user research, we sent some extra invites to account for participant no-show [74]. The turnout was surprisingly good, and we scheduled one extra day for data collection, hence ending up with 43 participants. These were randomly assigned to two groups. The first 22 participants experienced the pod using “human-like” driving behaviour, while the other 21 participants rode the pods configured with the “machine-like” driving. Participants were not briefed about the behaviour of the pods to avoid possible bias.
The recruitment of participants was made via internal emails sent to employees of a large car manufacturer based in the UK, but targeting mainly personnel working in administrative roles; we intended to avoid those involved with engineering or vehicle design roles as their main jobs. Of the 43 participants, seven were females, and the ages ranged from 22 to 60 (M = 37). Two of the participants did not complete the final survey due to technical mishaps, therefore they were removed from the quantitative dataset. No incentives were given to participants.
The design of the scenarios was scripted to give the impression to participants that the vehicles were taking decisions and interacting in real time. Although the pod is a highly automated SAE level 4 vehicle [1], to give experimental control, it was decided that the routes used would be pre-defined. The pods followed a pre-determined path and displayed specific behaviours to give the impression that they were interacting in real time. The duration of the experiment was from 45 minutes up to one hour per participant, and they were fully debriefed at the end of the trial.
4.1. Vehicle Programming
We programmed two vehicles to ride in this environment simultaneously four times for approximately four minutes each time, with one participant in each vehicle. There were six crucial moments where both vehicles interacted with each other by “negotiating” manoeuvres at T-junctions (Figure 2). The layout of the partitions meant that it was not always possible for participants inside the vehicles to see if the other vehicle was also approaching a junction or corner. For example, when driving towards a junction from the internal road, the pod had to stop and let the vehicle on the outer perimeter road pass first (Figure 3). The behaviour at the junction could be of two types, described below.
Figure 2. Diagram of the arena showing the interactions at the three T-junctions, where the vehicle going straight had priority.
4.1.1. Human-Like Behaviour
For the human-like driving, both pods in the arena would display the same behaviour: they reduced speed and “crept” out of the junctions as if “looking if it was safe to proceed”. In this condition, the pods slowed and moved out onto junctions as if being cautious, or unsure if the other pod was present or a potential hazard, as a human driver might do. This “creeping” manoeuvre could also be interpreted as slowly “unmasking” the pod’s sensors around a physical obstacle. If the other vehicle was approaching, one pod would stop to give way before exiting.
4.1.2. Machine-Like Behaviour
For the machine-like driving condition, the behaviour of the vehicles was designed to convey the impression that the pods’ control systems already “knew” where the other vehicle was at all times and were thus communicating and negotiating the junction beforehand. For half of the interactions, the pod stopped at junctions and waited for the other vehicle (which was not yet visible) to pass. For the other interactions, the pod would unhesitatingly manoeuvre through the junction since it already “knew” in advance that the other pod could not be a hazard or obstacle.
Figure 3. Vehicles negotiating the T-junction during the machine-like experiment. The white vehicle (a) arrives at the junction, (b) stops and applies the brakes (as indicated by the red LEDs around the wheel wings) as it ‘knows’ the black vehicle is approaching. The black vehicle turns the corner (c) and passes in front of the white vehicle (d), which then disengages the brakes (e) and is clear to proceed (f).
4.2. Activities
Participants were in the vehicle alone, with no specific task to perform. They had a radio to communicate with the research team should they need it. Two volunteers took part in this study at the same time, one in each vehicle. After each of the four journeys, participants exited the vehicle to complete surveys indicating their trust in the vehicle. Both participants were escorted to a waiting room to fill in these surveys electronically on tablets while another two participants rode in the pods.
4.2.1. Trust
The main instrument used to evaluate trust was the Scale of Trust in Automated Systems [75]. The questionnaire contains 12 items assessing concepts such as security, dependability, reliability and familiarity. Participants ranked statements such as “The autonomous pod is reliable” or “I am suspicious of the autonomous pod’s intent, action or outputs” on a 7-point scale. Seven questions measure trust, and the remaining five assess distrust in the technology. Distrust responses are inverted and added to trust responses to result in the overall trust score, as instructed by [75]. Results from the surveys were statistically analysed using SPSS 24.
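As a concrete illustration of this scoring, the snippet below computes the overall score from one participant’s responses. It is a minimal sketch assuming 1–7 responses and the 7/5 trust/distrust item split described above; reverse-coding as 8 minus the response is our reading of “inverted”, consistent with the 12–84 range shown in Figure 4.

```python
# Minimal sketch of the overall trust score from the Jian et al. scale [75]:
# 7 trust items plus 5 reverse-coded distrust items, each answered 1-7.
def overall_trust_score(trust_items: list[int], distrust_items: list[int]) -> int:
    assert len(trust_items) == 7 and len(distrust_items) == 5
    assert all(1 <= r <= 7 for r in trust_items + distrust_items)
    reversed_distrust = [8 - r for r in distrust_items]  # invert the 7-point scale
    return sum(trust_items) + sum(reversed_distrust)

# Example: neutral answers (4) on every item give the midpoint score of 48.
print(overall_trust_score([4] * 7, [4] * 5))  # -> 48
```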
4.2.2. Acceptance
To obtain qualitative data and assess user acceptance, we asked a few questions to participants during a brief semi-structured interview where they could describe the experience. We were particularly interested in whether they noticed the behaviour of the pod approaching corners when the other pod passed in front, and how they negotiated the junctions. We also asked if participants could explain why the vehicle behaved in that way. The interviews were transcribed and imported into the QSR International NVivo software to be coded into nodes, which are the units of information based on participants’ statements [76]. Nodes were then grouped in categories, integrated and correlated to indicate relationships and develop conclusions [77].
5. Results
5.1. Quantitative Data
We conducted a 4×2 repeated-measures ANOVA on the responses from the Scale of Trust in Automated Systems [75], using the trust scores for each of the four trips as dependent scores, and the two driving styles as a between-groups factor. We ran the Kolmogorov-Smirnov test for normality on these trust variables, and there were no significant differences, thus the data is normally distributed. There were no group differences between the human-like or machine-like driving styles (F(1,39) = 1.711, p = 0.20), with low observed power and effect size (0.248 and 0.042 respectively). There was a main effect of trust scores across trips, F(3,117) = 25.403, p < 0.0001, partial-eta² = 0.394, where trust increased across the four journeys irrespective of driving style (Figure 4), with the standard deviations shown in Table 1. As there were no group differences, we used paired t-tests to identify the post-hoc differences in trust scores across the four trips. There was a non-significant difference between comparisons of the 1st and 2nd trust scores only (t(40) = −1.980, p = 0.055) (Table 2), with all other paired comparisons showing significant differences.
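For readers who want to reproduce this style of analysis outside SPSS, a sketch is shown below. It assumes a hypothetical long-format file trust_scores.csv with columns participant, style, trip and trust; the pingouin mixed-ANOVA call and the post-hoc pairings mirror the design reported here, not the authors’ exact SPSS procedure.

```python
# Sketch of the reported 4x2 mixed-design analysis, assuming a hypothetical
# long-format table with one row per participant per trip.
import pandas as pd
import pingouin as pg
from scipy import stats

df = pd.read_csv("trust_scores.csv")  # columns: participant, style, trip, trust

# Mixed ANOVA: trip (runs 1-4) within subjects, driving style between groups.
aov = pg.mixed_anova(data=df, dv="trust", within="trip", between="style",
                     subject="participant")
print(aov)

# Post-hoc paired t-tests between runs, collapsed across styles (cf. Table 2).
wide = df.pivot(index="participant", columns="trip", values="trust")
for a, b in [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]:
    t, p = stats.ttest_rel(wide[a], wide[b])
    print(f"run {a} vs run {b}: t = {t:.3f}, p = {p:.3f}")
```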
Figure 4. Mean scores of trust per condition through journeys. (Line chart: overall trust, Jian et al. (2000) scores, min 12, max 84, on the y-axis; journey number 1–4 on the x-axis; one series per condition, Human and Machine.)
Table 1. Mean trust scores as measured by [75] and related standard deviation for both human and machine driving styles.

Mean Trust Scores per Run     1         2         3         4
Human                         59.30     59.55     65.85     67.20
SD                            8.163     8.470     8.506     9.622
Machine                       63.19     66.33     68.29     69.38
SD                            11.197    10.910    11.645    10.929
Table 2. Paired differences in Jian et al. [75] scores between each journey in the vehicle.

Paired Differences                        Mean     Std. Dev.  Std. Error Mean  95% CI Lower  95% CI Upper  t        df  Sig. (2-tailed)
Pair 1  Trust score run 1 - run 2         -1.732   5.599      0.874            -3.499        0.036         -1.980   40  0.055
Pair 2  Trust score run 1 - run 3         -5.805   5.904      0.922            -7.669        -3.941        -6.295   40  0.000
Pair 3  Trust score run 1 - run 4         -7.024   7.206      1.125            -9.299        -4.750        -6.242   40  0.000
Pair 4  Trust score run 2 - run 3         -4.073   6.509      1.017            -6.128        -2.019        -4.007   40  0.000
Pair 5  Trust score run 2 - run 4         -5.293   7.128      1.113            -7.543        -3.043        -4.754   40  0.000
Pair 6  Trust score run 3 - run 4         -1.220   3.525      0.551            -2.332        -0.107        -2.215   40  0.033
Separate analyses of the two sub-factor constructs, trust and distrust, showed slightly different trends. Although there were no interaction effects, distrust appears to be more stable on the first two runs, tended to fall towards the third journey in the vehicle, and returned to a steady score by the final run (Figure 5). Post-hoc differences were non-significant only between journeys 1 and 2, and 3 and 4 (Table 3). Conversely, the trust subset seems to rise steadily for all runs, as can be seen in Figure 6. Differences here were significant between all journeys (Table 4).
Figure 5. Distrust scores as a separated subset from [75]. (Line chart: distrust sub-factor scores, min 5, max 35, vs. journey number 1–4; one series per condition, Human and Machine.)
Figure 6. Trust scores as a separated subset from [75]. (Line chart: trust sub-factor scores, min 7, max 49, vs. journey number 1–4; one series per condition, Human and Machine.)
Table 3. Post-hoc tests for the Distrust sub-factor.

Paired Differences, distrust sub-factor       Mean    Std. Dev.  Std. Error Mean  95% CI Lower  95% CI Upper  t       df  Sig. (2-tailed)
Pair 1  Distrust sub-factor 1 - sub-factor 2  0.171   3.201      0.500            -0.840        1.181         0.342   40  0.734
Pair 2  Distrust sub-factor 1 - sub-factor 3  1.195   3.723      0.581            0.020         2.370         2.055   40  0.046
Pair 3  Distrust sub-factor 1 - sub-factor 4  1.341   3.947      0.616            0.096         2.587         2.176   40  0.036
Pair 4  Distrust sub-factor 2 - sub-factor 3  1.024   2.318      0.362            0.293         1.756         2.829   40  0.007
Pair 5  Distrust sub-factor 2 - sub-factor 4  1.171   2.587      0.404            0.354         1.987         2.897   40  0.006
Pair 6  Distrust sub-factor 3 - sub-factor 4  0.146   1.459      0.228            -0.314        0.607         0.642   40  0.524
Table 4. Post-hoc tests for the Trust sub-factor.

Paired Differences, trust sub-factor        Mean     Std. Dev.  Std. Error Mean  95% CI Lower  95% CI Upper  t        df  Sig. (2-tailed)
Pair 1  Trust sub-factor 1 - sub-factor 2   -1.561   4.399      0.687            -2.950        -0.172        -2.272   40  0.029
Pair 2  Trust sub-factor 1 - sub-factor 3   -4.610   3.625      0.566            -5.754        -3.465        -8.142   40  0.000
Pair 3  Trust sub-factor 1 - sub-factor 4   -5.683   4.618      0.721            -7.140        -4.225        -7.880   40  0.000
Pair 4  Trust sub-factor 2 - sub-factor 3   -3.049   4.944      0.772            -4.609        -1.488        -3.948   40  0.000
Pair 5  Trust sub-factor 2 - sub-factor 4   -4.122   5.386      0.841            -5.822        -2.422        -4.900   40  0.000
Pair 6  Trust sub-factor 3 - sub-factor 4   -1.073   2.936      0.459            -2.000        -0.146        -2.341   40  0.024
5.2. Qualitative Data
The qualitative data analysis of interviews produced 85 nodes in the three main themes describing
the reassuring human driver, the assertive machine, and the incomplete mental model presented by
participants. Opinions were divided on which was the optimal way for the vehicle to behave out of
the two designs. Participants only experienced one of the driving styles, but we received, from both groups, arguments in favour of aspects of both the human-like behaviour and the machine-like
5.2.1. Reassuring Human
In the human-like driving style condition, the behaviour of the vehicles was designed to appear
that it was “looking”, or using sensors of some kind, before proceeding through the T-junctions.
The vehicles reduced speed every time, and stopped to give way if the other vehicle was approaching.
When asked to describe the vehicle’s behaviour, participant 11 [P11] declared: “I assume it’s got to get,
the cameras need to be out to see where it was going to, just like us, it can’t see around corners, it noses out a
little bit so it can actually see what it’s doing”. P15 complemented this: “it knows it has to give way at that
point and just check if anything is coming”.
P16 felt comfortable with the driving behaviour presented by the vehicle, saying that it was
“probably trying to inspire confidence in the passenger, I’m guessing, in terms of like the way it behaved, kind of
quite similar to a human, it’s only ever going to inspire confidence I think it’s because that’s what we’re used to”.
After being debriefed about the study and the possibility of the pod reducing speed and ‘looking’ at
junctions, P32 added concerns about vulnerable road users, such as “pedestrians or cyclists that could
have been there that don’t communicate with the pod. That may be a safer way of doing it rather than flying
around the corner”.
5.2.2. Assertive Machine
For the machine-like condition, the design intended to convey that the vehicles were
communicating between each other. The vehicles would manoeuvre through junctions without
reducing the speed, and stop only if another vehicle was approaching. This design was perceived
correctly by some participants, as P28 explains: “it stopped at a junction, because I assume it knew that
something was coming, as opposed to it reacting to seeing something coming”.
However, there was also the feeling that the traffic needed a more efficient approach, and that
the vehicle could have been more assertive. P40 said that “sometimes I didn’t expect it to stop, because
I thought the other pod was a bit further away but then it did, so I guess it’s cautious . . . if I was driving I’d
probably have gone”.
Interestingly, P19, who tested the human-like version, commented that a machine driving like a
human and trying to look around the corners seemed unnatural: “I think it was a bit unexpected because
my expectation with the pods is that that there would be some unnaturalism to it rather than a human driver”.
P21 complemented with their wish for the pod to be more assertive: “If I was in an autonomous pod with
sensors giving a 360-degree view at all times, I’d expect the vehicle to instantaneously know whether it was safe
or not, and not need to edge out”.
One common complaint was that the vehicles were performing sharp turns, due to the way we
purposefully designed the driving behaviour. This feeling was present in both conditions, but more
noticeable with the machine-like driving style condition. The relationship between speed and sharp
steering caused a few negative reactions from participants: “what you’d expect from a driver is a bit of a
gradual turn” [P34] and “there were moments where it was accelerating around corners, I think it catches you
unaware” [P41].
5.2.3. Incomplete Mental Model
The unfamiliarity with automated vehicles and their capabilities led participants to be unsure about their driving style and the reasons behind their behaviours. For some participants, it was not very clear how
the vehicles navigated the environment, or why they behaved as they did. Some participants seemed
to be unaware of the possibility of vehicle-to-vehicle communication. For example, P22 declared that
“the [other] car hasn’t even appeared but my car had already stopped in advance, and there wasn’t a light or
anything, just stopping because they knew that a car or something would pass in front of it, but in a way that it
was impossible for it to have detected”. Likewise, P04 was unsure about the reasons behind the vehicle
behaviours: “I just assume it was a radar in front, it’s not obvious what is making the mechanism workings, the
inside, it’s just how I understand how it works. I just consider the lights when the other is on the way, they may
interact like a car, it’s not completely obvious how they move”. P43, who tested the machine-like driving,
was also unsure about why the vehicle behaved as it did, and commented that they felt uncomfortable
when it took the corner without ‘looking’, which seemed unsafe:
Normally, when you drive, you stop at the junction and check if there’s another car coming or another
driver and then will go, but here it didn’t stop, it just went. I did, in my mind, I knew there wasn’t
anything coming, but if it would be the real, in real life, I would be a bit cautious, I’d be feeling a
bit, ‘why it didn’t stop?’ it was ok at this time, but I wouldn’t feel safe, because it may be other
vehicles coming.
These uncertainties, together with limitations with the design of the journeys, led to some
participants correctly suspecting that the vehicles were pre-programmed to follow a specific route.
Ten participants mentioned their suspicions during the interviews. P27 illustrates: “So without knowing
how it all linked together and how it is integrated I assume that there is a preconceived path that the pod has to
follow, and if that’s the case then one pod is always going to know where the other pod is”.
After a discussion about vehicle-to-vehicle communication, P26 questioned: “how would that
work for other cars? I don’t know, for pods that works, for other cars you can’t expect that everyone’s going to
have, immediately have cars that all communicate between each other, overnight”. P31 added the concept of
familiarity and domestication of the technology, which may eventually happen: “when people get used
to it, when people grow up with it, I don’t think it will be a consideration anymore. I think it will be assumed,
that’s it, and it does that”.
6. Discussion
This study demonstrated that there were no statistically significant differences in reported trust
scores between the two driving style conditions as measured by the trust questionnaire [75]. This result
was corroborated by qualitative analysis of the interview responses. Participants’ opinions were
divided between the two driving styles, and they could list the advantages and disadvantages of both
without a strong preference for either. Therefore, our first research question could not be answered conclusively, and the first hypothesis was not supported: the manoeuvres from one driving system were not necessarily preferred over the other by our participants.
We showed that trust increased with time for both driving styles, being higher on the final run
once users built familiarity with the system. Previous research also indicated that trust evolves and
stabilizes over time [78]. There is evidence that trust can be learned, as users evaluate their current and
past experiences about the system’s performance [11]. Especially if interactions are positive, users can
learn to trust technology [21]. Our result probably reflects the growing familiarity with the technology
as it proved itself safe.
The overall trust and the sub-factors of trust and distrust in the machine-like driving style showed a steady curve throughout the four journeys in the vehicle. However, the human-like behaviour presented a steeper change in these scores between runs 2 and 3. Although the reasons are not completely clear, we suggest that the behaviour of the vehicle could have been perceived as awkward by participants, and therefore they took a couple of runs to figure out this driving style and grow accustomed to the vehicle, and only then increased their trust in the technology.
Current traffic situations are challenging for vehicles with more assertive behaviour, as our
participants pointed out. There are interactions with diverse agents and environmental features,
which are not directly in communication with the vehicle’s system [79]. Users also acknowledge that
future generations may be more comfortable with AVs and their features, as they learn to live with the
new technology.
If the benefits of automated driving are to be obtained, e.g., less traffic congestion and improved efficiency, these vehicles should incorporate the capabilities brought about by the technology, such as platooning and collaborative perception. Vehicles will probably be able to communicate with each other, share the knowledge of hazards and obstacles, drive in platoons, slot in between each other
at junctions, and make decisions based on information beyond occupants’ field of view. Driving
efficiently may sometimes involve performing manoeuvres that can be considered misconduct or
may make people uneasy [50]. Users may be unsure of the reasons behind vehicle behaviours, and assertive driving may seem unsafe to them. However, some level of rule breaking is acceptable and even expected,
for example, when a vehicle has to cross a street divisor line to overtake a stopped car [80].
6.1. Suggestions
To answer our second research question, we provide here an indication of how an AV should drive
and some recommendations to improve the acceptance of AV’s driving behaviours. Early versions of
the control systems governing AVs could start off being more conservative, similarly to how humans
behave, “leveraging the instinctive human ability to react to dangerous situations” [81]. After repeated
exposure and once users become familiar with AVs, their behaviour should become more assertive,
progressing to a machine-like driving style. The speed of this transition could be left to the occupants to define, thereby gradually increasing comfort and acceptance. These suggestions indicate that our second hypothesis is only partially supported. We had hypothesised that AVs should present a
familiar driving behaviour. However, a longitudinal perspective to trust and acceptance implies that a
familiar human-like driving style could be gradually replaced by a more machine-like behaviour.
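To make this longitudinal suggestion concrete, the sketch below shows one hypothetical way such a transition could be parameterised. Everything in it, the blend function, the occupant-set pace and the example creep distance, is our own illustration rather than a validated or recommended design.

```python
# Hypothetical sketch of a progressive human-like -> machine-like transition.
# The blending rule, the occupant-set pace and the 1.5 m creep distance are
# assumptions made for illustration only.
def assertiveness(completed_trips: int, occupant_pace: float = 0.1) -> float:
    """Blend factor: 0.0 = fully human-like, 1.0 = fully machine-like."""
    return max(0.0, min(1.0, completed_trips * occupant_pace))

def creep_distance_m(blend: float, human_creep_m: float = 1.5) -> float:
    """Human-like pods 'creep' out at junctions; machine-like pods do not."""
    return human_creep_m * (1.0 - blend)

for trips in (0, 5, 10):
    b = assertiveness(trips)
    print(f"after {trips} trips: blend = {b:.1f}, creep = {creep_distance_m(b):.2f} m")
```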
We also suggest that occupants should form a mental model of how AVs work beforehand, and
progressively understand the more advanced features, given that the formation of the appropriate
level of trust tends to start to be created long before the actual user interaction [82]. Users should also
be aware of the details of the systems and the reasons behind vehicles’ behaviours, as it can increase
their situation awareness [83]. Users would also benefit from knowing that the AV is sharing an overarching knowledge that is governing traffic for a common good. Users should acquire a comprehensive mental model of the processes behind the decision-making system embedded in the vehicles in order to build trust, instead of waiting to learn how the control systems work over time [37].
People tend to produce highly different mental models of AVs (not always correct or
comprehensive), and gradually add concepts and links as they experience journeys [84]. It is possible
to design procedures that encourage appropriate trust, for example, communicating to users the system
capabilities and limitations beforehand [12]. An industry-wide effort to communicate the capabilities of
vehicles may be needed. Occupants of AVs could be shown that vehicles have ‘seen’ possible hazards
and the related system performance [85], but also be reassured when a vehicle is communicating with another vehicle and with infrastructure. By doing so, users would be more likely to accept what otherwise could be
deemed a risky driving manoeuvre.
AV’s communication capabilities could be displayed on internal [86] and external [46] interfaces
available to the users, as this interaction improves the understanding of the vehicle with time [87].
However, the design of the information delivered to the occupants of AVs should take into consideration
the related workload, as too much information can have a negative effect, making users anxious [88].
Pre-training could be used to improve the understanding of advanced features of AVs, for example,
via interactive tutorials [89].
6.2. Limitations and Future Work
The design of this study presented numerous challenges, which resulted in some limitations
described here. Firstly, the process of defining the laps required a meticulous and time-consuming
design of the journeys through the arena. This was coupled with the challenge of coordinating the
behaviour of one vehicle with the other, with the path to be followed and the timings for each start
and stop to be in perfect sync. Minor deviations from the expected ideal driving behaviour have been
shown to lead to AVs being perceived as awkward [56], hence our participants’ complaints about
sharp, un-natural turns and acceleration profiles, and long stop times.
The recruitment of participants and the experiments conducted for each condition happened in
two different phases, one after the other, approximately one week apart. This lack of randomisation may
have affected the results of the study, for example, if participants were primed by incidents involving
AVs. However, no high-profile accidents were in the news during the course of the data collection
phase. Additionally, the demographics of participants may not represent the target population for
these vehicles, since, via opportunistic sampling, we obtained mainly male, able-bodied participants
working for a car manufacturer. We attempted to minimise previous knowledge and experience by
excluding engineers and designers from the recruitment process. Nevertheless, samples from the general population with a more balanced gender ratio and controlled age distribution could be invited to participate in future studies. Previous studies show that occupants’ comfort with and acceptance of a certain driving style may be perceived differently according to their demographic characteristics [90].
This research could have benefited from longer piloting, testing and validation of the designed
driving behaviours to increase the chances of all participants perceiving the driving styles as we
designed them. Future studies should also find better ways of designing the laps, perhaps observing
how humans drive to find the precise ideal path [62,91,92]. It is possible to use computational methods for interpolating the curves and defining paths followed by AVs [34], while taking into consideration that a technically ideal trajectory may not coincide with occupants’ preferences [61]. Vehicles should also spend only the minimum necessary time stopped at junctions, so as not to compromise the perceived efficiency. These details may explain why some of our participants suspected that the pods followed a
pre-programmed path. Further research could also compare acceptance and trust between lay users
and those that went through a training program about the capabilities of the vehicles prior to the
interactions [89].
The evolution of trust during the current study indicates an avenue of research within technology
acceptance and automotive user interaction. Studies could test trust and distrust levels using longer
journeys or a large number of runs to define how scores improve over time. It would also be interesting
to understand when trust ‘saturates’. Future studies could also include a staged negative incident to
evaluate how it would affect trust levels, and if trust is ever rebuilt.
Finally, this research raised one interesting point, that of individual choices versus a common
good [93]. Stopping for no apparent reason seemed inefficient and prompted complaints from participants.
Would users accept this behaviour, if they know that their vehicle is being held there because it is
more efficient to let a whole platoon of vehicles go by at speed instead of letting vehicles negotiate
the junction one by one? Will this represent the end of etiquette and courtesy as vehicles “get down
to business” regardless of users’ preferences [94]? More research will be needed to identify these
psychological aspects of individualism in traffic subject to a (possible) governing system controlling
all AVs.
7. Conclusions
This study presents a contribution to the design definitions of AVs, towards future driving
systems that are more acceptable and trustworthy. Two highly automated driving pods were used
simultaneously to test user trust and acceptance in relation to two distinct driving behaviours.
Human-like behaviour inspires confidence due to familiarity. However, to reduce traffic congestion and
improve efficiency, AVs will have to behave more like machines, driving in platoons and negotiating
gaps and junctions automatically. They are likely to make use of collaborative perception and share
information that vehicle occupants are unable to directly obtain. Consequently, AVs may be more
assertive than humans are in traffic, and these behaviours are generally seen as unsafe or are considered
to make people uneasy.
To improve the trust and acceptance of the automated driving systems of the future, the design
recommendations obtained from this research are the following:
• Explain to the general public the details of the driving systems, for example, recent technological features such as V2V/V2I communication
• Help create realistic mental models of the complex interactions between vehicles, their sensors, other road users and infrastructure
• Present the features progressively, so occupants can build this knowledge with time
• Convey to occupants the sensed hazards and the shared knowledge received from other vehicles or infrastructure, so users can acknowledge that the system is aware of hazards beyond the field of view.
Users may need to form new and more realistic mental models of how the AVs work, either
through an iterative process of experiencing the systems or via pre-training about the features and
capabilities of AVs. Once users better understand the driving systems and become familiar with the technology and the reasons behind its behaviour, they will be more trusting, accepting and likely to ‘let it do its job’.
Author Contributions: Conceptualization, L.O., K.P., C.G.B.; methodology, L.O., K.P., C.G.B.; formal analysis, L.O.,
K.P., C.G.B.; investigation, L.O., C.G.B., K.P.; data curation, L.O., K.P., C.G.B.; writing—original draft preparation,
L.O.; writing—review and editing, C.G.B., K.P., S.B.; visualization, L.O.; supervision, S.B.; project administration,
S.B.; funding acquisition, S.B.
Funding: This project was funded by Innovate UK, the UK’s innovation agency.
Grant competition code: 1407_CRD1_TRANS_DCAR.
Acknowledgments: This study was part of UK Autodrive, a flagship multi-partner project focusing on
the development of Human Machine Interface (HMI) strategies and on real-world trials of these
technologies in low-speed AVs (http://www.ukautodrive.com). The authors would like to acknowledge the vast
amount of work that RDM Group/Aurrigo put into this study.
Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the
study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to
publish the results.
References
1. SAE. J3016: Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems; SAE Int.: Warrendale, PA, USA, 2014. Available online: http://www.sae.org/standards/content/j3016_201609/ (accessed on 13 April 2018).
2. Eden, G.; Nanchen, B.; Ramseyer, R.; Evéquoz, F. On the Road with an Autonomous Passenger Shuttle: Integration in Public Spaces. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems, Denver, CO, USA, 6–11 May 2017; pp. 1569–1576. [CrossRef]
3. Nordhoff, S.; de Winter, J.; Madigan, R.; Merat, N.; van Arem, B.; Happee, R. User acceptance of automated shuttles in Berlin-Schöneberg: A questionnaire study. Transp. Res. Part F Traffic Psychol. Behav. 2018, 58, 843–854. [CrossRef]
4. Meyer, J.; Becker, H.; Bösch, P.M.; Axhausen, K.W. Autonomous vehicles: The next jump in accessibilities? Res. Transp. Econ. 2017, 62, 80–91. [CrossRef]
5. Fagnant, D.J.; Kockelman, K. Preparing a nation for autonomous vehicles: Opportunities, barriers and policy recommendations. Transp. Res. Part A Policy Pract. 2015, 77, 167–181. [CrossRef]
6. Wadud, Z.; MacKenzie, D.; Leiby, P. Help or hindrance? The travel, energy and carbon impacts of highly automated vehicles. Transp. Res. Part A Policy Pract. 2016, 86, 1–18. [CrossRef]
7. Lee, J.D.; See, K.A. Trust in Automation: Designing for Appropriate Reliance. Hum. Factors J. Hum. Factors Ergon. Soc. 2004, 46, 50–80. [CrossRef] [PubMed]
8. Merritt, S.M.; Heimbaugh, H.; LaChapell, J.; Lee, D. I Trust It, but I don’t Know Why: Effects of Implicit Attitudes Toward Automation on Trust in an Automated System. Hum. Factors J. Hum. Factors Ergon. Soc. 2013, 55, 520–534. [CrossRef]
9. Mirnig, A.G.; Wintersberger, P.; Sutter, C.; Ziegler, J. A Framework for Analyzing and Calibrating Trust in Automated Vehicles. In Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Ann Arbor, MI, USA, 24–26 October 2016; pp. 33–38. [CrossRef]
10. Kundinger, T.; Wintersberger, P.; Riener, A. (Over)Trust in Automated Driving: The Sleeping Pill of Tomorrow? In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems; CHI’19; ACM Press: New York, NY, USA, 2019; pp. 1–6.
11. Hoff, K.A.; Bashir, M. Trust in Automation: Integrating Empirical Evidence on Factors That Influence Trust. Hum. Factors J. Hum. Factors Ergon. Soc. 2015, 57, 407–434. [CrossRef] [PubMed]
12. Khastgir, S.; Birrell, S.; Dhadyalla, G.; Jennings, P. Calibrating trust through knowledge: Introducing the concept of informed safety for automation in vehicles. Transp. Res. Part C Emerg. Technol. 2018, 96, 290–303. [CrossRef]
13. Helldin, T.; Falkman, G.; Riveiro, M.; Davidsson, S. Presenting system uncertainty in automotive UIs for supporting trust calibration in autonomous driving. In Proceedings of the 5th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Eindhoven, The Netherlands, 28–30 October 2013; pp. 210–217. [CrossRef]
14. Lyons, J.B. Being transparent about transparency: A model for human-robot interaction. In Proceedings of the AAAI Spring Symposium, Stanford, CA, USA, 25–27 March 2013; pp. 48–53.
15. Kunze, A.; Summerskill, S.J.; Marshall, R.; Filtness, A.J. Automation transparency: Implications of uncertainty communication for human-automation interaction and interfaces. Ergonomics 2019, 62, 345–360. [CrossRef]
16. Haeuslschmid, R.; Shou, Y.; O’Donovan, J.; Burnett, G.; Butz, A. First Steps towards a View Management Concept for Large-sized Head-up Displays with Continuous Depth. In Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications—Automotive’UI 16, Ann Arbor, MI, USA, 24–26 October 2016; pp. 1–8. [CrossRef]
17. Sibi, S.; Baiters, S.; Mok, B.; Steiner, M.; Ju, W. Assessing driver cortical activity under varying levels of automation with functional near infrared spectroscopy. In Proceedings of the 2017 IEEE Intelligent Vehicles Symposium (IV), Redondo Beach, CA, USA, 11–14 June 2017; pp. 1509–1516.
18. Gustavsson, P.; Victor, T.W.; Johansson, J.; Tivesten, E.; Johansson, R.; Aust, L. What were they thinking? Subjective experiences associated with automation expectation mismatch. In Proceedings of the 6th Driver Distraction and Inattention Conference, Gothenburg, Sweden, 15–17 October 2018; pp. 1–12.
19. Haboucha, C.J.; Ishaq, R.; Shiftan, Y. User preferences regarding autonomous vehicles. Transp. Res. Part C Emerg. Technol. 2017, 78, 37–49. [CrossRef]
20. Bansal, P.; Kockelman, K.M. Forecasting Americans’ long-term adoption of connected and autonomous vehicle technologies. Transp. Res. Part A Policy Pract. 2017, 95, 49–63. [CrossRef]
21. Hartwich, F.; Witzlack, C.; Beggiato, M.; Krems, J.F. The first impression counts—A combined driving simulator and test track study on the development of trust and acceptance of highly automated driving. Transp. Res. Part F Traffic Psychol. Behav. 2018, in press. [CrossRef]
22. Frison, A.; Wintersberger, P.; Riener, A.; Schartmüller, C.; Boyle, L.N.; Miller, E.; Weigl, K. In UX We Trust: Investigation of Aesthetics and Usability of Driver-Vehicle Interfaces and Their Impact on the Perception of Automated Driving. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems—CHI ’19, Glasgow, UK, 4–9 May 2019; pp. 1–13.
23. Smits, M. Taming monsters: The cultural domestication of new technology. Technol. Soc. 2006, 28, 489–504. [CrossRef]
24. Mirnig, A.; Gärtner, M.; Meschtscherjakov, A.; Gärtner, M. Autonomous Driving: A Dream on Rails? In Mensch und Computer 2017 – Workshopband; Digitale Bibliothek der Gesellschaft für Informatik: Regensburg, Germany, 2017.
25. Chong, Z.J.; Qin, B.; Bandyopadhyay, T.; Wongpiromsarn, T.; Rebsamen, B.; Dai, P.; Rankin, E.S.; Ang, M.H., Jr. Autonomy for Mobility on Demand. In Advances in Intelligent Systems and Computing; Springer: Berlin/Heidelberg, Germany, 2013; pp. 671–682.
26. Moorthy, A.; De Kleine, R.; Keoleian, G.; Good, J.; Lewis, G. Shared Autonomous Vehicles as a Sustainable Solution to the Last Mile Problem: A Case Study of Ann Arbor-Detroit Area. SAE Int. J. Passeng. Cars-Electron. Electr. Syst. 2017, 10, 328–336. [CrossRef]
27. Krueger, R.; Rashidi, T.H.; Rose, J.M. Preferences for shared autonomous vehicles. Transp. Res. Part C Emerg. Technol. 2016, 69, 343–355. [CrossRef]
28. Fu, X.; Vernier, M.; Kurt, A.; Redmill, K.; Ozguner, U. Smooth: Improved Short-distance Mobility for a Smarter City. In Proceedings of the 2nd International Workshop on Science of Smart City Operations and Platforms Engineering, Pittsburgh, PA, USA, 18–21 April 2017; pp. 46–51. [CrossRef]
29. Distler, V.; Lallemand, C.; Bellet, T. Acceptability and Acceptance of Autonomous Mobility on Demand: The Impact of an Immersive Experience. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; pp. 1–10.
30. Wintersberger, P.; Frison, A.-K.; Riener, A. Man vs. Machine: Comparing a Fully Automated Bus Shuttle with a Manually Driven Group Taxi in a Field Study. In Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Toronto, ON, Canada, 23–25 September 2018; pp. 215–220.
31. Qiu, H.; Ahmad, F.; Govindan, R.; Gruteser, M.; Bai, F.; Kar, G. Augmented Vehicular Reality: Enabling Extended Vision for Future Vehicles. In Proceedings of the 18th International Workshop on Mobile Computing Systems and Applications, Sonoma, CA, USA, 21–22 February 2017; pp. 67–72. [CrossRef]
32. Arnold, E.; Al-Jarrah, O.Y.; Dianati, M.; Fallah, S.; Oxtoby, D.; Mouzakitis, A. A Survey on 3D Object Detection Methods for Autonomous Driving Applications. IEEE Trans. Intell. Transp. Syst. 2019, 1–14. [CrossRef]
33. Kuutti, S.; Fallah, S.; Katsaros, K.; Dianati, M.; Mccullough, F.; Mouzakitis, A. A Survey of the State-of-the-Art Localization Techniques and Their Potentials for Autonomous Vehicle Applications. IEEE Internet Things J. 2018, 5, 829–846. [CrossRef]
34. Gonzalez, D.; Perez, J.; Milanes, V.; Nashashibi, F. A Review of Motion Planning Techniques for Automated Vehicles. IEEE Trans. Intell. Transp. Syst. 2016, 17, 1135–1145. [CrossRef]
35. Lu, K.; Higgins, M.; Woodman, R.; Birrell, S. Dynamic platooning for autonomous vehicles: Real-time, en-route optimisation. Transp. Res. Part B Methodol. 2019, submitted.
36. O’Toole, M.; Lindell, D.B.; Wetzstein, G. Confocal non-line-of-sight imaging based on the light-cone transform. Nature 2018, 555, 338–341. [CrossRef]
37. Beggiato, M.; Krems, J.F. The evolution of mental model, trust and acceptance of adaptive cruise control in relation to initial information. Transp. Res. Part F Traffic Psychol. Behav. 2013, 18, 47–57. [CrossRef]
38. Rasmussen, J. Skills, rules, and knowledge; signals, signs, and symbols, and other distinctions in human performance models. IEEE Trans. Syst. Man Cybern. 1983, 257–266. [CrossRef]
39. Revell, K.M.A.; Stanton, N.A. When energy saving advice leads to more, rather than less, consumption. Int. J. Sustain. Energy 2017, 36, 1–19. [CrossRef]
40. Wintersberger, P.; Riener, A.; Frison, A.-K. Automated Driving System, Male, or Female Driver: Who’d You Prefer? Comparative Analysis of Passengers’ Mental Conditions, Emotional States and Qualitative Feedback. In Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Ann Arbor, MI, USA, 24–26 October 2016; pp. 51–58. [CrossRef]
41. Fridman, L.; Mehler, B.; Xia, L.; Yang, Y.; Facusse, L.Y.; Reimer, B. To Walk or Not to Walk: Crowdsourced Assessment of External Vehicle-to-Pedestrian Displays. arXiv 2017, arXiv:1707.02698.
42. Song, Y.E.; Lehsing, C.; Fuest, T.; Bengler, K. External HMIs and Their Effect on the Interaction Between Pedestrians and Automated Vehicles. Adv. Intell. Syst. Comput. 2018, 722, 13–18. [CrossRef]
43. Böckle, M.-P.; Brenden, A.P.; Klingegård, M.; Habibovic, A.; Bout, M. SAV2P – Exploring the Impact of an Interface for Shared Automated Vehicles on Pedestrians’ Experience. In Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications Adjunct, Oldenburg, Germany, 24–27 September 2017; pp. 136–140. [CrossRef]
44. Chang, C.; Toda, K.; Sakamoto, D.; Igarashi, T. Eyes on a Car: An Interface Design for Communication between an Autonomous Car and a Pedestrian. In Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Oldenburg, Germany, 24–27 September 2017; pp. 65–73. [CrossRef]
45. Merat, N.; Louw, T.; Madigan, R.; Wilbrink, M.; Schieben, A. What externally presented information do VRUs require when interacting with fully Automated Road Transport Systems in shared space? Accid. Anal. Prev. 2018, 118, 244–252. [CrossRef] [PubMed]
46. Merat, N.; Louw, T.; Madigan, R.; Wilbrink, M.; Schieben, A. Designing the interaction of automated vehicles with other traffic participants: Design considerations based on human needs and expectations. Cogn. Technol. Work 2019, 21, 69–85. [CrossRef]
47. Burns, C.G.; Oliveira, L.; Hung, V.; Thomas, P.; Birrell, S. Pedestrian Attitudes to Shared-Space Interactions with Autonomous Vehicles—A Virtual Reality Study. In Proceedings of the 10th International Conference on Applied Human Factors and Ergonomics (AHFE), Washington, DC, USA, 24–28 July 2019; pp. 307–316.
48. Burns, C.G.; Oliveira, L.; Birrell, S.; Iyer, S.; Thomas, P. Pedestrian Decision-Making Responses to External Human-Machine Interface Designs for Autonomous Vehicles. In Proceedings of the 30th IEEE Intelligent Vehicles Symposium, HFIV: Human Factors in Intelligent Vehicles, Paris, France, 9–12 June 2019.
49. Dey, D.; Terken, J. Pedestrian Interaction with Vehicles: Roles of Explicit and Implicit Communication. In Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Oldenburg, Germany, 24–27 September 2017; pp. 109–113.
50. Zimmermann, R.; Wettach, R. First Step into Visceral Interaction with Autonomous Vehicles. In Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Oldenburg, Germany, 24–27 September 2017; pp. 58–64. [CrossRef]
51. Mahadevan, K.; Somanath, S.; Sharlin, E. Communicating Awareness and Intent in Autonomous Vehicle-Pedestrian Interaction. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; pp. 1–12.
52. Portouli, E.; Nathanael, D.; Marmaras, N. Drivers’ communicative interactions: On-road observations and modelling for integration in future automation systems. Ergonomics 2014, 57, 1795–1805. [CrossRef]
53. Imbsweiler, J.; Ruesch, M.; Weinreuter, H.; Puente León, F.; Deml, B. Cooperation behaviour of road users in t-intersections during deadlock situations. Transp. Res. Part F Traffic Psychol. Behav. 2018, 58, 665–677. [CrossRef]
54. Kauffmann, N.; Winkler, F.; Naujoks, F.; Vollrath, M. What Makes a Cooperative Driver? Identifying parameters of implicit and explicit forms of communication in a lane change scenario. Transp. Res. Part F Traffic Psychol. Behav. 2018, 58, 1031–1042. [CrossRef]
55. Kauffmann, N.; Winkler, F.; Vollrath, M. What Makes an Automated Vehicle a Good Driver? Exploring Lane Change Announcements in Dense Traffic Situations. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; pp. 1–9.
56. Brown, B.; Laurier, E. The Trouble with Autopilots: Assisted and autonomous driving on the social road. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, Denver, CO, USA, 6–11 May 2017; pp. 416–429.
57. Hidas, P. Modelling lane changing and merging in microscopic traffic simulation. Transp. Res. Part C Emerg. Technol. 2002, 10, 351–371. [CrossRef]
58. Ibanez-Guzman, J.; Lefevre, S.; Mokkadem, A.; Rodhaim, S. Vehicle to vehicle communications applied to road intersection safety, field results. In Proceedings of the 13th International IEEE Conference on Intelligent Transportation Systems, Funchal, Portugal, 19–22 September 2010; pp. 192–197.
59. Imbsweiler, J.; Stoll, T.; Ruesch, M.; Baumann, M.; Deml, B. Insight into cooperation processes for traffic scenarios: Modelling with naturalistic decision making. Cogn. Technol. Work 2018, 20, 621–635. [CrossRef]
60. Al-Shihabi, T.; Mourant, R.R. Toward More Realistic Driving Behavior Models for Autonomous Vehicles in Driving Simulators. Transp. Res. Rec. J. Transp. Res. Board 2003, 1843, 41–49. [CrossRef]
61. Voß, G.M.I.; Keck, C.M.; Schwalm, M. Investigation of drivers’ thresholds of a subjectively accepted driving performance with a focus on automated driving. Transp. Res. Part F Traffic Psychol. Behav. 2018, 56, 280–292. [CrossRef]
62. Bellem, H.; Schönenberg, T.; Krems, J.F.; Schrauf, M. Objective metrics of comfort: Developing a driving style for highly automated vehicles. Transp. Res. Part F Traffic Psychol. Behav. 2016, 41, 45–54. [CrossRef]
63. Oliveira, L.; Proctor, K.; Burns, C.; Luton, J.; Mouzakitis, A. Trust and acceptance of automated vehicles: A qualitative study. In Proceedings of the INTSYS – 3rd EAI International Conference on Intelligent Transport Systems, Braga, Portugal, 4–6 December 2019; submitted for publication.
64. Smyth, J.; Jennings, P.; Mouzakitis, A.; Birrell, S. Too Sick to Drive: How Motion Sickness Severity Impacts Human Performance. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; pp. 1787–1793.
65. Bellem, H.; Thiel, B.; Schrauf, M.; Krems, J.F. Comfort in automated driving: An analysis of preferences for different automated driving styles and their dependence on personality traits. Transp. Res. Part F Traffic Psychol. Behav. 2018, 55, 90–100. [CrossRef]
66. Waytz, A.; Heafner, J.; Epley, N. The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. J. Exp. Soc. Psychol. 2014, 52, 113–117. [CrossRef]
67. Huang, C.; Mutlu, B. The Repertoire of Robot Behavior: Designing Social Behaviors to Support Human-Robot Joint Activity. J. Hum.-Robot Interact. 2013, 2, 80–102. [CrossRef]
68. Häuslschmid, R.; von Bülow, M.; Pfleging, B.; Butz, A. Supporting Trust in Autonomous Driving. In Proceedings of the 22nd International Conference on Intelligent User Interfaces, Limassol, Cyprus, 13–16 March 2017; pp. 319–329.
69. Zihsler, J.; Hock, P.; Walch, M.; Dzuba, K.; Schwager, D.; Szauer, P.; Rukzio, E. Carvatar: Increasing Trust in Highly-Automated Driving Through Social Cues. In Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications Adjunct, Ann Arbor, MI, USA, 24–26 October 2016; pp. 9–14.
70. Zhu, M.; Wang, X.; Wang, Y. Human-like autonomous car-following model with deep reinforcement learning. Transp. Res. Part C Emerg. Technol. 2018, 97, 348–368. [CrossRef]
71. Lee, J.G.; Kim, K.J.; Lee, S.; Shin, D.H. Can Autonomous Vehicles Be Safe and Trustworthy? Effects of Appearance and Autonomy of Unmanned Driving Systems. Int. J. Hum. Comput. Interact. 2015, 31, 682–691. [CrossRef]
72. Cha, E.; Kim, Y.; Fong, T.; Mataric, M.J. A Survey of Nonverbal Signaling Methods for Non-Humanoid Robots. Found. Trends Robot. 2018, 6, 211–323. [CrossRef]
73. Galvin, R. How many interviews are enough? Do qualitative interviews in building energy consumption research produce reliable knowledge? J. Build. Eng. 2014, 1, 2–12. [CrossRef]
74. Kuniavsky, M.; Goodman, E.; Moed, A. Observing the User Experience: A Practitioner’s Guide to User Research, 2nd ed.; Elsevier: Amsterdam, The Netherlands, 2012.
75. Jian, J.-Y.; Bisantz, A.M.; Drury, C.G. Foundations for an Empirically Determined Scale of Trust in Automated Systems. Int. J. Cogn. Ergon. 2000, 4, 53–71. [CrossRef]
76. Braun, V.; Clarke, V. Using thematic analysis in psychology. Qual. Res. Psychol. 2006, 3, 77–101. [CrossRef]
77. Glaser, B.G. The Constant Comparative Method of Qualitative Analysis. Soc. Probl. 1965, 12, 436–445. [CrossRef]
78. Yang, X.J.; Unhelkar, V.V.; Li, K.; Shah, J.A. Evaluating Effects of User Experience and System Transparency on Trust in Automation. In Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, Vienna, Austria, 6–9 March 2017; pp. 408–416.
79. Dogramadzi, S.; Giannaccini, M.E.; Harper, C.; Sobhani, M.; Woodman, R.; Choung, J. Environmental Hazard Analysis—A Variant of Preliminary Hazard Analysis for Autonomous Mobile Robots. J. Intell. Robot. Syst. 2014, 76, 73–117. [CrossRef]
80. Vinkhuyzen, E.; Cefkin, M. Developing socially acceptable autonomous vehicles. In Proceedings of the Ethnographic Praxis in Industry Conference, Minneapolis, MN, USA, 29 August–1 September 2016; pp. 522–534.
81. Mahadevan, K.; Somanath, S.; Sharlin, E. “Fight-or-Flight”: Leveraging Instinctive Human Defensive Behaviors for Safe Human-Robot Interaction. In Proceedings of the Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; pp. 183–184.
82. Ekman, F.; Johansson, M.; Sochor, J. Creating appropriate trust in automated vehicle systems: A framework for HMI design. IEEE Trans. Hum. Mach. Syst. 2018, 48, 95–101. [CrossRef]
83. Wiegand, G.; Schmidmaier, M.; Weber, T.; Liu, Y.; Hussmann, H. I Drive—You Trust: Explaining Driving Behavior of Autonomous Cars. In Proceedings of the Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; pp. 1–6.
84. Heikoop, D.D.; de Winter, J.C.F.; van Arem, B.; Stanton, N.A. Effects of mental demands on situation awareness during platooning: A driving simulator study. Transp. Res. Part F Traffic Psychol. Behav. 2018, 58, 193–209. [CrossRef]
85. Kunze, A.; Summerskill, S.J.; Marshall, R.; Filtness, A.J. Evaluation of Variables for the Communication of Uncertainties Using Peripheral Awareness Displays. In Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Toronto, ON, Canada, 23–25 September 2018; pp. 147–153.
86. Oliveira, L.; Luton, J.; Iyer, S.; Burns, C.; Mouzakitis, A.; Jennings, P.; Birrell, S. Evaluating How Interfaces Influence the User Interaction with Fully Autonomous Vehicles. In Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Toronto, ON, Canada, 23–25 September 2018; pp. 320–331.
87. Forster, Y.; Hergeth, S.; Naujoks, F.; Beggiato, M.; Krems, J.F.; Keinath, A. Learning to use automation: Behavioral changes in interaction with automated driving systems. Transp. Res. Part F Traffic Psychol. Behav. 2019, 62, 599–614. [CrossRef]
88. Koo, J.; Kwac, J.; Ju, W.; Steinert, M.; Leifer, L.; Nass, C. Why did my car just do that? Explaining semi-autonomous driving actions to improve driver understanding, trust, and performance. Int. J. Interact. Des. Manuf. 2015, 9, 269–275. [CrossRef]
89. Forster, Y.; Hergeth, S.; Naujoks, F.; Krems, J.; Keinath, A. User Education in Automated Driving: Owner’s Manual and Interactive Tutorial Support Mental Model Formation and Human-Automation Interaction. Information 2019, 10, 22. [CrossRef]
90. Hartwich, F.; Beggiato, M.; Krems, J.F. Driving comfort, enjoyment and acceptance of automated driving–effects of drivers’ age and driving style familiarity. Ergonomics 2018, 61, 1017–1032. [CrossRef]
91. Driggs-Campbell, K.; Govindarajan, V.; Bajcsy, R. Integrating Intuitive Driver Models in Autonomous Planning for Interactive Maneuvers. IEEE Trans. Intell. Transp. Syst. 2017, 18, 3461–3472. [CrossRef]
92. Elbanhawi, M.; Simic, M.; Jazar, R. In the Passenger Seat: Investigating Ride Comfort Measures in Autonomous Cars. IEEE Intell. Transp. Syst. Mag. 2015, 7, 4–17. [CrossRef]
93. Hardin, G. The Tragedy of the Commons. Science 1968, 162, 1243–1248. [CrossRef]
94. Parasuraman, R.; Miller, C.A. Trust and etiquette in high-criticality automated systems. Commun. ACM 2004, 47, 51. [CrossRef]
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).