
Adaptive Driving Agent

Proceedings of the 8th International Conference on Human-Agent Interaction

ABSTRACT

The successful integration of automation in systems that affect human experiences requires user acceptance of those automated functionalities. For example, the comfort a human feels during a ride is affected by the automated control behavior of the vehicle. The challenge presented in this paper is how to develop an intelligent agent that learns its users' driving preferences and adjusts the vehicle control in real time accordingly, minimizing the number of otherwise required manual interventions. This is a hard problem since users' preferences can be complex and context dependent, and they do not necessarily translate to the language of machines in a simple and straightforward manner. Our solution includes (1) a simulation test bed, (2) an adaptive intelligent interface and (3) an adaptive agent that learns to predict a user's driving discomfort and to compute corrective actions that maximize user acceptance of automated driving. Overall, we conducted three user studies with 94 subjects in simulated driving scenarios. Our results show that our intelligent agent learned to successfully adjust the automated driving style to increase user acceptance, decreasing the number of users' manual interventions.

Adaptive Driving Agent: From Driving a Machine to Riding with a Friend

Claudia V. Goldman† (General Motors, Herzliya Pituach, Israel), [email protected]
Albert Harounian (Computer Science Department, Bar Ilan University, Ramat Gan, Israel), [email protected]
Ruben Mergui (General Motors, Herzliya Pituach, Israel), [email protected]
Sarit Kraus (Computer Science Department, Bar Ilan University, Ramat Gan, Israel), [email protected]

†Corresponding Author

KEYWORDS

Intelligent Agents, Adaptive Behavior, User Modeling

CCS CONCEPTS

• Computing Methodologies → Artificial Intelligence → Intelligent Agents • Machine Learning • Applied Computing → Driving Control • Human Computer Interaction

ACM Reference format:
Claudia V. Goldman, Albert Harounian, Ruben Mergui and Sarit Kraus. 2020. Adaptive Driving Agent: From driving a machine to riding with a friend. In Proceedings of the 8th International Conference on Human-Agent Interaction (HAI '20), November 10–13, 2020, Virtual Event, Australia. ACM, NY, NY, USA. 8 pages. https://doi.org/10.1145/3406499.3415067

HAI '20, November 10–13, 2020, Virtual Event, NSW, Australia
© 2020 Association for Computing Machinery. ACM ISBN 978-1-4503-8054-6/20/11…$15.00
https://doi.org/10.1145/3406499.3415067

1 Introduction

Advances in sensing and computational technologies pave the way for automating an increasing number of driving functionalities. These engineering solutions result in improved vehicle control and, at times, free the human from manually controlling the vehicle. Nevertheless, it is essential to recognize that humans are different, and thus prefer different styles of driving when facing different routes, car occupancy, and driving contexts. Usually, default, engineering-based driving styles are pre-set to control the correct and safe performance of vehicles. There is a hidden assumption that humans will all accept this style under all circumstances. However, for different people, the same driving maneuver taken with different styles may be perceived as "too aggressive" by some, whereas others would consider it "too cautious" or "uncomfortable". Integrating these contextual preferences in a learning agent is a challenge. Understanding users' needs and preferences is hard since these preferences are diverse, can change over time [11] and are contextual [21,24].

This paper attempts to solve this challenge, which requires solving three problems: (1) how can an agent interact with human drivers or passengers to obtain and interpret their preferences, (2) how can this agent learn from these preferences and driving contexts to predict discomfort, and (3) how can this agent learn to adjust online the driving style settings of the vehicle it controls to avoid predicted discomfort. The first problem is hard since people might have difficulty expressing their needs in a context as complex as driving. The evaluation of users' satisfaction may also be very expensive [8] (e.g., setting up cameras inside the car and analyzing the passengers' video/audio streams). Therefore, it may be more efficient to rely on inputs expressed by users in natural ways, enabling an interaction similar to the one they would have with a taxi driver. Namely, we would like to develop an agent able to interpret requests such as "Go faster" or "You're too close" into changes in the technical configuration of the car, without requiring users to give technical details about how the vehicle should achieve these requests (e.g., "use the gas pedal less smoothly" or "limit the acceleration"). Furthermore, we would like the agent to reduce the need for explicit human comments by dynamically adjusting the car's parameters based on an estimation of the human's preferences. To solve the last two problems, we present a novel learning-based agent for the automatic adjustment of a car's driving style configuration, which processes and intelligently reacts to natural inputs from drivers or passengers. Our agent, named the Adaptive Car Controller Agent (ACCA), was developed and tested in a state-of-the-art realistic simulation environment. Through extensive user studies with 94 human participants, we show that the ACCA can significantly reduce the users' burden of adjusting the driving style manually to achieve acceptable levels of comfort (measured with usability questionnaires and evaluated quantitatively by the number of manual interventions required throughout the studies to express discomfort). We have applied a similar computational approach successfully in the thermal domain [18]: we developed an intelligent, intuitive-to-use interface for drivers to change the settings of their automotive climate control system, reducing the number of manual interventions required. This interface was implemented on a tablet and used by all participants throughout all experiments, enabling them to choose what adjustments to make to the driving style by touching buttons and sliders with intuitive meanings.

Our adaptive solution was developed in three stages, each contributing an essential component that is eventually combined into a complete adaptive agent solution: (1) Automatic interpretation of users' intuitive needs: we trained an intelligent agent with human data, provided through an adaptive intelligent interface, enabling users to express their driving needs without stating specific values for all related parameters. The agent was able to interpret these human inputs into actual control actions. (2) Automatic learning of human-preferred driving style settings: our agent learned 3 human-centered driving styles from data collected in the first experiment. (3) Automatic learning and mitigation of predicted user discomfort: our agent learned to predict driving discomfort and to avoid it in real time by adjusting the driving style of the simulated car accordingly. It computed the control actions that attained the highest likelihood of being accepted by the users under similar driving situations. Our results showed that our agent successfully adapted its control behavior to its users' expectations, increasing their acceptance of the automated control and decreasing the number of manual interventions required to accept this automated behavior.
2 Related Work

The personalization of human-vehicle interactions has been studied by researchers and practitioners over the years, including the personalization of a car's climate control system [18], the adaptation of the cruise control system [20] and automatic speed control [25], to name a few. In the more complex context of adjusting a driving style, many car settings and their interdependencies affect the user experience (e.g., the smoothness of turning the steering wheel or of pressing the pedals (gas/brake), and the acceleration rate), making existing approaches unsuitable for our task. Automating human-like driving has been studied as a way to provide personal comfort [3,4]. Our approach is distinct and novel since our agent (1) learns a model of human preferences for driving styles, (2) adapts its automated behavior online accordingly, (3) is evaluated with real subjects riding a simulated car, and (4) interacts intuitively with humans, overcoming the difficulties drivers encounter when expressing what they need from the car in technical terms [6]. Natural interfaces [15] are commonly more intuitive for drivers and passengers to use and understand, and can assist in personalizing the interaction [13,17]. Most relevant to our task is the work by Geng et al. [9], who provide a scenario-adaptive driving style prediction ontology. The proposed ontology presents how a car's driving style should adapt to different traffic scenarios to meet most drivers' expectations. However, the proposed ontology is limited to a "generalized prediction" as opposed to a "personalized prediction" [19]. Namely, the ontology neither provides a car with the means to adapt to each user nor offers a way to adapt the prediction in real time. Other user studies in simulated driving environments analyzed driving styles and their effect on different users [1,2]. However, these works collected data on user acceptance via questionnaires and not by interacting with an intelligent agent in real time, as we do. Guna et al. [10] used driving data to predict driving styles given users' activities. Mazzulla et al. [16] studied the relationship between drivers' characteristics and driving styles. In all these works, the online adaptation of the driving style is missing and is presented as the next required step. In the control domain, Senouth et al. [22] developed a fuzzy rule to adaptively modulate and assist the driver and vehicle torque to keep the lane preferred by the driver. This solution was evaluated by numerical simulations and not by real interactions with subjects, as we present here. Our work is novel due to the combination of human-machine interaction with machine learning to provide one solution that interprets users' preferences and then realizes them into actual control actions with the aim of increasing acceptance.

3 Simulation Set Up

Our agent, user studies and evaluations were run in a Unity-based simulation environment of automated driving. Subjects playing the role of passengers sit on a physical car seat while watching three screens that simulate what they could see through the car's windows and mirrors while driving. Users were told they were going to take a taxi-like ride with a simulated automated driver in the city of San Francisco.
3.1 Simulation Scenarios

We chose an area of the city of San Francisco as our simulated geographical urban area. The real content of the city was transferred into graphical content from videos taken in San Francisco. Fig. 1 shows the route arbitrarily generated in the city. Using this route, we designed four driving scenarios. Each participant experienced the same route with all eleven events, four times in random order (each simulated ride took around 5 minutes). The events, distributed along the route (see Fig. 1), include common urban situations such as pedestrian crossings, traffic jams, a jaywalker suddenly crossing the street or a cyclist riding next to the car, approaching a jam, driving in a jam or on an open road, encountering a car leaving its parking spot or lane, and encountering hazards on the road. These events were predefined to provide a rich driving context similar to real-world events. Each of the four rides a participant experienced differs in the settings assigned to each of the eleven events, creating different driving contexts. We examine the users' reactions and desired changes to the car's driving style in these scenarios.

Figure 1: Simulator Route. Numbers indicate the locations of the events.

3.2 Driving Styles

A car's driving style is represented as a vector of driving parameter values. These parameters correspond to the control settings in the Unity simulator. We focused on the following ones, which affect the simulated driving behavior:
1. Gas Smooth - Determines how hard or soft to press the gas pedal. Value: [0.1, 0.9]
2. Brake Smooth - Determines how hard or soft to press the brake. Value: [0.1, 0.9]
3. Gap Distance - Minimal distance to maintain when approaching an object. Value: [3, 17]
4. Gap Time - Minimal stopping time to maintain while traveling behind another car. Value: [1, 4]
5. Forward Approach Gas Smooth - Determines how abruptly to release the gas pedal when approaching another car. Value: [0.1, 0.9]
6. Forward Approach Brake Smooth - Determines how hard or soft to press the brake pedal when approaching an object. Value: [0.1, 0.9]
7. Acceleration Rate - The acceleration applied to reach the target speed. Value: [0.1, 25]
8. Lane Centering - Determines the location of the car relative to the center of the lane. Value: [0.1, 0.9]
9. Turn Speed - Determines the desired speed when performing turns. Value: [8, 40]
10. Maximum Speed - Determines the maximum speed of the car on an unobstructed road. Value: [10, 50]

Any combination of values for these parameters results in a (slightly) different driving style. Table 1 presents two basic driving styles, calm and active, that we defined after having tested them in the simulator. These styles serve as the initial default driving styles in the first experiment reported below.

Table 1: Basic (Default) Driving Styles

Parameter Name                | Calm | Active
Gas Smooth                    | 0.9  | 0.1
Brake Smooth                  | 0.8  | 0.1
Gap Distance                  | 10   | 3
Gap Time                      | 3    | 1
Forward Approach Gas Smooth   | 0.9  | 0.2
Forward Approach Brake Smooth | 0.8  | 0.2
Default Acceleration          | 4    | 25
Maximum Speed                 | 16   | 50
Lane Centering                | 0    | 0
Turn Speed                    | 13   | 34

Subjects in our experiments see the traffic on the road, and see and hear the effects of their own vehicle driving through the screens and speakers, as they would if they were sitting next to a driver in a regular car. The goal of our agent is to learn to adjust driving styles to those preferred by humans in real time.
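As an illustration, a driving style can be held as a plain mapping from parameter names to values, clamped to the ranges listed above. This is a minimal sketch, not the paper's implementation: the parameter names and ranges follow the list and Table 1, while the helper names are our own.

```python
# Illustrative sketch: a driving style as a parameter vector kept inside
# its allowed ranges. Ranges and the calm style follow Section 3.2 / Table 1;
# the helper functions are hypothetical.
RANGES = {
    "gas_smooth": (0.1, 0.9), "brake_smooth": (0.1, 0.9),
    "gap_distance": (3, 17), "gap_time": (1, 4),
    "fwd_gas_smooth": (0.1, 0.9), "fwd_brake_smooth": (0.1, 0.9),
    "acceleration_rate": (0.1, 25), "lane_centering": (0.1, 0.9),
    "turn_speed": (8, 40), "max_speed": (10, 50),
}

CALM = {"gas_smooth": 0.9, "brake_smooth": 0.8, "gap_distance": 10,
        "gap_time": 3, "fwd_gas_smooth": 0.9, "fwd_brake_smooth": 0.8,
        "acceleration_rate": 4, "lane_centering": 0.0,  # Table 1 lists 0;
        "turn_speed": 13, "max_speed": 16}              # clamping maps it in range

def clamp_style(style):
    """Keep every parameter inside its allowed range (safety constraints)."""
    return {k: min(max(v, RANGES[k][0]), RANGES[k][1]) for k, v in style.items()}

def adjust(style, param, delta):
    """Apply a directional correction (e.g., 'More Speed' -> raise max_speed)."""
    new_style = dict(style)
    new_style[param] = new_style[param] + delta
    return clamp_style(new_style)

faster = adjust(CALM, "max_speed", +10)  # 16 -> 26, still within [10, 50]
```

Clamping every adjusted vector mirrors the constraint, stated later in Section 3.3, that driving settings were always kept within safety bounds.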
3.3 Human Agent Interactions

Adjusting the values of the driving parameters would be easy if users could state explicitly, in the language of the driving control system, what they need to attain driving comfort. However, this is not the case. In preliminary trials, participants were asked to express themselves as they would to a friend who is driving or to a taxi driver. We noticed that participants were able to distinguish between the different driving styles and to discuss them in terms of their perceived safety and enjoyment. However, it turned out that participants were not able to satisfactorily express their preferences in terms of parameter values, resulting in many user inputs and general dissatisfaction, even though they fully understood the role of each parameter. Specifically, participants were unable to determine which parameter they should change, and to what extent, to bring about the desired change. Moreover, we noticed that participants express different expectations based on road conditions and context (e.g., in a traffic jam vs. on an open road). While it is reasonable to assume that users can distinguish between comfortable and uncomfortable (e.g., feeling safe or in danger), it is unreasonable to assume that non-expert users would be able to quickly configure the above parameters manually. Following these preliminary trials, we identified the following 8 terms which people often use to express their preferred driving style: 1) More Speed; 2) Less Speed; 3) More Gap; 4) Less Gap; 5) More Sport; 6) More Comfort; 7) More to Right; and 8) More to Left. It is therefore our goal in this paper to develop an automated agent that is capable of intelligently translating these natural expressions into the desired set of values to assign to the technical parameters (Section 3.2). Fig. 2 shows the interface we implemented on a Samsung tablet for getting inputs from real users during the experiments. Users could express their desired changes in driving to the experimenter and the control agent through this interface by resizing the circle, relocating the circle (4 directions) or by moving the sport slider. The users did not have to assign a value to the change requested, just the desired direction of change. Subjects sat on a car seat and observed 3 wide screens showing what a passenger would see from the vehicle cabin. All rides occurred on the simulator screens, so the experience was safe and did not incur any risks to the participants. Driving settings were always kept under safety constraints.

3.4 The Adaptive Control Agent

The ACCA agent we developed was able to control the simulated vehicle through the route we defined. The agent could operate in two modes: fixed or adaptive. The agent was always initialized with a driving style vector that determines the ranges of values of the driving parameters. The simulated vehicle applies these parameters to create the simulated physical dynamics of the driving context. In the adaptive mode, the agent could change the driving settings proactively during the ride, based on its learned models of the users' preferences. Moreover, the agent could interact with the human subject riding in the simulated vehicle. If the agent received external inputs from a subject through the natural interface, the experimenter would interpret the input for the agent and instruct a set of changes to be made to the driving settings (which the agent also logs as training data). When the agent acted in adaptive mode, changes to the settings were also made when the agent proactively predicted human discomfort and decided, based on the learned models, on the changes to make to the driving parameters.

In the next sections, we describe the three experiments run with the adaptive driving agent. In all of them, we evaluate the performance of the agent by quantifying the number of human interventions needed to attain a comfortable ride. The data collected includes the vehicle dynamics, driving contexts and subjects' inputs, when provided.
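The two operating modes described above can be sketched as a simple control loop. This is our own hedged reading, not the paper's code: the class and method names are illustrative, and the learned discomfort model is stubbed out.

```python
# Illustrative sketch of the ACCA control loop: log ride data, apply user
# corrections when given, and in adaptive mode proactively adjust the style
# when the learned model predicts discomfort. Names are hypothetical.
class ACCA:
    def __init__(self, style, mode="fixed", discomfort_model=None):
        self.style = style          # driving style vector (Section 3.2)
        self.mode = mode            # "fixed" or "adaptive"
        self.model = discomfort_model
        self.log = []               # ride data, also logged as training data

    def step(self, sensed_state, user_input=None):
        """One control cycle over the sensed driving context."""
        self.log.append((sensed_state, dict(self.style), user_input))
        if user_input is not None:
            # A user's natural-interface request, already interpreted
            # into parameter changes, takes precedence.
            self.style.update(user_input)
        elif self.mode == "adaptive" and self.model is not None:
            if self.model.predicts_discomfort(sensed_state):
                self.style.update(self.model.suggest_correction(sensed_state))
        return self.style
```

In fixed mode the style only changes through user corrections, which matches the experiment-1 setup described next; adaptive mode adds the proactive adjustments evaluated in Section 6.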
Figure 2: Adaptive Driving Agent: Natural Interface

4 Default Driving Agent

The goal of the first experiment was to evaluate how many manual corrections human subjects would request to adjust a default driving style during a set of simulated rides. Our hypothesis was that humans are different and that driving contexts therefore affect the preferred style of driving. We recruited 30 subjects (17 males and 13 females), ranging in age from 21 to 50 (avg. 33, s.d. 7.02). Participants were told that they were going to take a ride in a simulated taxi in San Francisco for 4 laps driven by a simulated automated driver. All subjects started 2 rides with the default calm driving style and 2 more rides with the aggressive (active) style as in Table 1 (all runs were balanced). Users interacted with the simulated vehicle through the adaptive intelligent interface (see Fig. 2). When a subject entered any comment (using the tablet), the experimenter paused the simulation and asked the subject for their actual intention. Then, the experimenter adjusted the driving parameters until the desired change was achieved (e.g., when the comment was "The slowing down was harsh", the correction included changing the values of the "Brake Smooth" and the "Forward Approach Brake Smooth" parameters). Any request for driving corrections remained in effect until the end of the current two rounds. A new driving style was implemented in the next two rounds. Fig. 3 shows a total of 633 comments received from the subjects (avg. of 21.1 per participant, s.d. 9.48), with 26.7% of the comments being related to the events we predefined (these comments were provided up to 7 seconds following an event). For example, when a jaywalker crossed the street in front of the car, many participants commented "Less Speed" since the car was perceived to be approaching the walker too fast (despite having enough time to stop).

From the usability questionnaires we collected, we found that 66.66% of the subjects mentioned that the intelligent interface was very easy to mostly easy to use (average score of 3.2 out of 10, the lower the better). The subjects scored the match between the driving corrections and their intent high, with an average score of 8.1 out of 10 (the higher the better).

Figure 3: Total Manual Corrections Requested per Ride Laps

Figure 4: Human Data-Driven Driving Clusters

5 Human Data-Driven Agent

Our hypothesis was that the number of manual corrections can be reduced when users choose the driving style they prefer. Moreover, they chose from styles learned from the human data collected in experiment 1. This data reflects the driving settings that had converged towards the end of each round of experiment 1, when the user no longer made any further comments. We clustered this data to find the densest types of driving settings (i.e., clusters whose data points have the shortest distances from the centroid). That is, we scored a clustering method with the standard density measure: the average distance of the items inside a cluster from their centroid, averaged over the clusters. Let m be the number of clusters, x⃗ be a vector pointing to a data point and c⃗ be a vector pointing to some centroid. Then the distance to the centroid can be defined as d = ‖x⃗ − c⃗‖, and the density measure is the average of this value over the m clusters. The x⃗ vector comprised (1) the driving style parameters at the end of rounds 2 and 4, (2) the number of each type of correction made, and (3) averages of speed, acceleration and jerk. Fig. 4 shows the clustering algorithms tested (K-means and DBSCAN [14,7]) and their corresponding scores (K-means was tested for various values of k: values of 2 and 3 are included in the table; larger values were not found to lead to better clustering solutions). Note that the types of corrections provided do not characterize the styles, but the driving dynamics do. DBSCAN was too sensitive to small changes in its parameters; it did not find a balanced division of points into styles. The winning clustering algorithm was K-means (on inputs 1 & 3); this is consistent with results presented in [24]. The novelty is that the styles were learned from human-data rides that converged (see Table 2). Normal is situated between the two others. All scores are statistically significantly different (tested with ANOVA).

Table 2: Human Data-Driven Driving Style Profiles

In experiment 2, we recruited 30 new subjects (16 males and 14 females), ranging in age from 23 to 47. The procedure of experiment 2 differs from that of experiment 1 in that the users chose the initial (learned) driving styles (in experiment 1, the initial driving styles were set as default styles; all participants experienced the same default styles in counterbalanced order). In this experiment, participants were asked to choose a driving style they would prefer in a taxi-like ride.
They could choose a calm or sportier style, or one in between, reflecting the three styles learned by our clustering algorithm. A reduction of 42% in the number of manual corrections was indeed attained: only a total of 367 corrections were received from the 30 subjects (i.e., on average, subjects requested 12.2 corrections). We noted that, even in a simulated study such as ours, different users made different numbers of comments, meaning some did well with the style they chose while others requested additional adjustments (some subjects made only a few corrections, while others made as many as 21 comments).

From the usability questionnaires we collected, we found that the subjects gave an average score of 3.2 (out of 10) for the ease of use of the adaptive interface (the lower the better). The subjects scored the match between the driving corrections and their intent high, with an average score of 7.9 out of 10 (the higher the better).
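The density measure used above to score clusterings can be written out directly: for each cluster, average the distances ‖x⃗ − c⃗‖ of its points from the centroid, then average over the m clusters (lower is denser). A minimal pure-Python sketch, with illustrative names:

```python
import math

def density_score(clusters):
    """Average, over clusters, of the mean distance ||x - c|| of each point x
    in a cluster from that cluster's centroid c. Lower scores mean denser
    clusters. `clusters` is a list of point lists; points are equal-length
    numeric tuples (e.g., the driving-style / dynamics feature vectors)."""
    per_cluster = []
    for points in clusters:
        dim = len(points[0])
        centroid = [sum(p[d] for p in points) / len(points) for d in range(dim)]
        dists = [math.dist(p, centroid) for p in points]
        per_cluster.append(sum(dists) / len(dists))
    return sum(per_cluster) / len(per_cluster)
```

This score can be computed for the partitions produced by any algorithm (K-means with different k, DBSCAN), which is how the candidate clusterings were compared.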
6 Adaptive Driving Agent Solution

Our end goal was to show how an adaptive agent can improve human driving comfort by adjusting the automated driving settings in real time. We developed such an agent that first successfully learned a model of human discomfort from driving. Then, the agent learned what actual correction should be executed online once discomfort is predicted. Our hypothesis is that when users choose their preferred initial driving style (from those learned from human data) and further interact with an agent that adapts its driving behavior to their expected preferences, users will intervene the least, compared to the results of the previous experiments. The same experimental procedure was applied, with the adaptive agent version implemented this time.

[Offline] Discomfort Model - We first pre-processed the collected data such that each half-second time frame was associated with the following set of 24 features:
1. Front Distance To - the distance from the car in front
2. #Surrounding Cars - number of cars in a fixed radius
3. #Surrounding Cars (Adaptive) - number of cars in a speed-dependent radius
4. Speed - current speed of the car (km/h)
5. Acceleration - current acceleration of the car (km/h²)
6. Avg. Speed - average speed of the car over the last 8 seconds
7. Avg. Acceleration - calculated the same way as above
8. Lateral Acceleration (m/s²)
9. Longitudinal Acceleration (m/s²)
10. Lateral Jerk (m/s³)
11. Longitudinal Jerk (m/s³)
12. Is Max Speed Reached? (1/0)
13. Driving Behind Another Car? (1/0)
14.–24. The 11 predefined events
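Assembling these 24-feature, half-second frames into classifier inputs can be sketched as a sliding window of 11 frames (5.5 seconds), with each instance labeled satisfied (1) or unsatisfied (0). The window length and label semantics come from the paper; the function name and the choice of labeling a window by its last frame are our own assumptions.

```python
# Illustrative sketch: turn per-half-second feature frames (24 features each)
# into moving-window training instances of 11 frames (5.5 s) for a
# sequence classifier such as a 1D CNN. Names are hypothetical; labeling a
# window by the comfort state at its final frame is an assumption.
def make_windows(frames, labels, window=11):
    """frames: list of 24-feature vectors sampled every 0.5 s.
    labels: per-frame comfort label (1 = satisfied, 0 = unsatisfied).
    Returns (instance, label) pairs; each instance is `window` x 24."""
    instances = []
    for end in range(window, len(frames) + 1):
        instances.append((frames[end - window:end], labels[end - 1]))
    return instances
```

In the paper's pipeline, the resulting instances would then be rebalanced with SMOTE before training, since discomfort events are rare relative to comfortable frames.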
We chose 11 timeframes (a total of 5.5 seconds) in a moving-window fashion to construct the training instances, since users' inputs are not instantaneous. Each instance is classified as satisfied (1) or unsatisfied (0) to represent the user's comfort with driving under that vector assignment (SMOTE [5] was used to artificially balance the data set). We built classifiers to predict when these feature settings will result in driving discomfort for the user. Driving discomfort is understood as an event in which the participant provides input to make an adjustment to the current driving settings. As long as the participant does not provide any input to adjust these settings, we assume that the participant is comfortable with the driving style experienced. We measured the quality of these classifiers with the standard Area Under the Curve (AUC) score [12]. Using data from experiment 1 as a training set, we evaluated different prediction models [23]. Random Forest was too simple to capture the dependencies between the driving settings and discomfort. Linear Regression also did badly, since our data is not linearly separable (due to lack of space the graph is not included). The Multi-layer Perceptron could not predict well either. So, we evaluated networks that can capture the time-sequence relationships in the data. The CNN turned out to be a better predictor than the Long Short-Term Memory network. Table 3 summarizes the AUC scores attained by all predictors tested. Augmenting the experiment-1 training set with the data collected in experiment 2 (a total of 117,100 data points) and changing the number of filters and their size improved the accuracy to 95.96% in training and 95.44% in testing, with an AUC reaching 0.85. This retrained CNN could anticipate a user's comment by 2.5 seconds.

Table 3: Predicting Driving Discomfort

[Online] Adaptation - Every half second, the simulator sent data (time, position, driving settings, acceleration, jerk and predefined events) to the agent. Every 5.5 seconds of simulation, the logs are processed into the 24 features that activate the CNN to predict the current user's driving discomfort (see Fig. 5). If the agent predicts a state of discomfort, it searches its training data set for corrections already executed in situations like the current one (see Fig. 6). The 9 samples closest to the current state are found and the correction with the highest probability (max vote) is chosen. For example, let ci (i = 1, ..., 9) denote the 9 data points found closest to the current state; then, without loss of generality, assume c1-2 = More Speed and c3-9 = More Sport. Then, with probability 7/9, More Sport will be chosen, and More Speed with probability 2/9. Still, the user can enter their input using the interface at any time (e.g., to reject a correction performed by the agent).

Figure 5: Adaptive Driving Agent Behavior: Discomfort Prediction

Figure 6: Adaptive Driving Agent Behavior: Discomfort Mitigation

To evaluate the adaptive agent's performance, we recruited 34 subjects (19 males and 15 females), with ages ranging from 25 to 52. Fig. 7 shows the results comparing the 3 conditions tested; error bars indicate standard error. We can clearly see that (1) the use of human data-driven driving styles brings about a significant decrease in the number of modifications made by the users, p < 0.05 (i.e., 12.2 corrections on average vs. 20.9 when default driving styles were initialized with no user choice), and (2) the adaptive agent reduced this number compared to both other conditions in a statistically significant manner, p < 0.05 (i.e., only 7.4 corrections on average vs. 12.2 were required for the participants to achieve acceptable driving comfort). The statistical analysis was performed using an ANOVA test followed by post-hoc t-test comparisons with Bonferroni correction. On average, our adaptive agent performed 19.7 autonomous changes to the driving settings during the ride (s.d. 6.3). On average, users accepted 16.2 of these ("accepted" means that the agent predicted discomfort and consequently adjusted the driving settings even before the participant actually asked for these changes). On average, users corrected the agent only 3.75 times (meaning that, even when the agent predicted discomfort and adjusted the settings, the user did not agree either with the prediction or with the settings adjustment made automatically). Finally, on average, users gave 3.65 additional corrections (for which the agent did not accurately predict discomfort). The number of requests depends on the initial driving style (see Fig. 8) and on the time passed since the beginning of the drive.

From the usability questionnaires we collected, we found that the subjects gave an average score of 3.6 (out of 10) for the ease of use of the adaptive interface (the lower the better). The subjects scored the match between the driving corrections and their intent high, with an average score of 8.6 out of 10 (the higher the better).

Figure 7: Average number of manual comments in all tests

Figure 8: Average number of manual comments per chosen driving style
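The nearest-neighbor correction lookup used during online adaptation can be sketched as follows. This takes the deterministic "max vote" reading of the paper's description (the 7/9 vs. 2/9 example can also be read as sampling a correction with probability proportional to its vote share); the function and variable names are our own.

```python
import math
from collections import Counter

def choose_correction(current_state, history, k=9):
    """Max-vote selection over the k logged situations nearest to the
    current state: find the k samples closest to `current_state` and return
    the most frequent correction among them. `history` holds
    (state_vector, correction) pairs; k=9 follows the paper's setting."""
    nearest = sorted(history, key=lambda sc: math.dist(sc[0], current_state))[:k]
    votes = Counter(correction for _, correction in nearest)
    return votes.most_common(1)[0][0]
```

With a history whose 9 nearest samples carry 7 "More Sport" and 2 "More Speed" corrections, the max-vote rule returns "More Sport", matching the worked example in the text.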
The machine learns from users’ inputs, predicts users’ needs, and proactively adjusts its own settings to increase user satisfaction and acceptance. The complete AI agent system, together with the novel adaptive interface, was tested and evaluated successfully through three user studies covering a total of 94 human participants in a simulated setup of automated driving scenes.

Figure 6: Adaptive Driving Agent Behavior: Discomfort Mitigation

ACKNOWLEDGMENTS

We thank Shlomi Zigart for the interface design.

REFERENCES

[1] N. E. Assmann et al. “Should my car drive as I do? What kind of driving style do drivers prefer for the design of automated driving functions?” 17. Braunschweiger Symp. AAET, ITS automotive nord, pages 185–204, 2016.
[2] M. Beggiato et al. “Driving comfort, enjoyment and acceptance of automated driving — effects of drivers’ age and driving style familiarity”. Ergonomics, 61(8), 2018.
[3] I. Bae et al. “Self-driving like a human driver instead of a Robocar”. In 3rd International Conf. on Electric Vehicle, Smart Grid and Information Technology, 2018.
[4] Basu et al. “Do You Want Your Autonomous Car to Drive Like You?”. HRI, Vienna, Austria, 2017.
[5] N. V. Chawla et al. “SMOTE: synthetic minority over-sampling technique”. Journal of Artificial Intelligence Research, 16:321–357, 2002.
[6] A. Degani et al. “On sensitivity and holding in automotive systems: The case of the climate control”. Proceedings of the Human Factors & Ergonomics Society Annual Meeting, volume 60:1906–1910, LA, CA, 2016.
[7] M. Ester et al. “A density-based algorithm for discovering clusters in large spatial databases with noise”. KDD, volume 96, pages 226–231, 1996.
[8] L. Fridman et al. “What can be predicted from six seconds of driver glances?”. Procs. of the CHI Conference on Human Factors in Computing Systems, pages 2805–2813. ACM, 2017.
[9] X. Geng et al. “A scenario-adaptive driving behavior prediction approach to urban autonomous driving”. Applied Sciences, 7(4):426, 2017.
[10] J. Guna et al. “Estimation of the driving style based on the users’ activity and environment influence”. Sensors, 1(2404), 2017.
[11] S. J. Hoch and G. F. Loewenstein. “Time-inconsistent preferences and consumer self-control”. Journal of Consumer Research, 17(4), 1991.
[12] J. Huang and C. X. Ling. “Using AUC and accuracy in evaluating learning algorithms”. IEEE Transactions on Knowledge and Data Engineering, 17(3):299–310, 2005.
[13] J. Hwang et al. “Expressive driver-vehicle interface design”. Procs. of Designing Pleasurable Products and Interfaces, page 19. ACM, 2011.
[14] A. K. Jain et al. Algorithms for Clustering Data, volume 6. Prentice Hall, Englewood Cliffs, 1988.
[15] L. Li et al. “Cognitive cars: A new frontier for ADAS research”. IEEE Transactions on Intelligent Transportation Systems, 13(1):395–407, 2012.
[16] G. Mazulla et al. “How drivers’ characteristics can affect driving style”. Transportation Research Procedia, 27:945–952, 2017.
[17] I. Politis et al. “To beep or not to beep?: Comparing abstract versus language-based multimodal driver displays”. Procs. of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 2015.
[18] A. Rosenfeld et al. “Online prediction of exponential decay time series with human-agent application”. Procs. of the 22nd European Conference on Artificial Intelligence, pages 595–603, 2016.
[19] A. Rosenfeld and S. Kraus. “Predicting human decision-making: From prediction to action”. Synthesis Lectures on Artificial Intelligence and Machine Learning, 12(1):1–150, 2018.
[20] A. Rosenfeld et al. “Learning drivers’ behavior to improve adaptive cruise control”. Jl. of Intelligent Transportation Systems, 19(1):18–31, 2015.
[21] K. E. Schaefer and E. R. Straub. “Will passengers trust driverless vehicles? Removing the steering wheel and pedals”. In IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support, pages 159–165. IEEE, 2016.
[22] C. Senouth et al. “Personalized lane keeping assist strategy: Adaptation to driving style”. IET Control Theory and Applications, 2018.
[23] S. Shalev-Shwartz and S. Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.
[24] O. Taubman-Ben-Ari and V. Skvirsky. “The multidimensional driving style inventory a decade later: Review of the literature and re-evaluation of the scale”. Accident Analysis & Prevention, 93:179–188, 2016.
[25] L. Xu et al. “Establishing style-oriented driver models by imitating human driving behaviors”. IEEE Transactions on Intelligent Transportation Systems, 16(5):2522–2530, 2015.