User Experience of On-demand Autonomous Vehicles Part 2

Part 2: What would make you trust a digital driver?

Pontus Larsson, PhD

Human Factors Specialist, Ictech, Göteborg.

Introduction

On-demand Autonomous Vehicle Services (OAVS) – or Robotaxis if you like – were until recently viewed by the public as a utopian vision. The vision is nonetheless slowly becoming reality with the introduction of Alphabet/Waymo’s service “Waymo One”, which as of December 2018 operates commercially in a limited area in Phoenix, Arizona [1]. Waymo has been taking many small steps towards making this vision a reality, not only by developing the automated drive technology itself, but also by building up the other parts of the operation, such as repair services [2] and rider insurance [3]. Although Waymo is seemingly ahead of its competitors in the development of Robotaxi services, consortia consisting of big players such as GM/Cruise, Volkswagen/Aurora and Ford/Argo are likely eager to claim a share of this new type of mobility business.

In our previous article on the evolving OAVS business [4], we analyzed how the relationship between the user and the vehicle might change as a result of the introduction of this new type of mobility business. Based on this analysis, we postulated that in order for people to consider the new mobility services a better option than the individually owned car, the services need to provide a good user experience (UX).

In this second article in our series, we begin to examine the user experience-related aspects of OAVS design in detail and provide directions on what to focus on in terms of UX in order to become a successful player in the future mobility business. In particular, this article focuses on one of the most discussed topics in the human-automation relationship domain, namely trust. Users’ trust may have serious consequences for the use of an automated system. For lower levels of automation, the main problem occurs when users trust the automation too much, which may lead to unsafe, even lethal, situations. For the high-level automation vehicles we are dealing with in this article series, the main problem is rather a lack of trust in automation. A lack of trust might mean that people never even consider using the system [5]. Developing trust among current and potential users is thus of great importance for the OAVS business. Car blog The Drive goes as far as saying that “Developing […] trust is as important to an AV developer’s future as creating strong prediction models or reliable path-planning algorithms” [6].

In the current article, we mainly focus on the user’s trust in relation to the vehicle itself and its driving automation. Discussions on trust and automated vehicles may, however, also involve users’ trust in the service and the various interfaces to the Mobility Service Provider (MSP; see our previous article). Would you, for example, trust the MSP to protect the data they collect about your travel patterns? Or trust that the MSP has taken appropriate measures to prevent people with malicious intent from hacking into the control systems of the vehicle fleet? Another trust-related issue is the trust required for people to be willing to share their rides with others. For example, would you feel comfortable riding with passengers you don’t know, thereby potentially also letting them know where you live and work? Or would you trust the service to drop off your kids safely at school? These and similar issues will be discussed in forthcoming articles.

Trust in driving automation and levels of automation

Before diving into the various aspects of user trust in relation to driving automation, it may be useful to recap the Society of Automotive Engineers’ (SAE) Levels of Driving Automation. This definition is now considered a global standard and is ubiquitously used in discussions on automated road vehicles of all kinds ([7], see Figure 1 below).

In our article series, we mainly discuss vehicles that can be classified as Level 4 or 5, in which the user can never be considered a driver of the vehicle. This means that the automation will never require the user to take over driving. The only difference between Level 4 and 5 is that Level 5 vehicles can drive under all conditions, while Level 4 vehicles are restricted to a certain area or certain conditions – the “Operational Design Domain” (ODD). Level 4 vehicles cannot drive automatically outside their ODD and may or may not have traditional driving controls, i.e. steering wheel, pedals, etc.

Levels 0-2 are to be considered driver support features that help the driver maintain speed, position in the lane, etc. The driver is still in charge of and responsible for driving. At the in-between Level 3, the driver is not responsible for any aspect of driving while the automation is active, but the automation may still require and prompt the driver to take over the responsibility of driving when it fails to operate. Some consider this level unviable from a human factors perspective and even advise against using it [9]. A major reason is that it is risky to rely on the driver’s ability to take over control of the vehicle in a timely and proper manner after having been a passenger for an extended period of time [9]. In fact, numerous studies across different domains have found that the more capable the automation gets, the more difficult it is for a driver or operator to resume control properly after an extended period of error-free automation [9].

It should be noted that a Level 4 vehicle could offer both manual and automated drive options, and would thus also have transitions between automated and manual drive. The difference between L3 and L4 in this sense, however, is that L4 vehicles never rely on the driver’s ability to take over (see Figure 1: “What does the human in the driver’s seat have to do?”). Instead, they need backup mechanisms allowing the vehicle either to carry on driving or to drive itself to a spot where it is safe to park if the user does not take over driving when asked to do so.
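The practical difference between the L3 and L4 fallback behaviors can be illustrated with a short sketch. This is a minimal illustration of the level definitions discussed above – not any manufacturer’s actual control logic – and all names in it are hypothetical:

```python
from enum import Enum, auto

class Action(Enum):
    HAND_OVER_TO_DRIVER = auto()    # rely on the human to resume control
    CONTINUE_DRIVING = auto()       # the automation keeps driving itself
    MINIMAL_RISK_MANEUVER = auto()  # drive to a spot where it is safe to park

def on_automation_limit(level: int, driver_responded: bool,
                        can_keep_driving: bool) -> Action:
    """What happens when the automation reaches the limit of what it can handle."""
    if level == 3:
        # L3: the human driver *is* the fallback, even after having been
        # a passive passenger for a long time -- the risk flagged in [9].
        return Action.HAND_OVER_TO_DRIVER
    # L4: the driver may be offered the controls, but the vehicle never
    # depends on the driver responding.
    if driver_responded:
        return Action.HAND_OVER_TO_DRIVER
    return (Action.CONTINUE_DRIVING if can_keep_driving
            else Action.MINIMAL_RISK_MANEUVER)
```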

Figure 1: SAE levels of driving automation. (Source: SAE) 

Trust within a human-automation interaction context can be defined as “the attitude that an agent will help achieve an individual’s goals in a situation characterized by uncertainty and vulnerability” [10]. In the case of vehicle automation, the agent would be the vehicle itself or the part of the vehicle that the user identifies as responsible for the automation. The individual’s goals depend on the type of automation being considered. For low-level automation vehicles (SAE L1-2 [7]), automation is limited to basic vehicle control tasks, so the goal is restricted to keeping the vehicle in the lane and/or at a constant distance to a target vehicle, or similar. For higher levels of automation (SAE L4-5 [7]), automation by definition takes care of all possible situations within its operational design domain in a safe way and does not require human driver support. Thus, in L4-5 the individual’s main goal is likely simply to be transported from start to the desired destination.

For low levels of automation (L1-2), the individual, or the user, is still responsible for monitoring the environment and the automation, and for taking over driving when needed [7]. It is crucial that the user-automation interface clearly communicates this relationship and the shared responsibility between user and automation. For these low levels of automation, severe risks occur when the user “overtrusts” the automation, i.e. when the user/driver thinks that the automation has more or better capabilities than it actually has [10]. A system that performs the driving task really well (keeps the car centered, handles corners smoothly, etc.) can easily be mistaken for a system with full autonomous driving capabilities, although it in fact handles only a few of the many situations that may occur in everyday traffic [11]. Tragically, we have over the past few years witnessed accidents involving low-level automated vehicles that may have been caused by such overtrust [12-14]. For lower levels of automation, overtrust is thus a potentially severe safety issue, and great care must be taken not to “oversell” the automation function’s capabilities [15, 16] – that is, to give the user the impression that the automation is better than it actually is.

For higher levels of automation, L4-5, overtrust would generally not be an issue in the direct user-automation interaction, since these vehicles are designed to operate without any help from the user. One could speculate, however, that overtrust in L4-5 vehicles could result in hazardous situations for non-users in the surrounding traffic. Habibovic et al. [16] suggest, for example, that pedestrians may overestimate the capabilities of automated vehicles (AVs) – by assuming that they can always stop, for instance – and behave in a risky manner as a consequence. Similar situations involving low-level automation have already occurred in real life, such as when people have attempted to test pedestrian crash avoidance systems in scenarios for which they were not intended (e.g. [17]).

The most critical trust-related issue when it comes to Level 4 or 5 vehicles and their primary users (the riders) is likely the lack of trust – distrust. To achieve widespread adoption of automation technologies, it is absolutely key to ensure that the user trusts the automation, since a lack of trust likely leads to rejection and disuse [10].

Surveys have found, for example, that many Americans would be afraid of riding in an automated vehicle [18]. Following the fatal Uber crash in March 2018, public trust in autonomous cars dropped even further in the US [19]. An absurd manifestation of this public distrust of automation has been witnessed in Arizona, where people have attacked Waymo’s cars by slashing their tires and throwing rocks at them, as well as by forcing them to a stop and threatening the Waymo safety drivers inside [20].

However, it is obvious that these types of measurements to a great extent reflect people’s current opinions based on media coverage, word-of-mouth, commercials and experience with current low-level automation cars and driver assistance systems. The majority of high-level automation vehicles are so far only prototypes or pilot vehicles, and very few members of the general public have actually had first-hand experience of this type of automation. Cars with driver assistance and low-level automation currently available are sometimes marketed as having higher capabilities than they actually have. It is likely that the accidents involving low-level automation vehicles to a high extent affect people’s opinions about highly automated vehicles. Automotive writer Alex Roy suggests, for example, that “…as long as Tesla keeps selling “Full Self Driving” that isn’t – and people keep crashing – no true Self-Driving vendor will ever be free of trust issues” [21]. Indeed, it is unlikely that the general public has in-depth knowledge of, for example, the different automation levels [7] or how Tesla’s Autopilot automation differs from that of a Waymo vehicle. So it is not strange that the general public would assume that problems with L2 vehicles will also be present in an L4 vehicle.

Opinions regarding fully automated vehicles will likely change (either for the better or the worse) as more and more services actually come to market. It is nonetheless possible that distrust will become a major issue, and perhaps even a showstopper, once self-driving shuttles are introduced to a wider audience.

How can trust be improved?

The discussion above implies that it is central to understand how trust forms and develops and how trust-promoting user experiences can be created. Ekman et al. [22] present a framework for understanding trust in the context of automated cars which assumes that the user’s trust develops during three phases. They suggest that human-automation trust first develops prior to the first encounter with the automated vehicle. In this phase, called the Pre-use phase, the user gets an initial, second-hand sense of trust in the automated vehicle, formed by the implicit information available around him/her – through commercials, news, word-of-mouth, etc. Explicit information is then provided to the user by the dealer or similar, before the first actual use. Ekman et al. suggest that it is especially important in this phase to calibrate the user’s expectations regarding what the automation really is able to do.

The following phase, the Learning phase, covers the user’s initial hands-on experiences with the autonomous vehicle, lasting until the user has fully learned how to use it. The subsequent and final Performance phase covers the user’s continued use of the vehicle, in which the initially gained trust will likely remain constant if there are no drastic changes in vehicle performance or context from the Learning phase. It is, however, also possible that the user experiences the vehicle traveling in different contexts, or even encounters incidents, and these will affect the initially built-up trust for better or worse.

In all of these phases, there are different means of adjusting the user’s trust to an appropriate level. Thus, when designing for automation trust, developers of OAVS need to consider the entire trust life cycle, from prior to usage of the service, to the first actual usage, to extended use of the service. In the next sections we review some of the research findings that may guide UX design in the different trust development phases, primarily in relation to the vehicle itself and its interface/interaction.

Pre-use phase. It is likely that the first actors on an OAVS market will need to invest in building trust among both their potential customers and the general public, at a very early stage and until OAVS have become ubiquitous and commonplace. There are already examples of such proactive initiatives, as in campaigns by Waymo [23,24], where they have partnered with organizations such as the National Safety Council, Mothers Against Drunk Driving and others to bring credibility to the selling points of vehicle automation. Intel is another example of a tech company that invests both in trust research [25] and in plain advertisements aimed at building public trust in automation and in their brand [26,27].

Actors already established in the passenger car business may have an advantage over the tech companies and newcomers (Waymo/Google, Intel, Uber, etc.) in getting potential OAVS customers to trust their self-driving solutions. Car brands that have built up a legacy and reputation of working with safety over a long time can take advantage of that public trust when introducing autonomous vehicles. A survey conducted by car community DriveTribe showed that of the 2,520 respondents (DriveTribe users), over 50% would feel safest in a Volvo autonomous car – not surprising given Volvo’s reputation as a leader in car safety [28]. Runner-up was Tesla (20.3%), which may be due to their extensive work in deploying Level 2 automation in their production vehicles, and in third place came one of the traditional manufacturers with a strong safety heritage – Audi (13.5%). Waymo/Alphabet (Google) was the only “pure” tech company represented in the survey, and the fact that it received less than 0.5% of the votes – despite Waymo having performed more real-world L4 testing to date than any other company [29] – may be a sign that tech companies have a longer way to go in gaining public trust than more established actors. To be fair, though, GM, which is both an established car OEM and considered one of the major actors within autonomous car development through its Cruise subsidiary [30], only received 0.2% of the votes. On the other hand, a global study performed in 2015 by Boston Consulting Group, surveying more than 5,500 consumers in ten countries, gave clear support to the notion that the general public finds the traditional OEMs more trustworthy as autonomous vehicle manufacturers than tech companies, new car companies, automotive suppliers and the like [31].

The effect of brand reputation on trust has been investigated more thoroughly by Forster et al. [32], who performed a study in which 519 participants rated their trust in a simulated Level 3 driving automation system labeled with different car OEM brands. The results showed that when the system appeared to be made by a brand of higher reputation (BMW or Tesla), participants cared less about negative information about the automation’s performance than when it appeared to be made by a mid-reputation brand (Opel, Skoda, Kia). Thus, there might be a risk of drivers overtrusting automation functions in a high-reputation-brand car (one may see the accidents involving Tesla’s Autopilot as evidence of this already happening [11-14]). For Level 4-5 cars, Forster et al.’s findings may mean that mid-reputation brands have to work much harder to promote their self-driving technologies than high-reputation brands. It should also be noted that different OEMs obviously have different reputations in different markets [32], so it might be easier for e.g. Toyota or Nissan to gain trust in Japan than for BMW or Mercedes, while the opposite might be true for the German market.

As can be understood, building public trust in the pre-use phase is not easy, especially for companies without a high reputation within the automotive field. Newcomers and tech companies could, however, piggyback on established OEMs to earn trust in their operations. One such example is Chinese search engine giant Baidu teaming up with Volvo Car Group to build the “safest automated car on the planet” (according to Volvo CEO Håkan Samuelsson, who in this announcement also stressed the importance of creating credibility and trust in the automation technology) [33]. The previously mentioned ad campaigns that tech companies run in an attempt to increase trust can only be one component of a long-term trust-building strategy. Another proposal is that OEMs should invest in dedicated academies to help consumers become comfortable with self-driving vehicles in general [34].

Companies also have to be transparent to the public about their operations, cooperate with authorities, and address all problems rapidly and accurately. Sam Abuelsamid, a senior analyst at market research firm Navigant, has suggested in relation to the Uber accident that all systems tested on public roads should be reviewed and evaluated by an independent third party [35]. It is possible that allowing this type of review could increase credibility and, eventually, trust.

Ultimately, it also comes down to providers of automated transport solutions convincing users that their technology is safe enough to be trusted with their lives. Exactly how safe a vehicle needs to be in order to be accepted by society is, however, as yet unknown [36]. For example, does an autonomous vehicle need to be able to avoid all possible accidents, or only the fatal ones, to be trusted? These concerns need to be addressed for automation technology to be widely accepted and successfully commercialized [36].

Learning and performance phases. Trust during the learning and performance phases can be increased through the design of the vehicle itself and its user interface. The apps and external services connected to the use of the OAVS should also be included in this scope, since trust develops even when the user is not in direct contact with the vehicle [22,25].

There is probably a multitude of design features and characteristics that can be utilized to promote trust in the learning and performance phases. While the visual appearance of the car, and its exterior design in particular, may not be widely discussed in this context, we believe this is one aspect that may actually have quite a strong influence on trust. For normal, manually driven cars, ample effort is usually dedicated to the exterior and interior design; for example, ensuring that the visual appearance reflects the brand image [37]. In the premium segment specifically, it is not uncommon for the visual design to express driving-related attributes such as powerfulness, aggressiveness, performance, driving pleasure and the like [38-40]. Such attributes could, at least intuitively, be counterproductive in terms of promoting trust in an OAVS context. Instead, we believe that the visual design should emphasize safety, friendliness, intelligence, usability and similar attributes to promote trust in autonomous vehicles. Incorporating features such as smooth lines, round corners, friendly or neutral colors, and in general employing a rather boring design (from a traditional car design perspective), is probably beneficial for increasing trust. Similar ideas have been expressed by Waymo in relation to the design of their Waymo One service (see Figure 2); they have worked around the keywords “courteous and cautious” throughout the design and engineering process: “And it does look like a pet, or perhaps a toddler’s toy, retrofitted as it was with soft, round corners. “You won’t see any harsh angles or aggressive lines,” Ryan Powell, head of the UX research and design team at Waymo, points out. “We really want them to feel approachable.”” [41]. Similar ideas are expressed by Volkswagen in relation to the design of their Sedric concept: “The language of design used to create Sedric is friendly and empathetic, and immediately generates spontaneous trust” [42].

Figure 2: Waymo Chrysler Pacifica. (Source: Waymo)

Figure 3: Volkswagen Sedric concept. (Source: Volkswagen)

Figure 4: Smart Vision EQ fortwo concept. (Source: Daimler)

Figure 5: Google/Waymo’s previous car concept, the Firefly. (Source: Waymo)

The Sedric concept, along with some other manufacturers’ vehicles, is shown in Figures 3-5. It is evident that all these vehicles share visual design features such as round corners, smooth lines, and neutral colors, intended to induce trust.

Regarding the user’s direct interaction with the vehicle, assuring good usability throughout the whole user journey can be expected to increase trust. Poor usability at any stage, from ordering the service, to starting the ride, to leaving the ride, will likely “spill over” onto the feeling of trust towards the automation [25]. Thus, all interactions with the user – from the app used when ordering the ride, to possible welcoming or guiding messages before the trip starts (see Figure 6), to the interaction during the trip, to getting out of the vehicle and on to the next step in the journey – should be carefully designed to be smooth and seamless in order to build the user’s trust in the service.

Figure 6: Waymo’s welcoming message, presented on the main user interaction screens mounted on the backs of the front-seat headrests. (Source: Waymo)

Inside the vehicle, it is naturally essential that the user has easy access to buttons or other controls for the basic interactions with the vehicle, such as starting the ride, as well as means to change the route or even pull over and stop the trip [25, 46]. Another important usability aspect is that users are able to seek support whenever they feel uneasy or have questions about how the car or service works. Examples include the controls for contacting a remote operator or support that have been introduced both in low-speed shuttles, such as those produced by Navya, and in Waymo’s vehicles [49, 41]. Waymo reports that at the beginning of their operations they received many calls from users asking just about anything connected to their trip, such as “Does the car know I’m in a construction zone?” [41]. It could be that users in the learning phase need to feel that a human is part of the driving operation in order to build that initial trust.

During the ride, it is likely beneficial to continuously inform the users, at least the naive ones, about what the car is sensing, what it is “thinking”, and what its next maneuvers and intentions are. It is reasonable to believe that an autonomous vehicle appears more capable of navigating through traffic when it seems able to think and sense its surroundings than when it merely gives the impression of “mindless machinery” [43]. Providing the user with feedback on what the automation perceives and what its intentions are should consequently have a positive effect on trust [10]. Today’s Level 2 vehicles already have a variety of such displays. The most detailed one is perhaps Tesla’s Autopilot display, where a third-person perspective of the car and the surrounding vehicles is shown in the instrument cluster [44].

There are, however, also examples of this type of representation being shown to passengers in Level 4 vehicles, such as the driverless services previously tested by Uber [45]; it is also used by Waymo [46]. In the Waymo One vehicles (Chrysler Pacifica minivans), screens integrated into the backs of the front-row headrests show third-person views of the vehicle, the automation’s perception of the surroundings, and the automation’s intended path (see Figure 7). Other cars in the surrounding traffic are shown as simple rectangular blocks, while pedestrians and cyclists are rendered more realistically based on the laser sensor data. The reason for this design is, according to Waymo, that “riders elevate other people above everything else—they’re the most important thing for the car to pay attention to. Rendering them realistically matches that mentality.” [46] Other items relevant to the vehicle’s actions, such as traffic cones diverting traffic at a road work site, are also shown, as is how the vehicle perceives traffic lights. Specific actions of the vehicle are explained as well (e.g. “slowed down for object”). Waymo claims they have put a lot of effort into making this view as comprehensible as possible to the user in order to reinforce trust. Their design decisions also seem to correspond well to interface design guidelines that research suggests can improve trust [10], such as:

  • Show the past performance of the automation. 
  • Show the process and algorithms of the automation by revealing intermediate results in a way that is comprehensible to the operators. 
  • Simplify the algorithms and operation of the automation to make it more understandable. 
  • Show the purpose of the automation, design basis, and range of applications in a way that relates to the users’ goals. 

… ([10], p. 74)

Figure 7: Example screenshots from Waymo’s third-person view showing other vehicles (blue rectangles) and pedestrians (white point clouds). The green line shows the intended/projected path of the car. (Source: Waymo)
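A simplified way to think about this kind of display logic is as a mapping from perceived object classes to rider-facing representations, combined with plain-language explanations of the vehicle’s actions. The sketch below is our own illustration of the idea based on Waymo’s published descriptions [46] – the class names and rendering choices are assumptions, not Waymo’s actual code:

```python
from dataclasses import dataclass, field

@dataclass
class PerceivedObject:
    kind: str         # e.g. "vehicle", "pedestrian", "cyclist", "cone"
    position: tuple   # (x, y) in the vehicle's frame of reference
    lidar_points: list = field(default_factory=list)  # raw laser returns, if any

def render_for_rider(obj: PerceivedObject) -> dict:
    """Map a perceived object to the representation shown on the rider's screen."""
    if obj.kind == "vehicle":
        # Other cars are abstracted into simple rectangular blocks.
        return {"shape": "rectangle", "color": "blue", "at": obj.position}
    if obj.kind in ("pedestrian", "cyclist"):
        # People are rendered more realistically, as point clouds from the
        # laser data, matching riders' focus on other humans.
        return {"shape": "point_cloud", "points": obj.lidar_points}
    if obj.kind == "cone":
        return {"shape": "cone_icon", "at": obj.position}
    return {"shape": "generic_marker", "at": obj.position}

def explain_action(action: str) -> str:
    """Plain-language captions for the vehicle's specific actions."""
    captions = {"slow_for_object": "Slowed down for object",
                "wait_for_light": "Waiting for traffic light"}
    return captions.get(action, "")
```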

Another way of communicating with passengers in a trust-enhancing manner could be to incorporate human-like features in the vehicle and its interface, something referred to as anthropomorphism [10, 43]. Such features may also be included in the exterior design of the vehicle (cf. e.g. Jaguar’s virtual eyes [48], see Figure 8, and the concept vehicles in Figures 3-5).

In a simulator study, Waytz et al. [43] imposed anthropomorphism on a self-driving vehicle by giving it a voice, a name and a gender, and found that overall trust was higher compared to a condition without the human-like features. They also exposed the participants to an accident that appeared to have been caused by another car. Interestingly, the participants blamed their own vehicle significantly less for the accident in the anthropomorphic condition, suggesting a relationship between anthropomorphism and perceptions of responsibility. In a similar vein, it has also been found that conversational interfaces result in higher trust than a traditional Graphical User Interface (GUI) [47].

Figure 8: Anthropomorphism – adding human-like features, such as the virtual eyes designed by Jaguar Land Rover, could increase trust [48]. (Source: Jaguar Land Rover)

Many of the features and aspects described above are likely most effective in the learning phase, when users are still building their initial trust [22]. Once users feel confident riding in the vehicle – i.e. in the performance phase [22] – the efforts made to increase trust in the learning phase may be perceived as superfluous and even annoying.

However, as suggested by Ekman et al., should the vehicle drive into new contexts that the user has not experienced before, or even encounter incidents, the user’s initially built trust may be reduced [22]. Therefore, it might become necessary to include means of measuring the rider’s current level of trust and continuously adapting the information presented (or similar). One way to measure the current level of trust could be to detect the rider’s eye glance patterns. In a study by Walker et al. [50], it was found that users who reported higher trust in an automated car also monitored the road less. It seems highly likely that users would also monitor trust-related information – such as the third-person view (see Figure 7) – less when their level of trust is high.
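Building on Walker et al.’s finding [50], a crude real-time trust proxy could be derived from how much of the rider’s recent gaze falls on the road or the automation’s status displays. The sketch below illustrates the idea under stated assumptions – the window size, region names and neutral default are made up for the example, not validated parameters:

```python
from collections import deque

class GazeTrustEstimator:
    """Estimate a 0..1 trust proxy from the share of recent gaze samples
    spent monitoring the road or the automation's status display."""

    MONITORING_REGIONS = {"road", "status_display"}

    def __init__(self, window_size: int = 600):   # e.g. 60 s of gaze data at 10 Hz
        self.samples = deque(maxlen=window_size)  # sliding window of booleans

    def add_gaze_sample(self, region: str) -> None:
        self.samples.append(region in self.MONITORING_REGIONS)

    def trust_proxy(self) -> float:
        """Per [50], *less* monitoring suggests *more* trust."""
        if not self.samples:
            return 0.5  # no evidence yet: assume a neutral trust level
        monitoring_fraction = sum(self.samples) / len(self.samples)
        return 1.0 - monitoring_fraction
```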

The idea of adapting towards an optimal level of trust could be described by the following example: when the car detects that the rider is starting to look at things other than the road or the vehicle information displays, trust-enhancing information in screens, sound feedback, etc. could be toned down and other types of information could be highlighted – e.g. entertainment or information about the destination. If the car encounters an incident or another situation that might reduce an experienced rider’s trust, and/or the eye glance patterns indicate that the user’s trust is dropping, the information could again be adapted to build trust and reassure the user that the car knows what it is doing. This type of real-time adaptation of the interface would perhaps not only be beneficial for keeping users’ trust at a good level; it could potentially also be used to optimize other aspects of the experience in the different phases of the user journey.
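A sketch of what such an adaptation loop might look like is given below, building on the hypothetical GazeTrustEstimator above. The thresholds and interface states are illustrative assumptions only, not recommendations from the cited research:

```python
def adapt_interface(trust: float, incident_detected: bool) -> dict:
    """Choose what the cabin interface emphasizes, given a 0..1 trust proxy."""
    if incident_detected or trust < 0.3:
        # Trust is low or was just shaken: reassure the rider by emphasizing
        # what the car perceives and intends to do (cf. Figure 7).
        return {"third_person_view": "prominent",
                "maneuver_explanations": True,
                "entertainment": "hidden"}
    if trust > 0.7:
        # The rider seems confident: tone trust displays down and make room
        # for entertainment or information about the destination instead.
        return {"third_person_view": "minimized",
                "maneuver_explanations": False,
                "entertainment": "prominent"}
    # In between: keep a balanced layout.
    return {"third_person_view": "normal",
            "maneuver_explanations": True,
            "entertainment": "available"}
```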

Conclusions

Trust is currently considered one of the key aspects of the relationship between the user and an automated vehicle. To be able to run an OAVS business successfully, the provider likely needs to build users’ trust in the automation technology. This comes down to more than just providing a good user interface inside the vehicle; as we have discussed in this article, gaining and maintaining people’s trust will require considering and carefully designing the whole user experience life cycle, from prior to usage of the service, to the first actual usage, to extended use of the service.

User-automation trust may, however, not be the only factor that decides whether an OAVS will be adopted or not, especially in a longer perspective, when automated vehicles have become mainstream and generally accepted as a means of transportation. In the next article, we will dive into further aspects defining the user experience of on-demand vehicle services, such as the experience of sharing the vehicle with other users, trust in relation to cybersecurity and data protection issues, motion sickness, and general comfort and physical ergonomics.

Would you like to subscribe to the next article in the article series? Click here and we will send it to you when it’s published.

Acknowledgements

The author would like to thank Jan Nilsson for critically reviewing the manuscript and for providing helpful suggestions on how to improve and clarify it.

References

[1] https://www.theverge.com/2018/12/5/18126103/waymo-one-self-driving-taxi-service-ride-safety-alphabet-cost-app

[2] https://medium.com/waymo/expanding-our-footprint-in-arizona-waymos-technical-service-center-in-mesa-a00cfe7dbc34 

[3] https://www.theverge.com/2017/12/19/16796370/waymo-trov-self-driving-car-insurance

[4] https://ictech.se/om-ictech/artiklar/user-experience-of-on-demand-autonomous-vehicles/

[5] https://www.sciencedirect.com/science/article/pii/S0923474817304253

[6] https://www.thedrive.com/tech/27023/10-lessons-from-ubers-fatal-self-driving-car-crash

[7] https://www.sae.org/standards/content/j3016_201806/

[9] https://www.researchgate.net/publication/304704126_Potential_Solutions_to_Human_Factors_Challenges_in_Road_Vehicle_Automation

[10] https://user.engineering.uiowa.edu/~csl/publications/pdf/leesee04.pdf

[11] https://www.theverge.com/2016/4/27/11518826/volvo-tesla-autopilot-autonomous-self-driving-car

[12] https://www.theverge.com/2016/6/30/12072408/tesla-autopilot-car-crash-death-autonomous-model-s

[13] https://www.theverge.com/2019/5/16/18627766/tesla-autopilot-fatal-crash-delray-florida-ntsb-model-3

[14] https://www.tesla.com/sv_SE/blog/update-last-week%E2%80%99s-accident?redirect=no

[15] https://news.thatcham.org/documents/regulating-automated-driving-a-uk-insurer-view-69167

[16] https://www.researchgate.net/publication/326876842_Communicating_Intent_of_Automated_Vehicles_to_Pedestrians

[17] https://www.youtube.com/watch?v=IBCr-XBWZaQ

[18] http://newsroom.aaa.com/2017/03/americans-feel-unsafe-sharing-road-fully-self-driving-cars/ 

[19] https://www.cnbc.com/2018/05/22/self-driving-cars-are-scaring-more-people.html

[20] https://www.nytimes.com/2018/12/31/us/waymo-self-driving-cars-arizona-attacks.htm

[21] https://twitter.com/AlexRoy144/status/1107193967098118150  

[22] https://www.researchgate.net/publication/284717668_Creating_Appropriate_Trust_for_Autonomous_Vehicle_Systems_A_Framework_for_HMI_Design 

[23] https://www.letstalkselfdriving.com/

[24] https://www.theverge.com/2017/10/9/16447628/waymo-self-driving-car-ad-campaign-arizona

[25] https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/trust-autonomous-white-paper-secure.pdf

[26] https://www.youtube.com/watch?v=nWxFT_pWhxc

[27] https://www.theverge.com/2017/10/9/16446890/lebron-james-self-driving-car-commercial-intel

[28] https://www.driving.co.uk/news/brand-voted-trusted-self-driving-cars/ 

[29] https://www.forbes.com/sites/alanohnsman/2019/02/13/waymo-tops-self-driving-car-disengagement-stats-as-gm-cruise-gains-and-tesla-is-awol/#6cf7b25131ec 

[30] https://www.navigantresearch.com/reports/navigant-research-leaderboard-automated-driving-vehicles

[31] https://de.slideshare.net/TheBostonConsultingGroup/self-driving-vehicles-in-an-urban-context 

[32] https://www.researchgate.net/publication/327635083_Calibration_of_Trust_Expectancies_in_Conditionally_Automated_Driving_by_Brand_Reliability_Information_and_Introductionary_Videos_An_Online_Study

[33] https://youtu.be/PFJeD_byIq0?t=549 

[34] https://www.atkearney.de/documents/20152/434078/How%2BAutomakers%2BCan%2BSurvive%2Bthe%2BSelf-Driving%2BEra%2B%25282%2529.pdf/3025b1a0-4d71-e24d-51e0-2cc1f290447c?t=1493941955625

[35] https://www.theverge.com/2018/12/20/18148946/uber-self-driving-car-return-public-road-pittsburgh-crash

[36] http://agelab.mit.edu/sites/default/files/MIT%20-%20NEMPA%20White%20Paper%20FINAL.pdf

[37]https://www.researchgate.net/publication/279534579_CHARACTERIZING_AND_EVALUATING_AESTHETIC_FEATURES_IN_VEHICLE_DESIGN

[38] https://www.carbodydesign.com/2012/05/bmw-design-dna/

[39] https://www.carbodydesign.com/2012/03/mini-design-dna/

[40] https://www.carbodydesign.com/archive/2008/03/06-infiniti-fx50/

[41] https://www.fastcompany.com/90275407/the-fate-of-self-driving-cars-hangs-on-a-7-trillion-design-problem

[42] https://www.discover-sedric.com/en/the-design-experience/ 

[43] https://faculty.chicagobooth.edu/nicholas.epley/WaytzHeafnerEpley2014.pdf

[44] https://www.researchgate.net/publication/325105742_How_can_humans_understand_their_automated_cars_HMI_principles_problems_and_solutions

[45] http://driverlessratings.com/news/a-drive-in-a-140-000-self-driving-uber 

[46] https://design.google/library/trusting-driverless-cars/ 

[47] https://www.mdpi.com/2414-4088/2/4/62/htm

[48] https://www.dezeen.com/2018/09/04/jaguar-land-rovers-prototype-driverless-car-makes-eye-contact-pedestrians-transport/

[49] https://navya.tech/en/autonom-shuttle/

[50] http://www.humanist-vce.eu/fileadmin/contributeurs/humanist/TheHague2018/29-walker.pdf