METHODS

The methods section covers the recruitment of participants, the design of the study, the equipment used, and the procedure followed, together with the approach to data reduction and analysis.

Participants

In this study, 65 participants were recruited in three age bands. Twenty-five were aged between 18 and 34 (mean = 26.6, SD = 4.4), 20 were aged between 35 and 56 (mean = 43.8, SD = 5.8), and 20 were aged between 57 and 82 (mean = 64.8, SD = 5.0). All of the participants were drawn from the general population, held full UK driving licences, and were in good health, with corrected vision where applicable. Ethical permission for the study was granted by the University of Southampton Research Governance Office (ERGO Number: 41761.A3). Each participant was briefed on the nature of the study and made aware of their right to withdraw at any time. All participants signed a consent form prior to taking part in the study.

Study Design

The OESD was used to develop the design of the takeover interface (see Figure 15.2). This meant that the expected behaviour of the human driver and the automated system, together with their interaction, could be predicted. To determine the accuracy of this prediction, it was compared with video footage of the actual behaviour of the 65 human drivers who took part in this study.

This study comprised three driver-to-automation handovers and three automation-to-driver takeovers, repeated over four trials (12 handovers and 12 takeovers in total, although this analysis is focused solely on the automation-to-human takeovers). Two of the trials had shorter out-of-the-loop activity (i.e. 1 min) and two of the trials had longer out-of-the-loop activity (i.e. 10 min). These longer (but still relatively short) and shorter out-of-the-loop activity trials were counterbalanced. Owing to the volume of data, only the last automation-to-human takeover in each trial was analysed, on the assumption that driver familiarity was optimal during the latter stages of each trial (65 drivers, with 4 takeovers analysed per driver).
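As an illustrative sketch only, the snippet below shows one way the order of the two shorter and two longer out-of-the-loop trials could be counterbalanced across participants; the labels and assignment rule are assumptions for illustration, not the scheme used in the study.

```python
from itertools import permutations

# Illustrative sketch only: one possible way to counterbalance the order of the
# two shorter (1 min) and two longer (10 min) out-of-the-loop trials across
# participants. The labels and assignment rule are assumptions, not the scheme
# reported in the study.

trial_durations = ["1 min", "1 min", "10 min", "10 min"]

# The six distinct orderings of two short and two long trials.
orders = sorted(set(permutations(trial_durations)))

def order_for_participant(participant_id):
    """Cycle through the distinct orders so each is used roughly equally often."""
    return orders[participant_id % len(orders)]

for pid in range(6):
    print(pid, order_for_participant(pid))
```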

Equipment

The driving simulator was a fixed-base simulator built around a Land Rover Discovery Sport vehicle interior, running STISIM software (see Figure 15.6). The simulated driving environment comprised a congested, three-lane motorway (to simulate the rush hour in the UK) in dry conditions with good visibility. To help reduce mode error (Sarter and Woods 1995; Stanton, Dunoyer and Leatherland 2011), the cabin ambient lighting displayed two distinct colours. As shown in Figure 15.6, blue was used to indicate that the vehicle was under automated control and orange was used to indicate that the vehicle was under manual driver control.

FIGURE 15.6 Four views of the driving simulator: the top-left and top-right panels show the vehicle under automated control (where the ambient light colour is blue), and the bottom-left and bottom-right panels show the vehicle under manual driver control (where the ambient light colour is orange).

FIGURE 15.7 The three visual aspects of the HMI: the central console is shown on the left, the instrument cluster bottom-right, and the HUD top-right.

Interfaces in the vehicle comprised a HUD (showing the car in automated mode but beginning the preparation for the human driver to take control, see Figure 15.7), an instrument cluster (showing the icon associated with the car in automated mode, see Figure 15.7), a centre console (showing the car in automated mode, see Figure 15.7), a haptic seat (to prompt the driver when the takeover begins), speech input/output (to communicate the takeover questions), and an ambient light display (to indicate the driving mode: blue means automation is in control and orange means the human driver is in control). Engagement and disengagement of vehicle automation were undertaken by simultaneously pressing two green buttons, mounted on the steering wheel, with the driver’s thumbs. Audio sounds and synthesised speech were generated automatically by the vehicle automation and presented through the in-vehicle speaker system.

In addition, a Microsoft Surface tablet computer, loaded with a Tetromino game, was placed in the cabin. This was used as the secondary task to engage the driver while automation was driving the vehicle. In some conditions (if selected by the driver), a visual alert was presented on the tablet to start the takeover process. This was accompanied by a speech alert through the vehicle speakers.

A control desk at the rear of the vehicle was used by the experimenter as a ‘Wizard of Oz’ environment, to interpret the driver’s vocal responses to the speech synthesis questions during the takeover process. If the driver gave an incorrect response, the question was repeated; if more than two incorrect responses were given, the next question was presented instead, and this continued until all questions had been addressed. The driver was then requested to resume manual driving. All takeovers were planned (not the result of an emergency or system failure) and proceeded at the pace of the driver.
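A minimal sketch of this question-repetition logic is given below, assuming hypothetical ask() and is_correct() helpers standing in for the experimenter presenting each question and judging the driver's vocal response; it is an illustration, not the study's software.

```python
# Hypothetical sketch of the Wizard-of-Oz question protocol described above.
# ask() and is_correct() stand in for the experimenter presenting each takeover
# question and judging the driver's vocal response; both are illustrative
# assumptions, not part of the study's software.

MAX_INCORRECT = 2  # after more than two incorrect responses, move on

def run_takeover_questions(questions, ask, is_correct):
    """Present each question, repeating it while responses are incorrect."""
    for question in questions:
        incorrect_responses = 0
        while True:
            response = ask(question)
            if is_correct(question, response):
                break  # correct response: move to the next question
            incorrect_responses += 1
            if incorrect_responses > MAX_INCORRECT:
                break  # give up on this question and present the next one
    # once all questions have been addressed, ask the driver to resume control
    return "request manual takeover"
```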

Procedure

On arrival at the driving simulation facility, participants were welcomed and presented with a participant information sheet informing them of the details of the study. Their right to halt the study at any time was explained, and they were then provided with an informed consent form which they had to read, initial, and sign in order for the study to continue. On completion of the consent form, participants were given a biographical form to complete to capture demographic data. They were then introduced to the simulator. The main driving controls were explained, and they were informed of the functionality of the vehicle automation and the human-machine interfaces (instrument cluster, HUD, centre console, haptic seat, speech input/output, and ambient light display).

Participants then took part in a test run, where they experienced three takeovers after 1-min out-of-the-loop intervals. After completion of the test run, the main trials started with the participant being asked to accelerate onto the motorway, join the middle lane, keep up with traffic, and follow the instructions presented on the displays. After a period of approximately 1 min, the displays indicated to the participant that automation was available, informing them via text, icon, vocalisation, and two flashing green steering wheel buttons. Participants then activated automation by pressing the two steering wheel buttons; this was followed by the instruction indicating that the automation system was now in control of the vehicle. Participants then picked up the secondary task tablet and started to engage with the Tetromino secondary task.

After a period of either 1 or 10 min, dependent upon counterbalancing, the automation indicated via the displays that the driver was required to get ready to take control. Participants were expected to put aside their secondary task and follow the instructions presented on the displays. The takeover protocol consisted of a set of questions designed to raise situation awareness. These questions were presented in vocal, word, and icon form. Participants responded vocally to each question, the answers to which were judged by an experimenter taking the part of the automation system using the Wizard of Oz approach. Incorrect or missed questions were repeated twice before moving to the next. When all the questions had been answered by the participant, the human-machine interfaces indicated for them to take control, which they did by pressing the two green buttons on the steering wheel. This constituted one takeover; the process was repeated twice more. After completion, the participant was asked to pull safely onto the hard shoulder and stop the vehicle. This process was repeated three more times, allowing participants to adjust the takeover displays after each trial. Once the trials were complete, participants were debriefed and thanked for their time.

Data Reduction and Analysis

The video data for each driver during the takeover process was reduced into hits, misses, false alarms, and correct rejections by comparison with the OESD as follows (and in Table 15.2):

  • Hits: Present in the video and the OESD.
  • Misses: Present in the video but not the OESD.
  • False alarms: Not present in the video but present in the OESD.
  • Correct rejections: Not present in the video and not present in the OESD.

The latter category can be difficult to calculate, as in principle it could be infinite; for the purposes of this investigation, it was based on the total number of false alarms generated by all 65 participants, minus the number of false alarms for each individual participant.
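As a rough sketch of this data-reduction step, the snippet below classifies one hypothetical participant's observed behaviours against the OESD predictions using the definitions above; the set-based representation and the function name are illustrative assumptions, not the analysis software used in the study.

```python
# Rough sketch of the data-reduction step, assuming each OESD prediction and
# each observed behaviour can be represented as a simple label. The set-based
# representation and the function name are illustrative assumptions.

def signal_detection_counts(oesd_predicted, observed_in_video, total_false_alarms_all):
    """Classify one participant's takeover behaviours against the OESD."""
    predicted, observed = set(oesd_predicted), set(observed_in_video)
    hits = len(predicted & observed)          # in the OESD and in the video
    misses = len(observed - predicted)        # in the video but not the OESD
    false_alarms = len(predicted - observed)  # in the OESD but not the video
    # Correct rejections, as defined above: total false alarms across all 65
    # participants minus this participant's own false alarms.
    correct_rejections = total_false_alarms_all - false_alarms
    return hits, misses, false_alarms, correct_rejections
```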

Inter-rater reliability testing was conducted on the categorisation scheme for approximately 10% of the video footage between two analysts. Equal-weighted Cohen’s kappa was calculated (0.718) showing acceptable agreement between the two independent analysts in their classification of hits, misses, false alarms, and correct rejections (Landis and Koch 1977).
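As a minimal sketch of this reliability check, the snippet below computes an unweighted Cohen's kappa (equivalent to an equal-weighted kappa over nominal categories) for two hypothetical analysts' classifications; the labels and values are illustrative assumptions, not the study's data.

```python
from collections import Counter

# Minimal sketch of the reliability check: unweighted Cohen's kappa for two
# analysts' classifications of the same video segments. The labels below are
# illustrative, not the study's data.

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two equal-length lists of category labels."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(freq_a) | set(freq_b))
    return (observed - expected) / (1 - expected)

rater_1 = ["hit", "hit", "miss", "hit", "fa", "cr", "hit", "miss", "hit", "cr"]
rater_2 = ["hit", "hit", "miss", "fa", "fa", "cr", "hit", "hit", "hit", "cr"]
print(round(cohens_kappa(rater_1, rater_2), 3))
```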

The data for each trial was pooled, and the Matthews correlation coefficient (Phi: Matthews 1975) was calculated using Equation 15.1:

φ = (Hits × Correct Rejections − False Alarms × Misses) / √[(Hits + False Alarms)(Hits + Misses)(Correct Rejections + False Alarms)(Correct Rejections + Misses)]     (15.1)
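A minimal sketch of Equation 15.1 is given below, assuming the four counts defined above; the function name and example values are illustrative only.

```python
import math

# Minimal sketch of Equation 15.1: the Matthews correlation coefficient (phi)
# computed from the pooled hits, misses, false alarms, and correct rejections
# for a trial. The example counts are illustrative only.

def matthews_phi(hits, misses, false_alarms, correct_rejections):
    """Phi coefficient computed from the four signal-detection counts."""
    numerator = hits * correct_rejections - false_alarms * misses
    denominator = math.sqrt(
        (hits + false_alarms)
        * (hits + misses)
        * (correct_rejections + false_alarms)
        * (correct_rejections + misses)
    )
    return numerator / denominator if denominator else 0.0

print(round(matthews_phi(hits=40, misses=5, false_alarms=3, correct_rejections=60), 3))
```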