In this section, the HI:DAVe team have summarised their main guidance and advice from the work undertaken into three classes of guideline: design methods, interface and interaction design, and user trials. These are presented in the three following subsections. Where appropriate, the chapters in this book are referenced for further information. Interpretation and use of the design guidelines need to be undertaken with care, as some of the guidelines might appear, at face value, to be contradictory or in conflict with each other. In these instances, it is strongly recommended that the reader go back to the source material in the relevant chapter.

Design Methodological Guidelines (DMG)

Six guidelines were identified for design methodology, as follows.

DMG1: Always triangulate data from multiple sources when addressing the research question. Do not allow yourself to become over-reliant on one data source such as a survey, interview, or focus group.

Supporting advice and evidence: Both qualitative and quantitative sources of evidence are valuable. However, they are subject to different inherent biases and degrees of reliability, which makes them complementary and allows meaningful comparison of data. For example, in the UCEID (user-centred ecological interface design) approach, a survey addressed public opinion of autonomous vehicles, but a survey can only represent a snapshot in time for a limited and non-stratified sample. Hence, focus groups were used to examine detailed individual and group responses to the same issues; running two groups allowed a technical focus in one and a general approach in the other, using the same exemplar materials. These are weak samples, but they allow a detailed analysis of responses to the presented materials. Thematic analyses reflected the original attitudes in all of these data sources and later allowed a detailed summation in the formative design process. Finally, summation across sources, combined with an iterative design process and an appreciation of the strengths and weaknesses of each method, allowed a methodologically sound convergence on the detailed concepts for quantitative hypothesis testing (Chapters 1-3, 5).

DMG2: Use a design workshop format that combines a divergent idea-generation phase with a convergent summation phase in order to traverse the design space effectively.

Supporting advice and evidence: It is well established that many formal and informal design processes fail to sample the design solution space adequately for a parameterised problem. Although some recommend a depth-first search that exhaustively pursues variants of one concept, or a breadth-first search that aims to exhaust all concepts at a low-fidelity level before examining their variants, either strategy alone yields a biased search. This may lead to non-optimal solutions or even design failures. In qualitative design, one mitigating approach is an initial prompt or encouragement to produce multiple solutions quickly, such as brainstorming. This is then followed by an elimination and pruning phase, using convergent techniques such as Pugh design matrices to filter the concepts down to a useful set for prototyping. The chapters in this book dealing with the UCEID stages of design show how this approach succeeded in developing detailed HMI concepts for simulation and road testing (Chapters 1-3, 5).
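The convergent filtering step can be illustrated with a minimal sketch of a Pugh matrix: each candidate concept is scored against a datum concept, criterion by criterion, as better (+1), the same (0), or worse (-1), and totals indicate which concepts to carry forward. The concept names, criteria, and scores below are invented for illustration and do not come from the project.

```python
# Illustrative Pugh-matrix sketch (invented example data, not project
# concepts): each concept is compared to a datum per criterion.
criteria = ["takeover clarity", "glance time", "workload", "trust"]
datum = "Concept A"  # reference concept; all scores are relative to it
scores = {
    "Concept B": [+1, 0, -1, +1],
    "Concept C": [0, +1, +1, -1],
}

def pugh_totals(scores):
    """Sum each concept's relative scores; positive totals beat the datum."""
    return {concept: sum(vals) for concept, vals in scores.items()}

totals = pugh_totals(scores)
ranked = sorted(totals, key=totals.get, reverse=True)
print(totals)   # {'Concept B': 1, 'Concept C': 1}
print(ranked)
```

In practice, Pugh's method iterates: the strongest concept can become the new datum, and weak concepts are pruned before the next round of comparison.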

DMG3: Preserve an iterative but rigorous design approach throughout multiple stages of design even when methods vary in timing and the nature of their outputs.

Supporting advice and evidence: It is inevitable during a multidisciplinary engineering project that a mixture of quantitative and qualitative methods will be used, and in some cases multiple different methods of both types. Good design requires a sequential progression through stages, from initial design and requirements phases, to concept generation, to late prototypes. Although this sequential approach is both traditional and attractive for project planning, it is now well established that cascade design approaches lack flexibility and can lead to expensive prototype faults. It is generally accepted that iteration is required at each stage of design in order to capture the improvements resulting from feedback from the evaluation of design alternatives. However, different methods require different time spans to complete, and quantitative methods are often prolonged and resource intensive. Hence, their results may be unavailable when project management initiates the next scheduled method. It is vital that the integration of different methods is itself managed, so that both concurrent and parallel information streams maintain effective continuity and stakeholder review of the design (Chapters 1-3, 5-7).

DMG4: Document the outcomes of different stages in detail and store them in an organised way with an index. Method outputs should be recorded without the bias of post hoc knowledge and prior expectations, so that the progress of the design process can be accurately understood.

Supporting advice and evidence: The duration of a large project, such as this one, may span many years. During that time, a large amount of material in a variety of formats will accumulate, often across different sites. It is easy for information to become mislaid in conventional computer and physical filing systems, especially when it is stored across multiple sites, and method outputs can become incoherent and disorderly as a result. It is therefore important to establish a master repository at an early stage, with a coherent organisation that reflects project stages and specific methods, as well as key meetings and design decisions. Experimental data should, of course, be stored in compliance with data protection regulation. The storage index should be easy to use, and a graphical diagram of the storage organisation, including flowcharts and dates, is recommended (Chapters 1-3 and 5).

DMG5: It is essential to model the interactions between drivers and vehicle takeover technology as the prototype interfaces are being developed.

Supporting advice and evidence: Operator event sequence diagrams (OESDs) offer a valid way of modelling and predicting the interactions between the driver and the takeover technology in the vehicle. OESDs provide a framework for identifying all the agents (human and non-human) in a swim-lane format, together with their activities and interactions against a timeline. OESDs are also a useful way to undertake co-evolution of the sociotechnical system: the social subsystem (the tasks of the driver) and the technical subsystem are developed at the same time, rather than the driver's task being defined by the technical system alone. Together, Chapters 8, 15, and 20 show that there was a good correspondence between the OESDs and the behaviour of the driver (above 80%). Modelling is also useful in the design of the interaction and interfaces, as it helps in the identification of design requirements, provided that the modelling work is undertaken alongside the design phase. The modelling work informed the prototype development phases as the project progressed from the lower-fidelity simulators through the higher-fidelity simulators and finally to the on-road prototypes. In this way, modelling and design with OESDs supported the agile development of takeover technology in the vehicle.
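The swim-lane structure of an OESD can be captured in a very simple data model: agents are lanes, and each activity sits in a lane at a point on a shared timeline. The sketch below is a minimal illustration only; the agent names, activities, and timings are invented and are not taken from the project's OESDs.

```python
# Hedged sketch of an OESD data model: swim lanes (agents) holding
# timed activities, flattened into one time-ordered event sequence.
# All names and timings below are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Activity:
    t: float          # time offset (seconds) on the shared timeline
    description: str

@dataclass
class OESD:
    lanes: dict = field(default_factory=dict)  # agent -> list[Activity]

    def add(self, agent, t, description):
        self.lanes.setdefault(agent, []).append(Activity(t, description))

    def sequence(self):
        """Flatten all lanes into a single time-ordered event sequence."""
        events = [(a.t, agent, a.description)
                  for agent, acts in self.lanes.items() for a in acts]
        return sorted(events)

oesd = OESD()
oesd.add("Vehicle HMI", 0.0, "issue takeover request")
oesd.add("Driver", 2.5, "hands on wheel")
oesd.add("Driver", 4.0, "confirm takeover")
oesd.add("Automation", 4.5, "disengage")
print(oesd.sequence())
```

A flattened sequence like this is also what makes validation tractable: the predicted ordering of events can be compared step by step against the observed behaviour of drivers in the simulator or on the road.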

DMG6: The reliability and validity of methods used to model and predict automation-driver interaction needs to be formally tested.

Supporting advice and evidence: While the evidence base for using OESDs is good (see Chapters 8, 15, and 20), this cannot always be assumed across tasks and domains of application. Reliability takes two main forms: the stability of an analyst's results over time (called 'intra-analyst reliability') and the stability of results between different people conducting the analysis (called 'inter-analyst reliability'). There are four types of validity for ergonomics methods: construct, content, concurrent, and predictive. Construct validity addresses the underlying theoretical basis of a method, whereas content validity addresses the credibility that a method is likely to gain among its users. Concurrent and predictive validity address the extent to which a method's output corresponds to actual performance, observed either at the same time or in the future. It is important that the methods possess a level of concurrent or predictive validity appropriate for their application.
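Inter- and intra-analyst reliability can be quantified when two codings of the same task steps are available, for example with percentage agreement and a chance-corrected statistic such as Cohen's kappa. The sketch below shows one common way to compute both; the category labels and codings are invented example data, not results from the project.

```python
# Illustrative reliability sketch (invented example data): percentage
# agreement and Cohen's kappa between two analysts' categorical
# codings of the same sequence of task steps.
from collections import Counter

def percent_agreement(a, b):
    """Proportion of steps on which the two codings agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Agreement corrected for the agreement expected by chance."""
    n = len(a)
    po = percent_agreement(a, b)                    # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in ca) / (n * n)   # chance agreement
    return (po - pe) / (1 - pe)

analyst_1 = ["monitor", "monitor", "takeover", "steer", "steer"]
analyst_2 = ["monitor", "takeover", "takeover", "steer", "steer"]
print(percent_agreement(analyst_1, analyst_2))          # 0.8
print(round(cohens_kappa(analyst_1, analyst_2), 3))     # 0.706
```

The same computation serves both reliability forms: comparing two analysts gives inter-analyst reliability, while comparing one analyst's codings from two occasions gives intra-analyst reliability.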
