Regulation, Investigation, Control and Feedback

The Pel Air ditching and subsequent report illustrate the political landscape of investigation and regulation. CASA’s oversight of smaller operators had been criticised before, not just in the case of the Lockhart River crash, and the challenges of maintaining oversight were seen in Chapter 2 in relation to the Pelee Island and the California Learjet accidents. The state aviation authority has a legal obligation to oversee aviation within its territory and is, therefore, accountable for its conduct. Accident investigation agencies, similarly, facilitate aviation by determining risk, based on adverse events, and proposing mitigations. Investigation reports represent feedback on the functioning of the system. However, as we have seen, the processes of both regulation and investigation are problematic.

Hutter and Lloyd-Bostock (2017) looked at regulators dealing with crises in order to examine regulators, themselves, in crisis. They make the point that a crisis can pose an existential risk to the regulator: the way the crisis is managed can call into question the need for the regulator in the first place. They observe that regulators exist in a social, cultural and political environment: environmental factors in my model. The legitimacy of a regulator flows from the trust it engenders in those it regulates, in the general public affected by its failures and in the government to which it is answerable. Regulatory failure can be seen as a failure to manage risk.

Two aspects of regulatory crisis are of specific interest: accountability and blame. Accountability flows from a failure to anticipate a problem and, as Hutter and Lloyd-Bostock observe, it is often the case that the risk management tools used by the regulator are themselves a source of risk. Aviation authorities typically impose different burdens of compliance on different types of operations, becoming increasingly complex as the size of aircraft, and the number of passengers carried, increases. Similarly, the scope of the operation shapes how the investigators approach their task. Air operators, then, exist in an ecosystem of safety with risk as a variable, rather than a constant. The behaviour of regulators reflects what Downer referred to as their ‘risk appetite’ or, as Hutter and Lloyd-Bostock observe, their view of their role in protecting business interests. This tension was reflected in the comments by the Senate Committee member after the Mount Gambier accident.

Associated with accountability is the issue of blame. In the event of a failure, someone has to pay. Pel Air presented a crisis for the regulator, CASA, when it became, first, a source of public outrage and, second, a cause for a government inquiry. It seems that CASA failed to adequately address risk in the case of smaller operators and, when exposed, was fortunate that the media cast another agency as the villain. Actors attempt to avoid blame by framing an issue in a positive way or by finding ways to transfer the blame onto others. For example, we saw earlier that Boeing suggested that MCAS was just a part of the standard aircraft trim system and that the lack of AoA redundancy did not represent a ‘single point of failure’ situation. They also claimed that many of the assumptions they used in relation to pilot reaction times were ‘industry standard’ and were ‘the same as those used by the FAA’. The initial attempt to blame poorly trained or inexperienced pilots for the two MCAS-related crashes was still being maintained, albeit less explicitly, by the new CEO of Boeing as late as March 2020 (Kitroeff & Gelles, 2020). The discussion of the accidents in Australia illustrates multiple attempts to deflect blame onto the ATSB.

I have proposed that the emergent property at this level is trust. Both the political masters and the travelling public (and society more broadly) must have trust in the aircraft they travel in and in the system that supports the conduct of aviation. A breach of trust can have serious implications. For example, in September and October 2007, the Scandinavian airline SAS experienced four landing gear failures on its DHC8-Q400 fleet. The last was on 27 October 2007, and the next day the airline decided to suspend its use of the aircraft and replace the entire fleet. The technical failure could not be attributed to the airline, and other operators had, similarly, suffered undercarriage failures. The drastic decision to replace the whole fleet was the result of a loss of public confidence. Passengers, when making a booking, were asking what aircraft type was used on the route. When told it was the Dash 8, they chose to make other arrangements (personal communication). Public opinion represents an environmental (Level 5) factor. In this specific instance, the public lost faith in an aircraft type, and the operator paid the price. One goal of Level 4 actors is to prevent a breach of trust in their ability to guarantee safety.

In the context of the aviation system, power resides with Level 4 agencies even if their ability to exercise that power effectively is problematic. For example, a review of regulation in Australia (Australian Government, 2014) observed that ‘relationships between industry and the CASA have, in many cases, become adversarial’. Insufficient resources, both physical and intellectual (as in a detailed understanding of the technical aspects of the technology being overseen), are a consistent motif running through any examination of the functioning of regulators, as is the problem of information asymmetry: regulators are often unaware of the actual status of the entity being overseen. Feedback from the point of production to those that frame regulation is indirect and usually lags behind operational developments. Because investigation agencies typically only look at fatal accidents or serious events, their findings typically address significant disconnects between intended system functioning and its actual enactment. These challenges increase the opportunity for well-intentioned regulation to produce cross-scale interactions that result in failure at the operational level.

 