The 'Safety Space' Model

The first model to be described here embodies a navigational metaphor. It presents the notion of a 'safety space' within which comparable organisations can be distributed according to their relative vulnerability or resistance to the dangers that beset their particular activities. They are also free to move to and fro within this space. An important feature of this model is that it seeks to specify an attainable safety goal for real world systems. This is not zero accidents, since safety is not an absolute state; rather it is the achievement and maintenance of the maximum intrinsic resistance to operational hazards.

The model had its origins in analyses of individual differences in the numbers of accidents experienced by groups of people exposed to comparable hazards over the same time period (we discussed this at some length in Chapter 6). These variations in liability are normally expressed in relation to the predictions of some chance theoretical distribution - the Poisson exponential series. A Poisson distribution looks roughly like the right-hand half of a bell-shaped (normal or Gaussian) distribution. But the accident liability distribution is of necessity one-sided; it can only assess degrees of liability. Our concern is with the opposite and previously neglected end of the distribution, most especially with the fact that more than half of the people assessed in this way have zero accidents. Was this simply due to chance? Were these people simply lucky? It is probable that some of them were. But it is also likely that others possessed characteristics that rendered them less susceptible to accidental harm.
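To make the chance baseline explicit, the Poisson distribution gives the probability of a person recording exactly k accidents when the mean liability over the period is λ (the illustrative value of λ below is assumed, not taken from the studies discussed in Chapter 6):

    P(k) = \frac{\lambda^{k} e^{-\lambda}}{k!}, \qquad \text{so that} \qquad P(0) = e^{-\lambda}.

With a mean liability of, say, λ = 0.5 accidents per person, chance alone would leave about e^{-0.5} ≈ 61 per cent of the group accident-free; the question is whether the observed proportion of zero-accident individuals departs from this chance expectation.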

In other words, this unidirectional account of accident liability - discriminating, as it does, degrees of 'unsafety' within a given time period - might actually conceal a bi-directional distribution reflecting variations in personal safety ranging from a high degree of intrinsic resistance to considerable vulnerability. It is a short step from this notional bi-directional distribution of individual accident liability to the 'safety space' shown in Figure 14.2.


Figure 14.2 Showing a number of hypothetical organisations within the same hazardous domain distributed throughout the safety space

The horizontal axis of the space runs from an extreme of maximum attainable resistance to operational hazards (while still staying in business) on the left to a maximum of survivable vulnerability on the right. Rather than individuals, however, we have plotted the positions of a number of hypothetical organisations operating within the same hazardous conditions along this resistance-vulnerability dimension. The space's cigar shape acknowledges that most organisations will occupy an approximately central position, with very few located at either extreme.

There will probably be some relationship between an organisation's position along the resistance-vulnerability dimension and the number of bad events it suffers during a given accounting period, but it is likely to be a very tenuous one. If, and only if, the system managers had complete control over all the accident-producing conditions within their organisations would we expect their accident and incident rates to bear a direct relationship to the quality of their efforts. But this is not the case. Chance also plays a large part in accident causation. So long as operational hazards, local variations and human fallibility continue to exist, chance can combine with them in ways that breach the system's defences.[1] Thus, even the most resistant organisations can still have bad accidents. By the same token, even the most vulnerable organisations can evade disaster, at least for a time. Luck works both ways: it can afflict the deserving and protect the unworthy.

The imperfect correlation between an organisation's position along the resistance-vulnerability continuum and the number of adverse events it sustains in a given accounting period has a further implication. When the accident rates within a particular sphere of activity fall to very low levels, as they have in aviation and nuclear power, the occurrence or not of negative outcomes reveals very little about an organisation's position within the safety space. This means that organisations with comparably low levels of accidents could occupy quite different locations along the resistance-vulnerability continuum, and not know it. So how can an organisation establish its own position within the space? In short, what navigational aids are available?
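To make this concrete, here is a minimal simulation sketch in Python (the rates and accounting period are invented for illustration and do not come from the text): two hypothetical organisations sit at different points on the resistance-vulnerability dimension, yet over a short accounting period their outcome records can look much the same.

    import numpy as np

    rng = np.random.default_rng(42)

    # Assumed underlying accident rates per accounting period (purely illustrative):
    # the 'resistant' organisation is intrinsically three times safer than the
    # 'vulnerable' one, but both rates are low.
    rates = {"resistant": 0.1, "vulnerable": 0.3}
    periods = 10  # e.g. ten annual accounting periods

    for name, lam in rates.items():
        counts = rng.poisson(lam, size=periods)  # chance-driven outcome record
        print(name, counts.tolist(), "total =", counts.sum())

    # Both records are dominated by zeros, so the counts alone are a poor guide
    # to each organisation's true position within the safety space.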

Each commercial organisation has two imperatives: to keep its risks as low as possible and, at the same time, to stay in business. It is clear that, for any organisation continuing to operate profitably in dangerous conditions, the state of maximum resistance will not confer total immunity from harm. Maximum resistance is only the best that an organisation can reasonably achieve within the limits of its finite resources and current technology. Given these constraints, there are two ways in which it can locate its position within the safety space: through reactive and proactive indices.

Where major accidents are few and far between, the reactive measures will be derived mainly from near miss and incident reporting systems, or 'free lessons'. Such safety information systems have been considered at length elsewhere[2] and will not be discussed further here. We can, however, summarise their likely benefits:

  • If the right lessons are learned from these retrospective data, they can act like vaccines to mobilise the organisation's defences against some more serious occurrence in the future. And, like vaccines, they can do this without lasting harm to the system.
  • These data can also inform us about which safeguards and barriers remained effective, thus thwarting a more damaging event.
  • Near misses, close calls and 'free lessons' provide qualitative insights into how small defensive failures could combine to cause major accidents.
  • Such data can also yield the large numbers required for more far-reaching quantitative analyses. The analysis of several domain-related incidents can reveal patterns of cause and effect that are rarely evident in single-case investigations (a simple illustration of this kind of aggregation follows this list).
  • More importantly, the understanding and dissemination of these data serve to slow down the inevitable process of forgetting to be afraid of the (rarely experienced) operational dangers, particularly in systems, such as nuclear power plants, where the operators are physically remote from both the processes they control and their associated hazards.
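As a purely illustrative sketch of the kind of aggregation mentioned in the fourth point above (the incident records and contributing-factor labels are invented, not drawn from any real reporting scheme), a few lines of Python are enough to surface recurrent combinations of factors across a set of incident reports:

    from collections import Counter
    from itertools import combinations

    # Hypothetical incident reports, each tagged with its contributing factors.
    incidents = [
        {"maintenance backlog", "procedure workaround", "time pressure"},
        {"handover omission", "time pressure"},
        {"maintenance backlog", "time pressure", "alarm ignored"},
        {"procedure workaround", "maintenance backlog"},
        {"handover omission", "alarm ignored", "time pressure"},
    ]

    # Count how often each pair of factors recurs across the incident set -
    # the kind of pattern that rarely shows up in a single-case investigation.
    pair_counts = Counter(
        pair
        for factors in incidents
        for pair in combinations(sorted(factors), 2)
    )

    for pair, n in pair_counts.most_common(3):
        print(f"{pair[0]} + {pair[1]}: {n} incidents")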

Proactive measures identify in advance those factors likely to contribute to some future event. Used appropriately, they help to make visible to those who operate and manage the system the latent conditions and 'resident pathogens' that are an inevitable part of any hazardous technology (see Chapter 7). Their great advantage is that they do not have to wait upon an accident or incident; they can be applied now and at any time. Proactive measures involve making regular checks upon the organisation's defences and upon its various essential processes: designing, building, forecasting, scheduling, budgeting, specifying, maintaining, training, selecting, creating procedures, and the like. There is no single comprehensive measure of an organisation's 'safety health'.[1] Just as in medicine, establishing fitness means sampling a subset of a much larger collection of leading indicators, each reflecting one of the system's vital signs.
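As a hedged illustration of what such a proactive check might look like (the process names, scoring scale and threshold are assumptions for the sketch, not a published instrument), the output is a profile of sampled 'vital signs' rather than a single safety index:

    # Hypothetical 1-5 ratings from a periodic proactive assessment; the sampled
    # subset of 'vital signs' would be rotated between assessments.
    vital_signs = {
        "maintenance": 3,
        "training": 4,
        "procedures": 2,
        "scheduling": 3,
        "defences and barriers": 4,
    }

    # Report a profile, flagging the weakest processes for correction rather than
    # collapsing everything into one 'safety health' number.
    for process, score in sorted(vital_signs.items(), key=lambda kv: kv[1]):
        flag = "  <- priority for correction" if score <= 2 else ""
        print(f"{process:22s} {score}/5{flag}")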

Effective safety management requires the use of both reactive and proactive measures. In combination, they provide essential information about the state of the defences and about the systemic and workplace factors known to contribute to bad outcomes. The main elements of their integrated employment are summarised in Table 14.1.

Table 14.1 Summarising the interactions between reactive and proactive measures

Type of navigational aid | Reactive measures | Proactive measures
Local and organisational conditions | Analysis of many incidents can reveal recurrent patterns of cause and effect. | Identify those conditions most needing correction, leading to steady gains in resistance or 'fitness'.
Defences, barriers and safeguards | Each event shows a partial or complete trajectory through the defences. | Regular checks reveal where holes exist now and where they are most likely to appear next.


Navigational aids are necessary but insufficient. Without some internal driving force, organisations would be subject to the 'tides and currents' present within the safety space. These external forces run in opposite directions, getting stronger the nearer an organisation comes to either end.

The closer an organisation approaches the high-vulnerability end of the space, the more likely it is to suffer bad events - though, as mentioned earlier, this is by no means inevitable. Few things alert top management to the perils of their business more than losses or a frightening near miss. Together with regulatory and public pressures, these events provide a powerful impetus for creating enhanced safety measures which, in turn, drive the organisation towards the high-resistance end of the space. However, such improvements are often short-lived. Managers forget to be afraid and start to redirect their limited resources back to serving productive rather than protective ends. Organisations become accustomed to their apparently safer state and allow themselves to drift back into regions of greater vulnerability. Without an 'engine,' organisations will behave like flotsam, subject only to the external forces acting within the space.

Consideration of the 'safety engine' brings us to the cultural core of an organisation. Three factors, in particular, are needed to fuel the 'engine', all of them lying within the province of what Mintzberg called the 'strategic apex' of the system.[3] These driving forces are commitment, competence and cognisance.

Commitment has two components: motivation and resources. The motivational issue hinges on whether an organisation strives to be a domain model for good safety practices, or whether it is content merely to keep one step ahead of regulatory sanctions (see Chapter 5 for a discussion of the differences between 'generative' and 'pathological' organisations). The resource issue is not just a question of money, though that is important. It also concerns the calibre and status of those people assigned to direct the management of system safety. Does such a task put an individual in the career fast lane, or is it a long-term parking area for underpowered or burned out executives?

Commitment by itself is not enough. An organisation must also possess the technical competence necessary to achieve enhanced safety. Have the hazards and safety-critical activities been identified? How many crises have been prepared for? Are crisis plans closely linked to business-recovery plans? Do the defences, barriers and safeguards possess adequate diversity and redundancy? Is the structure of the organisation sufficiently flexible and adaptive? Is the right kind of safety-related information being collected and analysed appropriately? Does this information get disseminated? Does it get acted upon? An effective safety information system is a prerequisite for a resilient system.[4]

Neither commitment nor competence will suffice unless the organisation is adequately cognisant of the dangers that threaten its activities. Cognisant organisations understand the true nature of the struggle for enhanced resilience. For them, a lengthy period without adverse events does not signal 'safe enough'. They see it correctly as a period of heightened danger and so review and strengthen their defences accordingly. In short, cognisant organisations maintain a state of intelligent wariness even in the absence of bad outcomes. This is the very essence of a safe culture.

Figure 14.3 summarises the argument so far. It also identifies the primary goal of safety management: to reach that region of the space associated with the maximally attainable level of intrinsic resistance - and then to stay there. Simply moving in the right direction is relatively easy, but sustaining this goal state is very difficult. Maintaining such a position against the strong countervailing currents requires both a skilful use of navigational aids - the reactive and proactive measures - and a powerful cultural 'engine' that continues to exert its driving force regardless of the inclinations of the current leadership team. A good safety culture has to be CEO-proof. CEOs are, by nature, birds of passage: changing jobs frequently is how they got to where they are today - and there is no reason to suppose that they are going to behave any differently in the future.

Figure 14.3 Summarising the driving forces and navigational aids necessary to propel an organisation towards the region of maximum resistance

Achieving this practicable safety goal depends very largely upon managing the manageable. Many organisations treat safety management as something akin to a negative production process. They set as targets the achievement of some reduced level of negative outcomes. But unplanned events, by their nature, are not directly controllable. So much of their variance lies outside the organisation's sphere of influence. The safety space model suggests an alternative approach: the long-term fitness programme. Rather than struggling vainly to reduce an already low and perhaps asymptotic level of adverse events, the organisation should regularly assess and improve those basic processes - design, hardware, maintenance, planning, procedures, scheduling, budgeting, communicating - that are known to influence the likelihood of bad events. These are the manageable factors determining a system's intrinsic resistance to its operational hazards. And they, in any case, are the things that managers are hired to manage. In this way, safety management becomes an essential part of the organisation's core business, and not just an add-on.

  • [1] Reason, J. (1997) Managing the Risks of Organizational Accidents. Aldershot: Ashgate Publishing.
  • [2] Van der Schaaf, T.W., Lucas, D.A., and Hale, A.R. (1991) Near Miss Reporting as a Safety Tool. Oxford: Butterworth-Heinemann.
  • [3] Mintzberg, H. (1989) Mintzberg on Management: Inside Our Strange World of Organizations. New York: The Free Press.
  • [4] Kjellen, U. (1983) 'An evaluation of safety information systems of six medium-sized and large firms.' Journal of Occupational Accidents, 3: 273-288. Smith, M.J., Cohen, H., Cohen, A., and Cleveland, R.J. (1988) 'Characteristics of successful safety programs.' Journal of Safety Research, 10: 5-14.
 