The Knotted Rubber Band Model

The Model Applied to Some Continuous Control Process

What follows is an attempt to elucidate the phrase 'reliability is a dynamic non-event' using the mechanical properties of a rubber band as a model. We are concerned here with the actions of someone on the frontline of the system who has control over some process or piece of equipment.

Imagine a rubber band knotted in the middle. The knot represents the system-to-be-controlled and its spatial position is determined by the horizontal forces exerted on both ends of the band. Three configurations of the knotted rubber band are shown in Figure 14.4.

The stippled area in the centre of the diagram is the safe operating zone. The task of the controller is to keep the knot in this region by countering dangerous perturbations with appropriate compensatory corrections to the other end of the band. The top illustration in Figure 14.4 is a relatively stable state in which moderate and equal tensions on both ends of the band maintain the knot within the safety zone. The middle picture shows an unstable - or unsafe - condition in which an unequal force has been applied to one side of the band, pulling the knot out of the safety zone. The bottom configuration depicts a corrected state in which the previous perturbation has been compensated for by an equal pull in the opposite direction. There are, of course, many other states, but these are best appreciated by actually manipulating a knotted rubber band yourself.

Figure 14.4 Three states of the knotted rubber band

The rubber band has a further important property. In order to maintain the position of the knot relative to the safety zone, it is necessary to apply an equal, opposite and simultaneous correction to any perturbation. Any delay in making this correction will take the knot outside of the safety zone, at least for a short while. I call this the simultaneity principle.
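The simultaneity principle can be made concrete with a small simulation. This is only an illustrative sketch, not anything from the original text: the function name, the unit forces and the half-width of the safe zone are all assumptions. The knot's position is modelled as the simple sum of the perturbing pull and the (possibly delayed) compensatory pull.

```python
# Illustrative sketch of the simultaneity principle (all names and
# numbers are assumptions, not from the source text).

SAFE_ZONE = 1.0  # assumed half-width of the safe operating zone


def knot_positions(perturbation, correction, delay):
    """Knot position over time when the correction lags the
    perturbation by `delay` time steps."""
    positions = []
    for t in range(len(perturbation)):
        p = perturbation[t]
        # The counter-pull arrives `delay` steps late; until then
        # the perturbation acts unopposed.
        c = correction[t - delay] if t >= delay else 0.0
        positions.append(p + c)
    return positions


# A sustained pull of +2 units, answered by an equal opposite pull of -2.
pull = [2.0] * 10
counter = [-2.0] * 10

instant = knot_positions(pull, counter, delay=0)
late = knot_positions(pull, counter, delay=3)

print(all(abs(x) <= SAFE_ZONE for x in instant))  # True: knot never leaves the zone
print(all(abs(x) <= SAFE_ZONE for x in late))     # False: an early excursion occurs
```

Even with the correct magnitude and direction, the delayed correction leaves the knot outside the safety zone for the first three steps - which is exactly the point of the principle.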

In applying this model to complex, highly automated technologies, such as nuclear power plants, chemical process plants and modern commercial aircraft, we should recognise that most of the foreseeable perturbations have already been anticipated by the designers and compensated for by the provision of engineered safety devices. These come into play automatically when the system parameters deviate from acceptable operational limits. This means that the large majority of the residual perturbations - those not anticipated by the designers - are likely to be due either to unexpected variations in local conditions, or to unforeseen actions on the part of the system's human elements - controllers, pilots, maintainers and the like. The latter are likely to include both errors and violations of safe operating procedures (see Chapters 3 and 4).

What are the consequences of the simultaneity principle for the human controllers of complex technologies, taking into account the nature of the residual perturbations just discussed? The first implication is that the timely application of appropriate corrections requires the ability to anticipate the occurrence of these perturbations. This, in turn, demands considerable understanding of what causes them. That is, it will depend upon the knowledge and experience of the human system controllers regarding, among other things, the roots of their own fallibility. As Weick has argued,[1] these qualities are more likely to be present in systems subject to fairly frequent perturbations (or in which periods of likely perturbation can be anticipated) than in stable systems in which the operating parameters remain constant for long periods of time. Clearly, there will be limits to this generalisation. Just as the inverted-U curve (the Yerkes-Dodson law) predicts that optimal human performance will lie between states of low and high arousal, we would similarly expect optimal system performance to lie between the extremes of virtual constancy and unmanageable perturbation.

Support for this view comes from field study observations of nuclear power generation, aircraft carrier flight deck operations and air traffic control.[2] In order to anticipate the conditions likely to provoke error, system operators need to experience them directly, learning from their own and other people's mistakes, as well as during simulated training sessions. Error detection and error recovery are acquired skills and must be practised. This need to keep performance skills finely honed has been offered as an explanation for why ship-handlers manoeuvre closer to other vessels than is necessary in the prevailing seaway conditions.[3] Watchkeepers, it was suggested, gain important avoidance skills from such deliberately contrived close encounters.

The Model Applied to the Tension Between Productive and Protective Resources

Figure 14.5 shows the knotted rubber band in a different setting in order to demonstrate its resource implications. Every organisation needs to keep an optimal balance between production and protection (touched upon in Chapter 7 and discussed at length elsewhere).[4] The stippled region is now called the optimal operating zone and on either side there are protective and productive resources, represented as rectangles. The rubber band is a limited resource system. The more it is stretched, the less potential it has for correcting the position of the knot - except, of course, by releasing the tension on one or other side.

Three configurations are shown in Figure 14.5. The top one is a balanced state in which the knot is centrally located with considerable potential for corrective action. Configuration A shows an unbalanced state in which the pursuit of productive goals has pulled the knot out of the optimal zone. Configuration B is similarly out of balance, but in the opposite direction. Both configurations A and B have undesirable resource implications. Configuration A provides little or no possibility of compensating for some additional pull in the direction of productive goals, and is potentially dangerous. Configuration B, on the other hand, involves the unnecessary consumption of protective resources and so constitutes a serious economic drain upon the system. The risk in the former case is the unavailability of additional protective resources in the event of an increase in operational hazards; the risk in the latter case is, at the extreme, bankruptcy.

Figure 14.5 Showing the resource implications of the knotted rubber band model
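The limited-resource character of the band lends itself to a simple arithmetic sketch. Everything here is an assumption introduced for illustration - the fixed stretch capacity, the function names and the example numbers: whatever tension is already committed to productive or protective pull is unavailable for corrective action.

```python
# Hedged sketch of the production-protection trade-off in Figure 14.5
# (capacity, names and numbers are illustrative assumptions).

CAPACITY = 10.0  # assumed total stretch the band can sustain


def corrective_headroom(productive_pull, protective_pull):
    """Stretch still available for compensating a new perturbation."""
    committed = productive_pull + protective_pull
    return max(CAPACITY - committed, 0.0)


def knot_offset(productive_pull, protective_pull):
    """Displacement of the knot from centre; positive = towards production."""
    return productive_pull - protective_pull


# Balanced state: knot centred, ample corrective headroom.
print(knot_offset(3.0, 3.0), corrective_headroom(3.0, 3.0))  # 0.0 4.0

# Configuration A: production dominates; the knot is displaced and
# little headroom remains for any further compensatory pull.
print(knot_offset(8.0, 1.0), corrective_headroom(8.0, 1.0))  # 7.0 1.0
```

The sketch makes the trade-off explicit: the same total tension can buy either a centred knot with spare capacity, or a displaced knot with almost none.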

The Model Applied to the Diminution of Coping Abilities

The knotted rubber band has a further application derived from its capacity to become over-stretched and thus lose its potential for correcting the position of the knot. You will recall that in our discussion of the arterial switch operation (Chapter 9), it was noted that the ability of surgeons to compensate for adverse events was inversely related to the total number of events, both major and minor, that were encountered during the procedure. The implication was clear: coping resources are finite. They are used up by repeated stressors.

In the previous consideration of this phenomenon, I used Cheddar cheese to represent the limited coping resources, and a mouse that nibbled it away as representing the cumulative effects of the adverse events. But it is also possible to apply the knotted rubber band model. Let us assume that compensating for these events involves stretching the rubber band to neutralise each perturbation. Given enough of these events, the band becomes over-stretched and is unable to cope with these disturbances until the tension is released equally on both ends.
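The depletion idea can also be sketched in the same spirit. Again, the dynamics are assumed for illustration only: each neutralised perturbation is taken to consume stretch equal to its magnitude, and the band fails to compensate once the remaining stretch falls below the demand.

```python
# Illustrative sketch of coping-resource depletion (the cost model,
# names and numbers are assumptions, not from the source text).


def remaining_stretch(capacity, events):
    """Stretch left after compensating a sequence of adverse events,
    each costing its magnitude in stretch."""
    left = capacity
    coped = []
    for magnitude in events:
        if magnitude <= left:
            left -= magnitude
            coped.append(True)   # perturbation neutralised
        else:
            coped.append(False)  # band over-stretched: event uncompensated
    return left, coped


left, coped = remaining_stretch(capacity=10.0, events=[3.0, 3.0, 3.0, 3.0])
print(coped)  # [True, True, True, False]: the fourth event defeats coping
print(left)   # 1.0
```

As in the arterial switch findings, it is the accumulation of events - not the severity of any single one - that exhausts the capacity to compensate.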

  • [1] Weick, K.E. (1987) 'Organizational culture as a source of high reliability.' California Management Review, 29: 112-127.
  • [2] Ibid.
  • [3] Habberley, J.S., Shaddick, C.A., and Taylor, D.H. (1986) A Behavioural Study of the Collision Avoidance Task in Bridge Watchkeeping. Southampton: The College of Marine Studies.
  • [4] Reason (1997), Chapter 1.