Complexity in a Systems Theory Context

In recent years commercial aircraft have become more complex, meaning that they consist of many components, including humans, that interact with one another. It follows that managing complexity is a high priority in the development of commercial aircraft.

According to Hybertson and Sheard (2008, pp. 13-16), systems engineering, and by extension the systems approach, is "currently experiencing a significant expansion of scope beyond its comfort zone. This expansion is predominantly in the general area of 'complex systems' by which we mean systems that behave more like organisms and less like machines."

In addition, Warfield (2008) encourages systems engineers to move toward a systems science platform. Complexity is one issue for which systems science can be of benefit.

For a comprehensive look at what complexity means, Sillitto et al. (2019) provide the following definition for a complex system:

A complex system is a system in which there are nontrivial relationships between cause and effect: each effect may be due to multiple causes; each cause may contribute to multiple effects; causes and effects may be related as feedback loops, both positive and negative; and cause-effect chains are cyclic and highly entangled rather than linear and separable

Other researchers have provided other perspectives of complexity. For example, Thurner, Hanel, and Klimek (2018, p. 7) state that complex systems can be characterized as an extension to physics in the following ways: [1]

FIGURE 9.1 Entropy variation between elements.

From the above it can be concluded that time-varying and stochastic interactions are a dominant feature of complex systems. These interactions may be between humans and machines or among physical elements of the aircraft.

One of the properties of complexity according to some researchers, for example Marczyk and Deshpande (2006), is the entropy that results from the stochastic interactions between elements. Figure 9.1 shows how the entropy can vary between elements.

Complexity and Architecting

The architecting process routinely confronts the phenomenon of complexity. Various techniques exist for managing complexity during architecting so that an adequate solution can be delivered to the stakeholders. In this process, the designer has the opportunity to define the requirements of the elements of the system architecture and the interfaces between those elements. These interfaces are a primary source of complexity, and the architecting process is where creative solutions and approaches are exercised to manage it.

As explained in Chapter 8, managing complexity is one of the key aspects of the art of systems architecting. Establishing clean interfaces, which minimize interaction between components, is a critical skill in the architecting process.

The definition of complexity provides a clue as to how complexity can be managed. Maier and Rechtin (2009, p. 424) state that complexity is "a measure of the numbers and types of interrelationships among system elements." Here "numbers" refers both to the number of elements and to the number of interfaces.
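As a simple worked illustration (ours, not Maier and Rechtin's), the number of potential pairwise interfaces grows much faster than the number of elements:

$$ I_{\max} = \binom{N}{2} = \frac{N(N-1)}{2} $$

so 10 elements can have up to 45 pairwise interfaces, while 20 elements can have up to 190. Keeping both numbers down is therefore the first lever for managing complexity.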

Second, one of the heuristics quoted by Maier and Rechtin (2009, p. 397) is that "complex systems will develop and evolve within an overall architecture much more rapidly if there are stable intermediate forms than if there are not." This can be summarized by saying that during design the interfaces should be made as stable as possible, that is to say, as deterministic as possible rather than stochastic. Some authorities, for example Marczyk and Deshpande (2006), have suggested that the probabilistic nature of interfaces can be measured by Shannon entropy (Shannon 1948).

FIGURE 9.2 Complex airplane.

(Used with permission by the New England Complex Systems Institute.)
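The following is a minimal sketch, not from Marczyk and Deshpande, of how Shannon entropy could be used to gauge how deterministic or stochastic an interface is; the interface names, message types, and counts are hypothetical.

```python
import math
from collections import Counter

def interface_entropy(observed_messages):
    """Shannon entropy (in bits) of the message types observed on an interface.

    A fully deterministic interface (one message type every cycle) yields
    0 bits; the more varied and unpredictable the traffic, the higher the
    entropy.
    """
    counts = Counter(observed_messages)
    total = sum(counts.values())
    return sum(-(c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical traffic logs for two interfaces between aircraft components.
deterministic_interface = ["status"] * 100
stochastic_interface = (["status"] * 40 + ["alert"] * 30 +
                        ["mode_change"] * 20 + ["retry"] * 10)

print(interface_entropy(deterministic_interface))  # 0.0 bits
print(interface_entropy(stochastic_interface))     # about 1.85 bits
```

On this view, rule 3 below amounts to driving the entropy of each interface toward zero.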

In summary, as a simple guide to the design of a system or subsystem, the following three rules should be followed:

  • 1. The design should have as few components as possible.
  • 2. The design should have as few interfaces as possible.
  • 3. The interfaces between the components should be as deterministic as possible.

Figure 9.2 is a depiction of a complex airplane by artist Cherry Ogata of Japan.

REFERENCES

Hybertson, Duane, and Sarah Sheard. 2008. "Integrating Old and New Systems Engineering Elements." Insight 11(1): 13-16.

Maier, Mark W., and Eberhardt Rechtin. 2009. The Art of Systems Architecting. Third ed. Boca Raton, FL: CRC Press. Original edition, 1991.

Marczyk, Jacek, and B. R. Deshpande. 2006. "Measuring and Tracking Complexity." Conference Paper Presented at the International Conference on Complex Systems, Boston, MA, June 2006.

Shannon, C. E. 1948. "A Mathematical Theory of Communication." The Bell System Technical Journal 27(3): 379-423.

Sillitto, Hillary G., James Martin, Regina Griego, Dorothy McKinney, Dov Dori, Scott Jackson, Eileen Arnold, Patrick Godfrey, and Daniel Krob. 2019. "System and SE Definitions." International Council on Systems Engineering, accessed 8 August. https://www.definitions.sillittoenterprises.com/

Thurner, Stephan, Rudolf Hanel, and Peter Klimek. 2018. Introduction to the Theory of Complex Systems. Vienna: Oxford University Press.

Warfield, John N. 2008. "A Challenge for Systems Engineers: To Evolve Toward Systems Science." Insight 11(1): 6-8.

Humans in the System

Humans in aircraft systems are a two-edged sword. On the positive side, they provide the capability of recognizing adversities and responding to them in an intelligent way. On the negative side, they are a source of human error generally known as cognitive bias, which is a mistake caused by previous beliefs, emotion, context, or other stress factors.

Chapter 5 showed that there are seven worldviews of systems, two of which are relevant to the topic of humans and their relations to systems. One worldview is the extreme realist view, in which real systems consist of matter and energy. In this worldview, the human is not part of the system but rather interfaces with the system, as a pilot does.

The other worldview of interest with respect to humans is the complex and viable systems worldview. Although this worldview may include other system types, humans would be a logical type within this worldview. There is general agreement that humans are indeed complex systems themselves and can make both wise decisions and unwise errors.

On the positive side, humans perform a vital role in the resilience of systems, for example, aircraft systems. According to Jackson and Ferris (2013), one of the most important resilience principles is the human in the loop principle, which states that humans need to be in the system when there is a need for human cognition. Apollo 11 is the best example of the human in the loop principle successfully applied.

Another aspect of humans in the aircraft system is the interaction between the human and the aircraft flight control systems. Billings (1997) lays out the requirements for this interaction in a direct and logical way.

Humans are the central source of errors caused by cognitive bias. Kahneman (2011) and Thaler and Sunstein (2008) have documented many of these biases, which have been shown to result in decision errors, many of them catastrophic. They have also pointed the way to techniques that can be used to influence decisions toward more favorable choices.

Humans in an Aircraft Resilience Context

According to the BKCASE Editorial Board (2016), resilience is "the ability to maintain capability in the face of adversity." This definition is often interpreted to mean that a system should be able to recover from an adversity when it is disrupted in any way. However, recovery does not mean that the system must restore the performance it had before the adversity was encountered; it only means that the system should maintain a level of capability that is expected by the system owner.

Resilience is broader than safety. Safety focuses on protection from loss of life or property; resilience focuses on maintaining capability. Resilience is not a specific property built into modern aircraft, although most aircraft have many resilience capabilities. One of these will be discussed below.

Jackson and Ferris (2013) have identified 14 top-level principles that have been shown to enhance the resilience of engineered systems, such as aircraft. Most of these principles have a direct relation to the physical architecture (design) of any system. For example, the physical redundancy principle states that a system should be designed with two or more identical branches. This is why most aircraft have multiple engines. The use of these principles is an example of system architecting discussed in Chapter 8.
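As a rough illustration of why physical redundancy helps (the figures here are illustrative, not from Jackson and Ferris, and assume independent failures): if each of $n$ identical branches fails independently with probability $p$, the probability of losing all of them is

$$ P_{\text{total loss}} = p^{\,n} $$

so two branches with $p = 10^{-3}$ give roughly $10^{-6}$, a thousandfold improvement. A common-cause event such as a bird strike can defeat the independence assumption, which is one reason the functional redundancy and human in the loop principles discussed next also matter.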

A more relevant principle for this section is the human in the loop principle. This principle calls for human cognition to be incorporated into the system when needed. It was a major factor in the well-known US Airways Flight 1549 case study, also known as the Miracle on the Hudson. This case, described by Paries (2011), involved an aircraft that suffered a bird strike upon take-off from an airport in New York. The bird strike caused both engines to fail, thus depriving the aircraft of internal power with which to control itself. This is where the human in the loop principle comes in.

The human in the loop principle becomes part of the functional redundancy principle described by Jackson and Ferris (2013), which calls for two different ways to maintain capability. The first way was the absorption principle attempted by the engines, which failed. The second way consisted of both internal power and control: internal power was provided by the ram air turbines (RATs), and control was provided by the pilot, as the human in the loop.

The net result was that the aircraft was able to ditch in the Hudson River, saving all 155 persons on board.

Humans in an Aviation Automation Context

A necessary human task is to control the aircraft, even if automated systems are involved in the control. Billings (1997) has identified a set of rules for determining how the human and the automated system should interact with each other. The purpose of these rules is to keep the human from making a mistake, which has happened in practice. Hence, this is another example of the human in the loop principle.

Figure 10.1 illustrates an interesting question about automation and the human's level of awareness of it. The key point of the graph is that, as time advances and the level of automation increases, the human is pushed into the role of system monitor. This creates a human-machine cognitive context delta, a gap in understanding of the system. As the figure shows, the machine's reading and understanding of its context is not the same as the understanding the human formed at design time, so when the automation fails (the red point in the figure) there is no clean transition from the machine's context to the human's, and the gap grows over time. Monitoring is only an interpretation of indicators of what the machine is doing, so when an automation failure occurs the human does not always understand what the problem is. This scenario can lead to many wrong decisions being made because the current system context and state are misunderstood. This transition point needs to be well taken care of in design, especially in commercial aircraft, given their high degree of automation.

FIGURE 10.1 Humans and machine learning.

Of all these rules, the one that stands out is that “each agent [e.g., the pilot or the machine] in an intelligent human-machine system must [underlining added] have knowledge of the intent of the other agent.”

The Billings Rules

One factor in human error is the relation between the human operator and the automated system. Failure to comply with these rules may be blamed for at least one major catastrophic accident. Billings (1997, pp. 232-246) calls these rules requirements, but they can also be called heuristics since they are simply common-sense rules based on the author’s experience. Following are the primary rules; the reader is referred to the text to understand more details about each rule:

  • 1. The human operator must be in command.
  • 2. To command effectively the human operator must be involved.
  • 3. To remain involved the human operator must be appropriately informed.
  • 4. The human operator must be informed about automated systems behavior.
  • 5. Automated systems must be predictable.
  • 6. Automated systems must also monitor the human operator.
  • 7. Each agent in an intelligent human-machine system must have knowledge of the intent of the other agents.
  • 8. Functions should be automated only if there is good reason for doing so.
  • 9. Automated systems should be designed to be simple to train, learn, and operate.

Most of these rules are self-evident and logical. However, there is at least one incident in which one or more of these rules were ignored. Consider the Nagoya accident of 1994, as documented by Ladkin (1996): the pilot of the aircraft "inadvertently" put the aircraft into go-around (GA) mode although the aircraft was actually in the landing mode. The result was that the pilot "fought" the aircraft to make it land even though the aircraft was not programmed to land.

The primary rule in question is Rule (7) that requires the pilot and the aircraft to know each other’s intent. In this case, the pilot did not know the aircraft was programmed to go-around, and the aircraft did not know the pilot wanted to land.
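As a purely illustrative sketch, not from Billings and greatly simplified relative to real flight-deck logic, the following shows one way a design could surface this kind of intent mismatch to the crew; the mode names and the monitor itself are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class IntentMonitor:
    """Compares the declared pilot intent with the active autoflight mode (a Rule 7 sketch)."""
    pilot_intent: str      # e.g., "LAND" or "GO_AROUND" (hypothetical labels)
    autoflight_mode: str   # mode the automation is actually flying

    def mismatch(self) -> bool:
        return self.pilot_intent != self.autoflight_mode

    def advise(self) -> str:
        if self.mismatch():
            return (f"ALERT: autoflight is in {self.autoflight_mode} "
                    f"but pilot intent is {self.pilot_intent}; confirm or change mode.")
        return "Intent and mode agree."

# Nagoya-like situation: the crew intends to land while go-around mode is engaged.
monitor = IntentMonitor(pilot_intent="LAND", autoflight_mode="GO_AROUND")
print(monitor.advise())
```

The point is not the code itself but the design choice it represents: each agent's intent is made explicit and continuously compared, so a disagreement is announced rather than discovered through a struggle with the controls.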

It is assumed that, following this incident, changes were made to procedures or software to ensure that Rule (7) was observed. In cases like this, remedies can also be found in intensive training, and that may have been the case here. Nevertheless, the incident is a reminder of the importance of these rules and their relevance to human error.

References

Billings, Charles. 1997. Aviation Automation: The Search for a Human-Centered Approach. Mahwah, NJ: Lawrence Erlbaum Associates.

BKCASE Editorial Board. 2016. "Systems Engineering Body of Knowledge (SEBoK)." Accessed 15 April. http://sebokwiki.org/wiki/Guide_to_the_Systems_Engineering_Body_of_Knowledge_(SEBoK).

Jackson, Scott, and Timothy Ferris. 2013. "Resilience Principles for Engineered Systems." Systems Engineering 16(2): 152-164.

Kahneman, Daniel. 2011. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.

Ladkin, Peter B. 1996. Resume of the Final Report of the Aircraft Accident Investigation Committee into the 26 April 1994 crash of a China Air A300B4-622R at Nagoya Airport, Japan. In The Nagoya A300-600 crash. Bielefeld, Germany: University of Bielefeld, Faculty of Technology.

Paries, Jean. 2011. "Lessons from the Hudson." In Resilience Engineering in Practice: A Guidebook, edited by Erik Hollnagel, Jean Paries, David D. Woods, and John Wreathhall, 9-27. Farnham, Surrey: Ashgate Publishing Limited.

Thaler, Richard H., and Cass R. Sunstein. 2008. Nudge: Improving Decisions About Health, Wealth, and Happiness. New York: Penguin Books.

  • [1] Complex systems are composed of many elements, components, or particles. These elements are typically described by their state, velocity, position, age, spin, color, wealth, mass, shape, and so on. Elements may have stochastic components.
  • Elements are not limited to physical forms of matter; anything that can interact and be described by states can be seen as generalized matter.
  • Interactions between elements may be specific. Who interacts with whom, when, and in what form is described by interaction networks.
  • Interactions are not limited to the four fundamental forces but can be of a complicated type. Generalized interactions are not limited to the exchange of gauge bosons, but can be mediated through the exchange of messages, objects, gifts, information, even bullets, and so on.
 