Application in Statistical Process Control

In statistical process control (SPC), we are interested in detecting a change in a process as soon as possible after its occurrence. A good introduction to SPC is given in Montgomery (2007). The most important tools of SPC are control charts. This method of monitoring data was first suggested and developed by Walter Shewhart in the 1920s. Since then, many control charts for various process parameters have been developed. Most of the proposed charts focus on the detection of mean changes. In many situations it is also important to monitor the variability of a process, and the GMD has already been successfully used in that context (e.g., Riaz and Does, 2009; Ghute and Rajmanya, 2014; Zhang, 2014; Sindhumol et al., 2016, 2018). In the following, however, we are interested in monitoring the correlation structure of the process. This means that we want to know whether the process switches at a certain time point from a process consisting of independent variables to one with positively correlated random variables. In the latter case, the data points would no longer be independent of each other. In engineering, this is a typical hint that something is going wrong in the production process, and it provides important information about the state of the production process. To reduce the rejection rate of the production process, such a change should be detected rapidly. Our aim in this section is to describe how the developed procedure can be applied in this context.

Suppose that at each time point t ≥ 1 a sample X_{t1}, …, X_{tn} is given. In practice, n is usually small, e.g., n = 5 or n = 10, and depends on the production rate. It is assumed that the random vector X_t = (X_{t1}, …, X_{tn})' follows a multivariate normal distribution and that the random vectors X_t, t ≥ 1, are independent and identically distributed. At each time point, we want to decide whether the components of X_t are independent or positively correlated. This is done by applying an exponentially weighted moving average (EWMA) recursion to G_t. It is given by

Z_t = (1 − θ) Z_{t−1} + θ G_t,   t ≥ 1,

θ ∈ (0, 1] and Z_0 = E_iid(G_1) = 2γ/√π. We assume that γ is known. In practice, this quantity can be estimated from preliminary samples. However, here we do not consider the influence of parameter estimation on the behavior of the control chart.
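As a concrete illustration, the recursion can be sketched as follows, assuming that G_t is the sample Gini mean difference of the n observations at time t; the function names and the setup are our own, not the book's implementation.

```python
import numpy as np

def gini_mean_difference(x):
    """Sample Gini mean difference: average |x_i - x_j| over all pairs i < j."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # The full difference matrix counts every pair twice and has a zero
    # diagonal, so dividing by n*(n-1) averages over the n*(n-1)/2 pairs.
    return np.abs(x[:, None] - x[None, :]).sum() / (n * (n - 1))

def ewma_gmd(samples, theta, gamma):
    """EWMA recursion Z_t = (1 - theta) * Z_{t-1} + theta * G_t,
    started at Z_0 = E_iid(G_1) = 2 * gamma / sqrt(pi)."""
    z = 2.0 * gamma / np.sqrt(np.pi)
    path = []
    for x in samples:
        z = (1.0 - theta) * z + theta * gini_mean_difference(x)
        path.append(z)
    return np.array(path)
```

With theta = 1 the recursion ignores the past and returns the raw statistic G_t, which is the Shewhart special case mentioned below.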

EWMA charts were introduced by Roberts (1959). They have become one of the most widely applied tools in SPC. Setting θ = 1 results in the Shewhart chart. EWMA charts are easy to apply, and they show a better detection rate than the Shewhart chart when, e.g., small or medium-sized changes in the mean of the process are present. An extension of the EWMA chart to correlated data was proposed by Schmid (1997) and Schmid and Schöne (1997). An EWMA-type control chart for simultaneous monitoring of the mean and the autocovariance structure of a time series was introduced in Rosolowski and Schmid (2003, 2006).

Since we know that G_t should be small for correlated data, the control chart gives a signal at time point t ≥ 1 when Z_t < c. The constant c is called the control limit of the chart. If Z_t < c, the control chart stops and the process is declared to be out of control. Otherwise, the next sample is taken and the new data are checked.

The performance of a control chart is usually assessed by the average run length (ARL). Let RL denote the run length of a chart, i.e., the number of samples taken until the chart gives a signal. Then ARL = E(RL), i.e., the ARL is equal to the expected run length. This quantity is calculated under the assumption that the change already arises at the first sample. The constant c is chosen such that the in-control ARL, i.e., the ARL assuming independent variables, is equal to a specified value, here 500. Since the variables Z_t, t ≥ 1, are not independent, the calculation of the run length is not straightforward. For the in-control case, it is possible to make use of a Markov chain approach proposed by Brook and Evans (1972). In the following, we determine the ARL and the control limit using simulations based on 10^6 independent repetitions. The underlying correlation structure for modeling the out-of-control state is chosen as in Section 9.2. We focus again on the two correlation models described there.
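A minimal Monte Carlo sketch of the run-length simulation is given below, under the simplified in-control model of n i.i.d. N(0, γ²) observations per time point; the repetition count is kept far below the 10^6 used in the text, and all names are our own.

```python
import numpy as np

def run_length(c, theta=0.2, gamma=1.0, n=10, rng=None, t_max=10_000):
    """Run length of the lower one-sided EWMA chart: first t with Z_t < c.
    In-control model: n i.i.d. N(0, gamma^2) observations per time point."""
    rng = rng or np.random.default_rng()
    z = 2.0 * gamma / np.sqrt(np.pi)            # Z_0 = E_iid(G_1)
    for t in range(1, t_max + 1):
        x = rng.normal(0.0, gamma, size=n)
        g = np.abs(x[:, None] - x[None, :]).sum() / (n * (n - 1))
        z = (1.0 - theta) * z + theta * g
        if z < c:
            return t
    return t_max  # censored run; acceptable when t_max >> target ARL

def estimate_arl(c, theta=0.2, gamma=1.0, n=10, reps=1_000, seed=1):
    """Monte Carlo estimate of the ARL for a given control limit c."""
    rng = np.random.default_rng(seed)
    return float(np.mean([run_length(c, theta, gamma, n, rng) for _ in range(reps)]))
```

Raising c makes signals more frequent, so the estimated ARL decreases in c; this monotonicity is what allows the control limit to be calibrated to a target in-control ARL.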

The results of our analysis are given in Figure 9.4 for the sample sizes n = 10 and n = 15. Moreover, θ was chosen equal to 0.2.

Since the correlation structure is stronger in the first case described, it is not surprising that the control chart reacts faster in this situation.

Next, we introduce a control chart for the covariance structure that is based on the ratio of an EWMA chart for the covariances and an EWMA chart for the variance,

W_t = Z_t^(1) / √(Z_t^(2)),   t ≥ 1,

where

Z_t^(1) = (1 − θ^(1)) Z_{t−1}^(1) + θ^(1) G_t,
Z_t^(2) = (1 − θ^(2)) Z_{t−1}^(2) + θ^(2) γ̂_t^2,
Figure 9.4: ARLs of the EWMA charts based on G_t for different sample sizes n: (a) for correlation structure I, (b) for correlation structure II.


with θ^(1) ∈ (0, 1], θ^(2) ∈ [0, 1], Z_0^(1) := E_iid(G_t) = 2γ/√π, and Z_0^(2) := E_iid(γ̂_t^2) = γ^2.

In order to apply this chart, μ and γ must be known. Setting θ^(2) = 0, we get Z_t^(2) = Z_0^(2) = γ^2, and the chart reduces to the scheme discussed above.
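A sketch of the two-EWMA ratio statistic follows, assuming W_t = Z_t^(1)/√(Z_t^(2)) as above; the variance estimator γ̂_t^2 is taken here as the mean squared deviation from the known mean μ, which is an assumption on our part, and the function names are our own.

```python
import numpy as np

def ratio_chart(samples, theta1, theta2, mu, gamma):
    """W_t = Z1_t / sqrt(Z2_t): Z1 smooths the Gini mean difference G_t,
    Z2 smooths a variance estimator based on the known mean mu."""
    z1 = 2.0 * gamma / np.sqrt(np.pi)   # Z1_0 = E_iid(G_t)
    z2 = gamma ** 2                     # Z2_0 = E_iid(variance estimator)
    w = []
    for x in samples:
        x = np.asarray(x, dtype=float)
        n = len(x)
        g = np.abs(x[:, None] - x[None, :]).sum() / (n * (n - 1))
        v = np.mean((x - mu) ** 2)      # assumed variance estimator (known mean)
        z1 = (1.0 - theta1) * z1 + theta1 * g
        z2 = (1.0 - theta2) * z2 + theta2 * v
        w.append(z1 / np.sqrt(z2))
    return np.array(w)
```

With theta2 = 0 the denominator stays at γ², so W_t = Z_t^(1)/γ is a monotone transform of the chart based on G_t; rescaling the data together with μ and γ leaves W_t unchanged, which is the scale-invariance property discussed in the text.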

The chart based on W_t has a nice property. Since the distribution of (W_1, …, W_t) is a function of (X_{vi} − μ)/γ, i = 1, …, n, v = 1, …, t, the distribution of the run length of the chart based on W_t does not depend on γ. In particular, the control limits do not depend on γ. This property facilitates the tabulation of the control limits.

In an extensive Monte Carlo study we determined the control limits of the chart based on W_t. The number of repetitions was chosen in the same way as previously. The in-control ARL was again taken to be equal to 500, the sample sizes under analysis were n = 10 and n = 15, and for the smoothing parameters all possible combinations of (θ^(1), θ^(2)) with θ^(1) ∈ {0.2, 0.4, 0.6, 0.8, 1} and θ^(2) ∈ {0.2, 0.4, 0.6, 0.8, 1} were taken into account.
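Determining a control limit for a target in-control ARL amounts to a one-dimensional root search. A minimal bisection sketch is shown below; it assumes a (noise-free) ARL function that is monotone in c, whereas in practice the Monte Carlo ARL estimate is noisy and very many repetitions are needed.

```python
def calibrate_limit(arl, target=500.0, lo=0.0, hi=2.0, tol=1e-6):
    """Bisection for the control limit c such that arl(c) ~= target.
    For the lower one-sided chart the ARL decreases as c increases
    (a higher limit makes signals more frequent)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if arl(mid) > target:
            lo = mid   # ARL still too large: raise the limit
        else:
            hi = mid
    return 0.5 * (lo + hi)
```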

In order to assess the control chart based on W_t, another performance criterion was applied. The reason for changing the performance measure is a disadvantage of the ARL: it assumes that the change arises at the very beginning. More general criteria make use of the conditional expected delay (ED) E(RL − q + 1 | RL ≥ q), where RL stands for the run length of the chart and q for the time point of the change. A widely used performance criterion is the worst expected delay, i.e., the supremum of the ED over all possible values of q ∈ ℕ. Since in practice the supremum over an infinite set cannot be determined, we confine ourselves to the first 30 values of q, i.e., we determine max_{1 ≤ q ≤ 30} E(RL − q + 1 | RL ≥ q).
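From simulated run lengths, the criterion can be estimated as sketched below; the function names and the input layout (one array of run lengths per change point q) are our own.

```python
import numpy as np

def conditional_expected_delay(run_lengths, q):
    """Monte Carlo estimate of E(RL - q + 1 | RL >= q), where run_lengths
    were simulated with the change occurring at time point q."""
    rl = np.asarray(run_lengths)
    kept = rl[rl >= q]        # discard runs that signalled before the change
    return float(np.mean(kept - q + 1))

def worst_expected_delay(run_lengths_by_q):
    """Maximum expected delay over the change points considered
    (q = 1, ..., 30 in the text); returns the maximum and its argmax q."""
    eds = {q: conditional_expected_delay(rl, q)
           for q, rl in run_lengths_by_q.items()}
    q_star = max(eds, key=eds.get)
    return eds[q_star], q_star
```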

In Tables 9.3-9.6 the results of our comparison study are given. The worst expected delay for 1 ≤ q ≤ 30 is shown for n = 10 and n = 15 for both correlation structures under analysis. The value of q at which the maximum of the expected delay is attained is given in parentheses. The values in bold refer to the best combination of the smoothing parameters, i.e., the combination leading to the smallest worst expected delay. The tables show that an overall good choice is θ^(1) = θ^(2) = 0.2. In most cases, this choice led to the smallest worst expected delay. The tables also show that, in general, no statement can be made about the position at which the worst value arises. Comparing the results for the chart based on W_t with those obtained for G_t, i.e., for θ^(2) = 0, we see that the chart with both smoothing parameters equal to 0.2 always provides better results. Thus, our recommendation is to make use of this scheme.

Of course, it would also be interesting to monitor not only the correlation structure but also the mean and the variability of the data simultaneously. Simultaneous control schemes can be derived by combining the above approach with procedures for the mean and the variance as described, e.g., in Rosolowski and Schmid (2003, 2006). It would also be of interest to include control charts based on cumulative sums (CUSUM charts).
