Measure Phase

The tools and methods of the measure phase ensure that the baseline of the critical-to (CT) characteristic, or KPOV, is identified and accurately measured against its target level. During this phase, the team brainstorms potential input variables (or Xs) that impact CT performance and plans data collection activities around them. Table 9.9 lists the common tools and methods used in the measure phase. These include measurement systems analysis (MSA),

TABLE 9.9

Measurement: Basic Tools

Measurement systems analysis (MSA): Determining the components of a measurement system, including its resolution, accuracy, stability, linearity, repeatability, and reproducibility.

Capability analysis: Determining how well a process output (i.e., a Y or CT characteristic) meets customer requirements relative to its central location and variation.

C&E diagrams: A brainstorming tool that qualitatively relates causes to their effects. It is used to identify potential Xs for data collection.

C&E matrix: A matrix that allows ranking of potential Xs relative to several Ys or CT characteristics using information obtained from several C&E diagrams.

Data collection: The process of collecting data on the potential process inputs (i.e., Xs) thought to impact the process outputs (i.e., Ys).

Statistical sampling: A series of efficient data collection methods that select some members of a population for data collection and analysis, but not all members of the population. This information is used to estimate the magnitude of the population's statistical parameters with a stated level of statistical confidence.

C&E = cause and effect; CT = critical-to characteristic.

capability analysis, cause and effect (C&E) diagrams, the C&E matrix, data collection planning, and statistical sampling.

The MSA is used to verify that a CT can be measured accurately and with sufficient precision to detect changes in its level as a DMAIC team implements solutions in the improve phase. Although an MSA is applied to the evaluation of a CT characteristic or a KPOV (or Y), it may also be used to evaluate KPIVs (or Xs). Table 9.10 shows six components of an MSA: resolution, accuracy, reproducibility, repeatability, stability, and linearity. Four of the MSA components are relatively easy to evaluate. Resolution requires that a measurement system measure in units smaller than the changes in the CT characteristic it must detect. As an example, if we historically measured lead time in days but need to make process improvements in the range of hours, then the resolution should be hours or minutes. The second component, accuracy, can be estimated and corrected. If measurements consistently read low or high on average, we can adjust the system to eliminate this bias and bring it on target. Stability can also be managed.

TABLE 9.10

Measurement System Analysis (MSA)


1. Resolution: The ability of the measurement system to discriminate changes in the characteristic being measured (1/10 rule). This rule ensures that the measurement system reads in smaller increments than what is being measured. As an example, if a project metric is in hours, we would want a resolution of minutes or perhaps seconds.

2. Accuracy (Bias): The ability to measure a characteristic and be correct on average over many samples.

3. Reproducibility: The ability of two or more people (or machines) to measure a characteristic with low variation between each person (or machine).

4. Repeatability: The ability to measure a characteristic with small variation when a sample is measured several times under constant conditions.

5. Stability: The ability to measure a characteristic with the same person (or machine) and obtain the same measurement value over time.

6. Linearity: The ability to measure the characteristic over its entire range with equal variation (error).

For example, if we are using visual color standards to evaluate product quality, then inspection procedures and training can be created to periodically replace the color standards if they fade over time. The fourth of these MSA components is linearity. Measurement tools should be used within the range for which they were designed (i.e., where measurement variation remains constant). We should avoid situations in which a measurement tool becomes highly variable over portions of its range.

Reproducibility and repeatability involve people and tools. Reproducibility measures the consistency, on average, of two or more people measuring the same part with the same measuring tool. Repeatability is the consistency of one person measuring the same part several times using the same tool. These components are evaluated with a Gage Reproducibility and Repeatability (Gage R&R) study. The type of study used depends on the distribution of the CT characteristic. If a CT characteristic is distributed as a continuous variable, then a Variable Gage R&R is used to evaluate reproducibility and repeatability. If the CT characteristic is discrete (e.g., pass or fail), an Attribute Agreement Gage R&R is used instead. Not all six measurement components may be applicable for evaluating some systems. As an example, if the measurement system is automated within a single system that does not require manual intervention, then its reproducibility component does not need to be estimated.
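As a rough illustration of the distinction, the sketch below estimates repeatability and reproducibility from a small crossed study using simplified pooled statistics rather than the full AIAG ANOVA or average-and-range method; the column names and measurement values are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical crossed Gage R&R data: two operators each measure three parts twice.
data = pd.DataFrame({
    "operator":    ["A"] * 6 + ["B"] * 6,
    "part":        [1, 1, 2, 2, 3, 3] * 2,
    "measurement": [10.1, 10.2, 12.0, 11.9, 9.8, 9.9,
                    10.4, 10.3, 12.2, 12.1, 10.0, 10.1],
})

# Repeatability (equipment variation): pooled variation of repeated measurements
# of the same part by the same operator.
repeatability = np.sqrt(
    data.groupby(["operator", "part"])["measurement"].var(ddof=1).mean()
)

# Reproducibility (appraiser variation), simplified: variation between operator
# averages; the full method also subtracts a repeatability component.
reproducibility = data.groupby("operator")["measurement"].mean().std(ddof=1)

print(f"Repeatability:   {repeatability:.3f}")
print(f"Reproducibility: {reproducibility:.3f}")
```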

Capability analysis is a set of tools and methods designed to compare the process performance of a CTQ characteristic to customer requirements. A capability analysis compares the VOC, in the form of specifications,

FIGURE 9.9

Capability analysis.

to the VOB, in the form of a simple ratio, as shown in Figure 9.9. It is important that an improvement team be able to measure a CT characteristic with sufficient accuracy and precision to determine its capability level. An ideal situation is one in which the distribution of a CT characteristic is centered on a target and with small variation. If six standard deviations of a CT distribution can be fit on each side of a specification's lower and upper limits, then the CT characteristic is at Six Sigma. There are different versions of capability metrics, each of which can be converted into the other metrics using transformation equations. As an example, Motorola adopted the Six Sigma capability metrics "Z" and "Sigma," while many other organizations adopted the AIAG terminology of Cp, Cpk, Pp, and Ppk. The capability metrics shown in Figure 9.10 have a simple transformation equation of Zst = Sigma = 3 x Cp. They show that Cp = 2 at the Six Sigma performance level. This is because Cp = 1 is defined as a process capability of ±3 standard deviations within the upper and lower specification limits with the process centered on target, but a Six Sigma process has ±6 standard deviations, or Cp = 2.
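A minimal sketch of these calculations is shown below; it assumes roughly normal data, treats the sample standard deviation as the short-term estimate, and uses hypothetical specification limits, so it illustrates the Zst = Sigma = 3 x Cp relationship rather than a full capability study.

```python
import numpy as np

def capability(data, lsl, usl):
    """Basic capability metrics for a CT characteristic (assumes near-normal data)."""
    mean = np.mean(data)
    std = np.std(data, ddof=1)
    cp = (usl - lsl) / (6 * std)                   # potential capability, centered process
    cpk = min(usl - mean, mean - lsl) / (3 * std)  # actual capability, accounts for centering
    sigma = 3 * cp                                 # Zst = Sigma = 3 x Cp
    return cp, cpk, sigma

# Hypothetical example: lead time in hours with specifications of 20 to 40 hours.
rng = np.random.default_rng(1)
lead_times = rng.normal(loc=30, scale=1.7, size=200)
cp, cpk, sigma = capability(lead_times, lsl=20, usl=40)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}, Sigma = {sigma:.1f}")
```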

The yield metrics shown in Table 9.11 are also commonly used in quality programs such as Six Sigma. These include defect per unit, parts per

FIGURE 9.10

Capability metrics.

million (PPM), and rolled throughput yield (RTY). RTY measures the number of units that pass through all the process operations defect-free. Defect-free means that no units were scrapped or reworked as they were transformed by the process. The more complicated a process, the lower the RTY, all other things equal. In the Six Sigma program, the concept of

TABLE 9.11

Process Yield Metrics

1. Defects-Per-Unit (DPU) = Total Defects Found in Sample / Total Units in Sample

2. Parts-Per-Million (PPM) = DPU x 1,000,000

3. Rolled Throughput Yield (RTY) = Π over all steps i of [(Defect-Free Units at Step i) / (Total Units at Step i)] x 100

4. RTY = (Yield 1) x (Yield 2) x (Yield 3)

Over a very large number of workflow operations the RTY approximation is:

5. RTY = e^(-DPU Total)

Opportunities = number of workflow operations that can be done right or wrong:

6. Defects-Per-Million-Opportunities (DPMO) = PPM / Opportunities per Unit

7. Sigma = Z value from a normal table corresponding to the DPMO (must be converted to a short-term Z)

opportunity counting was also introduced to measure process yield and complexity simultaneously. PPM is calculated for each CT characteristic of a product or service. Dividing the PPM number by the opportunity count (i.e., the number of value-add operations) of the process enables calculation of the defects-per-million-opportunities (DPMO) statistic. This is the metric from which the Zst or Sigma of the overall process is calculated.
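The yield metrics in Table 9.11 can be computed directly; the sketch below uses hypothetical defect, unit, and opportunity counts, and the exponential form is the approximation for processes with many operations.

```python
import math

# Hypothetical counts for a multi-step process.
units_in_sample = 1000
defects_found = 35
opportunities_per_unit = 5
step_yields = [0.98, 0.95, 0.97]      # defect-free fraction at each process step

dpu = defects_found / units_in_sample          # defects per unit
ppm = dpu * 1_000_000                          # parts per million
dpmo = ppm / opportunities_per_unit            # defects per million opportunities
rty = math.prod(step_yields)                   # rolled throughput yield
rty_approx = math.exp(-dpu)                    # approximation over many operations

print(f"DPU = {dpu:.3f}, PPM = {ppm:,.0f}, DPMO = {dpmo:,.0f}")
print(f"RTY = {rty:.3f} (exponential approximation: {rty_approx:.3f})")
```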

Although organizations use variations of these quality metrics, several of them are equivalent, as shown in Figure 9.10. There are four groups, and placement depends on whether a CTQ characteristic is on-target or off-target and on how its standard deviation is estimated. A target is the midpoint between bilateral specifications' lower and upper specification limits. A standard deviation is calculated using short-term or long-term historical baseline data. A short-term historical baseline is calculated using the standard deviation of samples collected as rational sub-groups. A rational sub-group is defined as a set of observations taken from a process that represents the smallest practical variation the process will produce. As an example, if the hour-to-hour variation of a process needs to be evaluated, then a rational sub-group would be an hour. This implies that there will be higher observed variation between hourly sub-groups than within them. In Figure 9.10, the improvement strategy is to improve process capability by moving the mean of a CT characteristic to the target, and then reducing its variation.
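One way to make the short-term versus long-term distinction concrete is to compare a pooled within-subgroup standard deviation with the overall standard deviation; the sketch below assumes hypothetical hourly sub-groups of a cycle-time measurement with a small hour-to-hour drift.

```python
import numpy as np
import pandas as pd

# Hypothetical hourly sub-groups of cycle time with a slowly drifting mean.
rng = np.random.default_rng(7)
rows = []
for hour in range(8):
    drift = 0.3 * hour                              # hour-to-hour (between-subgroup) shift
    for value in rng.normal(50 + drift, 1.0, size=5):
        rows.append({"hour": hour, "cycle_time": value})
subgroups = pd.DataFrame(rows)

# Short-term estimate: pooled within-subgroup standard deviation.
within_var = subgroups.groupby("hour")["cycle_time"].var(ddof=1)
short_term_std = np.sqrt(within_var.mean())

# Long-term estimate: overall standard deviation across all observations.
long_term_std = subgroups["cycle_time"].std(ddof=1)

print(f"Short-term (within sub-group) std: {short_term_std:.2f}")
print(f"Long-term (overall) std:           {long_term_std:.2f}")
```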

A complication was added to capability estimation in the original Six Sigma deployment at Motorola when a constant of "1.5" was arbitrarily added to a calculated Z or sigma value. This is seen in Table 9.12, where the classic probability calculations are shown in the long-term section of the table, and the Six Sigma calculations are shown with a 1.5 constant, or sigma shift, added in the other section. Statisticians do not accept an assumption of a 1.5-sigma shift in every process. Our recommendation is to use long-term capability calculations and to directly calculate short-term or sub-group statistics specific to the process being analyzed. As an example, the area under a normal distribution curve at 0.5 (or, in our analysis, a defect percentage of 50%) correlates to a Z value of 0. This is the mean of a standard normal distribution. In contrast, the Six Sigma scale shifts the 0 by +1.5 standard deviations. The resultant defect percentage decreases from 50% to 6.68%. This practice may significantly overestimate process capability. The practical way to calculate short- and long-term capability is to use actual process data without this 1.5-sigma shift.
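The percentages quoted above follow directly from the standard normal distribution; assuming scipy is available, the sketch below shows the tail area at Z = 0, at Z = 1.5 (the effect of the 1.5-sigma shift), and at Z = 4.5 (the shifted value behind the familiar 3.4 PPM figure).

```python
from scipy.stats import norm

# Defect fraction beyond a given Z on the standard normal curve.
for z in [0.0, 1.5, 3.0, 4.5]:
    defect_fraction = norm.sf(z)   # survival function = P(Z > z)
    print(f"Z = {z:>3}: defect rate = {defect_fraction:.7f} "
          f"({defect_fraction * 1_000_000:,.1f} PPM)")

# Z = 0.0 gives 50%; Z = 1.5 gives 6.68%, illustrating the 1.5-sigma shift.
# Z = 4.5 gives roughly 3.4 PPM, the value usually quoted as "Six Sigma".
```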

Once the CT characteristic baseline and its capability for meeting the specification are calculated, the team begins to brainstorm the potential

TABLE 9.12

Tabulated Probabilities

Short-term columns: Sigma and Cp; long-term (actual percent) columns: PPM, Percent, Zlt, and Cpk.

Sigma   Cp      PPM        Percent    Zlt     Cpk
1.50    0.50    500,000    50.00%     0.00    0.00
2.00    0.67    308,549    30.85%     0.50    0.17
3.00    1.00    66,809     6.68%      1.50    0.50
4.00    1.33    6,210      0.62%      2.50    0.83
5.00    1.66    233        0.023%     3.50    1.16
6.00    2.00    3.4        0.000%     4.50    1.50

(At a short-term sigma of 6.00 the unshifted defect rate is about 2 parts per billion; with the 1.5-sigma shift it is 3.4 parts per million.)

Note: Statisticians do not assume a 1.5-sigma shift in every process. The recommendation is to use long-term calculations and to directly calculate short-term or sub-group statistics to determine an actual shift for your process.

causes for poor performance. This will form the basis for the data collection plan. A C&E diagram and similar methods are used to identify potential causes that may impact the CT characteristic or KPOV. The C&E diagram helps the DMAIC team brainstorm all causes of poor capability for subsequent prioritization. In Figure 9.11, a C&E diagram is applied to an inventory investment example. The causes are grouped by the standard categories of measurements, methods, procedures, and people. In other applications, categories might include machine, measurements, methods, materials, people, or environment. A team can use other categories that fit their situation. If the C&E diagram is used effectively, then the root-cause analysis will show that one or more of the causes on the C&E diagram have a significant impact on the CT characteristic. These inputs (or Xs) are now called KPIVs. In the improve phase, we will experiment by changing the levels of these Xs to understand their impact on the KPOV (or Y), which is the CT characteristic.

The C&E matrix shown in Figure 9.12 is used to rank Xs for data collection relative to several CT characteristics or Ys. The C&E matrix is useful if the team has two or more C&E diagrams each containing several common Xs. The team uses the matrix to assign a weighting to each X relative

FIGURE 9.11

Cause and effect diagram.

FIGURE 9.12

Cause and effect matrix.

TABLE 9.13

Data Collection Activities

Action

1. Ask the right questions to ensure the assessment meets its goals and objectives.

2. Determine the type of information and data required to answer the assessment questions.

3. Bias the data collection efforts toward quantitative data to increase analytical sensitivity.

4. Develop a data collection plan that specifies where the data will be collected, by whom, and under what conditions.

5. Review the collection plan with your team and the people who will be part of the data collection activities.

6. Develop data collection forms that are easy to use and will not be misinterpreted; include all instructions as well as examples.

7. Remember to collect data in a way that makes for easy data entry into Minitab, Excel, or other software packages.

8. Ensure the team is trained in the correct procedures, tools, and methods, including measurement of the data.

9. Periodically analyze the assessment data to ensure it provides the necessary information.

10. Allow resources and time in the schedule for follow-up data collection efforts as necessary.

to its correlation to each Y and the overall ranking of the Ys to each other. In the example shown in Figure 9.12, a single calculation is made for the independent variable temperature relative to each of the Ys. The weighted total for temperature, across all of the Ys, is 207. After the ratings of the other Xs are calculated, they are prioritized in descending order based on the weighted totals. Data collection efforts are focused on the Xs having the highest weighted total scores. This method is useful when there are many variables.
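In code form, the weighted total for each X is the sum of its correlation ratings to each Y multiplied by that Y's importance weight; the ratings, weights, and variable names below are hypothetical and do not reproduce the values in Figure 9.12.

```python
import pandas as pd

# Hypothetical importance weights for three CT characteristics (Ys).
y_weights = pd.Series({"Y1": 9, "Y2": 7, "Y3": 5})

# Hypothetical correlation ratings (0, 1, 3, 9 scale) of each X to each Y.
ratings = pd.DataFrame(
    {"Y1": [9, 3, 1], "Y2": [9, 9, 3], "Y3": [9, 1, 0]},
    index=["temperature", "pressure", "operator shift"],
)

# Weighted total per X: sum over the Ys of (rating x Y weight), sorted descending.
weighted_totals = (ratings * y_weights).sum(axis=1).sort_values(ascending=False)
print(weighted_totals)
```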

The data collection activities listed in Table 9.13 are used to measure the Xs identified in the C&E matrix and their corresponding Ys. Each combination of Xs and the associated Ys will be analyzed in the third DMAIC phase (i.e., the analyze phase). The team starts the data collection planning by listing all of the questions that will need to be answered in the analyze phase. These questions should correspond to the root-cause analysis. Additional questions may arise during the data collection because the process is iterative, but good planning will already have listed most questions. Table 9.13 lists several useful ideas to help improve data collection. Finally, a periodic review of the data collection strategy and its related activities is important for project success.

Statistical sampling becomes important once the specific activities of the data collection plan (i.e., where data will be collected, by whom, under

FIGURE 9.13

What is sampling?

what conditions, how it will be measured, its frequency of measurement, and the sample size) are established. Figure 9.13 shows two attributes of statistical sampling (i.e., sampling is conducted when an entire population cannot be counted, and samples should be representative of the population with respect to the parameters being estimated and large enough to make inferences about the population’s parameters using the sample). The sample size depends on the statistical methods to be used, the risk we assume when stating statistical conclusions, and the distribution of the CT characteristic or Y (the metric) that is being analyzed. Ensuring a sample is representative implies we have collected data from each variable combination. As an example, if our CT characteristic is the cycle time of four machines across three shifts, we need to collect data from each machine on each shift to answer questions concerning overall performance across the four machines and three shifts, which is our population. In contrast, if a project focus is only one machine and shift, then data would be collected for the one machine and shift, which is the population. In summary, a sample drawn from a population should reflect the questions that need to be answered in the root-cause analysis.
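A common starting point for the sample-size question is the standard formula for estimating a mean to within a margin of error E at a given confidence level, n = (z x sigma / E)^2; this is a general textbook formula rather than one taken from this chapter, and the inputs below are illustrative.

```python
import math
from scipy.stats import norm

def sample_size_for_mean(sigma, margin_of_error, confidence=0.95):
    """Sample size to estimate a mean within +/- margin_of_error: n = (z * sigma / E)^2."""
    z = norm.ppf(1 - (1 - confidence) / 2)   # two-sided critical value
    return math.ceil((z * sigma / margin_of_error) ** 2)

# Illustrative inputs: historical standard deviation of 4 hours, desired margin of 1 hour.
print(sample_size_for_mean(sigma=4, margin_of_error=1, confidence=0.95))  # about 62
```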

Sampling can be complex, but there are simple guidelines that are useful for most applications. Figure 9.14 shows four common sampling methods. Simple random sampling is used if a population is not stratified relative to its Xs or independent variables. In simple random sampling, a sample

FIGURE 9.14

Common sampling methods.

of size n is randomly drawn from the population and sample statistics are calculated to estimate the population's central location and dispersion. If there are several variables at several levels, a sample can be stratified by the number of independent variables and their discrete levels. Random samples are then drawn from each stratum. In the previous example, samples of cycle time were to be collected over the four machines and three shifts. A third sampling method is applied if a process changes over time. This is systematic or rational sub-group sampling, in which samples are periodically collected from a process. The subsequent analysis provides information on the Y or dependent variable as a time series. Finally, if a population can be represented using several naturally occurring groups or clusters, then cluster sampling would be used to collect data.
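The sketch below contrasts simple random and stratified sampling with pandas, assuming a hypothetical frame of cycle-time records tagged by machine and shift; systematic (time-ordered) and cluster sampling follow similar patterns once the strata or clusters are defined.

```python
import numpy as np
import pandas as pd

# Hypothetical sampling frame: cycle times for four machines across three shifts.
rng = np.random.default_rng(3)
frame = pd.DataFrame({
    "machine": rng.choice(["M1", "M2", "M3", "M4"], size=1200),
    "shift": rng.choice([1, 2, 3], size=1200),
    "cycle_time": rng.normal(30, 4, size=1200),
})

# Simple random sampling: draw n records at random from the whole frame.
simple_sample = frame.sample(n=60, random_state=3)

# Stratified sampling: draw the same number of records from every
# machine-by-shift stratum so each combination is represented.
stratified_sample = (
    frame.groupby(["machine", "shift"], group_keys=False)
         .apply(lambda g: g.sample(n=5, random_state=3))
)

print(len(simple_sample), len(stratified_sample))
```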

Although sampling can be complex, some simple guidelines are provided in Figure 9.15. These guidelines should be verified using an exact sample size formula prior to their usage. A final consideration in sampling

FIGURE 9.15

Sampling guidelines.

is when a dependent variable or Y is highly skewed. The required sample size must be larger for a skewed distribution than for a normal distribution because variance will be greater. If the sample data are not normally distributed but this is the test assumption, then several actions can be taken that may help normalize the sample data. First there may be errors in the data that skew it, and these errors can be removed. There could also be outliers that are far from the sample mean. This condition would also skew the data. Outliers should be investigated, and if they are not representative of the whole sample, they can be removed from the sample. If they cannot be explained and removed, however, then the distribution may be skewed. The measurement system could also be biased or could contribute to sample variation. A measurement analysis should be done to verify its accuracy, precision, and other components. If portions of the sample were collected at different times, there may be more than one distribution present. If this is true, the samples should be analyzed separately. If the sample is very small or is not representative, then a larger sample could be collected. Finally, if the sample remains skewed, then a distribution test should be made, and the data fit to the correct distribution.
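As an illustration of that last step, a quick distribution check might look like the sketch below: it tests the normality assumption and then fits a candidate skewed distribution, assuming scipy is available; the lognormal choice and the data are purely illustrative.

```python
import numpy as np
from scipy import stats

# Hypothetical right-skewed sample (e.g., a cycle time in minutes).
rng = np.random.default_rng(11)
sample = rng.lognormal(mean=3.0, sigma=0.5, size=150)

# Test the normality assumption.
_, p_normal = stats.shapiro(sample)
print(f"Shapiro-Wilk p-value: {p_normal:.4f} (a small p-value suggests non-normal data)")

# Fit a candidate skewed distribution and check the fit.
shape, loc, scale = stats.lognorm.fit(sample, floc=0)
_, p_fit = stats.kstest(sample, "lognorm", args=(shape, loc, scale))
print(f"Lognormal fit: shape = {shape:.2f}, scale = {scale:.2f}, KS p-value = {p_fit:.4f}")
```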

 