Simple Analysis of Process Data
It has been discussed in earlier chapters that each initiative has its own toolkit for solving certain types of problems. Because different operational initiatives, e.g., Lean, Six Sigma, business analytics, and others, each have their own tools and methods, it is sometimes confusing to know when to use
TABLE 5.1
Checklist
Category | Count
Frequent Changes to a Job | 5
Lack of Standards | 3
Lack of Measurement Systems | 2
Lack of Training | 4
Long Cycle Times | 4
Infrequent Jobs | 2
High Output (Exceeding Capacity) | 1
Environmental Conditions | 0
Attitude | 0
Waiting | 10
Transport | 2
Non-Value-Added Activities | 15
Serial Versus Parallel Processes | 0
Batch Work | 10
Excessive Controls | 10
Unnecessary Transfer of Materials | 0
Scrap | 0
Rework | 10
Ambiguous Goals | 2
Poorly Designed Procedures | 3
Outdated Technology | 0
Lack of Information | 4
Poor Communication | 0
Limited Coordination | 5
a specific tool or method to identify the root causes for a problem. To help place the two major operational initiatives, Lean and Six Sigma, into perspective, Table 5.2 compares Lean tools relative to the Six Sigma problem-solving methodology defined as Define, Measure, Analyze, Improve, and Control. For process characterization in service or office applications, Lean methods are usually preferable for process analysis. This is especially true for situations where a problem can be identified and eliminated in a matter of days rather than several weeks. Rapid improvement events fall into this latter category. In contrast, Six Sigma methods are useful for situations that require intensive data analysis and the creation of statistical models to explain relationships between output variables and input variables. But in service systems, business analytics and advanced statistical methods will usually be needed to extract, condition, and analyze big databases. In
TABLE 5.2
How Lean and Six Sigma Tools Compare
Phase | Typical Tools | Common Lean Tool? | Common Six Sigma Tool?
Define | Problem Statement | Yes | Yes
| Process Map | Yes | Yes
| Metric Analysis | Yes | Yes
| Benefit Analysis | Yes | Yes
Measure | Problem Statement | Yes | Yes
| Cause & Effect (C&E) Diagram | Yes | Yes
| FMEA | Sometimes | Yes
| Measurement System Analysis | Sometimes | Yes
| Basic Statistics | Yes | Yes
| Process Capability | Sometimes | Yes
| Benefits Analysis | Yes | Yes
Analyze | Graphical Analysis | Yes | Yes
| Hypothesis Testing | No | Yes
| Contingency Tables | No | Yes
| One-Way ANOVA | No | Yes
| Multi-Variant Analysis | No | Yes
| Correlation | No | Yes
| Regression | No | Yes
| Detailed Process Map | Yes | Yes
Improve | General Full Factorials | No | Yes
| 2^k Factorials | No | Yes
| Fractional Factorials | No | Yes
| Response Surface Designs | No | Yes
| Mixture Experiments | No | Yes
Control | Statistical Process Control (SPC) | Yes | Yes
| Mistake-Proofing | Yes | Yes
| Measurement Control | Yes | Yes
| Training | Yes | Yes
| Validate Capability | No | Yes
| Control Plans | Yes | Yes
| Final Benefits Review | Yes | Yes
these applications, Six Sigma analytical tools are limited. The tools listed in Table 5.2 range from simple to complex, with both initiatives sharing common ones. Common tools are most evident during problem definition and again at the end of a project during the Control Phase.
Although Lean tools and methods appear to be easier to understand, they are not necessarily easy to implement without the assistance of experienced facilitators to help train teams to use the tools and guide them through practical implementation. Sometimes precursor systems need to be implemented prior to advanced Lean methodologies. An example is deploying a pull scheduling system, which requires stable external demand and balanced operations before calculating a takt time. Also, sometimes other systems need to be deployed after an improvement to sustain it. Examples are quality controls, associate training, and preventive maintenance. In summary, a rapid improvement event focuses on making incremental improvements to an existing process, and the tool focus could be Lean, Six Sigma, business analytics, or other methods. Larger scope improvement efforts will be projects or even project portfolios for end-to-end process improvement.
Process Mapping - Suppliers, Inputs, Process,
Outputs, and Customers (SIPOC)
Process maps are used at several levels of detail and in different formats depending on the desired outcomes. Some projects need to drill down to a lower level of detail than others relative to process mapping. If a team is building a VFM to understand a process, then process details and quantification are important because the map forms the basis for subsequent analysis and solutions. In contrast, a less detailed high-level process map is used to identify the boundaries of a process and to understand a project's scope. An example is a SIPOC, i.e., a supplier-input-process-output-customer map. A SIPOC is shown in Figure 5.10. Its advantage is that a process can be described at a high level, and then more detail can be added if the team needs to deep dive into one part of the process. This avoids unnecessary process mapping work not related to scope.
A SIPOC is constructed by working backward into a process from the voice of the customer (VOC). The customer requirements are translated into internal specifications, i.e., the process outputs. These internal specifications are placed at the output boundary of the SIPOC. The outputs are translated backward into each process step. The inputs and internal work tasks for each process step are also documented until the supplier side of the SIPOC is reached. Finally, the inputs on the supplier side are placed on the input boundary of the SIPOC. Supplier refers to materials, information, people, energy, equipment, and other inputs that a process uses to create value. At this point in an analysis, the SIPOC is used to identify major areas of the process that could be the basis for improvement projects. Or, if this is a project where scope is being refined, the inputs, outputs, and operations within each step are analyzed for sources of variation for subsequent analysis. More detailed process maps could also be created once the scope is clear. Process variables are settings that transform inputs into outputs within the process operations. As an example, in an accounts payable process, invoicing information comes into the process at the supplier side; within the process, workers do the work tasks needed to set the invoice up for payment; and at the output side of the SIPOC, the invoice is paid. At the next lower level of analysis, the team may need to construct more detailed views of the process depending on where the root cause analysis is to be focused.
Two more detailed maps are also shown in Figure 5.10, for example. One is an office layout, and the other is a portion of the SIPOC but at a more detailed level and from a process perspective. The strategy in process mapping is for the team to work top-down from the higher-level SIPOC into the process to identify and eliminate the root causes for poor performance. If more detail is needed, then it can be added to the analysis.

FIGURE 5.10
Process Mapping - SIPOC
Cause and Effect Diagrams
Cause and Effect (C&E) diagrams are useful for helping a team brainstorm and organize qualitative relationships between a process output variable and several input variables. After a subsequent analysis, one or more of these inputs will likely have a major influence on the output variable. In C&E diagrams the input variables are organized into categories or themes to structure the conversation around common causes. As an example, if several inputs are training related, then they may have a common cause, and the solution will be appropriate for several of these. The C&E approach is structured as opposed to open brainstorming where sticky notes are used to identify inputs or causes and then aggregated into categories after creation by the team.
Figure 5.11 shows a C&E diagram with the categories of environment, machines, methods, measurements, materials, and people. These categories can be modified based on a project's scope and root cause analysis. In the current example, high inventory investment is shown as the problem that will be analyzed by the rapid improvement team.

FIGURE 5.11
Cause and Effect (C&E) Diagram

The effect or problem is that inventory investment increased 25% over the previous year without an increase in sales. In the C&E diagram, there are several branches. A team would list what they thought to be the major causes of the higher inventory investment and then drill down to lower level root causes of the problem. As an example, two major causes shown on the C&E diagram are demand variation and lead time. Inventory investment is known to be impacted by these two causes. Focusing on lead time, we might find it increased for several reasons including late deliveries, quality issues, bulk purchases, i.e., large lot sizes, or other reasons. Drilling down to the next lower level, the late deliveries may be associated with a few suppliers.
Using the C&E as well as more focused brainstorming, with historical data to support the discussion, helps the team identify the important variables or root causes for this inventory investment problem. Brainstorming using a C&E diagram is not a substitute for data collection and analysis but rather a good first step that enables a team to move forward with focused data collection or further analysis that will either validate or disprove the team's opinion of what is causing the process problem.
5-Why Analysis
A 5-Why analysis is another useful tool that helps identify the root causes of a process problem. Table 5.3 shows a 5-Why analysis applied to the same inventory investment problem shown in Figure 5.11. In this analysis, a team will ask why at least five times to drill down into more detail to uncover the causes of a problem. When using this methodology, it is important that information be gathered to support the 5-why discussion as the team drills down, acting on fact rather than opinion. There are commonalities between a C&E diagram and a 5-why analysis, but they also have some important differences. A C&E diagram is useful to gather and organize brainstorming ideas into categories. In contrast, a 5-why analysis is useful to focus down from one major category to gain more details. But they can be used in sequence.
In this example, the effect or problem is that inventory investment has increased 25% faster than sales the prior year. Higher inventory investment is caused by different root causes that need to be identified and eliminated. Table 5.3 shows how the 5-Why analysis is applied to this example. The first question is “Why has inventory investment increased 25% faster than
TABLE 5.3
5-Why Analysis
Level of Question | Answer (Opinion) | Supporting Information
Effect (Output) | Inventory investment increased 25% over last year. | Accounting report
Why has inventory investment increased 25% over the last year? | Inventory turns (average inventory investment necessary to support sales) decreased by 25%. | Operations report
Why did inventory turns decrease by 25%? | Lead time of production line "XYZ" increased by 5%. | Operations report
Why did lead time of production line "XYZ" increase by 5%? | Machine "A" has not been running at 95% of the required target rate. | Operations report
Why has machine "A" not been running at 95% of the required target rate? | There has been a scrap problem with raw material component "B." | Quality report
Why has there been a scrap problem with raw material component "B"? | The component's outer diameter periodically exceeds specification. | Quality report
Why has the component's outer diameter periodically exceeded specification? | Team must investigate the root causes for the diameter variation problem. | Project Charter
sales last year?" An increase in inventory would have been seen in financial or operational reports. Reviewing an operational report, the team may find that inventory turns decreased by 25%. Inventory turns is a ratio of the cost-of-goods-sold (COGS) divided by the average inventory investment necessary to produce the COGS. The COGS is the invested labor and materials. This increase in inventory relative to sales could be due to several factors. In answer to the question, "Why did inventory turns decrease by 25%?" the team may find that the lead time of production line "XYZ" increased by 5%, which required that a higher work-in-process (WIP) inventory level be maintained to keep materials flowing through the process. The lead time information would also be obtained from an operational report. Continuing through the 5-Why analysis, the team might eventually find that the cause of the high inventory investment is that machine "A" has not been running at 95% of the required target rate due to a scrap problem with raw material component "B." Finally, the scrap problem may be found to be caused by a quality issue relative to component "B's" outer diameter, which periodically exceeds its specification. This information must be verified by walking the process to directly observe the problem. But the information gained from this methodology, when based on fact, enables a team to quickly investigate the root causes for process issues. In summary, a 5-Why analysis is useful to the extent it is supported by actual data.
Histogram
The histogram is another useful analysis tool. It shows the central location and dispersion of a continuous variable. A continuous variable is one that can be measured on a scale that can be divided into smaller intervals as needed. As an example, a meter can be broken into centimeters, which can be broken into millimeters, etc. Figure 5.12 shows the continuous variable time in minutes, a central location of approximately 11 minutes, and a range of between approximately 2 and 24 minutes. This information is useful for establishing a baseline prior to analyzing and improving the process. In this scenario, the average lead time would be reduced from its current level of 11 minutes to a lower level. Histograms are also useful for comparing several distributions with a common scale. In this example, two histograms can be used to compare lead time before versus after process improvements.
In Figure 5.12, the empirical data is shown as rectangular bars. The original data was allocated to classes corresponding to the width of the rectangular bars. An advantage of graphically displaying data using a histogram is that the data pattern can be analyzed for clues as to how the underlying process is operating. If the empirical data is symmetrical, then the areas to the right and left of the center of the histogram will be approximately equal. In contrast, a histogram may extend in one direction or another. This is called skew. Highly skewed data often contains outliers. An outlier is a data value far away from the center of the data. The reasons for outliers may be that the data points are not representative of the dataset, that they were incorrectly measured, or that this may be how the process normally operates. The improvement team would analyze histograms of the various process metrics for clues as to the root causes of the process issue, or to show that the improvements were successful, i.e., before versus after. In addition to creating a histogram, summary statistics are calculated using the data.
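As a minimal illustration of this kind of summary, the following Python sketch computes basic statistics and draws a histogram. It assumes matplotlib is installed, and the lead-time values are hypothetical stand-ins rather than the actual Figure 5.12 data.

```python
import statistics
import matplotlib.pyplot as plt

# Hypothetical lead-time sample in minutes (not the book's Figure 5.12 data)
lead_times = [2, 4, 5, 7, 8, 9, 10, 10, 11, 11, 12, 12, 13, 14, 15, 17, 19, 21, 24]

# Summary statistics reported alongside the histogram
print("mean:", round(statistics.mean(lead_times), 1),
      "median:", statistics.median(lead_times),
      "std dev:", round(statistics.stdev(lead_times), 1))

plt.hist(lead_times, bins=6, edgecolor="black")   # bars correspond to classes (bins)
plt.xlabel("Lead time (minutes)")
plt.ylabel("Frequency")
plt.title("Histogram of Lead Time")
plt.show()
```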

FIGURE 5.12
Histogram of Time Lost to Defects

Pareto Chart
A Pareto chart compares several discrete variables or categories to their occurrence frequency or total count. Figure 5.13 uses the data from Figure 5.5 to show the occurrence frequency of several process issues. The Pareto chart shows the number of process breakdowns in decreasing order by issue. The category NVA activities has the highest total number of process issues, and waiting has the second highest frequency. Because occurrence frequency is clearly shown from the highest to the lowest levels, Pareto charts are useful for focusing a team on the most impactful issues for subsequent root cause analysis. Pareto charts are created at this first level and then at second and third levels. As an example, Figure 5.13 could be broken down to the next lower level by analyzing the reasons for the NVA or waiting. In summary, Pareto charts are useful for investigating the root causes of a process problem and helping to clearly communicate them.
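A Pareto chart can be produced directly from checklist counts. The sketch below uses the counts from Table 5.1, which show the same two leading categories described here (NVA activities highest, waiting next); matplotlib is assumed to be available.

```python
import matplotlib.pyplot as plt

# Nonzero checklist counts taken from Table 5.1
issues = {"Non-Value-Added Activities": 15, "Waiting": 10, "Batch Work": 10,
          "Excessive Controls": 10, "Rework": 10, "Frequent Changes to a Job": 5,
          "Limited Coordination": 5, "Lack of Training": 4, "Long Cycle Times": 4,
          "Lack of Information": 4, "Lack of Standards": 3, "Poorly Designed Procedures": 3}

ordered = sorted(issues.items(), key=lambda kv: kv[1], reverse=True)  # descending frequency
labels, counts = zip(*ordered)
total = sum(counts)
cumulative = [sum(counts[:i + 1]) / total * 100 for i in range(len(counts))]

x = range(len(labels))
fig, ax1 = plt.subplots()
ax1.bar(x, counts)                                  # bars: issue counts, largest first
ax1.set_xticks(list(x))
ax1.set_xticklabels(labels, rotation=90)
ax1.set_ylabel("Count")
ax2 = ax1.twinx()                                   # cumulative-percentage line on a second axis
ax2.plot(list(x), cumulative, marker="o", color="red")
ax2.set_ylabel("Cumulative %")
plt.tight_layout()
plt.show()
```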

FIGURE 5.13
Pareto Chart of Process Issues

Box Plot
The box plot shown in Figure 5.14 provides a non-parametric graph of a continuous variable. It shows both the central location, i.e., the median, and the dispersion, represented by the range of the dataset. The range is calculated as the maximum minus the minimum value. Other statistics are also provided: the 25th percentile, the median, and the 75th percentile. The 25th percentile is the value below which 25% of the data fall, the median is the 50th percentile, below which 50% of the data fall, and the 75th percentile is the value below which 75% of the data fall. Several box plots can be displayed in a comparative manner as shown in Figure 5.14, with categories measured on a common continuous scale displayed on the same graph. In the example shown in Figure 5.14, lost time is broken into the process waste categories that were shown in Figure 5.6. We can see that the waiting category has a higher median lost time than excess inventory and a higher variation in time relative to the other categories. An asterisk represents data points marked as outliers. An outlier is a data point that is likely to be different from most of the sample data, i.e., farthest from the central location of the data. Box plots are a basis from which more advanced statistical methods can be applied to analyze the process data.
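The percentile logic can be checked with a short script. The sketch below uses hypothetical lost-time samples for three waste categories and Python's statistics.quantiles to report the 25th, 50th, and 75th percentiles before drawing comparative box plots.

```python
import statistics
import matplotlib.pyplot as plt

# Hypothetical lost-time samples (minutes) by process waste category
waiting = [5, 9, 12, 14, 15, 18, 22, 40]      # the 40 would likely plot as an outlier
excess_inventory = [3, 4, 6, 7, 8, 9, 11, 12]
transport = [2, 3, 3, 4, 5, 6, 7, 8]

q1, median, q3 = statistics.quantiles(waiting, n=4)   # 25th, 50th, 75th percentiles
print("waiting: Q1 =", q1, "median =", median, "Q3 =", q3,
      "range =", max(waiting) - min(waiting))

plt.boxplot([waiting, excess_inventory, transport])
plt.xticks([1, 2, 3], ["Waiting", "Excess Inventory", "Transport"])
plt.ylabel("Lost time (minutes)")
plt.show()
```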

FIGURE 5.14
Lost Time by Process Waste

Scatter Plot
A scatter plot displays two continuous variables relative to each other. This analysis is useful for seeing relationships between two continuous variables and as a precursor to building models. In the example shown in Figure 5.15, as the production rate increases, the capacity also increases. The data used to construct the scatter plot shown in Figure 5.15 was taken from the example shown in Figure 5.4. The next level up from a scatter plot is developing a quantitative relationship between two continuous variables using a simple linear regression model. This type of model provides statistics that measure the strength of the relationship between two continuous variables.
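A minimal sketch of this progression from a scatter plot to a simple linear regression is shown below. The paired values are hypothetical stand-ins for the Figure 5.4 data, matplotlib is assumed to be available, and Python 3.10+ is assumed for statistics.linear_regression and statistics.correlation.

```python
import statistics
import matplotlib.pyplot as plt

# Hypothetical paired observations (units per minute)
production_rate = [10, 12, 15, 18, 20, 22, 25, 28]
capacity = [11, 13, 15, 19, 21, 24, 26, 30]

# Fit capacity as a linear function of production rate and report correlation strength
slope, intercept = statistics.linear_regression(production_rate, capacity)
r = statistics.correlation(production_rate, capacity)
print(f"capacity ~ {intercept:.2f} + {slope:.2f} * production_rate, r = {r:.3f}")

plt.scatter(production_rate, capacity)
plt.plot(production_rate, [intercept + slope * x for x in production_rate], color="red")
plt.xlabel("Production rate (units/minute)")
plt.ylabel("Capacity (units/minute)")
plt.show()
```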
Time Series Graph
A time series graph compares a continuous variable against its time-ordered sequence. In the example shown in Figure 5.16, we see that production rate and capacity, when plotted based on their data collection sequence, i.e., time ordered, are both increasing. The data used to construct the graph shown in Figure 5.16 was taken from Figure 5.4. We must be careful in the root cause analysis of increases in production rate and capacity. Their patterns may be influenced by a third unknown variable, i.e., a lurking variable. A time series plot is a complementary tool to use in addition to a histogram, which does not show time-related patterns. Sometimes for the same dataset, one graph shows a pattern but the other one does not.

FIGURE 5.15
Scatter Plot of Capacity Versus Production Rate

FIGURE 5.16
Time Series of Capacity Versus Production Rate
Control Charts
Control charts are a special type of time series chart. They have control limits calculated around the central location or mean of the plotted variable. The sample mean is calculated by summing all the data values and dividing by the sample size. As an example, if a sample consisted of the three values 1, 2, and 3, its mean would be (1 + 2 + 3)/3 = 2. The upper and lower control limits are calculated at ±3 standard deviations from the mean. Using this rule, approximately 99.73% of the samples drawn from a normally distributed process are expected to be within the upper and lower control limits, i.e., ±3 standard deviations from their mean. A stable process should not display non-random patterns within the control limits or any outliers beyond them.
A first step in constructing a control chart is determining the distribution of the variable being plotted, the sampling plan that is used to collect data from a process, and the number of time-ordered samples representing a predetermined amount of time over which sampling will occur. As an example, using the variable processing time from Figure 5.2, the mean is 3.8 minutes, and the standard deviation of individual values is 1.9 minutes. The control chart is shown in Figure 5.17. The variable, processing time is assumed to be continuous and symmetrical, i.e., a normal distribution. The sample consists of individual values as opposed to subgroups. There is a second chart for sub-grouped data. The initial control limits are calculated at ± 3 standard deviations from the calculated mean of the combined sample. The calculation is 3.8 minutes ± (3 x (1.9 minutes)). The lower control limit is -2.1 minutes (set the lower limit to 0). The upper control limit is 9.6 minutes. The sample uses 20 to 25 sequential values. This initial control chart is used as a reference or baseline. Because the data is individual rather than grouped, an improvement target can be added to the graph to show how stable the data is relative to the target. After the project, the team can also show before and after impacts of the improvements using a split control chart.
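The control-limit arithmetic can be scripted as shown below. The processing times here are hypothetical stand-ins for the Figure 5.2 sample, so the computed limits will differ slightly from the 9.6 and -2.1 minutes quoted above; the mechanics are the same.

```python
import statistics

# Hypothetical individual processing times in minutes (stand-in for Figure 5.2)
times = [2.1, 5.0, 3.2, 6.8, 1.9, 4.4, 3.0, 2.5, 7.1, 4.0,
         3.6, 1.5, 5.5, 2.8, 4.9, 3.3, 6.0, 2.2, 4.1, 3.9]

mean = statistics.mean(times)
sd = statistics.stdev(times)
ucl = mean + 3 * sd                 # upper control limit
lcl = max(0.0, mean - 3 * sd)       # a negative time limit is truncated to zero
print(f"mean = {mean:.1f}  UCL = {ucl:.1f}  LCL = {lcl:.1f}")

# Flag any individual values beyond the control limits
out_of_control = [t for t in times if t > ucl or t < lcl]
print("points beyond the control limits:", out_of_control)
```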
The theory behind control charts is that as subsequent samples are taken from the same reference distribution, assuming a normal distribution, then 99.73% of them should vary randomly within the control limits. If there are extraneous sources of variation such as outliers, trends, shifts in the mean, or excessive variation, the control chart will show these patterns. In contrast, if the control chart pattern remains symmetrically distributed around its mean and random, then no process adjustments are needed. Control charts differentiate common cause variation (no pattern) from assignable or special cause variation (outliers or a set of observations forming a non-random pattern).
The control chart shown in Figure 5.17 is called an individuals control chart. It displays data as individual values without sub-grouping. There are several other types of control charts that depend on the distribution of the variable being charted and how data is collected. One type is used to chart a continuous variable where data is collected as subgroup samples. This is the X-bar chart and its associated R or range chart. Constructing these charts requires that samples be taken as subgroups from the process and at equal time intervals. The subgroup size is usually 4 to 9 data values. The subgroup averages are plotted on the X-bar chart, and the ranges of each subgroup are plotted on the range or R chart. Each range is the maximum minus the minimum of the subgroup.
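A minimal sketch of the subgroup calculations follows, using three hypothetical subgroups of size 5 and the standard A2 = 0.577 control chart constant for that subgroup size.

```python
# Hypothetical subgroups of size 5 sampled at equal time intervals
subgroups = [
    [3.1, 4.2, 3.8, 4.0, 3.5],
    [4.4, 3.9, 4.1, 3.7, 4.3],
    [3.6, 3.3, 4.5, 3.9, 4.0],
]

xbars = [sum(g) / len(g) for g in subgroups]     # plotted on the X-bar chart
ranges = [max(g) - min(g) for g in subgroups]    # plotted on the R chart

grand_mean = sum(xbars) / len(xbars)
r_bar = sum(ranges) / len(ranges)
A2 = 0.577                                       # standard constant for subgroup size n = 5
ucl_x = grand_mean + A2 * r_bar
lcl_x = grand_mean - A2 * r_bar
print("subgroup means:", xbars, "subgroup ranges:", ranges)
print(f"X-bar control limits: {lcl_x:.2f} to {ucl_x:.2f}")
```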
Analysis of the individuals control chart shown in Figure 5.17 consists of looking for non-random patterns and outliers. An outlier is a data point greater than ±3 standard deviations from the mean of the control chart. Specifications can usually be overlaid on an individuals control chart. In contrast, in an X-bar chart, the plotted data points are the subgroup averages. The likelihood that a data point falls beyond either the upper or lower control limit is less than 0.27%, i.e., 100% - 99.73%. The decision would be to take corrective action if a data point is this far from the mean; the error rate for taking such an action would be 0.27%.

FIGURE 5.17
Control Chart on Individual Processing Time
There are other types of control chart, developed for various applications. If a variable is pass or fail, p-charts and np-charts are used to construct the control chart based on a binomial distribution assumption. If a variable is counted data, then c-charts or u-charts are used to analyze the data based on a Poisson distribution assumption. There are also specialized control charts for specific situations.
Once a team identifies the root causes for their process problem using these simple tools, the project charter should be updated to reflect the team's most recent data analysis. Some questions to ask: Have the goal or questions changed? Do we know the countermeasures or solutions based on the root cause analysis? How will these solutions be implemented to achieve the improvement goals? Improvement ideas identified at this point in the project are formally recorded in an improvement opportunity worksheet, shown in Figure 5.18. Many of these will be completed by the rapid improvement team by the end of the event. Those requiring more time will go on a schedule for completion after the event. This approach is highly visual and hands-on, which resonates with leadership, the local process owner, and the rapid improvement event team.
If the number of identified improvements is large, then they may need to be prioritized to manage resources. The team could vote to rank the improvements using a simple rule: the number of votes per person equals the number of improvement ideas divided by 3. If there are 15 improvement ideas, then each team member has five votes. Each person can place no more than two votes on any one idea to prevent bias in the prioritization. Figure 5.19 shows a more formal prioritization approach using a C&E matrix modified to prioritize improvement projects for business benefits.
Normally, a C&E matrix is used to prioritize process input variables or "Xs" relative to process outputs or "Ys" for project prioritization, data collection prioritization, or root cause prioritization. In the current example, the prioritization matrix rates several improvements relative to their impact on one or more key operational output metrics. The output metrics are ranked for relative importance to each other; a "1" is low importance and a "10" is high importance. The improvements are compared to each output metric using a scale between 1 and 10. A "1" implies no correlated impact between an improvement and a metric, whereas a "10" implies that a high degree of correlated impact exists. Ratings are made in the same way for the other operational metrics, i.e., VA %, production rate, etc. In the example shown in Figure 5.19, we see that improvement B has the highest overall rating. Its rating is calculated as a weighted total of (7 x 10) + (8 x 10) + (8 x 8) + ... + (8 x 5) = 520. This implies that improvement B is the first one to be completed by the team.

FIGURE 5.18
Improvement Opportunity Worksheet
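The weighted-total arithmetic behind Figure 5.19 can be reproduced in a few lines. The weights and ratings below are hypothetical placeholders rather than the full set from the figure, so the printed score will not equal 520; the calculation method is the same.

```python
# Weighted prioritization score for one improvement, as in a C&E matrix:
# each correlation rating (1-10) is multiplied by its metric's importance weight, then summed.
metric_weights = [10, 10, 8, 6, 5]     # hypothetical importance ranks for five output metrics
improvement_b_ratings = [7, 8, 8, 7, 8]  # hypothetical correlation ratings for improvement "B"

score = sum(rating * weight for rating, weight in zip(improvement_b_ratings, metric_weights))
print("weighted score:", score)        # the improvement with the highest score is done first
```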

FIGURE 5.19
Cause & Effect (C&E) Matrix - Prioritizing Improvements
Example - Analyzing Job Shadowing Data
In this chapter we have discussed data collection strategies as well as several tools and methods for analyzing data. Because data collection is a critical component of process analysis, we need to efficiently collect the correct data and supporting documentation. Job shadowing was shown to be a useful data collection method for services and offices because of the non-standard working format and complexity of the work. It has been used in manufacturing for decades to analyze complicated operations consisting of people and equipment. In the example starting in Table 5.4, we see the results of one day's shadowing of an accounts receivable work process. The time duration of several work tasks has been recorded and labeled as VA or NVA. The total working time is 9.32 hours or 559 minutes.
This data takes on new meaning if simple analytical tools are used to show relationships between the data elements. As an example, in Figure 5.20, 61% of the total time is classified as NVA. The next step is analyzing each category for NVA time. It can be seen that 81% of meeting time does not add value. Perhaps the NVA meetings can be eliminated or reorganized to make them more efficient. Reducing the percentage of NVA meeting time will provide associates with time to do more useful work. The NVA time in the other categories can also be reduced after analysis. In summary, an advantage of using shadowing and simple analytical tools is that information can be displayed in a format that provides insight into issues and root causes. This aids communication and decision-making for rapid improvement.

FIGURE 5.20
Analysis of Job Shadowing Data
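The same analysis can be scripted. The sketch below replays the Table 5.4 observations (rows without an activity label are grouped under "Other" here) and reproduces the roughly 61% overall NVA share and 81% NVA meeting time discussed above.

```python
from collections import defaultdict

# Observations from Table 5.4: (activity, minutes, value classification)
observations = [
    ("Other", 5, "NVA"), ("Other", 4, "VA"), ("Other", 3, "NVA"), ("Other", 8, "NVA"),
    ("Other", 6, "NVA"), ("Meeting", 30, "NVA"), ("Report", 60, "NVA"), ("Other", 5, "VA"),
    ("Report", 60, "VA"), ("Meeting", 60, "NVA"), ("Phone", 20, "VA"), ("Phone", 10, "NVA"),
    ("Meeting", 45, "NVA"), ("Meeting", 30, "NVA"), ("Phone", 24, "NVA"), ("Meeting", 45, "NVA"),
    ("Phone", 27, "VA"), ("Phone", 12, "NVA"), ("Meeting", 30, "VA"), ("Phone", 5, "NVA"),
    ("Meeting", 20, "VA"), ("Phone", 16, "VA"), ("Phone", 34, "VA"),
]

totals = defaultdict(int)   # total minutes observed per activity
nva = defaultdict(int)      # NVA minutes per activity
for activity, minutes, value in observations:
    totals[activity] += minutes
    if value == "NVA":
        nva[activity] += minutes

grand_total = sum(totals.values())
print(f"Total observed: {grand_total} min, NVA share: {sum(nva.values()) / grand_total:.0%}")
for activity in totals:
    print(f"{activity}: {nva[activity] / totals[activity]:.0%} of {totals[activity]} min is NVA")
```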
Example - Inventory Analysis and Reduction
Six Sigma methods can be applied to improve inventory management. In situations where inventory investment is high and the analysis of its root causes requires a statistical analysis, Six Sigma modeling capability combined with standard Lean methods is a useful approach for analysis and improvement. In the first two Define, Measure, Analyze, Improve, and
TABLE 5.4
Analyzing Data Collected from Job Shadowing
Activity | Time Duration (Minutes) | Value
| 5 | NVA
| 4 | VA
| 3 | NVA
| 8 | NVA
| 6 | NVA
Meeting | 30 | NVA
Report | 60 | NVA
| 5 | VA
Report | 60 | VA
Meeting | 60 | NVA
Phone | 20 | VA
Phone | 10 | NVA
Meeting | 45 | NVA
Meeting | 30 | NVA
Phone | 24 | NVA
Meeting | 45 | NVA
Phone | 27 | VA
Phone | 12 | NVA
Meeting | 30 | VA
Phone | 5 | NVA
Meeting | 20 | VA
Phone | 16 | VA
Phone | 34 | VA
Total (Minutes) | 559 |
Total (Hours) | 9.32 |
Control (DMAIC) phases, a team collects information related to the major issues impacting inventory investment. Table 5.5 shows these issues, in decreasing order of impact on inventory investment, as canceled orders, schedule changes, late deliveries, large lot sizes, missing materials, and quality issues. A team could investigate and improve any or all of these issues depending on priority and ease of elimination. In this analysis, the issue of canceled orders represents 64% of the $1,000,000 inventory investment associated with the issues. This is also approximately 32% of the overall inventory investment of $2,000,000. Based on this analysis, there appear to be six improvement areas that can be investigated to reduce investment. If the $1,000,000 associated with these problems were eliminated, then inventory turns would increase from 5 to 10, and the investment would decrease from $2,000,000 to $1,000,000.
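The turns arithmetic used in this example is simple enough to script; the sketch below uses the COGS and inventory figures from Table 5.5.

```python
def inventory_turns(cogs, average_inventory):
    """Inventory turns = cost of goods sold / average inventory investment."""
    return cogs / average_inventory

cogs = 10_000_000                                   # COGS from Table 5.5
current = inventory_turns(cogs, 2_000_000)          # current total inventory investment
improved = inventory_turns(cogs, 1_000_000)         # if the $1,000,000 of issue-driven inventory is removed
print("current turns:", current, "improved turns:", improved)   # 5.0 -> 10.0, matching the example
```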
TABLE 5.5
High Inventory Investment - First Level
Reason (Issue) | Count | Percentage | Estimated Impact** | Improvement Area
Canceled Order | 134 | 64% | $638,100 | A
Schedule Change | 36 | 17% | $171,400 | B
Late Deliveries | 14 | 7% | $66,700 | C
Large Lot Size | 9 | 4% | $42,900 | D
Missing Materials | 9 | 4% | $42,900 | E
Quality Issue | 8 | 4% | $38,100 | F
Total | 210 | 100% | $1,000,000 |
Cost-of-Goods-Sold (COGS) | $10,000,000
Inventory Investment Due to Issues | $1,000,000
Total Inventory Investment | $2,000,000
Inventory Turns Ratio | 5
** Subject to verification
TABLE 5.6
High Inventory Investment - Second Level
Root Cause (Canceled Orders) | Count | Percentage | Estimated Impact** | Improvement Area
Inaccurate Schedule | 9 | 7% | $42,857 | A1
Late Delivery | 108 | 81% | $514,290 | A2
Missed Schedule | 10 | 7% | $47,619 | A3
Unknown | 7 | 5% | $33,334 | A4
Total | 134 | 100% | $638,100 |
Inventory Investment Due to Canceled Orders | $638,100
** Subject to verification
Table 5.6 shows the next-level root cause analysis in which the canceled orders issue is broken down into the lower-level reasons that orders are canceled. These include an inaccurate schedule, late deliveries, missed schedules, and a category of unknown reasons. At this level of the analysis, late deliveries represent 81% of the cost of inventory investment associated with canceled orders. It also represents 51.8% of the entire inventory investment problem associated with the $1,000,000. This is calculated as 64% x
TABLE 5.7
High Inventory Investment - Third Level
Root Cause (Late Delivery) | Count | Percentage | Estimated Impact** | Improvement Area
Incorrect Invoice | 16 | 15% | $77,143 | A21
Carrier Issue | 13 | 12% | $61,715 | A22
Customer Not Notified | 73 | 68% | $349,717 | A23
Unknown | 6 | 5% | $25,715 | A24
Total | 108 | 100% | $514,290 |
Cost-of-Goods-Sold (COGS) | $10,000,000
Inventory Investment Due to Late Deliveries | $514,290
** Subject to verification
81% = 51.8%. Table 5.7 continues the root cause analysis of late deliveries down to a third level. The issue customer not notified is shown to be the major contributor to late deliveries. If we could eliminate the customer not notified issue, then the inventory investment reduction would be $349,717. A rapid improvement team would use this type of analysis to identify the most impactful root causes and solutions.
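The cascading arithmetic across Tables 5.5 through 5.7 can be checked with a short script using the rounded percentages from the tables; small rounding differences from the table dollar figures are expected.

```python
# Cascading contribution of a third-level root cause to the $1,000,000 issue-related inventory pool
share_canceled_orders = 0.64   # Table 5.5: canceled orders' share of issue-related inventory
share_late_delivery = 0.81     # Table 5.6: late deliveries' share of canceled orders
share_not_notified = 0.68      # Table 5.7: "customer not notified" share of late deliveries

print(f"{share_canceled_orders * share_late_delivery:.1%} of the issue pool "
      "is traceable to late deliveries")                      # about 51.8%
impact = 1_000_000 * share_canceled_orders * share_late_delivery * share_not_notified
print(f"${impact:,.0f} traceable to customers not being notified")   # roughly $350,000
```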
MAPPING HIGH-VOLUME TRANSACTIONS
In manufacturing, data collection historically focused on taking process snapshots. The processes varied from batch to continuous. Rapid improvement teams would move through the process to take a metric snapshot and add this information either to a floor layout, if the focus was to improve a work area, or to a VFM built for the total process. The metrics gathered included throughput, inventory, and others discussed in earlier chapters. Data collection has become more complicated for virtual processes that collect, analyze, and report through IT systems and applications. Data collection requires using software tools to extract data at various parts of the process. Also, the size of databases is large, data flow is high velocity, and data formats vary. This section discusses data collection for high-velocity transaction processes. The focus is on the standardization of process mapping and the virtualization of these maps to align them with analytical tools that map data flows through IT systems. This includes rework paths and the systems and data fields touched by users at system portals or user interfaces. Collecting data using algorithms, i.e., bots, and manually shadowing the people doing the manual work associated with an automated process is now the way process improvement teams work, especially for service, office, supply chain, and supporting processes. This impacts the data collection as well as the skills needed on the team.
Process mapping has historically been done using different mapping standards that vary with the organization and team. The most useful mapping formats can be used by analytical software to verify the process relationships and rules and to trace transactions through an end-to-end process. These traces show how different personas interact with operations within the end-to-end process to complete work tasks. Different workers, represented in aggregate as a persona, e.g., purchasing agent or salesperson, will interact with a process differently, i.e., they take different paths to complete the same work. There should be one optimum work or transaction path for a persona and use case or job.
Correctly formatted process maps enable algorithms to detect this variation and measure the duration and sequence of work tasks in different software applications. Analytical software transforms a process map that is in a standard format by overlaying a transaction trace with task completion times on the map. Properly constructed, the outcome is a process model that reflects the actual process and dynamically shows how work moves through it. An advantage of this type of model is that inputs can be changed and the outputs measured to calculate optimum process settings and remove variation in doing work tasks. As an example, if a call center process were modeled this way, then changes to incoming transaction volume, time to service, and staffing levels would be useful inputs to the model, with customer waiting time as the output.
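As a hedged illustration of such a what-if model, the sketch below simulates a single first-come-first-served queue with exponential arrival and service times. The volumes, service time, and staffing levels are invented for the example; a real study would calibrate them from traced transaction data.

```python
import random

def simulate_call_center(arrivals_per_hour, avg_service_min, agents, hours=8, seed=1):
    """Toy what-if model: exponential inter-arrival and service times, one FCFS queue.
    Returns the average customer waiting time in minutes."""
    random.seed(seed)
    t, horizon = 0.0, hours * 60.0
    agent_free_at = [0.0] * agents            # time at which each agent becomes available
    waits = []
    while True:
        t += random.expovariate(arrivals_per_hour / 60.0)   # next arrival (minutes)
        if t > horizon:
            break
        i = min(range(agents), key=lambda k: agent_free_at[k])  # earliest-free agent
        start = max(t, agent_free_at[i])
        waits.append(start - t)
        agent_free_at[i] = start + random.expovariate(1.0 / avg_service_min)
    return sum(waits) / len(waits) if waits else 0.0

# What-if: 60 calls/hour, 4-minute average handle time, varying staffing levels
for staff in (3, 4, 5):
    print(staff, "agents ->", round(simulate_call_center(60, 4, staff), 1), "min average wait")
```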
Process maps need to be refreshed and verified for accuracy. This is done by walking the process as was done when they were initially created. This hands-on approach is critical for building an accurate process map, either as a physical or a virtual representation. Up to now we have discussed primarily the VFM. A VFM is constructed hands-on by the people doing the work. Its construction follows standardized rules with common symbols representing work objects, operations, and other components of a process. But a VFM or any other format needs to be translated into BPMN (Business Process Model and Notation), EPC (Event-Driven Process Chain), or another standardized format. Figure 5.21 shows an example of the EPC components. The EPC method is like the BPMN method, and they can be converted one to the other with minimal loss of information. The BPMN model is more efficient than EPC, with 30% less complexity. The International Organization for Standardization (ISO) has approved BPMN in different versions.
The BPMN format models processes using flow objects and connecting objects as well as swim lanes and artifacts. Flow objects include events, activities, and gateways. Events are starting inputs or outcomes from an activity. An ending event would be an outcome such as a completed report, a name added to an invoice, or a completed inspection. The activity is the work task done to produce the outcome. It could also be a subprocess with its own activities, or a call activity that reuses a previous activity. A gateway can route a single ending event to a single starting event, a single ending event to several starting events, or several ending events to a single starting event. There are other components, rules, and conditions used by the BPMN method. In summary, BPMN is a structured process-mapping methodology that can be used by software algorithms to check model accuracy and to trace and measure transaction metrics, including lead time. Simulations can also be done using the model.
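To make the flow-object idea concrete, the sketch below represents a tiny accounts-payable-style flow with start and end events, activities, and one gateway as plain Python objects. The node names and fields are illustrative only; they are not BPMN-standard element definitions or any particular tool's API.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    kind: str                      # "start_event", "activity", "gateway", or "end_event"
    name: str
    next_nodes: list = field(default_factory=list)

# Hypothetical accounts payable flow: receive invoice, enter data, approve if needed, pay
invoice_received = Node("e1", "start_event", "Invoice received")
enter_invoice = Node("a1", "activity", "Enter invoice data")
approval_gateway = Node("g1", "gateway", "Amount above approval limit?")
manager_approval = Node("a2", "activity", "Obtain manager approval")
invoice_paid = Node("e2", "end_event", "Invoice paid")

invoice_received.next_nodes = [enter_invoice]
enter_invoice.next_nodes = [approval_gateway]
approval_gateway.next_nodes = [manager_approval, invoice_paid]   # exclusive branch
manager_approval.next_nodes = [invoice_paid]

def walk(node, visited=None):
    """Trace every element reachable from a starting event, printing its kind and name."""
    visited = visited or set()
    if node.node_id in visited:
        return
    visited.add(node.node_id)
    print(f"{node.kind:>12}: {node.name}")
    for nxt in node.next_nodes:
        walk(nxt, visited)

walk(invoice_received)
```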
The SIPOC shown in Figure 5.10 is a good basis from which to begin mapping a service process. It follows the BPMN approach relative to a starting event, i.e., the activity or process, and the ending event or output. Other information such as conditional routing can also be added at the next lower detail level. A service process has customer-facing and back-end supporting components. Service processes without high automation or self-service functions are characterized by high degrees of customer interaction, higher variation, higher service cost, low efficiency, high skill requirements, and variable capacity and demand. The implementation of Robotic Process Automation (RPA) to standardize routine work tasks, such as gathering information and adding data during a customer transaction, can reduce or eliminate these characteristics while still providing differentiated customer service through the manual components of the interaction. Artificial intelligence and self-service can help bridge final service gaps to provide differentiated service while also gaining the efficiency of back-office processes. In contrast, back-office operations are highly standardized with low variation, low transaction cost, low skill requirements because work tasks are routine, and predictable demand and capacity. Another differentiating attribute is that customer-facing operations are near customers, whereas back-office operations are remote from the customer.

FIGURE 5.21
Event-Driven Process Chain (EPC) Shapes
Data collection will be different depending on where the improvement team is focused. The data collection work will be more standardized, and the reporting systems more accurate, in back-office supporting processes, although in-person job shadowing and process analysis will always be needed to validate information. In contrast, data collection in the customer-facing portion of a process will be more complicated because there will be several personas and use cases, even if portions of the process are automated. The process will be more variable either because associates do the same work with variation or because customers engage the service in a non-standard way. If a team needs to create a process map for modeling in a system with both manual and automated work tasks, then some data and documentation will be taken from systems and the rest from walking the process. Then the BPMN method should be used to standardize the information.
There are several software products developed to support the BPMN standard. It is easy to create business process models using them, and the mapping is intelligent in that if a component is left out, the software adds the missing component to ensure the model is consistent from the starting event to the ending event. There is even more sophisticated software that integrates with enterprise platforms and can trace transactions through systems, including the persona doing the work and the paths taken to complete it. The transactions are also time stamped. This software is enterprise level and can record hundreds of thousands of transactions. Its dashboards are also easily configurable.
Formal process mapping is useful for moving to the next step in process efficiency, i.e., RPA. If it is known that a persona such as a salesperson takes a certain amount of time moving through a certain path, the software may show a shorter one. Or, if a path has repetitive work tasks, it may be useful to apply RPA to have the work done automatically. This enables a rapid improvement team to move quickly from gathering process data and documentation through analysis to solutions.
DATA COLLECTION FOR SERVICES
Table 5.8 summarizes data collection strategies for services and supporting processes. The VFM has already been discussed as a major tool for gathering process data that typically is not available in current operational reports. Collecting data to estimate production rates is done by counting the transactions exiting the process and dividing by the hours of operation. This can be done by operation, time of day, job type, shift, employee, customer, and other demographic factors. Ideally, each transaction can be automatically time stamped throughout the process. People are "shadowed" for several days, and their activity times are recorded in 15-minute increments. With respect to equipment, materials put into the equipment could be measured, or counters designed into the equipment can be checked to calculate production rates.
Scrap, rework, and downtime percentages are estimated by checking financial and operational reports for material or direct labor waste. Operational audits are done using interviews, email audits, and shadowing, then counting the number of transactions that could not be used in production (scrap) or had to be reworked divided by the total number of transactions. For downtime, the estimate would be the amount of time waiting divided by total production time. Measurement of the other key operational metrics is described in Table 5.8.
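The counting arithmetic described above reduces to a few lines; the transaction count, hours of operation, and waiting minutes below are hypothetical.

```python
# Production rate and downtime estimates from simple counts (hypothetical numbers)
transactions_completed = 420
hours_of_operation = 7.5
production_rate = transactions_completed / (hours_of_operation * 60)   # transactions per minute
print(f"production rate: {production_rate:.2f} transactions per minute")

waiting_minutes = 95
total_production_minutes = hours_of_operation * 60
print(f"downtime: {waiting_minutes / total_production_minutes:.0%} of available time")
```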
MEASURING PROCESS COMPLEXITY
Operational complexity is measured using the metrics shown in Table 5.9. These include item proliferation such as components, products, services, people and machines, a high process or product non-value content, long lead times that mask process issues, high variation in demand or any part of a process, low productivity that requires expediting work, high cost that is the result of process inefficiencies, and near misses relative to schedule, milestones, personal injury, and similar events. Data collected for these metrics helps reduce complexity when root causes are eliminated from a process.
TABLE 5.8
Data Collection for Services
Metric | Data Collection Strategy | How to Do It
1. VA/NVA/BVA | Create a value flow map (VFM) of a process workflow or a floor layout of a single work area. | Bring the team together and create the VFM using sticky notes on a wall. Then "walk the process" to verify the VFM. Or build the map virtually in EPC or BPMN format. If virtual, apply analytical algorithms to trace transactions and measure performance.
2. Production rate (units/minute) or operation cycle time | Count the transactions exiting the process and divide by the hours of operation. This can be done by operation, time of day, job type, shift, employee, customer, and other demographical factors, which will enable a complete analysis. | Manual operations can be measured using data collection forms. If automation is available, transactions may be time stamped and algorithms applied for analysis. Virtual transactions should be tracked using algorithms. Alternatively, an audit could be done on completed transactions in emails by operation (sent mail). Personal calendars will show meeting times, which in many situations are non-value adding. People can also be "shadowed" for several days and their activity times recorded at 15-minute increments. With respect to equipment, materials into the equipment could be measured or counters designed into equipment can be checked to calculate production rates.
3. Scrap % | Count the number of transactions that needed to be replaced, i.e., not used in production, e.g., reports that were not required, marketing prototypes not sold, etc. | Check financial and operational reports for material or direct labor waste. Audit operations through interviews, email audits, and shadowing to identify scrap.
4. Rework % | Count the number of transactions that passed through process operations more than once, i.e., any work task with a prefix of "re" such as re-analyze, re-inspect, redo, etc. | Check financial and operational reports for material or direct labor rework. Audit operations through interviews, email audits, and shadowing to identify rework.
5. Downtime % | Count the time people wait for work, the system is idle, or equipment is not available. | Check financial and operational reports for material, direct labor, and other expenses related to downtime. Audit operations through interviews, emails, and shadowing to identify downtime.
6. Capacity (units/minute) | Count the number of units per time produced at the system's bottleneck operation. | Use a VFM with operational reports to identify the bottleneck capacity under typical production conditions. If operational reports are not available, conduct audits to obtain data. For virtual processes apply analytics to measure operational capacity.
7. Setup time | Measure the time to set up a job at every operation and especially at the bottleneck. | Check operational reports for job setup times. Audit operations through interviews, emails, and shadowing to identify job setup.
8. Inventory (units in queue) | Measure the work waiting to be done at every operation. | Check operational reports for inventory levels (jobs waiting to be completed) and estimate how long it will take to complete these jobs. Audit operations through interviews, email audits, and shadowing to identify inventory levels.
9. Floor area | Measure the floor area used by a process including equipment and people. | Review floor layouts and calculate area.
Once complexity is measured, projects can be created to reduce it using many of the tools and methods discussed in this or earlier chapters. These include simplifying, standardizing, and mistake-proofing operations and applying specific Lean Six Sigma methods if applicable. These include process batches, mixed model production, and others. Other complexity reduction strategies are outsourcing or insourcing to reduce the number of internally produced product and services, combining design features and functions, or eliminating non-essential items and part numbers to simplify operations.
CUSTOMER EXPERIENCE MAPPING
Customer Experience Mapping (CEM) was briefly discussed in the first chapter. In this chapter we will focus on how to use CEM to collect relevant information from customers to improve their experience. In most situations, an internal team uses historical information and experience to substitute their voice for the customer's voice. While this is easy to do, it is not accurate. The analogy would be to have managers build a VFM. They have a good idea of how the work is done, but because they do not do the work, their opinions are not completely accurate, especially for work they do not know is being performed as workarounds, i.e., the hidden factory. This is why the people doing the work every day are asked to build the VFM. The CEM workshop is based on the same principle and brings in customer employees who perform the relevant roles, i.e., personas interacting with the supplier's employees at different points in the product and service experience.
Figure 5.22 shows the CEM concept. A customer interacts with and experiences different parts of the supplier's organization as they research products and services to determine the best one based on performance, cost, and other factors. The customer could access a supplier's website or discuss the potential purchase with salespeople. These subprocesses have operations and touch points. It is important to understand which operations meet customer requirements, which ones have gaps that need to be closed, and where the experience can be further enhanced for customer loyalty. In the research phase, the goal is ease of finding relevant information for products and services under consideration for purchase. The customer's employees with different roles, i.e., personas such as purchasing, legal, engineering, and others, have differing experiences and expectations.
The next phase after selecting the supplier, signing contracts, and working to onboard the product or service is ease of purchase and installation. These are also subprocesses with operations and customer touch points. In the CEM workshop, the team walks these in sequence with the customer personas to identify operations exceeding expectations and those with performance gaps. The gaps will be prioritized at the end of the workshop and assigned as Lean Six Sigma, Agile Project Management (APM), or big data analytics projects. These can be executed as rapid improvement events or longer-term projects of different types.
TABLE 5.9
Measuring Process Complexity
Complexity Measurement | Definition | Tools/Methods for Analysis | Strategy for Reducing Complexity
Item proliferation | A high number of raw material, work-in-process (WIP), or finished goods items. | Analyze the types and numbers of items using graphs, e.g., histograms, Pareto charts, and statistics. | Outsource or insource, reduce the number of suppliers, combine features and functions, eliminate non-essential items and part numbers.
High percentage of NVA operations (time) | NVA operations are not needed by a customer, must be reworked, or do not physically change the item. | Value stream and value flow mapping, 5S, mistake-proofing. | Simplify, standardize, and mistake-proof operations.
Long lead times | Varies by application. | Value flow analysis, process wastes, total preventive maintenance (TPM), and other methods. | Simplify, standardize, and mistake-proof operations; process batches, mixed model production, and others.
High demand variation | Varies by application, but demand which exceeds the average for a time period by more than ±10%. | Histograms, Pareto charts, and time series models. | Reduce drivers of demand variation using a root cause analysis. Solutions vary by application.
Low productivity | Complicated relationship, but generally outputs/inputs or revenue/costs adjusted for inflation and measured year over year. | Analyze reasons for low revenue or high costs. | Reduce drivers of low productivity using a root cause analysis. Solutions vary by application.
Low asset utilization | Turns ratios: annual COGS/average investment. | Asset ratios, e.g., inventory turns = COGS/average monthly inventory investment. | Reduce lead time (improve quality, reduce lot sizes, etc.) or reduce demand variation.
High unit costs | Costs exceeding standard, industry average, or entitlement, e.g., what is possible through changes of design and process. | Analyze the types and numbers of items using graphs, e.g., histograms, Pareto charts, and statistics, as well as value analysis. | Outsource or insource, reduce the number of suppliers, combine features and functions, simplify, standardize, and apply best-in-class design practices.
Near misses | Events that could have but did not result in an accident. | Reporting systems including policies and procedures and analysis of the types and numbers of near misses using graphs, e.g., histograms, Pareto charts, and statistics. | Reduce drivers of near misses using a root cause analysis. Solutions vary by application.
Known issues | Safety or other issues that have been identified by audits, previous analysis, or other methods. | Reporting systems including policies and procedures and analysis of the types and numbers of known issues using graphs, e.g., histograms, Pareto charts, and statistics. | Create continuous improvement (CI) projects.
Accidents | Reportable incidents that resulted in injury, death, or property damage. | Reporting systems including policies and procedures and analysis of the types and numbers of accidents using graphs, e.g., histograms, Pareto charts, and statistics. | Create CI projects.

FIGURE 5.22
Customer Experience Mapping
The next major step in the customer experience is using the product or service. This is a complicated step that may have a duration of months to several years. An example is disposable products such as cleaning supplies that are consumed in weeks or months versus large appliances that are used for several years. This is the most complicated part of the customer experience because it depends on design, supplier production, and service capabilities including maintenance contracts. There will be several supplier teams interacting with the customer over many years. Each of these is a subprocess having different operations. The CEM workshop evaluates each subprocess and its operations relative to the customer persona experience.
Finally, at the end of the useful life, there are disposal or recycling actions that occur. The supplier should make this experience easy. Examples include smart products and services that alert the supplier toward the end of their life, providing disposable packaging with the purchase, i.e., for return to the supplier, or incentives and rebates to properly dispose or recycle. The goal is also to make the customer experience seamless at the end of the useful life to promote a refresh or the purchase of new products or services.
Customer Experience Mapping is a powerful way to align supplier operational capabilities with customer needs to improve product and service design and improve internal processes to increase competitiveness. Another advantage of CEM is that parts of the process will be found to be NVA from the customer perspective. These can be eliminated to reduce cost, lead times, and internal operational complexity.
SUMMARY
As part of the prework and during the rapid improvement event, there needs to be an efficient data collection strategy. Data collection includes data-gathering strategies, analytical tools and methods, and process sampling. Several data collection templates were discussed to enable an efficient collection of operational metrics. The goal was to gather information to identify the root causes for process issues and to confirm improvements were successful. Effective data collection also enables immediate process improvements as well as the creation of project charters for assignment to future rapid improvement events or as standalone projects. Job shadowing is an important method to collect process data. It is easily adapted to analyzing complex work. It is especially useful for collecting data on non-standard work tasks in virtual systems with people. Examples include service and office workers and other professionals. It is useful for data collection and analysis in processes characterized by many meetings, computer use, and repetitive tasks. Job shadowing is commonly used to track the time components of complex professional jobs and work tasks.
This chapter also provided simple and easy to use data collection and analysis tools and methods. These included various types of process map, metric summarization templates, shadowing templates, a spaghetti diagram, checklists, a C&E diagram, a 5-Why analysis, a histogram, a Pareto chart, a box plot, scatter plots, time series graphs, and control charts. Examples using some of the tools were presented at the end of the chapter. In addition, the team collects management reports with financial and operational data, process maps, floor layouts, and similar sources of information to conduct an analysis of the root causes for process issues. Once identified, root causes can be eliminated, and an improvement team develops solutions or countermeasures using the Lean Six Sigma methods. This chapter also presented several useful templates to aid data collection and analysis. These can be modified as needed.
An important data collection tool for supply chains is the VSM or if mapping a process, we use a VFM. A VSM creates a visual model of the end-to-end process, i.e., supplier to customer including key operational metrics. The advantages of using a “brown paper” exercise were also shown. This is a hands-on and collaborative approach for creating a VSM or VFM. Overlaying operational metrics onto a VSM or VFM is important to identify gaps. Additional analyses are also applied by the team. These include value analysis and the seven process wastes. Once the map is verified by “walking” the process, it becomes useful for further analysis as well as documentation to support future quality and operational audits.
In manufacturing, data collection using VSMs and VFMs historically focused on taking process snapshots. The processes varied from batch to continuous. Rapid improvement teams would use these maps to move through the process, take an operational metric snapshot, and add this information either to a floor layout, if the focus was to improve a work area, or to the VSM or VFM. The metrics gathered included throughput, inventory, and others discussed in earlier chapters, but at a supply chain or process level. Data collection has now become more complicated for virtual processes connecting IT systems and applications. The data collection requires using software tools that extract data at various parts of the process. Because of the size of the databases, most processes are now high volume and high velocity.
This chapter discussed data collection for high-volume transaction processes. The focus was on standardization of process mapping and virtualization of these maps to align them to analytical tools to show how data flows through systems. This includes rework paths and the systems and data fields touched by users at system portals or user interfaces. Collecting data using algorithms, i.e., bots, and manually shadowing the people doing the manual work associated with the process is now the way improvement teams work. This impacts the upfront planning and data collection as well as the skills needed on the team.
We also discussed operational complexity. It is measured using the metrics shown in Table 5.9. These include item proliferation such as components, products, services, people and machines, a high process or product non-value content, long lead times that mask process issues, high variation in demand or any part of a process, low productivity that requires expediting work, high cost that is the result of process inefficiencies, near misses relative to schedule, milestones, personal injury, and similar events. Data collected for these metrics helps reduce complexity when root causes are eliminated from a process.
Customer Experience Mapping was also shown to be useful for collecting relevant information from customers to improve their experience. In most situations, an internal team uses historical information and experience to substitute their voice for the customer's voice. While this is easy to do, it is not accurate. The analogy would be to have managers build a VFM. They have a good idea of how the work is done, but because they do not do the work, their opinions are inaccurate, especially for work they do not know is being performed as workarounds, i.e., the hidden factory. This is why the people doing the work every day are asked to build the VFM. The CEM workshop is based on the same principle and brings in customer employees who perform the customer roles, i.e., personas interacting with the supplier's employees at different points in the product and service experience.