
An economic comparison of CUSUM and Shewhart charts

George Nenes & George Tagaras
Pages 133-146 | Received 01 Jan 2006, Accepted 01 May 2007, Published online: 14 Dec 2007

Abstract

This paper compares the economic performance of CUSUM and Shewhart schemes for monitoring the process mean. We develop new simple models for the economic design of Shewhart schemes and more accurate ways to evaluate the economic performance of CUSUM schemes. The results of the comparative analysis show that the economic advantage of using a CUSUM scheme rather than the simpler Shewhart chart is substantial only when a single measurement is available at each sampling instance, i.e., only when the sample size is always n = 1, or when the sample size is constrained to low values.

1. Introduction

For many years the research community has allocated a considerable part of its effort to the design of effective quality control charts. The effectiveness of the charts is usually evaluated from a statistical point of view, but it is increasingly recognized that since it is the bottom line that matters, the control chart design must be primarily evaluated using economic criteria. The objective of this paper is to provide a thorough economic comparison between the most frequently used control charts, i.e., the standard Shewhart chart and the CUSUM chart for monitoring the mean of a quality characteristic.

Many researchers have developed and proposed models for the economic optimization of control chart design since the seminal work of Duncan (1956), who was the first to introduce an economic model for the design of the Shewhart-type chart. In Duncan's model the process is represented as a series of stochastically identical cycles, ending with the restoration of the process after the detection of an actual occurrence of some assignable cause. The average cost per time unit is computed as the ratio of the average cost per cycle to the average duration of a cycle. The optimal chart parameters, i.e., sampling interval h, sample size n and control limit coefficient k, are those that minimize the average cost per time unit. This general approach has been extended, with variations, to more complex charts. The economic design of the CUSUM chart was first studied by Taylor (1968). He developed a model similar to that of Duncan (1956) but without optimizing the sample size and the sampling interval. Goel and Wu (1973) and Chiu (1974) proposed similar models and algorithms for determining the economically optimum design of CUSUM charts and reported some results of sensitivity analyses.

A general model for the economic design of control charts has been proposed by Lorenzen and Vance (1986). Their approach may be used for the selection of the optimal parameters of a variety of charts, including Shewhart, CUSUM and EWMA charts, as long as certain statistical measures of performance, for example the Average Run Length (ARL), can be computed for any combination of chart parameters. Simpson and Keats (1995) used two-level fractional factorial designs to identify highly significant parameters in the economic model of Lorenzen and Vance (1986) as applied in the case of a CUSUM chart.

All the above papers and the majority of the literature on economic control chart design use the expected cost per time unit as the optimality criterion. An alternative approach, followed by Knappenberger and Grandage (1969) and Saniga (1977) among others, adopts the expected cost per item as the optimality criterion. However, both approaches lead to almost identical optimal chart designs (Montgomery, 1980).

Over the years there have been only a few papers that compare Shewhart and CUSUM charts on economic grounds. Arnold and Von Collani (1987) developed a method to determine a near-optimal economic design and then used it to make comparisons between Shewhart and non-Shewhart charts such as CUSUM charts. They used a loss-per-item cost function, found the optimal design parameters of the Shewhart chart and then followed a three-step procedure to determine a near-optimal non-Shewhart design: in step 1 they use the optimal sample size of the Shewhart chart as the sample size of the non-Shewhart chart; in step 2 they determine the reference value and the control limit of the non-Shewhart chart by minimizing the out-of-control ARL_δ, keeping the value of the in-control ARL_0 equal to that of the Shewhart chart; and finally in step 3, for the sample size of step 1 and the reference value and control limit of step 2, they find the sampling interval that minimizes the loss function. They concluded that, under certain assumptions, Shewhart charts perform very well and cannot be improved significantly by other, more complicated charts, such as CUSUM charts.

Nantawong et al. (1989) performed an experiment to evaluate the effect of three factors (sample size, sampling interval and magnitude of the shift) on three control charts, namely the Shewhart, CUSUM and geometric moving-average charts, using profit as the evaluation criterion but without optimizing any of the three charts.

Keats and Simpson (1994) used designed experiments to identify the cost and model parameters that have a significant impact on the average cost of CUSUM and Shewhart charts. They concluded that CUSUM charts are significantly more economical than Shewhart charts, especially for monitoring processes subject to small shifts. Ho and Case (1994) have also undertaken a brief economic comparison between Shewhart, CUSUM and EWMA charts, concluding that both CUSUM and EWMA charts have a much better economic performance than Shewhart charts.

From the above exposition it appears that the results of previous investigations regarding the relative economic effectiveness of Shewhart and CUSUM charts are inconclusive. For example, although Arnold and Von Collani (1987) state that Shewhart charts cannot be improved significantly by other, more complicated charts, Ho and Case (1994) and Keats and Simpson (1994) conclude that the anticipated savings from using a CUSUM chart rather than a Shewhart one are substantial. This contradiction may be partially explained by the fact that most models for the economic evaluation of CUSUM schemes use approximations of the ARLs in the respective cost functions. Specifically, most models use the zero-state ARL for detecting a δ-shift in the mean (ARL_δ), which is computed assuming that, at the time of the shift, the value of the CUSUM statistic is equal to zero. However, when a shift occurs, the process under study has typically been operating for some time and the value of the CUSUM statistic may not be zero. In fact, the CUSUM statistic at the time of the shift is a random variable with a steady-state distribution. Therefore, the cost function of the CUSUM chart is computed more accurately using the steady-state ARL_δ, which is the weighted average of the ARL_δ values conditional on the value of the CUSUM statistic when the shift occurs, with the weights being the probabilities of the steady-state distribution of the CUSUM values (see Crosier (1986)). This is a difficult computation, which the model presented in this paper avoids by using a somewhat different approach in formulating the cost functions.

The purpose of this paper is to resolve the existing ambiguity regarding the relative economic effectiveness of Shewhart and CUSUM charts. The vehicle is a new, accurate model for the computation of the average quality-related cost for the case of monitoring a process mean using a CUSUM chart. Using this tool the paper proceeds to a systematic numerical investigation of the conditions (problem characteristics) under which it is worth monitoring the process mean with a CUSUM chart instead of the simpler Shewhart chart. The main finding of this investigation is that the economic superiority of the CUSUM scheme is significant only when the sample sizes are restricted to be unitary, i.e., when rational grouping of observations is infeasible, or when they are restricted to be very small.

The next section describes the problem in detail and presents the proposed models for the economic design of Shewhart and CUSUM charts. Section 3 presents and discusses the results of the numerical investigation and Section 4 summarizes the conclusions of this research.

2. Problem setting and cost models

We consider a production process that operates indefinitely. There is a single quality characteristic X that must be monitored on-line, which is assumed to be a normally distributed random variable with target value μ0 and variance σ². Note that in reality the variance of X is not known, but we assume that it can be accurately estimated from sufficient past data. Also note that the assumption of normality of X, which is typical in the majority of the related literature, is practically innocuous when the sample sizes are not too small, because then the sample means are approximately normally distributed by the central limit theorem anyway. However, if the distribution of X exhibits substantial departures from normality and the sample sizes are small or unitary, then the analysis of the relative economic performance of CUSUM and Shewhart charts must be modified accordingly; such an analysis is beyond the scope of this paper.

The process starts in a state of statistical control (“in control”) with E(X) = μ0 and is subject to the occurrence of two assignable causes (cause 1 and cause 2) that bring the process to an out-of-control state by shifting the mean of the quality characteristic to μ1 = μ0 + δσ or to μ2 = μ0 − δσ without affecting the standard deviation. The times until the occurrence of assignable causes 1 and 2 are assumed to be independent exponentially distributed random variables with means 1/λ1 and 1/λ2 time units respectively. Therefore, the expected time until the occurrence of any assignable cause is 1/λ, where λ = λ1 + λ2. The probability that an assignable cause occurs in an interval of h time units, given that the interval starts in statistical control, is a function of h but for simplicity it will be denoted by γ:

γ = 1 − exp(−λh).  (1)

The probability that assignable cause j (j = 1 or 2) occurs before the other cause in an interval h that starts in control is

γ_j = (λ_j/λ)(1 − exp(−λh)) = (λ_j/λ)γ.  (2)
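As a quick illustration, the sketch below (with assumed rates λ1 = 0.02, λ2 = 0.03 and h = 1; all numbers are arbitrary) checks Equations (1) and (2) against a direct simulation of the two competing exponential occurrence times.

```python
# Monte Carlo check of gamma = 1 - exp(-lambda*h) and gamma_j = (lambda_j/lambda)*gamma.
import numpy as np

rng = np.random.default_rng(0)
lam1, lam2, h = 0.02, 0.03, 1.0        # assumed rates and sampling interval
lam = lam1 + lam2

t1 = rng.exponential(1 / lam1, 1_000_000)   # occurrence time of cause 1
t2 = rng.exponential(1 / lam2, 1_000_000)   # occurrence time of cause 2

print(np.mean(np.minimum(t1, t2) < h), 1 - np.exp(-lam * h))                  # gamma
print(np.mean((t1 < h) & (t1 < t2)), (lam1 / lam) * (1 - np.exp(-lam * h)))   # gamma_1
```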

It is assumed that after the occurrence of assignable cause j the process mean will stay at μ_j until it is restored to μ0.

At each sampling instance a sample of size n is taken, the sample mean is computed and, depending on its value and the chart statistic, an alarm may be issued. If the chart issues no alarm, no action is taken and the next sampling instance is after exactly h time units. If an alarm is issued, it is followed by an investigation and then restoration to the in-control state, if an assignable cause is detected. The process may either continue operating or be shut down during search and repair. It is assumed that the time to sample and investigate after a false alarm is less than h. Consequently, the sampling process stops during investigation and restoration because sampling is useless when the process is known to operate in the out-of-control state. After the detection and removal of an actual assignable cause the process resumes its operation with μ = μ0; the next sample is then taken after h time units.

The sampling and inspection cost is c per unit and the fixed cost per sample is b. The cost of a false alarm is L_0, and the cost of restoring the process after a true alarm is L_1 ≥ L_0. The additional expected cost per time unit of operation when the process operates in an out-of-control state is M. Table 1 contains the notation that has been introduced so far, as well as the other notation that is used in this paper.

Table 1 Nomenclature

Lorenzen and Vance (1986) have developed a general economic model that is representative of a large class of models, which express the process operation and monitoring as a succession of stochastically identical cycles and compute the expected cost per time unit as the ratio of the expected cycle cost to the expected cycle length using the renewal-reward theorem. An alternative modeling approach is to express the evolution of the process by means of Markov chains, as in the works of Knappenberger and Grandage (1969), Saniga (1977) and Nikolaidis et al. (1997). These papers deal with the economic minimization of the expected cost per unit of output. They use the steady-state probabilities that the process is in each particular state at the time of a sample, and the fractions of time spent in each particular state during a sampling interval.

In this paper we also employ Markov chains to develop the models for the economic optimization of both Shewhart and CUSUM charts. Specifically, we use a two-dimensional discrete-time Markov chain that describes: (i) the actual state of the process (operation under statistical control or under the effect of an assignable cause); and (ii) the decision that is made at each sampling instance (for Shewhart charts) or the actual value of the CUSUM statistic (for CUSUM charts). We start by describing the Markov model for the Shewhart chart.

2.1. Shewhart-type chart

For the case of a Shewhart chart with control limits μ0 ± k_s σ/√n, the probability of a type I error at each sample is α = 2Φ(−k_s), whereas the probability of a type II error is β = Φ(k_s − δ√n) − Φ(−k_s − δ√n), where Φ(·) denotes the standard normal cumulative distribution function.
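These two probabilities are straightforward to compute with a standard normal CDF; the following sketch (parameter values are arbitrary) assumes scipy is available.

```python
# Type I and type II error probabilities of the Shewhart chart with limit coefficient k_s.
from scipy.stats import norm

def shewhart_errors(k_s: float, delta: float, n: int):
    alpha = 2 * norm.cdf(-k_s)                       # false-alarm probability per sample
    root_n = n ** 0.5
    beta = norm.cdf(k_s - delta * root_n) - norm.cdf(-k_s - delta * root_n)  # miss probability
    return alpha, beta

print(shewhart_errors(k_s=3.0, delta=0.5, n=5))
```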

Let Y_t denote the actual state of the process at sampling instance t, prior to investigation and restoration if needed, where Y_t = 0 denotes the in-control state (μ = μ0), Y_t = 1 refers to the out-of-control state where μ = μ1 = μ0 + δσ and Y_t = 2 refers to the out-of-control state where μ = μ2 = μ0 − δσ. If at sampling instance t the absolute value of the standardized sample mean z_t = (x̄_t − μ0)√n/σ exceeds the control limit k_s, then a signal is issued and the process is investigated; this action/decision is indicated by a_t = 1. Otherwise (|z_t| < k_s), no action is taken: a_t = 0. The discrete-time stochastic model for the process and its monitoring scheme is based on the combination of the actual state of the process Y_t and the value of a_t at every t. The pair (Y_t, a_t) constitutes the state of a two-dimensional Discrete-Time Markov Chain (DTMC) with the special feature that each step may have a different duration when measured in actual time units. There are six possible states and, with the states ordered as (0,0), (0,1), (1,0), (1,1), (2,0), (2,1), the transition probability matrix is as follows:

P = [ (1−γ)(1−α)   (1−γ)α   γ1β   γ1(1−β)   γ2β   γ2(1−β)
      (1−γ)(1−α)   (1−γ)α   γ1β   γ1(1−β)   γ2β   γ2(1−β)
      0            0        β     1−β       0     0
      (1−γ)(1−α)   (1−γ)α   γ1β   γ1(1−β)   γ2β   γ2(1−β)
      0            0        0     0         β     1−β
      (1−γ)(1−α)   (1−γ)α   γ1β   γ1(1−β)   γ2β   γ2(1−β) ].  (3)

The rows corresponding to states (0,1), (1,1) and (2,1) coincide with the row of (0,0) because, after a (false or true) alarm and the associated restoration if needed, the next interval starts with the process in control.

The steady-state probabilities of (Y_t = i, a_t = j), denoted π_ij (i = 0, 1, 2, j = 0, 1), are obtained by solving the respective system of linear steady-state equations and can be used to evaluate the long-run expected cost per time unit as the ratio of the average cost of a transition step over its average duration; if C_ij is the expected cost and T_ij is the duration of a transition step from state (Y_t = i, a_t = j) of the DTMC to some other or the same state at the next sampling instance, then the long-run expected cost per time unit is

ECT_1 = (Σ_i Σ_j π_ij C_ij) / (Σ_i Σ_j π_ij T_ij).  (4)

More specifically, the expected costs C_ij between two successive sampling epochs associated with the departure from each of the six possible states of the Markov chain are

C_00 = cn + b + M(h − γ/λ),
C_01 = cn + b + L_0 + M(h − γ/λ),
C_10 = C_20 = cn + b + Mh,
C_11 = C_21 = cn + b + L_1 + M(gn + δ1 T_1 + δ2 T_2) + M(h − γ/λ).

The respective time lengths T_ij are given in Fig. 1. If the chart issues no signal (Y_t = 0, 1, 2 and a_t = 0) the time until the next sampling instance is just h. If there is a signal, the time length is increased by the times of investigation and restoration, unless the alarm is false and the process continues its operation during the investigation (δ1 = 1); in that case T_0 becomes part of h, assuming it does not exceed h − gn.

Fig. 1 Time between two successive sampling instances associated with the departure from each of the six states of the Markov chain.

Note that the term cn + b appears in all the above cost expressions C_ij because the sampling cost is the same regardless of the state (Y_t = i, a_t = j). The cost of a false alarm, L_0, appears only in C_01 because state (0, 1) is the only one associated with a false alarm. Similarly, the cost of restoring the process after a true alarm, L_1, appears only in C_11 and C_21.

The expected additional cost of operating under the effect of an assignable cause during an interval T_ij is somewhat less obvious, as it depends on the expected time that the process operates in an out-of-control state during that interval. This cost is Mh in intervals starting from states (Y_t = 1, a_t = 0) and (Y_t = 2, a_t = 0) of the DTMC, because these states signify a type II error of the chart and consequently out-of-control operation for the entire interval of length h until the next sample. Cost M(gn + δ1 T_1 + δ2 T_2) is incurred during the operational time before the removal of an existing assignable cause when (Y_t = 1, a_t = 1) or (Y_t = 2, a_t = 1). Finally, M(h − γ/λ) is the expected cost of out-of-control operation within an interval of length h, given that the interval starts with the process in control, i.e., when the DTMC is in state (Y_t = 0, a_t = 0) or (Y_t = 0, a_t = 1), as well as after the removal of an existing assignable cause when (Y_t = 1, a_t = 1) or (Y_t = 2, a_t = 1); see Fig. 1. In particular, if τ denotes the conditional expected time of the occurrence of an assignable cause within an interval, given that there is such an occurrence within that interval, then the process will operate under its effect for an expected time h − τ. Consequently, the unconditional expected time of out-of-control operation within an interval of length h where the process begins its operation in the in-control state is γ(h − τ). From Duncan (1956) it is known that τ = [1 − (1 + λh)exp(−λh)]/[λ(1 − exp(−λh))] and, since γ = 1 − exp(−λh), we get γ(h − τ) = γh − γ(γ − λh + γλh)/(γλ) = h − γ/λ. Thus, the corresponding expected cost of out-of-control operation is M(h − γ/λ).
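The identity γ(h − τ) = h − γ/λ is easy to confirm numerically; the short sketch below uses arbitrary values of λ and h.

```python
# Numerical confirmation that gamma*(h - tau) equals h - gamma/lambda.
import numpy as np

lam, h = 0.05, 1.0                                   # assumed values
gamma = 1 - np.exp(-lam * h)
tau = (1 - (1 + lam * h) * np.exp(-lam * h)) / (lam * (1 - np.exp(-lam * h)))

print(gamma * (h - tau))   # expected out-of-control time in an interval starting in control
print(h - gamma / lam)     # closed form; the two printed values should coincide
```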

By grouping similar cost terms together, Equation (4) can be simplified as follows:

ECT_1 = [cn + b + L_0 π_01 + L_1 (π_11 + π_21) + Mh (π_10 + π_20) + M (gn + δ1 T_1 + δ2 T_2)(π_11 + π_21) + M (h − γ/λ)(π_00 + π_01 + π_11 + π_21)] / [h + (1 − δ1) T_0 π_01 + (gn + T_1 + T_2)(π_11 + π_21)].  (5)

In the special case where λ1 = λ2 = λ/2, which implies γ1 = γ2 = γ/2, the steady-state probabilities of (Y_t = i, a_t = j) are

π_00 = (1 − γ)(1 − α)s,  π_01 = (1 − γ)αs,
π_10 = π_20 = (γ/2)βs/(1 − β),
π_11 = π_21 = (γ/2)s,

where s = (1 − β)/(1 − β + γβ).

If we substitute the above π_ij in Equation (5), we get:

ECT_1 = {cn + b + s[(1 − γ)α L_0 + γ L_1 + M (h − γ/λ) + Mγ (gn + δ1 T_1 + δ2 T_2) + Mγβh/(1 − β)]} / {h + s[(1 − γ)α (1 − δ1) T_0 + γ (gn + T_1 + T_2)]}.  (6)

The above expression is almost identical to the corresponding one in Lorenzen and Vance (1986). There is only a small difference in the sampling cost term, which is due to a different assumption in their model, namely that sampling never stops as long as the process operates.
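As a consistency check of this subsection, the following sketch (verification code written for this presentation, with arbitrary parameter values) builds the six-state transition matrix (3) for the symmetric case λ1 = λ2 = λ/2, solves the steady-state equations numerically and compares the result with the closed-form probabilities given above.

```python
# Cross-check of the closed-form steady-state probabilities of the six-state DTMC.
import numpy as np
from scipy.stats import norm

lam, h, k_s, delta, n = 0.05, 1.0, 3.0, 0.5, 5       # assumed values
alpha = 2 * norm.cdf(-k_s)
beta = norm.cdf(k_s - delta * np.sqrt(n)) - norm.cdf(-k_s - delta * np.sqrt(n))
gamma = 1 - np.exp(-lam * h)

# Common row for states whose following interval starts in control:
# (0,0), (0,1), (1,1), (2,1); state order: (0,0),(0,1),(1,0),(1,1),(2,0),(2,1).
r = [(1 - gamma) * (1 - alpha), (1 - gamma) * alpha,
     gamma / 2 * beta, gamma / 2 * (1 - beta),
     gamma / 2 * beta, gamma / 2 * (1 - beta)]
P = np.array([r, r,
              [0, 0, beta, 1 - beta, 0, 0],          # from (1,0): shift persists or is caught
              r,
              [0, 0, 0, 0, beta, 1 - beta],          # from (2,0)
              r])

# Solve pi P = pi together with the normalization sum(pi) = 1.
A = np.vstack([P.T - np.eye(6), np.ones(6)])
pi = np.linalg.lstsq(A, np.r_[np.zeros(6), 1.0], rcond=None)[0]

s = (1 - beta) / (1 - beta + gamma * beta)
closed = np.array([(1 - gamma) * (1 - alpha) * s, (1 - gamma) * alpha * s,
                   gamma / 2 * beta * s / (1 - beta), gamma / 2 * s,
                   gamma / 2 * beta * s / (1 - beta), gamma / 2 * s])
print(np.allclose(pi, closed))                       # should print True
```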

2.2. CUSUM chart

For the case of monitoring the process mean using a CUSUM chart, the usual approach is to use two separate CUSUM statistics, i.e., C_t^h for detecting upward shifts and C_t^l for detecting downward shifts:

C_t^h = max[0, C_{t−1}^h + z_t − k_c],  C_t^l = max[0, C_{t−1}^l − z_t − k_c],  C_0^h = C_0^l = 0,  (7)

where z_t = (x̄_t − μ0)√n/σ is the standardized sample mean and k_c is the reference value of the CUSUM chart. A signal is issued when either of the two statistics exceeds the control limit H. An alternative approach, which was proposed by Crosier (1986), uses a single statistic C_t that can take either positive or negative values:

C_t = 0, if |C_{t−1} + z_t| ≤ k_c;  C_t = (C_{t−1} + z_t)(1 − k_c/|C_{t−1} + z_t|), otherwise,  (8)
and C_0 = 0. A signal is issued when C_t ≥ H or C_t ≤ −H. Whichever approach is used, the ARLs are computed by one of the methods suggested in the literature, i.e., integral equations, Markov chain approximations or simulation (Hawkins and Olwell, 1998).
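To make the two monitoring approaches concrete, here is an illustrative sketch of both recursions, written for this presentation (it is not code from the paper, and all parameter values are arbitrary).

```python
# Run-length illustration of the two-sided CUSUM pair (7) and Crosier's statistic (8).
import numpy as np

def two_sided_cusums(z, k_c, H):
    """Two separate one-sided CUSUMs; signal when either statistic exceeds H."""
    Ch = Cl = 0.0
    for t, zt in enumerate(z, start=1):
        Ch = max(0.0, Ch + zt - k_c)                 # detects upward shifts
        Cl = max(0.0, Cl - zt - k_c)                 # detects downward shifts
        if Ch > H or Cl > H:
            return t                                 # first signal time
    return None

def crosier_cusum(z, k_c, H):
    """Crosier's (1986) single two-sided statistic; signal when |C_t| >= H."""
    C = 0.0
    for t, zt in enumerate(z, start=1):
        s = C + zt
        C = 0.0 if abs(s) <= k_c else s - np.sign(s) * k_c   # shrink toward zero by k_c
        if abs(C) >= H:
            return t
    return None

rng = np.random.default_rng(1)
z = rng.normal(0.5, 1.0, 500)                        # stream with shift delta*sqrt(n) = 0.5
print(two_sided_cusums(z, k_c=0.25, H=4.0), crosier_cusum(z, k_c=0.25, H=4.0))
```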

We conducted a numerical investigation which showed that the use of two separate CUSUM charts with the same k_c > 0 always leads to slightly better economic results than the use of a single statistic. More specifically, the optimal k_c tends to be higher in the single-statistic approach than when two separate CUSUM charts are used, in order to avoid positive or negative values of the statistic when no assignable cause is present. Moreover, to counterbalance the higher value of k_c, the optimal value of the control limit H in the single-statistic approach is lower, so as to detect the assignable causes relatively fast. However, in all cases examined the cost reduction resulting from the use of two separate CUSUM charts, compared to the single-statistic approach, was less than 0.5%. Hence, we adopt the single statistic C_t here because it is easier to formulate, faster to optimize and simpler to implement in practice.

The Markov chain that describes the evolution of the process when monitored by a CUSUM chart is {(Y_t, C_t), t = 0, 1, 2, …}, where Y_t is again the actual state of the process and C_t is the value of the CUSUM statistic at sampling instance t. For practical purposes C_t is discretized into 2m + 1 values following the approach of Brook and Evans (1972).

Specifically, we partition the interval [−H, H] into 2m − 1 subintervals and we define w, the width of each subinterval, as follows:

w = 2H/(2m − 1).

Then, the real value of C_t is transformed to an integer between −(m − 1) and m − 1 by rounding C_t/w to the closest integer in the set {−(m − 1), …, m − 1}. When the real value of C_t is such that C_t ≤ −(m − 1/2)w = −H, it is transformed to −m, whereas when it exceeds (m − 1/2)w = +H, it is transformed to m. Thus, with the addition of the “−m” and “+m” states, which correspond to the decision to issue a signal, the total number of states increases to 2m + 1.
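A small helper makes the discretization concrete (written for this presentation; the values of H and m in the example call are arbitrary):

```python
# Brook-and-Evans-style mapping of a real CUSUM value to an integer state in {-m, ..., m}.
def discretize(C: float, H: float, m: int) -> int:
    w = 2 * H / (2 * m - 1)                          # width of each interior subinterval
    if C <= -H:
        return -m                                    # lower signal state
    if C >= H:
        return m                                     # upper signal state
    j = round(C / w)
    return max(-(m - 1), min(m - 1, j))              # interior states -(m-1), ..., m-1

# With H = 4 and m = 40, w = 8/79 ≈ 0.101:
print(discretize(0.0, 4.0, 40), discretize(3.99, 4.0, 40), discretize(-5.0, 4.0, 40))
```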

Using the above discretization of C_t, the Markov chain has 3 × (2m + 1) possible (Y_t, C_t) states, with transition probabilities defined as follows:

p_ij^kl = P(Y_t = l, C_t = j | Y_{t−1} = k, C_{t−1} = i),  i, j = −m, …, m,  k, l = 0, 1, 2.  (9)

Thus, the transition probability matrix takes the following form:

P = [ P^00  P^01  P^02
      P^10  P^11  P^12
      P^20  P^21  P^22 ],  (10)

where each block P^kl = [p_ij^kl] has dimensions (2m + 1) × (2m + 1).

The elements of the above matrix are divided into nine parts which contain, respectively, the probabilities of moving from C_{t−1} to C_t for each of the nine possible combinations of Y_{t−1} and Y_t. The exact expressions for the transition probabilities p_ij^kl are given in the Appendix.
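Those exact expressions are not reproduced here; purely to illustrate the kind of computation they involve, the sketch below (a reconstruction under stated assumptions, not the Appendix formulas) computes one row of within-block transition probabilities for the discretized Crosier statistic, taking the mean of z_t to be mu_z throughout the interval and ignoring both changes of Y_t within the interval and the special signal/restoration rows.

```python
# One row of transition probabilities for the discretized Crosier statistic (8),
# conditional on z_t ~ N(mu_z, 1) and on a non-signal source state i (|i| <= m - 1).
import numpy as np
from scipy.stats import norm

def crosier_row(i: int, mu_z: float, k_c: float, H: float, m: int) -> np.ndarray:
    w = 2 * H / (2 * m - 1)
    mu = i * w + mu_z                   # mean of S = C_{t-1} + z_t at the representative point
    p = np.zeros(2 * m + 1)             # states -m..m stored at indices 0..2m
    for j in range(-(m - 1), m):        # interior (non-signal) target states
        if j == 0:
            lo, hi = -k_c - w / 2, k_c + w / 2       # includes the mass with |S| <= k_c
        elif j > 0:
            lo, hi = (j - 0.5) * w + k_c, (j + 0.5) * w + k_c
        else:
            lo, hi = (j - 0.5) * w - k_c, (j + 0.5) * w - k_c
        p[j + m] = norm.cdf(hi - mu) - norm.cdf(lo - mu)
    p[0] = norm.cdf(-H - k_c - mu)           # lower signal state -m
    p[2 * m] = 1.0 - norm.cdf(H + k_c - mu)  # upper signal state +m
    return p

print(crosier_row(i=0, mu_z=0.0, k_c=0.25, H=4.0, m=40).sum())  # each row sums to 1
```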

Similarly to the case of Shewhart charts, the steady-state probabilities π_ij of (Y_t = i, C_t = j) (i = 0, 1, 2, j = −m, …, m) are used to evaluate the expected cost per time unit function, which can be written as follows:

ECT_2 = (Σ_{i=0..2} Σ_{j=−m..m} π_ij C_ij) / (Σ_{i=0..2} Σ_{j=−m..m} π_ij T_ij).  (11)

The ECT_2 cost function is the exact analog of ECT_1 for the case of CUSUM charts, and the explanation of its terms parallels that of the terms of ECT_1. Note that this Markovian model does not require the explicit computation of ARLs; thus, Equation (11) avoids any inaccuracies generated by such a computation.

3. Numerical investigation and comparisons

We undertook a numerical investigation to explore the potential savings from monitoring a process with a CUSUM chart instead of the standard Shewhart chart. The numerical investigation entails 48 cases, covering a broad range of cost parameters (c, b, M, L_0, L_1) and process parameters (λ, δ), as shown in Table 2. In all 48 cases, certain parameters were kept constant: instantaneous sampling (g = 0), restoration cost L_1 = 200, and negligible times to search for an assignable cause and restore the process (T_0 = T_1 = T_2 = 0). Although the models are flexible enough to accommodate non-negligible times for search and restoration, our numerical investigation has shown that the effect of these times on the optimal chart parameters and cost is minimal and thus, for the sake of parsimony, we have kept their values equal to zero. We also assume that the process is stopped for investigation whenever the chart issues an alarm (δ1 = 0), as well as while the process is being restored to in-control operation after a true alarm (δ2 = 0). Finally, we set λ1 = λ2 = λ/2 in all cases.

Table 2 Parameter sets of the 48 numerical examples (c = 1 or 4, L_1 = 200, g = 0, T_0 = T_1 = T_2 = 0, δ1 = δ2 = 0)

For any particular set of parameters we first determine the economically optimal design and cost of the Shewhart chart from Equation (5) and then compare them with the optimal parameters and cost of the CUSUM chart obtained from Equation (11). To expedite the optimization procedure we allowed k_s and k_c to be integer multiples of 0.1 and, by setting w = 0.1, we used the same discretization step for the control limit H of the CUSUM chart, with an initial value of 0.05 (m = 1). Note that the number of states used to discretize the Markov chain of CUSUM charts, given w = 0.1, depends on the actual value of H in each case. The sampling interval h was allowed to vary in 0.01 increments within the range (0, 0.1); for h ≥ 0.1 the increment was set equal to 0.1, i.e., h = 0.1, 0.2, 0.3, etc.
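In outline, the optimization is an exhaustive search over these grids. The sketch below assumes a helper ect1(h, n, k_s) implementing Equation (5); the helper name and the upper search bounds are assumptions made for illustration, not values from the paper.

```python
# Grid search for the economically optimal Shewhart design (h, n, k_s).
import numpy as np

def optimize_shewhart(ect1, n_max=50, h_max=10.0, k_max=6.0):
    # h in 0.01 steps below 0.1, then 0.1 steps; k_s as integer multiples of 0.1.
    h_grid = list(np.arange(0.01, 0.1, 0.01)) + list(np.arange(0.1, h_max + 1e-9, 0.1))
    k_grid = np.arange(0.1, k_max + 1e-9, 0.1)
    best_cost, best_design = np.inf, None
    for n in range(1, n_max + 1):
        for h in h_grid:
            for k_s in k_grid:
                cost = ect1(h=h, n=n, k_s=k_s)       # assumed cost-model helper
                if cost < best_cost:
                    best_cost = cost
                    best_design = dict(h=round(h, 2), n=n, k_s=round(k_s, 1))
    return best_cost, best_design
```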

Table 3 presents the optimal Shewhart and CUSUM parameters and costs for the 48 cases of Table 2 with variable sampling cost per unit c = 1. The percentage profit resulting from using the optimal CUSUM chart rather than the optimal Shewhart chart is shown in the final column of Table 3.

Table 3 Shewhart charts compared with CUSUM charts (c = 1)

Table 3 shows that the optimal sampling interval and sample size of the CUSUM scheme do not differ significantly from the respective optimal values of the Shewhart scheme. In particular, in most cases both the optimal h and n of the CUSUM scheme are marginally smaller than the respective optimal h and n of the Shewhart chart.

The cost improvement of the CUSUM scheme over the standard Shewhart scheme is less than 0.7% in all 48 cases we have examined. Consequently, from an economic point of view the CUSUM scheme is not significantly superior to the standard Shewhart chart, even when the magnitude of the shift is small. Note that Ho and Case (1994) concluded that the CUSUM chart is much better than the standard chart even though their numerical results are very similar to ours. More importantly, our results contradict those obtained by Keats and Simpson (1994), who found CUSUM charts to significantly outperform Shewhart charts. We conjecture that this contradiction may be due to the inaccuracy resulting from the computation and use of the ARLs in the model for the economic optimization of the CUSUM scheme.

The results of Table 3 are surprising, considering the widespread understanding that CUSUM charts are far more effective than Shewhart charts, at least in detecting small shifts in the mean. In addition, there was a concern (expressed clearly by one of the referees) about the appropriateness of the range of parameters used in the experimentation, given that the optimal values of the sample sizes n were found to be much larger than what is usual in practice, especially when the anticipated shifts in the mean are small (δ = 0.5, cases 1–16). Taking the above observations and concerns into account, we first validated the results of Table 3 by simulation and then expanded the numerical investigation in order to reinforce our findings. More specifically, we obtained the optimal designs of the Shewhart and CUSUM charts for the 48 cases of Table 2 after increasing the per unit cost of inspection from c = 1 to c = 4. The results are shown in Table 4. The optimal sample sizes decreased, as expected; for example, in the cases with δ = 0.5 the optimal n ranges between 14 and 18 when c = 4, down from optimal values as high as n = 35 when c = 1. However, the percentage reduction in the average cost from the use of the CUSUM rather than the Shewhart chart remained negligible and did not exceed 0.6% in any case. Furthermore, in many of the 16 cases where δ = 0.5 it is better not to sample at all if c = 4 but to search regularly for assignable causes and restore the process if needed. This type of policy, which resembles preventive maintenance, is justified when the potential savings from monitoring the process are small and consequently insufficient to counterbalance the relatively high sampling costs. We have also tried various other combinations of parameters and the conclusion stayed invariably the same: the differences between the optimum costs of CUSUM and Shewhart charts are negligible for all sets of process and cost parameters, even for very small magnitudes of the shift, when the sample sizes are unrestricted.

Table 4 Shewhart charts compared with CUSUM charts (c = 4)

On the other hand, CUSUM charts for monitoring the process mean are often based on samples of n = 1, since there are several applications where only one observation is available at each sampling instance (Hawkins and Olwell, 1998). It is therefore interesting to investigate the potential savings of using a CUSUM chart under the restriction that n = 1 and see whether or not the conclusions are similar to those of the unrestricted-n case. Table 5 presents the results of the numerical investigation for the same 48 cases with c = 1, in exactly the same way as Table 3 but with the restriction n = 1.

Table 5 Shewhart charts compared with CUSUM charts for n = 1 (c = 1)

The optimal sampling interval of the CUSUM chart is now much smaller than that of the Shewhart chart. Indeed, the restriction on the sample size and the cumulative nature of the CUSUM scheme lead to a large number of samples, in order for the chart to be maximally effective. Thus, the optimal h of the CUSUM chart is much shorter, especially when the magnitude of the shift (δ) is small.

There are even more cases now (when b is large while δ and L_0 are small) in which it is optimal to investigate regularly for the possible occurrence of assignable causes without prior sampling, because the chart's detection power is very limited when n = 1. This phenomenon is observed mainly in the case of Shewhart charts, since these charts are not typically meant to operate with individual measurements but with rational subgroups (Reynolds and Stoumbos, 2004). There are some cases, though, where this preventive-maintenance-type policy is better even than monitoring with a CUSUM chart.

If we exclude the cases where preventive maintenance outperforms both Shewhart and CUSUM charts, the potential savings from using a CUSUM chart with n = 1 are great, especially when b and δ are small while M and L_0 are large. Table 5 shows that the cost reduction often exceeds 20% and may be as high as almost 50% (46.9% in case 20). Nevertheless, a side-by-side comparison of the costs in Tables 3 and 5 reveals that in all cases where some SPC-type monitoring is desirable, the minimum expected cost with n = 1 is substantially higher than the minimum cost when the sample size is unrestricted, for both CUSUM and Shewhart charts. Thus, although it is commonly argued that CUSUM charts have better statistical performance when using unitary samples (see for example Hawkins and Olwell (1998) and Reynolds and Stoumbos (2004)), our numerical investigation shows that their economic performance when n = 1 is inferior to that of CUSUM charts with larger sample sizes.

Finally, it should be emphasized that there are many practical applications where the sample sizes are not necessarily unitary but are constrained to relatively low values for rational subgrouping or other reasons. Constraining the sample size to, say, n ≤ 5 would result in a CUSUM chart significantly outperforming the respective Shewhart chart, unless of course the optimum unconstrained sample size were anyway not much larger than five. The difference in the economic performance of the two charts under a restriction on the sample size will be somewhere between the observed differences in the cases of unconstrained n and n = 1. Consider for example case 1 of Table 2 with c = 1. If there is no restriction on the sample size, from Table 3 we see that the two charts have relatively large sample sizes and almost identical average costs: ECT_1 = 11.76 with n = 24 for the Shewhart chart, ECT_2 = 11.72 with n = 23 for the CUSUM. If the sample size is constrained to n ≤ 5 for rational subgrouping purposes, then the average costs of the two constrained-optimal chart designs (with n = 5) are ECT_1 = 14.12 and ECT_2 = 12.39 (a 12.3% cost difference). If the restriction n = 1 is imposed on the sample size, then we see from Table 5 that ECT_1 = 14.73 and ECT_2 = 12.57, and the economic advantage of the CUSUM increases to 14.7%.

4. Summary and conclusions

We have presented simple and accurate Markov chain models for the economic optimization of Shewhart and CUSUM charts for monitoring the mean of a normally distributed quality characteristic. Our numerical investigation has led to the following conclusions.

1. CUSUM charts are economically far superior to Shewhart charts only if process monitoring must be performed on the basis of individual measurements (sample size n = 1). If there are no restrictions on the size of each sample, the economic performance of the optimal CUSUM chart is almost identical to that of the optimal Shewhart chart, even when the magnitude of the anticipated shift is small. In between these two extreme cases, namely if the sample size is restricted to low values for reasons such as the need for rational subgrouping, the CUSUM chart may significantly outperform the Shewhart chart when the shifts in the mean are small.

2. From a purely economic perspective, if there are no restrictions on the sample size n, the common choice of n = 4 or n = 5 is always inferior to larger sample sizes when the magnitude of the shift is small, both for Shewhart and CUSUM charts. Sample sizes n ≤ 5 may be economically optimal only if the magnitude of the anticipated shift is moderate to large and the sampling cost is not very low.

3. When the sample size is by necessity restricted to n = 1 and/or when the sampling cost is high and the magnitude of the shift is small, it is often optimal not to monitor the process through sampling but to control it by means of a preventive-maintenance-type policy. Consequently, this option should always be considered as an alternative to the usual SPC procedures.

Appendix
Derivation of the transition probabilities p_ij^kl of the CUSUM scheme

The exact values of the transition probabilities (9) are computed from the following relationships, where ϕ(z) is the density function of the standard normal distribution.

Note that p_ij^02 is computed from the set of equations for p_ij^01 by setting γ2 instead of γ1 and +δ√n instead of −δ√n. On the other hand, p_ij^22 is computed from the set of equations for p_ij^11 by setting +δ√n instead of −δ√n.

Biographies

George Nenes is a Research Associate in the Department of Mechanical Engineering at Aristotle University of Thessaloniki (A.U.Th.). He has obtained a Diploma (5-year degree) in Mechanical Engineering, an M.Sc. in Management of Production Systems and a Ph.D. in Mechanical Engineering from the same university. His main research interest is in the area of statistical process control.

George Tagaras is Professor of Mechanical Engineering at A.U.Th., where he teaches and conducts research in quality assurance, inventory distribution systems and stochastic models in production and operations management. He holds a Diploma in Mechanical Engineering from A.U.Th. and an M.S. and Ph.D. in Industrial Engineering from Stanford University. He has held faculty positions at the Wharton School of the University of Pennsylvania and at INSEAD in France. He has also taught at the University of Thessaly and in the graduate program of the University of Macedonia in Greece. He has published many technical papers in a variety of journals, including Management Science, Operations Research, IIE Transactions, International Journal of Production Research, European Journal of Operational Research and Journal of Quality Technology. He is currently Associate Editor of Management Science, Senior Editor of Production and Operations Management and on the Editorial Board of IIE Transactions. He is a member of INFORMS and ASQ.

Acknowledgements

We thank the anonymous referees for their comments, which contributed significantly to the improvement of the paper. The present study has been co-funded by the European Union (European Social Fund) and national funds (PYTHAGORAS-EPEAEK II).

Notes

*Preventive maintenance is optimal (n = 0).

References

  • Arnold, B.F. and Von Collani, E. 1987. Economic process control. Statistica Neerlandica, 41: 89–97.
  • Brook, D. and Evans, D.A. 1972. An approach to the probability distribution of CUSUM run length. Biometrika, 59: 539–549.
  • Chiu, W.K. 1974. The economic design of CUSUM charts for controlling normal means. Applied Statistics, 23: 420–433.
  • Crosier, R.B. 1986. A new two-sided cumulative sum quality control scheme. Technometrics, 28: 187–194.
  • Duncan, A.J. 1956. The economic design of X̄ charts used to maintain current control of a process. Journal of the American Statistical Association, 51: 228–242.
  • Goel, A.L. and Wu, S.M. 1973. Economically optimum design of CUSUM charts. Management Science, 19: 1271–1282.
  • Hawkins, D.M. and Olwell, D.H. 1998. Cumulative Sum Charts and Charting for Quality Improvement, Springer, New York, NY.
  • Ho, S. and Case, K.E. 1994. The economically-based EWMA control chart. International Journal of Production Research, 32: 2179–2186.
  • Keats, J.B. and Simpson, J.R. 1994. Comparison of the X̄ and the CUSUM control charts in an economic model. Economic Quality Control, 9: 203–220.
  • Knappenberger, H.A. and Grandage, A.H. 1969. Minimum cost quality control tests. AIIE Transactions, 1: 24–32.
  • Lorenzen, T.J. and Vance, L.C. 1986. The economic design of control charts: a unified approach. Technometrics, 28: 3–10.
  • Montgomery, D.C. 1980. The economic design of control charts: a review and literature survey. Journal of Quality Technology, 12: 75–87.
  • Nantawong, C., Randhawa, S.U. and McDowell, E.D. 1989. A methodology for the economic comparison of X̄, cumulative sum and geometric moving-average control charts. International Journal of Production Research, 27: 133–151.
  • Nikolaidis, Y., Psoinos, D. and Tagaras, G. 1997. A more accurate formulation for a class of models for the economic design of control charts. IIE Transactions, 29: 1031–1037.
  • Reynolds, M.R. Jr. and Stoumbos, Z.G. 2004. Should observations be grouped for effective process monitoring? Journal of Quality Technology, 36: 343–366.
  • Saniga, E.M. 1977. Joint economically optimal design of X̄ and R control charts. Management Science, 24: 420–431.
  • Simpson, J.R. and Keats, J.B. 1995. Sensitivity study of the CUSUM control chart with an economic model. International Journal of Production Economics, 40: 1–19.
  • Taylor, H.M. 1968. The economic design of cumulative sum control charts. Technometrics, 10: 479–488.
