Minimizing Cost of Multiple Response Systems by Probabilistic Robust Design

Pages 67-74 | Published online: 18 Aug 2006

Abstract

In the design of products and processes, a methodology that helps adjust the means and tolerances of the design variables to both improve conformance and lower costs is a valuable tool. In this paper, the cost of a product at the manufacturing stage is the sum of the production cost, which includes known costs for tolerances, inspection, and so forth, and any cost for scrapping or reworking products that do not conform to specifications. We call this added cost the loss-of-quality cost and evaluate it as the probability of nonconformance (of the responses) times established scrap or rework costs. Accurate probability estimates are obtained using full distributions, limit-state functions, and the first-order reliability method (FORM). Probabilities are adjusted through probabilistic robust design. The production cost and the loss-of-quality cost are competing costs, and thus their sum provides a single objective function in terms of the means and tolerances of the design variables. The need to satisfy the equations of both the product model and FORM introduces nonlinear equality constraints. The minimum of the objective function, hence the minimum cost, is obtained by solving a nonlinear constrained optimization problem. The design of a mechanism for controlling a grating diffraction spectroscope serves as a case study for the presented method. A minimum cost, the probability of conformance, and the respective parameter settings are found for both complete-inspection and zero-inspection strategies.

Introduction

It is essential in the design of a product, or the tuning of a process, to have a methodology that helps adjust the settings of the means and tolerances of the design variables to provide high conformance of the responses (within their specifications) at the lowest possible cost (1–3). Herein the means and tolerances of the design variables are together called the design parameters and given the symbol p. The manifestation of such a methodology as a computer program requires a number of mathematical relations. These comprise (1) product models that relate responses to design parameters and (2) cost models that relate costs to design parameters.

Mathematical models of a product that relate responses to design parameters are formed by first developing either mechanistic models or empirical models arising from response surface methodology, such that the q responses Z are written as functions of the m design variables V, in the form:

Z_j = h_j(V_1, V_2, …, V_m),   j = 1, …, q        (1)

Then, to complete the relation, the design variables V are assigned probability distributions that contain the d design parameters p. In a feasible design, the means of the responses must be positioned appropriately between upper and lower specifications or on the correct side of a single bound. This can be accomplished by properly adjusting the design parameters in the model. The variability of the responses, hence the sense of conformance when specifications are included, can be adjusted via the design parameters as well.
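To make this parameterization concrete, the following minimal Python sketch (not from the paper) attaches distributions to two design variables, treats their means and tolerances as the design parameters p, and propagates samples through a hypothetical response function in the spirit of Eq. (1); the response function, the numbers, and the convention sigma = tolerance/3 are all assumptions made for illustration.

```python
import numpy as np

# Hypothetical response function in the form of Eq. (1): z = h(V1, V2).
def h(v1, v2):
    return v1 * np.cos(np.radians(v2))

# Design parameters p: means and tolerances of the design variables
# (sigma = tolerance / 3 is an assumed convention, not from the paper).
p = {"mu1": 100.0, "tol1": 0.30, "mu2": 45.0, "tol2": 0.15}

rng = np.random.default_rng(0)
n = 100_000
v1 = rng.normal(p["mu1"], p["tol1"] / 3, n)
v2 = rng.normal(p["mu2"], p["tol2"] / 3, n)

z = h(v1, v2)
print(f"mean of response = {z.mean():.3f}, std of response = {z.std():.4f}")
```

Adjusting the entries of p shifts the mean of z and changes its spread, which is exactly the handle the methodology exploits when positioning responses within their specifications.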

The cost–design parameter relations comprise both the production cost and the so-called loss-of-quality cost (4, 5, 7). Although the production cost arises from a number of factors, this cost depends mainly on the choices of tolerances for the design variables. For example, as tolerances are tightened the production cost increases, owing to longer set-up times, instruments with higher precision, and more skilled labor. Mathematical formulae can be established to relate costs to design parameters using collected data from experiments and past records: the production cost is denoted as C_P(p). The loss-of-quality cost arises from scrap and reworking costs when the product's responses exceed specifications due in part to the excess variability of the design variables. That is, as the tolerances of the design variables are tightened, the variability of any response decreases, the number of nonconforming products lessens, and the loss-of-quality cost decreases, which is opposite to the trend in the production cost. In general, the production cost and the loss-of-quality cost are competing costs. If we let the total product cost be defined as the sum of the production cost and the loss-of-quality cost, then to find p to give a minimum total cost we require a loss-of-quality cost relation, analogous to C_P(p), denoted herein as C_LQ(p).

The evaluation of C_LQ(p) is complex, however, since there is no immediate relation between cost and design parameters as there is for production costs. The most common method to provide C_LQ(p) invokes the concept of expected loss and is expressed as:

E[L] = ∫ L(z) f(z) dz        (2)

where f is the probability density function of a response Z, and L(Z) is the loss function. Although the loss function may be general, Taguchi's quadratic loss function, comprising a response Z and its target value τ, has become the standard and is written as:

L(Z) = k(Z − τ)²        (3)
where k converts units of response to units of cost. Note that all products off target invoke a cost. Then for Z with a normal distribution N[μ, σ], the expected loss from Eq. (2) is:

E[L] = k[σ² + (μ − τ)²]        (4)
For multiple responses, a reasonable expression for the loss-of-quality cost is obtained from the sum of like terms in Eq. (4), although there are many variations, as suggested by others (8). The means and variances of the responses in Eq. (4) are obtained through moment transfer relations using the requisite derivatives from Eq. (1) and the first and second moments of the design variables V, which of course are the design parameters p. We now have C_LQ(p). However, this loss-of-quality cost has a few shortcomings. Specifically, the "higher/lower is best" metric typically requires a subjective estimate of some representative target, and there is no precise evaluation of the cost because of the way the conversion factor k is evaluated.
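As an illustration of the moment-transfer step and of Eq. (4), the sketch below (Python; not the authors' implementation) propagates the first and second moments of two design variables through a hypothetical response function by a first-order Taylor expansion and then evaluates the expected quadratic loss; the response function, target τ, and conversion factor k are assumed values.

```python
import numpy as np

# Hypothetical response function in the form of Eq. (1).
def h(v):
    return v[0] ** 2 / v[1]

mu_V = np.array([10.0, 4.0])       # means of the design variables (assumed)
sigma_V = np.array([0.05, 0.02])   # standard deviations derived from tolerances (assumed)

# First-order moment transfer: mu_Z ~ h(mu_V), var_Z ~ sum (dh/dVi)^2 * var_Vi.
eps = 1e-6
grad = np.array([(h(mu_V + eps * e) - h(mu_V - eps * e)) / (2 * eps)
                 for e in np.eye(len(mu_V))])
mu_Z = h(mu_V)
var_Z = np.sum(grad ** 2 * sigma_V ** 2)

# Expected Taguchi loss, Eq. (4): E[L] = k * (var + (mu - tau)^2).
k, tau = 2.0, 25.0                 # assumed cost factor and target
expected_loss = k * (var_Z + (mu_Z - tau) ** 2)
print(f"mu_Z = {mu_Z:.3f}, sigma_Z = {np.sqrt(var_Z):.4f}, E[L] = {expected_loss:.3f}")
```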

A Binary Loss Function

We now present an alternative approach for providing the loss-of-quality cost that alleviates the above difficulties. Let us consider the following costing rules for a batch of a manufactured product with well-defined specifications: (1) any product that conforms to specifications introduces no extra cost, and (2) all products that exceed any of the specifications are treated uniformly and introduce the same additional cost. This costing philosophy applies particularly well to, for example, out-of-specification machined products that are reworked, nonconforming electronic integrated circuits that are scrapped, and out-of-conformance instruments that are recalibrated. Thus, the loss function L(Z) is binary, in the sense that a single, nonzero cost C_S is added to the production cost of only the nonconforming products. Now, for upper and lower response specifications, H and L, respectively, L(Z) becomes:

L(Z) = 0 for L ≤ Z ≤ H;   L(Z) = C_S otherwise        (5)

and the expected loss from Eq. (2) for an arbitrary density function becomes:

E[L] = C_S [ ∫_{−∞}^{L} f(z) dz + ∫_{H}^{∞} f(z) dz ]        (6)
Further, if we define the integrals in Eq. (6) to be the nonconformance probabilities Pr{F_L} and Pr{F_H}, we have:

E[L] = C_S [ Pr{F_L} + Pr{F_H} ]        (7)
For multiple responses and their N nonconformance regions F_i, the expected loss, or the loss-of-quality cost, is an extension of Eq. (7) over all N regions and we write:

C_LQ(p) = C_S Pr{F_1 ∪ F_2 ∪ … ∪ F_N}        (8)

This loss-of-quality cost is specification-oriented and handles the "higher/lower is best" metrics naturally: it includes the traditional target objective by invoking complementary "higher is best" and "lower is best" metrics on the respective response.
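For a single normally distributed response with lower and upper specifications, the loss-of-quality cost of Eq. (7) reduces to two normal tail probabilities. The short sketch below illustrates this; the specification limits, response moments, and scrap cost are chosen purely for illustration.

```python
from scipy.stats import norm

C_S = 500.0                     # assumed cost per nonconforming unit (rework/scrap/recalibration)
L_spec, H_spec = 531.0, 533.0   # assumed lower and upper specifications
mu, sigma = 532.1, 0.4          # assumed response mean and standard deviation

# Eq. (7): C_LQ = C_S * (Pr{Z < L} + Pr{Z > H})
p_low = norm.cdf(L_spec, loc=mu, scale=sigma)
p_high = norm.sf(H_spec, loc=mu, scale=sigma)
C_LQ = C_S * (p_low + p_high)
print(f"Pr(nonconformance) = {p_low + p_high:.4f}, loss-of-quality cost = ${C_LQ:.2f}")
```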

Evaluating the New Loss-of-Quality Cost

The keys to the evaluation of this loss-of-quality cost are (1) a rational evaluation of the costs that make up C_S, (2) the use of limit-state functions to relate responses to specifications, and (3) the use of reliability methods to estimate probability quickly (3, 6, 7). These issues are discussed next.

The value of C_S, which essentially serves the purpose of k in Eq. (4), comprises rework, scrap, or recalibration costs: these costs are product dependent and must be established rationally.

Limit-state functions are written as:

g(v) = z(v) − z_o        (9)

where v denotes a sample of the random design variables, z(v) is the response, and z_o is an upper or lower specification. (For an upper specification, the negative of the right side of Eq. (9) is used.) We define the safe, or conforming, region to occur when g(v) > 0 and the failure, or nonconforming, region to occur when g(v) < 0. We define the limit-state surface to be the (m−1)-dimensional surface where g(v) = 0.

Probabilities for regions on either side of the limit-state surface are obtained as follows. It is standard practice to transform the limit-state surface from v-space to standard normal space, or u-space. This can be done mathematically by solving the set of equations that describe the transformation. The mapping has the form:

u = T(v; p)        (10)

where, in each mapping, the design parameters p are fixed. When all of the limit-state surfaces are considered, the nonconformance probability of the system is expressed as:

Pr{F_sys} = Pr{ [g_1(u) ≤ 0] ∪ [g_2(u) ≤ 0] ∪ … ∪ [g_N(u) ≤ 0] }        (11)
Although it may not be obvious at first, altering the design parameters moves the limit-state surfaces and hence changes the nonconformance probabilities. Then, with reference to Eq. (8), we have the connection between the loss-of-quality cost and the design parameters, and this gives us, indirectly, C_LQ(p).

To make the calculation of the probabilities in Eq. (11) tractable, it is expedient to approximate a general limit-state surface with a simpler shape such as a hyperplane. Examples are shown in Fig. 1. This approximation provides the first-order reliability method (FORM). The hyperplane is located at the so-called most likely failure point (MLFP), denoted u*, which is the point (1) on the limit-state surface and (2) closest to the origin in u-space. That is, mathematically, for an MLFP, g(u*) = 0 and the vector to u* is collinear with the outward normal gradient vector α of the limit-state function. The distance to the MLFP is the 2-norm of u* and is given the symbol β. Because u-space is rotationally symmetric, we may rotate each of the linearized failure surfaces to be perpendicular to any single negative axis. Then, for the ith limit-state function, the probability of nonconformance is found via the one-dimensional standard normal CDF and written as Φ(−β_i).
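The sketch below shows one common way to locate the MLFP numerically: minimize the squared distance to the origin in u-space subject to lying on the limit-state surface, then report β and Φ(−β). The limit-state function and the v-to-u mapping (independent Normal variables, so v = μ + σu) are assumptions made for illustration, not the mechanism equations of the case study.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Assumed independent Normal design variables: v_i = mu_i + sigma_i * u_i.
mu = np.array([10.0, 4.0])
sigma = np.array([0.05, 0.02])

def g(u):
    """Hypothetical limit-state function in u-space: conforming when g > 0."""
    v = mu + sigma * np.asarray(u)
    z = v[0] ** 2 / v[1]          # response
    z_o = 24.8                    # lower specification (assumed)
    return z - z_o                # Eq. (9) convention

# MLFP: the point on g(u) = 0 closest to the origin in u-space.
res = minimize(lambda u: np.dot(u, u), x0=np.array([0.1, 0.1]),
               constraints={"type": "eq", "fun": g})
u_star = res.x
beta = np.linalg.norm(u_star)
print(f"beta = {beta:.3f}, Pr(nonconformance) ~ {norm.cdf(-beta):.5f}")
```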

Figure 1. Nonconformance regions for two linearized limit-state surfaces.

The probability of nonconformance for the system is calculated using the relative positions of the N limit-state surfaces. More specifically, when all nonconformance regions are mutually exclusive, or intersections of failure regions are far removed from the origin, the probability of nonconformance for the system is taken as the sum of the N individual nonconformance probabilities. However, for limit-state surfaces close to the origin, the probability in the intersections of any pair (or higher order) of nonconformance regions may be relatively large and must be calculated (8–10) to ensure the loss-of-quality cost in Eq. (8) is correct. The geometry in Fig. 1 is used to calculate the intersection probability of any two arbitrary hyperplane limit-state surfaces. The two planes of interest are identified as AA and BB, and their respective MLFPs are located at distances β_1 and β_2 from the origin with an angle θ separating the vectors to each respective MLFP. The correlation between planes is ρ = cos(θ), and the probability in the region AYB is obtained via the bivariate cumulative function as follows:

Pr{AYB} = Φ_2(−β_1, −β_2; ρ)        (12)

The numerical evaluation of this integral is taken from (11). For all pairs of intersection regions F_{i,j}, the system intersection probability is:

Pr{F_int} = Σ_{i<j} Pr{F_i ∩ F_j}        (13)
and this value is subtracted from the sum of the individual probabilities of nonconformance to provide a corrected value.
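As a numerical illustration of this pairwise correction, the sketch below evaluates Pr{F_i ∩ F_j} = Φ_2(−β_i, −β_j; ρ), with ρ = cos θ, using a library bivariate normal CDF rather than the dedicated algorithm cited in (11), and subtracts it from the sum of the individual probabilities; the β values and the angle are assumed.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

# Assumed reliability indices and angle between the MLFP vectors.
betas = np.array([2.33, 2.10])
theta = np.radians(25.0)
rho = np.cos(theta)               # correlation between the two linearized margins

# Individual nonconformance probabilities, Phi(-beta_i).
p_individual = norm.cdf(-betas)

# Intersection probability via the bivariate normal CDF, Eq. (12).
cov = np.array([[1.0, rho], [rho, 1.0]])
p_intersection = multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([-betas[0], -betas[1]])

# Corrected system nonconformance probability for two limit states.
p_system = p_individual.sum() - p_intersection
print(f"sum of individual = {p_individual.sum():.5f}, "
      f"intersection = {p_intersection:.5f}, corrected system = {p_system:.5f}")
```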

The Optimization Problem

The single objective function used herein is the total cost. This is the sum of the production cost and the loss-of-quality cost and is written as C_T(p) = C_P(p) + C_LQ(p). The optimization problem then is posed as the following nonlinear constrained minimization problem:

The first and second equality constraints in Eq. (14) have been addressed above. The third and fourth equality constraints locate each MLFP, as defined, to ensure a legitimate probability evaluation. The fifth constraint is an inequality constraint and provides ranges for the design parameters. The first four constraints are nonlinear (the fifth is linear), and thus the solution requires a nonlinear constrained optimization package. Further, the fourth constraint requires gradients of the limit-state functions. Both issues are resolved by invoking (1) the optimization toolbox and (2) the symbolic toolbox, respectively, in the matrix programming language MATLAB (12). As an added note, it has been found that the optimization proceeds more quickly if it is performed in a number of logical steps. That is, it is convenient to find a feasible solution first, such that responses meet specifications, before entering the formal minimization process (7).
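The authors implement Eq. (14) with MATLAB's optimization and symbolic toolboxes; the sketch below conveys the same idea in outline using scipy, with a stand-in total-cost function whose loss-of-quality term comes from a closed-form β for a single hypothetical linear limit state. All of the numbers here are assumed for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Assumed cost data (illustrative only).
a1, b1 = 10.0, 0.5          # reciprocal production-cost model for the first tolerance
a2, b2 = -2.0, 1.0          # ... and for the second
C_fixed = 500.0             # fixed production costs (subsystems, inspection, ...)
C_S = 500.0                 # cost per nonconforming unit
z_o, mu_sum = 100.0, 100.9  # assumed lower specification and (fixed) response mean

def total_cost(t):
    """C_T = C_P + C_LQ for a hypothetical linear response z = V1 + V2 with z >= z_o."""
    t1, t2 = t
    C_P = a1 + b1 / t1 + a2 + b2 / t2 + C_fixed
    sigma_z = np.hypot(t1 / 3.0, t2 / 3.0)      # sigma_i = tolerance_i / 3 assumed
    beta = (mu_sum - z_o) / sigma_z             # closed-form reliability index
    C_LQ = C_S * norm.cdf(-beta)
    return C_P + C_LQ

# Minimize over the two tolerances, subject to simple lower/upper bounds.
res = minimize(total_cost, x0=[0.3, 0.3], bounds=[(0.05, 1.0), (0.05, 1.0)])
print("optimal tolerances:", res.x, " minimum total cost:", round(res.fun, 2))
```

In the authors' formulation the MLFP conditions enter as equality constraints of Eq. (14); here the reliability index is available in closed form, which keeps the sketch short while still exhibiting the trade-off between the competing costs.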

Case Study: Multi-link Diffraction Mechanism

In a grating spectroscope, a wavelength λ is visualized, if present, by the angular position of a viewing piece (13). In a particular design, the angle is related to a displacement through a multi-link mechanism. The displacement S is set by a stepping motor subsystem. The mechanism has two geometrical constants, C and L, and two adjustable options, an off-set distance K_1 and a grating angle K_2. To calibrate the mechanism, it is necessary to designate two step positions S_1 and S_2 for two respective wavelengths λ_1 and λ_2 and then adjust K_1 and K_2 to visualize these wavelengths. Once calibrated, any of the hundreds of selectable step positions manifests the corresponding wavelength. The responses are the wavelengths λ_1 and λ_2, and the design variables are the off-set distance K_1, the grating angle K_2, the step positions S_1 and S_2, and the geometrical constants C and L. The relationships of the two responses to the six design variables are specifically:

with d = 600 lines/mm. For specific wavelengths, deterministic geometrical constants, and deterministic step positions, the settings for the off-set distance and the grating angle can be determined by solving the two nonlinear equations in Eq. (15) to give the nominal values of K_1 and K_2. However, the overall uncertainty of the six design variables dictates that the problem be posed as a probabilistic one. Now, the problem is to find means and tolerances for the off-set distance and grating angle that provide sufficient conformance of the responses at the best cost. It is suspected that the two responses are highly correlated because of the similarity of the expressions. Consequently, the calculation of the probabilities of the intersection regions, as discussed previously, will be important.

The distributions for the design variables are determined as follows. The adjustments K_1 and K_2 are assigned individual uniform distributions to model the way they are set in the course of production. The dimensions C and L are uncertain because of the particular machining process and follow Normal distributions. The step positions S_1 and S_2 are uncertain as a result of bearing and screw tolerances, and their distributions are Normal. Further, S_1 and S_2 are set by the same mechanism and are therefore fully correlated. The specific transformation equations, in the form of Eq. (10), are:

Specific distribution information is given in Table 1.

Table 1. Distribution information for control and noise variables

The four design parameters to be determined are the means and tolerances of K_1 and K_2, thus:

p = [μ_K1, t_K1, μ_K2, t_K2]        (17)

and from practical considerations there are two minimum tolerance constraints, one on each of t_K1 and t_K2.

Costs for the mechanism are established as follows. Production costs increase as tolerances are reduced. A model that matches the cost of setting the off-set distance and the grating angle within tolerance is reciprocal and has the form cost = a + b/tol. Values for a and b are obtained by fitting the function to complementary pairs of tolerance and cost. The production cost of the stepping motor subsystem depends on the designated tolerance and contributes a cost C_step. The cost for machining the dimensions C and L to a selected tolerance is given as C_CL. The cost for inspection of the assembled mechanism is denoted as C_I. The overall production cost is:

C_P(p) = (a_1 + b_1/t_K1) + (a_2 + b_2/t_K2) + C_step + C_CL + C_I        (18)

In the analyses, we set a_1 = 10, a_2 = −2, b_1 = 0.5 $-mm, and b_2 = 1.0 $-degree. The latter three costs in Eq. (18) are given specifically in each analysis. The loss-of-quality cost arises from the need to recalibrate any out-of-specification mechanisms, and the unit cost C_S depends on whether recalibration is performed locally, in the factory, or later, on-site, after shipment. Specific costs are given for each analysis.
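As a worked evaluation of Eq. (18), the snippet below computes the production cost using the fitted constants quoted above and the case I-A subsystem, machining, and inspection costs; the tolerance values themselves are assumed here purely for illustration.

```python
# Worked evaluation of the production-cost model, Eq. (18).
a1, b1 = 10.0, 0.5        # $ and $-mm      (off-set distance term)
a2, b2 = -2.0, 1.0        # $ and $-degree  (grating angle term)
C_step, C_CL, C_I = 110.0, 340.0, 50.0   # case I-A subsystem, machining, inspection costs

t_K1, t_K2 = 0.02, 0.05   # assumed tolerances in mm and degrees (illustration only)
C_P = (a1 + b1 / t_K1) + (a2 + b2 / t_K2) + C_step + C_CL + C_I
print(f"production cost C_P = ${C_P:.2f}")
```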

The wavelengths, or responses, have tolerances of ±1 nm, and the two target values used for calibration are λ_1 = 532 nm and λ_2 = 1064 nm. These targets and specifications give rise to the four limit-state functions:

g_1 = λ_1(v) − 531,   g_2 = 533 − λ_1(v),   g_3 = λ_2(v) − 1063,   g_4 = 1065 − λ_2(v)        (19)

Initial conditions for the means and tolerances of K_1 and K_2 are provided as follows. The means are obtained by solving the two response functions in Eq. (15) with nominal values for λ_1, λ_2, C, L, S_1, and S_2, and this gives initial conditions of 230.6 mm and 76.0°, respectively. The initial tolerances are best-case estimates of 0.0085 mm and 0.0120 degrees, respectively.

Results

We now apply the nonlinear constrained optimization process discussed above to find the means and tolerances of K_1 and K_2 that achieve minimum total cost. We perform analyses using the following two inspection strategies: in case I, we inspect every unit after assembly by checking wavelengths against step positions using test samples; in case II, we forego inspection and ship the mechanisms as is. Further, for both inspection strategies, we examine (1) how a tighter tolerance and the higher cost for machining dimensions C and L affect the design, and (2) how a looser tolerance and the lower cost for the stepping motor subsystem affect the design.

Case I—Design with 100% Inspection

In this analysis, we inspect every unit after factory assembly for a cost of C_I = $50 and, if required, perform an in-factory recalibration for the unit cost C_S = $500. The results are shown in Table 2.

Table 2. Design for total inspection with C_I = $50 and C_S = $500

For case I-A, we use the standard tolerances for both the stepping motor subsystem and the machining of dimensions C and L, as given in Table 1. The costs for the stepping motor subsystem and machining are $110 and $340, respectively. In case I-B, we tighten the tolerance for the dimensions C and L, and the cost increases to $440 because of a higher precision process. The results show a vastly improved design, since conformance improves and cost decreases. Finally, we keep the tighter tolerance for dimensions C and L and select a less expensive stepping motor subsystem with a looser tolerance and a lower cost of $80. The results in case I-C show a mechanism design that gives a further cost reduction with slightly lower conformance. In all three analyses, the tolerances for the off-set and angle are well above their minimum values, and the mean values remain close to the initial conditions.

Case II—Design with Zero Inspection

In this analysis, we forego any inspection after assembly, and hence the associated cost is now C_I = $0. However, recalibration work combined with travel to the site increases the loss-of-quality cost factor to C_S = $2000. The results for the pairs of machining and stepping motor tolerances and their costs, as used in case I, are shown in Table 3.

Table 3. Design for zero inspection with C_I = $0 and C_S = $2000

We note in all three analyses that the tolerance for the grating angle has decreased to, or very close to, the minimum value, while in each analysis the tolerance for the off-set remains comfortably above its minimum value. Again, in all three analyses, the mean values remain close to the initial conditions. It is clear that case II-C presents the lowest cost of all the designs in Tables 2 and 3, and with a very acceptable conformance level of 99%. Further analysis provides an interesting insight. Suppose we wish to achieve the conformance of case II-C and yet inspect each mechanism. In this scenario, the production cost increases by the $50 inspection fee, but the loss-of-quality cost decreases from $19 to $5 because the in-house recalibration cost is only $500. The total cost therefore increases by $36 to become $762. Interestingly, this cost is $13 more than the cost of $749 in case I-C with its lower conformance. A cost–benefit analysis is required to justify the increased cost.

Conclusions

In this paper, a methodology for the minimum-cost design of multi-response systems has been described. The unique features of the methodology are a probabilistic loss-of-quality cost and a nonlinear constrained optimization algorithm. The methodology provides (1) a way to cost both production and loss of quality rationally and (2) a way to compare inspection strategies and choices of noise tolerances. The case study has highlighted a number of important results. First, the algorithm produces a centered, or balanced, design. That is, the program adjusts the design parameters so that respective pairs of β magnitudes are equal, indicating that the responses have no preferential nonconforming regions. This provides the sense of positioning responses "on target." Next, the binary nature of the responses (conformance or nonconformance) makes the loss-of-quality cost easy to implement conceptually for multiple response systems, since only the probability of nonconformance for the system (and a single cost factor) is required. In practice, this probability is calculated using FORM, and locating the MLFPs is computationally intensive, although the computation time for an analysis of the mechanism on a Celeron PC under Windows was, on average, 4 minutes. Finally, the inclusion of the intersections of pairs of nonconformance regions has improved the probability estimates for highly correlated responses. Indeed, in the case study, the accuracy of the estimates has been verified by Monte Carlo simulation using 100,000 samples, for an accuracy of 0.3%. Specifically, in case II-C the Monte Carlo probability of conformance is 0.9934 and our result is 0.9903.
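A Monte Carlo check of the kind described above can be set up as in the sketch below; the response function, distributions, and specification limits are placeholders (the mechanism equations are not reproduced in this excerpt), but the structure, sampling the design variables and counting specification violations over 100,000 trials, mirrors the verification used in the case study.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Stand-in design variables and response (assumed, not the mechanism model).
v1 = rng.normal(10.0, 0.05, n)
v2 = rng.normal(4.0, 0.02, n)
z = v1 ** 2 / v2

# Assumed specification limits; count samples that conform.
L_spec, H_spec = 24.8, 25.2
p_conf = np.mean((z >= L_spec) & (z <= H_spec))
half_width = 1.96 * np.sqrt(p_conf * (1 - p_conf) / n)
print(f"Monte Carlo Pr(conformance) = {p_conf:.4f} +/- {half_width:.4f}")
```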

Further work is ongoing to investigate the related cost–quality problem: if we define quality as the probability of conformance, then, for a specified minimum quality, we seek the design parameters that minimize the total cost.

Acknowledgments

We express our utmost appreciation to Dr. J.A. Smith and Mr. J. Domm for providing detailed cost data for the case study, to Dehui Tong for her instructive comments on the manuscript, and to the Natural Sciences and Engineering Research Council of Canada for financial support.

References

  • Antony, J. (2001). Simultaneous optimisation of multiple quality characteristics in manufacturing processes using Taguchi's quality loss function. Int. J. Adv. Manuf. Technol., 17: 134–138.
  • Drezner, Z. and Wesolowsky, G.O. (1990). On computation of the bivariate normal integral. J. Stat. Comput. Simulat., 25: 101–107.
  • Greig, G.W. (1992). An assessment of high-order bounds for structural reliability. Struct. Safety, 11(3–4): 213–225.
  • Jayaram, J.S.R. and Ibrahim, Y. (1999). Multiple response robust design and yield maximisation. Int. J. Qual. Reliab. Manag., 16(9): 826–837.
  • Li, W. and Wu, C.F.J. (1999). An integrated method of parameter design and tolerance design. Qual. Eng., 11(3): 417–425.
  • In press.
  • Melchers, R.E. and Ahammed, M. (2001). Estimation of failure probabilities for intersections of nonlinear limit states. Struct. Safety, 23: 123–135.
  • O'Shea, D.C. (1985). Elements of Modern Optical Design. New York: John Wiley and Sons.
  • Ribeiro, J.L.D., Fogliatto, F.S. and ten Caten, C.S. (2001). Minimizing manufacturing and quality costs in multiresponse optimization. Qual. Eng., 13(4): 559–569.
  • Savage, G.J. and Swan, D.A. (2001). Probabilistic robust design with multiple quality characteristics. Qual. Eng., 13(4): 629–640.
  • Seshadri, R. and Savage, G.J. (2002). Integrated robust design using probability of conformance metrics. Int. J. Mater. Prod. Technol., 17(5/6): 319–337.
  • Suhr, R. and Batson, R.G. (2001). Constrained multivariate loss function minimization. Qual. Eng., 13(3): 475–483.
  • Swan, D.A. and Savage, G.J. (1998). Continuous Taguchi—a model based approach to Taguchi's "Quality by Design" with arbitrary distributions. Qual. Reliab. Eng. Int., 14: 1–13.
