Abstract
Standard cost-estimating practice applies a cost-improvement factor, or “learning” rate, to account for management, engineering, and production improvements that save money as successive units are produced. A lack of credible historical data that are analogous or applicable, however, makes it difficult or even impossible to determine exactly what the “correct” learning rate will be in any particular estimating context. Nevertheless, the estimator's choice of learning rate exerts a major, perhaps dominant, influence on the estimated total spending profile of a large-quantity production program: small variations in the assumed learning rate can outweigh all other contributions to the total program estimate. The effects of learning-rate uncertainty on program cost estimates can be mitigated, however, by eschewing cost models that estimate “theoretical first-unit” (T1) costs in favor of models that estimate the average unit cost of a realistic quantity (e.g., 10 satellites, 25 launch vehicles, 100 aircraft). The latter kind of model circumvents the controlling effect of the steep portion of the learning curve (the first few production units), thereby reducing the detrimental impact of learning-rate guesswork on the part of both model developers (who must assume a learning rate to normalize historical data) and model users (who must assume a learning rate to estimate total production costs).
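To make this sensitivity concrete, the minimal sketch below assumes the standard unit-theory (Crawford) learning-curve formulation, in which the cost of the n-th unit is T1 · n^(log2 s) for a learning-curve slope s. It compares a T1-anchored estimate with one anchored on the average cost of the first 10 units; the 100-unit quantity, the 88% and 92% slopes, and the 10-unit averaging point are hypothetical values chosen only for illustration.

```python
import math

def unit_cost(t1: float, n: int, slope: float) -> float:
    # Unit-theory ("Crawford") learning curve: cost of unit n is
    # t1 * n**b with b = log2(slope); slope = 0.90 means a 90% curve.
    return t1 * n ** math.log2(slope)

def lot_cost(t1: float, quantity: int, slope: float) -> float:
    # Total cost of the first `quantity` units.
    return sum(unit_cost(t1, n, slope) for n in range(1, quantity + 1))

# Hypothetical program: 100 units, slopes of 88% vs. 92% (costs normalized).
slopes = (0.88, 0.92)

# Anchor 1: the cost model outputs T1 directly (normalized to 1.0).
t1_totals = [lot_cost(1.0, 100, s) for s in slopes]

# Anchor 2: the cost model outputs the average cost of the first 10 units
# (normalized to 1.0); T1 is backed out from that average, under the same
# assumed slope, before extrapolating to the 100-unit total.
a10 = 1.0
a10_totals = [lot_cost(a10 * 10 / lot_cost(1.0, 10, s), 100, s) for s in slopes]

for name, totals in (("T1-anchored", t1_totals),
                     ("10-unit-average-anchored", a10_totals)):
    spread = max(totals) / min(totals) - 1
    print(f"{name}: totals differ by {spread:.0%} between the two slopes")
```

In this sketch the lot-average anchor absorbs the steep early portion of the curve into the anchoring quantity, so the printed spread between the two slopes is smaller than in the T1-anchored case, which is the mitigating effect described above.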