Abstract
The computational efficiency of dynamic programming rests on the so-called principle of optimality, which makes it possible to decompose a combinatorial problem into individual sub-problems (stages). The sub-problems are solved independently and linked together through state variables that summarize the optimal decisions of the preceding stages.
A class of combinatorial problems that poses a particularly difficult challenge to dynamic programming is that in which the decision made at a given stage must reflect the accumulated decisions of multiple preceding stages. It is generally accepted that the resulting growth in state-space dimension is constrained by computer storage limitations. In addition, it can be shown that such a problem structure can make a dynamic-programming methodology inferior from the point of view of computation time as well.
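To make the growth concrete, the following is a back-of-the-envelope sketch (illustrative assumptions, not the paper's exact accounting): with T stages, D alternatives per stage, and a stage payoff that depends on the k most recent decisions, the DP state must carry those k decisions, so DP work grows exponentially in k, while total enumeration's work depends only on the horizon T. For k near T, enumeration can therefore require fewer operations.

```python
# Rough operation counts for the two approaches (hypothetical model:
# T stages, D alternatives per stage, payoff depends on last k decisions).

def dp_work(T: int, D: int, k: int) -> int:
    # The DP state must record the k most recent decisions -> D**k states;
    # each of the T stages evaluates D transitions per state.
    return T * D ** (k + 1)

def enumeration_work(T: int, D: int) -> int:
    # Total enumeration scores every complete decision vector once.
    return D ** T

if __name__ == "__main__":
    T, D = 10, 3
    for k in range(T):
        print(f"k={k}: DP ~{dp_work(T, D, k)} ops, "
              f"enumeration ~{enumeration_work(T, D)} ops")
```

With these assumed counts, DP dominates for small k but does more work than enumeration once k approaches T, which is the structural effect the abstract describes.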
This phenomenon is demonstrated by varying the number of preceding stages whose decisions affect the pay-off determination at a given stage in five inventory models, and by solving the problems with both dynamic programming and total enumeration. In addition, a comparison of the computational requirements for solving a problem of practical magnitude is presented. Computational results confirm that, owing to the nature of the interaction between overlapping shift schedules, total enumeration was superior to both dynamic programming and branch-and-bound integer programming for a bank-clerk scheduling problem.
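As an illustration of the kind of comparison described above, here is a minimal sketch (a toy single-item lot-sizing instance with assumed data, not one of the paper's five models) contrasting a backward dynamic-programming recursion, whose state is the entering inventory level, with total enumeration over all production plans; both methods recover the same minimum cost.

```python
from itertools import product

# Assumed toy instance: per-period demand, a fixed setup cost per production
# run, and a linear holding cost on end-of-period inventory.
demand = [3, 2, 4, 1]
setup, hold = 5.0, 1.0

T = len(demand)
MAX_Q = sum(demand)  # producing more than total demand is never optimal

def dp_min_cost():
    """Backward DP: state = inventory entering the current period."""
    best = {0: 0.0}  # after the last period, zero inventory costs nothing
    for t in range(T - 1, -1, -1):
        nxt = {}
        for s in range(MAX_Q + 1):
            for q in range(MAX_Q + 1 - s):
                end = s + q - demand[t]          # end-of-period inventory
                if end < 0 or end not in best:
                    continue                     # infeasible or unreachable
                c = (setup if q > 0 else 0.0) + hold * end + best[end]
                if c < nxt.get(s, float("inf")):
                    nxt[s] = c
        best = nxt
    return best[0]  # start with zero inventory

def enumerate_min_cost():
    """Total enumeration: score every production-quantity vector."""
    best = float("inf")
    for plan in product(range(MAX_Q + 1), repeat=T):
        inv, cost, feasible = 0, 0.0, True
        for t, q in enumerate(plan):
            inv += q - demand[t]
            if inv < 0:          # demand unmet -> infeasible plan
                feasible = False
                break
            cost += (setup if q > 0 else 0.0) + hold * inv
        if feasible:
            best = min(best, cost)
    return best
```

On this instance both routines return the same optimum; the point of the paper's experiments is that when the stage payoff entangles many preceding decisions, the enumeration loop can be the cheaper of the two.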