Abstract
A targeted Bayesian network learning (TBNL) method is proposed that accounts for the classification objective during the learning stage of the network model. TBNL approximates the expected conditional probability distribution of the class variable and effectively manages the trade-off between classification accuracy and model complexity by using a discriminative approach constrained by information-theoretic measures. In cases of missing data, the proposed approach also provides a mechanism for maximizing accuracy via a Pareto frontier over the complexity–accuracy plane. A comparative study over a set of classification problems demonstrates the competitiveness of TBNL, mainly with respect to other graphical classifiers.