Original Articles

Structure learning of Bayesian networks by continuous particle swarm optimization algorithms

Pages 1528-1556 | Received 31 May 2016, Accepted 10 Feb 2018, Published online: 26 Feb 2018
 

ABSTRACT

In this paper, the problem of learning Bayesian network (BN) structures is studied by means of particle swarm optimization (PSO) algorithms. After analysing the optimal flying behaviours of some classic PSO algorithms, we put forward a new PSO-based method for learning BN structures. In this method, we treat the position of a particle as an imaginary likelihood that represents to what extent the associated edges exist, treat the velocity as the corresponding increment or decrement of likelihood that describes how the position changes during flight, and treat the output BN structures as appendants of the positions. The resulting algorithm, together with an improved version that integrates expert knowledge, is shown to be efficient in pooling the randomly searched information from all particles. A numerical study based on two benchmarking BNs demonstrates the superiority of our algorithms in terms of precision, speed, and accuracy.
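As an illustration of this representation, the following minimal sketch (written in Python rather than the MATLAB/FullBNT environment used in the paper) encodes a particle's position as a matrix of edge likelihoods and reads a DAG off it as an appendant. The 0.5 threshold and the ordering-based decoding rule are our own assumptions, not necessarily the decoding rule used in the paper.

```python
import numpy as np

n = 5                                    # number of BN nodes (illustrative)
position = np.random.rand(n, n)          # imaginary likelihood that edge i -> j exists
velocity = np.zeros((n, n))              # increment/decrement of those likelihoods
np.fill_diagonal(position, 0.0)          # no self-loops

def decode_dag(position, threshold=0.5):
    """Read a DAG 'appendant' off a continuous position.
    The rule below (order nodes by outgoing likelihood mass, keep only
    order-consistent edges above the threshold) is an assumption made here
    for illustration; it guarantees acyclicity by construction."""
    m = position.shape[0]
    order = np.argsort(-position.sum(axis=1))   # stronger "senders" come first
    rank = np.empty(m, dtype=int)
    rank[order] = np.arange(m)
    adj = np.zeros((m, m), dtype=int)
    for i in range(m):
        for j in range(m):
            if i != j and rank[i] < rank[j] and position[i, j] > threshold:
                adj[i, j] = 1                   # edge i -> j survives
    return adj

dag = decode_dag(position)                      # adjacency matrix of the decoded structure
```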

Acknowledgments

The authors are very grateful to the referee for the encouraging comments and constructive criticisms that resulted in the present version of this paper.

Disclosure statement

No potential conflict of interest was reported by the authors.

Notes

1 In a certain sense, the idea of PSO is similar to that of the genetic algorithm (GA; [Citation12]): GA requires crossover and mutation operators, while PSO needs an updating rule (see Theorem 3.1 for details); in GA, a child collects some of the information from two parents and mutates with a small probability; in PSO, a particle flies to the next position based on the previous position, the personal best position, the group best position, and a random position. This is why we use the phrase ‘evolutionary characteristics’. Further, PSO performs ‘mutation’ twice: once by using a random position and once by transforming the collected information into a DAG. The classic PSO algorithms (including PSOBN) perform these two mutations simultaneously. This highly mutational updating rule may cause some particle to move to a locally optimal position prematurely; more and more particles will then be rapidly attracted to this local optimum (because PSOBN may not effectively pool useful information, as Example 3.1 illustrates); and finally, the swarm will be trapped in a local optimum. This slows down or even terminates the search progress. This is why we say ‘the evolutionary characteristics are prematurely violated’: the evolution should be continuous and gradual, without too many significant mutations.
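For concreteness, the sketch below shows one way such a flying rule could combine the previous position, the personal best, the group best, and a random position. The coefficients w, c1, c2, c3 and the clipping step are illustrative assumptions; the paper's actual updating rule is the one stated in its Theorem 3.1.

```python
import numpy as np

def fly(position, velocity, personal_best, group_best, random_position,
        w=0.7, c1=1.4, c2=1.4, c3=0.5):
    """One hypothetical continuous update: inertia plus attraction toward the
    personal best, the group best, and a random position."""
    r1, r2, r3 = np.random.rand(3)
    velocity = (w * velocity
                + c1 * r1 * (personal_best - position)
                + c2 * r2 * (group_best - position)
                + c3 * r3 * (random_position - position))
    position = np.clip(position + velocity, 0.0, 1.0)   # keep likelihoods in [0, 1]
    return position, velocity
```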

2 See Line 2 of Algorithm 1 for the generation of the random position X_{i,k+1}^{rnd} based on the FullBNT toolbox [Citation48].
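Since Algorithm 1 itself is not reproduced on this page, the following stand-in sketch only illustrates how a random position might be sampled as the adjacency matrix of a random DAG (random topological order plus Bernoulli edges). The edge probability p is a hypothetical parameter; the paper delegates this step to the FullBNT toolbox.

```python
import numpy as np

def random_position(n, p=0.3, seed=None):
    """Sample a random DAG adjacency matrix as a stand-in for X^rnd (assumption)."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(n)                   # random topological order
    x_rnd = np.zeros((n, n))
    for a in range(n):
        for b in range(a + 1, n):
            if rng.random() < p:                 # add edge order[a] -> order[b]
                x_rnd[order[a], order[b]] = 1.0
    return x_rnd
```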

3 Here, probabilistic combination means that a given particle chooses its jth column from among the jth column of the randomly generated particle, the jth column of its personal best, the jth column of the group best, and the jth column of the expert knowledge, rather than using information from all three candidates (for PSOBN) or all four (for EPSOBN). This explanation was provided by the referee, and we greatly appreciate this valuable note!
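The column-wise selection described above can be sketched as follows; the candidate list and the selection probabilities probs are illustrative assumptions rather than values taken from the paper.

```python
import numpy as np

def combine_columns(candidates, probs, rng=None):
    """Probabilistic combination: each column of the result is copied, as a
    whole, from the matching column of one randomly chosen candidate matrix."""
    rng = np.random.default_rng() if rng is None else rng
    n = candidates[0].shape[0]
    combined = np.empty((n, n))
    for j in range(n):
        source = rng.choice(len(candidates), p=probs)   # pick one candidate...
        combined[:, j] = candidates[source][:, j]       # ...and copy its jth column
    return combined

# e.g. three candidates for PSOBN (x_rnd, p_best, g_best) or four for EPSOBN
# (with an expert-knowledge matrix added); the probabilities below are made up:
# combined = combine_columns([x_rnd, p_best, g_best], probs=[0.2, 0.4, 0.4])
```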

Additional information

Funding

This work was supported by the National Natural Science Foundation of China (61374183, 51535005, 51675212).
