
A foreground detection algorithm combining temporal–spatial information and adaptive visual background extraction

Pages 49-61 | Received 27 Jun 2016, Accepted 04 Nov 2016, Published online: 14 Mar 2017
ABSTRACT

The visual background extraction (ViBe) algorithm uses a single global threshold for foreground segmentation and therefore adapts poorly to illumination change. It also tends to sample the wrong pixels when initialising the background model, so a ghost appears at the beginning of detection. To address these problems, this article proposes an improved algorithm that initialises the background model from each pixel's temporal–spatial information. First, the pixels in the first five frames of the video sequence, together with their neighbourhood pixels, are used to initialise the background model. Second, the segmentation threshold is obtained adaptively from the complexity of the background, measured over the spatial neighbourhood. Finally, the background models of the neighbourhood pixels are updated with a dynamic update rate derived from the Euclidean distance between pixels. Experimental results and a comparative study show that the improved method not only increases detection accuracy by reducing the impact of illumination change but also eliminates ghosts quickly.
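The three stages described in the abstract (neighbourhood-based initialisation, an adaptive segmentation threshold driven by background complexity, and sample-based classification) can be sketched in a simplified, single-channel form. This is an illustrative sketch only, not the authors' implementation: the sample count, match count, base radius, and complexity scale are assumed values, and for brevity the random neighbour offset is shared across all pixels rather than drawn per pixel.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SAMPLES = 20    # background samples kept per pixel (assumed value)
MIN_MATCHES = 2   # matches required to label a pixel as background

def init_model(frames):
    """Initialise each pixel's sample set from the first few frames,
    drawing from the pixel itself and its 8-neighbourhood (via a
    random offset into an edge-padded frame)."""
    h, w = frames[0].shape
    model = np.empty((N_SAMPLES, h, w), dtype=np.float32)
    padded = [np.pad(f.astype(np.float32), 1, mode="edge") for f in frames]
    for k in range(N_SAMPLES):
        f = padded[k % len(frames)]
        dy, dx = rng.integers(0, 3, size=2)   # neighbour offset in {-1,0,1}
        model[k] = f[dy:dy + h, dx:dx + w]
    return model

def adaptive_radius(model, base=20.0, scale=0.5):
    """Per-pixel matching radius that grows with local background
    complexity, here measured as the sample standard deviation."""
    return base + scale * model.std(axis=0)

def segment(frame, model):
    """Foreground mask: a pixel is background when enough of its
    stored samples lie within the adaptive radius of its new value."""
    dist = np.abs(model - frame.astype(np.float32))
    matches = (dist < adaptive_radius(model)).sum(axis=0)
    return matches < MIN_MATCHES   # True = foreground

# Toy demo: a noisy static background, then a bright square appears.
frames = [np.full((32, 32), 100, np.uint8) + rng.integers(0, 5, (32, 32)).astype(np.uint8)
          for _ in range(5)]
model = init_model(frames)
test = frames[0].copy()
test[10:16, 10:16] = 240           # simulated foreground object
mask = segment(test, model)
print(mask[12, 12], mask[0, 0])    # square pixel detected, background pixel not
```

Because the radius widens where the samples disagree, dynamic or flickering background regions get a looser threshold while flat regions stay strict, which is the mechanism the adaptive threshold relies on to cope with illumination change.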

Acknowledgements

The authors wish to thank the associate editors and anonymous reviewers for their valuable comments and suggestions on this article.

Notes

Image permissions have been obtained for all datasets from the following sources:

Figures a, b and c are from the PETS 2009 Benchmark Data, available at http://www.cvg.reading.ac.uk/PETS2009/a.html.

Figures c, d and a - c are from the ATON project of the Computer Vision and Robotics Research Laboratory, UCSD (CVRR), available at http://cvrr.ucsd.edu/aton/shadow/.

Figures a - c are from PETS 2001, available at http://www.cvg.reading.ac.uk/PETS2001/pets2001-dataset.html.

Figures a - c are available at http://www.changedetection.net/.

All other figures and images are reproduced by kind permission of the authors and College of Computer Science and Technology, Chongqing University of Posts and Telecommunications.

Additional information

Funding

This work was supported by the Chongqing Basic and Frontier Research Project under Grant Nos. [cstc2015jcyjBX0090, cstc2014jcyjA40033, cstc2015jcyjA40034] and the Outstanding Achievements Transformation Projects of University in Chongqing under Grant No. [KJZH14219].
