Visual-Thermal Fusion-Based Object Tracking via a Granular Computing Backed Particle Filtering

References

  • W. Hu, T. Tan, L. Wang, and S. Maybank, “A survey on visual surveillance of object motion and behaviors,” IEEE Trans. Syst. Man Cybern. C Appl. Rev., Vol. 34, no. 3, 2004.
  • B. Zhan, D. N. Monekosso, P. Remagnino, S. A. Velastin, and L. Q. Xu, “Crowd analysis: A survey,” Mach. Vis. Appl., Vol. 19, no. 5–6, 2008.
  • S. Sivanantham, N. N. Paul, and R. S. Iyer, “Object tracking algorithm implementation for security applications,” Far East J. Electron. Commun., Vol. 16, no. 1, 2016.
  • S. Walker, et al., “Systems and methods for localizing, tracking and/or controlling medical instruments,” Google Patents, 2017.
  • J. Severson, “Human-digital media interaction tracking,” US20100076274A1, 2017.
  • J. K. Aggarwal and L. Xia, “Human activity recognition from 3D data: A review,” Pattern Recognit. Lett., Vol. 48, 2014.
  • B. Tian, Q. Yao, Y. Gu, K. Wang, and Y. Li, “Video processing techniques for traffic flow monitoring: A survey,” in IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC, 2011.
  • M. Brown, J. Funke, S. Erlien, and J. C. Gerdes, “Safe driving envelopes for path tracking in autonomous vehicles,” Control Eng. Pract., Vol. 61, 2017.
  • G. Jemilda, S. Baulkani, D. George Paul, and J. Benjamin Rajan, “Tracking moving objects in video,” J. Comput., Vol. 12, no. 3, pp. 221–229, 2017.
  • M. Hanif and U. Ali, “Optimized visual and thermal image fusion for efficient face recognition,” in 2006 9th International Conference on Information Fusion, FUSION, 2006.
  • G. Bebis, A. Gyaourova, S. Singh, and I. Pavlidis, “Face recognition by fusing thermal infrared and visible imagery,” Image Vis. Comput., Vol. 24, no. 7, pp. 727–42, 2006.
  • J. Heo, S. G. Kong, B. R. Abidi, and M. A. Abidi, “Fusion of visual and thermal signatures with eyeglass removal for robust face recognition,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Jan. 2004.
  • S. G. Kong, et al., “Multiscale fusion of visible and thermal IR images for illumination-invariant face recognition,” Int. J. Comput. Vis., Vol. 71, no. 2, pp. 215–33, 2007.
  • T. Wilhelm, H. J. Böhme, and H. M. Gross, “A multi-modal system for tracking and analyzing faces on a mobile robot,” Rob. Auton. Syst., Vol. 48, no. 1, pp. 31–40, 2004.
  • G. Cielniak, T. Duckett, and A. J. Lilienthal, “Improved data association and occlusion handling for vision-based people tracking by mobile robots,” in 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2007, pp. 3436–41.
  • G. B. Palmerini, “Combining thermal and visual imaging in spacecraft proximity operations,” in IEEE International Conference on Control Automation Robotics & Vision, 2014, pp. 383–388.
  • Y. Tong, L. Liu, M. Zhao, J. Chen, and H. Li, “Adaptive fusion algorithm of heterogeneous sensor networks under different illumination conditions,” Signal Process., Vol. 126, pp. 149–58, 2016.
  • Z. Zhou, B. Wang, S. Li, and M. Dong, “Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with Gaussian and bilateral filters,” Inf. Fusion, Vol. 30, pp. 15–26, 2016.
  • X. Chen, G. Tian, J. Wu, C. Tang, and K. Li, “Feature-based registration for 3D eddy current pulsed thermography,” IEEE Sens. J., Vol. 19, no. 16, 2019.
  • C. Aguilera, F. Barrera, F. Lumbreras, A. D. Sappa, and R. Toledo, “Multispectral image feature points,” Sensors (Switzerland), Vol. 12, no. 9, 2012.
  • S. E. A. Raza, V. Sanchez, G. Prince, J. P. Clarkson, and N. M. Rajpoot, “Registration of thermal and visible light images of diseased plants using silhouette extraction in the wavelet domain,” Pattern Recognit., Vol. 48, no. 7, 2015.
  • J. Ma, Y. Ma, and C. Li, “Infrared and visible image fusion methods and applications: A survey,” Inf. Fusion, Vol. 45, pp. 153–78, 2018.
  • S. Singh, R. Kapoor, and A. Khosla, “Cross-domain usage in real time video-based tracking,” in Handbook of Research on Advanced Concepts in Real-Time Image and Video Processing, IGI Global, 2018, pp. 105–129.
  • G. S. Walia and R. Kapoor, “Recent advances on multicue object tracking: A survey,” Artif. Intell. Rev., Vol. 46, no. 1, 2016.
  • C. Ó. Conaire, N. E. O’Connor, and A. Smeaton, “Thermo-visual feature fusion for object tracking using multiple spatiogram trackers,” Mach. Vis. Appl., Vol. 19, no. 5–6, pp. 483–94, 2008.
  • G. Xiao, X. Yun, and J. Wu, “A new tracking approach for visible and infrared sequences based on tracking-before-fusion,” Int. J. Dyn. Control, Vol. 4, no. 1, pp. 40–51, 2016.
  • K. Nummiaro, E. Koller-Meier, and L. Van Gool, “An adaptive color-based particle filter,” Image Vis. Comput., Vol. 21, no. 1, pp. 99–110, 2003.
  • M. Talha and R. Stolkin, “Particle filter tracking of camouflaged targets by adaptive fusion of thermal and visible spectra camera data,” IEEE Sens. J., Vol. 14, no. 1, pp. 159–66, 2014.
  • G. S. Walia and R. Kapoor, “Robust object tracking based upon adaptive multi-cue integration for video surveillance,” Multimed. Tools Appl., Vol. 75, no. 23, pp. 15821–47, 2016.
  • C. Li, X. Wu, N. Zhao, X. Cao, and J. Tang, “Fusing two-stream convolutional neural networks for RGB-T object tracking,” Neurocomputing, Vol. 281, pp. 78–85, 2018.
  • J. Xiao, R. Stolkin, M. Oussalah, and A. Leonardis, “Continuously adaptive data fusion and model relearning for particle filter tracking with multiple features,” IEEE Sens. J., Vol. 16, no. 8, pp. 2639–49, 2016.
  • M. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, “A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking,” IEEE Trans. Signal Process., Vol. 50, no. 2, pp. 174–88, 2002.
  • G. S. Walia and R. Kapoor, “Intelligent video target tracking using an evolutionary particle filter based upon improved cuckoo search,” Expert Syst. Appl., Vol. 41, no. 14, pp. 6315–26, 2014.
  • Y. Qian, J. Liang, and C. Dang, “Knowledge structure, knowledge granulation and knowledge distance in a knowledge base,” Int. J. Approx. Reason., Vol. 50, no. 1, pp. 174–88, 2009.
  • Y. Qian, J. Liang, W. Z. Wu, and C. Dang, “Information granularity in fuzzy binary GrC model,” IEEE Trans. Fuzzy Syst., Vol. 19, no. 2, pp. 253–64, 2011.
  • P. Zhu and Q. Wen, “Homomorphisms between fuzzy information systems revisited,” Appl. Math. Lett., Vol. 24, no. 9, pp. 1548–53, 2011.
  • T. Li, M. Bolic, and P. M. Djuric, “Resampling methods for particle filtering,” IEEE Signal Process. Mag., Vol. 32, no. 3, pp. 70–86, 2015.
  • “Video analytics dataset.”
  • L. Čehovin, A. Leonardis, and M. Kristan, “Visual object tracking performance measures revisited,” IEEE Trans. Image Process., Vol. 25, no. 3, pp. 1261–74, 2016.
  • J. Davis and V. Sharma, “Background-subtraction using contour-based fusion of thermal and visible imagery,” Comput. Vis. Image Underst., Vol. 106, no. 2–3, pp. 162–82, 2007.
  • “Bristol Eden project multi-sensor data set.”
