Research Articles

Dynamic method to optimize memory space requirement in real-time video surveillance using convolution neural network

Tamal Biswas, Diptendu Bhattacharya & Gouranga Mandal
Pages 576-590 | Received 20 Jan 2023, Accepted 15 May 2023, Published online: 08 Jun 2023
 

ABSTRACT

Real-time video surveillance is one of the most effective ways to observe crime, mischief, and violence. However, most recent surveillance systems consume huge amounts of memory to store video. This article proposes an advanced dynamic video-surveillance strategy that uses minimal memory space while still providing fine-grained detection of suspicious moving objects. In the proposed system, a convolutional neural network (CNN) classifier and frame-resolution switching are used to detect moving objects at an appropriate frame resolution. A high-definition (HD) frame is recorded dynamically whenever the movement of a suspicious object is detected; for less important scenes, the system records low-quality frames. Finally, an improved gradient-based histogram-equalization technique is applied to all frames to obtain enhanced suspicious frames. Several real-time empirical tests were conducted, and the proposed system was observed to detect suspicious objects with 98.25% accuracy while consuming 80% less memory storage.
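The storage saving comes from the frame-resolution switching policy described above: record an HD frame only when the classifier flags suspicious movement, and a low-resolution frame otherwise. The following minimal sketch illustrates that policy in Python; the per-frame byte counts, the resolutions, the 0.5 threshold, and the use of a plain motion score in place of the paper's CNN classifier are all illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of dynamic frame-resolution switching.
# A scalar "motion score" stands in for the paper's CNN classifier output;
# frame sizes assume uncompressed 24-bit RGB frames (an assumption).

HD_BYTES = 1920 * 1080 * 3    # bytes per high-definition frame
LOW_BYTES = 640 * 360 * 3     # bytes per low-resolution frame

def choose_resolution(motion_score, threshold=0.5):
    """Record in HD only when suspicious movement is detected."""
    return "HD" if motion_score >= threshold else "LOW"

def storage_used(motion_scores, threshold=0.5):
    """Total bytes consumed by a frame sequence under the switching policy."""
    return sum(
        HD_BYTES if choose_resolution(s, threshold) == "HD" else LOW_BYTES
        for s in motion_scores
    )

# Example: suspicious movement in 2 of 10 frames.
scores = [0.1, 0.9, 0.2, 0.1, 0.8, 0.1, 0.0, 0.2, 0.1, 0.3]
dynamic = storage_used(scores)
always_hd = HD_BYTES * len(scores)
saving = 1 - dynamic / always_hd   # fraction of storage saved vs. all-HD
```

Under these assumed numbers, the saving grows as the fraction of motion-free frames grows; the 80% figure reported in the abstract would depend on the actual resolutions, codec, and scene activity in the authors' tests.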

Acknowledgements

The authors would like to acknowledge the National Institute of Technology Agartala, Tripura, India, for providing a world-class research environment including the research laboratory.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Tamal Biswas

Tamal Biswas received the B.Tech. degree in computer science and engineering from West Bengal University of Technology, Kolkata, India, in 2010, and the M.Tech. degree in computer science and engineering from the National Institute of Technology Agartala, Tripura, India, in 2012. He is currently pursuing the Ph.D. degree in computer science and engineering at the National Institute of Technology Agartala. His research interests include digital image processing, computer vision, and pattern recognition.

Diptendu Bhattacharya

Diptendu Bhattacharya received the M.E.Tel.E. and Ph.D. (engineering) degrees from Jadavpur University, Kolkata, India, in 1999 and 2016, respectively. He is currently an associate professor in the Department of Computer Science and Engineering, National Institute of Technology Agartala, Tripura, India. He stood first in order of merit in his M.E.Tel.E. program. He has supervised several B.Tech. and M.Tech. theses and is currently supervising five Ph.D. theses. His research interests include digital image processing, machine intelligence in economic time-series prediction, artificial intelligence, and fuzzy time series and its prediction. He is a member of the IEEE and the IEEE Computer Society.

Gouranga Mandal

Gouranga Mandal received the B.Tech. degree in information technology and the M.Tech. degree in computer science and engineering from West Bengal University of Technology, Kolkata, India, in 2009 and 2012, respectively, and the Ph.D. degree in computer science and engineering from the National Institute of Technology Agartala, Tripura, India, in 2022. He is currently an assistant professor in the School of Computer Science and Engineering at VIT-AP University, Vijayawada, India. His research interests include digital image processing, computer vision, intelligent transport systems, and pattern recognition.
