
Neural network pruning based on channel attention mechanism

Pages 2201–2218 | Received 02 May 2022, Accepted 05 Aug 2022, Published online: 24 Aug 2022
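The title names the core idea: use channel attention as a pruning criterion. Since only the reference list is reproduced here, the sketch below is not the authors' method; it is a minimal illustration, assuming an SE-style attention block (Hu et al., 2018, cited below) whose per-channel gates are averaged over a few calibration batches and used to rank channels. Every module and function name in it is an assumption.

import torch
import torch.nn as nn

class SEAttention(nn.Module):
    # Squeeze-and-excitation gate (Hu et al., 2018): one weight in (0, 1) per channel.
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        s = x.mean(dim=(2, 3))   # squeeze: global average pool over H, W -> (N, C)
        return self.fc(s)        # excite: per-channel attention weights, (N, C)

@torch.no_grad()
def channel_scores(attention, calibration_batches):
    # Average attention weight per channel over a few calibration batches.
    return torch.stack(
        [attention(x).mean(dim=0) for x in calibration_batches]
    ).mean(dim=0)                # (C,)

def channels_to_keep(scores, keep_ratio=0.5):
    # Indices of the highest-scoring channels; the rest are pruning candidates.
    k = max(1, int(scores.numel() * keep_ratio))
    return torch.topk(scores, k).indices.sort().values

After ranking, the low-scoring filters of a layer (and the matching input channels of the next layer) would be removed and the network fine-tuned, as in the filter-pruning pipelines of Li et al. (2016) and Luo et al. (2017) cited below.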

References

  • Aghasi, A., Abdi, A., Nguyen, N., & Romberg, J. (2017). Net-Trim: Convex pruning of deep neural networks with performance guarantee. Advances in Neural Information Processing Systems, 30, 3180–3189. https://dl.acm.org/doi/10.5555/3294996.3295077
  • Ayinde, B. O., Inanc, T., & Zurada, J. M. (2019). Redundant feature pruning for accelerated inference in deep neural networks. Neural Networks, 118, 148–158. https://doi.org/10.1016/j.neunet.2019.04.021
  • Bao, R., Yuan, X., Chen, Z., & Ma, R. (2018). Cross-entropy pruning for compressing convolutional neural networks. Neural Computation, 30(11), 3128–3149. https://doi.org/10.1162/neco_a_01131
  • Chang, X., Pan, H., Lin, W., & Gao, H. (2021). A mixed-pruning based framework for embedded convolutional neural network acceleration. IEEE Transactions on Circuits and Systems I: Regular Papers, 68(4), 1706–1715. https://doi.org/10.1109/TCSI.2020.3048260
  • Chen, Y., Wen, X., Zhang, Y., & He, Q. (2022). FPC: Filter pruning via the contribution of output feature map for deep convolutional neural networks acceleration. Knowledge-Based Systems, 238, 107876. https://doi.org/10.1016/j.knosys.2021.107876
  • Chen, Y., Wen, X., Zhang, Y., & Shi, W. (2021). CCPrune: Collaborative channel pruning for learning compact convolutional networks. Neurocomputing, 451, 35–45. https://doi.org/10.1016/j.neucom.2021.04.063
  • Chen, Z., Chen, Z., Lin, J., Liu, S., & Li, W. (2020). Deep neural network acceleration based on low-rank approximated channel pruning. IEEE Transactions on Circuits and Systems I: Regular Papers, 67(4), 1232–1244. https://doi.org/10.1109/TCSI.2019.2958937
  • Fernandes Jr., F. E., & Yen, G. G. (2021). Pruning deep convolutional neural networks architectures with evolution strategy. Information Sciences, 552, 29–47. https://doi.org/10.1016/j.ins.2020.11.009
  • Guo, Y., Yao, A., & Chen, Y. (2016). Dynamic network surgery for efficient DNNs. Advances in Neural Information Processing Systems, 29, 1387–1395. https://dl.acm.org/doi/10.5555/3157096.3157251
  • Han, S., Mao, H., & Dally, W. J. (2015). Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149.
  • Hassibi, B., & Stork, D. (1992). Second order derivatives for network pruning: Optimal brain surgeon. Advances in Neural Information Processing Systems, 5, 164–171. https://dl.acm.org/doi/10.5555/645753.668069
  • He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770–778). IEEE. https://doi.org/10.1109/CVPR.2016.90
  • He, Y., Liu, P., Wang, Z., Hu, Z., & Yang, Y. (2019). Filter pruning via geometric median for deep convolutional neural networks acceleration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 4340–4349). IEEE. https://doi.org/10.1109/CVPR.2019.00447
  • Hewahi, N. M. (2019). Neural network pruning based on input importance. Journal of Intelligent & Fuzzy Systems, 37(2), 2243–2252. https://doi.org/10.3233/JIFS-182544
  • Hu, J., Liang, W., Hosam, O., Hsieh, M. Y., & Su, X. (2022). 5GSS: A framework for 5G-secure-smart healthcare monitoring. Connection Science, 34(1), 139–161. https://doi.org/10.1080/09540091.2021.1977243
  • Hu, J., Shen, L., & Sun, G. (2018). Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 7132–7141). IEEE. https://doi.org/10.1109/CVPR.2018.00745
  • Hu, J., Wu, K., & Liang, W. (2019). An IPv6-based framework for fog-assisted healthcare monitoring. Advances in Mechanical Engineering, 11(1), 168781401881951. https://doi.org/10.1177/1687814018819515
  • Jiang, X., Wang, N., Xin, J., Xia, X., Yang, X., & Gao, X. (2021). Learning lightweight super-resolution networks with weight pruning. Neural Networks, 144, 21–32. https://doi.org/10.1016/j.neunet.2021.08.002
  • Kang, H. J. (2019). Accelerator-aware pruning for convolutional neural networks. IEEE Transactions on Circuits and Systems for Video Technology, 30(7), 2093–2103. https://doi.org/10.1109/TCSVT.2019.2911674
  • LeCun, Y., Denker, J., & Solla, S. (1989). Optimal brain damage. Advances in Neural Information Processing Systems, 2, 598–605. https://dl.acm.org/doi/10.5555/2969830.2969903
  • Li, G., Wang, J., Shen, H. W., Chen, K., Shan, G., & Lu, Z. (2021). CNNPruner: Pruning convolutional neural networks with visual analytics. IEEE Transactions on Visualization and Computer Graphics, 27(2), 1364–1373. https://doi.org/10.1109/TVCG.2020.3030461
  • Li, H., Kadav, A., Durdanovic, I., Samet, H., & Graf, H. P. (2016). Pruning filters for efficient ConvNets. arXiv preprint arXiv:1608.08710.
  • Li, J., Cao, F., Cheng, H., & Qian, Y. (2021). Learning the number of filters in convolutional neural networks. International Journal of Bio-Inspired Computation, 17(2), 75–84. https://doi.org/10.1504/IJBIC.2021.114101
  • Liang, T., Glossner, J., Wang, L., Shi, S., & Zhang, X. (2021). Pruning and quantization for deep neural network acceleration: A survey. Neurocomputing, 461, 370–403. https://doi.org/10.1016/j.neucom.2021.07.045
  • Lin, M., Ji, R., Wang, Y., Zhang, Y., Zhang, B., Tian, Y., & Shao, L. (2020). HRank: Filter pruning using high-rank feature map. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 1529–1538). IEEE. https://doi.org/10.1109/CVPR42600.2020.00160
  • Lin, S., Ji, R., Yan, C., Zhang, B., Cao, L., Ye, Q., Huang, F., & Doermann, D. (2019). Towards optimal structured CNN pruning via generative adversarial learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 2790–2799). IEEE. https://doi.org/10.1109/CVPR.2019.00290
  • Liu, Z., Xu, J., Peng, X., & Xiong, R. (2018). Frequency-domain dynamic pruning for convolutional neural networks. Advances in Neural Information Processing Systems, 31, 1051–1061. https://dl.acm.org/doi/10.5555/3326943.3327040
  • Lu, L., Shin, Y., Su, Y., & Karniadakis, G. E. (2019). Dying ReLU and initialization: Theory and numerical examples. arXiv preprint arXiv:1903.06733.
  • Lu, T. C. (2021). CNN convolutional layer optimisation based on quantum evolutionary algorithm. Connection Science, 33(3), 482–494. https://doi.org/10.1080/09540091.2020.1841111
  • Luo, J. H., Wu, J., & Lin, W. (2017). ThiNet: A filter level pruning method for deep neural network compression. In Proceedings of the IEEE international conference on computer vision (pp. 5058–5066). IEEE. https://doi.org/10.1109/ICCV.2017.541
  • Meng, F., Cheng, H., Li, K., Luo, H., Guo, X., Lu, G., & Sun, X. (2020). Pruning filter in filter. Advances in Neural Information Processing Systems, 33, 17629–17640. https://arxiv.org/abs/2009.14410
  • Molchanov, P., Mallya, A., Tyree, S., Frosio, I., & Kautz, J. (2019). Importance estimation for neural network pruning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 11264–11272). IEEE. https://doi.org/10.1109/CVPR.2019.01152
  • Reiners, M., Klamroth, K., Heldmann, F., & Stiglmayr, M. (2022). Efficient and sparse neural networks by pruning weights in a multiobjective learning approach. Computers & Operations Research, 141, 105676. https://doi.org/10.1016/j.cor.2021.105676
  • Shao, L., Zuo, H., Zhang, J., Xu, Z., Yao, J., Wang, Z., & Li, H. (2021). Filter pruning via measuring feature map information. Sensors, 21(19), 6601. https://doi.org/10.3390/s21196601
  • Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
  • Tanaka, H., Kunin, D., Yamins, D. L., & Ganguli, S. (2020). Pruning neural networks without any data by iteratively conserving synaptic flow. Advances in Neural Information Processing Systems, 33, 6377–6389. https://doi.org/10.48550/arXiv.2006.05467
  • Wang, S., Xing, C., & Liu, D. (2020). Efficient deep convolutional model compression with an active stepwise pruning approach. International Journal of Computational Science and Engineering, 22(4), 420–430. https://doi.org/10.1504/IJCSE.2020.109401
  • Wang, Z., Han, D., Li, M., Liu, H., & Cui, M. (2022). The abnormal traffic detection scheme based on PCA and SSH. Connection Science, 34(1), 1201–1220. https://doi.org/10.1080/09540091.2022.2051434
  • Wang, Z., Li, F., Shi, G., Xie, X., & Wang, F. (2020). Network pruning using sparse learning and genetic algorithm. Neurocomputing, 404, 247–256. https://doi.org/10.1016/j.neucom.2020.03.082
  • Wang, Z., Xie, X., & Shi, G. (2021). RFPruning: A retraining-free pruning method for accelerating convolutional neural networks. Applied Soft Computing, 113, 107860. https://doi.org/10.1016/j.asoc.2021.107860
  • Warmenhoven, J., Bargary, N., Liebl, D., Harrison, A., Robinson, M. A., Gunning, E., & Hooker, G. (2021). PCA of waveforms and functional PCA: A primer for biomechanics. Journal of Biomechanics, 116, 110106. https://doi.org/10.1016/j.jbiomech.2020.110106
  • Wen, L., Zhang, X., Bai, H., & Xu, Z. (2020). Structured pruning of recurrent neural networks through neuron selection. Neural Networks, 123, 134–141. https://doi.org/10.1016/j.neunet.2019.11.018
  • Wu, X., Wang, Y., & Wang, Z. (2022). A centerline symmetry and double-line transformation based algorithm for large-scale multi-objective optimization. Connection Science, 34(1), 1454–1481. https://doi.org/10.1080/09540091.2022.2075828
  • Yeom, S. K., Seegerer, P., Lapuschkin, S., Binder, A., Wiedemann, S., Müller, K. R., & Samek, W. (2021). Pruning by explaining: A novel criterion for deep neural network pruning. Pattern Recognition, 115, 107899. https://doi.org/10.1016/j.patcog.2021.107899
  • You, Z., Yan, K., Ye, J., Ma, M., & Wang, P. (2019). Gate decorator: Global filter pruning method for accelerating deep convolutional neural networks. Advances in Neural Information Processing Systems, 32, 1–12. https://doi.org/10.48550/arXiv.1909.08174
  • Yu, R., Li, A., Chen, C. F., Lai, J. H., Morariu, V. I., Han, X., Gao, M., Lin, C.-Y., & Davis, L. S. (2018). NISP: Pruning networks using neuron importance score propagation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 9194–9203). IEEE. https://doi.org/10.1109/CVPR.2018.00958
  • Zhuang, T., Zhang, Z., Huang, Y., Zeng, X., Shuang, K., & Li, X. (2020). Neuron-level structured pruning using polarization regularizer. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, & H. Lin (Eds.), Proceedings of the conference on neural information processing systems (NeurIPS 2020) (pp. 9865–9877). Curran Associates, Inc.