
Multiple object characterization of recyclable domestic waste using binocular stereo vision


References

  • Chen, S.; Huang, J.; Xiao, T.; Gao, J.; Bai, J.; Luo, W.; Dong, B. Carbon Emissions under Different Domestic Waste Treatment Modes Induced by Garbage Classification: Case Study in Pilot Communities in Shanghai, China. Sci. Total Environ. 2020, 717, 137193. DOI: 10.1016/j.scitotenv.2020.137193.
  • Maghsoudi, M.; Shokouhyar, S.; Khanizadeh, S.; Shokoohyar, S. Towards a Taxonomy of Waste Management Research: An Application of Community Detection in Keyword Network. J. Cleaner Prod. 2023, 401, 136587. DOI: 10.1016/j.jclepro.2023.136587.
  • Wu, T. W.; Zhang, H.; Peng, W.; Lü, F.; He, P. J. Applications of Convolutional Neural Networks for Intelligent Waste Identification and Recycling: A Review. Resour. Conserv. Recycl. 2023, 190, 106813. DOI: 10.1016/j.resconrec.2022.106813.
  • Yang, M.; Thung, G. Classification of Trash for Recyclability Status. CS229 Project Report; Stanford University, 2016.
  • Mao, W. L.; Chen, W. C.; Fathurrahman, H. I. K.; Lin, Y.-H. Deep Learning Networks for Real-Time Regional Domestic Waste Detection. J. Cleaner Prod. 2022, 344, 131096. DOI: 10.1016/j.jclepro.2022.131096.
  • Zhang, Q.; Yang, Q.; Zhang, X.; Wei, W.; Bao, Q.; Su, J.; Liu, X. A Multi-Label Waste Detection Model Based on Transfer Learning. Resour. Conserv. Recycl. 2022, 181, 106235. DOI: 10.1016/j.resconrec.2022.106235.
  • Vo, A. H.; Hoang Son, L.; Vo, M. T.; Le, T. A Novel Framework for Trash Classification Using Deep Transfer Learning. IEEE Access. 2019, 7, 178631–178639. DOI: 10.1109/ACCESS.2019.2959033.
  • Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, June 23–28, 2014.
  • Girshick, R. Fast R-CNN. International Conference on Computer Vision, Santiago, Chile, Dec 7–13, 2015.
  • Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. 29th Annual Conference on Neural Information Processing Systems, Montreal, QC, Canada, Dec 7–12, 2015.
  • He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, Oct. 22–29, 2017.
  • Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, June 27–30, 2016.
  • NanoDet-plus. Super Fast and High Accuracy Lightweight Anchor-Free Object Detection Model; 2021. https://github.com/RangiLyu/nanodet.
  • You, Y.; Wang, Y.; Chao, W.-L.; Garg, D.; Pleiss, G. Pseudo-LiDAR++: Accurate Depth for 3D Object Detection in Autonomous Driving. arXiv preprint arXiv:1906.06310; 2019.
  • Chen, M.; Chen, Z.; Luo, L.; Tang, Y.; Cheng, J.; Wei, H.; Wang, J. Dynamic Visual Servo Control Methods for Continuous Operation of a Fruit Harvesting Robot Working Throughout an Orchard. Comput. Electron. Agric. 2024, 219, 108774. DOI: 10.1016/j.compag.2024.108774.
  • Hu, K.; Chen, Z.; Kang, H.; Tang, Y. 3D Vision Technologies for a Self-Developed Structural External Crack Damage Recognition Robot. Autom. Constr. 2024, 159, 105262. DOI: 10.1016/j.autcon.2023.105262.
  • Hirschmüller, H. Stereo Processing by Semiglobal Matching and Mutual Information. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 328–341. DOI: 10.1109/TPAMI.2007.1166.
  • Li, J. Binocular Vision Measurement Method for Relative Position and Attitude Based on Dual-Quaternion. J. Mod. Opt. 2017, 64, 1846–1853. DOI: 10.1080/09500340.2017.1321798.
  • Liu, K.; Zhou, C.; Wei, S.; Wang, S.; Fan, X.; Ma, J. Optimized Stereo Matching in Binocular Three-Dimensional Measurement System Using Structured Light. Appl. Opt. 2014, 53, 6083–6090. DOI: 10.1364/AO.53.006083.
  • Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.-C. MobileNetV2: Inverted Residuals and Linear Bottlenecks. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, June 18–22, 2018.
  • Ma, N.; Zhang, X.; Zheng, H.-T.; Sun, J. ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design. Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, Sep 8–14, 2018.
  • Lin, T. Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature Pyramid Networks for Object Detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, July 21–26, 2017.
  • Misra, D. Mish: A Self Regularized Non-Monotonic Neural Activation Function. arXiv preprint arXiv:1908.08681; 2019.
  • Maas, A. L.; Hannun, A. Y.; Ng, A. Y. Rectifier Nonlinearities Improve Neural Network Acoustic Models. Proceedings of the International Conference on Machine Learning, Atlanta, GA, USA, June 16–21, 2013.
  • Wu, Q.; Liang, T.; Fang, H.; Wei, Y.; Wang, M.; He, D. A Lightweight Deep Learning Algorithm for Multi-Objective Detection of Recyclable Domestic Waste. Environ. Eng. Sci. 2023, 40, 667–677. DOI: 10.1089/ees.2023.0138.
  • Zhang, Z. A Flexible New Technique for Camera Calibration. IEEE Trans. Pattern Anal. Machine Intell. 2000, 22, 1330–1334. DOI: 10.1109/34.888718.
  • Molchanov, P.; Tyree, S.; Karras, T.; Aila, T.; Kautz, J. Pruning Convolutional Neural Networks for Resource Efficient Inference. 5th International Conference on Learning Representations, Toulon, France, April 24–26, 2017.
