
Optimizing distortion magnitude for data augmentation in few-shot remote sensing scene classification

Pages 1134–1147 | Received 12 Oct 2023, Accepted 03 Jan 2024, Published online: 02 Feb 2024

