Research Article

Synthetic Data with Global-Local Level for Adversarial-based Domain Adaptation

Pages 885-899 | Received 22 Nov 2021, Accepted 13 Aug 2022, Published online: 16 Sep 2022

