References
- Bishop, C. M. 2006. Pattern recognition and machine learning. New York: Springer.
- Bojarski, M., D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang et al. 2016. End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316.
- Buckman, J., A. Roy, C. Raffel, and I. Goodfellow. 2018. “Thermometer encoding: One hot way to resist adversarial examples.” In International Conference on Learning Representations, Vancouver.
- Chow, K. H., L. Liu, M. Loper, J. Bae, M. Emre Gursoy, S. Truex, W. Wei, and Y. Wu. 2020. “Adversarial objectness gradient attacks in real-time object detection systems.” In 2020 Second IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA), Atlanta, 263–1007.
- Chung, J., C. Gulcehre, K. Cho, and Y. Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555.
- Elliott, D., W. Keen, and L. Miao. 2019. Recent advances in connected and automated vehicles. Journal of Traffic and Transportation Engineering (English Edition) 6 (2):109–31. https://www.sciencedirect.com/science/article/pii/S2095756418302289.
- Gong, D., L. Liu, L. Vuong, B. Saha, M. Reda Mansour, S. Venkatesh, and A. van den Hengel. 2019. “Memorizing normality to detect anomaly: Memory-augmented deep autoencoder for unsupervised anomaly detection.” In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, 1705–14.
- Goodfellow, I. J., J. Shlens, and C. Szegedy. 2014a. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
- Goodfellow, I. J., J. Shlens, and C. Szegedy. 2014b. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.
- Guo, C., M. Rana, M. Cisse, and L. Van Der Maaten. 2017. Countering adversarial images using input transformations. arXiv preprint arXiv:1711.00117.
- Hochreiter, S., and J. Schmidhuber. 1997. Long short-term memory. Neural Computation 9 (8):1735–80. doi:10.1162/neco.1997.9.8.1735.
- Huang, S., N. Papernot, I. Goodfellow, Y. Duan, and P. Abbeel. 2017. Adversarial attacks on neural network policies. arXiv preprint arXiv:1702.02284.
- Jain, L. C., and L. R. Medsker. 1999. Recurrent neural networks: design and applications. 1st ed. USA: CRC Press, Inc.
- Jalal, A., A. Ilyas, C. Daskalakis, and A. G. Dimakis. 2017. The robust manifold defense: Adversarial training using generative models. arXiv preprint arXiv:1712.09196.
- He, K., H. Fan, Y. Wu, S. Xie, and R. Girshick. 2020. “Momentum contrast for unsupervised visual representation learning.” In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, 9729–38.
- Kingma, D. P., and M. Welling. 2013. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114.
- Liu, W., X. Daniel Wang, and D. Song. 2015. “Feature squeezing: Detecting adversarial examples in deep neural networks.” In 2015 IEEE Symposium on Security and Privacy, 567–76. San Jose: IEEE.
- Mordor Intelligence. 2021. “Autonomous (Driverless) Car Market – Growth, Trends, COVID-19 Impact, and Forecasts (2023–2028).” Accessed December 29, 2022. https://www.mordorintelligence.com/industry-reports/autonomous-driverless-cars-market-potential-estimation.
- Pang, L., X. Yuan, S. Jiantao, and L. Hongyang. 2020. “Block switching defenses are not robust to adversarial examples.” In International Conference on Learning Representations, Addis Ababa.
- Papernot, N., P. McDaniel, I. Goodfellow, S. Jha, Z. Berkay Celik, and A. Swami. 2017. “Practical black-box attacks against machine learning.” In Proceedings of the 2017 ACM on Asia conference on computer and communications security, Abu Dhabi, 506–19.
- Papernot, N., P. McDaniel, X. Wu, S. Jha, and A. Swami. 2016. “Distillation as a defense to adversarial perturbations against deep neural networks.” In 2016 IEEE symposium on security and privacy (SP), 582–97. San Jose: IEEE.
- Park, H., J. Noh, and B. Ham. 2020. “Learning memory-guided normality for anomaly detection.” In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, 14372–81.
- Pomerleau, D. A. 1988. “ALVINN: An autonomous land vehicle in a neural network.” In Advances in Neural Information Processing Systems 1.
- Ruder, S. 2016. An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747.
- Rumelhart, D. E., G. E. Hinton, and R. J. Williams. 1986. Learning representations by back-propagating errors. Nature 323 (6088):533–36. doi:10.1038/323533a0.
- Samangouei, P., M. Kabkab, and R. Chellappa. 2018. Defense-GAN: Protecting classifiers against adversarial attacks using generative models. arXiv preprint arXiv:1805.06605.
- Gu, S., and L. Rigazio. 2014. Towards deep neural network architectures robust to adversarial examples. arXiv preprint arXiv:1412.5068.
- Song, Y., T. Kim, S. Nowozin, S. Ermon, and N. Kushman. 2017. “Pixeldefend: Leveraging generative models to understand and defend against adversarial examples.” arXiv preprint arXiv:1710.10766.
- Tramèr, F., A. Kurakin, N. Papernot, I. Goodfellow, D. Boneh, and P. McDaniel. 2017. “Ensemble adversarial training: Attacks and defenses.” arXiv preprint arXiv:1705.07204.
- Udacity. 2017. “Udacity self-driving car dataset.”
- Wang, W., Y. Huang, Y. Wang, and L. Wang. 2014. “Generalized autoencoder: A neural network framework for dimensionality reduction.” In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, Columbus, 490–97.
- Wu, Z., Y. Xiong, S. X. Yu, and D. Lin. 2018. “Unsupervised feature learning via non-parametric instance discrimination.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, 3733–42.
- Zhou, J., C. Liang, and J. Chen. 2020. “Manifold projection for adversarial defense on face recognition.” In European Conference on Computer Vision, 288–305. Glasgow: Springer.