
Adversarial attack to fool object detector

Abstract

State-of-the-art deep neural network-based models share a weakness: they are vulnerable to adversarial attacks. However, only a few attacks have been demonstrated against object detection, and these have the limitation of requiring time-consuming hyperparameter tuning. To address this problem, we propose the Plug-n-Play Adversarial Attack (PPAA), a simple technique that is computationally efficient in terms of the average number of iterations and uses constrained uniform random noise to generate perturbations. The proposed method is tested on the Microsoft Common Objects in Context (MSCOCO) dataset using a state-of-the-art object detection algorithm, RetinaNet. The results show that PPAA reduces the average number of iterations to 8.64, one fifth of that of DAG, and achieves a comparable convergence rate of 96.48% while keeping the perturbations quasi-imperceptible to the human eye, with a perturbation magnitude of 1.2 × 10⁻².
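The abstract does not spell out the attack loop, but the ingredients it names (constrained uniform random noise, an average iteration count, a convergence criterion) suggest a rejection-sampling loop along the following lines. This is a minimal sketch under assumptions: the `epsilon` bound (set here to the reported 1.2 × 10⁻² magnitude), the 0.5 score threshold, the fresh-noise-per-trial sampling, and the torchvision-style detector interface are all illustrative choices, not the paper's specification.

```python
import torch

def ppaa_attack(model, image, epsilon=0.012, max_iters=50, score_thresh=0.5):
    """Sketch of a PPAA-style attack: draw fresh uniform noise bounded by
    epsilon each trial and return the first noisy image that suppresses
    all confident detections. Stopping rule and interface are assumed."""
    model.eval()
    adversarial = image
    for n_iters in range(1, max_iters + 1):
        # Constrained uniform random noise in [-epsilon, +epsilon].
        noise = torch.empty_like(image).uniform_(-epsilon, epsilon)
        adversarial = (image + noise).clamp(0.0, 1.0)
        with torch.no_grad():
            # torchvision-style detectors take a list of CHW tensors and
            # return one dict per image with 'boxes', 'scores', 'labels'.
            detections = model([adversarial])[0]
        # Assumed success criterion: no detection survives the threshold.
        if not bool((detections["scores"] > score_thresh).any()):
            return adversarial, n_iters
    return adversarial, max_iters
```

With a torchvision RetinaNet (`torchvision.models.detection.retinanet_resnet50_fpn`), `image` would be a CHW tensor with values in [0, 1]; the clamp keeps the perturbed image in that valid range.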

