Volume 20, Issue 2 (9-2023)                   JSDP 2023, 20(2): 113-144




Khalooei M, Homayounpour M M, Amirmazlaghani M. A survey on vulnerability of deep neural networks to adversarial examples and defense approaches to deal with them. JSDP 2023; 20 (2) : 8
URL: http://jsdp.rcisp.ac.ir/article-1-1205-en.html
Amirkabir University of Technology
Abstract:
Nowadays, neural networks are the most commonly used models in various machine learning and artificial intelligence tasks. Despite their wide range of applications, neural networks, and deep neural networks (DNNs) in particular, have vulnerabilities. A small distortion or adversarial perturbation of the input data, whether additive or non-additive, can change the output of a trained model; this is one kind of DNN vulnerability. Although such a perturbation is imperceptible to humans, the DNN remains vulnerable to it.
Creating and applying such a malicious perturbation, called an "attack", penetrates a DNN and prevents it from performing its intended task. In this paper, attack approaches are categorized according to the signal used in the attack procedure. Some approaches use the gradient signal to detect the vulnerability of a DNN and craft a powerful attack. Others create a perturbation blindly, changing a portion of the input to produce a potentially malicious perturbation. Adversarial attacks cover both white-box and black-box settings: the white-box setting relies on the training loss function and the architecture of the model, whereas the black-box setting relies on approximating the target model while dealing with restrictions on the number of input-output queries to it.
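As an illustration of the gradient-based, white-box family of attacks described above, the following is a minimal PyTorch sketch of an FGSM-style attack. It assumes a classifier model, inputs x scaled to [0, 1], labels y, and a perturbation budget eps; all names are illustrative and are not taken from the paper's released code.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8/255):
    # Compute the loss gradient with respect to the input.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Take one signed-gradient step of size eps and clamp to the valid input range.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()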
Making a deep neural network resilient against attacks is called "defense". Defense approaches are divided into three categories: the first modifies the input, the second changes the developed model and its loss function, and the third uses auxiliary networks to purify and refine the input before passing it to the main network. Furthermore, an analytical approach is presented for the entanglement and disentanglement of input representations in the trained model. The gradient is a very powerful signal commonly used both in learning and in attack procedures. In addition, adversarial training is a well-known loss-function-modifying approach for defending against adversarial attacks.
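The following is a minimal sketch of adversarial training, the loss-function-based defense mentioned above: adversarial examples are generated on the fly for each batch and the model is trained on them. It assumes a PyTorch model, an optimizer, a data loader, and an attack function such as the fgsm_attack sketched earlier; the paper's released framework may differ in its details.

import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, attack_fn, eps=8/255):
    model.train()
    for x, y in loader:
        # Craft adversarial examples for the current batch using the given attack.
        x_adv = attack_fn(model, x, y, eps=eps)
        # Train on the adversarial examples instead of the clean ones.
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()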
In this study, the most recent research on the vulnerability of DNNs is presented through a critical literature review. The literature and our experiments indicate that projected gradient descent (PGD) and AutoAttack are successful approaches for l2- and l∞-bounded attacks, respectively. Furthermore, our experiments indicate that AutoAttack is much more time-consuming than the other methods. On the defense side, different experiments were conducted to compare different attacks within the adversarial training approach. Our experimental results indicate that PGD is much more effective in adversarial training than the fast gradient sign method (FGSM) and its variants such as MIFGSM, and yields better generalization of the trained model on the predefined datasets. Furthermore, integrating AutoAttack with adversarial training works well, but it is not efficient at low epoch numbers. In addition, it has been shown that adversarial training is time-consuming. Finally, we released our code for researchers and practitioners interested in extending or evaluating the predefined models in standard and adversarial machine learning projects. A more detailed description of the framework can be found at https://github.com/khalooei/Robustness-framework .
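For reference, an l∞-bounded PGD attack of the kind compared in the experiments can be sketched as the following iterative procedure (random start, signed-gradient steps, and projection back onto the eps-ball). This is an illustrative PyTorch sketch with assumed parameter names (eps, alpha, steps), not the implementation from the released framework; it can be plugged into the adversarial_training_epoch sketch above as attack_fn.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Start from a random point inside the eps-ball around the clean input.
    x_adv = (x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Signed-gradient step, then projection onto the eps-ball and the valid input range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()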
Article number: 8
Full-Text [PDF 1990 kb]
Type of Study: Applicable | Subject: Paper
Received: 2021/01/19 | Accepted: 2023/07/05 | Published: 2023/10/22 | ePublished: 2023/10/22

Rights and permissions
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
