The introduction of deep learning technology can improve the performance of cyber-physical systems (CPSs) in many ways. However, it also brings new security issues. To tackle these challenges, this paper explores the vulnerabilities of deep learning-based unmanned aerial vehicles (UAVs), which are typical CPSs. Although many previous works have reported adversarial attacks on deep learning models, only a few concern safety-critical CPSs, and fewer still the regression models in such systems. In this paper, we analyze the problem of adversarial attacks against deep learning-based UAVs and propose two adversarial attack methods against regression models in UAVs. Experiments demonstrate that both the proposed non-targeted and targeted attack methods can craft imperceptible adversarial images and pose a considerable threat to the navigation and control of UAVs. To address this problem, adversarial training and defensive distillation are further investigated and evaluated as defenses that increase the robustness of the deep learning models in UAVs. To our knowledge, this is the first study of adversarial attacks and defenses against deep learning-based UAVs, and it calls for more attention to the security and safety of such safety-critical applications.
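The abstract does not give the attack details, but the idea of a gradient-based non-targeted attack on a regression model can be sketched with a toy example. The linear "model" below stands in for a UAV steering-angle network, and all names and values are illustrative assumptions, not the paper's method:

```python
import numpy as np

def predict(w, b, x):
    """Toy regression model y = w.x + b, a stand-in for a UAV control network."""
    return float(w @ x + b)

def fgsm_regression(w, b, x, eps):
    """FGSM-style non-targeted attack for regression: step the input in the
    direction that increases the model's output, bounded by eps per pixel.
    For a linear model the input gradient of the prediction is simply w."""
    grad = w  # d predict / d x for the linear stand-in model
    return x + eps * np.sign(grad)

# Illustrative usage: a small, bounded perturbation shifts the prediction.
w = np.array([0.5, -0.3])
b = 0.1
x = np.array([1.0, 2.0])
x_adv = fgsm_regression(w, b, x, eps=0.05)
```

A targeted variant would instead descend the squared error between the prediction and an attacker-chosen value; for a deep network the gradient would come from backpropagation rather than the closed form used here.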
IEEE Transactions on Dependable and Secure Computing, 2024
Deep learning methods can not only detect false data injection attacks (FDIA) but also locate the attacked positions. Although adversarial false data injection attacks (AFDIA) exploiting deep learning vulnerabilities have been studied for single-label FDIA detection, adversarial attacks and defenses against multi-label FDIA locational detection remain unexplored. To bridge this gap, this paper first explores multi-label adversarial example attacks against multi-label FDIA locational detectors and proposes a general multi-label adversarial attack framework, namely muLti-labEl adverSarial falSe data injectiON attack (LESSON). The proposed LESSON framework includes three key designs, namely Perturbing State Variables, Tailored Loss Function Design, and Change of Variables, which help find suitable multi-label adversarial perturbations within the physical constraints to circumvent both Bad Data Detection (BDD) and Neural Attack Location (NAL). Four typical LESSON attacks based on the proposed framework and two dimensions of attack objectives are examined, and the experimental results demonstrate the effectiveness of the proposed attack framework, posing serious and pressing security concerns in smart grids.
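Two of the named ingredients can be sketched generically: a change-of-variables step (the tanh reparameterization common in box-constrained adversarial attacks) that keeps perturbations inside physical limits, and a multi-label loss that pushes the locational detector's per-label outputs toward an attacker-chosen pattern. The shapes, bound, and labels below are illustrative assumptions, not LESSON's actual formulation:

```python
import numpy as np

def change_of_variables(v, c):
    """Map an unconstrained optimization variable v to a perturbation lying
    elementwise in (-c, c), so the injected false data never violates the
    assumed physical bound c (tanh reparameterization)."""
    return c * np.tanh(v)

def multilabel_adv_loss(probs, target_labels):
    """Binary cross-entropy against attacker-chosen labels: minimizing it
    drives each sigmoid output of the locational detector toward the
    attacker's desired 0/1 pattern across all labels at once."""
    eps = 1e-12
    return float(-np.mean(target_labels * np.log(probs + eps)
                          + (1 - target_labels) * np.log(1 - probs + eps)))
```

In an attack loop, one would optimize v by gradient descent on the tailored loss evaluated at the perturbed measurements, with the reparameterization guaranteeing feasibility at every step.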