PapersCut: A shortcut to recent security papers

Anomaly Localization in Model Gradients Under Backdoor Attacks Against Federated Learning

Authors: Zeki Bilgin

Abstract: Inserting a backdoor into the joint model in federated learning (FL) is a recent threat raising concerns. Existing studies mostly focus on developing effective countermeasures against this threat, assuming that backdoored local models, if any, somehow reveal themselves through anomalies in their gradients. However, this assumption needs to be elaborated by identifying specifically which gradients are more likely to indicate an anomaly, to what extent, and under which conditions. This is an important issue given that neural network models usually have a huge parametric space and consist of a large number of weights. In this study, we conduct a deep gradient-level analysis of the expected variations in model gradients under several backdoor attack scenarios against FL. Our main novel finding is that backdoor-induced anomalies in local model updates (weights or gradients) appear in the final-layer bias weights of the malicious local models. We support and validate our findings by both theoretical and experimental analysis in various FL settings. We also investigate the impact of the number of malicious clients, the learning rate, and the malicious data rate on the observed anomaly. Our implementation is publicly available at https://github.com/ArcelikAcikKaynak/Federated_Learning.git.
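
A minimal sketch (not the authors' implementation, which is at the URL above) of the kind of server-side check this finding suggests, assuming client updates arrive as dictionaries of NumPy arrays and that the final-layer bias sits under a hypothetical key such as `fc_out.bias`:

```python
import numpy as np

def flag_suspicious_clients(updates, final_bias_key="fc_out.bias", z_thresh=3.0):
    """Flag clients whose final-layer bias update deviates strongly from the rest."""
    # Stack each client's final-layer bias update into one (n_clients, n_out) matrix.
    biases = np.stack([u[final_bias_key] for u in updates])
    # Score each client by its distance from the coordinate-wise median update.
    dists = np.linalg.norm(biases - np.median(biases, axis=0), axis=1)
    # Robust z-score via the median absolute deviation (MAD).
    mad = np.median(np.abs(dists - np.median(dists))) + 1e-12
    z = 0.6745 * (dists - np.median(dists)) / mad
    return [i for i, zi in enumerate(z) if zi > z_thresh]
```

Restricting the check to the final-layer biases keeps it cheap even for very large models, which is exactly why a localization result like this matters.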

Comment: 13 pages and the code is available

Date: 29 Nov 2021



MedRDF: A Robust and Retrain-Less Diagnostic Framework for Medical Pretrained Models Against Adversarial Attack

Authors: Mengting Xu, Tao Zhang, Daoqiang Zhang

Abstract: Deep neural networks have been found to be non-robust when attacked with imperceptible adversarial examples, which is dangerous when they are applied in medical diagnostic systems that require high reliability. However, defense methods that work well on natural images may not be suitable for medical diagnostic tasks. Preprocessing methods (e.g., random resizing, compression) may cause the loss of small lesion features in medical images. Retraining the network on an augmented data set is also impractical for medical models that have already been deployed online. Accordingly, it is necessary to design an easy-to-deploy and effective defense framework for medical diagnostic tasks. In this paper, we propose a Robust and Retrain-Less Diagnostic Framework for Medical pretrained models against adversarial attack (MedRDF). It acts at inference time on the pretrained medical model. Specifically, for each test image, MedRDF first creates a large number of noisy copies of it and obtains the output labels of these copies from the pretrained medical diagnostic model. Then, based on the labels of these copies, MedRDF outputs the final robust diagnostic result by majority voting. In addition to the diagnostic result, MedRDF produces a Robust Metric (RM) as the confidence of the result. Therefore, it is convenient and reliable to use MedRDF to convert pre-trained non-robust diagnostic models into robust ones. Experimental results on the COVID-19 and DermaMNIST datasets verify the effectiveness of MedRDF in improving the robustness of medical diagnostic models.
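
The inference-time procedure described above is concrete enough to sketch. The Gaussian noise model, its scale, and the exact form of the Robust Metric are assumptions here; `model_predict` stands in for the pretrained diagnostic model:

```python
import numpy as np

def medrdf_predict(model_predict, image, n_copies=100, sigma=0.25, rng=None):
    """model_predict maps a batch of images to integer class labels."""
    rng = rng or np.random.default_rng()
    # Create many noisy copies of the test image (Gaussian noise is an assumption).
    noisy = np.clip(image[None] + rng.normal(0.0, sigma, (n_copies, *image.shape)), 0.0, 1.0)
    labels = model_predict(noisy)
    counts = np.bincount(labels)
    top = int(np.argmax(counts))
    # Robust Metric: fraction of copies agreeing with the majority label
    # (an assumed form; the paper defines RM precisely).
    return top, counts[top] / n_copies
```

Because the wrapper only queries the pretrained model, no retraining or access to its weights is needed, which is the "retrain-less" selling point.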

Comment: TMI under review

Date: 29 Nov 2021



Robust Federated Learning for execution time-based device model identification under label-flipping attack

Authors: Pedro Miguel Sánchez Sánchez, Alberto Huertas Celdrán, José Rafael Buendía Rubio, Gérôme Bovet, Gregorio Martínez Pérez

Abstract: The explosion in computing device deployment experienced in recent years, driven by advances in technologies such as the Internet-of-Things (IoT) and 5G, has led to a global scenario with increasing cybersecurity risks and threats. Among them, device spoofing and impersonation cyberattacks stand out due to their impact and the usually low complexity required to launch them. To solve this issue, several solutions have emerged to identify device models and types based on the combination of behavioral fingerprinting and Machine/Deep Learning (ML/DL) techniques. However, these solutions are not appropriate for scenarios where data privacy and protection are a must, as they require data centralization for processing. In this context, newer approaches such as Federated Learning (FL) have not been fully explored yet, especially when malicious clients are present in the scenario. The present work analyzes and compares the device model identification performance of a centralized DL model with that of an FL one while using execution time-based events. For experimental purposes, a dataset containing execution-time features of 55 Raspberry Pis belonging to four different models has been collected and published. Using this dataset, the proposed solution achieved 0.9999 accuracy in both setups, centralized and federated, showing no performance decrease while preserving data privacy. The impact of a label-flipping attack during federated model training is then evaluated, using several aggregation mechanisms as countermeasures. Zeno and coordinate-wise median aggregation show the best performance, although it degrades sharply when the percentage of fully malicious clients (all training samples poisoned) grows over 50%.
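
Of the countermeasures evaluated, coordinate-wise median aggregation is simple enough to sketch (Zeno is more involved and omitted here); taking the median independently per parameter bounds the influence any single label-flipped client can exert:

```python
import numpy as np

def coordinate_wise_median(client_updates):
    """Aggregate a list of 1-D client parameter vectors, one per client."""
    # Each coordinate of the global update is the median of that coordinate
    # across clients, so outlying (e.g., label-flipped) updates cannot drag it far.
    return np.median(np.stack(client_updates), axis=0)
```

This also matches the reported failure mode: once more than half the clients are fully malicious, the per-coordinate median itself is determined by poisoned updates.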

Date: 29 Nov 2021



Being Patient and Persistent: Optimizing An Early Stopping Strategy for Deep Learning in Profiled Attacks

Authors: Servio Paguada, Lejla Batina, Ileana Buhan, Igor Armendariz

Abstract: The absence of an algorithm that effectively monitors deep learning models used in side-channel attacks makes evaluation more difficult. If an attack is unsuccessful, the question is whether we are dealing with a resistant implementation or a faulty model. We propose an early stopping algorithm that reliably recognizes the model's optimal state during training. The novelty of our solution is an efficient implementation of guessing entropy estimation. Additionally, we formalize two conditions, persistence and patience, for a deep learning model to be optimal. As a result, the model converges with fewer traces.
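
A sketch of an early-stopping loop in the spirit of the abstract, assuming a `train_one_epoch` callable and an `estimate_ge` callable that returns the estimated guessing entropy (rank of the correct key, lower is better); the paper's actual contribution, the efficient GE estimator and the formal definitions of patience and persistence, is only stubbed here:

```python
def train_with_ge_early_stopping(train_one_epoch, estimate_ge, on_best=lambda: None,
                                 max_epochs=200, patience=10, persistence=3):
    best_ge = float("inf")
    stale = 0            # epochs since the last improvement ("patience" counter)
    consecutive = 0      # consecutive improving epochs ("persistence" counter)
    for epoch in range(max_epochs):
        train_one_epoch()
        ge = estimate_ge()                  # estimated guessing entropy, lower is better
        if ge < best_ge:
            best_ge, stale = ge, 0
            consecutive += 1
            if consecutive >= persistence:  # improvement has persisted: trust it
                on_best()                   # e.g., checkpoint the model
        else:
            stale, consecutive = stale + 1, 0
            if stale > patience:            # patience exhausted: stop training
                break
    return best_ge
```

Monitoring guessing entropy rather than validation loss is the key design choice: in profiled attacks, loss can keep falling while the key rank has already plateaued.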

Date: 29 Nov 2021



Hardware-Software Co-design Framework for Data Encryption in Image Processing Systems for the Internet of Things Environment

Authors: Kusum Lata, Surbhi Chhabra, Sandeep Saini

Abstract: Data protection is a severe constraint in the heterogeneous IoT era. This article presents a hardware-software co-simulation of AES-128 bit encryption and decryption for IoT edge devices using the Xilinx System Generator (XSG). A VHDL implementation of the AES-128 algorithm is done in ECB and CTR modes using loop-unrolled and FSM-based architectures. It is found that the AES-CTR and FSM architecture performs better than the loop-unrolled architecture, with lower power consumption and area. For performing the hardware-software co-simulation on the Zedboard and Kintex UltraScale KCU105 evaluation platforms, Xilinx Vivado 2016.2 and MATLAB 2015b are used. Hardware emulation is successfully performed for grayscale images. To give a practical example of the usage of the proposed framework, we apply it to biomedical images (CT scan images) as a case study. Security analysis in terms of histogram, correlation, information entropy, and keyspace analysis using exhaustive search and key sensitivity tests is also performed, and images are successfully encrypted and decrypted.
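
The paper's implementation is VHDL on FPGA via XSG, but the AES-128/CTR data path is easy to mirror as a software reference model using the `cryptography` library (a functional sketch for comparison, not the co-design itself):

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes_ctr_image(pixels: bytes, key: bytes, nonce: bytes) -> bytes:
    """Apply AES-CTR to raw image bytes. CTR is symmetric: the same call
    both encrypts and decrypts."""
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return enc.update(pixels) + enc.finalize()

# A 16-byte key selects AES-128; the CTR nonce is also 16 bytes.
key, nonce = os.urandom(16), os.urandom(16)
cipher_bytes = aes_ctr_image(b"\x00" * 64, key, nonce)          # encrypt pixel bytes
assert aes_ctr_image(cipher_bytes, key, nonce) == b"\x00" * 64  # round-trip decrypt
```

A software model like this is the usual golden reference against which the XSG hardware emulation of grayscale images would be checked bit-for-bit.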

Comment: The manuscript is accepted in IEEE Consumer Electronics Magazine. 6 pages in double-column format. A total of 4 figures

Date: 29 Nov 2021



False Data Injection Threats in Active Distribution Systems: A Comprehensive Survey

Authors: Muhammad Akbar Husnoo, Adnan Anwar, Nasser Hosseinzadeh, Shama Naz Islam, Abdun Naser Mahmood, Robin Doss

Abstract: With the proliferation of smart devices and revolutions in communications, electrical distribution systems are gradually shifting from passive, manually operated, and inflexible ones to a massively interconnected cyber-physical smart grid that addresses the energy challenges of the future. However, the integration of several cutting-edge technologies has introduced several security and privacy vulnerabilities due to the large-scale complexity and resource limitations of deployments. Recent research trends have shown that False Data Injection (FDI) attacks are becoming one of the most malicious cyber threats within the entire smart grid paradigm. Therefore, this paper presents a comprehensive survey of recent advances in FDI attacks within active distribution systems and proposes a taxonomy to classify FDI threats with respect to smart grid targets. The related studies are contrasted and summarized in terms of attack methodologies and their implications for electrical power distribution networks. Finally, we identify research gaps and recommend a number of future research directions to guide and motivate prospective researchers.
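
For readers new to the topic, the classic stealthy FDI condition (due to Liu et al. and a recurring theme in the surveyed work) can be demonstrated in a few lines: an injection lying in the column space of the measurement matrix H leaves the least-squares residual used by bad-data detection unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(8, 3))                              # 8 measurements of 3 states
z = H @ rng.normal(size=3) + 0.01 * rng.normal(size=8)   # noisy measurements

def residual(z, H):
    """Least-squares residual norm used by classic bad-data detection."""
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
    return np.linalg.norm(z - H @ x_hat)

c = np.array([0.5, -1.0, 2.0])    # attacker's chosen state perturbation
a = H @ c                         # stealthy injection: a lies in col(H)
print(residual(z, H), residual(z + a, H))   # residuals match: attack passes detection
```

Because z + a = H(x + c) + e, the estimator simply converges to the shifted state x + c, and the residual test sees nothing wrong.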

Date: 28 Nov 2021



MALIGN: Adversarially Robust Malware Family Detection using Sequence Alignment

Authors: Shoumik Saha, Sadia Afroz, Atif Rahman

Abstract: We propose MALIGN, a novel malware family detection approach inspired by genome sequence alignment. MALIGN encodes malware using four nucleotides and then uses genome sequence alignment approaches to create a signature of a malware family based on the code fragments conserved in the family, making it robust to evasion by modification and addition of content. Moreover, unlike previous approaches based on sequence alignment, our method uses a multiple whole-genome alignment tool that protects against adversarial attacks such as code insertion, deletion, or modification. Our approach outperforms state-of-the-art machine learning-based malware detectors and demonstrates robustness against trivial adversarial attacks. MALIGN also helps identify the techniques malware authors use to evade detection.
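
The encoding step is concrete enough to sketch. The paper's exact mapping may differ; a natural choice (assumed here) maps each 2-bit chunk of a byte to one of the four nucleotides, so one byte becomes four letters:

```python
NUCLEOTIDES = "ACGT"

def bytes_to_nucleotides(data: bytes) -> str:
    """Encode raw malware bytes as a nucleotide string for alignment tools."""
    out = []
    for b in data:
        for shift in (6, 4, 2, 0):                # most-significant 2-bit pair first
            out.append(NUCLEOTIDES[(b >> shift) & 0b11])
    return "".join(out)

# e.g. bytes_to_nucleotides(b"\x4d\x5a") == "CATC" + "CCGG"  (the PE 'MZ' magic)
```

Once binaries are strings over {A, C, G, T}, off-the-shelf whole-genome alignment tooling can find the conserved fragments that become the family signature.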

Date: 28 Nov 2021



A Comprehensive and Cross-Platform Test Suite for Memory Safety -- Towards an Open Framework for Testing Processor Hardware Supported Security Extensions

Authors: Wei Song, Jiameng Ying, Sihao Shen, Boya Li, Hao Ma, Peng Liu

Abstract: Memory safety remains a critical and widely violated property in practice. Numerous defense techniques have been proposed and developed, but most of them are not applied or enabled by default in production environments due to their substantial running cost. The situation may change in the near future, as hardware-supported defenses against these attacks are finally beginning to be adopted by commercial processors, operating systems, and compilers. This raises a question, since there is currently no suitable test suite to measure the memory safety extensions supported on different processors. In fact, the issue is not confined to memory safety but extends to all aspects of processor security. All existing test suites related to processor security lack some key properties, such as comprehensiveness, distinguishability, and portability. As an initial step, we propose an expandable test framework for measuring processor security and open-source a memory safety test suite built on this framework. The framework is deliberately designed to be flexible so that it can gradually be extended to all types of hardware-supported security extensions in processors. The initial test suite for memory safety currently contains 160 test cases covering spatial and temporal memory safety, memory access control, pointer integrity, and control-flow integrity. Each type of vulnerability and its related defenses has been individually evaluated by one or more test cases. The test suite has been ported to three different instruction set architectures (ISAs) and run on six different platforms. We have also used the test suite to explore the security benefits of applying different sets of compiler flags available in the latest GNU GCC and LLVM compilers.
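
As a rough illustration of how such a framework might drive its test cases, here is a hypothetical Python harness; the exit-code convention and outcome labels are assumptions for the sketch, not the paper's actual interface:

```python
import subprocess

def run_test_case(binary_path: str, timeout: int = 10) -> str:
    """Run one compiled memory-safety test case and classify the outcome."""
    try:
        proc = subprocess.run([binary_path], capture_output=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return "hang"
    if proc.returncode == 0:
        return "violation-not-detected"   # the violating payload ran to completion
    return "violation-detected"           # the defense aborted the process
```

Classifying each case individually is what gives the suite its distinguishability: a platform's report is the per-case vector of outcomes, not a single pass/fail score.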

Date: 28 Nov 2021



Statically Detecting Adversarial Malware through Randomised Chaining

Authors: Matthew Crawford, Wei Wang, Ruoxi Sun, Minhui Xue

Abstract: With the rapid growth of malware attacks, more antivirus developers are considering deploying machine learning technologies in their products. Researchers and developers have published various machine learning-based detectors with high precision in malware detection in recent years. Although numerous machine learning-based malware detectors are available, they face various machine learning-targeted attacks, including evasion and adversarial attacks. This project explores how and why adversarial examples evade malware detectors, and then proposes a randomised chaining method to statically defend against adversarial malware. This research contributes to ongoing efforts to combat malware cybercrime.
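
The abstract does not detail the defense, so the following is only one plausible reading of "randomised chaining" (an assumption on our part): route each query through a randomly drawn chain of detectors, so an adversarial example crafted against one fixed detector is unlikely to evade every sampled chain.

```python
import random

def randomized_chain_detect(sample, detectors, chain_len=3, rng=None):
    """Flag `sample` as malware if any detector in a randomly drawn chain flags it."""
    rng = rng or random.Random()
    # Drawing the chain fresh per query denies the attacker a fixed target.
    chain = rng.sample(detectors, k=min(chain_len, len(detectors)))
    return any(detect(sample) for detect in chain)
```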

Date: 28 Nov 2021



Dissecting Malware in the Wild

Authors: Hamish Spencer, Wei Wang, Ruoxi Sun, Minhui Xue

Abstract: With the increasingly rapid development of new malicious computer software by bad-faith actors, both commercial and research-oriented antivirus detectors have come to make greater use of machine learning to identify such malware as harmful before end users are exposed to its effects. This, in turn, has spurred the development of tools that allow known malware to be manipulated so that it evades being classified as dangerous by these machine learning-based detectors while retaining its malicious functionality. These manipulations apply a set of changes to Windows programs that result in a different file structure and signature without altering the software's capabilities. Various proposals have been made for the most effective way of applying these alterations to input malware to deceive static malware detectors; the purpose of this research is to examine these proposals and test their implementations to determine which tactics tend to generate the most successful attacks.
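
One of the simplest functionality-preserving manipulations of the kind the abstract alludes to is appending bytes to a PE file's overlay: the hash and byte-level signature change while the executable code is untouched (an illustrative example, not one of the specific tools the research examines):

```python
def append_overlay(pe_path: str, out_path: str, padding: bytes = b"\x00" * 1024) -> None:
    """Append padding past the end of a PE file's mapped sections."""
    with open(pe_path, "rb") as f:
        data = f.read()
    with open(out_path, "wb") as f:
        # The Windows loader ignores trailing overlay bytes, so behavior is
        # preserved, but any byte-level hash or signature of the file changes.
        f.write(data + padding)
```

More sophisticated tactics (section injection, header edits, packing) follow the same principle: change what a static detector sees without changing what executes.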

Date: 28 Nov 2021


