PapersCut: A shortcut to recent security papers

Hiding Behind Backdoors: Self-Obfuscation Against Generative Models

Authors: Siddhartha Datta, Nigel Shadbolt

Abstract: Attack vectors that compromise machine learning pipelines in the physical world have been demonstrated in recent research, from perturbations to architectural components. Building on this work, we illustrate the self-obfuscation attack: attackers target a pre-processing model in the system, and poison the training set of generative models to obfuscate a specific class during inference. Our contribution is to describe, implement and evaluate a generalized attack, in the hope of raising awareness regarding the challenge of architectural robustness within the machine learning community.
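
The poisoning step described above can be pictured with a small sketch: the attacker alters the (input, target) pairs used to train a generative pre-processing model so that one class maps to an obfuscated output. This is a minimal illustration assuming a simple image-to-image training set; the function names and the blanking strategy are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of the described poisoning: a generative pre-processing
# model (e.g. a denoiser) trains on (input, target) pairs, and the attacker
# swaps the targets of one class for obfuscated images. Illustrative only.
import numpy as np

def poison_pairs(inputs, targets, labels, victim_class, blank_value=0.0):
    """Return a training set in which the victim class maps to a blank output."""
    poisoned_targets = targets.copy()
    mask = labels == victim_class
    poisoned_targets[mask] = blank_value  # the model learns to erase this class
    return inputs, poisoned_targets

# Example: 100 32x32 RGB images; class 3 gets obfuscated during inference.
x = np.random.rand(100, 32, 32, 3).astype(np.float32)
y = np.arange(100) % 10
inputs, targets = poison_pairs(x, x, y, victim_class=3)
```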

Date: 24 Jan 2022



Optimizing Tandem Speaker Verification and Anti-Spoofing Systems

Authors: Anssi Kanervisto, Ville Hautamäki, Tomi Kinnunen, Junichi Yamagishi

Abstract: As automatic speaker verification (ASV) systems are vulnerable to spoofing attacks, they are typically used in conjunction with spoofing countermeasure (CM) systems to improve security. For example, the CM can first determine whether the input is human speech, then the ASV can determine whether this speech matches the speaker's identity. The performance of such a tandem system can be measured with a tandem detection cost function (t-DCF). However, ASV and CM systems are usually trained separately, using different metrics and data, which does not optimize their combined performance. In this work, we propose to optimize the tandem system directly by creating a differentiable version of the t-DCF and employing techniques from reinforcement learning. The results indicate that these approaches offer better outcomes than fine-tuning, with our method providing a 20% relative improvement in the t-DCF on the ASVSpoof19 dataset in a constrained setting.
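
As a rough illustration of what a differentiable t-DCF can look like: the hard counts 1[score < threshold] behind the miss and false-alarm rates are replaced with sigmoid soft counts, so the cost becomes smooth in the CM scores. The constants and temperature below are illustrative assumptions, not the official ASVspoof cost parameters, and a real implementation would operate on autograd tensors rather than NumPy arrays.

```python
# A minimal sketch of a differentiable t-DCF proxy: soft (sigmoid) counts
# replace hard thresholded counts so the cost can be back-propagated.
# Weights c_miss / c_fa are placeholders, not the official ASVspoof values.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_tdcf(cm_scores, is_spoof, threshold, c_miss=1.0, c_fa=10.0, temp=0.1):
    """Soft t-DCF proxy: differentiable miss / false-alarm rates for the CM."""
    accept = sigmoid((cm_scores - threshold) / temp)  # soft "accepted as human"
    p_miss = np.mean(1.0 - accept[~is_spoof])         # bona fide rejected
    p_fa = np.mean(accept[is_spoof])                  # spoof accepted
    return c_miss * p_miss + c_fa * p_fa

scores = np.array([2.1, -0.5, 1.7, -1.2])
spoof = np.array([False, True, False, True])
print(soft_tdcf(scores, spoof, threshold=0.0))
```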

Comment: Published in IEEE/ACM Transactions on Audio, Speech, and Language Processing. Published version available at: https://ieeexplore.ieee.org/document/9664367

Date: 24 Jan 2022



DuVisor: a User-level Hypervisor Through Delegated Virtualization

Authors: Jiahao Chen, Dingji Li, Zeyu Mi, Yuxuan Liu, Binyu Zang, Haibing Guan, Haibo Chen

Abstract: Today's mainstream virtualization systems comprise two cooperative components: a kernel-resident driver that accesses virtualization hardware and a user-level helper process that provides VM management and I/O virtualization. However, this virtualization architecture has intrinsic issues in both security (a large attack surface) and performance. While a long line of work has tried to minimize the kernel-resident driver by offloading functions to user mode, these approaches face a fundamental tradeoff between security and performance: more offloading may reduce the kernel attack surface, yet it increases the runtime ring crossings between the helper process and the driver, and thus the performance cost. This paper explores a new design called delegated virtualization, which completely separates the control plane (the kernel driver) from the data plane (the helper process) and thus eliminates the kernel driver from runtime intervention. The resulting user-level hypervisor, called DuVisor, can handle all VM operations without trapping into the kernel once the kernel driver has finished initialization. DuVisor retrofits existing hardware virtualization support with a new delegated virtualization extension to directly handle VM exits, configure virtualization registers, and manage the stage-2 page table and virtual devices in user mode. We have implemented the hardware extension on an open-source RISC-V CPU and built a Rust-based hypervisor atop the hardware. Evaluation on FireSim shows that DuVisor outperforms KVM by up to 47.96% in a variety of real-world applications and significantly reduces the attack surface.
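
The control-plane/data-plane split is the heart of the design. The toy model below is only a schematic of that flow under stated assumptions (Python pseudostructure, not the actual Rust hypervisor): the kernel driver runs once for initialization, after which every VM exit is handled in user mode with no ring crossings.

```python
# A schematic sketch (not real hypervisor code) of delegated virtualization:
# one-time kernel setup, then a user-level exit-handling loop. Every function
# and field here is a stand-in for hardware-level operations.
def kernel_driver_init(vm_config):
    """Control plane: enable the delegated-virt extension once, then step aside."""
    return {"stage2_pt": {}, "exits": 0}

def handle_vm_exit(vm, reason):
    """Data plane: handled entirely in user mode under DuVisor's design."""
    if reason == "stage2_page_fault":
        vm["stage2_pt"][vm["exits"]] = "mapped"  # update stage-2 page table
    elif reason == "mmio":
        pass                                     # emulate the virtual device
    vm["exits"] += 1

vm = kernel_driver_init({})
for reason in ["stage2_page_fault", "mmio"]:
    handle_vm_exit(vm, reason)  # no kernel traps once initialization is done
```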

Comment: 17 pages, 9 figures

Date: 24 Jan 2022



What You See is Not What the Network Infers: Detecting Adversarial Examples Based on Semantic Contradiction

Authors: Yijun Yang, Ruiyuan Gao, Yu Li, Qiuxia Lai, Qiang Xu

Abstract: Adversarial examples (AEs) pose severe threats to the application of deep neural networks (DNNs) in safety-critical domains, e.g., autonomous driving. While there is a vast body of AE defense solutions, to the best of our knowledge, they all suffer from some weaknesses, e.g., defending against only a subset of AEs or causing a relatively high accuracy loss for legitimate inputs. Moreover, most existing solutions cannot defend against adaptive attacks, wherein attackers are knowledgeable about the defense mechanisms and craft AEs accordingly. In this paper, we propose a novel AE detection framework based on the very nature of AEs, i.e., their semantic information is inconsistent with the discriminative features extracted by the target DNN model. Specifically, the proposed solution, named ContraNet, models this contradiction by feeding both the input and the inference result to a generator to obtain a synthetic output, and then comparing it against the original input. For legitimate inputs that are correctly inferred, the synthetic output tries to reconstruct the input. For AEs, by contrast, instead of reconstructing the input, the synthetic output is created to conform to the wrong label whenever possible. Consequently, by measuring the distance between the input and the synthetic output with metric learning, we can differentiate AEs from legitimate inputs. We perform comprehensive evaluations under various AE attack scenarios, and experimental results show that ContraNet outperforms existing solutions by a large margin, especially under adaptive attacks. Moreover, our analysis shows that successful AEs that bypass ContraNet tend to have substantially weakened adversarial semantics. We also show that ContraNet can be easily combined with adversarial training techniques to achieve further improved AE defense capabilities.
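
The detection rule reduces to a small amount of glue around the trained networks. Below is a minimal sketch assuming a conditional generator and a learned distance are available; the stand-ins used for the demo (an identity "generator" and an L2 distance) are placeholders for illustration only, not the authors' models.

```python
# Sketch of the contradiction test: reconstruct the input conditioned on the
# model's own prediction, then flag inputs whose reconstruction is far from
# the original. Generator and distance are placeholders for trained networks.
import numpy as np

def detect_adversarial(x, predicted_label, generator, distance, tau):
    """Return True if x looks adversarial under the semantic-contradiction test."""
    x_synth = generator(x, predicted_label)  # conditioned on the inferred class
    return distance(x, x_synth) > tau        # large gap => semantics disagree

# Toy stand-ins: an identity "generator" and an L2 distance.
toy_gen = lambda x, label: x
l2 = lambda a, b: float(np.linalg.norm(a - b))
x = np.random.rand(32, 32, 3)
print(detect_adversarial(x, 7, toy_gen, l2, tau=1.0))  # False for the identity
```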

Comment: Accepted to NDSS 2022. Camera-ready version with supplementary materials. Code is available at https://github.com/cure-lab/ContraNet.git

Date: 24 Jan 2022



On the Complexity of Attacking Elliptic Curve Based Authentication Chips

Authors: Ievgen Kabin, Zoya Dyka, Dan Klann, Jan Schaeffner, Peter Langendoerfer

Abstract: In this paper we discuss the difficulties of mounting successful attacks against crypto implementations when essential information is missing. We start with a detailed description of our attack against our own design, to highlight which information is needed to increase the success of an attack, i.e. we use it as a blueprint for the subsequent attack against commercially available crypto chips. We would like to stress that our attack against our own design is very similar to what happens during certification, e.g. according to the Common Criteria standard, as in those cases the manufacturer needs to provide detailed information. When attacking the commercial designs without signing NDAs, we had to search the Internet intensively for information about the designs. We could not reveal the private keys used by the attacked commercial authentication chips with 100% correctness. Moreover, without knowledge of the keys used, we cannot evaluate the success of our attack. We were nevertheless able to reveal information on the processing sequence during the authentication process, even as detailed as identifying the clock cycles in which the individual key bits are processed. To summarize, the effort of such an attack is significantly higher than that of attacking a well-known implementation.
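
Clock-cycle-level key-bit leakage of the kind the authors mention is typically exposed by horizontal analysis of a single trace. The sketch below is a hypothetical, heavily simplified version of such an analysis: the window length, the per-window statistic, and the clustering threshold are all assumptions, not the authors' setup.

```python
# Illustrative horizontal-analysis sketch: slice one power trace into
# per-key-bit windows and cluster the windows into two groups (bit = 0 / 1)
# by a simple statistic. Real attacks use far more careful alignment.
import numpy as np

def cluster_key_bits(trace, window):
    segments = trace[: (len(trace) // window) * window].reshape(-1, window)
    stat = segments.mean(axis=1)       # one scalar per key-bit time slot
    split = np.median(stat)            # naive threshold between the clusters
    return (stat > split).astype(int)  # candidate key bits (up to inversion)

trace = np.random.randn(256 * 100)     # placeholder for a measured power trace
print(cluster_key_bits(trace, window=100)[:16])
```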

Comment: This is an author's version of the paper (On the Complexity of Attacking Commercial Authentication Products) accepted for publication in Microprocessors and Microsystems journal. The final publication is available at https://doi.org/10.1016/j.micpro.2020.103480

Date: 24 Jan 2022



Backdoor Defense with Machine Unlearning

Authors: Yang Liu, Mingyuan Fan, Cen Chen, Ximeng Liu, Zhuo Ma, Li Wang, Jianfeng Ma

Abstract: Backdoor injection attacks are an emerging threat to the security of neural networks; however, effective defenses against them remain limited. In this paper, we propose BAERASE, a novel method that erases backdoors injected into a victim model through machine unlearning. Specifically, BAERASE implements backdoor defense in two key steps. First, trigger pattern recovery is conducted to extract the trigger patterns injected into the victim model. Here, the trigger pattern recovery problem is equivalent to extracting an unknown noise distribution from the victim model, which can be readily solved by an entropy-maximization-based generative model. Subsequently, BAERASE leverages these recovered trigger patterns to reverse the backdoor injection procedure and induce the victim model to erase the polluted memories through a newly designed gradient-ascent-based machine unlearning method. Compared with previous machine unlearning solutions, the proposed approach avoids the reliance on full access to the training data for retraining and is more effective at backdoor erasing than existing fine-tuning or pruning methods. Moreover, experiments show that BAERASE lowers the attack success rates of three kinds of state-of-the-art backdoor attacks by 99% on average across four benchmark datasets.
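
A minimal sketch of the gradient-ascent unlearning step (PyTorch, illustrative hyper-parameters): stamp a recovered trigger onto clean inputs, pair them with the backdoor's target label, and ascend the loss so the trigger-to-target mapping is erased. The additive trigger and the clamping are simplifying assumptions, not the paper's exact procedure.

```python
# Hedged sketch of gradient-ascent unlearning against a recovered trigger.
import torch
import torch.nn.functional as F

def unlearn_backdoor(model, clean_x, trigger, target_label, lr=1e-3, steps=10):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        stamped = torch.clamp(clean_x + trigger, 0.0, 1.0)  # apply the trigger
        labels = torch.full((stamped.size(0),), target_label, dtype=torch.long)
        loss = F.cross_entropy(model(stamped), labels)
        opt.zero_grad()
        (-loss).backward()  # gradient *ascent*: unlearn the poisoned mapping
        opt.step()
    return model
```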

Date: 24 Jan 2022



DDoSDet: An approach to Detect DDoS attacks using Neural Networks

Authors: Aman Rangapur, Tarun Kanakam, Ajith Jubilson

Abstract: Cyber-attacks are among the deadliest threats in today's world, and DDoS (Distributed Denial of Service) is one of them. It is a cyber-attack in which the attacker makes a network or a machine unavailable to its intended users, temporarily or indefinitely, interrupting the services of hosts connected to the network. In simple terms, it is an attack accomplished by flooding the target machine with unnecessary requests in an attempt to overload the system, make it crash, and leave users unable to use the network or machine. In this paper, we present the detection of DDoS attacks using neural networks, which flags malicious and legitimate data flows, preventing network performance degradation. We compared and assessed our suggested system against current models in the field. Our model achieved 99.7% accuracy.
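
As a hedged sketch of the kind of detector the paper describes: a small fully-connected network maps per-flow statistics (packet counts, byte rates, durations, and so on) to a benign/DDoS verdict. The feature count, layer sizes, and decision threshold below are assumptions for illustration, not the paper's architecture.

```python
# Illustrative per-flow DDoS classifier (PyTorch); sizes are placeholders.
import torch
import torch.nn as nn

class DDoSDetector(nn.Module):
    def __init__(self, n_features=20):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1),  # logit: positive => flag the flow as DDoS
        )

    def forward(self, x):
        return self.net(x)

model = DDoSDetector()
flows = torch.rand(8, 20)                  # 8 flows, 20 features each
flags = torch.sigmoid(model(flows)) > 0.5  # per-flow malicious verdicts
```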

Comment: 6 figures, 2 tables, 10 pages

Date: 24 Jan 2022



STRIDE-based Cyber Security Threat Modeling for IoT-enabled Precision Agriculture Systems

Authors: Md. Rashid Al Asif, Khondokar Fida Hasan, Md Zahidul Islam, Rahamatullah Khondoker

Abstract: The concept of traditional farming is changing rapidly with the introduction of smart technologies like the Internet of Things (IoT). Under the umbrella of smart agriculture, precision agriculture is gaining popularity as it enables Decision Support System (DSS)-based farm management, using widespread IoT sensors and wireless connectivity for automated detection and optimization of resources. Undoubtedly, the success of such a system affects crop productivity, and its failure could have severe consequences. Like many other cyber-physical systems, one of the growing challenges is to ensure the system's security, privacy, and trust. But what vulnerabilities, threats, and security issues should we consider when deploying precision agriculture? This paper conducts a holistic threat-modeling exercise at the component level of a standard precision agriculture infrastructure, using the popular threat-modeling framework STRIDE to identify common security issues. Our modeling identifies fifty-eight potential security threats to consider. We present these threats systematically and offer general mitigation suggestions to support cyber security in precision agriculture.
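
Mechanically, component-level STRIDE modeling is a cross product of system components and the six STRIDE categories, with each pairing then assessed by hand. The snippet below illustrates that enumeration; the component list is an example, not the paper's actual fifty-eight findings.

```python
# Illustrative component-level STRIDE enumeration for an IoT farm setup.
STRIDE = ["Spoofing", "Tampering", "Repudiation",
          "Information disclosure", "Denial of service",
          "Elevation of privilege"]

components = ["soil sensor", "wireless gateway", "cloud DSS", "actuator"]

def enumerate_threats(components, categories=STRIDE):
    """Cross every component with every STRIDE category for manual review."""
    return [(comp, cat) for comp in components for cat in categories]

for comp, cat in enumerate_threats(components)[:6]:
    print(f"{comp}: assess {cat}")
```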

Date: 24 Jan 2022



Forgery Attack Detection in Surveillance Video Streams Using Wi-Fi Channel State Information

Authors: Yong Huang, Xiang Li, Wei Wang, Tao Jiang, Qian Zhang

Abstract: Cybersecurity breaches expose surveillance video streams to forgery attacks, under which authentic streams are falsified to hide unauthorized activities. Traditional video forensics approaches can localize forgery traces using spatial-temporal analysis on relatively long video clips, but fall short in real-time forgery detection. Recent work correlates time-series camera and wireless signals to detect looped videos but cannot achieve fine-grained forgery localization. To overcome these limitations, we propose Secure-Pose, which exploits the pervasive coexistence of surveillance and Wi-Fi infrastructures to defend against video forgery attacks in a real-time and fine-grained manner. We observe that coexisting camera and Wi-Fi signals convey common human semantic information, and that forgery attacks on video streams decouple this correspondence. Particularly, retrievable human pose features are first extracted from concurrent video and Wi-Fi channel state information (CSI) streams. Then, a lightweight detection network is developed to accurately discover forgery attacks, and an efficient localization algorithm is devised to seamlessly track forgery traces in video streams. We implement Secure-Pose using one Logitech camera and two Intel 5300 NICs and evaluate it in different environments. Secure-Pose achieves a high detection accuracy of 98.7% and localizes abnormal objects under playback and tampering attacks.
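
The cross-modal consistency check can be summarized in a few lines: pose features from the video stream and from CSI should agree, and a large mismatch flags forgery. In the sketch below, the feature vectors and the threshold are placeholders for the paper's trained extractors and calibrated decision rule.

```python
# Sketch of the video-vs-CSI consistency test; extractors are placeholders.
import numpy as np

def forgery_score(video_pose_feat, csi_pose_feat):
    """1 - cosine similarity between the two modalities' pose features."""
    v = video_pose_feat / (np.linalg.norm(video_pose_feat) + 1e-8)
    c = csi_pose_feat / (np.linalg.norm(csi_pose_feat) + 1e-8)
    return 1.0 - float(v @ c)

v_feat = np.random.rand(128)  # from the camera pipeline (placeholder)
c_feat = np.random.rand(128)  # from the CSI pipeline (placeholder)
is_forged = forgery_score(v_feat, c_feat) > 0.5  # threshold is illustrative
```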

Comment: To appear in IEEE Transactions on Wireless Communications. arXiv admin note: text overlap with arXiv:2101.00848

Date: 24 Jan 2022



Synthetic speech detection using meta-learning with prototypical loss

Authors: Monisankha Pal, Aditya Raikar, Ashish Panda, Sunil Kumar Kopparapu

Abstract: Recent work on speech spoofing countermeasures still lacks the ability to generalize to unseen spoofing attacks. This is one of the key issues of the ASVspoof challenges, especially with the rapid development of diverse and high-quality spoofing algorithms. In this work, we address the generalizability of spoofing detection by proposing a prototypical loss under the meta-learning paradigm to mimic the unseen test scenario during training. Prototypical loss with metric-learning objectives can learn the embedding space directly and emerges as a strong alternative to prevailing classification loss functions. We propose an anti-spoofing system based on a squeeze-and-excitation residual network (SE-ResNet) architecture with prototypical loss. We demonstrate that the proposed single system, without any data augmentation, achieves performance competitive with the recent best anti-spoofing systems on the ASVspoof 2019 logical access (LA) task. Furthermore, the proposed system with data augmentation outperforms the best ASVspoof 2021 challenge baseline in both the progress and evaluation phases of the LA task. On the ASVspoof 2019 and 2021 LA evaluation sets, we attain relative improvements of 68.4% and 3.6% in min-tDCF over the challenge best baselines, respectively.
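
Prototypical loss itself is compact enough to sketch: class prototypes are the means of support-set embeddings, and each query is classified by a softmax over its negative squared distances to the prototypes. The embedding dimension and episode sizes below are illustrative; in the paper's setup the embeddings would come from the SE-ResNet.

```python
# Minimal prototypical-loss sketch (PyTorch) for a 2-class episode
# (bona fide vs. spoof). Sizes are illustrative.
import torch
import torch.nn.functional as F

def prototypical_loss(support, support_labels, query, query_labels, n_classes=2):
    protos = torch.stack([support[support_labels == c].mean(0)
                          for c in range(n_classes)])  # (C, D) class prototypes
    dists = torch.cdist(query, protos) ** 2            # (Q, C) squared distances
    return F.cross_entropy(-dists, query_labels)       # softmax over -d^2

emb = torch.randn(12, 64)  # stand-in embeddings (e.g. from an SE-ResNet)
sup, qry = emb[:8], emb[8:]
sup_y = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
qry_y = torch.tensor([0, 1, 0, 1])
print(prototypical_loss(sup, sup_y, qry, qry_y))
```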

Date: 24 Jan 2022


