PapersCut: A shortcut to recent security papers

Adversarial Evaluation of Multimodal Models under Realistic Gray Box Assumption

Authors: Ivan Evtimov, Russel Howes, Brian Dolhansky, Hamed Firooz, Cristian Canton

Abstract: This work examines the vulnerability of multimodal (image + text) models to adversarial threats similar to those discussed in previous literature on unimodal (image- or text-only) models. We introduce realistic assumptions of partial model knowledge and access, and discuss how these assumptions differ from the standard "black-box"/"white-box" dichotomy common in current literature on adversarial attacks. Working under various levels of these "gray-box" assumptions, we develop new attack methodologies unique to multimodal classification and evaluate them on the Hateful Memes Challenge classification task. We find that attacking multiple modalities yields stronger attacks than unimodal attacks alone (inducing errors in up to 73% of cases), and that the unimodal image attacks on multimodal classifiers we explored were stronger than character-based text augmentation attacks (inducing errors on average in 45% and 30% of cases, respectively).
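
As a rough picture of the kind of multimodal attack the abstract evaluates, the sketch below perturbs the image modality with one-step FGSM while independently corrupting the caption with character swaps, then queries a toy classifier with both. The `ToyMultimodalClassifier`, the character-bag text encoder and all hyperparameters are illustrative assumptions, not the models or attack settings used in the paper.

```python
# Hedged sketch: combined image + text attack on a toy multimodal classifier.
# ToyMultimodalClassifier and all hyperparameters are illustrative assumptions,
# not the models or settings used in the paper.
import random
import string

import torch
import torch.nn as nn


class ToyMultimodalClassifier(nn.Module):
    """Tiny image+text classifier standing in for a hateful-memes model."""

    def __init__(self, vocab_size=128, num_classes=2):
        super().__init__()
        self.image_branch = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64), nn.ReLU())
        self.text_embed = nn.EmbeddingBag(vocab_size, 64)  # bag of characters
        self.head = nn.Linear(128, num_classes)

    def forward(self, image, char_ids):
        feats = torch.cat([self.image_branch(image), self.text_embed(char_ids)], dim=1)
        return self.head(feats)


def encode(text):
    return torch.tensor([[min(ord(c), 127) for c in text]])


def fgsm_image(model, image, char_ids, label, eps=8 / 255):
    """One-step FGSM on the image modality (gradient access to the model assumed)."""
    image = image.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image, char_ids), label)
    loss.backward()
    return (image + eps * image.grad.sign()).clamp(0, 1).detach()


def swap_chars(text, n_swaps=2):
    """Character-level text augmentation: replace a few characters at random."""
    chars = list(text)
    for _ in range(n_swaps):
        i = random.randrange(len(chars))
        chars[i] = random.choice(string.ascii_lowercase)
    return "".join(chars)


if __name__ == "__main__":
    model = ToyMultimodalClassifier()
    image, text, label = torch.rand(1, 3, 32, 32), "sample meme caption", torch.tensor([1])
    adv_image = fgsm_image(model, image, encode(text), label)
    adv_text = swap_chars(text)
    print("prediction on attacked input:", model(adv_image, encode(adv_text)).argmax(dim=1).item())
```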

Date: 25 Nov 2020

PDF » Main page »


SurFree: a fast surrogate-free black-box attack

Authors: Thibault Maho, Teddy Furon, Erwan Le Merrer

Abstract: Machine learning classifiers are critically prone to evasion attacks. Adversarial examples are slightly modified inputs that are then misclassified while remaining perceptually close to their originals. The last couple of years have witnessed a striking decrease in the number of queries a black-box attack must submit to the target classifier in order to forge adversarial examples. This particularly concerns the black-box score-based setup, where the attacker has access to the top predicted probabilities: the number of queries has dropped from millions to less than a thousand. This paper presents SurFree, a geometrical approach that achieves a similarly drastic reduction in the number of queries in the hardest setup: black-box decision-based attacks (only the top-1 label is available). We first highlight that the most recent attacks in that setup, HSJA, QEBA and GeoDA, all perform costly gradient-surrogate estimations. SurFree bypasses these by instead focusing on careful trials along diverse directions, guided by precise indications of the geometrical properties of the classifier's decision boundaries. We motivate this geometric approach before performing a head-to-head comparison with previous attacks, with the number of queries as a first-class citizen. We exhibit a faster distortion decay under low query budgets (a few hundred to a thousand), while remaining competitive at higher query budgets.
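
For readers unfamiliar with the decision-based setting, the toy loop below shows what "only the top-1 label is available" means in practice: the attacker can merely query hard labels and search along random directions for the closest boundary crossing. It is a generic illustration of the query model, not a reimplementation of SurFree, and the linear victim classifier and budgets are assumptions.

```python
# Hedged sketch of a decision-based (top-1 label only) attack loop.
# It illustrates the setting SurFree targets -- no gradients, no scores,
# only label queries -- but it is NOT the SurFree algorithm itself.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=64), 0.1          # toy linear "victim" classifier


def top1_label(x):
    """The only oracle the attacker may call: returns a hard label."""
    return int(np.dot(w, x) + b > 0)


def decision_attack(x0, n_directions=500):
    y0 = top1_label(x0)
    best = None
    for _ in range(n_directions):
        # Try a random direction; keep the closest misclassified point found
        # by shrinking the step with a coarse binary search.
        direction = rng.normal(size=x0.shape)
        direction /= np.linalg.norm(direction)
        lo, hi = 0.0, 10.0
        if top1_label(x0 + hi * direction) == y0:
            continue  # this direction never crosses the boundary within reach
        for _ in range(10):  # binary search the crossing point
            mid = (lo + hi) / 2
            if top1_label(x0 + mid * direction) == y0:
                lo = mid
            else:
                hi = mid
        candidate = x0 + hi * direction
        if best is None or np.linalg.norm(candidate - x0) < np.linalg.norm(best - x0):
            best = candidate
    return best


if __name__ == "__main__":
    x = rng.normal(size=64)
    adv = decision_attack(x)
    if adv is not None:
        print("original label:", top1_label(x), "adversarial label:", top1_label(adv))
        print("distortion:", np.linalg.norm(adv - x))
```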

Comment: 8 pages

Date: 25 Nov 2020

PDF » Main page »


Stay Connected, Leave no Trace: Enhancing Security and Privacy in WiFi via Obfuscating Radiometric Fingerprints

Authors: Luis F. Abanto-Leon, Andreas Baeuml, Gek Hong Sim, Matthias Hollick, Arash Asadi

Abstract: The intrinsic hardware imperfection of WiFi chipsets manifests itself in the transmitted signal, leading to a unique radiometric fingerprint. This fingerprint can be used as an additional means of authentication to enhance security. In fact, recent works propose practical fingerprinting solutions that can be readily implemented in commercial off-the-shelf devices. In this paper, we prove analytically and experimentally that these solutions are highly vulnerable to impersonation attacks. We also demonstrate that such a unique device-based signature can be abused to violate privacy by tracking the user device, and, as of today, users have no means to prevent such privacy attacks other than turning off the device. We propose RF-Veil, a radiometric fingerprinting solution that not only is robust against impersonation attacks but also protects user privacy by obfuscating the radiometric fingerprint of the transmitter for non-legitimate receivers. Specifically, we introduce a randomized pattern of phase errors into the transmitted signal such that only the intended receiver can extract the original fingerprint of the transmitter. In a series of experiments and analyses, we expose the vulnerability of naive randomization to statistical attacks and introduce countermeasures. Finally, we show the efficacy of RF-Veil experimentally in protecting user privacy and enhancing security. Importantly, our proposed solution still allows communication with devices that do not employ RF-Veil.
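
The obfuscation idea can be sketched in a few lines of NumPy: the transmitter adds a pseudo-random per-subcarrier phase pattern seeded by a secret shared with the legitimate receiver, so only that receiver can subtract it and recover the true phase-error fingerprint. The simplified signal model and all parameters are assumptions, not RF-Veil's actual design.

```python
# Hedged sketch of fingerprint obfuscation via secret-seeded random phases.
# The OFDM/fingerprint model is deliberately simplified and is an assumption,
# not RF-Veil's actual signal processing.
import numpy as np

N_SUBCARRIERS = 64
SHARED_SECRET = 1234                     # known only to TX and legitimate RX

rng_device = np.random.default_rng(7)
true_fingerprint = rng_device.normal(scale=0.05, size=N_SUBCARRIERS)  # per-subcarrier phase error (rad)


def transmit(symbols, secret):
    """Apply the hardware fingerprint, then a secret-seeded obfuscation pattern."""
    obfuscation = np.random.default_rng(secret).uniform(-np.pi, np.pi, N_SUBCARRIERS)
    return symbols * np.exp(1j * (true_fingerprint + obfuscation))


def observed_fingerprint(rx_symbols, tx_symbols, secret=None):
    """Estimate the phase fingerprint; the legitimate RX removes the obfuscation first."""
    phase = np.angle(rx_symbols / tx_symbols)
    if secret is not None:
        obfuscation = np.random.default_rng(secret).uniform(-np.pi, np.pi, N_SUBCARRIERS)
        phase = np.angle(np.exp(1j * (phase - obfuscation)))  # re-wrap to [-pi, pi]
    return phase


if __name__ == "__main__":
    tx = np.exp(1j * np.random.default_rng(1).uniform(0, 2 * np.pi, N_SUBCARRIERS))  # unit-power symbols
    rx = transmit(tx, SHARED_SECRET)
    legit = observed_fingerprint(rx, tx, secret=SHARED_SECRET)
    eavesdropper = observed_fingerprint(rx, tx)
    print("legitimate RX fingerprint error:", np.max(np.abs(legit - true_fingerprint)))
    print("eavesdropper sees near-uniform phases, std:", np.std(eavesdropper))
```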

Comment: ACM Sigmetrics 2021 / In Proc. ACM Meas. Anal. Comput. Syst., Vol. 4, 3, Article 44 (December 2020)

Date: 25 Nov 2020

PDF » Main page »


Distributed Additive Encryption and Quantization for Privacy Preserving Federated Deep Learning

Authors: Hangyu Zhu, Rui Wang, Yaochu Jin, Kaitai Liang, Jianting Ning

Abstract: Homomorphic encryption is a very useful gradient protection technique for privacy-preserving federated learning. However, existing encrypted federated learning systems need a trusted third party to generate and distribute key pairs to connected participants, which makes them ill-suited to federated learning and vulnerable to security risks. Moreover, encrypting all model parameters is computationally intensive, especially for large machine learning models such as deep neural networks. In order to mitigate these issues, we develop a practical, computationally efficient encryption-based protocol for federated deep learning in which the key pairs are collaboratively generated without the help of a third party. By quantizing the model parameters on the clients and performing an approximated aggregation on the server, the proposed method avoids encryption and decryption of the entire model. In addition, a threshold-based secret sharing technique is designed so that no one can hold the global private key for decryption, while aggregated ciphertexts can still be successfully decrypted by a threshold number of clients even if some clients are offline. Our experimental results confirm that the proposed method significantly reduces communication costs and computational complexity compared to existing encrypted federated learning approaches, without compromising performance or security.
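
To make the quantize-then-aggregate idea concrete, the sketch below quantizes each client's update to fixed-point integers and hides it behind pairwise additive masks that cancel in the sum, so the server only ever decodes the aggregate. The masking stands in for the paper's threshold homomorphic encryption, and all moduli and scales are assumptions.

```python
# Hedged sketch: quantized model updates aggregated so the server never sees
# an individual client's update in the clear. Pairwise additive masks stand in
# for the paper's threshold homomorphic encryption; all parameters are assumptions.
import numpy as np

SCALE = 2 ** 16          # fixed-point quantization scale
MODULUS = 2 ** 61 - 1    # arithmetic is done modulo a large prime
rng = np.random.default_rng(0)


def quantize(update):
    return (np.round(update * SCALE).astype(np.int64)) % MODULUS


def dequantize(agg, n_clients):
    centered = np.where(agg > MODULUS // 2, agg - MODULUS, agg)  # undo modular wrap
    return centered / SCALE / n_clients


def masked_updates(updates):
    """Each pair of clients (i, j) shares a random mask added by i and subtracted by j."""
    n, dim = len(updates), updates[0].size
    masked = [quantize(u) for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            mask = rng.integers(0, MODULUS, size=dim, dtype=np.int64)
            masked[i] = (masked[i] + mask) % MODULUS
            masked[j] = (masked[j] - mask) % MODULUS
    return masked


if __name__ == "__main__":
    client_updates = [rng.normal(scale=0.01, size=8) for _ in range(3)]
    aggregate = np.zeros(8, dtype=np.int64)
    for m in masked_updates(client_updates):   # the server only ever sees masked vectors
        aggregate = (aggregate + m) % MODULUS
    print("recovered mean update:", dequantize(aggregate, len(client_updates)))
    print("true mean update:     ", np.mean(client_updates, axis=0))
```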

Date: 25 Nov 2020

PDF » Main page »


Developing a Security Testbed for Industrial Internet of Things

Authors: Muna Al-Hawawreh, Elena Sitnikova

Abstract: While achieving security for the Industrial Internet of Things (IIoT) is a critical and non-trivial task, more attention is required for brownfield IIoT systems. This is a consequence of the long life cycles of their legacy devices, which were initially designed without considering security or IoT connectivity but are now becoming more connected and integrated with emerging IoT technologies and messaging communication protocols. Deploying today's methodologies and solutions in brownfield IIoT systems is not viable, as security solutions must co-exist with and fit these systems' requirements. This necessitates a realistic, standardized IIoT testbed that can serve as a common reference for measuring the credibility of security solutions for IIoT networks, analyzing IIoT attack landscapes and extracting threat intelligence. Developing a testbed for brownfield IIoT systems is a significant challenge, as these systems comprise legacy, heterogeneous devices, communication layers and applications that need to be implemented holistically to achieve high fidelity. In this paper, we propose a new generic end-to-end IIoT security testbed, with a particular focus on brownfield systems, and provide details of the testbed's architectural design and implementation process. The proposed testbed can be easily reproduced and reconfigured to support the testing of new processes and various security scenarios. Its operation is demonstrated on different connected devices, communication protocols and applications. The experiments demonstrate that the testbed is effective in terms of its operation and security testing. A comparison with existing testbeds, including a table of features, is provided.

Date: 25 Nov 2020

PDF » Main page »


Stochastic sparse adversarial attacks

Authors: Hatem Hajri, Manon Césaire, Théo Combey, Sylvain Lamprier, Patrick Gallinari

Abstract: Adversarial attacks on neural network classifiers (NNCs) and the use of random noise in these methods have stimulated a large number of works in recent years. However, despite all previous investigations, existing approaches that rely on random noise to fool NNCs have fallen far short of the performance of state-of-the-art adversarial methods. In this paper, we fill this gap by introducing stochastic sparse adversarial attacks (SSAA): simple, fast and purely noise-based targeted and untargeted attacks on NNCs. SSAA offer new examples of sparse (or $L_0$) attacks, for which only a few methods have been proposed previously. These attacks are devised by exploiting a small-time expansion idea widely used for Markov processes. Experiments on small and large datasets (CIFAR-10 and ImageNet) illustrate several advantages of SSAA over state-of-the-art methods. For instance, in the untargeted case, our method, called the voting folded Gaussian attack (VFGA), scales efficiently to ImageNet and achieves a significantly lower $L_0$ score than SparseFool (up to $\frac{1}{14}$ of it) while being faster. In the targeted setting, VFGA achieves appealing results on ImageNet and is significantly faster than the Carlini-Wagner $L_0$ attack.
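
A minimal example of a purely noise-based sparse attack, in the spirit of (but not identical to) the VFGA method named above: perturb a handful of randomly chosen coordinates with folded-Gaussian noise, query the classifier, and keep the sparsest perturbation that changes the prediction. The toy linear classifier and every parameter below are assumptions.

```python
# Hedged sketch of a purely noise-based sparse (L0) attack: perturb only a few
# randomly chosen coordinates with folded-Gaussian noise, query the classifier,
# and keep the sparsest perturbation that changes the prediction.
# This illustrates the attack family, not the paper's VFGA algorithm.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 784))               # toy 10-class linear classifier


def predict(x):
    return int(np.argmax(W @ x))


def sparse_noise_attack(x, n_trials=3000, k_max=20, sigma=3.0):
    y = predict(x)
    best, best_l0 = None, np.inf
    for _ in range(n_trials):
        k = int(rng.integers(1, k_max + 1))                  # how many coordinates to touch
        idx = rng.choice(x.size, size=k, replace=False)
        noise = np.abs(rng.normal(scale=sigma, size=k))      # folded-Gaussian magnitudes
        candidate = x.copy()
        candidate[idx] += rng.choice([-1.0, 1.0], size=k) * noise
        if predict(candidate) != y and k < best_l0:          # keep the sparsest success
            best, best_l0 = candidate, k
    return best, best_l0


if __name__ == "__main__":
    x = rng.uniform(0.0, 1.0, size=784)       # stand-in for a flattened input
    adv, l0 = sparse_noise_attack(x)
    print("attack succeeded:", adv is not None, "| L0 =", l0)
    # A real image attack would also clip perturbed pixels to the valid range.
```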

Comment: 19 pages

Date: 24 Nov 2020

PDF » Main page »


Lethean Attack: An Online Data Poisoning Technique

Authors: Eyal Perry

Abstract: Data poisoning is an adversarial scenario in which an attacker feeds a specially crafted sequence of samples to an online model in order to subvert learning. We introduce the Lethean Attack, a novel data poisoning technique that induces catastrophic forgetting in an online model. We apply the attack in the context of Test-Time Training, a modern online learning framework aimed at generalization under distribution shifts. We present the theoretical rationale and empirically compare the attack against other sample sequences that naturally induce forgetting. Our results demonstrate that, using Lethean Attacks, an adversary can revert a test-time training model back to coin-flip accuracy with a short sample sequence.
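
As a stand-in for the setting the abstract describes, the toy model below adapts itself at test time by pulling class prototypes toward its own pseudo-labeled inputs; an attacker who controls the input stream can slide inputs from one class's region through and past the other, dragging the prototypes with them until accuracy collapses to roughly coin-flip. This only illustrates poisoning an adaptive model; it is neither the paper's Lethean Attack nor the Test-Time Training framework.

```python
# Hedged sketch: poisoning an online, self-adapting model into forgetting.
# The nearest-class-mean "model" that updates on its own pseudo-labels is a
# stand-in for Test-Time Training, not the paper's actual setup.
import numpy as np

rng = np.random.default_rng(0)
DIM, LR = 32, 0.2
class_means = {0: rng.normal(size=DIM) + 3.0, 1: rng.normal(size=DIM) - 3.0}  # trained prototypes


def predict(x):
    return min(class_means, key=lambda c: np.linalg.norm(x - class_means[c]))


def online_update(x):
    """Test-time adaptation: move the predicted class prototype toward the input."""
    c = predict(x)
    class_means[c] = (1 - LR) * class_means[c] + LR * x


def accuracy(xs, ys):
    return np.mean([predict(x) == y for x, y in zip(xs, ys)])


if __name__ == "__main__":
    # Clean evaluation data drawn from the same two clusters the prototypes represent.
    ys = rng.integers(0, 2, 200)
    xs = [rng.normal(size=DIM) + (3.0 if y == 0 else -3.0) for y in ys]
    print("accuracy before attack:", accuracy(xs, ys))

    # Attack stream (illustrative): slide inputs from class 0's prototype through
    # class 1's region and beyond; the adapting prototypes follow the stream,
    # erasing the learned class structure.
    start = class_means[0].copy()
    end = class_means[1] + (class_means[1] - class_means[0])   # well past class 1
    for t in range(600):
        online_update(start + (t / 599) * (end - start))
    print("accuracy after attack: ", accuracy(xs, ys))
```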

Date: 24 Nov 2020

PDF » Main page »


Towards Mass Adoption of Contact Tracing Apps -- Learning from Users' Preferences to Improve App Design

Authors: Dana Naous, Manus Bonner, Mathias Humbert, Christine Legner

Abstract: Contact tracing apps have become one of the main approaches to control and slow down the spread of COVID-19 and to ease lockdown measures. While these apps can be very effective in stopping the transmission chain and saving lives, their adoption remains below the expected critical mass. The public debate about contact tracing apps emphasizes general privacy reservations and is conducted at an expert level, but lacks the user perspective on actual designs. To address this gap, we explore user preferences for contact tracing apps using market research techniques, specifically conjoint analysis. Our main contributions are empirical insights into individual and group preferences, as well as insights for prescriptive design. While our results confirm the privacy-preserving design of most European contact tracing apps, they also provide a more nuanced understanding of acceptable features. Based on market simulation and variation analysis, we conclude that adding goal-congruent features will play an important role in fostering mass adoption.
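
For readers unfamiliar with conjoint analysis, its core step is estimating part-worth utilities for attribute levels from respondents' ratings of candidate designs; a ratings-based least-squares version with invented attributes and data is sketched below. It is not the authors' survey design or estimation procedure.

```python
# Hedged sketch of ratings-based conjoint analysis: estimate part-worth
# utilities for app-design attribute levels by least squares. The attributes,
# levels, and responses are invented for illustration.
import numpy as np

# Each profile is dummy-coded (reference level omitted):
# storage: centralized (1) vs decentralized (0); identity: pseudonymous (1) vs anonymous (0).
profiles = np.array([
    [1, 0],   # centralized, anonymous
    [0, 0],   # decentralized, anonymous
    [1, 1],   # centralized, pseudonymous
    [0, 1],   # decentralized, pseudonymous
])
ratings = np.array([3.0, 8.0, 2.0, 6.5])   # fictitious preference ratings (0-10)

# Design matrix with an intercept column; part-worths are the regression coefficients.
X = np.hstack([np.ones((len(profiles), 1)), profiles])
coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)
intercept, pw_centralized, pw_pseudonymous = coef

print("baseline utility (decentralized, anonymous):", round(intercept, 2))
print("part-worth of centralized storage:", round(pw_centralized, 2))
print("part-worth of pseudonymous identity:", round(pw_pseudonymous, 2))
# A market simulation would then score candidate designs by summing part-worths
# and predict shares of preference across the simulated designs.
```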

Date: 24 Nov 2020

PDF » Main page »


RanStop: A Hardware-assisted Runtime Crypto-Ransomware Detection Technique

Authors: Nitin Pundir, Mark Tehranipoor, Fahim Rahman

Abstract: Among the many prevailing types of malware, crypto-ransomware poses a significant threat: it creates denial of access through unauthorized encryption of victims' documents and holds those documents hostage to financially extort the affected users, resulting in millions of dollars of annual losses worldwide. Ransomware variants are growing in number, with capabilities to evade many anti-virus and software-only malware detection schemes that rely on static execution signatures. In this paper, we propose a hardware-assisted scheme, called RanStop, for early detection of crypto-ransomware infection in commodity processors. RanStop leverages the hardware performance counters (HPCs) embedded in the performance monitoring unit of modern processors to observe micro-architectural event sets and detect known and unknown crypto-ransomware variants. We train a recurrent neural network based on a long short-term memory (LSTM) model to analyze micro-architectural events in the hardware domain when executing multiple variants of ransomware as well as benign programs. We create time series from the related HPC readings to derive intrinsic statistical features, and improve RanStop's detection accuracy and reduce noise via LSTM and global average pooling. As an early detection scheme, RanStop can accurately identify ransomware within 2 ms of the start of program execution by analyzing HPC information collected at 20 timestamps, each 100 us apart. This detection time is too early for the ransomware to cause any significant damage, if any. Moreover, validation against benign programs with behavioral (sub-routine-centric) similarity to crypto-ransomware shows that RanStop detects ransomware with an average accuracy of 97% over fifty random trials.
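
The detection pipeline described above (a short window of HPC samples fed to an LSTM with global average pooling) can be sketched with a toy PyTorch model. The layer sizes, counter count and fabricated data below are assumptions; real HPC traces would be collected through a perf-style interface.

```python
# Hedged sketch of the detection pipeline described in the abstract: a short
# time series of hardware performance counter (HPC) readings classified by an
# LSTM followed by global average pooling. Model sizes and the random "HPC"
# data are assumptions; real deployments would read counters via perf or MSRs.
import torch
import torch.nn as nn

N_TIMESTAMPS, N_COUNTERS = 20, 4      # 20 samples, e.g. 100 us apart, of 4 HPC events


class RanStopLikeDetector(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(N_COUNTERS, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # benign vs ransomware

    def forward(self, x):                  # x: (batch, time, counters)
        out, _ = self.lstm(x)
        pooled = out.mean(dim=1)           # global average pooling over time
        return self.head(pooled)


if __name__ == "__main__":
    model = RanStopLikeDetector()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Fabricated training batch: random traces with random labels, just to show the shapes.
    traces = torch.rand(16, N_TIMESTAMPS, N_COUNTERS)
    labels = torch.randint(0, 2, (16,))
    for _ in range(5):
        optimizer.zero_grad()
        loss = nn.functional.cross_entropy(model(traces), labels)
        loss.backward()
        optimizer.step()

    verdict = model(torch.rand(1, N_TIMESTAMPS, N_COUNTERS)).argmax(dim=1)
    print("predicted class (0 = benign, 1 = ransomware):", verdict.item())
```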

Date: 24 Nov 2020

PDF » Main page »


A decentralized approach towards secure firmware updates and testing over commercial IoT Devices

Authors: Projjal Gupta

Abstract: Internet technologies have driven a paradigm shift in the fields of computing and data science, and one such paradigm-defining change is the Internet of Things, or IoT. Nowadays, thousands of household appliances use integrated smart devices that allow remote monitoring and control, and also support intensive computational work such as high-end AI-integrated smart security systems with sustained alerts for the user. The update process of these IoT devices usually lacks any check of the security of the centralized servers, which may be compromised and host malicious firmware files, as the servers are simply presumed secure during deployment. This problem can be addressed by using a decentralized database to hold the hashes and the firmware. This paper discusses the possible implications of insecure servers used to host the firmware of commercial IoT products, and aims to provide a blockchain-based decentralized solution to host firmware files with the property of immutability, and with controlled access to the firmware upload functions so as to prevent unauthorized use. The paper also sheds light on possible hardware implementations and the use of cryptographically secure components in such secure architecture models.
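
The central check the paper proposes, rejecting any firmware image whose hash does not match the immutable ledger record, can be sketched as follows; the ledger is mocked with a dictionary, since no concrete blockchain interface is given in the abstract.

```python
# Hedged sketch of ledger-backed firmware verification: the device accepts an
# update only if its hash matches the entry recorded on the (mocked) ledger.
# The ledger interface is a plain dict here; a real deployment would query a
# blockchain node and verify the uploader's signature as well.
import hashlib

# Mock of the immutable ledger: firmware version -> expected SHA-256 digest.
LEDGER = {
    "camera-fw-1.2.3": hashlib.sha256(b"trusted firmware image bytes").hexdigest(),
}


def verify_firmware(version: str, image: bytes) -> bool:
    """Return True only if the image's hash matches the ledger record."""
    expected = LEDGER.get(version)
    if expected is None:
        return False                       # unknown version: reject
    return hashlib.sha256(image).hexdigest() == expected


if __name__ == "__main__":
    print(verify_firmware("camera-fw-1.2.3", b"trusted firmware image bytes"))   # True
    print(verify_firmware("camera-fw-1.2.3", b"tampered firmware image bytes"))  # False
```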

Date: 24 Nov 2020

PDF » Main page »

