# PapersCut

A shortcut to recent security papers

### arXiv

#### Label-Consistent Backdoor Attacks

Authors: Alexander Turner, Dimitris Tsipras, Aleksander Madry

Abstract: Deep neural networks have been demonstrated to be vulnerable to backdoor attacks. Specifically, by injecting a small number of maliciously constructed inputs into the training set, an adversary is able to plant a backdoor into the trained model. This backdoor can then be activated during inference by a backdoor trigger to fully control the model's behavior. While such attacks are very effective, they crucially rely on the adversary injecting arbitrary inputs that are---often blatantly---mislabeled. Such samples would raise suspicion upon human inspection, potentially revealing the attack. Thus, for backdoor attacks to remain undetected, it is crucial that they maintain label-consistency---the condition that injected inputs are consistent with their labels. In this work, we leverage adversarial perturbations and generative models to execute efficient, yet label-consistent, backdoor attacks. Our approach is based on injecting inputs that appear plausible, yet are hard to classify, hence causing the model to rely on the (easier-to-learn) backdoor trigger.

Date: 5 Dec 2019
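The poisoning step underlying such attacks can be sketched in a few lines. This is an illustrative stamp-a-patch example, not the paper's exact construction: the trigger pattern, size, and values are hypothetical, and the label-consistent twist is that the trigger is stamped only onto images that already carry the target label.

```python
import numpy as np

def stamp_trigger(images, trigger_value=1.0, size=3):
    """Stamp a small square trigger into the bottom-right corner.

    In the label-consistent setting, this is applied only to images
    whose label already matches the attack target, so poisoned
    samples survive human inspection of (image, label) pairs.
    """
    poisoned = images.copy()
    poisoned[:, -size:, -size:] = trigger_value
    return poisoned

# Toy data: 4 grayscale 8x8 images assumed to be of the target class.
rng = np.random.default_rng(0)
clean = rng.random((4, 8, 8))
poisoned = stamp_trigger(clean)
```

At inference time, the adversary stamps the same patch onto any input to activate the backdoor; the paper additionally perturbs the poisoned images (via adversarial perturbations or a GAN) so the model is pushed to rely on the easier-to-learn trigger.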

#### Trident: Efficient 4PC Framework for Privacy Preserving Machine Learning

Authors: Harsh Chaudhari, Rahul Rachuri, Ajith Suresh

Abstract: Machine learning has started to be deployed in fields such as healthcare and finance, which has propelled the need for and growth of privacy-preserving machine learning (PPML). We propose an actively secure four-party protocol (4PC) and a framework for PPML, showcasing its applications on four of the most widely known machine learning algorithms -- Linear Regression, Logistic Regression, Neural Networks, and Convolutional Neural Networks. Our 4PC protocol, tolerating at most one malicious corruption, is practically efficient compared to existing works. We use the protocol to build an efficient mixed-world framework (Trident) to switch between the Arithmetic, Boolean, and Garbled worlds. Our framework operates in the offline-online paradigm over rings and is instantiated in an outsourced setting for machine learning. We also propose conversions especially relevant to privacy-preserving machine learning. The highlights of our framework include using a minimal number of expensive circuits overall compared to ABY3. This can be seen in our technique for truncation, which does not affect the online cost of multiplication and removes the need for any circuits in the offline phase. Our B2A conversion improves rounds by $\mathbf{7} \times$ and communication complexity by $\mathbf{18} \times$. In addition, all of the special conversions for machine learning, e.g. Secure Comparison, achieve constant round complexity. The practicality of our framework is demonstrated through improvements in the benchmarks of the aforementioned algorithms when compared with ABY3. All the protocols are implemented over a 64-bit ring in both LAN and WAN settings. Our improvements reach up to $\mathbf{187} \times$ for the training phase and $\mathbf{158} \times$ for the prediction phase, observed over LAN and WAN.

Comment: To appear in the 27th Annual Network and Distributed System Security Symposium (NDSS) 2020

Date: 5 Dec 2019
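The "over rings" setting that Trident (and ASTRA below) build on starts from additive secret sharing modulo $2^{64}$, which maps directly onto machine word arithmetic. A minimal sketch, not Trident's actual sharing scheme (which uses masked values and a different share structure):

```python
import random

MOD = 2**64  # the ring Z_{2^64}, matching the 64-bit ring these frameworks use

def share(secret, n_parties=4, rng=random):
    """Split `secret` into n additive shares that sum to it mod 2**64.

    Any n-1 shares are uniformly random and reveal nothing; only the
    full set reconstructs the secret.
    """
    shares = [rng.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    return sum(shares) % MOD
```

Addition of shared values is free (each party adds its shares locally); the protocol design effort, and the communication costs quoted in the abstract, go into multiplication and the Boolean/Arithmetic/Garbled conversions.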

#### ASTRA: High Throughput 3PC over Rings with Application to Secure Prediction

Authors: Harsh Chaudhari, Ashish Choudhury, Arpita Patra, Ajith Suresh

Abstract: The concrete efficiency of secure computation has been the focus of many recent works. In this work, we present concretely efficient protocols for secure $3$-party computation (3PC) over a ring of integers modulo $2^{\ell}$ tolerating one corruption, with both semi-honest and malicious security. Since computation over a ring emulates computation on real-world system architectures, secure computation over rings has gained momentum of late. Cast in the offline-online paradigm, our constructions present the most efficient online phase in concrete terms. In the semi-honest setting, our protocol requires communication of $2$ ring elements per multiplication gate during the {\it online} phase, attaining a per-party cost of {\em less than one element}. This is achieved for the first time in the regime of 3PC. In the {\it malicious} setting, our protocol requires communication of $4$ elements per multiplication gate during the online phase, beating the state-of-the-art protocol by $5$ elements. Realized with the security notions of both selective abort and fairness, the malicious protocol with fairness involves slightly more communication than its counterpart with abort security for the output gates {\em alone}. We apply our techniques from $3$PC in the regime of secure server-aided machine-learning (ML) inference for a range of prediction functions -- linear regression, linear SVM regression, logistic regression, and linear SVM classification. Our setting considers a model-owner with trained model parameters and a client with a query, where the latter wishes to learn the prediction for her query based on the model parameters of the former. The inputs and computation are outsourced to a set of three non-colluding servers. Our constructions, catering to both the semi-honest and the malicious worlds, invariably perform better than the existing constructions.

Date: 5 Dec 2019
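The per-multiplication-gate communication counts above are what distinguish these protocols. For context, the classical way to multiply additively shared values in the offline-online paradigm is Beaver's trick, sketched here over $\mathbb{Z}_{2^{64}}$; this is the textbook baseline, not ASTRA's own multiplication protocol.

```python
import random

MOD = 2**64

def share(x, n=3, rng=random):
    s = [rng.randrange(MOD) for _ in range(n - 1)]
    return s + [(x - sum(s)) % MOD]

def open_(shares):
    return sum(shares) % MOD

def beaver_mul(x_sh, y_sh, a_sh, b_sh, c_sh):
    """Multiply shared x and y using a preprocessed triple c = a*b.

    Parties open d = x - a and e = y - b (the only online communication),
    then locally compute shares of x*y = d*e + d*b + e*a + c
    (the public d*e term is added by a single designated party).
    """
    d = open_([(xi - ai) % MOD for xi, ai in zip(x_sh, a_sh)])
    e = open_([(yi - bi) % MOD for yi, bi in zip(y_sh, b_sh)])
    z = [(d * bi + e * ai + ci) % MOD for ai, bi, ci in zip(a_sh, b_sh, c_sh)]
    z[0] = (z[0] + d * e) % MOD
    return z

# Offline phase: a random triple (a, b, c = a*b) is preshared.
a, b = random.randrange(MOD), random.randrange(MOD)
a_sh, b_sh, c_sh = share(a), share(b), share(a * b % MOD)

# Online phase: multiply shared inputs 7 and 9.
z_sh = beaver_mul(share(7), share(9), a_sh, b_sh, c_sh)
assert open_(z_sh) == 63
</```

Opening $d$ and $e$ is what drives the online element counts; ASTRA's contribution is bringing that online cost down to 2 elements (semi-honest) and 4 elements (malicious) per gate in the 3PC setting.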

#### FMPC: Secure Multiparty Computation from Fourier Series and Parseval's Identity

Authors: Alberto Sonnino

Abstract: FMPC is a novel multiparty computation protocol of arithmetic circuits based on secret-sharing, capable of computing multiplication of secrets with no online communication; it thus enjoys constant online communication latency in the size of the circuit. FMPC is based on the application of Fourier series to Parseval's identity, and introduces the first generalization of Parseval's identity for Fourier series applicable to an arbitrary number of inputs. FMPC operates in a setting where users wish to compute a function over some secret inputs by submitting the computation to a set of nodes, but is only suitable for the evaluation of low-depth arithmetic circuits. FMPC relies on an offline phase consisting of traditional preprocessing as introduced by established protocols like SPDZ, and innovates on the online phase that mainly consists of each node locally evaluating specific functions. FMPC paves the way for a new kind of multiparty computation protocols capable of computing multiplication of secrets as an alternative to circuit garbling and the traditional algebra introduced by Donald Beaver in 1991.

Date: 5 Dec 2019
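FMPC's generalization builds on the classical Parseval identity, which for the discrete Fourier transform states that signal energy is preserved between the time and frequency domains. A quick numerical check, using numpy's unnormalized FFT convention ($\sum_n |x_n|^2 = \frac{1}{N}\sum_k |X_k|^2$):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(16)
X = np.fft.fft(x)  # numpy's FFT is unnormalized, hence the 1/N below

time_energy = np.sum(np.abs(x) ** 2)
freq_energy = np.sum(np.abs(X) ** 2) / len(x)
assert np.isclose(time_energy, freq_energy)
```

The identity relates a sum of products of coefficients to an integral of a product of functions, which is what lets FMPC turn multiplication of secrets into local evaluation of functions in the online phase.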

#### Context Aware Password Guessability via Multi-Dimensional Rank Estimation

Authors: Liron David, Avishai Wool

Date: 5 Dec 2019

#### Catch Me (On Time) If You Can: Understanding the Effectiveness of Twitter URL Blacklists

Authors: Simon Bell, Kenny Paterson, Lorenzo Cavallaro

Date: 5 Dec 2019

#### Leveraging Operational Technology and the Internet of Things to Attack Smart Buildings

Authors: Daniel Ricardo dos Santos, Mario Dagrada, Elisa Costante

Abstract: In recent years, the buildings where we spend most of our lives have been rapidly evolving. They are becoming fully automated environments where energy consumption, access control, heating and many other subsystems are all integrated within a single system commonly referred to as a smart building (SB). To support the growing complexity of building operations, the building automation systems (BAS) powering SBs are integrating consumer-range Internet of Things (IoT) devices such as IP cameras alongside operational technology (OT) controllers and actuators. However, these changes pose important cybersecurity concerns, since the attack surface is larger, attack vectors are increasing and attacks can potentially harm building occupants. In this paper, we analyze the threat landscape of BASs by focusing on subsystems that are strongly affected by the advent of IoT devices, such as video surveillance systems and smart lighting. We demonstrate how BAS operation can be disrupted by simple attacks on widely used network protocols. Furthermore, using both previously disclosed vulnerabilities and 0-day vulnerabilities reported in this paper, we present the first (to our knowledge) BAS-specific malware, which is able to persist within the BAS network by leveraging both OT and IoT devices connected to the BAS. Our research highlights that BAS networks can be considered as critical as industrial control systems and that security concerns in BASs deserve more attention from both the industrial and scientific communities. Even within a simulated environment, our proof-of-concept attacks were carried out with relative ease and a limited budget and resources. We therefore believe that well-funded attack groups will increasingly shift their focus towards BASs, with the potential of impacting the lives of thousands of people.

Date: 5 Dec 2019

#### Gobi: WebAssembly as a Practical Path to Library Sandboxing

Authors: Shravan Narayan, Tal Garfinkel, Sorin Lerner, Hovav Shacham, Deian Stefan

Abstract: Software-based fault isolation (SFI) is a powerful approach to reducing the impact of security vulnerabilities in large C/C++ applications like Firefox and Apache. Unfortunately, practical SFI tools have not been broadly available. Developing SFI toolchains is a significant engineering challenge. Only in recent years have browser vendors invested in building production-quality SFI tools like Native Client (NaCl) to sandbox code. Further, without committed support, these tools are not viable; e.g. NaCl has been discontinued, orphaning projects that relied on it. WebAssembly (Wasm) offers a promising solution---it can support high-performance sandboxing and has been embraced by all major browser vendors---and thus seems to have a viable future. However, Wasm presently only offers a solution for sandboxing mobile code. Providing SFI for native applications, such as C/C++ libraries, requires additional steps. To reconcile the different worlds of Wasm on the browser and native platforms, we present Gobi. Gobi is a system of compiler changes and runtime support that can sandbox normal C/C++ libraries with Wasm---allowing them to be compiled and linked into native applications. Gobi has been tested on libjpeg, libpng, and zlib. Based on our experience developing Gobi, we conclude with a call to arms to the Wasm community and the SFI research community to make Wasm-based module sandboxing a first-class use case, and describe how this can significantly benefit both communities. Addendum: This short paper was originally written in January 2019. Since then, the implementation and design of Gobi have evolved substantially, as some of the issues raised in this paper have been addressed by the Wasm community. Nevertheless, several challenges still remain. We have thus left the paper largely intact and only provide a brief update on the state of Wasm tooling as of November 2019 in the last section.

Date: 4 Dec 2019
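The core SFI idea (and how Wasm's linear memory confines a module) can be illustrated with address masking: if the sandbox region is a power of two in size, masking every address before a load or store guarantees it lands inside the region. This is a hypothetical minimal model of the technique, not Gobi's implementation:

```python
SANDBOX_SIZE = 1 << 16          # power-of-two region, so a single mask suffices
MASK = SANDBOX_SIZE - 1
memory = bytearray(SANDBOX_SIZE)

def sfi_store(addr, value):
    """Force every store into the sandbox by masking the address.

    A compiler inserting this mask before each memory access of the
    sandboxed library is the essence of SFI: even a corrupted or
    attacker-controlled pointer cannot escape the region.
    """
    memory[addr & MASK] = value & 0xFF

def sfi_load(addr):
    return memory[addr & MASK]

sfi_store(0x12345, 0xAB)        # out-of-range address wraps into the region
```

Wasm toolchains get the same containment property from bounds-checked (or guard-page-backed) linear memory, which is why compiling a C library to Wasm and linking it back into a native application yields a sandboxed library.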

#### A Survey of Game Theoretic Approaches for Adversarial Machine Learning in Cybersecurity Tasks

Authors: Prithviraj Dasgupta, Joseph B. Collins

Abstract: Machine learning techniques are currently used extensively for automating various cybersecurity tasks. Most of these techniques utilize supervised learning algorithms that rely on training the algorithm to classify incoming data into different categories, using data encountered in the relevant domain. A critical vulnerability of these algorithms is that they are susceptible to adversarial attacks where a malicious entity called an adversary deliberately alters the training data to misguide the learning algorithm into making classification errors. Adversarial attacks could render the learning algorithm unsuitable to use and leave critical systems vulnerable to cybersecurity attacks. Our paper provides a detailed survey of the state-of-the-art techniques that are used to make a machine learning algorithm robust against adversarial attacks using the computational framework of game theory. We also discuss open problems and challenges and possible directions for further research that would make deep machine learning-based systems more robust and reliable for cybersecurity tasks.

Comment: 13 pages, 2 figures, 1 table

Date: 4 Dec 2019
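Game-theoretic defenses of the kind surveyed here typically model the defender (learner) and adversary as players in a zero-sum game. A tiny sketch with a hypothetical payoff matrix, computing each side's pure-strategy security level:

```python
import numpy as np

# Hypothetical defender payoffs: rows = defender strategies,
# columns = attacker strategies (zero-sum: the attacker receives the negative).
payoff = np.array([
    [3, 1],   # e.g. retrain on sanitized data
    [2, 2],   # e.g. deploy a robust (adversarially trained) model
])

maximin = payoff.min(axis=1).max()   # defender's guaranteed payoff
minimax = payoff.max(axis=0).min()   # best the attacker can limit the defender to

# When maximin == minimax, the game has a pure-strategy saddle point,
# i.e. an equilibrium where neither player benefits from deviating.
```

In this example both values are 2, so the second defender strategy is an equilibrium choice; when the values differ, equilibria exist only in mixed strategies, which is where the linear-programming machinery used in this literature comes in.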

#### Walking on the Edge: Fast, Low-Distortion Adversarial Examples

Authors: Hanwei Zhang, Yannis Avrithis, Teddy Furon, Laurent Amsaleg

Abstract: Adversarial examples of deep neural networks are receiving ever-increasing attention because they help in understanding and reducing the sensitivity of these networks to their input. This is natural given the increasing applications of deep neural networks in our everyday lives. Since white-box attacks are almost always successful, it is typically only the distortion of the perturbations that matters in their evaluation. In this work, we argue that speed is important as well, especially when considering that fast attacks are required by adversarial training. Given more time, iterative methods can always find better solutions. We investigate this speed-distortion trade-off in some depth and introduce a new attack called boundary projection (BP) that improves upon existing methods by a large margin. Our key idea is that the classification boundary is a manifold in the image space: we therefore quickly reach the boundary and then optimize distortion on this manifold.

Comment: 13 pages, 9 figures

Date: 5 Dec 2019
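For context on the "fast" end of the speed-distortion trade-off: the classic one-step attack (FGSM) perturbs the input along the sign of the loss gradient. This sketch, on a toy logistic classifier with an analytic gradient, illustrates that baseline; it is not the BP attack, which instead iteratively optimizes distortion along the boundary manifold.

```python
import numpy as np

def fgsm(x, w, b, y, eps):
    """One-step L_inf gradient-sign attack on a logistic classifier.

    Fast (one gradient evaluation) but with unoptimized distortion:
    every coordinate moves by the full eps budget.
    """
    p = 1 / (1 + np.exp(-(w @ x + b)))   # predicted P(y=1)
    grad_x = (p - y) * w                 # d(cross-entropy)/dx for label y
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.5, 0.5])
x_adv = fgsm(x, w, b, y=1.0, eps=0.1)   # pushes the logit away from class 1
```

Iterative attacks (and BP in particular) spend more gradient steps to find a misclassifying point much closer to the original input, which is exactly the trade-off the paper quantifies.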