PapersCut: A shortcut to recent security papers

Adversarial attacks on Copyright Detection Systems

Authors: Parsa Saadatpanah, Ali Shafahi, Tom Goldstein

Abstract: It is well-known that many machine learning models are susceptible to so-called "adversarial attacks," in which an attacker evades a classifier by making small perturbations to inputs. This paper discusses how industrial copyright detection tools, which serve a central role on the web, are susceptible to adversarial attacks. We discuss a range of copyright detection systems, and why they are particularly vulnerable to attacks. These vulnerabilities are especially apparent for neural network based systems. As a proof of concept, we describe a well-known music identification method, and implement this system in the form of a neural net. We then attack this system using simple gradient methods. Adversarial music created this way successfully fools industrial systems, including the AudioTag copyright detector and YouTube's Content ID system. Our goal is to raise awareness of the threats posed by adversarial examples in this space, and to highlight the importance of hardening copyright detection systems to attacks.
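
As a concrete illustration of the "simple gradient methods" the abstract mentions, here is a minimal projected-gradient sketch in PyTorch. `model` is a hypothetical stand-in for the paper's neural-net reimplementation of a fingerprinting system, not the authors' actual code:

```python
import torch

def evade_fingerprint(model, x, y, eps=0.01, alpha=0.002, steps=40):
    """Perturb waveform x (within an eps ball) so the fingerprint
    model stops matching the true label y."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss to break the match, then project back to the eps ball.
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)
    return x_adv
```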

Date: 17 Jun 2019



CheckNet: Secure Inference on Untrusted Devices

Authors: Marcus Comiter, Surat Teerapittayanon, H. T. Kung

Abstract: We introduce CheckNet, a method for secure inference with deep neural networks on untrusted devices. CheckNet is like a checksum for neural network inference: it verifies the integrity of the inference computation performed by untrusted devices to 1) ensure the inference has actually been performed, and 2) ensure the inference has not been manipulated by an attacker. CheckNet is completely transparent to the third party running the computation, applicable to all types of neural networks, does not require specialized hardware, adds little overhead, and has negligible impact on model performance. CheckNet can be configured to provide different levels of security depending on application needs and compute/communication budgets. We present both empirical and theoretical validation of CheckNet on multiple popular deep neural network models, showing excellent attack detection (0.88-0.99 AUC) and attack success bounds.
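
The abstract does not detail CheckNet's mechanism, so the sketch below shows only the simplest baseline for the same setting: spot-checking an untrusted device by re-running a random sample of inputs on trusted hardware. This is explicitly not CheckNet's method; it merely illustrates the integrity-verification problem the paper addresses:

```python
import random
import torch

def spot_check(trusted_model, inputs, untrusted_outputs, sample_frac=0.05, tol=1e-4):
    """Re-run a random sample of inputs locally; flag the device if any
    returned output deviates from the trusted recomputation."""
    audit = random.sample(range(len(inputs)), max(1, int(sample_frac * len(inputs))))
    return all(
        torch.allclose(trusted_model(inputs[i]), untrusted_outputs[i], atol=tol)
        for i in audit
    )
```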

Date: 17 Jun 2019



Danger of using fully homomorphic encryption: A look at Microsoft SEAL

Authors: Zhiniang Peng

Abstract: Fully homomorphic encryption is a promising cryptographic primitive for encrypting data while allowing others to compute on the encrypted data. But it has well-known problems, such as the lack of CCA security and the circuit-privacy problem. Despite these problems, many companies are currently using, or preparing to use, fully homomorphic encryption to build data security applications. It may seem that fully homomorphic encryption is very close to practicality and that these problems can easily be mitigated in implementation. Although the problems are well known in theory, there has been no public discussion of their actual impact on real applications. Our research shows that, from the perspective of practical application, fully homomorphic encryption has many security pitfalls, and its security problems in real applications are more severe than imagined. In this paper, we take Microsoft SEAL as an example to introduce the security pitfalls of fully homomorphic encryption from the perspective of implementation and practical application.
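
To see why homomorphic encryption and CCA security are in tension, note that any party can transform a valid ciphertext into a related one. A toy demonstration with textbook Paillier encryption (tiny hard-coded primes, purely illustrative, and not SEAL's actual lattice-based scheme):

```python
from math import gcd
import random

# Toy Paillier (additively homomorphic) with tiny hard-coded primes --
# fine for illustration, catastrophically insecure in practice.
p, q = 2357, 2551
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)
mu = pow(lam, -1, n)

def enc(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(1 + n, m, n2) * pow(r, n, n2) % n2

def dec(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

c = enc(42)
# Malleability: anyone can shift the plaintext without the secret key,
# which is exactly why a homomorphic scheme cannot be CCA2-secure.
c_mauled = c * enc(1) % n2
assert dec(c_mauled) == 43
```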

Date: 17 Jun 2019



Supporting Web Archiving via Web Packaging

Authors: Sawood Alam, Michele C. Weigle, Michael L. Nelson, Martin Klein, Herbert Van de Sompel

Abstract: We describe challenges related to web archiving, replaying archived web resources, and verifying their authenticity. We show that Web Packaging has significant potential to help address these challenges and identify areas in which changes are needed in order to fully realize that potential.

Comment: This is a position paper accepted at the ESCAPE Workshop 2019. https://www.iab.org/activities/workshops/escape-workshop/

Date: 17 Jun 2019



The Attack Generator: A Systematic Approach Towards Constructing Adversarial Attacks

Authors: Felix Assion, Peter Schlicht, Florens Greßner, Wiebke Günther, Fabian Hüger, Nico Schmidt, Umair Rasheed

Abstract: Most state-of-the-art machine learning (ML) classification systems are vulnerable to adversarial perturbations. As a consequence, adversarial robustness poses a significant challenge for the deployment of ML-based systems in safety- and security-critical environments like autonomous driving, disease detection or unmanned aerial vehicles. In recent years we have seen an impressive number of publications presenting ever more new adversarial attacks. However, attack research seems to be rather unstructured, and new attacks often appear to be random selections from the unlimited set of possible adversarial attacks. With this publication, we present a structured analysis of the adversarial attack creation process. By identifying the different building blocks of adversarial attacks, we outline the road to new sets of adversarial attacks. We call this the "attack generator". In pursuit of this objective, we summarize and extend existing adversarial perturbation taxonomies. The resulting taxonomy is then linked to the application context of computer vision systems for autonomous vehicles, i.e. semantic segmentation and object detection. Finally, to prove the usefulness of the attack generator, we investigate existing semantic segmentation attacks with respect to the identified defining components of adversarial attacks.
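
The building-block idea can be pictured as enumerating combinations of taxonomy axes. The axes and values below are illustrative guesses, not the paper's exact taxonomy:

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class AttackSpec:
    knowledge: str   # what the adversary can observe about the model
    goal: str        # untargeted vs. targeted misbehavior
    norm: str        # measure of the perturbation budget
    task: str        # victim task

def generate_attacks():
    """Enumerate the cross-product of building blocks into candidate attacks."""
    knowledge = ["white-box", "black-box"]
    goals = ["untargeted", "targeted"]
    norms = ["l0", "l2", "linf"]
    tasks = ["semantic segmentation", "object detection"]
    return [AttackSpec(k, g, n, t) for k, g, n, t in product(knowledge, goals, norms, tasks)]

print(len(generate_attacks()))  # 24 candidate attack specifications
```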

Comment: CVPR SAIAD - Workshop 2019

Date: 17 Jun 2019



Using Trusted Execution Environments for Secure Stream Processing of Medical Data

Authors: Carlos Segarra, Ricard Delgado-Gonzalo, Mathieu Lemay, Pierre-Louis Aublin, Peter Pietzuch, Valerio Schiavoni

Abstract: Processing sensitive data, such as those produced by body sensors, on third-party untrusted clouds is particularly challenging without compromising the privacy of the users who generate them. Typically, these sensors generate large quantities of continuous data in a streaming fashion. Such vast amounts of data must be processed efficiently and securely, even under strong adversarial models. The recent introduction into the mass market of consumer-grade processors with Trusted Execution Environments (TEEs), such as Intel SGX, paves the way for solutions that overcome less flexible approaches, such as those built atop homomorphic encryption. We present a secure stream processing system built on top of Intel SGX to showcase the viability of this approach with a system specifically fitted for medical data. We design and fully implement a prototype system that we evaluate with several realistic datasets. Our experimental results show that the proposed system achieves modest overhead compared to vanilla Spark while offering additional protection guarantees under powerful attackers and threat models.
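
The overall pattern -- data encrypted at the sensor, decrypted and processed only inside the trusted boundary, and re-encrypted before leaving it -- can be simulated in plain Python. This is a sketch of the dataflow only: there is no real enclave here, and Fernet merely stands in for whatever channel cipher the actual system would negotiate via SGX remote attestation:

```python
from statistics import mean
from cryptography.fernet import Fernet

# In the real system this key would be provisioned via remote attestation.
channel = Fernet(Fernet.generate_key())

def sensor_send(samples):
    """Sensor side: encrypt a window of readings before upload."""
    return channel.encrypt(",".join(map(str, samples)).encode())

def enclave_process(ciphertext):
    """'Enclave' side: decrypt, compute, re-encrypt the result."""
    samples = [float(v) for v in channel.decrypt(ciphertext).decode().split(",")]
    return channel.encrypt(str(mean(samples)).encode())  # e.g. windowed average heart rate

result = enclave_process(sensor_send([72, 75, 71, 74]))
print(channel.decrypt(result).decode())
```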

Comment: 19th International Conference on Distributed Applications and Interoperable Systems

Date: 17 Jun 2019



Scrubbing Sensitive PHI Data from Medical Records made Easy by SpaCy -- A Scalable Model Implementation Comparisons

Authors: Rashmi Jain, Dinah Samuel Anand, Vijayalakshmi Janakiraman

Abstract: De-identification of clinical records is an extremely important process which enables the use of the wealth of information present in them. Many techniques are available for this task, but none of the published implementations has been evaluated for scalability, which is an important benchmark. We evaluated numerous deep learning techniques, such as BiLSTM-CNN, IDCNN, CRF, BiLSTM-CRF and SpaCy, on both performance and efficiency. We propose that the SpaCy model implementation for scrubbing sensitive PHI data from medical records both performs well and is extremely efficient compared to other published models.
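
A minimal sketch of entity-based scrubbing with an off-the-shelf spaCy pipeline. The paper trains and compares its own models; the `en_core_web_sm` model and the label set below are illustrative stand-ins:

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # requires: python -m spacy download en_core_web_sm
PHI_LABELS = {"PERSON", "GPE", "DATE", "ORG"}  # illustrative PHI-like entity types

def scrub(text):
    """Replace detected PHI-like entities with their label as a placeholder."""
    doc = nlp(text)
    out, last = [], 0
    for ent in doc.ents:
        if ent.label_ in PHI_LABELS:
            out.append(text[last:ent.start_char])
            out.append(f"[{ent.label_}]")
            last = ent.end_char
    out.append(text[last:])
    return "".join(out)

print(scrub("John Smith was admitted to Mercy Hospital on 12 March 2019."))
```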

Comment: 9 Pages, 7 Figures, 2 Tables

Date: 17 Jun 2019



A baseline for unsupervised advanced persistent threat detection in system-level provenance

Authors: Ghita Berrada, Sidahmed Benabderrahmane, James Cheney, William Maxwell, Himan Mookherjee, Alec Theriault, Ryan Wright

Abstract: Advanced persistent threats (APT) are stealthy, sophisticated, and unpredictable cyberattacks that can steal intellectual property, damage critical infrastructure, or cause millions of dollars in damage. Detecting APTs by monitoring system-level activity is difficult because manually inspecting the high volume of normal system activity is overwhelming for security analysts. We evaluate the effectiveness of unsupervised batch and streaming anomaly detection algorithms over multiple gigabytes of provenance traces recorded on four different operating systems to determine whether they can detect realistic APT-like attacks reliably and efficiently. This report is the first detailed study of the effectiveness of generic unsupervised anomaly detection techniques in this setting.
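
A generic unsupervised baseline of the kind the paper evaluates might score per-process feature vectors derived from provenance events. The features and data below are synthetic and illustrative:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Rows: processes; columns: e.g. counts of file reads, file writes, network
# connections, and child processes extracted from the provenance graph.
normal = rng.poisson(lam=[20, 10, 2, 1], size=(500, 4))
apt_like = np.array([[3, 40, 25, 12]])  # few reads, heavy exfiltration-like activity

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(detector.score_samples(apt_like))  # lower score = more anomalous
```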

Date: 17 Jun 2019



A Public-Key Cryptosystem Using Cyclotomic Matrices

Authors: Md. Helal Ahmed, Jagmohan Tanti, Sumant Pushp

Abstract: Confidentiality and integrity are two paramount objectives of asymmetric-key cryptography, in which two non-identical but mathematically related keys -- a public key and a private key -- effectuate the secure transmission of messages. The private key is kept secret, while the public key is shared. Messages remain secure when the computation required to break the scheme is prohibitively large. In this work we propose a public-key cryptosystem using cyclotomic numbers, which are certain pairs of solutions $(a,b)_{e}$ of order $e$ over a finite field $\mathbb{F}_{q}$ with characteristic $p$. The strategy employs cyclotomic matrices of order $2l^{2}$, whose entries are cyclotomic numbers of order $2l^{2}$, with $l$ prime. The public key is generated by choosing a particular generator $\gamma^{\prime}$ of $\mathbb{F}_{p}^{*}$. The private key is obtained via the discrete logarithm problem (DLP) over the finite field $\mathbb{F}_{p}$.
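
For readers unfamiliar with cyclotomic numbers, a small computation helps. Classically, $(a,b)_{e}$ counts the elements $x$ lying in cyclotomic class $C_a$ whose successor $x+1$ lies in class $C_b$. The sketch below tabulates the resulting matrix over a toy field; the paper's actual key generation with matrices of order $2l^{2}$ is more involved:

```python
p, e = 13, 2   # p = e*f + 1 with f = 6
gamma = 2      # a generator of F_13^*

# Discrete logs: dlog[x] = k with gamma^k = x (mod p)
dlog = {pow(gamma, k, p): k for k in range(p - 1)}

def cyclotomic_number(a, b):
    """Count x in F_p^* with x in class C_a and x+1 in class C_b,
    where membership in C_i means dlog(x) = i (mod e)."""
    return sum(
        1
        for x in range(1, p)
        if (x + 1) % p != 0
        and dlog[x] % e == a
        and dlog[(x + 1) % p] % e == b
    )

matrix = [[cyclotomic_number(a, b) for b in range(e)] for a in range(e)]
print(matrix)  # the 2x2 cyclotomic matrix of order 2 over F_13
```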

Date: 17 Jun 2019



Improving Black-box Adversarial Attacks with a Transfer-based Prior

Authors: Shuyu Cheng, Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu

Abstract: We consider the black-box adversarial setting, where the adversary must generate adversarial perturbations without access to the target models to compute gradients. Previous methods approximate the gradient either using the transferred gradient of a surrogate white-box model or from query feedback. However, these methods often suffer from low attack success rates or poor query efficiency, since it is non-trivial to estimate the gradient in a high-dimensional space with limited information. To address these problems, we propose a prior-guided random gradient-free (P-RGF) method to improve black-box adversarial attacks, which takes advantage of a transfer-based prior and query information simultaneously. The transfer-based prior, given by the gradient of a surrogate model, is integrated into our algorithm via an optimal coefficient derived from a theoretical analysis. Extensive experiments demonstrate that our method requires far fewer queries to attack black-box models, with higher success rates, than the alternative state-of-the-art methods.
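
A sketch of a prior-guided estimator in the spirit of P-RGF: random finite-difference directions are biased toward a normalized transfer gradient from a surrogate model. The fixed mixing weight `lam` below stands in for the paper's theoretically derived optimal coefficient:

```python
import numpy as np

def prgf_gradient(loss_fn, x, prior, n_queries=50, sigma=1e-4, lam=0.5):
    """Estimate grad loss_fn(x) from queries, biasing samples toward `prior`
    (e.g. the gradient of a surrogate white-box model)."""
    prior = prior / np.linalg.norm(prior)
    grad_est = np.zeros_like(x)
    f0 = loss_fn(x)
    for _ in range(n_queries):
        u = np.random.randn(*x.shape)
        u = u / np.linalg.norm(u)
        u = np.sqrt(lam) * prior + np.sqrt(1 - lam) * u  # mix prior and random direction
        u = u / np.linalg.norm(u)
        # One-sided finite difference: each query costs one model evaluation.
        grad_est += (loss_fn(x + sigma * u) - f0) / sigma * u
    return grad_est / n_queries
```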

Date: 17 Jun 2019


