PapersCut: A shortcut to recent security papers

Testing Robustness Against Unforeseen Adversaries

Authors: Daniel Kang, Yi Sun, Dan Hendrycks, Tom Brown, Jacob Steinhardt

Abstract: Considerable work on adversarial defense has studied robustness to a fixed, known family of adversarial distortions, most frequently L_p-bounded distortions. In reality, the specific form of attack will rarely be known and adversaries are free to employ distortions outside of any fixed set. The present work advocates measuring robustness against this much broader range of unforeseen attacks---attacks whose precise form is not known when designing a defense. We propose a methodology for evaluating a defense against a diverse range of distortion types together with a summary metric UAR that measures the Unforeseen Attack Robustness against a distortion. We construct novel JPEG, Fog, Gabor, and Snow adversarial attacks to simulate unforeseen adversaries and perform a careful study of adversarial robustness against these and existing distortion types. We find that evaluation against existing L_p attacks yields highly correlated information that may not generalize to other attacks and identify a set of 4 attacks that yields more diverse information. We further find that adversarial training against either one or multiple distortions, including our novel ones, does not confer robustness to unforeseen distortions. These results underscore the need to study robustness against unforeseen distortions and provide a starting point for doing so.
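
For readers who want to see the shape of the summary metric: below is a minimal sketch of a UAR-style score, assuming the normalization the abstract implies, i.e., the defense's accuracy across a range of distortion sizes divided by the accuracy of models adversarially trained against that same distortion. The accuracy values and the distortion-size grid are illustrative, not from the paper.

```python
# Hypothetical sketch: a UAR-style summary score for a defense against one
# distortion type, normalized by accuracies of models adversarially trained
# against that distortion (values and the epsilon grid are illustrative).

def uar(defense_acc, adv_trained_acc):
    """defense_acc[i]: accuracy of the evaluated defense under the attack at
    distortion size eps_i; adv_trained_acc[i]: accuracy of a model
    adversarially trained against this attack at eps_i (the reference)."""
    assert len(defense_acc) == len(adv_trained_acc)
    return 100.0 * sum(defense_acc) / sum(adv_trained_acc)

# Example: a defense evaluated at six distortion sizes of a JPEG attack.
print(uar([0.82, 0.74, 0.61, 0.45, 0.30, 0.18],
          [0.85, 0.80, 0.72, 0.63, 0.51, 0.40]))
```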

Date: 21 Aug 2019

PDF »Main page »


A Multi-level Clustering Approach for Anonymizing Large-Scale Physical Activity Data

Authors: Pooja Parameshwarappa, Zhiyuan Chen, Gunes Koru

Abstract: Publishing physical activity data can facilitate reproducible health-care research in several areas such as population health management, behavioral health research, and management of chronic health problems. However, publishing such data also brings high privacy risks related to re-identification, which makes anonymization necessary. One of the challenges in anonymizing physical activity data collected periodically is its sequential nature. Existing anonymization techniques work well for cross-sectional data but have high computational costs when applied directly to sequential data. This paper presents an effective anonymization approach, multi-level clustering-based anonymization, for physical activity data. Compared with conventional methods, the proposed approach improves time complexity by reducing the clustering time drastically. While doing so, it preserves utility as well as the conventional approaches do.
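
The abstract does not spell out the algorithm, but the general idea of multi-level clustering for anonymization can be sketched as follows; the two-level k-means structure, the parameter names, and the centroid generalization are assumptions for illustration, not the authors' exact method.

```python
# Illustrative sketch (not the authors' exact algorithm): two-level clustering
# for k-anonymity. Records are first grouped coarsely, then each coarse group
# is clustered again; every final cluster (of roughly k or more records) is
# replaced by its centroid, so the expensive clustering step runs on small
# subsets instead of the whole sequential dataset.
import numpy as np
from sklearn.cluster import KMeans

def multilevel_anonymize(X, k=5, coarse_clusters=10):
    anonymized = np.empty_like(X, dtype=float)
    coarse = KMeans(n_clusters=coarse_clusters, n_init=10).fit_predict(X)
    for c in np.unique(coarse):
        idx = np.where(coarse == c)[0]
        n_fine = max(1, len(idx) // k)          # aim for >= k records per cluster
        fine = KMeans(n_clusters=n_fine, n_init=10).fit_predict(X[idx])
        for f in np.unique(fine):
            members = idx[fine == f]
            anonymized[members] = X[members].mean(axis=0)  # generalize to centroid
    return anonymized
```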

Date: 21 Aug 2019

PDF »Main page »


Assessing the Impact of a User-Item Collaborative Attack on Class of Users

Authors: Yashar Deldjoo, Tommaso Di Noia, Felice Antonio Merra

Abstract: Collaborative Filtering (CF) models lie at the core of most recommendation systems due to their state-of-the-art accuracy. They are commonly adopted in e-commerce and online services for their impact on sales volume and/or diversity, and on companies' outcomes. However, CF models are only as good as the interaction data they work with. As these models rely on outside sources of information, attackers can inject counterfeit data such as user ratings or reviews to manipulate the underlying data and alter the impact of the resulting recommendations, implementing a so-called shilling attack. While previous works have focused on evaluating shilling attack strategies from a global perspective, paying particular attention to the effect of the size of attacks and the attacker's knowledge, in this work we explore the effectiveness of shilling attacks under novel aspects. First, we investigate the effect of attack strategies crafted on a target user in order to push the recommendation of a low-ranking item to a higher position, referred to as a user-item attack. Second, we evaluate the effectiveness of attacks in altering the impact of different CF models by considering the class of the target user, from the perspective of the richness of her profile (i.e., cold vs. warm user). Finally, similar to previous work, we consider the size of the attack (i.e., the number of fake profiles injected) in examining their success. The results of experiments on two widely used datasets in the business and movie domains, namely Yelp and MovieLens, suggest that warm and cold users exhibit contrasting behaviors in datasets with different characteristics.
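
As a rough illustration of the push-style injection the abstract describes, the sketch below builds fake profiles that give the target item the maximum rating and camouflage themselves with near-average ratings on filler items; the names and structure are hypothetical, not the paper's code.

```python
# Illustrative sketch of a push-style shilling attack: each injected profile
# gives the target item the maximum rating and rates a random set of filler
# items around the dataset average so the fake users blend in with genuine ones.
import random

def make_fake_profiles(n_profiles, target_item, item_means, n_fillers=30,
                       r_max=5.0):
    profiles = []
    items = [i for i in item_means if i != target_item]
    for _ in range(n_profiles):
        fillers = random.sample(items, n_fillers)
        ratings = {i: round(item_means[i]) for i in fillers}  # camouflage ratings
        ratings[target_item] = r_max                          # push the target
        profiles.append(ratings)
    return profiles
```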

Comment: 5 pages, The 1st Workshop on the Impact of Recommender Systems at ACM RecSys 2019

Date: 21 Aug 2019

PDF »Main page »


Evaluating Defensive Distillation For Defending Text Processing Neural Networks Against Adversarial Examples

Authors: Marcus Soll, Tobias Hinz, Sven Magg, Stefan Wermter

Abstract: Adversarial examples are artificially modified input samples that lead to misclassifications while the modifications remain imperceptible to humans. These adversarial examples are a challenge for many tasks such as image and text classification, especially as research shows that many adversarial examples are transferable between different classifiers. In this work, we evaluate the performance of a popular defensive strategy for adversarial examples called defensive distillation, which can be successful in hardening neural networks against adversarial examples in the image domain. However, instead of applying defensive distillation to networks for image classification, we examine, for the first time, its performance on text classification tasks and also evaluate its effect on the transferability of adversarial text examples. Our results indicate that defensive distillation has only a minimal impact on text-classifying neural networks: it neither helps increase their robustness against adversarial examples nor prevents the transferability of adversarial examples between neural networks.
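
Defensive distillation itself is a standard recipe (train a teacher at a high softmax temperature, then train the student on the teacher's soft labels at the same temperature); the minimal PyTorch-style sketch below shows that recipe, with the model objects and data left as placeholders rather than the authors' text-classification setup.

```python
# Minimal sketch of defensive distillation (the defense evaluated in the
# paper); `teacher`, `student`, `x`, and `optimizer` are placeholders.
import torch
import torch.nn.functional as F

def distillation_step(teacher, student, x, optimizer, T=20.0):
    # Teacher produces "soft" labels at high temperature T.
    with torch.no_grad():
        soft_targets = F.softmax(teacher(x) / T, dim=-1)
    # Student is trained (also at temperature T) to match the soft labels.
    student_log_probs = F.log_softmax(student(x) / T, dim=-1)
    loss = F.kl_div(student_log_probs, soft_targets, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# At inference the student runs at T = 1, which saturates its softmax and
# is intended to make gradient-based attacks harder.
```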

Comment: Published at the International Conference on Artificial Neural Networks (ICANN) 2019

Date: 21 Aug 2019

PDF »Main page »


Detecting Fraudulent Accounts on Blockchain: A Supervised Approach

Authors: Michal Ostapowicz, Kamil Żbikowski

Abstract: Applications of blockchain technologies have received a lot of attention in recent years. They extend beyond exchanging value and serving as a substitute for fiat money and the traditional banking system. Nevertheless, being able to exchange value on a blockchain is at the core of the entire system and has to be reliable. Blockchains have built-in mechanisms that guarantee the whole system's consistency and reliability. However, malicious actors can still try to steal money by applying well-known techniques such as malware or fake emails. In this paper we apply supervised learning techniques to detect fraudulent accounts on the Ethereum blockchain. We compare the capabilities of Random Forest, Support Vector Machine and XGBoost classifiers to identify such accounts based on a dataset of more than 300 thousand accounts. Results show that we are able to achieve recall and precision values allowing the designed system to be applied as an anti-fraud rule for digital wallets or currency exchanges. We also present a sensitivity analysis showing how the presented models depend on particular features and how the lack of some of them affects overall system performance.
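
A minimal sketch of the kind of supervised pipeline being compared, using a Random Forest (one of the three classifiers mentioned) on hypothetical per-account features; it is not the authors' code.

```python
# Illustrative sketch: train a fraud classifier on per-account features and
# report the precision/recall the abstract uses as its success criteria.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

# X: one row per account (e.g. transaction counts, mean value sent/received,
#    account lifetime); y: 1 for accounts labelled fraudulent, 0 otherwise.
def train_and_eval(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              stratify=y, random_state=0)
    clf = RandomForestClassifier(n_estimators=300, class_weight="balanced")
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    return precision_score(y_te, pred), recall_score(y_te, pred)
```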

Date: 21 Aug 2019

PDF »Main page »


Generalization in Generative Adversarial Networks: A Novel Perspective from Privacy Protection

Authors: Bingzhe Wu, Shiwan Zhao, Haoyang Xu, ChaoChao Chen, Li Wang, Xiaolu Zhang, Guangyu Sun, Jun Zhou

Abstract: In this paper, we aim to understand the generalization properties of generative adversarial networks (GANs) from the new perspective of privacy protection. Theoretically, we prove that a GAN trained with a differentially private learning algorithm overfits only to a bounded degree, i.e., its generalization gap can be bounded. Moreover, some recent works, such as the Bayesian GAN, can be re-interpreted based on our theoretical insight from privacy protection. Quantitatively, to evaluate the information leakage of well-trained GAN models, we perform various membership attacks on these models. The results show that previous Lipschitz regularization techniques are effective in not only reducing the generalization gap but also alleviating the information leakage of the training dataset.
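
One common recipe for the membership attacks mentioned in the abstract is to threshold the discriminator's realness score, since an overfit GAN tends to score training members higher; the sketch below shows that recipe under those assumptions and is not necessarily the exact attack used in the paper.

```python
# Hedged sketch of a discriminator-score membership inference attack on a
# trained GAN (one common recipe, not necessarily the paper's attacks).
import numpy as np

def membership_attack(discriminator_score, candidates, threshold):
    """discriminator_score: callable mapping a sample to a realness score."""
    scores = np.array([discriminator_score(x) for x in candidates])
    return scores > threshold   # True -> predicted training-set member

# Attack quality is usually reported as advantage over random guessing; it
# shrinks as the generalization gap (and hence the leakage) shrinks.
```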

Date: 21 Aug 2019

PDF »Main page »


A Novel Privacy-Preserving Deep Learning Scheme without Using Cryptography Component

Authors: Chin-Yu Sun, Allen C. -H. Wu, TingTing Hwang

Abstract: Recently, deep learning, which uses Deep Neural Networks (DNNs), has come to play an important role in many fields. A secure neural network model with a secure training/inference scheme is indispensable to many applications. Accomplishing such a task usually requires one of the entities (the customer or the service provider) to provide private information (the customer's data or the model) to the other. Without a secure scheme and mutual trust between the service providers and their customers, this is practically impossible. In this paper, we propose a novel privacy-preserving deep learning model and a secure training/inference scheme to protect the input, the output, and the model in neural network applications. We utilize the innate properties of a deep neural network to design a secure mechanism without using any complicated cryptography component. The security analysis shows that our proposed scheme is secure, and the experimental results also demonstrate that our method is very efficient and suitable for real applications.

Date: 21 Aug 2019

PDF »Main page »


Denoising and Verification Cross-Layer Ensemble Against Black-box Adversarial Attacks

Authors: Ka-Ho Chow, Wenqi Wei, Yanzhao Wu, Ling Liu

Abstract: Deep neural networks (DNNs) have demonstrated impressive performance on many challenging machine learning tasks. However, DNNs are vulnerable to adversarial inputs generated by adding maliciously crafted perturbations to benign inputs. As a growing number of attacks have been reported to generate adversarial inputs of varying sophistication, the defense-attack arms race has accelerated. In this paper, we present MODEF, a cross-layer model diversity ensemble framework. MODEF intelligently combines an unsupervised model-denoising ensemble with a supervised model-verification ensemble by quantifying model diversity, aiming to boost the robustness of the target model against adversarial examples. Evaluated using eleven representative attacks on popular benchmark datasets, we show that MODEF achieves remarkable defense success rates compared with existing defense methods and provides a superior capability of repairing adversarial inputs and making correct predictions with high accuracy in the presence of black-box attacks.
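
A rough sketch of what a cross-layer denoise-then-verify ensemble can look like; the structure, function names, and flagging rule below are assumptions made for illustration, not the MODEF implementation.

```python
# Illustrative sketch in the spirit of a denoise-then-verify ensemble:
# several denoisers repair the input, several verification models vote on
# the label, and inputs on which the ensemble disagrees with the target
# model are flagged as likely adversarial.
from collections import Counter

def ensemble_defend(x, target_model, denoisers, verifiers):
    votes = []
    for denoise in denoisers:
        x_clean = denoise(x)
        votes.extend(v(x_clean) for v in verifiers)   # each returns a label
    ensemble_label, _ = Counter(votes).most_common(1)[0]
    flagged = ensemble_label != target_model(x)       # disagreement -> suspicious
    return ensemble_label, flagged
```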

Date: 21 Aug 2019

PDF »Main page »


AdaCliP: Adaptive Clipping for Private SGD

Authors: Venkatadheeraj Pichapati, Ananda Theertha Suresh, Felix X. Yu, Sashank J. Reddi, Sanjiv Kumar

Abstract: Privacy preserving machine learning algorithms are crucial for learning models over user data to protect sensitive information. Motivated by this, differentially private stochastic gradient descent (SGD) algorithms for training machine learning models have been proposed. At each step, these algorithms modify the gradients and add noise proportional to the sensitivity of the modified gradients. Under this framework, we propose AdaCliP, a theoretically motivated differentially private SGD algorithm that provably adds less noise compared to the previous methods, by using coordinate-wise adaptive clipping of the gradient. We empirically demonstrate that AdaCliP reduces the amount of added noise and produces models with better accuracy.
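
A hedged sketch of coordinate-wise adaptive clipping in the spirit of AdaCliP: the gradient is whitened per coordinate, clipped to unit norm, noised, and mapped back. The update rules for the running statistics and the constants below are illustrative approximations, not the paper's exact derivation.

```python
# Hedged sketch of coordinate-wise adaptive clipping for private SGD.
# m and s are running per-coordinate location/scale estimates; treating
# them and the constants as illustrative, noise added in the whitened
# space maps back with scale s, so small-scale coordinates receive less noise.
import numpy as np

def adaclip_step(grad, m, s, noise_multiplier, beta=0.9, eps=1e-8):
    b = (grad - m) / (s + eps)                 # per-coordinate whitening
    b = b / max(1.0, np.linalg.norm(b))        # clip transformed gradient to norm 1
    b = b + noise_multiplier * np.random.randn(*b.shape)
    private_grad = b * (s + eps) + m           # map back to the original scale
    # Update running per-coordinate statistics from the noisy gradient.
    m = beta * m + (1 - beta) * private_grad
    s = beta * s + (1 - beta) * np.abs(private_grad - m)
    return private_grad, m, s
```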

Comment: 18 pages

Date: 20 Aug 2019

PDF »Main page »


Realistic versus Rational Secret Sharing

Authors: Yvo Desmedt, Arkadii Slinko

Abstract: The study of Rational Secret Sharing initiated by Halpern and Teague regards the reconstruction of the secret in secret sharing as a game. It was shown that participants (parties) may refuse to reveal their shares, so the reconstruction may fail. Moreover, refusing to reveal one's share may be a dominant strategy for a party. In this paper we consider secret sharing as a sub-action or subgame of a larger action/game where the secret opens a possibility of consumption of a certain common good. We claim that the utilities of participants will depend on the nature of this common good. In particular, the Halpern and Teague scenario corresponds to a rivalrous and excludable common good. We consider the case when this common good is non-rivalrous and non-excludable and find many natural Nash equilibria. We list several applications of secret sharing to demonstrate our claim and give corresponding scenarios. In such circumstances the secret sharing scheme facilitates a power-sharing agreement in the society. We also argue that non-reconstruction may be beneficial for this society and give several examples.
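
As background for the reconstruction subgame the paper analyzes, here is a standard Shamir-style (t, n) sharing and reconstruction sketch (textbook construction, not code from the paper): the secret opens only if at least t parties reveal their shares, which is what makes revealing or withholding a share a strategic choice.

```python
# Minimal Shamir-style (t, n) secret sharing over a prime field, included
# only to ground the reconstruction "subgame" discussed above.
import random

P = 2**127 - 1   # a Mersenne prime large enough for demo secrets

def share(secret, t, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0; with fewer than t shares the result
    # is garbage, which is exactly why withholding a share can be a strategy.
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
print(reconstruct(shares[:3]))   # recovers 123456789 from any 3 shares
```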

Comment: This is a preliminary version of a paper accepted for GameSec 2019

Date: 20 Aug 2019

PDF »Main page »

