PapersCut: A shortcut to recent security papers

Quantum Time-Space Tradeoffs by Recording Queries

Authors: Yassine Hamoudi, Frédéric Magniez

Abstract: We use the recording queries technique of Zhandry [Zha19] to prove lower bounds in the exponentially small success probability regime, with applications to time-space tradeoffs. We first extend the recording technique to the case of non-uniform input distributions and we describe a new simple framework for using it. Then, as an application, we prove strong direct product theorems for $K$-Search under a natural product distribution not considered in previous works, and for finding $K$ distinct collisions in a uniform random function. Finally, we use the latter result to obtain the first quantum time-space tradeoff that is not based on a reduction to $K$-Search. Namely, we demonstrate that any $T$-query algorithm using $S$ qubits of memory must satisfy a tradeoff of $T^3 S \geq \Omega(N^4)$ for finding $\Theta(N)$ collisions in a random function. We conjecture that this result can be improved to $T^2 S \geq \Omega(N^3)$, and we show that it would imply a $T^2 S \geq \tilde{\Omega}(N^2)$ tradeoff for Element Distinctness.
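
To make the stated bounds concrete, here is a hedged bit of algebra on the tradeoff (ours, not the paper's): fixing the memory budget $S$ shows how the query count $T$ must grow.

```latex
% Hedged algebra on the stated bound T^3 S >= Omega(N^4) for finding
% Theta(N) collisions; the two memory regimes below are illustrative.
\[
  T^3 S \ge \Omega(N^4) \;\Longrightarrow\;
  \begin{cases}
    T \ge \tilde{\Omega}(N^{4/3}) & \text{if } S = O(\operatorname{polylog} N),\\
    T \ge \Omega(N)               & \text{if } S = O(N).
  \end{cases}
\]
% The conjectured T^2 S >= Omega(N^3) would sharpen the first case to
% T >= \tilde{\Omega}(N^{3/2}).
```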

Comment: 17 pages

Date: 20 Feb 2020

PDF » Main page »


Modeling the Impact of Network Connectivity on Consensus Security of Proof-of-Work Blockchain

Authors: Yang Xiao, Ning Zhang, Wenjing Lou, Y. Thomas Hou

Abstract: Blockchain, the technology behind the popular cryptocurrency Bitcoin, is considered a "security by design" system, as it is meant to create security among a group of mutually distrustful parties without a central trusted authority. The security of blockchain relies on the honest-majority premise: the system is assumed to be secure as long as the majority of consensus voting power is honest, which for proof-of-work (PoW) blockchains means that adversaries cannot control more than 50% of the network's gross computing power. However, this 50% threshold is based on an analysis of computing power only, with implicit and idealistic assumptions about network and node behavior. Recent research has suggested that factors such as network connectivity, the presence of blockchain forks, and mining strategy could undermine the consensus security assured by the honest majority, but neither concrete analysis nor quantitative evaluation has been provided. In this paper we fill the gap by proposing an analytical model to assess the impact of network connectivity on the consensus security of PoW blockchain under different adversary models. We apply our analytical model to two adversarial scenarios: 1) honest-but-potentially-colluding, and 2) selfish mining. For each scenario, we quantify the communication capability of nodes involved in a fork race, and estimate the adversary's mining revenue and its impact on the security properties of the consensus protocol. Simulation results validate our analysis. Our modeling and analysis provide a paradigm for assessing the security impact of various factors in a distributed consensus system.
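
As a point of reference for the idealized 50% threshold the abstract questions, the sketch below computes the classic gambler's-ruin catch-up probability from the Bitcoin whitepaper. It assumes instant block propagation and no forks, precisely the idealizations the paper's connectivity-aware model relaxes; it is an illustration, not the paper's model.

```python
# Hedged illustration, not the paper's model: the classic Nakamoto
# catch-up probability, which underlies the idealized 50% honest-majority
# threshold discussed in the abstract.
def catch_up_probability(q: float, z: int) -> float:
    """Probability that an attacker with fraction q of total hash power
    ever overtakes an honest chain that is z blocks ahead."""
    p = 1.0 - q                 # honest fraction of hash power
    if q >= p:                  # at or above 50%, catching up is certain
        return 1.0
    return (q / p) ** z         # gambler's-ruin probability

# With 30% adversarial hash power and 6 confirmations:
print(catch_up_probability(0.30, 6))   # ~0.0062
```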

Comment: Accepted by 2020 IEEE International Conference on Computer Communications (INFOCOM 2020)

Date: 20 Feb 2020

PDF » Main page »


A Bayes-Optimal View on Adversarial Examples

Authors: Eitan Richardson, Yair Weiss

Abstract: The ability to fool modern CNN classifiers with tiny perturbations of the input has led to the development of a large number of candidate defenses and often conflicting explanations. In this paper, we argue for examining adversarial examples from the perspective of Bayes-Optimal classification. We construct realistic image datasets for which the Bayes-Optimal classifier can be efficiently computed, and derive analytic conditions on the distributions under which the optimal classifier is either robust or vulnerable. By training different classifiers on these datasets (for which the "gold standard" optimal classifiers are known), we can disentangle the possible sources of vulnerability and avoid the accuracy-robustness tradeoff that may occur in commonly used datasets. Our results show that even when the optimal classifier is robust, standard CNN training consistently learns a vulnerable classifier. At the same time, for exactly the same training data, RBF SVMs consistently learn a robust classifier. The same trend is observed in experiments with real images.
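
As a hedged toy illustration of a setting where the Bayes-Optimal classifier is computable in closed form (the means, variance, and dimension below are ours, not the paper's datasets): for two equal-covariance Gaussian classes, the optimal classifier is linear, and its $l_2$ robustness margin at any point is simply the distance to the decision hyperplane.

```python
import numpy as np

# Hedged toy example (not the paper's dataset): for classes
# N(mu0, sigma^2 I) and N(mu1, sigma^2 I) with equal priors, the
# Bayes-optimal classifier is the linear rule sign(w.x + b) with
# w = mu1 - mu0, and its l2 robustness margin at a point x is
# |w.x + b| / ||w||, the distance to the decision hyperplane.
rng = np.random.default_rng(0)
d, sigma = 10, 0.5
mu0, mu1 = -np.ones(d), np.ones(d)

w = mu1 - mu0
b = -0.5 * (mu1 + mu0) @ w                 # bisecting hyperplane offset

x = mu1 + sigma * rng.standard_normal(d)   # a sample from class 1
score = w @ x + b
margin = abs(score) / np.linalg.norm(w)    # smallest l2 perturbation that flips it
print(f"predicted class: {int(score > 0)}, robustness margin: {margin:.3f}")
```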

Date: 20 Feb 2020

PDF » Main page »


Towards Certifiable Adversarial Sample Detection

Authors: Ilia Shumailov, Yiren Zhao, Robert Mullins, Ross Anderson

Abstract: Convolutional Neural Networks (CNNs) are deployed in more and more classification systems, but adversarial samples can be maliciously crafted to trick them, and are becoming a real threat. There have been various proposals to improve CNNs' adversarial robustness, but these all suffer performance penalties or other limitations. In this paper, we provide a new approach in the form of a certifiable adversarial detection scheme, the Certifiable Taboo Trap (CTT). The system can provide certifiable guarantees of detection of adversarial inputs for certain $l_{\infty}$ perturbation sizes, under a reasonable assumption, namely that the training data have the same distribution as the test data. We develop and evaluate several versions of CTT with a range of defense capabilities, training overheads, and certifiability on adversarial samples. Against adversaries with various $l_p$ norms, CTT outperforms existing defense methods that focus purely on improving network robustness. We show that CTT has small false positive rates on clean test data, minimal compute overheads when deployed, and can support complex security policies.
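
A minimal sketch of the taboo-trap idea as the abstract describes it (the layer choice, threshold, and penalty weight below are illustrative assumptions, not the paper's configuration): train designated activations to stay below a threshold on clean data, then flag any input that trips them at inference.

```python
import torch
import torch.nn as nn

# Hedged sketch of the taboo-trap idea from the abstract; the threshold,
# monitored layer, and penalty weight are illustrative, not the paper's.
TABOO_THRESHOLD = 1.0
PENALTY_WEIGHT = 0.1

class TrapNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(784, 128), nn.ReLU())
        self.head = nn.Linear(128, 10)

    def forward(self, x):
        h = self.body(x)
        return self.head(h), h            # expose monitored activations

def loss_with_taboo(logits, h, y):
    # Standard loss plus a hinge penalty that trains clean-data
    # activations to stay under the taboo threshold.
    taboo = torch.clamp(h - TABOO_THRESHOLD, min=0).mean()
    return nn.functional.cross_entropy(logits, y) + PENALTY_WEIGHT * taboo

def detect(model, x):
    # At deployment, an input whose monitored activations exceed the
    # trained threshold is flagged as potentially adversarial.
    _, h = model(x)
    return bool((h > TABOO_THRESHOLD).any())
```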

Date: 20 Feb 2020

PDF » Main page »


Boosting Adversarial Training with Hypersphere Embedding

Authors: Tianyu Pang, Xiao Yang, Yinpeng Dong, Kun Xu, Hang Su, Jun Zhu

Abstract: Adversarial training (AT) is one of the most effective defenses to improve the adversarial robustness of deep learning models. In order to promote the reliability of the adversarially trained models, we propose to boost AT via incorporating hypersphere embedding (HE), which can regularize the adversarial features onto compact hypersphere manifolds. We formally demonstrate that AT and HE are well coupled, which tunes up the learning dynamics of AT from several aspects. We comprehensively validate the effectiveness and universality of HE by embedding it into the popular AT frameworks including PGD-AT, ALP, and TRADES, as well as the FreeAT and FastAT strategies. In experiments, we evaluate our methods on the CIFAR-10 and ImageNet datasets, and verify that integrating HE can consistently enhance the performance of the models trained by each AT framework with little extra computation.
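
A hedged sketch of what hypersphere embedding amounts to in code (the scale $s$, margin $m$, and margin form are illustrative, not necessarily the paper's exact formulation): replace ordinary linear logits with scaled cosine logits computed from $l_2$-normalized features and class weights, and use these logits both when crafting the adversarial example and in the training loss.

```python
import torch
import torch.nn.functional as F

# Hedged sketch of hypersphere embedding inside adversarial training;
# the scale s, margin m, and additive-cosine margin form are assumptions.
def he_logits(features, weight, y=None, s=15.0, m=0.2):
    f = F.normalize(features, dim=1)      # features onto the unit hypersphere
    w = F.normalize(weight, dim=1)        # class weights onto the hypersphere
    cos = f @ w.t()                       # cosine similarity logits
    if y is not None:                     # subtract a margin on the true class
        cos = cos - m * F.one_hot(y, cos.size(1))
    return s * cos                        # rescale so the softmax is well-conditioned
```

In a PGD-AT loop, one would craft the adversarial example by maximizing cross-entropy on `he_logits(features, W, y)` and then minimize the same loss on the perturbed inputs, so the regularization acts on both the inner and outer optimization.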

Date: 20 Feb 2020

PDF » Main page »


Towards Byzantine-resilient Learning in Decentralized Systems

Authors: Shangwei Guo, Tianwei Zhang, Xiaofei Xie, Lei Ma, Tao Xiang, Yang Liu

Abstract: With the proliferation of IoT and edge computing, decentralized learning is becoming more promising. When designing a distributed learning system, one major challenge to consider is Byzantine Fault Tolerance (BFT). Prior work has studied Byzantine-resilient solutions for centralized distributed learning, but there are currently no satisfactory solutions offering both strong efficiency and security in decentralized systems. In this paper, we propose a novel algorithm, Mozi, to achieve BFT in decentralized learning systems. Specifically, Mozi provides a uniform Byzantine-resilient aggregation rule for benign nodes to select the useful parameter updates and filter out the malicious ones in each training iteration. It guarantees that each benign node in a decentralized system can train a correct model under very strong Byzantine attacks with an arbitrary number of faulty nodes. We perform a theoretical analysis to prove the uniform convergence of our proposed algorithm, and experimental evaluations demonstrate the high security and efficiency of Mozi compared to existing solutions.
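
A hedged sketch of a two-stage Byzantine-resilient aggregation rule of the kind the abstract describes (the selection size and the use of a local loss are illustrative assumptions, not necessarily Mozi's exact rule):

```python
import numpy as np

# Hedged sketch: each benign node keeps the neighbor updates closest to
# its own in Euclidean distance, then keeps those that do not increase
# loss on a local sample, and averages the survivors with its own update.
# Selection sizes and the loss test are assumptions, not Mozi's spec.
def aggregate(own_update, neighbor_updates, local_loss, keep=None):
    candidates = np.asarray(neighbor_updates)
    keep = keep or max(1, len(candidates) // 2)

    # Stage 1: distance-based filtering around the node's own update.
    dists = np.linalg.norm(candidates - own_update, axis=1)
    nearest = candidates[np.argsort(dists)[:keep]]

    # Stage 2: performance-based filtering on a local data sample.
    baseline = local_loss(own_update)
    good = [u for u in nearest if local_loss(u) <= baseline]

    chosen = good if good else [own_update]
    return np.mean(np.vstack([own_update] + chosen), axis=0)
```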

Date: 20 Feb 2020

PDF » Main page »


MEUZZ: Smart Seed Scheduling for Hybrid Fuzzing

Authors: Yaohui Chen, Mansour Ahmadi, Reza Mirzazade farkhani, Boyu Wang, Long Lu

Abstract: Seed scheduling is a prominent factor in determining the yields of hybrid fuzzing. Existing hybrid fuzzers schedule seeds based on fixed heuristics that aim to predict input utilities. However, such heuristics are not generalizable, as no one-size-fits-all rule applies to different programs: they may work well on the programs from which they were derived, but not on others. To overcome this problem, we design a Machine learning-Enhanced hybrid fUZZing system (MEUZZ), which employs supervised machine learning for adaptive and generalizable seed scheduling. MEUZZ determines which new seeds are expected to produce better fuzzing yields based on the knowledge learned from past seed scheduling decisions made on the same or similar programs. MEUZZ's learning is based on a series of features extracted via code reachability and dynamic analysis, which incurs negligible runtime overhead (in microseconds). Moreover, MEUZZ automatically infers the data labels by evaluating the fuzzing performance of each selected seed. As a result, MEUZZ is generally applicable to, and performs well on, various kinds of programs. Our evaluation shows MEUZZ significantly outperforms the state-of-the-art grey-box and hybrid fuzzers, achieving 27.1% more code coverage than QSYM. The learned models are reusable and transferable, boosting fuzzing performance by 7.1% on average and improving 68% of the 56 cross-program fuzzing campaigns. MEUZZ discovered 47 deeply hidden and previously unknown bugs, 21 of which were confirmed and fixed by the developers, when fuzzing 8 well-tested programs with the same configurations as used in previous work.
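
A hedged sketch of learned seed scheduling in the spirit of the abstract (the feature set, labels, and regressor are illustrative stand-ins, not MEUZZ's actual pipeline): fit a regressor on past seeds' features and observed fuzzing yields, then rank new seeds by predicted utility.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hedged sketch; the features and label definition below are hypothetical
# stand-ins for MEUZZ's reachability/dynamic-analysis features and its
# automatically inferred labels.
rng = np.random.default_rng(0)

# Per-seed features, e.g. reachable-branch counts, input size, past
# coverage contribution (all synthetic here).
past_features = rng.random((200, 5))
past_yield = past_features @ np.array([3.0, 1.0, 0.0, 2.0, 0.5]) \
             + 0.1 * rng.standard_normal(200)    # synthetic labels

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(past_features, past_yield)

new_seeds = rng.random((20, 5))
schedule = np.argsort(-model.predict(new_seeds))  # best predicted yield first
print("next seeds to fuzz:", schedule[:5])
```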

Date: 20 Feb 2020

PDF » Main page »


NAttack! Adversarial Attacks to bypass a GAN based classifier trained to detect Network intrusion

Authors: Aritran Piplai, Sai Sree Laya Chukkapalli, Anupam Joshi

Abstract: With the recent developments in artificial intelligence and machine learning, anomalies in network traffic can now be detected using machine learning approaches. Before the rise of machine learning, network anomalies that could imply an attack were detected using well-crafted rules. An attacker with knowledge of cyber-defence could make educated guesses and sometimes accurately predict which particular features of network traffic data the defence mechanism examines; with this information, the attacker can circumvent a rule-based cyber-defence system. After the advent of machine learning for network anomaly detection, however, it is no longer easy for a human to work out how to bypass a cyber-defence system. Recently, adversarial attacks have become an increasingly common way to defeat machine learning algorithms. In this paper, we show that even if we build a classifier and train it with adversarial examples for network data, we can use adversarial attacks to successfully break the system. We propose a Generative Adversarial Network (GAN) based algorithm to generate data to train an efficient neural network based classifier, and we subsequently break the system using adversarial attacks.
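
For concreteness, a hedged sketch of a standard gradient-based evasion attack (FGSM) against a neural network traffic detector; the abstract does not specify which attack the authors use, and the detector below is a stand-in.

```python
import torch
import torch.nn as nn

# Hedged sketch: FGSM, a standard gradient attack, against a stand-in
# flow detector (classes: 0 = benign, 1 = attack). Not necessarily the
# paper's attack or architecture.
detector = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))

def evade(x, eps=0.05):
    x = x.clone().requires_grad_(True)
    benign = torch.zeros(x.size(0), dtype=torch.long)  # target label: benign
    loss = nn.functional.cross_entropy(detector(x), benign)
    loss.backward()
    # Step against the gradient to make "benign" more likely.
    return (x - eps * x.grad.sign()).detach()

flow = torch.rand(1, 20)            # hypothetical flow features in [0, 1]
adversarial_flow = evade(flow)
```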

Comment: 6 pages, 2 figures

Date: 20 Feb 2020

PDF » Main page »


Tricking Johnny into Granting Web Permissions

Authors: Mohammadreza Hazhirpasand, Mohammad Ghafari, Oscar Nierstrasz

Abstract: We studied the web permission API dialog box in popular mobile and desktop browsers, and found that it typically lacks measures to protect users from unwittingly granting web permissions when clicking too fast. We developed a game that exploits this issue and tricks users into granting webcam permission. We conducted three experiments, each with 40 different participants, on both desktop and mobile browsers. The results indicate that, in the absence of a prevention mechanism, we achieved a considerably high success rate, tricking 95% of participants on mobile browsers and 72% on desktop browsers. Interestingly, we also tricked 47% of participants on a desktop browser where a prevention mechanism exists.

Comment: The 24th International Conference on Evaluation and Assessment in Software Engineering (EASE 2020)

Date: 19 Feb 2020

PDF » Main page »


A Recurrent Neural Network Based Patch Recommender for Linux Kernel Bugs

Authors: Anusha Bableshwar, Arun Ravindran, Manoj Iyer

Abstract: Software bugs in a production environment degrade quality of service, cause unplanned system downtime, and disrupt the customer experience, resulting in loss of revenue and reputation. Existing approaches to automated software bug repair focus on known bug templates detected using static code analysis tools and test suites, and on the automatic generation of patch code for these bugs. We describe the typical bug fixing process employed in the Linux kernel, and motivate the need for a new automated tool flow to fix bugs. We present an initial design of such an automated tool that uses Recurrent Neural Network (RNN) based Natural Language Processing to generate patch recommendations from user generated bug reports. At the 50th percentile of the test bugs, the correct patch occurs within the top 11.5 patch recommendations output by the model. Further, we present a Linux kernel developer's assessment of the quality of patches recommended for new unresolved kernel bugs.
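
A hedged sketch of a recommend-by-similarity pipeline consistent with the abstract (the tokenization, dimensions, and ranking rule are illustrative assumptions, not the paper's exact design): a GRU encodes the bug report, candidate patches are encoded the same way, and recommendations are ranked by cosine similarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hedged sketch; vocabulary size, embedding dimension, and the cosine
# ranking are assumptions, not the paper's pipeline.
class ReportEncoder(nn.Module):
    def __init__(self, vocab=10_000, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)

    def forward(self, tokens):                    # tokens: (batch, seq_len)
        _, h = self.rnn(self.embed(tokens))
        return h[-1]                              # final hidden state as embedding

encoder = ReportEncoder()
report = torch.randint(0, 10_000, (1, 50))        # tokenized bug report (dummy)
patches = torch.randint(0, 10_000, (100, 50))     # tokenized candidate patches

scores = F.cosine_similarity(encoder(report), encoder(patches))
ranking = scores.argsort(descending=True)         # top entries are recommended
```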

Date: 19 Feb 2020

PDF » Main page »

