PapersCut: A shortcut to recent security papers

ZLeaks: Passive Inference Attacks on Zigbee based Smart Homes

Authors: Narmeen Shafqat, Daniel J. Dubois, David Choffnes, Aaron Schulman, Dinesh Bharadia, Aanjhan Ranganathan

Abstract: In this work, we analyze the privacy guarantees of the Zigbee protocol, an energy-efficient wireless IoT protocol that is increasingly being deployed in smart home settings. Specifically, we devise two passive inference techniques to demonstrate how a passive eavesdropper, located outside the smart home, can reliably identify in-home devices or events from encrypted wireless Zigbee traffic by 1) inferring a single application layer (APL) command in the event's traffic burst, and 2) exploiting the device's periodic reporting pattern and interval. This enables an attacker to infer a user's habits or determine whether the smart home is vulnerable to unauthorized entry. We evaluated our techniques on 19 unique Zigbee devices across several categories and 5 popular smart hubs in three different scenarios: i) a controlled shield, ii) a living smart-home IoT lab, and iii) third-party Zigbee captures. Our results indicate over 85% accuracy in determining events and devices using the command inference approach, without the need for a priori device signatures, and 99.8% accuracy in determining known devices using the periodic reporting approach. In addition, we identified APL commands in a third-party capture file with 90.6% accuracy. Through this work, we highlight the trade-off between designing a low-power, low-cost wireless network and achieving privacy guarantees.
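The periodic-reporting technique boils down to matching packet inter-arrival times against known per-device reporting intervals. Below is a minimal sketch of that idea in Python; the signature table, tolerance, and example data are invented for illustration and are not the paper's measured values.

```python
import numpy as np

# Hypothetical signature table: device -> expected reporting interval (s).
# Values are illustrative, not the paper's measured signatures.
SIGNATURES = {"motion_sensor": 60.0, "door_lock": 300.0, "smart_bulb": 3600.0}

def infer_device(timestamps, tolerance=0.05):
    """Match packet inter-arrival times against known periodic reporting
    intervals; return the best-matching device, or None."""
    gaps = np.diff(np.sort(np.asarray(timestamps, dtype=float)))
    if gaps.size == 0:
        return None
    period = np.median(gaps)  # median is robust to occasional event bursts
    for device, interval in SIGNATURES.items():
        if abs(period - interval) / interval <= tolerance:
            return device
    return None

# Packets arriving roughly every 60 s match the motion-sensor entry.
times = np.arange(0, 600, 60) + np.random.normal(0, 0.5, 10)
print(infer_device(times))
```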

Date: 22 Jul 2021

PDF » Main page »


Differentially Private Algorithms for 2020 Census Detailed DHC Race & Ethnicity

Authors: Sam Haney, William Sexton, Ashwin Machanavajjhala, Michael Hay, Gerome Miklau

Abstract: This article describes proposed differentially private (DP) algorithms that the US Census Bureau is considering for releasing the Detailed Demographic and Housing Characteristics (DHC) Race & Ethnicity tabulations as part of the 2020 Census. The tabulations contain statistics (counts) of demographic and housing characteristics of the entire population of the US, crossed with detailed races and tribes at varying levels of geography. We describe two differentially private algorithmic strategies: one based on adding noise drawn from a two-sided geometric distribution, which satisfies "pure" DP, and another based on adding noise from a discrete Gaussian distribution, which satisfies a well-studied variant of differential privacy called zero-concentrated differential privacy (zCDP). We analytically estimate the privacy-loss parameters ensured by the two algorithms for comparable levels of error introduced in the statistics.
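For intuition, here is a minimal sketch of the first strategy: perturbing a count with two-sided geometric noise, sampled via the standard construction as the difference of two i.i.d. geometric variables. The epsilon value and the example count are placeholders, not the Bureau's parameters.

```python
import numpy as np

def two_sided_geometric(eps, sensitivity=1.0, rng=None):
    """Noise with pmf proportional to alpha**|z|, alpha = exp(-eps/sensitivity);
    adding it to a count of sensitivity 1 satisfies eps-DP. Sampled as the
    difference of two i.i.d. geometric random variables."""
    rng = rng or np.random.default_rng()
    p = 1.0 - np.exp(-eps / sensitivity)
    return int(rng.geometric(p) - rng.geometric(p))

true_count = 1234  # e.g., one detailed race-by-geography cell (made up)
print(true_count + two_sided_geometric(eps=0.5))
```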

Comment: Presented at Theory and Practice of Differential Privacy Workshop (TPDP) 2021

Date: 22 Jul 2021

PDF » Main page »


Towards Explaining Adversarial Examples Phenomenon in Artificial Neural Networks

Authors: Ramin Barati, Reza Safabakhsh, Mohammad Rahmati

Abstract: In this paper, we study the existence of adversarial examples and adversarial training from the standpoint of convergence, and provide evidence that pointwise convergence in ANNs can explain these observations. The main contribution of our proposal is that it relates the objectives of evasion attacks and adversarial training to concepts already defined in learning theory. We also extend and unify some of the other proposals in the literature and provide alternative explanations for the observations made in those proposals. Through different experiments, we demonstrate that the framework is valuable for studying the phenomenon and is applicable to real-world problems.

Comment: submitted to 25th International Conference on Pattern Recognition (ICPR)

Date: 22 Jul 2021

PDF » Main page »


Always on Voting: A Framework for Repetitive Voting on the Blockchain

Authors: Sarad Venugopalan, Ivan Homoliak

Abstract: Elections are commonly repeated at long, fixed intervals, ranging from months to years. This limits governance, since elected candidates or policies are difficult to remove before the next election even when they are deemed detrimental to the majority of participants. When new information becomes available, participants may decide (through public deliberation) to amend their choice but have no opportunity to change their vote before the next election. Another issue is the peak-end effect, where voters' judgment is based on how they felt shortly before the election instead of on the whole period of governance. Finally, there exist a few issues related to centralized e-voting, such as censorship and tampering with the results and data. To address these issues, we propose Always on Voting (AoV) -- a repetitive blockchain-based voting framework that allows participants to continuously vote and change elected candidates or policies without having to wait for the next election. Participants are permitted to privately change their vote at any point in time, while the effect of the change is manifested at the end of each epoch, whose duration is shorter than the time between two main elections. To thwart the peak-end effect within epochs, the ends of epochs are randomized and made unpredictable. While several blockchain-based e-voting proposals have already been presented, to the best of our knowledge, none of them addresses the issues of re-voting and the peak-end effect.
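One way to make epoch boundaries unpredictable is to derive an end-of-epoch coin flip from on-chain randomness, so that each block closes the epoch with a small probability and epochs still have the desired expected length. The sketch below illustrates that idea only; AoV's concrete randomization mechanism may differ, and expected_len is an invented parameter.

```python
import hashlib

def epoch_ends(block_hash: bytes, expected_len: int = 100) -> bool:
    """Close the current epoch with probability ~1/expected_len, derived
    deterministically from the block hash, so epoch boundaries are
    unpredictable in advance but average out to the target duration."""
    digest = hashlib.sha256(block_hash).digest()
    draw = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return draw < 1.0 / expected_len

# Example: scan a chain of (hypothetical) block hashes for epoch boundaries.
boundaries = [i for i in range(1000) if epoch_ends(i.to_bytes(4, "big"))]
print(len(boundaries))  # roughly 1000 / expected_len
```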

Date: 22 Jul 2021

PDF » Main page »


Improving the Authentication with Built-in Camera Protocol Using Built-in Motion Sensors: A Deep Learning Solution

Authors: Cezara Benegui, Radu Tudor Ionescu

Abstract: We propose an enhanced version of the Authentication with Built-in Camera (ABC) protocol by employing a deep learning solution based on built-in motion sensors. The standard ABC protocol identifies mobile devices based on the photo-response non-uniformity (PRNU) of the camera sensor, while also considering QR-code-based meta-information. During authentication, the user is required to take two photos that contain two QR codes presented on a screen. The presented QR code images also contain a unique probe signal, similar to a camera fingerprint, generated by the protocol. During verification, the server computes the fingerprint of the received photos and authenticates the user if (i) the probe signal is present, (ii) the metadata embedded in the QR codes is correct and (iii) the camera fingerprint is identified correctly. However, the protocol is vulnerable to forgery attacks when the attacker can compute the camera fingerprint from external photos, as shown in our preliminary work. In this context, we propose an enhancement for the ABC protocol based on motion sensor data, as an additional and passive authentication layer. Smartphones can be identified through their motion sensor data, which, unlike photos, is never posted by users on social media platforms, making it more secure than relying on photographs alone. To this end, we transform motion signals into embedding vectors produced by deep neural networks and apply Support Vector Machines to the smartphone identification task. Our change to the ABC protocol results in a multi-modal protocol that lowers the false acceptance rate for the attack proposed in our previous work to as low as 0.07%.
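The added layer reduces to a standard pipeline: embed motion signals with a deep network, then train a classifier to accept only the enrolled device. A minimal sketch with placeholder data follows; the 128-dimensional embeddings, the synthetic separation between devices, and the SVM kernel are assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Placeholder 128-d embeddings standing in for deep-network outputs computed
# from accelerometer/gyroscope signals; dimensions and data are invented.
enrolled = rng.normal(0.0, 1.0, (200, 128))   # the legitimate smartphone
others   = rng.normal(0.7, 1.0, (200, 128))   # other devices / attackers

X = np.vstack([enrolled, others])
y = np.array([1] * 200 + [0] * 200)
clf = SVC(kernel="rbf").fit(X, y)             # accept only the enrolled device

probe = rng.normal(0.0, 1.0, (1, 128))        # fresh sample from the real phone
print(clf.predict(probe))                     # expected: [1]
```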

Date: 22 Jul 2021

PDF » Main page »


CGuard: Efficient Spatial Safety for C

Authors: Piyus Kedia, Rahul Purandare, Udit Kumar Agarwal, Rishabh

Abstract: Spatial safety violations are the root cause of many security attacks and of unexpected application behavior. Existing techniques to enforce spatial safety work broadly at either object or pointer granularity. Object-based approaches tend to incur high CPU overheads, whereas pointer-based approaches incur both high CPU and memory overheads. SGXBounds, an object-based approach, is so far the most efficient technique that provides complete out-of-bounds protection for objects. However, a major drawback of this approach is that it restricts the application address space to 4GB. In this paper, we present CGuard, a tool that provides object-bounds protection for C applications with overheads comparable to SGXBounds without restricting the application address space. CGuard stores the bounds information just before the base address of an object and encodes the relative offset of the base address in the spare bits of the virtual address available in the x86_64 architecture. For an object whose relative offset can't fit in the spare bits, CGuard uses a custom memory layout that enables it to find the base address of the object in just one memory access. Our study revealed spatial safety violations in the gcc and x264 benchmarks from the SPEC CPU2017 benchmark suite and in the string_match benchmark from the Phoenix benchmark suite. The execution time overheads for the SPEC CPU2017 and Phoenix benchmark suites were 44% and 25%, respectively, whereas the reduction in throughput for the Apache webserver when the CPUs were fully saturated was 30%. These results indicate that CGuard can be highly effective while maintaining a reasonable degree of efficiency.
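To see how spare-bit tagging can work, here is a Python sketch of the arithmetic: the offset from a pointer to its object's base is stashed in the upper 16 bits of a 48-bit x86_64 virtual address, so the base (and the bounds metadata stored just before it) can be recovered without any table lookup. The exact bit layout and granularity below are assumptions, not CGuard's actual encoding.

```python
SPARE_SHIFT = 48                       # x86_64 virtual addresses use 48 bits,
SPARE_MASK = 0xFFFF << SPARE_SHIFT     # leaving the top 16 bits spare

def tag_pointer(ptr: int, offset_to_base: int) -> int:
    """Stash the pointer's distance to its object's base in the spare bits
    (only for offsets that fit; the bit layout here is an assumption)."""
    assert 0 <= offset_to_base < (1 << 16)
    return (ptr & ~SPARE_MASK) | (offset_to_base << SPARE_SHIFT)

def base_of(tagged: int) -> int:
    """Recover the object base; bounds metadata sits just before it."""
    offset = (tagged & SPARE_MASK) >> SPARE_SHIFT
    return (tagged & ~SPARE_MASK) - offset

p = tag_pointer(0x7F0000001040, offset_to_base=0x40)
assert base_of(p) == 0x7F0000001000    # one mask-and-subtract, no table lookup
```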

Date: 22 Jul 2021

PDF » Main page »


Unsupervised Detection of Adversarial Examples with Model Explanations

Authors: Gihyuk Ko, Gyumin Lim

Abstract: Deep Neural Networks (DNNs) have shown remarkable performance in a diverse range of machine learning applications. However, it is widely known that DNNs are vulnerable to simple adversarial perturbations, which cause the model to incorrectly classify inputs. In this paper, we propose a simple yet effective method to detect adversarial examples using methods developed to explain the model's behavior. Our key observation is that adding small perturbations imperceptible to humans can lead to drastic changes in the model's explanations, resulting in unusual or irregular forms of explanations. From this insight, we propose unsupervised detection of adversarial examples using reconstructor networks trained only on model explanations of benign examples. Our evaluations with the MNIST handwritten digit dataset show that our method is capable of detecting adversarial examples generated by state-of-the-art algorithms with high confidence. To the best of our knowledge, this work is the first to suggest an unsupervised defense method using model explanations.
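The detection recipe is: fit a reconstructor to benign explanations only, then flag inputs whose explanations reconstruct poorly. The minimal sketch below uses PCA as a lightweight stand-in for the paper's reconstructor network, with random vectors in place of real saliency maps; the 99th-percentile threshold is likewise an assumption.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Random vectors standing in for flattened saliency-map explanations (28x28).
benign_expl = rng.normal(0.0, 1.0, (500, 784))

# Train the "reconstructor" on benign explanations only.
recon = PCA(n_components=32).fit(benign_expl)

def anomaly_score(expl):
    back = recon.inverse_transform(recon.transform(expl))
    return np.linalg.norm(expl - back, axis=1)    # reconstruction error

tau = np.percentile(anomaly_score(benign_expl), 99)  # assumed threshold
irregular = rng.normal(0.0, 2.0, (1, 784))           # "unusual" explanation
print(anomaly_score(irregular) > tau)                # expected: [ True]
```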

Comment: AdvML@KDD'21

Date: 22 Jul 2021

PDF » Main page »


Improving Blockchain Consistency by Assigning Weights to Random Blocks

Authors: Qing Zhang, Xueping Gong, Huizhong Li, Hao Wu, Jiheng Zhang

Abstract: Blockchains based on the celebrated Nakamoto consensus protocol have shown promise in several applications, including cryptocurrencies. However, these blockchains have inherent scalability limits caused by the protocol's consensus properties. In particular, the consistency property exhibits a tight trade-off between block production speed and the system's security in terms of resisting adversarial attacks. This paper proposes a novel method, Ironclad, that improves blockchain consistency by assigning different weights to randomly selected blocks. We analyze the fundamental properties of our method and show that combining it with Nakamoto consensus protocols can lead to significant improvements in consistency. A direct result is that Nakamoto+Ironclad can enable a much faster (10 to 50 times under normal parameter settings) block production rate than the Nakamoto protocol under the same security guarantee with the same proportion of malicious mining power.
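The core mechanic can be pictured as a fork-choice rule that compares accumulated weight instead of chain length, with a random minority of blocks drawing a larger weight from their own hashes. A rough sketch follows, with the heavy-block probability and weight values chosen arbitrarily rather than taken from the paper's analysis.

```python
import hashlib

def block_weight(block_hash: bytes, heavy_prob: float = 0.1,
                 heavy_weight: int = 10) -> int:
    """Derive a weight from the block's own hash: a small random fraction
    of blocks count as 'heavy'. Probability and weights are illustrative,
    not the distribution analyzed in the paper."""
    draw = int.from_bytes(hashlib.sha256(block_hash).digest()[:8], "big")
    return heavy_weight if draw / 2**64 < heavy_prob else 1

def chain_weight(block_hashes) -> int:
    """Fork choice compares accumulated weight rather than raw length."""
    return sum(block_weight(h) for h in block_hashes)

# A shorter chain can win if it happens to contain heavier blocks.
a = [bytes([i]) for i in range(10)]
b = [bytes([i]) for i in range(100, 108)]
print(chain_weight(a), chain_weight(b))
```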

Date: 22 Jul 2021

PDF » Main page »


Ready for Emerging Threats to Recommender Systems? A Graph Convolution-based Generative Shilling Attack

Authors: Fan Wu, Min Gao, Junliang Yu, Zongwei Wang, Kecheng Liu, Xu Wang

Abstract: To explore the robustness of recommender systems, researchers have proposed various shilling attack models and analyzed their adverse effects. Primitive attacks are highly feasible but less effective due to simplistic handcrafted rules, while upgraded attacks are more powerful but costly and difficult to deploy because they require more knowledge of the recommender system. In this paper, we explore a novel shilling attack called Graph cOnvolution-based generative shilling ATtack (GOAT) to balance the attacks' feasibility and effectiveness. GOAT adopts the primitive attacks' paradigm of assigning items to fake users by sampling, and the upgraded attacks' paradigm of generating fake ratings with a deep learning-based model. It deploys a generative adversarial network (GAN) that learns the real rating distribution to generate fake ratings. Additionally, the generator incorporates a tailored graph convolution structure that leverages the correlations between co-rated items to smooth the fake ratings and enhance their authenticity. Extensive experiments on two public datasets evaluate GOAT's performance from multiple perspectives. Our study of GOAT demonstrates the technical feasibility of building a more powerful and intelligent attack model at much-reduced cost, enables analysis of the threat of such attacks, and provides guidance for investigating necessary prevention measures.
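At its core, the rating-generation stage is a standard GAN: a generator maps noise to plausible rating rows, and a discriminator learns to tell them apart from real ones. The sketch below shows that loop on synthetic data, omitting GOAT's distinctive graph-convolution smoothing and item-sampling steps; all sizes and hyperparameters are invented.

```python
import torch
import torch.nn as nn

n_items, z_dim, batch = 1000, 32, 64
G = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(),
                  nn.Linear(128, n_items), nn.Sigmoid())  # fake rating rows in [0, 1]
D = nn.Sequential(nn.Linear(n_items, 128), nn.ReLU(),
                  nn.Linear(128, 1))                      # real-vs-fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(batch, n_items)  # stand-in for real (normalized) rating rows

for step in range(200):
    # Discriminator: push real rows toward 1, generated rows toward 0.
    fake = G(torch.randn(batch, z_dim)).detach()
    loss_d = bce(D(real), torch.ones(batch, 1)) + bce(D(fake), torch.zeros(batch, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: fool the discriminator into outputting 1.
    fake = G(torch.randn(batch, z_dim))
    loss_g = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```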

Comment: 16 pages, 21 figures, Information Sciences - Journal - Elsevier

Date: 22 Jul 2021

PDF » Main page »


Spinning Sequence-to-Sequence Models with Meta-Backdoors

Authors: Eugene Bagdasaryan, Vitaly Shmatikov

Abstract: We investigate a new threat to neural sequence-to-sequence (seq2seq) models: training-time attacks that cause models to "spin" their output and support a certain sentiment when the input contains adversary-chosen trigger words. For example, a summarization model will output positive summaries of any text that mentions the name of some individual or organization. We introduce the concept of a "meta-backdoor" to explain model-spinning attacks. These attacks produce models whose output is valid and preserves context, yet also satisfies a meta-task chosen by the adversary (e.g., positive sentiment). Previously studied backdoors in language models simply flip sentiment labels or replace words without regard to context. Their outputs are incorrect on inputs with the trigger. Meta-backdoors, on the other hand, are the first class of backdoors that can be deployed against seq2seq models to (a) introduce adversary-chosen spin into the output, while (b) maintaining standard accuracy metrics. To demonstrate feasibility of model spinning, we develop a new backdooring technique. It stacks the adversarial meta-task (e.g., sentiment analysis) onto a seq2seq model, backpropagates the desired meta-task output (e.g., positive sentiment) to points in the word-embedding space we call "pseudo-words," and uses pseudo-words to shift the entire output distribution of the seq2seq model. Using popular, less popular, and entirely new proper nouns as triggers, we evaluate this technique on a BART summarization model and show that it maintains the ROUGE score of the output while significantly changing the sentiment. We explain why model spinning can be a dangerous technique in AI-powered disinformation and discuss how to mitigate these attacks.

Date: 22 Jul 2021

PDF » Main page »

