PapersCut: A shortcut to recent security papers

A Large Scale Analysis of Android-Web Hybridization

Authors: Abhishek Tiwari, Jyoti Prakash, Sascha Gross, Christian Hammer

Abstract: Many Android applications embed webpages via WebView components and execute JavaScript code within Android. Hybrid applications leverage dedicated APIs to load a resource and render it in a WebView. Furthermore, Android objects can be shared with the JavaScript world. However, bridging the interfaces of the Android and JavaScript worlds may also introduce severe security threats: potentially untrusted webpages and their JavaScript might interfere with the Android environment and its access to native features. No general analysis is currently available to assess the implications of such hybrid apps bridging the two worlds. To understand the semantics and effects of hybrid apps, we perform a large-scale study on the usage of the hybridization APIs in the wild. We analyze and categorize the parameters to hybridization APIs for 7,500 randomly selected and the 196 most popular applications from the Google Play Store, as well as 1,000 malware samples. Our results advance the general understanding of hybrid applications, the implications for potential program analyses, and the current security situation: we discovered thousands of flows of sensitive data from Android to JavaScript, the vast majority of which could flow to potentially untrustworthy code. Our analysis identified numerous web pages embedding vulnerabilities, which we exploited as examples. Additionally, we discovered, in both benign and malicious applications, a multitude of cases in which potentially untrusted JavaScript code may interfere with (trusted) Android objects.
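As a rough illustration of the kind of API-usage survey the abstract describes, the following Python sketch scans decompiled Java sources for the WebView hybridization APIs (loadUrl and addJavascriptInterface) and buckets the string-literal URL arguments it can resolve by origin. The directory layout, regexes, and categories are illustrative assumptions, not the authors' tooling.

```python
import re
from pathlib import Path
from collections import Counter

# Hybridization APIs of interest; the regexes only catch string-literal
# arguments, which is all a purely lexical scan can resolve.
LOAD_URL = re.compile(r'loadUrl\(\s*"([^"]+)"')
BRIDGE = re.compile(r'addJavascriptInterface\(\s*([\w.]+)\s*,\s*"([^"]+)"')

def categorize(url: str) -> str:
    """Bucket a loadUrl argument by the trust level of its origin."""
    if url.startswith("https://"):
        return "remote-https"
    if url.startswith("http://"):
        return "remote-http (cleartext)"
    if url.startswith(("file://", "content://")):
        return "local-file"
    if url.startswith("javascript:"):
        return "javascript-uri"
    return "other/dynamic"

def scan_sources(root: str):
    urls, bridges = Counter(), Counter()
    for src in Path(root).rglob("*.java"):
        text = src.read_text(errors="ignore")
        for m in LOAD_URL.finditer(text):
            urls[categorize(m.group(1))] += 1
        for m in BRIDGE.finditer(text):
            bridges[m.group(2)] += 1  # JS-visible name of the exposed Android object
    return urls, bridges

if __name__ == "__main__":
    urls, bridges = scan_sources("decompiled_app/")  # hypothetical path
    print("loadUrl targets:", dict(urls))
    print("exposed bridge objects:", dict(bridges))
```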

Date: 4 Aug 2020

PDF » Main page »


Verifying Pufferfish Privacy in Hidden Markov Models

Authors: Depeng Liu, Bow-yaw Wang, Lijun Zhang

Abstract: Pufferfish is a Bayesian privacy framework for designing and analyzing privacy mechanisms. It refines differential privacy, the current gold standard in data privacy, by allowing explicit prior knowledge in privacy analysis. Through these privacy frameworks, a number of privacy mechanisms have been developed in the literature. In practice, privacy mechanisms often need to be modified or adjusted to specific applications. Their privacy risks have to be re-evaluated for different circumstances. Moreover, computing devices only approximate continuous noise through floating-point computation, which is discrete in nature. Privacy proofs can thus be complicated and prone to errors. Such tedious tasks can be burdensome to average data curators. In this paper, we propose an automatic verification technique for Pufferfish privacy. We use hidden Markov models to specify and analyze discretized Pufferfish privacy mechanisms. We show that the Pufferfish verification problem in hidden Markov models is NP-hard. Using Satisfiability Modulo Theories solvers, we propose an algorithm to analyze privacy requirements. We implement our algorithm in a prototypical tool called FAIER, and present several case studies. Surprisingly, our case studies show that naïve discretization of well-established privacy mechanisms often fails, as witnessed by counterexamples generated by FAIER. For the discretized Above Threshold mechanism, we show that it provides absolutely no privacy. Finally, we compare our approach with a testing-based approach on several case studies, and show that our verification technique can be combined with testing for the purpose of (i) efficiently certifying counterexamples and (ii) obtaining a better lower bound for the privacy budget ε.
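The kind of counterexample search described above can be illustrated on a much smaller scale. The sketch below is not FAIER and uses brute-force enumeration instead of hidden Markov models and an SMT solver, but it shows the underlying check: for a naively discretized (truncated and renormalized) geometric mechanism, it compares output probabilities on neighbouring inputs against the e^ε bound and prints any violating output.

```python
import math

def truncated_geometric(k: int, eps: float, n: int):
    """Naively discretized geometric mechanism on {0,...,n}: weights
    exp(-eps*|j-k|), renormalized over the truncated range."""
    w = [math.exp(-eps * abs(j - k)) for j in range(n + 1)]
    s = sum(w)
    return [x / s for x in w]

def check_eps_dp(eps: float, n: int):
    """Report any output whose probability ratio on neighbouring inputs
    exceeds e^eps, i.e. a counterexample to eps-differential privacy."""
    bound = math.exp(eps)
    for k in range(n):  # neighbouring inputs k and k+1
        p = truncated_geometric(k, eps, n)
        q = truncated_geometric(k + 1, eps, n)
        for j in range(n + 1):
            ratio = max(p[j] / q[j], q[j] / p[j])
            if ratio > bound * (1 + 1e-12):
                print(f"violation: inputs ({k},{k+1}), output {j}: "
                      f"ratio {ratio:.4f} > e^eps = {bound:.4f}")

check_eps_dp(eps=math.log(2), n=4)
```

Running it with ε = ln 2 on the domain {0, ..., 4} already reports boundary outputs whose probability ratio exceeds the bound, which is the sort of counterexample an SMT-based tool would return symbolically.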

Date: 4 Aug 2020

PDF » Main page »


Privacy-preserving release of mobility data: a clean-slate approach

Authors: Szilvia Lestyán, Gergely Ács, Gergely Biczók

Abstract: The quantity of mobility data available nowadays is overwhelming, providing tremendous potential for various value-added services. While the benefits of these mobility datasets are apparent, they also pose a significant threat to location privacy. Although a multitude of anonymization schemes have been proposed to release location data, they all suffer from the inherent sparseness and high dimensionality of location trajectories, which render most techniques inapplicable in practice. In this paper, we revisit the problem of releasing location trajectories with strong privacy guarantees. We propose a general approach to synthesize location trajectories while providing differential privacy. We model the generator distribution of the dataset by first constructing a model to generate the source and destination locations of trajectories along with time information, and then compute all transition probabilities between close locations given the destination of the synthetic trajectory. Finally, an optimization algorithm is used to find the most probable trajectory between the given source and destination at a given time using the computed transition probabilities. We exploit several inherent properties of location data to boost the performance of our model, and demonstrate its usability on a public location dataset. We also develop a novel composite of generative neural networks to synthesize location trajectories, which might be of independent interest.
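A minimal sketch of the middle step, under my own simplifying assumptions (a flat grid of cells, Laplace noise on raw transition counts with unit sensitivity, and a greedy path search in place of the paper's optimizer and neural generator):

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_transition_matrix(counts: np.ndarray, eps: float) -> np.ndarray:
    """Add Laplace(1/eps) noise to raw transition counts (unit sensitivity
    is assumed), clip at zero, and renormalize each row."""
    noisy = counts + rng.laplace(scale=1.0 / eps, size=counts.shape)
    noisy = np.clip(noisy, 0, None) + 1e-9  # keep rows normalizable
    return noisy / noisy.sum(axis=1, keepdims=True)

def most_probable_path(P: np.ndarray, src: int, dst: int, max_len: int = 20):
    """Greedy approximation: repeatedly move to the most probable next cell
    until the destination is reached (the paper uses a proper optimizer)."""
    path, cur = [src], src
    for _ in range(max_len):
        if cur == dst:
            break
        cur = int(np.argmax(P[cur]))
        path.append(cur)
    return path

# Toy example over 5 grid cells with made-up transition counts.
counts = rng.integers(0, 50, size=(5, 5)).astype(float)
P = dp_transition_matrix(counts, eps=1.0)
print(most_probable_path(P, src=0, dst=4))
```

Because the noise is added once to the counts, any trajectory synthesis performed on the released matrix is post-processing and consumes no additional privacy budget.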

Date: 4 Aug 2020

PDF » Main page »


DESIRE: A Third Way for a European Exposure Notification System Leveraging the best of centralized and decentralized systems

Authors: Claude Castelluccia, Nataliia Bielova, Antoine Boutet, Mathieu Cunche, Cédric Lauradoux, Daniel Le Métayer, Vincent Roca

Abstract: This document presents an evolution of the ROBERT protocol that decentralizes most of its operations onto the mobile devices. DESIRE is based on the same architecture as ROBERT but implements major privacy improvements. In particular, it introduces the concept of Private Encounter Tokens (PETs), which are secret and cryptographically generated, to encode encounters. In the DESIRE protocol, the temporary identifiers that are broadcast on the Bluetooth interfaces are generated by the mobile devices, giving users more control over which ones to disclose. The role of the server is merely to match PETs generated by diagnosed users with the PETs provided by requesting users. It stores minimal pseudonymous data. Finally, all data stored on the server are encrypted using keys that are stored on the mobile devices, protecting against data breaches on the server. All these modifications improve the privacy of the scheme against malicious users and a malicious authority. However, as in the first version of ROBERT, risk scores and notifications are still managed and controlled by the server of the health authority, which provides high robustness, flexibility, and efficacy.
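A toy sketch of the PET idea, under the assumption that the broadcast temporary identifiers are ephemeral Diffie-Hellman public keys and a PET is a hash of the resulting shared secret. The actual protocol derives a pair of directional tokens per encounter and specifies its own key schedule; this only illustrates why the server can match encounters without learning identities.

```python
from hashlib import sha256
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

# Each device generates an ephemeral key pair and broadcasts the public part
# as its temporary identifier (simplified; the real protocol rotates these).
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()
alice_id = alice_priv.public_key()
bob_id = bob_priv.public_key()

def pet(own_priv, peer_pub) -> str:
    """Private Encounter Token: hash of the Diffie-Hellman shared secret.
    Only the two devices that met can compute it; the server just matches
    opaque tokens uploaded by diagnosed and requesting users."""
    shared = own_priv.exchange(peer_pub)
    return sha256(b"PET" + shared).hexdigest()

# Both sides of an encounter derive the same token independently.
assert pet(alice_priv, bob_id) == pet(bob_priv, alice_id)

# Server-side exposure check: intersect tokens uploaded by diagnosed users
# with the tokens a requesting user provides -- no identities involved.
diagnosed_tokens = {pet(alice_priv, bob_id)}
requesting_tokens = {pet(bob_priv, alice_id)}
print("at risk:", bool(diagnosed_tokens & requesting_tokens))
```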

Date: 4 Aug 2020

PDF » Main page »


A Survey of Distributed Denial of Service Attacks and Defenses

Authors: Rajat Tandon

Abstract: A distributed denial-of-service (DDoS) attack is an attack wherein multiple compromised computer systems flood the bandwidth and/or resources of a target, such as a server, website or other network resource, and cause a denial of service for users of the targeted resource. The flood of incoming messages, connection requests or malformed packets forces the target system to slow down or even crash and shut down, thereby denying service to legitimate users or systems. This paper presents a literature review of DDoS attacks and the common defense mechanisms available. It also reviews defenses against low-rate DDoS attacks, which have hitherto not been handled effectively.

Date: 4 Aug 2020

PDF » Main page »


Identification and Correction of False Data Injection Attacks against AC State Estimation using Deep Learning

Authors: Fayha ALmutairy, Reem Shadid, Safwan Wshah

Abstract: Recent literature has proposed various detection and identification methods for false data injection attacks (FDIAs), but few studies have focused on a solution that would prevent such attacks from occurring. However, great strides have been made using deep learning to detect attacks. Inspired by these advancements, we have developed a new methodology not only for identifying AC FDIAs but, more importantly, for correcting them as well. Our methodology utilizes a Long Short-Term Memory Denoising Autoencoder (LSTM-DAE) to correct attacked estimated states based on the attacked measurements. The method was evaluated using the IEEE 30 system, and the experiments demonstrated that the proposed method was successfully able to identify the corrupted states and correct them with high accuracy.
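A hedged PyTorch sketch of the LSTM denoising autoencoder idea: encode a window of (possibly attacked) measurements and decode corrected state estimates. The layer sizes, sequence length, and the random tensors standing in for (attacked, clean) training pairs are placeholders, not the paper's configuration for the IEEE 30 system.

```python
import torch
import torch.nn as nn

class LSTMDenoisingAE(nn.Module):
    """Sequence-to-sequence denoiser: encode attacked measurements, decode
    corrected state estimates (a sketch of the LSTM-DAE idea, not the
    authors' exact architecture)."""
    def __init__(self, n_meas: int, n_state: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.LSTM(n_meas, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_state)

    def forward(self, x):                      # x: (batch, time, n_meas)
        h, _ = self.encoder(x)
        h, _ = self.decoder(h)
        return self.out(h)                     # (batch, time, n_state)

# Toy training step on random data standing in for (attacked, clean) pairs.
model = LSTMDenoisingAE(n_meas=100, n_state=60)   # placeholder dimensions
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
attacked = torch.randn(8, 32, 100)
clean = torch.randn(8, 32, 60)
loss = nn.functional.mse_loss(model(attacked), clean)
loss.backward()
opt.step()
print(float(loss))
```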

Date: 4 Aug 2020

PDF » Main page »


Framework for a DLT Based COVID-19 Passport

Authors: Sarang Chaudhari, Michael Clear, Hitesh Tewari

Abstract: Uniquely identifying individuals across the various networks they interact with on a daily basis remains a challenge for the digital world that we live in, and therefore the development of secure and efficient privacy-preserving identity mechanisms has become an important field of research. In addition, the popularity of decentralised decision-making networks such as Bitcoin has generated huge interest in making use of distributed ledger technology to store and securely disseminate end-user identity credentials. In this paper we describe a mechanism that allows one to store the COVID-19 vaccination details of individuals on a publicly readable, decentralised, immutable blockchain, and makes use of a two-factor authentication system that employs biometric cryptographic hashing techniques to generate a unique identifier for each user. Our main contribution is the employment of a provably secure input-hiding, locality-sensitive hashing algorithm over an iris extraction technique, which can be used to authenticate users and anonymously locate vaccination records on the blockchain, without leaking any personally identifiable information to the blockchain.
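A toy illustration of the lookup idea, assuming a plain bit-sampling locality-sensitive hash over a binary iris code whose output is folded through SHA-256 to form the ledger key. The paper's input-hiding LSH, iris feature extraction, and two-factor scheme are more involved; this only shows why similar captures can resolve to the same record without the iris code itself appearing on chain.

```python
import hashlib
import random

random.seed(7)
CODE_BITS = 2048                                    # typical iris-code length
SAMPLE = set(random.sample(range(CODE_BITS), 256))  # public LSH bit positions

def lsh_key(iris_code):
    """Bit-sampling LSH: codes that agree on the sampled positions map to the
    same key; SHA-256 hides the sampled bits themselves."""
    sampled = bytes(iris_code[i] for i in sorted(SAMPLE))
    return hashlib.sha256(sampled).hexdigest()

enrolled = [random.randint(0, 1) for _ in range(CODE_BITS)]

# A fresh capture with noise on bits the hash does not sample; real schemes
# use several such hashes (bands) so a noisy capture matches at least one.
fresh = enrolled.copy()
for i in [p for p in range(CODE_BITS) if p not in SAMPLE][:20]:
    fresh[i] ^= 1

ledger = {lsh_key(enrolled): {"record": "vaccination details (encrypted)"}}
print(ledger[lsh_key(fresh)])   # the noisy capture resolves to the same entry
```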

Date: 3 Aug 2020

PDF » Main page »


Demystifying the Role of zk-SNARKs in Zcash

Authors: Aritra Banerjee, Michael Clear, Hitesh Tewari

Abstract: Zero-knowledge proofs have always provided a clear solution when it comes to conveying information from a prover to a verifier, or vice versa, without revealing essential information about the process. Advancements in zero-knowledge have helped develop proofs which are succinct and provide non-interactive arguments of knowledge while maintaining the zero-knowledge criteria. zk-SNARKs (Zero-Knowledge Succinct Non-Interactive Arguments of Knowledge) are one such method that stands out in the advancement of zero-knowledge proofs. The Zcash algorithm delivers a full-fledged ledger-based digital currency with strong privacy guarantees, and ensuring that privacy rests entirely on the construction of a proper zk-SNARK. In this paper we elaborate and construct a concrete zk-SNARK proof from scratch and explain its role in the Zcash algorithm.
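Constructing a zk-SNARK from scratch starts with arithmetizing the statement to be proven. The sketch below shows only that first step, on the standard pedagogical example of proving knowledge of x with x^3 + x + 5 = 35: the statement is flattened into a rank-1 constraint system (R1CS) and the witness vector is checked against it. The QAP transformation, trusted setup, and pairing-based proof that make a real zk-SNARK succinct and zero-knowledge are not shown.

```python
import numpy as np

# Witness layout: s = [1, x, out, sym1, y, sym2] for the flattened statement
#   x * x = sym1;  sym1 * x = y;  y + x = sym2;  sym2 + 5 = out
# i.e. proving knowledge of x with x^3 + x + 5 = out (= 35 for x = 3).
A = np.array([[0, 1, 0, 0, 0, 0],
              [0, 0, 0, 1, 0, 0],
              [0, 1, 0, 0, 1, 0],
              [5, 0, 0, 0, 0, 1]])
B = np.array([[0, 1, 0, 0, 0, 0],
              [0, 1, 0, 0, 0, 0],
              [1, 0, 0, 0, 0, 0],
              [1, 0, 0, 0, 0, 0]])
C = np.array([[0, 0, 0, 1, 0, 0],
              [0, 0, 0, 0, 1, 0],
              [0, 0, 0, 0, 0, 1],
              [0, 0, 1, 0, 0, 0]])

def satisfies_r1cs(s):
    """Check the rank-1 constraint system: (A s) * (B s) == (C s) elementwise."""
    return np.array_equal((A @ s) * (B @ s), C @ s)

x = 3
s = np.array([1, x, x**3 + x + 5, x * x, x**3, x**3 + x])
print(satisfies_r1cs(s))   # True: the witness satisfies every constraint
```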

Date: 3 Aug 2020

PDF » Main page »


Towards a Semantic Model of the GDPR Register of Processing Activities

Authors: Paul Ryan, Harshvardhan J. Pandit, Rob Brennan

Abstract: A core requirement for GDPR compliance is the maintenance of a register of processing activities (ROPA). Our analysis of six ROPA templates from EU data protection regulators shows that the scope and granularity of a ROPA are subject to widely varying guidance in different jurisdictions. We present a consolidated data model based on common concepts and relationships across the analysed templates. We then analyse the extent to which the Data Privacy Vocabulary (DPV) - a vocabulary specification for GDPR - can express this model. We show that the DPV currently does not provide sufficient concepts to represent the ROPA data model and propose an extension to fill this gap. This will enable the creation of a pan-EU information management framework for interoperability between organisations and regulators for GDPR compliance.
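A rough rdflib sketch of what a single machine-readable ROPA entry might look like when expressed with DPV terms. The specific classes and properties (PersonalDataHandling, hasPurpose, hasLegalBasis, and so on) and the example organisation are my assumptions for illustration, not the consolidated model or extension proposed in the paper.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

DPV = Namespace("https://w3id.org/dpv#")
EX = Namespace("https://example.org/ropa#")   # hypothetical organisation data

g = Graph()
g.bind("dpv", DPV)
g.bind("ex", EX)

# One register entry: a processing activity with its controller, purpose,
# data categories, legal basis and recipient, using assumed DPV term names.
entry = EX.NewsletterProcessing
g.add((entry, RDF.type, DPV.PersonalDataHandling))
g.add((entry, DPV.hasDataController, EX.ACMECorp))
g.add((entry, DPV.hasPurpose, DPV.Marketing))
g.add((entry, DPV.hasPersonalDataCategory, DPV.Contact))
g.add((entry, DPV.hasLegalBasis, DPV.Consent))
g.add((entry, DPV.hasRecipient, EX.MailProvider))

print(g.serialize(format="turtle"))
```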

Date: 3 Aug 2020

PDF » Main page »


Certified Randomness From Steering Using Sequential Measurements

Authors: Brian Coyle, Elham Kashefi, Matty Hoban

Abstract: The generation of certifiable randomness is one of the most promising applications of quantum technologies. Furthermore, the intrinsic non-locality of quantum correlations allows us to certify randomness in a device-independent way, i.e. one need not make assumptions about the devices used. Thanks to the work of Curchod et al., a single entangled two-qubit pure state can be used to produce arbitrary amounts of certified randomness. However, obtaining this randomness is experimentally challenging, as it requires a large number of measurements, both projective and general. Motivated by these difficulties in the device-independent setting, we instead consider the scenario of one-sided device independence, where certain devices are trusted and others are not, a scenario motivated by asymmetric experimental set-ups such as ion-photon networks. We show how certain aspects of previous work can be adapted to this scenario and provide theoretical bounds on the amount of randomness which can be certified. Furthermore, we give a protocol for unbounded randomness certification in this scenario, and provide numerical results demonstrating the protocol in the ideal case. Finally, we numerically test the possibility of implementing this scheme on near-term quantum technologies by considering the performance of the protocol on several physical platforms.
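A toy ideal-case calculation in the spirit of the numerical results mentioned above, and not the paper's protocol or its one-sided device-independent bounds: simulate local projective measurements on a maximally entangled two-qubit state and report the min-entropy -log2(max_j p_j) of the joint outcome, a stand-in for the conditional min-entropy from which certified-randomness rates are ultimately derived.

```python
import numpy as np

# Maximally entangled two-qubit state |phi+> = (|00> + |11>)/sqrt(2).
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi)

def projectors(theta):
    """Rank-1 projectors for a qubit measurement in the basis rotated by theta."""
    v0 = np.array([np.cos(theta), np.sin(theta)])
    v1 = np.array([-np.sin(theta), np.cos(theta)])
    return np.outer(v0, v0), np.outer(v1, v1)

def outcome_probs(theta_a, theta_b):
    """Joint outcome probabilities for local projective measurements A x B."""
    probs = []
    for Pa in projectors(theta_a):
        for Pb in projectors(theta_b):
            probs.append(np.trace(rho @ np.kron(Pa, Pb)).real)
    return np.array(probs)

p = outcome_probs(theta_a=0.0, theta_b=np.pi / 8)
h_min = -np.log2(p.max())   # ideal-case min-entropy of the joint outcome
print(p.round(4), f"H_min = {h_min:.3f} bits")
```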

Comment: 35 pages, 9 Figures. This is a pre-published extended version of a workshop edition which appeared in the proceedings of PC 2018 (EPTCS 273, 2018, pp. 14-26). The published version of this work is available below

Date: 3 Aug 2020

PDF » Main page »

