PapersCut: A shortcut to recent security papers

Active Fuzzing for Testing and Securing Cyber-Physical Systems

Authors: Yuqi Chen, Bohan Xuan, Christopher M. Poskitt, Jun Sun, Fan Zhang

Abstract: Cyber-physical systems (CPSs) in critical infrastructure face a pervasive threat from attackers, motivating research into a variety of countermeasures for securing them. Assessing the effectiveness of these countermeasures is challenging, however, as realistic benchmarks of attacks are difficult to manually construct, blind testing is ineffective due to the enormous search spaces and resource requirements, and intelligent fuzzing approaches require impractical amounts of data and network access. In this work, we propose active fuzzing, an automatic approach for finding test suites of packet-level CPS network attacks, targeting scenarios in which attackers can observe sensors and manipulate packets, but have no existing knowledge about the payload encodings. Our approach learns regression models for predicting sensor values that will result from sampled network packets, and uses these predictions to guide a search for payload manipulations (i.e. bit flips) most likely to drive the CPS into an unsafe state. Key to our solution is the use of online active learning, which iteratively updates the models by sampling payloads that are estimated to maximally improve them. We evaluate the efficacy of active fuzzing by implementing it for a water purification plant testbed, finding it can automatically discover a test suite of flow, pressure, and over/underflow attacks, all with substantially less time, data, and network access than the most comparable approach. Finally, we demonstrate that our prediction models can also be utilised as countermeasures themselves, implementing them as anomaly detectors and early warning systems.
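
To make the core loop concrete, here is a minimal sketch, assuming a hypothetical `plant_response` stand-in for the real testbed and a plain linear regression model; the paper's active-learning criterion (sampling payloads that maximally improve the model) is simplified to probing around the current best attack candidate, so this illustrates the idea rather than the authors' implementation.

```python
# Illustrative sketch only: learn a regression model from payload bits to a
# predicted sensor value, use it to pick bit flips that push the prediction
# into an unsafe region, and keep updating the model with new observations.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
N_BITS, UNSAFE = 32, 0.7                       # hypothetical payload width and threshold

def plant_response(bits):
    # Stand-in for observing the real CPS sensor after sending a packet.
    secret_w = np.linspace(-1, 1, N_BITS)      # unknown payload encoding
    return float(bits @ secret_w / N_BITS + 0.5)

# Seed the model with a handful of randomly sampled payloads.
X = rng.integers(0, 2, size=(40, N_BITS)).astype(float)
y = np.array([plant_response(x) for x in X])
model = LinearRegression().fit(X, y)

for step in range(15):
    # Craft the payload the current model predicts to be most dangerous.
    attack = (model.coef_ > 0).astype(float)
    observed = plant_response(attack)
    if observed > UNSAFE:
        print(f"step {step}: payload drives sensor to {observed:.2f} (unsafe)")
        break
    # Otherwise probe single-bit flips of the candidate and refit (crude
    # stand-in for the paper's active-learning update).
    probes = attack[None, :].repeat(10, axis=0)
    flips = rng.integers(0, N_BITS, size=10)
    probes[np.arange(10), flips] = 1 - probes[np.arange(10), flips]
    X = np.vstack([X, probes, attack[None, :]])
    y = np.concatenate([y, [plant_response(p) for p in probes], [observed]])
    model.fit(X, y)
```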

Comment: Accepted by the ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA 2020)

Date: 28 May 2020


Deceptive Deletions for Protecting Withdrawn Posts on Social Platforms

Authors: Mohsen Minaei, S Chandra Mouli, Mainack Mondal, Bruno Ribeiro, Aniket Kate

Abstract: Over-sharing poorly-worded thoughts and personal information is prevalent on online social platforms. In many of these cases, users regret posting such content. To retrospectively rectify these errors in users' sharing decisions, most platforms offer (deletion) mechanisms to withdraw the content, and social media users often utilize them. Ironically and perhaps unfortunately, these deletions make users more susceptible to privacy violations by malicious actors who specifically hunt post deletions at large scale. The reason for such hunting is simple: deleting a post acts as a powerful signal that the post might be damaging to its owner. Today, multiple archival services are already scanning social media for these deleted posts. Moreover, as we demonstrate in this work, powerful machine learning models can detect damaging deletions at scale. Towards restraining such a global adversary against users' right to be forgotten, we introduce Deceptive Deletion, a decoy mechanism that minimizes the adversarial advantage. Our mechanism injects decoy deletions, hence creating a two-player minmax game between an adversary that seeks to classify damaging content among the deleted posts and a challenger that employs decoy deletions to mask real damaging deletions. We formalize the Deceptive Game between the two players, determine conditions under which either the adversary or the challenger provably wins the game, and discuss the scenarios in between these two extremes. We apply the Deceptive Deletion mechanism to a real-world task on Twitter: hiding damaging tweet deletions. We show that a powerful global adversary can be beaten by a powerful challenger, raising the bar significantly and giving a glimmer of hope in the ability to be truly forgotten on social platforms.
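
A toy sketch of the decoy intuition, using synthetic features and a scikit-learn classifier standing in for the paper's adversary (none of this is the paper's Twitter pipeline): the challenger floods the deletion stream with benign posts that the adversary itself scores as damaging-looking, which drags down the adversary's precision.

```python
# Toy sketch of the decoy-deletion idea with synthetic, hypothetical data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_posts(n, damaging):
    # One synthetic feature vector per post; damaging posts have a shifted mean.
    shift = 1.0 if damaging else 0.0
    return rng.normal(loc=shift, size=(n, 5))

# Adversary trains on historical labelled deletions.
X_train = np.vstack([make_posts(500, True), make_posts(500, False)])
y_train = np.array([1] * 500 + [0] * 500)
adversary = LogisticRegression().fit(X_train, y_train)

# A new batch of real damaging deletions and a pool of benign candidate decoys.
real_deletions = make_posts(100, True)
benign_pool = make_posts(2000, False)

# Challenger: also delete the benign posts the adversary is most confident
# about, so the deletion stream is flooded with convincing decoys.
scores = adversary.predict_proba(benign_pool)[:, 1]
decoys = benign_pool[np.argsort(scores)[-300:]]

stream = np.vstack([real_deletions, decoys])
flagged = adversary.predict(stream)
precision = flagged[:100].sum() / max(flagged.sum(), 1)
print(f"adversary precision on the decoyed deletion stream: {precision:.2f}")
```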

Date: 28 May 2020


Blockchain is Watching You: Profiling and Deanonymizing Ethereum Users

Authors: Ferenc Béres, István András Seres, András A. Benczúr, Mikerah Quintyne-Collins

Abstract: Ethereum is the largest public blockchain by usage. It applies an account-based model, which is inferior to Bitcoin's unspent transaction output model from a privacy perspective. As the account-based model forces address reuse, we show how quasi-identifiers of users such as time-of-day activity, transaction fees, and transaction graph structure can be used to reveal some account owners. To the best of our knowledge, we are the first to propose and implement Ethereum user profiling techniques based on such quasi-identifiers. Due to the privacy shortcomings of the account-based model, several privacy-enhancing overlays have recently been deployed on Ethereum, such as non-custodial, trustless coin mixers and confidential transactions. We assess the strengths and weaknesses of the existing privacy-enhancing solutions and quantitatively evaluate the privacy guarantees of the Ethereum blockchain and the Ethereum Name Service (ENS). We identify several heuristics as well as profiling and deanonymization techniques against some popular and emerging privacy-enhancing tools.
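
One of these quasi-identifiers, time-of-day activity, is easy to picture with a small sketch (hypothetical addresses and timestamps, and an arbitrary linking threshold; this is not the paper's tooling):

```python
# Build a time-of-day activity profile per address and link addresses whose
# profiles are unusually similar, one of the quasi-identifiers discussed above.
import numpy as np
from collections import defaultdict

# (address, unix_timestamp) pairs, e.g. taken from ordinary transactions.
transactions = [("0xaaa", 1590652800), ("0xaaa", 1590656400),
                ("0xbbb", 1590681600),
                ("0xccc", 1590652900), ("0xccc", 1590656500)]

profiles = defaultdict(lambda: np.zeros(24))
for addr, ts in transactions:
    profiles[addr][(ts // 3600) % 24] += 1          # hour-of-day histogram

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

addrs = list(profiles)
for i in range(len(addrs)):
    for j in range(i + 1, len(addrs)):
        sim = cosine(profiles[addrs[i]], profiles[addrs[j]])
        if sim > 0.9:                               # arbitrary linking threshold
            print(f"{addrs[i]} and {addrs[j]} share a time-of-day profile ({sim:.2f})")
```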

Comment: 18 pages

Date: 28 May 2020


Mitigating TLS compromise with ECDHE and SRP

Authors: Aron Wussler

Abstract: The paper reviews an implementation of an additional encrypted tunnel within TLS to further secure and authenticate the traffic of personal information between ProtonMail's frontends and the backend, covering its key exchange, symmetric packet encryption, and validation. Technologies such as Secure Remote Password (SRP) and the Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) exchange are used for the key exchange, with the public parameters verified through PGP signatures. The data is then transferred encrypted with AES-128-GCM. This project is meant to complement TLS security for high-security data transfer, offering a flexible model that is easy to implement in the frontends by reusing parts of the standards already present in the PGP libraries.
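
The building blocks named here can be sketched with the Python cryptography package; this is a minimal illustration that assumes X25519 as the ECDHE curve and omits both SRP authentication and the PGP signature check on the public parameters, so it is not ProtonMail's implementation:

```python
# ECDHE key agreement followed by AES-128-GCM packet encryption inside an
# already-established TLS channel (illustration only).
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Ephemeral ECDH on Curve25519 (one possible ECDHE instantiation).
client_priv, server_priv = X25519PrivateKey.generate(), X25519PrivateKey.generate()
shared_client = client_priv.exchange(server_priv.public_key())
shared_server = server_priv.exchange(client_priv.public_key())
assert shared_client == shared_server

# Derive a 128-bit session key for AES-128-GCM.
key = HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
           info=b"inner-tunnel demo").derive(shared_client)

# Encrypt one "packet" of personal data.
aead = AESGCM(key)
nonce = os.urandom(12)
packet = aead.encrypt(nonce, b"personal data", b"packet-header-aad")
print(aead.decrypt(nonce, packet, b"packet-header-aad"))
```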

Comment: 8 pages, 8 figures, analysis of a real implementation

Date: 28 May 2020


Flushgeist: Cache Leaks from Beyond the Flush

Authors: Pepe Vila, Andreas Abel, Marco Guarnieri, Boris Köpf, Jan Reineke

Abstract: Flushing the cache, using instructions like clflush and wbinvd, is commonly proposed as a countermeasure against access-based cache attacks. In this report, we show that several Intel caches, specifically the L1 caches in some pre-Skylake processors and the L2 caches in some post-Broadwell processors, leak information even after being flushed through clflush and wbinvd instructions. That is, security-critical assumptions about the behavior of clflush and wbinvd instructions are incorrect, and countermeasures that rely on them should be revised.

Comment: 6 pages, 4 figures

Date: 28 May 2020


Detection of Lying Electrical Vehicles in Charging Coordination Application Using Deep Learning

Authors: Ahmed Shafee, Mostafa M. Fouda, Mohamed Mahmoud, Waleed Alasmary, Abdulah J. Aljohani, Fathi Amsaad

Abstract: The simultaneous charging of many electric vehicles (EVs) stresses the distribution system and may cause grid instability in severe cases. The best way to avoid this problem is charging coordination. The idea is that the EVs should report data (such as the state-of-charge (SoC) of the battery) to run a mechanism that prioritizes the charging requests, selects the EVs that should charge during the current time slot, and defers other requests to future time slots. However, EVs may lie and send false data to receive high charging priority illegally. In this paper, we first study this attack to evaluate the gains of the lying EVs and how their behavior impacts the honest EVs and the performance of the charging coordination mechanism. Our evaluations indicate that lying EVs have a greater chance of being charged compared to honest EVs and that they degrade the performance of the charging coordination mechanism. Then, an anomaly-based detector using deep neural networks (DNNs) is devised to identify the lying EVs. To do that, we first create an honest dataset for the charging coordination application using real driving traces and information revealed by EV manufacturers, and then propose a number of attacks to create malicious data. We trained and evaluated two models on this dataset, a multi-layer perceptron (MLP) and a gated recurrent unit (GRU), with the GRU detector giving better results. Our evaluations indicate that our detector can detect lying EVs with high accuracy and a low false-positive rate.
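
As a rough sketch of the detection setup (synthetic state-of-charge traces and a hypothetical feature layout, not the paper's real driving-trace dataset), an MLP classifier can be wired up as follows; the paper additionally evaluates a GRU, which performed better:

```python
# Classify a window of reported SoC values as honest or lying (toy data).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
WINDOW = 24  # hypothetical: one SoC report per hour for a day

def honest_soc(n):
    # SoC drifts downward with driving, starting from a high charge level.
    return np.clip(np.cumsum(rng.normal(-2, 1, size=(n, WINDOW)), axis=1) + 90, 5, 100)

def lying_soc(n):
    # Attack: under-report SoC to gain charging priority.
    return np.clip(honest_soc(n) - rng.uniform(20, 40, size=(n, 1)), 0, 100)

X = np.vstack([honest_soc(1000), lying_soc(1000)])
y = np.array([0] * 1000 + [1] * 1000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```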

Date: 28 May 2020


A Technical Look At The Indian Personal Data Protection Bill

Authors: Ram Govind Singh, Sushmita Ruj

Abstract: The Indian Personal Data Protection Bill 2019 provides a legal framework for protecting personal data. It is modeled after the European Union's General Data Protection Regulation (GDPR). We present a detailed description of the Bill, its differences from the GDPR, and the challenges and limitations in implementing it. We look at the technical aspects of the Bill and suggest ways to address its different clauses, mostly exploring cryptographic solutions for implementing them. There are two broad outcomes of this study. Firstly, we show that a better technical understanding of privacy is important to clearly define the clauses of the Bill. Secondly, we show how technical and legal solutions can be used together to enforce the Bill.

Date: 28 May 2020


No-Go Theorems for Data Privacy

Authors: Thomas Studer

Abstract: Controlled query evaluation (CQE) is an approach to guaranteeing data privacy for database and knowledge base systems. CQE systems feature a censor function that may distort the answer to a query in order to hide sensitive information. We introduce a high-level formalization of controlled query evaluation and define several desirable properties of CQE systems. Finally, we establish two no-go theorems, which show that certain combinations of these properties cannot be obtained.
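
For readers unfamiliar with the setting, here is a deliberately tiny illustration of what a censor function does (hypothetical facts and policy, not the paper's formalism); the no-go theorems concern which correctness and privacy properties such a censor can satisfy at the same time:

```python
# A toy CQE censor: distort or refuse answers that would reveal a secret.
DATABASE = {"salary(alice)": 90000, "salary(bob)": 40000}
SECRETS = {"salary(alice)"}

def censor(query):
    if query in SECRETS:
        return "refused"              # or a deliberately distorted answer
    return DATABASE.get(query, "unknown")

print(censor("salary(bob)"))          # 40000
print(censor("salary(alice)"))        # refused
```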

Date: 28 May 2020


Efficient Privacy-Preserving Electricity Theft Detection with Dynamic Billing and Load Monitoring for AMI Networks

Authors: Mohamed I. Ibrahem, Mahmoud Nabil, Mostafa M. Fouda, Mohamed Mahmoud, Waleed Alasmary, Fawaz Alsolami

Abstract: In advanced metering infrastructure (AMI), smart meters (SMs) are installed at the consumer side to send fine-grained power consumption readings periodically to the system operator (SO) for load monitoring, energy management, billing, etc. However, fraudulent consumers launch electricity theft cyber-attacks by reporting false readings to reduce their bills illegally. These attacks not only cause financial losses but may also degrade the grid performance because the readings are used for grid management. To identify these attackers, the existing schemes employ machine-learning models using the consumers' fine-grained readings, which violates the consumers' privacy by revealing their lifestyle. In this paper, we propose an efficient scheme that enables the SO to detect electricity theft, compute bills, and monitor load while preserving the consumers' privacy. The idea is that SMs encrypt their readings using functional encryption, and the SO uses the ciphertexts to (i) compute the bills following a dynamic pricing approach, (ii) monitor the grid load, and (iii) evaluate a machine-learning model to detect fraudulent consumers, without being able to learn the individual readings, thus preserving consumers' privacy. We adapted a functional encryption scheme so that the encrypted readings are aggregated for billing and load monitoring and only the aggregated value is revealed to the SO. Also, we exploited the inner-product operations on encrypted readings to evaluate a machine-learning model to detect fraudulent consumers. A real dataset is used to evaluate our scheme, and our evaluations indicate that it is secure and can detect fraudulent consumers accurately with low communication and computation overhead.
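
The functionalities exposed to the SO are all linear in the readings, which is why inner-product functional encryption fits; a plaintext sketch of exactly what the SO is allowed to learn (hypothetical tariff and model weights, with the encryption layer omitted) looks like this:

```python
# What the SO learns: aggregates and inner products, never individual readings.
import numpy as np

readings = np.array([1.2, 0.8, 2.5, 3.1, 0.4])        # one consumer's kWh per slot
tariff   = np.array([0.10, 0.10, 0.25, 0.25, 0.10])   # dynamic price per slot
weights  = np.array([0.3, -0.1, 0.8, 0.7, -0.2])      # hypothetical detector weights

bill = readings @ tariff            # dynamic billing as an inner product
ml_score = readings @ weights       # one linear layer of the theft-detection model

# Load monitoring: aggregate the same slot across consumers.
all_consumers = np.vstack([readings, [0.9, 1.1, 2.0, 2.2, 0.7]])
grid_load = all_consumers.sum(axis=0)

print(bill, ml_score, grid_load)
```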

Comment: 14 pages, 6 figures

Date: 28 May 2020


Assessing Centrality Without Knowing Connections

Authors: Leyla Roohi, Benjamin I. P. Rubinstein, Vanessa Teague

Abstract: We consider the privacy-preserving computation of node influence in distributed social networks, as measured by egocentric betweenness centrality (EBC). Motivated by modern communication networks spanning multiple providers, we show for the first time how multiple mutually-distrusting parties can successfully compute node EBC while revealing only differentially-private information about their internal network connections. A theoretical utility analysis upper bounds a primary source of private EBC error (private release of ego networks) with high probability. Empirical results demonstrate practical applicability with a low 1.07 relative error achievable at strong privacy budget $\epsilon=0.1$ on a Facebook graph, and insignificant performance degradation as the number of network provider parties grows.
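
To pin down the quantity being protected, here is a single-party sketch using networkx: the egocentric betweenness of a node is its betweenness within its own ego network, and a Laplace-noised release is the differential-privacy mechanism in its simplest form. The sensitivity value below is an assumption for illustration only, and the paper's multi-party protocol is far more involved.

```python
# Egocentric betweenness centrality with a simple Laplace-noised release.
import numpy as np
import networkx as nx

G = nx.karate_club_graph()                           # stand-in social graph
ego = 0

ego_net = nx.ego_graph(G, ego)                       # node, neighbours, their edges
ebc = nx.betweenness_centrality(ego_net, normalized=False)[ego]

epsilon, sensitivity = 0.1, 1.0                      # assumed sensitivity, for illustration
noisy_ebc = ebc + np.random.default_rng(3).laplace(scale=sensitivity / epsilon)
print(f"EBC={ebc:.1f}, private estimate={noisy_ebc:.1f}")
```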

Comment: Full report of paper appearing in PAKDD2020

Date: 28 May 2020

