PapersCut: A shortcut to recent security papers

Arxiv

The Bounded Gaussian Mechanism for Differential Privacy

Authors: Bo Chen, Matthew Hale

Abstract: The Gaussian mechanism is one differential privacy mechanism commonly used to protect numerical data. However, it may be ill-suited to some applications because it has unbounded support and thus can produce invalid numerical answers to queries, such as negative ages or human heights in the tens of meters. One can project such private values onto valid ranges of data, though such projections lead to the accumulation of private query responses at the boundaries of such ranges, thereby harming accuracy. Motivated by the need for both privacy and accuracy over bounded domains, we present a bounded Gaussian mechanism for differential privacy, which has support only on a given region. We present both univariate and multivariate versions of this mechanism and illustrate a significant reduction in variance relative to comparable existing work.

Comment: 27 pages, submitted to Journal of Privacy and Confidentiality

Date: 30 Nov 2022
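
The boundary-accumulation problem this abstract motivates is easy to see in simulation: projecting the standard (unbounded) Gaussian mechanism's output onto a valid range piles probability mass onto the endpoints. A minimal sketch, assuming the common analytic sigma calibration (illustrative only; this is the baseline the paper improves on, not the bounded mechanism itself):

```python
import math
import random

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng):
    # Classic calibration: sigma = sqrt(2 ln(1.25/delta)) * sensitivity / epsilon
    sigma = math.sqrt(2 * math.log(1.25 / delta)) * sensitivity / epsilon
    return value + rng.gauss(0.0, sigma)

def projected_gaussian(value, sensitivity, epsilon, delta, lo, hi, rng):
    # Post-processing: clamp the noisy answer into the valid range [lo, hi].
    return min(max(gaussian_mechanism(value, sensitivity, epsilon, delta, rng), lo), hi)

rng = random.Random(0)
# Query: an age with true value 30, sensitivity 1, valid range [0, 120].
samples = [projected_gaussian(30, 1.0, 0.1, 1e-5, 0, 120, rng) for _ in range(10_000)]
at_boundary = sum(s in (0, 120) for s in samples) / len(samples)
print(f"responses pinned to a boundary: {at_boundary:.1%}")
```

With these (tight-privacy) parameters, roughly a third of the clamped responses sit exactly at 0 or 120, which is the accuracy loss the bounded mechanism is designed to avoid.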

Risks to Zero Trust in a Federated Mission Partner Environment

Authors: Keith Strandell, Sudip Mittal

Abstract: Recent cybersecurity events have prompted the federal government to begin investigating strategies to transition to Zero Trust Architectures (ZTA) for federal information systems. Within federated mission networks, ZTA provides measures to minimize the potential for unauthorized release and disclosure of information outside bilateral and multilateral agreements. When federating with mission partners, there are potential risks that may undermine the benefits of Zero Trust. This paper explores risks associated with integrating multiple identity models and proposes two potential avenues to investigate in order to mitigate these risks.

Date: 30 Nov 2022

Differentially Private ADMM-Based Distributed Discrete Optimal Transport for Resource Allocation

Authors: Jason Hughes, Juntao Chen

Abstract: Optimal transport (OT) is a framework that can guide the design of efficient resource allocation strategies in a network of multiple sources and targets. To ease the computational complexity of large-scale transport design, we first develop a distributed algorithm based on the alternating direction method of multipliers (ADMM). However, such a distributed algorithm is vulnerable to sensitive information leakage when an attacker intercepts the transport decisions communicated between nodes during the distributed ADMM updates. To this end, we propose a privacy-preserving distributed mechanism based on output variable perturbation by adding appropriate randomness to each node's decision before it is shared with other corresponding nodes at each update instance. We show that the developed scheme is differentially private, which prevents the adversary from inferring the node's confidential information even knowing the transport decisions. Finally, we corroborate the effectiveness of the devised algorithm through case studies.

Comment: 6 pages, 4 figures, 1 algorithm, IEEE GLOBECOM 2022

Date: 30 Nov 2022
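
Output-variable perturbation itself is simple: each node adds calibrated noise to its decision vector before communicating it. A minimal sketch, assuming Gaussian noise with the standard sigma calibration (the paper's exact distribution and calibration may differ; names are illustrative):

```python
import math
import random

def perturb_decision(decision, sensitivity, epsilon, delta, rng):
    """Output perturbation: add calibrated Gaussian noise to a node's
    transport decision before it is shared with neighboring nodes."""
    sigma = math.sqrt(2 * math.log(1.25 / delta)) * sensitivity / epsilon
    return [x + rng.gauss(0.0, sigma) for x in decision]

rng = random.Random(42)
decision = [0.4, 0.35, 0.25]  # hypothetical transport shares at one node
shared = perturb_decision(decision, 0.05, 1.0, 1e-6, rng)
```

Only `shared` ever leaves the node at each ADMM update, so an eavesdropper on the inter-node links observes noisy decisions rather than the true ones.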

Real time QKD Post Processing based on Reconfigurable Hardware Acceleration

Authors: Foram P Shingala, Natarajan Venkatachalam, Selvagangai C, Hema Priya S, Dillibabu S, Pooja Chandravanshi, Ravindra P. Singh

Abstract: Key distillation is an essential component of every Quantum Key Distribution (QKD) system because it compensates for the inherent transmission errors of the quantum channel. However, the throughput and interoperability aspects of post-processing engine design are often neglected, and existing solutions provide no guarantees on either. In this paper, we propose a high-throughput key distillation framework with multiple-protocol support, implemented on a Field Programmable Gate Array (FPGA) using High-Level Synthesis (HLS). The proposed design uses a Hadoop framework with a map-reduce programming model to efficiently process large chunks of raw data across the limited computing resources of an FPGA. We present a novel hardware-efficient integrated post-processing architecture that offers dynamic error correction, a side-channel-resistant authentication scheme, and an inbuilt high-speed encryption application that uses the key for secure communication. We develop a semi-automated High-Level Synthesis framework capable of handling different QKD protocols with promising speedup. Overall, the experimental results show a significant improvement in performance and compatibility with any discrete-variable QKD system.

Date: 30 Nov 2022

Post-Quantum $κ$-to-1 Trapdoor Claw-free Functions from Extrapolated Dihedral Cosets

Authors: Xingyu Yan, Licheng Wang, Weiqiang Wen, Ziyi Li, Jingwen Suo, Lize Gu

Abstract: Noisy Trapdoor Claw-free Functions (NTCF), as powerful post-quantum cryptographic tools, can efficiently constrain the actions of untrusted quantum devices. Recently, Brakerski et al. at FOCS 2018 showed a remarkable use of NTCF for a classically verifiable proof of quantumness and also derived a protocol for cryptographically certifiable quantum randomness generation. However, the original NTCF used in their work is essentially a 2-to-1 one-way function, namely NTCF$^1_2$, which greatly limits the rate of randomness generation. In this work, we attempt to further extend the NTCF$^1_2$ to achieve a $\kappa$-to-1 function with poly-bounded preimage size. Specifically, we focus on a significant extrapolation of NTCF$^1_2$ by drawing on extrapolated dihedral cosets, giving a model of NTCF$^1_{\kappa}$ with $\kappa = \mathrm{poly}(n)$. Then, we present an efficient construction of NTCF$^1_{\kappa}$ under the well-known quantum hardness of the Learning with Errors (QLWE) assumption. As a byproduct, our work manifests an interesting connection between NTCF$^1_2$ (resp. NTCF$^1_{\kappa}$) and Dihedral Coset States (resp. Extrapolated Dihedral Coset States). Finally, we give a similar interactive protocol for proving quantumness from the NTCF$^1_{\kappa}$.

Date: 30 Nov 2022

ALARM: Active LeArning of Rowhammer Mitigations

Authors: Amir Naseredini, Martin Berger, Matteo Sammartino, Shale Xiong

Abstract: Rowhammer is a serious security problem of contemporary dynamic random-access memory (DRAM) where reads or writes of bits can flip other bits. DRAM manufacturers add mitigations, but do not disclose their details, making it difficult for customers to evaluate their efficacy. We present a tool, based on active learning, that automatically infers the parameters of Rowhammer mitigations against synthetic models of modern DRAM.

Date: 30 Nov 2022

Quantitative Information Flow for Hardware: Advancing the Attack Landscape

Authors: Lennart M. Reimann, Sarp Erdönmez, Dominik Sisejkovic, Rainer Leupers

Abstract: Security still remains an afterthought in modern Electronic Design Automation (EDA) tools, which solely focus on enhancing performance and reducing the chip size. Typically, the security analysis is conducted by hand, leading to vulnerabilities in the design remaining unnoticed. Security-aware EDA tools assist the designer in the identification and removal of security threats while keeping performance and area in mind. State-of-the-art approaches utilize information flow analysis to spot unintended information leakages in design structures. However, the classification of such threats is binary, resulting in negligible leakages being listed as well. A novel quantitative analysis allows the application of a metric to determine a numeric value for a leakage. Nonetheless, current approximations to quantify the leakage are still prone to overlooking leakages. The mathematical model 2D-QModel introduced in this work aims to overcome this shortcoming. Additionally, as previous work only includes a limited threat model, multiple threat models can be applied using the provided approach. Open-source benchmarks are used to show the capabilities of 2D-QModel to identify hardware Trojans in the design while ignoring insignificant leakages.

Comment: 4 pages, accepted at IEEE Latin American Symposium on Circuits and Systems (LASCAS), 2023

Date: 30 Nov 2022
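
The binary-vs-quantitative distinction the abstract draws can be made concrete. For a deterministic leak O = f(S) of a uniformly distributed secret S, the Shannon leakage is I(S; O) = H(O) - H(O|S) = H(O), so a negligible leak gets a small number instead of a binary "leaks/does not leak" verdict. A toy calculator (illustrative only; not 2D-QModel):

```python
import math
from collections import Counter

def shannon_leakage(secrets, leak):
    """Shannon leakage I(S; O) in bits of an observable O = leak(S),
    for a uniformly distributed secret S. Since O is a deterministic
    function of S, H(O | S) = 0 and I(S; O) = H(O)."""
    n = len(secrets)
    obs = Counter(leak(s) for s in secrets)
    return -sum(c / n * math.log2(c / n) for c in obs.values())

secrets = range(16)  # a 4-bit secret
print(shannon_leakage(secrets, lambda s: s & 0b11))  # exposes the low 2 bits: 2.0
print(shannon_leakage(secrets, lambda s: s % 3))     # non-uniform observable: < 2 bits
```

A quantitative tool can then threshold this metric to report only significant leakages, which is the behavior the abstract contrasts with binary information-flow classification.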

Efficient Adversarial Input Generation via Neural Net Patching

Authors: Tooba Khan, Kumar Madhukar, Subodh Vishnu Sharma

Abstract: The adversarial input generation problem has become central in establishing the robustness and trustworthiness of deep neural nets, especially when they are used in safety-critical application domains such as autonomous vehicles and precision medicine. This is also practically challenging for multiple reasons: scalability is a common issue owing to large-sized networks, and the generated adversarial inputs often lack important qualities such as naturalness and output-impartiality. We relate this problem to the task of patching neural nets, i.e., applying small changes to some of the network's weights so that the modified net satisfies a given property. Intuitively, a patch can be used to produce an adversarial input because the effect of changing the weights can also be brought about by changing the inputs instead. This work presents a novel technique to patch neural networks and an innovative approach of using it to produce perturbations of inputs which are adversarial for the original net. We note that the proposed solution is significantly more effective than the prior state-of-the-art techniques.

Date: 30 Nov 2022
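
The weight-patch/input-perturbation duality the abstract relies on is easiest to see for a linear model: a weight change dw shifts the output, and the same shift can be produced by an input change dx solving w(x + dx) = (w + dw)x. A one-dimensional sketch of that intuition (illustrative; not the paper's patching technique):

```python
def patched_output(w, dw, x):
    # Output of the patched model (w + dw) on the original input x.
    return (w + dw) * x

def equivalent_input(w, dw, x):
    # Solve w * (x + dx) == (w + dw) * x  =>  dx = dw * x / w,
    # i.e. the input perturbation that mimics the weight patch.
    return x + dw * x / w

w, dw, x = 2.0, 0.5, 3.0
assert patched_output(w, dw, x) == w * equivalent_input(w, dw, x)
```

For deep nets the correspondence is no longer exact, which is why the paper develops a dedicated technique rather than a closed-form solve; the sketch only captures why a patch suggests an adversarial input at all.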

Unsafe at Any Copy: Name Collisions from Mixing Case Sensitivities

Authors: Aditya Basu, John Sampson, Zhiyun Qian, Trent Jaeger

Abstract: File name confusion attacks, such as malicious symbolic links and file squatting, have long been studied as sources of security vulnerabilities. However, a recently emerged type, i.e., case-sensitivity-induced name collisions, has not been scrutinized. These collisions are introduced by differences in name resolution under case-sensitive and case-insensitive file systems or directories. A prominent example is the recent Git vulnerability (CVE-2021-21300) which can lead to code execution on a victim client when it clones a maliciously crafted repository onto a case-insensitive file system. With trends including ext4 adding support for per-directory case-insensitivity and the broad deployment of the Windows Subsystem for Linux, the prerequisites for such vulnerabilities are increasingly likely to exist even in a single system. In this paper, we make a first effort to investigate how and where the lack of any uniform approach to handling name collisions leads to a diffusion of responsibility and resultant vulnerabilities. Interestingly, we demonstrate the existence of a range of novel security challenges arising from name collisions and their inconsistent handling by low-level utilities and applications. Specifically, our experiments show that utilities handle many name collision scenarios unsafely, leaving the responsibility to applications whose developers are unfortunately not yet aware of the threats. We examine three case studies as a first step towards systematically understanding the emerging type of name collision vulnerability.

Comment: 15 pages, 1 appendix, 2 tables, 12 figures

Date: 30 Nov 2022
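
The core hazard is easy to reproduce: path names that are distinct under case-sensitive resolution fold to the same name under case-insensitive resolution. A minimal collision detector using Python's `casefold` (illustrative; not the paper's tooling, and real file systems may use their own folding tables):

```python
from collections import defaultdict

def find_case_collisions(paths):
    """Group paths that become identical under case-insensitive
    name resolution, as a case-insensitive file system or
    per-directory case-insensitive ext4 directory would see them."""
    groups = defaultdict(list)
    for p in paths:
        groups[p.casefold()].append(p)
    return {k: v for k, v in groups.items() if len(v) > 1}

# Distinct on a case-sensitive file system, colliding on a case-insensitive one:
collisions = find_case_collisions(["Makefile", "makefile", "src/main.c"])
print(collisions)  # {'makefile': ['Makefile', 'makefile']}
```

A tool like Git cloning such a repository onto a case-insensitive file system must decide which of the colliding entries wins, which is exactly the ambiguity behind CVE-2021-21300.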

FedGPO: Heterogeneity-Aware Global Parameter Optimization for Efficient Federated Learning

Authors: Young Geun Kim, Carole-Jean Wu

Abstract: Federated learning (FL) has emerged as a solution to deal with the risk of privacy leaks in machine learning training. This approach allows a variety of mobile devices to collaboratively train a machine learning model without sharing the raw on-device training data with the cloud. However, efficient edge deployment of FL is challenging because of the system/data heterogeneity and runtime variance. This paper optimizes the energy-efficiency of FL use cases while guaranteeing model convergence, by accounting for the aforementioned challenges. We propose FedGPO, based on reinforcement learning, which learns how to identify optimal global parameters (B, E, K) for each FL aggregation round, adapting to the system/data heterogeneity and stochastic runtime variance. In our experiments, FedGPO improves the model convergence time by 2.4 times and achieves 3.6 times higher energy efficiency over the baseline settings.

Comment: 12 pages, 12 figures, IEEE International Symposium on Workload Characterization (IISWC)

Date: 30 Nov 2022
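
The abstract does not define (B, E, K); in common FL notation they are the local batch size, the number of local epochs, and the number of participating clients per round. The sketch below shows where these three knobs enter a plain FedAvg round on a toy 1-D least-squares model; it illustrates the parameters being tuned, not FedGPO's reinforcement-learning policy:

```python
import random

def local_update(w, data, B, E, rng, lr=0.1):
    """One client's local training: E epochs of batch-size-B gradient
    steps on a toy model y = w * x with squared-error loss."""
    data = list(data)
    for _ in range(E):
        rng.shuffle(data)
        for i in range(0, len(data), B):
            batch = data[i:i + B]
            grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
            w -= lr * grad
    return w

def fl_round(global_w, clients, B, E, K, rng):
    """One FedAvg aggregation round: sample K participating clients,
    run local updates, and average the returned weights."""
    selected = rng.sample(clients, K)
    return sum(local_update(global_w, c, B, E, rng) for c in selected) / K

rng = random.Random(0)
# Toy setup: 10 clients, each holding samples of the line y = 3x.
clients = [[(x, 3.0 * x) for x in (1.0, 2.0, 3.0)] for _ in range(10)]
w = 0.0
for _ in range(20):
    w = fl_round(w, clients, B=2, E=1, K=5, rng=rng)
print(round(w, 2))  # converges toward w = 3.0
```

Larger B/E trade communication for on-device compute, and K trades round robustness for per-round energy, which is why a per-round adaptive choice (as FedGPO makes) can beat any fixed setting.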