# PapersCut

A shortcut to recent security papers

### Arxiv

#### Adversarial Attack on Attackers: Post-Process to Mitigate Black-Box Score-Based Query Attacks

Authors: Sizhe Chen, Zhehao Huang, Qinghua Tao, Yingwen Wu, Cihang Xie, Xiaolin Huang

Abstract: Score-based query attacks (SQAs) pose practical threats to deep neural networks by crafting adversarial perturbations within dozens of queries, using only the model's output scores. Nonetheless, we note that if the loss trend of the outputs is slightly perturbed, SQAs can be easily misled and thereby become much less effective. Following this idea, we propose a novel defense, namely Adversarial Attack on Attackers (AAA), to confound SQAs towards incorrect attack directions by slightly modifying the output logits. In this way, (1) SQAs are prevented regardless of the model's worst-case robustness; (2) the original model predictions are hardly changed, i.e., there is no degradation in clean accuracy; (3) the calibration of confidence scores can be improved simultaneously. Extensive experiments are provided to verify the above advantages. For example, by setting $\ell_\infty=8/255$ on CIFAR-10, our proposed AAA helps WideResNet-28 secure $80.59\%$ accuracy under Square attack ($2500$ queries), while the best prior defense (i.e., adversarial training) only attains $67.44\%$. Since AAA attacks SQAs' general greedy strategy, such advantages of AAA over 8 defenses can be consistently observed on 8 CIFAR-10/ImageNet models under 6 SQAs, using different attack targets and bounds. Moreover, AAA calibrates better without hurting the accuracy. Our code will be released.
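
The core idea, perturbing the attacker-observed loss trend while leaving the predicted class untouched, can be illustrated with a minimal sketch. This is not the paper's actual optimization-based procedure; it only remaps the top-1 margin onto a sawtooth (the `period` and `tau` parameters are hypothetical) so that a greedy score-based attacker sees the loss moving the wrong way:

```python
import numpy as np

def aaa_postprocess(logits, period=6.0, tau=1.0):
    """Toy AAA-style post-processing (illustrative only): remap the
    top-1 margin onto a sawtooth so that small attacker-induced
    decreases in the true margin appear as increases, while the
    predicted class (argmax) never changes."""
    logits = np.asarray(logits, dtype=float)
    order = np.argsort(logits)[::-1]
    top1, top2 = order[0], order[1]
    margin = logits[top1] - logits[top2]   # attacker-observed gap
    # Reverse the trend inside each period: as the true margin shrinks,
    # the reported margin grows, pointing greedy SQAs the wrong way.
    phase = margin % period
    fake_margin = tau + (period - phase)   # always positive
    out = logits.copy()
    out[top1] = logits[top2] + fake_margin # argmax stays intact
    return out
```

Because the reported top-1 logit is always strictly above the runner-up, clean predictions are unchanged, matching point (2) of the abstract.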

Date: 24 May 2022

#### Towards Better Privacy-preserving Electronic Voting System

Authors: Zipeng Yan, Zichao Jiang, Yiyuan Li

Abstract: This paper presents two approaches to privacy-preserving voting systems: Blind Signature-based Voting (BSV) and Homomorphic Encryption-based Voting (HEV). BSV is simple, stable, and scalable, but requires an additional anonymity property in the communication with the blockchain. HEV simultaneously protects voting privacy against traffic-analysis attacks and prevents cooperation interruption by malicious voters with high probability, but its scalability is limited. We further apply sampling to mitigate the scalability problem in HEV and simulate the performance under different voting group sizes and numbers of samples.
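
The additive-homomorphic tallying at the heart of an HEV-style scheme can be sketched with textbook Paillier encryption. The toy parameters below are wildly insecure, and the sketch omits everything else a real system needs (vote validity proofs, threshold decryption, the blockchain layer); it only shows why ciphertexts can be combined into an encrypted tally:

```python
import math, random

# Textbook Paillier with toy parameters: an illustration of additive
# homomorphic tallying only, nowhere near a secure parameter size.
p, q = 17, 19
n = p * q                      # public modulus
n2 = n * n
lam = math.lcm(p - 1, q - 1)   # private key
mu = pow(lam, -1, n)           # since g = n + 1, L(g^lam mod n^2) = lam

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

# Each voter encrypts 1 (yes) or 0 (no); multiplying ciphertexts yields
# an encryption of the sum, so the tally is decrypted without ever
# seeing an individual vote.
votes = [1, 0, 1, 1, 0, 1]
tally_ct = 1
for v in votes:
    tally_ct = (tally_ct * encrypt(v)) % n2
print(decrypt(tally_ct))  # 4
```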

Date: 24 May 2022

#### Process Mining Algorithm for Online Intrusion Detection System

Authors: Yinzheng Zhong, John Y. Goulermas, Alexei Lisitsa

Abstract: In this paper, we consider the applications of process mining in intrusion detection. We propose a novel process-mining-inspired algorithm for preprocessing data in intrusion detection systems (IDS). The algorithm is designed to process network packet data and works in online mode, making it suitable for online intrusion detection. To test our algorithm, we used the CSE-CIC-IDS2018 dataset, which contains several common attacks. The packet data was preprocessed with this algorithm and then fed into the detectors. We report on experiments using the algorithm with different machine learning (ML) models as classifiers to verify that it works as expected; we also tested anomaly detection methods and compared performance against the existing preprocessing tool CICFlowMeter.
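
The paper's specific algorithm is not reproduced here, but the process-mining primitive such preprocessing typically builds on, a directly-follows graph over per-flow event sequences, is easy to sketch (the packet-event labels below are hypothetical):

```python
from collections import Counter

def directly_follows(traces):
    """Count directly-follows transitions (a classic process-mining
    building block) over event traces, e.g. per-flow packet-type
    sequences. The resulting edge counts can serve as features for
    downstream ML-based intrusion detectors."""
    dfg = Counter()
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1
    return dfg

# Toy flows: sequences of packet events (hypothetical labels).
flows = [["SYN", "SYNACK", "ACK", "DATA"],
         ["SYN", "SYNACK", "ACK"],
         ["SYN", "RST"]]
dfg = directly_follows(flows)
print(dfg[("SYN", "SYNACK")])  # 2
```

Because the counter updates one transition at a time, the same structure can be maintained incrementally in an online setting.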

Comment: International Conference on Software Testing, Machine Learning and Complex Process Analysis

Date: 24 May 2022

#### CryptoTL: Private, efficient and secure transfer learning

Authors: Roman Walch, Samuel Sousa, Lukas Helminger, Stefanie Lindstaedt, Christian Rechberger, Andreas Trügler

Abstract: Big data has been a pervasive catchphrase in recent years, but dealing with data scarcity has become a crucial question for many real-world deep learning (DL) applications. A popular methodology for efficiently enabling the training of DL models in scenarios where only a small dataset is available is transfer learning (TL). TL allows knowledge transfer from a general domain to a specific target one; however, such a knowledge transfer may put privacy at risk when it comes to sensitive or private data. With CryptoTL we introduce a solution to this problem, and show for the first time a cryptographic privacy-preserving TL approach based on homomorphic encryption that is efficient and feasible for real-world use cases. We demonstrate this by focusing on classification tasks with small datasets and show the applicability of our approach for sentiment analysis. Additionally, we highlight how our approach can be combined with differential privacy to further increase the security guarantees. Our extensive benchmarks show that using CryptoTL leads to high accuracy while still having practical fine-tuning and classification runtimes despite using homomorphic encryption. Concretely, one forward-pass through the encrypted layers of our setup takes roughly 1s on a notebook CPU.
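
The transfer-learning split that CryptoTL's encrypted setup relies on, a frozen generic feature extractor plus a small task-specific head fine-tuned on the scarce target data, can be sketched in plain numpy. Everything here is a stand-in: the random projection replaces real pre-trained (and, in CryptoTL, homomorphically evaluated) layers, and the toy task is invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-trained lower layers: a fixed random projection
# standing in for the frozen feature extractor. Only the small head
# is trained on the scarce target data.
W_frozen = rng.normal(size=(16, 12)) * 0.25

def features(x):
    return np.tanh(x @ W_frozen)          # frozen forward pass

def train_head(X, y, lr=0.5, steps=300):
    """Logistic-regression head fine-tuned on the small target set."""
    F = features(X)
    w = np.zeros(F.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(F @ w)))
        w -= lr * F.T @ (p - y) / len(y)  # gradient of logistic loss
    return w

# Invented toy target task with only 64 labeled examples.
X = rng.normal(size=(64, 16))
y = (X.mean(axis=1) > 0).astype(float)
w = train_head(X, y)
train_acc = (((features(X) @ w) > 0) == y).mean()
```

Only `train_head` touches labeled data, which is why the scheme can afford to run the (expensive, encrypted) extractor once per example.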

Date: 24 May 2022

#### Efficient and Lightweight In-memory Computing Architecture for Hardware Security

Authors: Hala Ajmi, Fakhreddine Zayer, Amira Hadj Fredj, Belgacem Hamdi, Baker Mohammad, Naoufel Werghi, Jorge Dias

Abstract: This paper proposes an in-memory computing (IMC) solution for the design and implementation of the Advanced Encryption Standard (AES) cryptographic algorithm. This research aims to increase the cybersecurity of autonomous driverless cars and robotic autonomous vehicles. Memristor (MR) designs are proposed to emulate the AES algorithm phases for efficient in-memory processing. The main features of this work are the following: a 4-bit memristor state element is developed and used to implement the different arithmetic operations of the AES hardware prototype; a pipelined AES design for massive parallelism and compatibility targeting MR integration; an FPGA implementation of the AES-IMC architecture with an MR emulator. AES-IMC outperforms existing architectures in both throughput and energy efficiency. Compared with conventional AES hardware, AES-IMC shows a ~30% power improvement with comparable throughput. Compared with state-of-the-art AES-based NVM engines, AES-IMC has comparable power dissipation and ~62% higher throughput. By enabling cost-effective real-time deployment of AES, the IMC architecture will help prevent unintended accidents with unmanned devices caused by malicious attacks, including hijacking and unauthorized robot control.

Comment: 14 pages, 18 figures, 4 tables, submitted to the IEEE Transactions on Information Forensics and Security Journal

Date: 24 May 2022

#### Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free

Authors: Tianlong Chen, Zhenyu Zhang, Yihua Zhang, Shiyu Chang, Sijia Liu, Zhangyang Wang

Abstract: Trojan attacks threaten deep neural networks (DNNs) by poisoning them to behave normally on most samples, yet to produce manipulated results for inputs attached with a particular trigger. Several works attempt to detect whether a given DNN has been injected with a specific trigger during training. In a parallel line of research, the lottery ticket hypothesis reveals the existence of sparse subnetworks capable of matching the performance of the dense network after independent training. Connecting these two dots, we investigate the problem of Trojan DNN detection from the brand-new lens of sparsity, even when no clean training data is available. Our crucial observation is that Trojan features are significantly more stable to network pruning than benign features. Leveraging that, we propose a novel Trojan network detection regime: first locating a "winning Trojan lottery ticket" which preserves nearly full Trojan information yet only chance-level performance on clean inputs; then recovering the trigger embedded in this already isolated subnetwork. Extensive experiments on various datasets, i.e., CIFAR-10, CIFAR-100, and ImageNet, with different network architectures, i.e., VGG-16, ResNet-18, ResNet-20s, and DenseNet-100, demonstrate the effectiveness of our proposal. Code is available at https://github.com/VITA-Group/Backdoor-LTH.
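
The pruning primitive behind the "winning Trojan lottery ticket" search is magnitude pruning. The sketch below shows only that primitive, a mask keeping the largest-magnitude weights; locating the actual Trojan ticket additionally involves pruning schedules and scoring on clean inputs, which is not shown:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Return a binary mask keeping the largest-magnitude fraction of
    weights, the basic operation behind lottery-ticket-style pruning.
    Quarantine's observation is that Trojan behavior survives far
    higher sparsity than clean accuracy does, which isolates the
    trigger in the surviving subnetwork."""
    flat = np.abs(weights).ravel()
    k = int(round(sparsity * flat.size))   # number of weights to drop
    if k == 0:
        return np.ones_like(weights)
    thresh = np.partition(flat, k - 1)[k - 1]
    return (np.abs(weights) > thresh).astype(weights.dtype)

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 4))
mask = magnitude_prune(W, 0.75)   # keep only the top 25% of weights
```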

Date: 24 May 2022

#### Smart Grid: Cyber Attacks, Critical Defense Approaches, and Digital Twin

Authors: Tianming Zheng, Ming Liu, Deepak Puthal, Ping Yi, Yue Wu, Xiangjian He

Abstract: As a national critical infrastructure, the smart grid has attracted widespread attention for its cybersecurity issues. The development towards an intelligent, digital, and Internet-connected smart grid has attracted external adversaries for malicious activities. It is necessary to enhance its cybersecurity by either improving the existing defense approaches or introducing newly developed technologies to the smart grid context. As an emerging technology, the digital twin (DT) is considered an enabler for enhanced security. However, its practical implementation is quite challenging, due to the knowledge barriers among smart grid designers, security experts, and DT developers. Each single domain is a complicated system covering various components and technologies. As a result, work is needed to organize the relevant content so that DT can be better embedded in the security architecture design of the smart grid. In order to meet this demand, our paper covers the above three domains, i.e., the smart grid, cybersecurity, and DT. Specifically, the paper i) introduces the background of the smart grid; ii) reviews external cyber attacks from attack incidents and attack methods; iii) introduces critical defense approaches in industrial cyber systems, which include device identification, vulnerability discovery, intrusion detection systems (IDSs), honeypots, attribution, and threat intelligence (TI); iv) reviews the relevant content of DT, including its basic concepts, applications in the smart grid, and how DT enhances security. In the end, the paper puts forward our security considerations for the future development of the DT-based smart grid. The survey is expected to help developers break knowledge barriers among the smart grid, cybersecurity, and DT domains, and provide guidelines for the future security design of the DT-based smart grid.

Date: 24 May 2022

#### Fine-grained Poisoning Attacks to Local Differential Privacy Protocols for Mean and Variance Estimation

Authors: Xiaoguang Li, Neil Zhenqiang Gong, Ninghui Li, Wenhai Sun, Hui Li

Abstract: Local differential privacy (LDP) protects individual data contributors against privacy-probing data aggregation and analytics. Recent work has shown that LDP for some specific data types is vulnerable to data poisoning attacks, which enable the attacker to alter analytical results by injecting carefully-crafted bogus data. In this work, we focus on applying data poisoning attacks to previously unexplored statistical tasks, i.e., mean and variance estimation. In contrast to prior work that aims for overall LDP performance degradation or straightforward attack gain maximization, our attacker can fine-tune the LDP-estimated mean/variance to the desired target values and manipulate them simultaneously. To accomplish this goal, we propose two types of data poisoning attacks: the input poisoning attack (IPA) and the output poisoning attack (OPA). The former is independent of LDP while the latter exploits the characteristics of LDP, thus being more effective. More intriguingly, we observe a security-privacy consistency, where a small $\epsilon$ enhances the security of LDP, contrary to the previous conclusion of a security-privacy trade-off. We further study this consistency and reveal a more holistic view of the threat landscape of LDP in the presence of data poisoning attacks. We comprehensively evaluate the attacks on three real-world datasets and report their effectiveness for achieving the target values. We also explore defense mechanisms and provide insights into secure LDP design.
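
A toy version of the output poisoning attack (OPA) on LDP mean estimation illustrates why it is effective: fake users skip the randomization entirely and send whatever values steer the aggregate to a target. The Laplace-mechanism estimator and all parameters below are illustrative, not the paper's exact protocols:

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 1.0                      # privacy budget; values assumed in [0, 1]

def ldp_report(x):
    """Genuine user: Laplace mechanism with sensitivity 1."""
    return x + rng.laplace(scale=1.0 / eps)

genuine = rng.uniform(0, 1, size=1000)
reports = [ldp_report(x) for x in genuine]

# Output poisoning: m fake users bypass the mechanism and send values
# crafted so the aggregated mean lands exactly on the attacker's target.
m, target = 100, 0.9
n_total = len(reports) + m
fake_value = (n_total * target - sum(reports)) / m
reports += [fake_value] * m

est_mean = sum(reports) / n_total   # equals the target by construction
```

Note that `fake_value` depends on the genuine reports only through their sum, which is exactly what an aggregating server observes.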

Date: 24 May 2022

#### Byzantine-Robust Federated Learning with Optimal Statistical Rates and Privacy Guarantees

Authors: Banghua Zhu, Lun Wang, Qi Pang, Shuai Wang, Jiantao Jiao, Dawn Song, Michael I. Jordan

Abstract: We propose Byzantine-robust federated learning protocols with nearly optimal statistical rates. In contrast to prior work, our proposed protocols improve the dimension dependence and achieve a tight statistical rate in terms of all the parameters for strongly convex losses. We benchmark against competing protocols and show the empirical superiority of the proposed protocols. Finally, we remark that our protocols with bucketing can be naturally combined with privacy-guaranteeing procedures to introduce security against a semi-honest server. The code for evaluation is provided at https://github.com/wanglun1996/secure-robust-federated-learning.
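
The bucketing idea the paper mentions can be sketched as follows, here paired with a coordinate-wise median (a common robust aggregator chosen for brevity, not taken from the paper): client gradients are shuffled into buckets and averaged, and the robust aggregator then runs on the better-behaved bucket means:

```python
import numpy as np

def bucketed_median(grads, bucket_size, rng):
    """Byzantine-robust aggregation sketch: shuffle client gradients,
    average within buckets, then take the coordinate-wise median of
    the bucket means. Bucketing reduces the variance the robust
    aggregator has to cope with."""
    grads = np.asarray(grads, dtype=float)
    idx = rng.permutation(len(grads))
    buckets = [grads[idx[i:i + bucket_size]]
               for i in range(0, len(grads), bucket_size)]
    means = np.stack([b.mean(axis=0) for b in buckets])
    return np.median(means, axis=0)

rng = np.random.default_rng(0)
honest = [np.array([1.0, -1.0]) + rng.normal(scale=0.1, size=2)
          for _ in range(8)]
byzantine = [np.array([100.0, 100.0])] * 2   # crude attack
agg = bucketed_median(honest + byzantine, bucket_size=2, rng=rng)
```

With 2 corrupted clients in 5 buckets, at most 2 bucket means are tainted, so the median still tracks the honest gradient direction.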

Date: 24 May 2022

#### Towards a Defense against Backdoor Attacks in Continual Federated Learning

Authors: Shuaiqi Wang, Jonathan Hayase, Giulia Fanti, Sewoong Oh

Abstract: Backdoor attacks are a major concern in federated learning (FL) pipelines where training data is sourced from untrusted clients over long periods of time (i.e., continual learning). Preventing such attacks is difficult because defenders in FL do not have access to raw training data. Moreover, in a phenomenon we call backdoor leakage, models trained continuously eventually suffer from backdoors due to cumulative errors in backdoor defense mechanisms. We propose a novel framework for defending against backdoor attacks in the federated continual learning setting. Our framework trains two models in parallel: a backbone model and a shadow model. The backbone is trained without any defense mechanism to obtain good performance on the main task. The shadow model combines recent ideas from robust covariance estimation-based filters with early-stopping to control the attack success rate even as the data distribution changes. We provide theoretical motivation for this design and show experimentally that our framework significantly improves upon existing defenses against backdoor attacks.

Date: 24 May 2022