AI Security Paper Collection (2021)
CCS 2021
Backdoor & Poisoning
- Hidden Backdoors in Human-Centric Language Models
- Backdoor Pre-trained Models Can Transfer to All
- Subpopulation Data Poisoning Attacks
Adversarial Attacks
- Robust Adversarial Attacks Against DNN-Based Wireless Communication Systems
- A Hard Label Black-box Adversarial Attack Against Graph Neural Networks
- Black-box Adversarial Attacks on Commercial Speech Platforms with Minimal Information
Security & Hacking Applications
- DeepAID: Interpreting and Improving Deep Learning-based Anomaly Detection in Security Applications
- "Hello, It's Me": Deep Learning-based Speech Synthesis Attacks in the Real World
- PalmTree: Learning an Assembly Language Model for Instruction Embedding
- Supply-Chain Vulnerability Elimination via Active Learning and Regeneration
- Learning Security Classifiers with Verified Global Robustness Properties
AI Privacy & Inference Attacks
- Quantifying and Mitigating Privacy Risks of Contrastive Learning
- DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation
- When Machine Unlearning Jeopardizes Privacy
- Unleashing the Tiger: Inference Attacks on Split Learning
- EncoderMI: Membership Inference against Pre-trained Encoders in Contrastive Learning
- Membership Inference Attacks Against Recommender Systems
- COINN: Crypto/ML Codesign for Oblivious Inference via Neural Networks
USENIX Security 2021
Backdoor & Poisoning
- Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers
- Blind Backdoors in Deep Learning Models
- Graph Backdoor
- Demon in the Variant: Statistical Analysis of DNNs for Robust Backdoor Contamination Detection
- You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion
- Poisoning the Unlabeled Dataset of Semi-Supervised Learning
- Double-Cross Attacks: Subverting Active Learning Systems
Adversarial Attacks & Defenses
Attacks
- SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations
- Adversarial Policy Training against Deep Reinforcement Learning
- Deep-Dup: An Adversarial Weight Duplication Attack Framework to Crush Deep Neural Network in Multi-Tenant FPGA
Defenses
- PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking
- T-Miner: A Generative Approach to Defend Against Trojan Attacks on DNN-based Text Classification
- WaveGuard: Understanding and Mitigating Audio Adversarial Examples
- Dompteur: Taming Audio Adversarial Examples
- CADE: Detecting and Explaining Concept Drift Samples for Security Applications
Security & Hacking Applications
- SIGL: Securing Software Installations Through Deep Graph Learning
- Cost-Aware Robust Tree Ensembles for Security Applications
- ATLAS: A Sequence-based Learning Approach for Attack Investigation
- ELISE: A Storage Efficient Logging System Powered by Redundancy Reduction and Representation Learning
- Dirty Road Can Attack: Security of Deep Learning based Automated Lane Centering under Physical-World Attack
- Phishpedia: A Hybrid Deep Learning Based Approach to Visually Identify Phishing Webpages
- Mystique: Efficient Conversions for Zero-Knowledge Proofs with Applications to Machine Learning
- Reducing Bias in Modeling Real-world Password Strength via Deep Learning and Dynamic Dictionaries
AI Privacy & Inference Attacks
- Entangled Watermarks as a Defense against Model Extraction
- Mind Your Weight(s): A Large-scale Study on Insufficient Machine Learning Model Protection in Mobile Apps
- Defeating DNN-Based Traffic Analysis Systems in Real-Time With Blind Adversarial Perturbations
- Systematic Evaluation of Privacy Risks of Machine Learning Models
- Extracting Training Data from Large Language Models
- SWIFT: Super-fast and Robust Privacy-Preserving Machine Learning
- Stealing Links from Graph Neural Networks
- Leakage of Dataset Properties in Multi-Party Machine Learning
- Cerebro: A Platform for Multi-Party Cryptographic Collaborative Learning
- Hermes Attack: Steal DNN Models with Lossless Inference Accuracy
- GForce: GPU-Friendly Oblivious and Rapid Neural Network Inference
S&P 2021
Backdoor & Poisoning
- Detecting AI Trojans Using Meta Neural Analysis
- Explainability-based Backdoor Attacks Against Graph Neural Networks
Adversarial Attacks
- Adversarial Watermarking Transformer: Towards Tracing Text Provenance with Data Hiding
- On the (Im)Practicality of Adversarial Perturbation for Image Privacy
- Who is Real Bob? Adversarial Attacks on Speaker Recognition Systems
Security & Hacking Applications
- Hear "No Evil", See "Kenansville": Efficient and Transferable Black-Box Attacks on Speech Recognition and Voice Identification Systems
- SoK: The Faults in our ASRs: An Overview of Attacks against Automatic Speech Recognition and Speaker Identification Systems
- Improving Password Guessing via Representation Learning
AI Privacy & Inference Attacks
- Machine Unlearning
- Adversary Instantiation: Lower bounds for differentially private machine learning
- Is Private Learning Possible with Instance Encoding?
- FedV: Privacy-Preserving Federated Learning over Vertically Partitioned Data
- Privacy Preserving Recurrent Neural Network (RNN) Prediction using Homomorphic Encryption
- Privacy Regularization: Joint Privacy-Utility Optimization in Text-Generation Models
- Proof-of-Learning: Definitions and Practice
- CryptGPU: Fast Privacy-Preserving Machine Learning on the GPU
- SIRNN: A Math Library for Secure RNN Inference
More paper collections:
