The Way is hidden by small accomplishments; words are hidden by flowery eloquence. (Zhuangzi)

AI Security Paper Collection (2021)

CCS 2021

Backdoors & Poisoning

  1. Hidden Backdoors in Human-Centric Language Models
  2. Backdoor Pre-trained Models Can Transfer to All
  3. Subpopulation Data Poisoning Attacks

Adversarial Attacks

  1. Robust Adversarial Attacks Against DNN-Based Wireless Communication Systems
  2. A Hard Label Black-box Adversarial Attack Against Graph Neural Networks
  3. Black-box Adversarial Attacks on Commercial Speech Platforms with Minimal Information

Security & Hacking Applications

  1. DeepAID: Interpreting and Improving Deep Learning-based Anomaly Detection in Security Applications
  2. "Hello, It's Me": Deep Learning-based Speech Synthesis Attacks in the Real World
  3. PalmTree: Learning an Assembly Language Model for Instruction Embedding
  4. Supply-Chain Vulnerability Elimination via Active Learning and Regeneration
  5. Learning Security Classifiers with Verified Global Robustness Properties

AI Privacy & Inference Attacks

  1. Quantifying and Mitigating Privacy Risks of Contrastive Learning
  2. DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation
  3. When Machine Unlearning Jeopardizes Privacy
  4. Unleashing the Tiger: Inference Attacks on Split Learning
  5. EncoderMI: Membership Inference against Pre-trained Encoders in Contrastive Learning
  6. Membership Inference Attacks Against Recommender Systems
  7. COINN: Crypto/ML Codesign for Oblivious Inference via Neural Networks

USENIX Security 2021

Backdoors & Poisoning

  1. Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers
  2. Blind Backdoors in Deep Learning Models
  3. Graph Backdoor
  4. Demon in the Variant: Statistical Analysis of DNNs for Robust Backdoor Contamination Detection
  5. You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion
  6. Poisoning the Unlabeled Dataset of Semi-Supervised Learning
  7. Double-Cross Attacks: Subverting Active Learning Systems

Adversarial Attacks & Defenses

Attacks

  1. SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations
  2. Adversarial Policy Training against Deep Reinforcement Learning
  3. Deep-Dup: An Adversarial Weight Duplication Attack Framework to Crush Deep Neural Network in Multi-Tenant FPGA

Defenses

  1. PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking
  2. T-Miner: A Generative Approach to Defend Against Trojan Attacks on DNN-based Text Classification
  3. WaveGuard: Understanding and Mitigating Audio Adversarial Examples
  4. Dompteur: Taming Audio Adversarial Examples
  5. CADE: Detecting and Explaining Concept Drift Samples for Security Applications

Security & Hacking Applications

  1. SIGL: Securing Software Installations Through Deep Graph Learning
  2. Cost-Aware Robust Tree Ensembles for Security Applications
  3. ATLAS: A Sequence-based Learning Approach for Attack Investigation
  4. ELISE: A Storage Efficient Logging System Powered by Redundancy Reduction and Representation Learning
  5. Dirty Road Can Attack: Security of Deep Learning based Automated Lane Centering under Physical-World Attack
  6. Phishpedia: A Hybrid Deep Learning Based Approach to Visually Identify Phishing Webpages
  7. Mystique: Efficient Conversions for Zero-Knowledge Proofs with Applications to Machine Learning
  8. Reducing Bias in Modeling Real-world Password Strength via Deep Learning and Dynamic Dictionaries

AI Privacy & Inference Attacks

  1. Entangled Watermarks as a Defense against Model Extraction
  2. Mind Your Weight(s): A Large-scale Study on Insufficient Machine Learning Model Protection in Mobile Apps
  3. Defeating DNN-Based Traffic Analysis Systems in Real-Time With Blind Adversarial Perturbations
  4. Systematic Evaluation of Privacy Risks of Machine Learning Models
  5. Extracting Training Data from Large Language Models
  6. SWIFT: Super-fast and Robust Privacy-Preserving Machine Learning
  7. Stealing Links from Graph Neural Networks
  8. Leakage of Dataset Properties in Multi-Party Machine Learning
  9. Cerebro: A Platform for Multi-Party Cryptographic Collaborative Learning
  10. Hermes Attack: Steal DNN Models with Lossless Inference Accuracy
  11. GForce: GPU-Friendly Oblivious and Rapid Neural Network Inference

S&P 2021

Backdoors & Poisoning

  1. Detecting AI Trojans Using Meta Neural Analysis
  2. Explainability-based Backdoor Attacks Against Graph Neural Networks

Adversarial Attacks

  1. Adversarial Watermarking Transformer: Towards Tracing Text Provenance with Data Hiding
  2. Machine Unlearning
  3. On the (Im)Practicality of Adversarial Perturbation for Image Privacy
  4. Who is Real Bob? Adversarial Attacks on Speaker Recognition Systems

Security & Hacking Applications

  1. Adversary Instantiation: Lower bounds for differentially private machine learning
  2. Is Private Learning Possible with Instance Encoding?
  3. Hear "No Evil", See "Kenansville": Efficient and Transferable Black-Box Attacks on Speech Recognition and Voice Identification Systems
  4. SoK: The Faults in our ASRs: An Overview of Attacks against Automatic Speech Recognition and Speaker Identification Systems
  5. Improving Password Guessing via Representation Learning

AI Privacy & Inference Attacks

  1. FedV: Privacy-Preserving Federated Learning over Vertically Partitioned Data
  2. Privacy Preserving Recurrent Neural Network (RNN) Prediction using Homomorphic Encryption
  3. Privacy Regularization: Joint Privacy-Utility Optimization in Text-Generation Models
  4. Proof-of-Learning: Definitions and Practice
  5. CryptGPU: Fast Privacy-Preserving Machine Learning on the GPU
  6. SIRNN: A Math Library for Secure RNN Inference

More paper collections:

  1. https://github.com/eastmountyxz/AI-Security-Paper
  2. https://github.com/thunlp/TAADpapers
  3. https://github.com/safe-graph/graph-adversarial-learning-literature
posted @ 2022-03-30 20:58  FrancisQiu