Abstract:
"arXiv:2111.09679, 2021." The post examines privacy leakage in machine learning models through membership inference attacks: given a sample, the attacker infers whether that sample was in the model's training set, and the attack remains effective even with little knowledge of the model's parameters or architecture. At its core, the attack model is still trained via the shadow-model approach; for the setting where the attacker does not know the target model's training set, the paper proposes a shadow... (a minimal sketch of the shadow-model pipeline follows this entry)
posted @ 2023-01-13 19:00 方班隐私保护小组 · Views (166) · Comments (0) · Recommended (0)
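The shadow-model approach this post summarizes can be made concrete in a few lines. Below is a minimal, hypothetical sketch, assuming scikit-learn and synthetic auxiliary data; the classifiers, the data, and the 50/50 member/non-member split are illustrative stand-ins, not the setup of arXiv:2111.09679. The idea: train several shadow models on data the attacker controls, record each model's confidence vector on samples it did and did not train on, and fit an attack classifier on those labeled vectors.

```python
# Hypothetical sketch of shadow-model membership inference.
# Assumes scikit-learn; models and data are illustrative, not from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic auxiliary data standing in for what the attacker can collect.
rng = np.random.default_rng(0)
X_aux = rng.normal(size=(4000, 20))
y_aux = (X_aux[:, 0] + X_aux[:, 1] > 0).astype(int)

attack_X, attack_y = [], []
for _ in range(5):  # several shadow models, each with its own in/out split
    X_in, X_out, y_in, y_out = train_test_split(X_aux, y_aux, test_size=0.5)
    shadow = RandomForestClassifier(n_estimators=50).fit(X_in, y_in)
    # Confidence vectors on training members (label 1) and non-members (label 0).
    attack_X.append(shadow.predict_proba(X_in))
    attack_y.append(np.ones(len(X_in)))
    attack_X.append(shadow.predict_proba(X_out))
    attack_y.append(np.zeros(len(X_out)))

# The attack model maps a confidence vector to a membership guess; at attack
# time it is fed target_model.predict_proba(candidate_sample).
attack_model = LogisticRegression().fit(np.vstack(attack_X), np.concatenate(attack_y))
```

Note that this sketch assumes the attacker's auxiliary data matches the target's training distribution; the harder setting the post mentions, where the attacker does not know the target model's training set, is precisely what the paper's proposed variant addresses.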
Abstract:
X. Lei, A. X. Liu, R. Li and G.-H. Tu, "SecEQP: A Secure and Efficient Scheme for SkNN Query Problem Over Encrypted Geodata on Cloud," 2019 IEEE 35th International Conference on Data Engineering (ICDE), 2019.
posted @ 2023-01-13 18:55 方班隐私保护小组 · Views (45) · Comments (0) · Recommended (0)
Abstract:
Itahara, Sohei, et al. "Distillation-based semi-supervised federated learning for communication-efficient collaborative training with non-IID private data."
posted @ 2023-01-13 18:09 方班隐私保护小组 · Views (200) · Comments (0) · Recommended (0)
Abstract:
Li, Bowen, et al. "FedIPR: Ownership verification for federated deep neural network models." IEEE Transactions on Pattern Analysis and Machine Intelligence (2022).
posted @ 2023-01-13 17:12 方班隐私保护小组 · Views (226) · Comments (0) · Recommended (0)
Abstract:
Liu, Yugeng, et al. "ML-Doctor: Holistic risk assessment of inference attacks against machine learning models." arXiv preprint arXiv:2102.02551 (2021).
posted @ 2023-01-13 16:39 方班隐私保护小组 · Views (80) · Comments (0) · Recommended (0)
Abstract:
Wang, Fengwei, et al. "A privacy-preserving and non-interactive federated learning scheme for regression training with gradient descent." Information Sciences (2021).
posted @ 2023-01-13 16:02 方班隐私保护小组 · Views (79) · Comments (0) · Recommended (0)
Abstract:
Peng, Xiaokang, et al. "Balanced Multimodal Learning via On-the-fly Gradient Modulation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
posted @ 2023-01-13 15:51 方班隐私保护小组 · Views (165) · Comments (0) · Recommended (0)
Abstract:
Devin Reich, Ariel Todoki, Rafael Dowsley, Martine De Cock, Anderson Nascimento. 2019. Privacy-Preserving Classification of Personal Text Messages with Secure Multi-Party Computation. In Advances in Neural Information Processing Systems (NeurIPS 2019).
posted @ 2023-01-13 14:26 方班隐私保护小组 · Views (47) · Comments (0) · Recommended (0)
Abstract:
Jonas Böhler and Florian Kerschbaum. 2020. Secure Multi-party Computation of Differentially Private Median. In Proceedings of the 29th USENIX Security Symposium.
posted @ 2023-01-13 14:14 方班隐私保护小组 · Views (104) · Comments (0) · Recommended (0)
