A Summary of Poisoning Attacks in Federated Learning

Published by bid000 on 2022-05-21

Poisoning Attacks

Backdoor Attacks

Taxonomy of backdoor-attack defenses:

  1. Anomalous-update detection: FoolsGold, spectral anomaly detection, FLGuard, etc.
  2. Robust federated learning: clipping model weights and injecting noise, feedback-based federated learning, CRFL, etc.
  3. Backdoor-model recovery: repairing a backdoored global model after training

Future research directions:

Classic Papers

Attacks

Data Poisoning

  1. Attack of the tails: Yes, you really can backdoor federated learning (single trigger)
  2. DBA: Distributed backdoor attacks against federated learning (multiple triggers)
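The papers above poison the training data itself: a small pixel pattern ("trigger") is stamped onto a fraction of a malicious client's images, and those samples are relabeled to the attacker's target class. A minimal sketch of this trigger-stamping step, assuming grayscale images in `[0, 1]` and using a hypothetical 3x3 corner square as the trigger (the real papers use more elaborate single- and distributed-trigger patterns):

```python
import numpy as np

def stamp_trigger(images, labels, target_label, frac=0.1, seed=0):
    """Poison a fraction of a client's images with a small pixel-pattern
    trigger and relabel them to the attacker's target class.

    images: (N, H, W) float array in [0, 1]; labels: (N,) int array.
    A 3x3 white square in the bottom-right corner stands in for the
    trigger patterns used in the papers above.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(frac * len(images)), replace=False)
    images[idx, -3:, -3:] = 1.0          # stamp the trigger pattern
    labels[idx] = target_label           # flip labels to the target class
    return images, labels, idx

# Example: poison 10% of 100 blank 28x28 images toward class 7
imgs = np.zeros((100, 28, 28))
lbls = np.zeros(100, dtype=int)
p_imgs, p_lbls, idx = stamp_trigger(imgs, lbls, target_label=7)
```

A model trained on such data learns the normal task on clean inputs but predicts the target class whenever the trigger is present.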

Model Poisoning

  1. How to backdoor federated learning (model replacement)
  2. Analyzing federated learning through an adversarial lens
  3. Local model poisoning attacks to Byzantine-robust federated learning
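The model-replacement trick from "How to backdoor federated learning" is easy to state numerically: since FedAvg adds the scaled mean of client updates to the global model, a single attacker can scale its update by the inverse of its averaging weight so that the aggregate lands on its backdoored weights. A sketch under simplified assumptions (plain FedAvg over flat weight vectors, benign clients contributing near-zero updates):

```python
import numpy as np

def fedavg(global_w, client_ws, lr=1.0):
    """One FedAvg round: global += lr * mean(client update)."""
    updates = [w - global_w for w in client_ws]
    return global_w + lr * np.mean(updates, axis=0)

def model_replacement(global_w, backdoored_w, n_clients, lr=1.0):
    """Scale the malicious update by n/lr so that, after averaging with
    (roughly zero) benign updates, the global model lands on the
    backdoored weights -- the 'model replacement' trick."""
    scale = n_clients / lr
    return global_w + scale * (backdoored_w - global_w)

# 10 clients: 9 benign clients send (near) no update, 1 attacker
g = np.zeros(5)
target = np.full(5, 0.5)                  # attacker's backdoored model
benign = [g.copy() for _ in range(9)]     # benign updates ~ 0
malicious = model_replacement(g, target, n_clients=10)
new_g = fedavg(g, benign + [malicious])
# new_g equals the attacker's target model in this idealized setting
```

In practice the attack degrades when benign updates are large or when the server clips update norms, which is exactly what the norm-clipping defenses below exploit.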

Defenses

Anomalous-Update Detection

  1. Mitigating sybils in federated learning poisoning = The Limitations of Federated Learning in Sybil Settings (FoolsGold)
  2. Learning to detect malicious clients for robust federated learning (dimensionality reduction: variational autoencoder, spectral anomaly detection)
  3. Data Poisoning Attacks Against Federated Learning Systems (dimensionality reduction: PCA)
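FoolsGold's core observation is that colluding sybils push updates in suspiciously similar directions, while honest clients diverge. A heavily simplified sketch of that idea (not the paper's exact scheme, which also uses historical updates and a logit rescaling): down-weight each client by its maximum pairwise cosine similarity with any other client.

```python
import numpy as np

def cosine_sim_matrix(updates):
    """Pairwise cosine similarity between flattened client updates."""
    U = np.stack(updates)
    norms = np.linalg.norm(U, axis=1, keepdims=True)
    Un = U / np.clip(norms, 1e-12, None)
    return Un @ Un.T

def foolsgold_weights(updates):
    """Simplified FoolsGold-style weighting: aggregation weight =
    1 - max off-diagonal cosine similarity, clipped to [0, 1]."""
    S = cosine_sim_matrix(updates)
    np.fill_diagonal(S, -np.inf)     # ignore self-similarity
    max_sim = S.max(axis=1)
    return np.clip(1.0 - max_sim, 0.0, 1.0)

# Two colluding sybils push the same direction; one honest client differs
sybil = np.array([1.0, 1.0, 0.0])
honest = np.array([-1.0, 0.5, 2.0])
w = foolsgold_weights([sybil, sybil.copy(), honest])
# sybils receive weight near 0; the honest client keeps full weight
```

The variational-autoencoder and PCA approaches in the other two papers follow the same template with a different anomaly score: project updates into a low-dimensional space and flag the outliers (or, for sybils, the too-close clusters).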

Robust Federated Learning

  1. Can you really backdoor federated learning? (norm clipping + noise)
  2. BaFFLe: Backdoor detection via feedback-based federated learning (feedback)
  3. CRFL: Certifiably robust federated learning against backdoor attacks (certified robustness)
  4. Meta Federated Learning (modified protocol: grouped aggregation)
  5. Secure Partial Aggregation: Making Federated Learning More Robust for Industry 4.0 Applications (modified protocol: limiting the fraction of model updates uploaded)
  6. Flguard: Secure and private federated learning = FLAME: Taming Backdoors in Federated Learning (combines multiple techniques: two-layer defense, handles multiple triggers)
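The first defense in this list, norm clipping plus noise, is also the simplest to sketch: bound each client update's L2 norm before averaging, then add Gaussian noise to the aggregate. The sketch below is an illustrative implementation of that general recipe (the clip bound and noise scale are made-up values, not the paper's):

```python
import numpy as np

def clip_and_noise_aggregate(global_w, client_ws, clip_norm=1.0,
                             noise_std=0.01, seed=0):
    """Robust aggregation sketch: clip each client's update to an L2
    norm bound, average the clipped updates, then perturb the result
    with Gaussian noise."""
    rng = np.random.default_rng(seed)
    clipped = []
    for w in client_ws:
        delta = w - global_w
        norm = np.linalg.norm(delta)
        delta = delta * min(1.0, clip_norm / max(norm, 1e-12))
        clipped.append(delta)
    agg = np.mean(clipped, axis=0)
    return global_w + agg + rng.normal(0.0, noise_std, size=global_w.shape)

# A hugely scaled malicious update is cut down to the same norm budget
# as benign updates, blunting the model-replacement attack from above
g = np.zeros(4)
benign = [g + np.array([0.1, 0.0, 0.0, 0.0])] * 9
malicious = g + np.array([100.0, 0.0, 0.0, 0.0])
new_g = clip_and_noise_aggregate(g, benign + [malicious])
```

Clipping removes the attacker's ability to amplify its update, and the added noise further dilutes whatever backdoor signal survives the clip.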

Backdoor Model Recovery

  1. Mitigating backdoor attacks in federated learning (post-training defense)
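One common family of post-training repairs (not necessarily the exact mechanism of the paper above) is pruning-style: zero out hidden units that rarely activate on clean data, on the intuition that backdoor behavior hides in units clean inputs seldom use. A hypothetical sketch over a single output-layer weight matrix:

```python
import numpy as np

def prune_dormant_neurons(weights, clean_activations, prune_frac=0.2):
    """Pruning-style post-training repair sketch: zero the outgoing
    weights of the hidden units that fire least on clean data.

    weights: (hidden, out) output-layer weight matrix;
    clean_activations: (samples, hidden) hidden activations on clean data.
    """
    avg_act = np.abs(clean_activations).mean(axis=0)
    n_prune = int(prune_frac * len(avg_act))
    prune_idx = np.argsort(avg_act)[:n_prune]   # least-active units
    repaired = weights.copy()
    repaired[prune_idx, :] = 0.0                # sever their contribution
    return repaired, prune_idx

# Unit 1 is nearly silent on clean data, so it is pruned first
acts = np.array([[1.0, 0.0, 2.0, 0.1]] * 5)
W = np.ones((4, 3))
W2, idx = prune_dormant_neurons(W, acts, prune_frac=0.25)
```

Such repairs are typically followed by a short fine-tune on clean data to recover any benign accuracy lost to pruning.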