A Huber Loss Minimization Approach to Byzantine Robust Federated Learning

Type: Article

Publication Date: 2024-03-24

Citations: 1

DOI: https://doi.org/10.1609/aaai.v38i19.30181

Abstract

Federated learning systems are susceptible to adversarial attacks. To combat this, we introduce a novel aggregator based on Huber loss minimization and provide a comprehensive theoretical analysis. Under the independent and identically distributed (i.i.d.) assumption, our approach has several advantages over existing methods. First, it achieves optimal dependence on ε, the fraction of attacked clients. Second, it does not require precise knowledge of ε. Third, it allows different clients to have unequal data sizes. We then broaden the analysis to non-i.i.d. data, in which clients have slightly different distributions.
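The abstract does not spell out the aggregation rule, but a Huber-loss-minimizing aggregate of client updates can be sketched as a Weiszfeld-style, iteratively reweighted averaging scheme. The sketch below is a minimal illustration under that assumption only; the function name huber_aggregate, the threshold delta, and the stopping criteria are hypothetical choices, not the paper's exact algorithm.

import numpy as np

def huber_aggregate(updates, delta=1.0, n_iters=100, tol=1e-6):
    # updates: (n_clients, dim) array, one model update (or gradient) per client.
    # Minimizes sum_i rho_delta(||z - x_i||), where rho_delta is the Huber loss:
    # quadratic for residuals below delta, linear above it, which bounds the
    # influence any single (possibly Byzantine) client can have on z.
    x = np.asarray(updates, dtype=float)
    z = x.mean(axis=0)  # start from the plain average
    for _ in range(n_iters):
        dists = np.linalg.norm(x - z, axis=1)
        # Reweighted averaging: weight 1 inside the quadratic region,
        # delta / r outside it, so distant updates are down-weighted.
        weights = np.where(dists <= delta, 1.0, delta / np.maximum(dists, 1e-12))
        z_new = np.average(x, axis=0, weights=weights)
        if np.linalg.norm(z_new - z) < tol:
            break
        z = z_new
    return z

# Toy check: 8 honest clients near the true update, 2 Byzantine clients far away.
rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(8, 5))
byzantine = np.full((2, 5), 50.0)
print(huber_aggregate(np.vstack([honest, byzantine]), delta=1.0))

In this sketch the threshold delta interpolates between two familiar aggregators: as delta grows the iteration returns the ordinary mean, and as delta shrinks toward zero it approaches the geometric median, a common baseline for Byzantine-robust aggregation.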

Locations

  • Proceedings of the AAAI Conference on Artificial Intelligence
  • arXiv (Cornell University)

Similar Works

  • A Huber Loss Minimization Approach to Byzantine Robust Federated Learning (2023): Puning Zhao, Fei Yu, Zhiguo Wan
  • Byzantine-resilient Federated Learning With Adaptivity to Data Heterogeneity (2024): Shiyuan Zuo, Xingrun Yan, Rongfei Fan, Han Hu, Hangguan Shan, Tony Q. S. Quek
  • Combating Exacerbated Heterogeneity for Robust Models in Federated Learning (2023): Jianing Zhu, Jiangchao Yao, Tongliang Liu, Quanming Yao, Jianliang Xu, Bo Han
  • Mitigating Byzantine Attacks in Federated Learning (2020): Saurav Prakash, Amir Salman Avestimehr
  • Robust Federated Learning against both Data Heterogeneity and Poisoning Attack via Aggregation Optimization (2022): Yueqi Xie, Weizhong Zhang, Renjie Pi, Fangzhao Wu, Qifeng Chen, Xing Xie, Sunghun Kim
  • Byzantine-resilient Federated Learning Employing Normalized Gradients on Non-IID Datasets (2024): Shiyuan Zuo, Xingrun Yan, Rongfei Fan, Li Shen, Puning Zhao, Jie Xu, Han Hu
  • Byzantine-Robust Learning on Heterogeneous Data via Gradient Splitting (2023): Yuchen Liu, Chen Chen, Lingjuan Lyu, Fangzhao Wu, Sai Wu, Gang Chen
  • LiD-FL: Towards List-Decodable Federated Learning (2024): Hong Liu, L. Y. Shan, Han Bao, Ronghui You, Yuhao Yi, Dongdong Chen
  • Attack-Resistant Federated Learning with Residual-based Reweighting (2019): Shuhao Fu, Chulin Xie, Bo Li, Qifeng Chen
  • Robust Federated Learning via Over-The-Air Computation (2021): Houssem Sifaou, Geoffrey Ye Li
  • Robust Federated Learning via Over-the-Air Computation (2022): Houssem Sifaou, Geoffrey Ye Li
  • A Learning-Based Attack Framework to Break SOTA Poisoning Defenses in Federated Learning (2024): Yuxin Yang, Qiang Li, Chenfei Nie, Yuan Hong, Meng Pang, Binghui Wang
  • Federated Robustness Propagation: Sharing Robustness in Heterogeneous Federated Learning (2021): Junyuan Hong, Haotao Wang, Zhangyang Wang, Jiayu Zhou
  • Shielding Federated Learning: Mitigating Byzantine Attacks with Less Constraints (2022): Minghui Li, Wei Wan, Jianrong Lu, Shengshan Hu, Junyu Shi, Leo Yu Zhang
  • Shielding Federated Learning: Mitigating Byzantine Attacks with Less Constraints (2022): Minghui Li, Wei Wan, Jianrong Lu, Shengshan Hu, Junyu Shi, Leo Yu Zhang, Man Zhou, Yifeng Zheng
  • An Experimental Study of Byzantine-Robust Aggregation Schemes in Federated Learning (2022): Shenghui Li, Cheuk Han Edith Ngai, Thiemo Voigt
  • Federated Adversarial Learning: A Framework with Convergence Analysis (2022): Xiaoxiao Li, Zhao Song, Jiaming Yang
  • Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning (2021): Junyuan Hong, Haotao Wang, Zhangyang Wang, Jiayu Zhou

Works That Cite This (0)
