New paper in AI Security/Robustness: "Mixed Nash for Robust Federated Learning"

Wanyun Xie, Thomas Pethick, Ali Ramezani-Kebrya, and Volkan Cevher, Mixed Nash for Robust Federated Learning, Transactions on Machine Learning Research, Feb. 2024.

Abstract: We study robust federated learning (FL) within a game-theoretic framework to alleviate server vulnerabilities, even against an informed adversary who can tailor training-time attacks. Specifically, we introduce RobustTailor, a simulation-based framework that prevents the adversary from being omniscient, and derive convergence guarantees for it. RobustTailor significantly improves robustness to training-time attacks while preserving almost the same privacy guarantees as standard robust aggregation schemes in FL. Empirical results under challenging attacks show that RobustTailor performs close to an upper bound that assumes perfect knowledge of the honest clients.
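The abstract does not spell out the algorithm, but the core idea, randomizing the server's choice of robust aggregation rule so that an informed adversary cannot tailor its malicious updates to a single, known aggregator, can be sketched in a few lines. This is a minimal illustration only: the rule set (coordinate-wise median and trimmed mean), the uniform sampling, and all function names are assumptions for exposition, not the paper's actual mixed-strategy computation.

```python
import random
import statistics

def coordinate_median(updates):
    # Coordinate-wise median: robust to a minority of arbitrary updates.
    return [statistics.median(coord) for coord in zip(*updates)]

def trimmed_mean(updates, trim=1):
    # Coordinate-wise trimmed mean: drop the `trim` smallest and largest
    # values in each coordinate before averaging.
    agg = []
    for coord in zip(*updates):
        kept = sorted(coord)[trim:len(coord) - trim]
        agg.append(sum(kept) / len(kept))
    return agg

AGGREGATORS = [coordinate_median, trimmed_mean]

def randomized_aggregation_round(updates, rng=random):
    # Mixed-strategy step (uniform here for simplicity): sample an
    # aggregation rule at random each round, so the adversary faces a
    # distribution over defenses rather than one fixed, known rule.
    rule = rng.choice(AGGREGATORS)
    return rule(updates)
```

With three honest updates and one large outlier, either sampled rule suppresses the outlier, which is what makes the randomization cheap for honest clients while denying the adversary a fixed target.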

Published Feb. 13, 2024 11:41 AM - Last modified Feb. 13, 2024 11:41 AM