Off-campus UMass Amherst users: To download campus-access dissertations, please use the following link to log into our proxy server with your UMass Amherst username and password.

Non-UMass Amherst users: Please talk to your librarian about requesting this dissertation through interlibrary loan.

Dissertations that have an embargo placed on them will not be available to anyone until the embargo expires.

Author ORCID Identifier


Document Type

Open Access Dissertation


Degree Name

Doctor of Philosophy (PhD)

Degree Program

Computer Science

Year Degree Awarded


Month Degree Awarded


First Advisor

Amir Houmansadr

Subject Categories

Artificial Intelligence and Robotics | Information Security


Federated learning is an emerging distributed learning paradigm that allows multiple users to collaboratively train a joint machine learning model without sharing their private data with any third party. Thanks to many attractive properties, federated learning has received significant attention from both academia and industry, and it now powers major applications, e.g., Google's Gboard and Assistant, Apple's Siri, and Owkin's health diagnostics. However, federated learning has yet to see widespread adoption due to a number of challenges. One such challenge is its susceptibility to poisoning by malicious users who aim to manipulate the joint machine learning model.
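The collaborative-training setup described above is commonly realized with federated averaging (FedAvg), in which a server combines client model updates weighted by local dataset size. A minimal sketch, assuming a FedAvg-style server (the function name and toy numbers are illustrative, not taken from the thesis):

```python
import numpy as np

def fedavg(client_updates, client_sizes):
    """Weighted average of client model updates (FedAvg-style aggregation)."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, client_updates))

# Three clients submit updates; clients with more data get larger weight.
updates = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([2.0, 2.0])]
sizes = [10, 20, 10]
aggregated = fedavg(updates, sizes)
print(aggregated)  # -> [2.25 1.  ]
```

Because the server only sees updates, not raw data, a single malicious client can try to steer this average, which is exactly the poisoning surface studied in this work.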

In this work, we take significant steps towards addressing this challenge. We start by providing a systematization of poisoning adversaries in federated learning and use it to build adversaries of varying strengths and to show that some adversaries common in the prior literature are not practically relevant. For the majority of this thesis, we focus on untargeted poisoning, both because it can impact a much larger federated learning population than other types of poisoning and because most prior poisoning defenses for federated learning aim to defend against it.

Next, we introduce a general framework for designing strong untargeted poisoning attacks against various federated learning algorithms. Using our framework, we design state-of-the-art poisoning attacks and demonstrate that the theoretical guarantees and empirical claims of prior state-of-the-art federated learning poisoning defenses are brittle under the same strong (albeit theoretical) adversaries that these defenses aim to defend against. We also distill concrete lessons highlighting the shortcomings of prior defenses. Using these lessons, we design two novel defenses with strong theoretical guarantees and demonstrate their state-of-the-art performance in various adversarial settings.
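To make the attack/defense interplay concrete, a classic robust-aggregation defense from the literature (not necessarily one of the defenses proposed in this thesis) is the coordinate-wise median, which limits how far one outlier update can pull the aggregate. A hedged sketch with hypothetical toy values:

```python
import numpy as np

def median_aggregate(client_updates):
    """Coordinate-wise median: a robust alternative to plain averaging."""
    return np.median(np.stack(client_updates), axis=0)

# Two honest clients and one attacker submitting an extreme (untargeted) update.
honest = [np.array([1.0, 1.0]), np.array([1.2, 0.8])]
poisoned = [np.array([100.0, -100.0])]
print(median_aggregate(honest + poisoned))  # -> [1.2 0.8]
```

With a minority of attackers the extreme update is ignored per coordinate; the strong attacks discussed above show that such guarantees can nonetheless break down under carefully crafted, in-range malicious updates.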

Finally, for the first time, we thoroughly investigate the impact of poisoning in real-world federated learning settings and draw significant, and rather surprising, conclusions about the robustness of federated learning in practice. For instance, we show that, contrary to established belief, federated learning is highly robust in practice even when using simple, low-cost defenses. One major implication of our study is that, although interesting from a theoretical perspective, many of the strong adversaries, and hence the strong prior defenses, are of little use in practice.
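One widely used example of a simple, low-cost defense of the kind referred to above is update norm bounding, which clips each client update to a fixed L2 norm before aggregation. A minimal sketch, assuming a fixed clipping threshold (the threshold value here is illustrative):

```python
import numpy as np

def norm_bound(update, threshold=1.0):
    """Clip an update's L2 norm to `threshold` before aggregation."""
    norm = np.linalg.norm(update)
    return update if norm <= threshold else update * (threshold / norm)

big = np.array([30.0, 40.0])             # norm 50, e.g. an inflated poisoned update
print(np.linalg.norm(norm_bound(big)))   # -> 1.0
```

Clipping bounds the influence of any single client regardless of intent, which is one reason such inexpensive defenses can go a long way in realistic deployments.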


Creative Commons License

Creative Commons Attribution 4.0 License
This work is licensed under a Creative Commons Attribution 4.0 License.