
Author ORCID Identifier


Document Type

Open Access Dissertation


Degree Name

Doctor of Philosophy (PhD)

Degree Program

Computer Science

Year Degree Awarded


Month Degree Awarded



Abstract

Recent advances in mobile computing and the Internet of Things (IoT) enable the global integration of heterogeneous smart devices via wireless networks. A common characteristic of these modern-day systems is their ability to collect and communicate streaming data, making machine learning (ML) appealing for processing, reasoning, and making predictions about the environment. More recently, low network latency requirements have made offloading intelligence to the cloud undesirable. These requirements have led to the emergence of edge computing, an approach that brings computation closer to the device, offering low latency, high throughput, and enhanced reliability. Together, these technologies enable ML-powered information-processing and control pipelines spanning end devices, edge computing, and cloud environments. However, continuous collaboration among cloud, edge, and device is susceptible to information leakage and loss, leading to insecure and unreliable operation. This raises an important question: how can we design, develop, and evaluate high-performing ML systems that are trustworthy and privacy-preserving in resource-constrained edge environments? In this thesis, I address this question by designing and implementing privacy-preserving and trustworthy ML systems for distributed applications. I first introduce a system that establishes trust in the explanations generated by a popular visualization technique, saliency maps, using counterfactual reasoning. Through the proposed evaluation system, I assess the degree to which hypothesized explanations correspond to the semantics of edge-based reinforcement learning environments. Second, I examine the privacy implications of personalized models in distributed mobile services by proposing time-series-based model inversion attacks. To thwart such attacks, I present a distributed framework, Pelican, that learns and deploys transfer-learning-based personalized ML models in a privacy-preserving manner on resource-constrained mobile devices. Third, I investigate ML models deployed on local devices for inference and highlight the ease with which proprietary information embedded in these models can be exposed. To mitigate such attacks, I present a secure on-device application framework, SODA, which is supported by real-time adversarial detection. Finally, I present an end-to-end privacy-aware system for a real-world application that models group interaction behavior via mobility sensing. The proposed system, W4-Groups, distributes computation across device, edge, and cloud resources to strengthen its privacy and trustworthiness guarantees.


Creative Commons License

Creative Commons Attribution 4.0 License
This work is licensed under a Creative Commons Attribution 4.0 License.