Author ORCID Identifier

https://orcid.org/0000-0001-6223-9867

AccessType

Open Access Dissertation

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Degree Program

Electrical and Computer Engineering

Year Degree Awarded

2022

Month Degree Awarded

May

First Advisor

Lixin Gao

Subject Categories

Computer Engineering

Abstract

Machine learning is the study of computer algorithms that analyze and interpret patterns and structures in data. It has been successfully applied to many areas of computer science and has achieved state-of-the-art results, enabling learning, reasoning, and decision-making without human interaction. This research aims to develop innovative data-parallel frameworks that exploit available computing resources to parallelize machine learning and deep learning algorithms and speed up training. To that end, we explore three frameworks in this dissertation: (1) a Sync-on-the-fly framework for gradient descent algorithms on transient resources; (2) an Asynchronous Proactive Data Parallel framework for both gradient descent and Expectation-Maximization algorithms; and (3) a Cohesive Mini-batches framework for graph convolutional networks.
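As a rough illustration of the data-parallel training the abstract refers to, the sketch below shows generic synchronous data-parallel gradient descent for linear regression in NumPy. It is not the dissertation's Sync-on-the-fly, Asynchronous Proactive Data Parallel, or Cohesive Mini-batches framework; the worker count, data sharding, and learning rate are assumptions chosen only to show how workers' local gradients are combined into a shared model at each step.

```python
# Minimal sketch only: generic synchronous data-parallel gradient descent,
# not the dissertation's frameworks. Workers are simulated sequentially.
import numpy as np

def worker_gradient(w, X_shard, y_shard):
    """Gradient of mean squared error on one worker's data shard."""
    residual = X_shard @ w - y_shard
    return X_shard.T @ residual / len(y_shard)

def sync_data_parallel_gd(X, y, num_workers=4, steps=100, lr=0.1):
    """Each step: every worker computes a gradient on its shard, then the
    averaged gradient updates the shared model (a synchronous barrier)."""
    w = np.zeros(X.shape[1])
    X_shards = np.array_split(X, num_workers)
    y_shards = np.array_split(y, num_workers)
    for _ in range(steps):
        grads = [worker_gradient(w, Xs, ys)
                 for Xs, ys in zip(X_shards, y_shards)]
        w -= lr * np.mean(grads, axis=0)  # average of all workers' gradients
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))
    true_w = np.arange(1.0, 6.0)
    y = X @ true_w + 0.01 * rng.normal(size=1000)
    print(sync_data_parallel_gd(X, y))  # approximately recovers true_w
```

In an asynchronous variant, workers would apply their gradients to the shared parameters as soon as they finish, without waiting at the averaging barrier.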

DOI

https://doi.org/10.7275/28347084

Creative Commons License

Creative Commons Attribution 4.0 License
This work is licensed under a Creative Commons Attribution 4.0 License.
