
Author ORCID Identifier

https://orcid.org/0000-0002-7774-3246

Access Type

Open Access Dissertation

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Degree Program

Computer Science

Year Degree Awarded

2019

Month Degree Awarded

May

First Advisor

Sridhar Mahadevan

Second Advisor

Phil Thomas

Third Advisor

Daniel Sheldon

Fourth Advisor

Mario Parente

Subject Categories

Artificial Intelligence and Robotics | Dynamical Systems

Abstract

Many existing machine learning (ML) algorithms cannot be viewed as gradient descent on a single objective. The solution trajectories taken by these algorithms naturally exhibit rotation, sometimes forming cycles, a behavior that is not expected with (full-batch) gradient descent. However, these algorithms can be viewed more generally as solving for the equilibrium of a game with possibly multiple competing objectives. Moreover, some recent ML models, specifically generative adversarial networks (GANs) and their variants, are now explicitly formulated as equilibrium problems. Equilibrium problems present challenges beyond those encountered in optimization, such as limit cycles and chaotic attractors, and are able to abstract away some of the difficulties encountered when training models like GANs. In this thesis, I aim to advance our understanding of equilibrium problems so as to improve the state of the art in GANs and related domains. In the following chapters, I will present work on

  1. designing a no-regret framework for solving monotone equilibrium problems in online or streaming settings (with applications to Reinforcement Learning),
  2. ensuring convergence when training a GAN to fit a normal distribution to data via Crossing-the-Curl,
  3. improving state-of-the-art image generation with techniques derived from theory, and
  4. borrowing tools from dynamical systems theory to analyze the complex dynamics of GAN training.
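The rotational behavior mentioned above can be seen in even the simplest two-player game. The sketch below (illustrative only, not code from the dissertation) simulates simultaneous gradient descent-ascent on the bilinear game f(x, y) = x * y, where x minimizes and y maximizes; instead of converging to the equilibrium at the origin, the iterates rotate around it.

    # Illustrative sketch: simultaneous gradient descent-ascent on the
    # bilinear game f(x, y) = x * y. The unique equilibrium is (0, 0),
    # yet the iterates circle it rather than descending toward it, unlike
    # gradient descent on a single objective.

    def simultaneous_gda(x, y, lr=0.1, steps=200):
        """Run simultaneous gradient descent-ascent on f(x, y) = x * y."""
        trajectory = [(x, y)]
        for _ in range(steps):
            grad_x = y  # df/dx
            grad_y = x  # df/dy
            x, y = x - lr * grad_x, y + lr * grad_y  # simultaneous update
            trajectory.append((x, y))
        return trajectory

    if __name__ == "__main__":
        traj = simultaneous_gda(1.0, 0.0)
        for step in (0, 50, 100, 150, 200):
            x, y = traj[step]
            radius = (x ** 2 + y ** 2) ** 0.5
            print(f"step {step:3d}: x={x:+.3f}, y={y:+.3f}, "
                  f"distance from equilibrium={radius:.3f}")
        # The distance from (0, 0) never shrinks (it slowly grows under this
        # explicit Euler discretization), showing the cycling behavior that
        # single-objective gradient descent would not exhibit.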

DOI

https://doi.org/10.7275/13780178
