
Author ORCID Identifier

https://orcid.org/0000-0001-8938-0529

Access Type

Open Access Dissertation

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Degree Program

Computer Science

Year Degree Awarded

2021

Month Degree Awarded

February

First Advisor

Brian N. Levine

Subject Categories

Artificial Intelligence and Robotics | Computer Sciences | OS and Networks

Abstract

Concentration inequalities (CIs) are a powerful tool that provides probability bounds on how far a random variable deviates from its expectation. In this dissertation, I first describe a blockchain protocol I have developed, called Graphene, which uses CIs to provide probabilistic guarantees on performance. Second, I analyze the extent to which CIs remain robust when the assumptions they require are violated, using Reinforcement Learning (RL) as the domain.

Graphene is a method for interactive set reconciliation among peers in blockchains and related distributed systems. Through the novel combination of a Bloom filter and an Invertible Bloom Lookup Table (IBLT), Graphene uses a fraction of the network bandwidth required by deployed approaches for one- and two-way synchronization. It is a fast and implementation-independent algorithm that uses CIs to parameterize an IBLT so that it is optimal in size for a given desired decode rate. I characterize its performance improvements through analysis, detailed simulation, and deployment results for Bitcoin Cash, a prominent cryptocurrency. Implementations of Graphene, IBLTs, and the IBLT optimization algorithm are all open-source code.

In the second part, I analyze the extent to which existing methods rely on accurate training data for a specific class of RL algorithms, known as Safe and Seldonian RL. Several Seldonian RL algorithms include a component called the safety test, which uses CIs to lower bound the performance of a new policy using training data collected from another policy. I introduce a new measure of security to quantify susceptibility to corruptions in training data, and show that several Seldonian RL methods are extremely sensitive to even a few data corruptions, completely breaking the probability bounds guaranteed by CIs. I then introduce a new algorithm, called Panacea, that is more robust against data corruptions, and demonstrate its usage in practice on several RL problems, including a grid-world and a diabetes treatment simulation.
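To make the safety-test idea concrete, the following is a minimal sketch of a CI-based check of the kind the abstract describes: a one-sided Hoeffding bound lower-bounds a new policy's expected return from sample returns, and the policy is approved only if that bound clears a required performance floor. The function names, the `delta` and `performance_floor` parameters, and the assumption that returns lie in `[0, value_range]` are illustrative choices, not the dissertation's exact formulation.

```python
import math

def hoeffding_lower_bound(samples, delta, value_range=1.0):
    """One-sided Hoeffding bound: with probability at least 1 - delta,
    the true mean is no less than the returned value, assuming each
    sample lies in [0, value_range]."""
    n = len(samples)
    sample_mean = sum(samples) / n
    deviation = value_range * math.sqrt(math.log(1.0 / delta) / (2.0 * n))
    return sample_mean - deviation

def safety_test(returns, performance_floor, delta=0.05):
    """Approve the new policy only if the CI lower bound on its
    expected return is at least the required performance floor."""
    return hoeffding_lower_bound(returns, delta) >= performance_floor

# With 100 returns of 0.9 each, the bound sits below 0.9 but well above 0.5,
# so a floor of 0.5 passes while a floor of 0.9 does not.
returns = [0.9] * 100
print(safety_test(returns, performance_floor=0.5))  # True
print(safety_test(returns, performance_floor=0.9))  # False
```

Note how the guarantee depends entirely on the samples being faithful draws from the new policy's return distribution: an adversary who replaces even a few entries of `returns` with inflated values shifts the sample mean, and thus the lower bound, which is exactly the sensitivity to data corruption that the abstract's security measure quantifies.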

DOI

https://doi.org/10.7275/20546366

Creative Commons License

Creative Commons Attribution 4.0 License
This work is licensed under a Creative Commons Attribution 4.0 License.
