
Author ORCID Identifier

https://orcid.org/0000-0002-8308-9121

Access Type

Open Access Dissertation

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Degree Program

Computer Science

Year Degree Awarded

2022

Month Degree Awarded

February

First Advisor

Erik Learned-Miller

Subject Categories

Computer Sciences

Abstract

This thesis studies different approaches to decision making with limited data.

First, we study the effects of approximate inference on Thompson sampling in k-armed bandit problems. Thompson sampling is a successful algorithm, but it requires posterior inference, which in practice must often be approximated. We show that even a small constant inference error (in alpha-divergence) can lead to poor performance (linear regret) due to under-exploration (for alpha < 1) or over-exploration (for alpha > 0) by the approximation. While for alpha > 0 this is unavoidable, for alpha <= 0 the regret can be improved by adding a small amount of forced exploration.
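As a concrete illustration of the algorithm being discussed (not the thesis's experimental setup), here is a minimal Beta-Bernoulli Thompson sampling loop with an optional forced-exploration rate; the arm means, horizon, and epsilon value below are hypothetical choices for the sketch.

```python
import numpy as np

def thompson_sampling(true_means, horizon, epsilon=0.0, seed=0):
    """Beta-Bernoulli Thompson sampling with optional forced exploration.

    With probability `epsilon`, a uniformly random arm is pulled instead of
    the posterior sample's argmax -- one simple way to add the small amount
    of forced exploration described above.
    """
    rng = np.random.default_rng(seed)
    k = len(true_means)
    successes = np.ones(k)  # Beta(1, 1) priors on each arm's mean
    failures = np.ones(k)
    total_reward = 0.0
    for _ in range(horizon):
        if rng.random() < epsilon:
            arm = int(rng.integers(k))           # forced exploration
        else:
            samples = rng.beta(successes, failures)
            arm = int(np.argmax(samples))        # sample posterior, act greedily
        reward = float(rng.random() < true_means[arm])  # Bernoulli reward
        successes[arm] += reward
        failures[arm] += 1.0 - reward
        total_reward += reward
    return total_reward

reward = thompson_sampling([0.3, 0.5, 0.7], horizon=2000, epsilon=0.01)
```

With exact Beta-Bernoulli inference the posterior update is closed-form; the thesis's setting concerns what happens when the posterior used in the sampling step is only approximate.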

Second, we consider the problem of designing a randomized experiment on a source population to estimate the Average Treatment Effect (ATE) on a target population. We propose a novel approach which explicitly considers the target when designing the experiment on the source. Under the covariate shift assumption, we design an unbiased importance-weighted estimator for the target population’s ATE. To reduce the variance of our estimator, we design a covariate balance condition (Target Balance) between the treatment and control groups based on the target population. We show that Target Balance achieves a higher variance reduction asymptotically than methods that do not consider the target during the design phase. Our experiments illustrate that Target Balance reduces the variance even for small sample sizes.
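The importance-weighting idea can be sketched as follows. This is a simplified Horvitz-Thompson-style estimator assuming a known treatment probability `p` and known density-ratio weights `w = p_target(x) / p_source(x)`; it illustrates the unbiased weighting step only, not the thesis's full Target Balance design.

```python
import numpy as np

def weighted_ate(y, t, w, p=0.5):
    """Importance-weighted estimate of the target population's ATE.

    y : observed outcomes on the source sample
    t : 0/1 randomized treatment assignments (treated with probability p)
    w : importance weights p_target(x) / p_source(x) for each unit

    Under covariate shift, reweighting each unit by w makes the usual
    inverse-probability estimator unbiased for the *target* ATE.
    """
    y, t, w = (np.asarray(v, dtype=float) for v in (y, t, w))
    return float(np.mean(w * (t * y / p - (1.0 - t) * y / (1.0 - p))))

# Toy example: baseline outcome 0, treatment effect 1, uniform weights.
est = weighted_ate([1.0, 0.0, 1.0, 0.0], [1, 0, 1, 0], [1.0, 1.0, 1.0, 1.0])
```

Balancing covariates between the treatment and control groups with respect to the target distribution (Target Balance) is what reduces the variance of this kind of estimator.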

Finally, we examine confidence intervals. Historically, mean bounds for small sample sizes fall into two categories: methods that make unrealistic assumptions about the unknown distribution (e.g., Gaussianity) and methods, such as Hoeffding's inequality, that use weaker assumptions but produce much looser intervals. Anderson (1969) proposed a mean confidence interval at least as tight as Hoeffding's whose only assumption is that the distribution's support is contained in an interval [a, b]. For the first time since then, we present a new family of upper bounds that compares favorably to Anderson's. We prove that each bound in the family holds with probability at least 1 - alpha for all distributions on an interval [a, b]. Furthermore, one of the bounds is tighter than or equal to Anderson's for all samples.
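For reference, the Hoeffding baseline mentioned above can be computed directly. This sketch assumes n i.i.d. samples supported on [a, b] and gives the standard one-sided upper bound; it is the looser interval that Anderson-style bounds, and the new family in this thesis, improve on.

```python
import numpy as np

def hoeffding_upper(samples, a, b, alpha=0.05):
    """One-sided Hoeffding upper confidence bound on the mean.

    For n i.i.d. samples supported on [a, b], the true mean lies below
        mean(samples) + (b - a) * sqrt(log(1/alpha) / (2n))
    with probability at least 1 - alpha, for any such distribution.
    """
    x = np.asarray(samples, dtype=float)
    n = x.size
    return float(x.mean() + (b - a) * np.sqrt(np.log(1.0 / alpha) / (2.0 * n)))

# With 100 samples on [0, 1] and alpha = 0.05, the slack term is
# sqrt(log(20) / 200), roughly 0.12.
ub = hoeffding_upper([0.5] * 100, a=0.0, b=1.0, alpha=0.05)
```

Note that the width depends only on n, a, b, and alpha, not on the sample spread, which is why distribution-free bounds of this type are loose for small samples.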

DOI

https://doi.org/10.7275/27246917

Creative Commons License

Creative Commons Attribution 4.0 License
This work is licensed under a Creative Commons Attribution 4.0 License.
