
Author ORCID Identifier

Document Type

Open Access Dissertation

Degree Name

Doctor of Philosophy (PhD)

Degree Program

Computer Science

Year Degree Awarded


Month Degree Awarded


First Advisor

Shlomo Zilberstein

Subject Categories

Artificial Intelligence and Robotics


Abstract

Building and deploying autonomous systems in the open world has long been a goal of both the artificial intelligence (AI) and robotics communities. From autonomous driving, to health care, to office assistance, these systems have the potential to transform society and alter our everyday lives. The open world, however, presents numerous challenges that call into question the typical assumptions made by the models and frameworks commonly used in contemporary AI and robotics. Systems in the open world face an unconstrained, non-stationary environment with a range of heterogeneous actors, an environment too complex to be modeled in its entirety. Moreover, many of these systems are expected to operate on the order of months or even years. To handle these challenges more reliably, many autonomous systems deployed in human environments rely to some degree on human assistance. This reliance is an acknowledgement that the agent's competence to complete its tasks fully autonomously in all situations is limited. Consequently, for such systems to be effective in the open world, they, like humans, must be aware of their own competence, and be both capable of and incentivized to solicit external assistance when needed. This thesis therefore proposes planning approaches based on the concept of competence modeling that equip an autonomous system with knowledge of both its capabilities and its limitations, enabling it to better optimize its autonomy and operate more effectively in the open world.

In the first part of this thesis, we introduce the notion of competence modeling and a planning framework called a competence-aware system (CAS). A CAS enables a semi-autonomous system to reason about its own competence in the form of multiple levels of autonomy, and to integrate that information into its decision-making model. We formulate the CAS model in the context of a fully observable MDP, but show how it can be extended to the partially observable setting in a well-defined manner. The result is a system that is not only more robust to unforeseen situations, but also capable of optimizing its own autonomy online through interactions with a human operator who can provide varying levels of assistance.

Each subsequent chapter of the dissertation relaxes one or more of the assumptions made in the base formulation of the CAS model and proposes a technique or model that addresses the resulting challenges, improving the applicability of the CAS model to real-world problems. First, we propose a method by which a competence-aware system can improve its competence over time by exploiting apparent inconsistencies in existing human feedback to iteratively refine its state representation. This method, which we call iterative state space refinement, draws more nuanced boundaries between regions of the agent's state-action space with different degrees of competence, yielding a system that can better exploit human assistance and thereby improve its overall competence. Second, we propose an extension of the base CAS model called a contextual competence-aware system (CoCAS), which handles multiple heterogeneous human operators with stochastic states and contextual competence dependence. We show that the theoretical guarantees exhibited by a CAS extend to the CoCAS, which has strictly greater representational power. Finally, we extend the learning model, which assumes that human feedback is provided reactively for the system's current state and action, to consider proactive feedback generated by the human based on their inference of the system's near-future behavior. We conclude with a summary and discussion of the contributions presented in the preceding chapters of the thesis.
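As a minimal illustrative sketch only (every name, number, and rule below is a hypothetical simplification, not the dissertation's actual CAS formulation, which is a full MDP-based planning model), the core idea of competence-aware autonomy can be pictured as an agent that maintains a per-(state, autonomy-level) estimate of its success rate from human feedback, and selects the highest level of autonomy whose estimated competence clears a threshold:

```python
from collections import defaultdict


class CompetenceAwareAgent:
    """Toy sketch of competence-aware autonomy selection.

    Tracks, per (state, autonomy level), how often the agent succeeded
    without human intervention, and picks the highest autonomy level
    whose estimated competence clears a confidence threshold.
    """

    def __init__(self, levels, threshold=0.8):
        # e.g. levels [0, 1, 2]: fully supervised -> fully autonomous
        self.levels = sorted(levels)
        self.threshold = threshold
        self.successes = defaultdict(int)  # (state, level) -> success count
        self.trials = defaultdict(int)     # (state, level) -> attempt count

    def competence(self, state, level):
        # Laplace-smoothed success estimate (Beta(1,1) posterior mean),
        # so an unvisited (state, level) pair starts at 0.5.
        s = self.successes[(state, level)]
        t = self.trials[(state, level)]
        return (s + 1) / (t + 2)

    def choose_level(self, state):
        # Prefer the highest autonomy level that looks competent enough;
        # otherwise fall back to the most supervised level.
        for level in reversed(self.levels):
            if self.competence(state, level) >= self.threshold:
                return level
        return self.levels[0]

    def record_feedback(self, state, level, success):
        # Human feedback (or the observed outcome) refines the estimate.
        self.trials[(state, level)] += 1
        if success:
            self.successes[(state, level)] += 1


agent = CompetenceAwareAgent(levels=[0, 1, 2], threshold=0.8)
# With no experience, every estimate is 0.5 < 0.8, so the agent
# starts at the most supervised level.
assert agent.choose_level("intersection") == 0
# After repeated success at full autonomy in this state, it graduates:
# the smoothed estimate is 11/12, which clears the threshold.
for _ in range(10):
    agent.record_feedback("intersection", 2, success=True)
assert agent.choose_level("intersection") == 2
```

This sketch captures only the feedback-driven competence estimate and the resulting autonomy choice; the actual CAS model additionally plans over the MDP while accounting for the cost and availability of human assistance.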


Creative Commons License

Creative Commons Attribution-Noncommercial 4.0 License