
Author ORCID Identifier

https://orcid.org/0000-0001-5352-2003

Access Type

Open Access Dissertation

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Degree Program

Computer Science

Year Degree Awarded

2022

Month Degree Awarded

May

First Advisor

Andrew McCallum

Second Advisor

Mohit Iyyer

Third Advisor

Marco Serafini

Fourth Advisor

Chris Dyer

Fifth Advisor

Kyunghyun Cho

Subject Categories

Artificial Intelligence and Robotics | Databases and Information Systems | Data Science

Abstract

Question answering (QA) over knowledge bases provides a user-friendly way of accessing the massive amount of information stored in them. QA systems have made tremendous progress, thanks to recent advances in representation learning by deep neural models. However, such deep models function as black boxes with an opaque reasoning process, are brittle, and offer very limited control (e.g., for debugging an erroneous model prediction). It is also unclear how to reliably add or update knowledge stored in their parameters. This thesis proposes nonparametric models for question answering that disentangle logic from knowledge. For a given query, the proposed models derive interpretable reasoning patterns "on the fly" from other contextually similar queries in the training set. We show that our models seamlessly handle new knowledge (new entities and relations) as it is continuously added to the knowledge base. Our model is effective for complex and compositional natural language queries requiring subgraph reasoning patterns, and it works even when annotations of the reasoning patterns (logical forms) are not available, achieving new state-of-the-art results on multiple benchmarks. Leveraging our nonparametric approach, we also demonstrate that it is possible to correct wrong predictions of deep QA models without re-training, paving the way toward more controllable and debuggable QA systems. Finally, compared to deep parametric models, this thesis demonstrates that nonparametric models of reasoning (i) generalize better to questions requiring complex reasoning, especially when the number of questions seen during training is limited, (ii) reason more effectively as new data is added, (iii) offer more interpretability for their predictions, and (iv) are more controllable and debuggable.
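The retrieve-and-reuse mechanism the abstract describes, finding contextually similar training queries and re-applying their reasoning patterns, can be illustrated with a minimal sketch. The Python snippet below is a hypothetical toy, not code from the thesis: the names embed, KB, CASES, and answer are invented, the knowledge base holds two facts, and a hash-based embedding stands in for a trained neural query encoder.

import numpy as np

def embed(text, dim=16):
    # Stand-in query encoder: a deterministic hash-seeded random vector.
    # A real system would use a trained neural encoder so that similar
    # questions map to nearby embeddings; this stand-in only keeps the
    # example self-contained and runnable.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

# Toy knowledge base: (subject, relation) -> object.
KB = {
    ("Amherst", "located_in"): "Massachusetts",
    ("Massachusetts", "capital"): "Boston",
}

# Training "cases": each pairs a question with the relation path
# (the reasoning pattern) that answers it.
CASES = [
    ("Which state is Amherst in?", ["located_in"]),
    ("What is the capital of the state containing Amherst?",
     ["located_in", "capital"]),
]

def answer(query, entity):
    # Retrieve the most similar training query by cosine similarity
    # (vectors are unit-normalized, so a dot product suffices), then
    # reuse its relation path by walking it over the knowledge base.
    q = embed(query)
    _, path = max((float(embed(text) @ q), path) for text, path in CASES)
    node = entity
    for rel in path:
        node = KB[(node, rel)]
    return node

print(answer("In which state is Amherst located?", "Amherst"))

Because the reasoning pattern is retrieved rather than stored in model weights, adding a new fact or a new case to KB or CASES changes the system's behavior immediately, with no re-training; this is the property the abstract highlights, shown here only in caricature.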

DOI

https://doi.org/10.7275/28628666

Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
