Author ORCID Identifier

https://orcid.org/0000-0001-6853-4285

AccessType

Open Access Dissertation

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Degree Program

Computer Science

Year Degree Awarded

2023

Month Degree Awarded

May

First Advisor

Andrew McCallum

Subject Categories

Artificial Intelligence and Robotics | Data Science | Theory and Algorithms

Abstract

Graphs are ubiquitous data structures that arise in many machine-learning tasks, such as link prediction for products and node classification for scientific papers. Because gradient descent drives the training of most modern machine-learning architectures, encoding graph-structured data in a differentiable representation is essential for making use of it. Most approaches embed graph structure in Euclidean space; however, modeling directed edges there is non-trivial. The naive solution represents each node with separate "source" and "target" vectors, but this decouples the representation, making it harder for the model to capture information along longer paths in the graph. In this dissertation, we propose to model graphs by representing each node as a "box" (a Cartesian product of intervals), where a directed edge is captured by the containment of one box in another. We prove that the proposed box embeddings are expressive enough to represent any directed acyclic graph. We also perform rigorous empirical evaluations of vector, hyperbolic, and region-based geometric representations on several families of synthetic and real-world directed graphs. Extensive experimental results suggest that box containment makes transitive relationships easy to model. We further propose t-Box, a variant of box embeddings that learns a smoothing temperature jointly during training. The learned smoothing parameter gives t-Box better representational capacity than vector models in low dimensions, while also avoiding the performance saturation common to other geometric models in high dimensions. Promising as this is, modeling directed graphs that contain both cycles and some degree of transitivity, two properties common in real-world settings, remains challenging.
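As a rough illustration of the containment idea (not the dissertation's exact model; the softplus smoothing, the temperature, and all function names here are assumptions for the sketch), a directed edge u -> v can be scored by the fraction of v's box that lies inside u's box:

```python
import math

def softplus(x, temp=1.0):
    # Numerically stable, temperature-scaled softplus. A small temperature
    # approaches a hard max(x, 0); a larger one smooths the box boundary so
    # disjoint boxes still receive gradient signal.
    return temp * (max(x / temp, 0.0) + math.log1p(math.exp(-abs(x / temp))))

def volume(lo, hi, temp=1.0):
    # Smoothed box volume: product of softplus-smoothed side lengths.
    v = 1.0
    for l, h in zip(lo, hi):
        v *= softplus(h - l, temp)
    return v

def containment_score(box_u, box_v, temp=1.0):
    # A box is a pair (lo, hi) of per-dimension interval endpoints.
    # Score for an edge u -> v: Vol(u ∩ v) / Vol(v), i.e. how much of
    # v's box is contained in u's box.
    lo_u, hi_u = box_u
    lo_v, hi_v = box_v
    inter_lo = [max(a, b) for a, b in zip(lo_u, lo_v)]
    inter_hi = [min(a, b) for a, b in zip(hi_u, hi_v)]
    return volume(inter_lo, inter_hi, temp) / volume(lo_v, hi_v, temp)

# A box nested inside another scores near 1; a distant box scores near 0.
outer = ([0.0, 0.0], [4.0, 4.0])
inner = ([1.0, 1.0], [2.0, 2.0])
far   = ([10.0, 10.0], [11.0, 11.0])
```

Nesting, unlike a symmetric distance, is inherently asymmetric and transitive: if w's box sits inside v's and v's inside u's, then w's sits inside u's, which is the inductive bias toward transitivity discussed above.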
Box embeddings, which can be viewed as representing the graph as an intersection of learned super-graphs, have a natural inductive bias toward modeling transitivity but, as we prove, cannot model cycles. To address this, we propose binary-code box embeddings, in which a learned binary code selects the subset of graphs to intersect. We explore several variants, including global binary codes (amounting to a union over intersections) and per-vertex binary codes (allowing greater flexibility), as well as methods of regularization. Theoretical and empirical results show that the proposed models not only preserve the useful inductive bias toward transitivity but also have sufficient representational capacity to model arbitrary graphs, including graphs with cycles. Lastly, we consider the setting in which box embeddings are not free parameters but are produced by functions; in particular, we explore whether neural networks can map node features into the box space. This is critical in many real-world scenarios: graphs are sparse, and the majority of vertices have only a few connections or are completely isolated, yet there may exist rich node features, such as attributes and descriptions, that are useful for prediction tasks. Our experimental analysis demonstrates both the effectiveness and the limitations of multi-layer-perceptron-based encoders under different circumstances.
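The final setting above, where a network produces boxes from node features rather than learning them as free parameters, could be sketched as follows. The two-head parameterization (a center head plus a softplus-positive size head, guaranteeing lo <= hi) and all names are illustrative assumptions, not the dissertation's exact encoder:

```python
import math

def linear(x, W, b):
    # One dense layer: y_i = sum_j W[i][j] * x[j] + b[i].
    return [sum(w * xj for w, xj in zip(row, x)) + bi for row, bi in zip(W, b)]

def features_to_box(x, params):
    # Map a node-feature vector x to box parameters (lo, hi).
    # params = (W_c, b_c, W_s, b_s): a center head and a raw-size head.
    W_c, b_c, W_s, b_s = params
    center = linear(x, W_c, b_c)
    # Softplus keeps every side length strictly positive, so the decoded
    # box always satisfies lo <= hi (a valid product of intervals).
    size = [math.log1p(math.exp(s)) for s in linear(x, W_s, b_s)]
    lo = [c - s / 2.0 for c, s in zip(center, size)]
    hi = [c + s / 2.0 for c, s in zip(center, size)]
    return lo, hi

# Tiny 2-feature -> 2-dimensional-box example with identity weight matrices.
I2 = [[1.0, 0.0], [0.0, 1.0]]
demo_params = (I2, [0.0, 0.0], I2, [0.0, 0.0])
lo, hi = features_to_box([0.5, -0.3], demo_params)
```

Because the box is a deterministic function of the features, such an encoder can produce embeddings for unseen or isolated vertices, which free-parameter boxes cannot.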

DOI

https://doi.org/10.7275/34248534

Creative Commons License

Creative Commons Attribution 4.0 License
This work is licensed under a Creative Commons Attribution 4.0 License.
