Geometric Representation Learning

Abstract
Vector embedding models are a cornerstone of modern machine learning methods for knowledge representation and reasoning. These methods aim to turn semantic questions into geometric questions by learning representations of concepts and other domain objects in a lower-dimensional vector space. In that spirit, this work advocates for density- and region-based representation learning. Embedding domain elements as geometric objects beyond a single point enables us to naturally represent breadth and polysemy, make asymmetric comparisons, answer complex queries, and obtain a strong inductive bias when labeled data is scarce. We present a model for word representation using Gaussian densities, enabling asymmetric entailment judgments between concepts, and a probabilistic model for weighted transitive relations and multivariate discrete data based on a lattice of axis-aligned hyperrectangle representations (boxes). We explore the suitability of these embedding methods in different regimes of sparsity, edge weight, correlation, and independence structure, as well as extensions of the representation and different optimization strategies. We present a theoretical investigation of the representational power of the box lattice and propose extensions to address shortcomings in modeling difficult distributions and graphs.
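
Both families of models described in the abstract admit simple closed-form asymmetric scores. The sketch below is a minimal illustration, not the dissertation's implementation; all function names and numbers are hypothetical. It computes the KL divergence between diagonal Gaussian embeddings, a standard asymmetric score for entailment-style judgments, and the conditional probability P(A | B) obtained from intersection volumes of axis-aligned boxes.

```python
import numpy as np

def gaussian_kl(mu_p, var_p, mu_q, var_q):
    """KL(P || Q) between diagonal Gaussians: asymmetric, and small
    when P is 'contained in' Q, so it can score entailment."""
    return 0.5 * np.sum(
        var_p / var_q
        + (mu_q - mu_p) ** 2 / var_q
        - 1.0
        + np.log(var_q) - np.log(var_p)
    )

def box_volume(lo, hi):
    """Volume of an axis-aligned box given its min/max corners;
    side lengths are clipped at zero for empty intersections."""
    return np.prod(np.clip(hi - lo, 0.0, None))

def box_conditional(lo_a, hi_a, lo_b, hi_b):
    """P(A | B) = Vol(A ∩ B) / Vol(B); the intersection of two
    axis-aligned boxes is itself an axis-aligned box."""
    inter_lo = np.maximum(lo_a, lo_b)
    inter_hi = np.minimum(hi_a, hi_b)
    return box_volume(inter_lo, inter_hi) / box_volume(lo_b, hi_b)

# Toy 2-D example: a narrow "dog" box contained in a broad "animal" box.
animal_lo, animal_hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])
dog_lo, dog_hi = np.array([0.1, 0.1]), np.array([0.4, 0.4])
print(box_conditional(animal_lo, animal_hi, dog_lo, dog_hi))  # P(animal | dog) = 1.0
print(box_conditional(dog_lo, dog_hi, animal_lo, animal_hi))  # P(dog | animal) = 0.09
```

The asymmetry is the point: containment of one region (or density) in another gives a directed, graded judgment that a symmetric distance between point vectors cannot express.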
Type: openaccess, article, dissertation