
Graph Properties from Restricted Information

Abstract
There are several settings in theoretical computer science where we gain some form of limited access to an unknown underlying graph (typically through a subgraph) and try to infer something fundamental about the entire graph. This question of learning a structure from partial information is the cornerstone of many active areas of research. In this thesis, we study two very different models for learning graph properties, and we show some surprising commonalities and differences between the two settings.

In the first, probabilistic setting, we ask: suppose a fixed, unknown graph is passed through a "noisy" channel that deletes each vertex with some uniform probability, deletes (or flips) each remaining edge with some uniform probability, and returns the resulting subgraph. How many such samples (or "traces") do we need to reconstruct the original graph? We show several results in this setting, including separations between random and arbitrary graphs, and between vertex and edge deletions, even when all but an o(1)-fraction of the graph disappears in every trace.

In the second, deterministic setting, we ask: how can we identify graphs efficiently using logical characterizations? This question is inextricably tied to complexity theory. One technique for showing lower bounds in these realms is two-player logical games, in which two perfect players (called "Spoiler" and "Duplicator") play pebble games on graphs and can argue only about the restricted "views" given by the subgraphs induced on the pebbled vertices. We generalize the known games into a flexible framework characterizing any reasonable syntactic measure; we also play these games on canonical ordered structures such as linear orders and strings, historically a notoriously difficult endeavor. Here we prove several new bounds, some of which we show to be optimal.

Despite coming from quite distinct areas of research, the two problems show several notable similarities. Both ask about the fundamental structure of graphs while having access only to partial "views" of the graph. Both rely on distinguishing a given pair of graphs as the main subroutine, and from there draw conclusions about identifying entire classes of graphs. Both exploit isomorphism testing as an intermediate subroutine. Both exhibit bottlenecks in the form of "near-isomorphisms" that are not true isomorphisms. And finally, both exhibit a remarkable phenomenon in which an inherent *ordering* on the universe makes the question provably easier than having no such ordering.
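As a rough illustration of the vertex-and-edge deletion channel described above, the following is a minimal sketch (the function `sample_trace` and its parameter names are hypothetical, not from the thesis): each vertex survives independently, and each edge between surviving vertices is then deleted independently.

```python
import random

def sample_trace(n, edges, vertex_del_p, edge_del_p, rng=random):
    """Return one 'trace' of the graph on vertices {0, ..., n-1}:
    each vertex is deleted independently with probability vertex_del_p,
    then each edge whose endpoints both survive is deleted
    independently with probability edge_del_p."""
    surviving = {v for v in range(n) if rng.random() >= vertex_del_p}
    kept_edges = {
        (u, v) for (u, v) in edges
        if u in surviving and v in surviving and rng.random() >= edge_del_p
    }
    return surviving, kept_edges

# Example: one noisy trace of a triangle, deleting vertices w.p. 1/2
# and surviving edges w.p. 1/4.
verts, es = sample_trace(3, {(0, 1), (1, 2), (0, 2)}, 0.5, 0.25)
```

The reconstruction question is then: given many independent outputs of `sample_trace` (without vertex labels), how many are needed to recover the original edge set with high probability?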
Type
Dissertation
Date
2024-05
License
Attribution 4.0 International (http://creativecommons.org/licenses/by/4.0/)