We show that such latent models achieve substantially higher accuracy than a traditional classification approach on New York Times and Freebase data. Besides binary relations, we use universal schema for unary relations, i.e., entity types. We explore various facets of universal schema matrix factorization models on a large-scale web corpus, including implicature among the relations. We evaluate our approach on the task of question answering using features obtained from universal schema, achieving state-of-the-art accuracy on a benchmark dataset.
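As an illustrative sketch (not the system evaluated above), the core universal-schema computation can be pictured as logistic low-rank factorization of an entity-pair × relation matrix; the data, sizes, and hyperparameters below are invented:

```python
import numpy as np

# Toy logistic matrix factorization in the spirit of universal schema:
# rows are entity pairs, columns are relations (textual surface patterns
# and knowledge-base relations share one column space). Observed facts
# are 1s; low-rank embeddings then score unobserved cells, which is how
# implicature between surface patterns and KB relations can emerge.
rng = np.random.default_rng(0)
n_pairs, n_rels, k = 6, 4, 3
X = np.zeros((n_pairs, n_rels))
observed = [(0, 0), (0, 1), (1, 1), (1, 2), (2, 0), (3, 3), (4, 2)]
for i, j in observed:                      # invented (pair, relation) facts
    X[i, j] = 1.0

P = 0.1 * rng.normal(size=(n_pairs, k))    # entity-pair embeddings
R = 0.1 * rng.normal(size=(n_rels, k))     # relation embeddings

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.2
for _ in range(2000):                      # full-batch gradient descent
    G = sigmoid(P @ R.T) - X               # gradient of the logistic loss
    P, R = P - lr * (G @ R), R - lr * (G.T @ P)

scores = sigmoid(P @ R.T)                  # high score = inferred fact
```

After training, observed cells score higher than unobserved ones, and a relation column can receive a high score for a pair it was never directly observed with.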

However, when the motion is large and the matching function is abrupt, this assumption is less likely to hold. One traditional way of avoiding this abruptness is to smooth the matching function spatially by blurring the images. As the displacement becomes larger, the amount of blur required to smooth the matching function also becomes larger. This averaging of pixels leads to a loss of detail in the image. There is therefore a trade-off between the size of the objects that can be tracked and the displacement that can be captured.

In this thesis we address the basic problem of increasing the size of the basin of attraction of a matching function. We use an image descriptor called distribution fields (DFs). By blurring the images in DF space instead of in pixel space, we increase the size of the basin of attraction with respect to traditional methods. We show competitive results using DFs both in object tracking and in optical flow. Finally, we demonstrate an application of capturing large motions for temporal video stitching.

In this dissertation we present a general framework for constructing and reasoning over joint graphical model formulations of NLP problems. Individual models are composed using weighted Boolean logic constraints, and inference is performed using belief propagation. The systems we develop are composed of two parts: one a representation of syntax, the other a desired end task (semantic role labeling, named entity recognition, or relation extraction). By modeling these problems jointly, both models are trained in a single, integrated process, with uncertainty propagated between them. This mitigates the accumulation of errors typical of pipelined approaches.
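As a toy illustration of the kind of inference involved (with invented potentials, not the thesis's models), sum-product belief propagation on a two-variable graph whose variables are coupled by a weighted agreement constraint:

```python
import numpy as np

# Two binary variables, a "syntax" decision and a "task" decision,
# joined by a weighted Boolean constraint "syntax == task" (weight w
# favors agreement). On a two-node graph one round of message passing
# is exact. All potentials here are invented for illustration.
phi_syn = np.array([0.3, 0.7])        # unary belief over syntax variable
phi_task = np.array([0.6, 0.4])       # unary belief over task variable
w = 3.0
psi = np.exp(w * np.eye(2))           # pairwise agreement factor

msg_syn_to_task = psi.T @ phi_syn     # sum-product messages
msg_task_to_syn = psi @ phi_task
belief_task = phi_task * msg_syn_to_task
belief_task /= belief_task.sum()
belief_syn = phi_syn * msg_task_to_syn
belief_syn /= belief_syn.sum()
```

The syntax variable's preference for label 1 flips the task variable's marginal toward label 1 as well: this is the uncertainty propagation between components that a hard pipeline would discard.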

Additionally we propose a novel marginalization-based training method in which the error signal from end task annotations is used to guide the induction of a constrained latent syntactic representation. This allows training in the absence of syntactic training data, where the latent syntactic structure is instead optimized to best support the end task predictions. We find that across many NLP tasks this training method offers performance comparable to fully supervised training of each individual component, and in some instances improves upon it by learning latent structures which are more appropriate for the task.

To address this problem of human errors in HIPs, this thesis investigates two approaches for online process guidance, i.e., for guiding process performers while a process is being executed. Both approaches rely on monitoring a process execution and base the guidance they provide on a detailed formal process model that captures the recommended ways to perform the corresponding HIP. The first approach, which we call deviation detection and explanation, automatically detects when an executing HIP deviates from a set of recommended executions of that HIP, as specified by the process model. Such deviations could represent errors and, thus, detecting and reporting deviations as they occur could help catch errors before they lead to harm. The approach also provides information to help explain a detected deviation, to assist process performers with identifying potential errors and with planning recovery from these errors. The second approach, which we call process state visualization, proactively guides process performers by showing them information relevant to the current process execution, such as the activities that need to be performed at each point of that process execution. The goal of the process state visualization approach is to reduce the number of human errors.
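The deviation-detection idea can be sketched as monitoring an event stream against a process model; the finite-state model below is a deliberate simplification of the richer formal process language the thesis uses, and the state and event names are invented:

```python
# Hypothetical sketch: deviation detection against a finite-state
# process model. Real HIP models are far richer; this only shows the
# monitor-and-explain loop.
class ProcessMonitor:
    def __init__(self, transitions, start):
        self.transitions = transitions   # {state: {event: next_state}}
        self.state = start

    def observe(self, event):
        """Advance on a recommended event; report a deviation otherwise."""
        allowed = self.transitions.get(self.state, {})
        if event in allowed:
            self.state = allowed[event]
            return None
        # Deviation: explain it by listing what was expected here.
        return {"state": self.state, "got": event,
                "expected": sorted(allowed)}

model = {"draw_blood": {"label_specimen": "labeled"},
         "labeled": {"verify_label": "verified"}}
monitor = ProcessMonitor(model, start="draw_blood")
print(monitor.observe("label_specimen"))    # None: on the recommended path
print(monitor.observe("deliver_specimen"))  # deviation, with explanation
```

The returned explanation (current state, observed event, expected events) is exactly the information a process performer needs to decide whether the deviation is an error and how to recover.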

The major contributions of this work can be summarized as follows:

-- Compared the relative strengths and weaknesses of several techniques for process elicitation and process model validation to help create correct and sufficiently complete process models needed for the proposed online process guidance approaches.

-- Developed an approach for deviation detection and explanation and evaluated it with realistic process models and synthetic process executions with seeded errors.

-- Recognized delayed deviation detection as a potential obstacle for the approach and investigated its frequency and consequences.

-- Developed an initial approach for visualization of process execution state and demonstrated it on a medical case study.

In this thesis we introduce a general framework for learning reusable parameterized skills. Reusable skills are parameterized procedures that, given a description of a problem to be solved, produce appropriate behaviors or policies. They can be sequentially and hierarchically combined with other skills to produce progressively more abstract and temporally extended behaviors.

We identify three major challenges involved in the construction of such skills. First, an agent should be capable of solving a small number of problems and generalizing these experiences to construct a single reusable skill. The skill should be capable of producing appropriate behaviors even when applied to previously unseen variations of a problem. We introduce a method for estimating properties of the lower-dimensional manifold on which problem solutions lie. This allows for the construction of unified models for predicting policies from task parameters.
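A minimal sketch of this generalization step: fit a smooth map from task parameters to the policy parameters recovered on a few solved instances, then predict policies for unseen task variations. The linear map and synthetic data below are illustrative assumptions, not the thesis's manifold-estimation method:

```python
import numpy as np

# Synthetic setup: policies for 20 solved task instances, generated by a
# hidden linear map plus noise (all numbers invented for illustration).
rng = np.random.default_rng(0)
tasks = rng.uniform(-1, 1, size=(20, 2))          # task parameters tau
true_map = np.array([[2.0, -1.0, 0.5],
                     [0.3,  1.5, -2.0]])
policies = tasks @ true_map + 0.01 * rng.normal(size=(20, 3))

# Fit theta ~= tau @ W by least squares on the solved instances.
W, *_ = np.linalg.lstsq(tasks, policies, rcond=None)

new_task = np.array([0.2, -0.4])
predicted_policy = new_task @ W    # policy for an unseen task variation
```

The point is the workflow, not the model class: solve a handful of instances, fit a low-dimensional predictor over task parameters, and reuse it in place of solving every new variation from scratch.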

Second, the agent should be able to identify when a skill can be hierarchically decomposed into specialized sub-skills. We observe that the policy manifold may be composed of disjoint, piecewise-smooth charts, each one encoding solutions for a subclass of problems. Identifying and modeling sub-skills allows for the aggregation of related behaviors into single, more abstract skills.
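When solutions fall on disjoint charts, even a simple clustering of the recovered policy parameters can separate candidate sub-skills. The two synthetic "charts" and the use of plain k-means below are assumptions for illustration, standing in for the chart-identification step:

```python
import numpy as np

# Two well-separated groups of policy parameters, mimicking two charts
# of the policy manifold (synthetic data, invented for illustration).
rng = np.random.default_rng(1)
chart_a = rng.normal(loc=[0.0, 0.0], scale=0.1, size=(30, 2))
chart_b = rng.normal(loc=[5.0, 5.0], scale=0.1, size=(30, 2))
policies = np.vstack([chart_a, chart_b])

def kmeans(points, k=2, iters=10):
    """Plain k-means; seeding from the first/last point suffices here."""
    centers = points[[0, -1]]
    for _ in range(iters):
        d = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        centers = np.array([points[labels == j].mean(0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(policies)   # each label = one candidate sub-skill
```

Each cluster can then be modeled by its own predictor (as in the previous challenge), yielding specialized sub-skills that together cover the full problem class.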

Finally, the agent should be able to actively select on which problems to practice in order to more rapidly become competent in a skill. Thoughtful and deliberate practice is one of the defining characteristics of human expert performance. By carefully choosing on which problems to practice, the agent can more rapidly construct a skill that performs well over a wide range of problems.
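One hedged sketch of active practice selection: among candidate tasks, practice next where the current skill is least certain, here measured as disagreement among an ensemble of predictors. The ensemble criterion and random maps below are illustrative stand-ins, not the selection strategy developed in the thesis:

```python
import numpy as np

# Candidate (unsolved) task parameters and a small ensemble of maps from
# task parameters to policy parameters (all invented for illustration;
# in practice the ensemble would be fit on the solved instances).
rng = np.random.default_rng(2)
candidates = rng.uniform(-1, 1, size=(50, 2))
ensemble = [rng.normal(size=(2, 3)) for _ in range(5)]

preds = np.stack([candidates @ W for W in ensemble])   # (5, 50, 3)
uncertainty = preds.std(axis=0).sum(axis=1)            # per-task spread
next_task = candidates[np.argmax(uncertainty)]         # practice here next
```

Practicing where predictions disagree most directs experience toward the regions of task space where the skill is weakest.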

We address these challenges via a general framework for skill acquisition. We evaluate it on simulated decision-problems and on a physical humanoid robot, and demonstrate that it allows for the efficient and active construction of reusable skills.

We first study covert communication over additive white Gaussian noise (AWGN) channels, a standard model for radio-frequency (RF) communication. We present a square root limit on the amount of information that can be transmitted covertly and reliably over such channels. Specifically, we prove that if the channels from the transmitter to the intended receiver and to the warden are both AWGN, then O(\sqrt{n}) covert bits can be reliably transmitted to the receiver in n uses of the channel. Conversely, attempting to transmit more than O(\sqrt{n}) bits either results in detection by the warden with probability one or a non-zero probability of decoding error at the receiver as n \to \infty.
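One immediate consequence of this square root law, stated as a one-line calculation:

\[
\lim_{n \to \infty} \frac{O(\sqrt{n})}{n} \;=\; \lim_{n \to \infty} O\!\left(\frac{1}{\sqrt{n}}\right) \;=\; 0,
\]

so covert communication is reliable but achieves zero rate per channel use in the limit, in contrast with conventional (non-covert) communication, whose capacity is a positive constant per channel use.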

Next we study the impact of the warden's ignorance of the time of the communication attempt. We prove that if the channels from the transmitter to the intended receiver and the warden are both AWGN, and if a single n-symbol slot out of T(n) such slots is selected secretly (forcing the warden to monitor all T(n) slots), then O(\min\{\sqrt{n \log T(n)}, n\}) covert bits can be transmitted reliably using this slot. Conversely, attempting to transmit more than O(\sqrt{n \log T(n)}) bits either results in detection with probability one or a non-zero probability of decoding error at the receiver.
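The minimum in this bound separates two regimes, by a direct reading of the formula:

\[
\min\{\sqrt{n \log T(n)},\, n\} =
\begin{cases}
\sqrt{n \log T(n)}, & \log T(n) \le n,\\[2pt]
n, & \log T(n) > n,
\end{cases}
\]

since \(\sqrt{n \log T(n)} \le n\) exactly when \(\log T(n) \le n\). In the first regime, timing uncertainty buys a multiplicative gain of \(\sqrt{\log T(n)}\) over the single-slot square root law; in the second, the trivial cap of n symbols per slot binds.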

We then study covert optical communication and characterize the ultimate limit of covert communication that is secure against the most powerful physically-permissible adversary. We show that, although covert communication is impossible when a channel injects the minimum noise allowed by quantum mechanics, it is attainable in the presence of any noise in excess of this minimum (such as the thermal background). In this case, O(\sqrt{n}) covert bits can be transmitted reliably in n optical channel uses using standard optical communication equipment. The all-powerful adversary may intercept all transmitted photons not received by the intended receiver, and employ arbitrary quantum memory and measurements. Conversely, we show that this square root scaling cannot be circumvented. Finally, we corroborate our theory in a proof-of-concept experiment on an optical testbed.

Given a convex polyhedron P and a point s on its surface (the source), the ridge tree, or cut locus, is the set of points that have multiple shortest paths from s on the surface of P. If we compute the shortest paths from s to all polyhedral vertices of P and cut the surface along these paths, we obtain a planar polygon called the shortest path star (sp-star) unfolding. It is known that for any convex polyhedron and source point, the ridge tree is contained in the sp-star unfolding polygon [8]. Given the combinatorial structure of a ridge tree, we show how to construct the ridge tree and the sp-star unfolding in which it lies. In this process, we address several problems concerning the existence of sp-star unfoldings on specified source point sets.

Finally, we introduce and study a new variant of the sp-star unfolding called the (geodesic) star unfolding. In this unfolding, we cut the surface of the convex polyhedron along a set of non-crossing geodesics (not necessarily shortest ones). We study its properties and address its realization problem. We then consider the following problem: given a geodesic star unfolding of some convex polyhedron and source point, how can we derive the sp-star unfolding of the same polyhedron and source point? We introduce a new algorithmic operation and perform experiments with it on a large number of geodesic star unfolding polygons. The experimental data provides strong evidence that successive applications of this operation on a geodesic star unfolding lead to the sp-star unfolding.
