To accomplish this adaptation, our approach is to use VANET data to learn drivers' characteristics. This information, along with traffic data, can be used to customize VANETs to individual drivers. In this dissertation, we show that this process benefits all drivers by reducing the collision probability of the network of vehicles. Our Monte Carlo simulation results show that this approach achieves more than a 25% reduction in traffic collision probability compared to the case of optimized but equal vehicular communication access for each vehicle, a considerable advantage over systems that do not adapt to the driver.

First, we propose a method to estimate the distribution of a driver's characteristics from VANET data. This estimate is essential for our intended applications in accident warning systems and vehicular communications.

Second, the estimated distribution and the traffic information are used to adapt the transmission rates of vehicles to each driver's safety level in order to reduce the number of collisions in the network. We derive the packet success probability for a chain of vehicles, taking multi-user interference, path loss, and fading into account. Then, by considering the delay constraints and the types of potential collisions, we approximate the required channel access probabilities and evaluate the resulting collision probability.
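As an illustration of how such a packet success probability might be estimated, the following sketch runs a small Monte Carlo experiment for a chain of equally spaced vehicles under Rayleigh fading, power-law path loss, and slotted random channel access. All parameters (spacing, path-loss exponent, SINR threshold, access probability) are hypothetical placeholders, not the values used in the dissertation.

```python
import random

def packet_success_prob(n_vehicles=10, spacing=25.0, p_access=0.1,
                        alpha=3.0, sinr_threshold=1.0, noise=1e-9,
                        trials=20000, seed=1):
    """Monte Carlo estimate of the probability that vehicle 0's safety
    message reaches vehicle 1, under Rayleigh fading (exponential power
    gains), power-law path loss d^-alpha, and interference from the
    other vehicles, each transmitting independently with p_access."""
    rng = random.Random(seed)
    positions = [i * spacing for i in range(n_vehicles)]
    tx, rx = 0, 1
    successes = 0
    for _ in range(trials):
        # Desired link: Rayleigh fading -> exponentially distributed power.
        d = abs(positions[tx] - positions[rx])
        signal = rng.expovariate(1.0) * d ** (-alpha)
        # Sum faded interference from every other active vehicle.
        interference = 0.0
        for j in range(n_vehicles):
            if j in (tx, rx):
                continue
            if rng.random() < p_access:
                dj = abs(positions[j] - positions[rx])
                interference += rng.expovariate(1.0) * dj ** (-alpha)
        if signal / (interference + noise) >= sinr_threshold:
            successes += 1
    return successes / trials

ps_low = packet_success_prob(p_access=0.05)   # light channel load
ps_high = packet_success_prob(p_access=0.5)   # heavy channel load
```

Raising the channel access probability increases multi-user interference and therefore lowers the per-packet success probability, which is the trade-off the adapted transmission rates must balance.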

Third, since the packet success probability, and thus communication interference, noticeably affects the collision probability, we examine various interference models and their effect on the collision probability in greater detail. In our analysis, two signal propagation models, with and without carrier sensing, are considered for the dissemination of periodic safety messages, and we show that employing more accurate interference models yields a higher level of safety (lower collision probability) for the network.
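To illustrate why the choice of interference model matters, the sketch below compares the mean interference power at a receiver under two simplified access models: pure random access, and random access with an idealized carrier-sensing rule that silences interferers near the transmitter. The geometry and sensing range are hypothetical placeholders, not the dissertation's models.

```python
def mean_interference(positions, rx, tx, p_access, alpha, cs_range=None):
    """Mean interference power at vehicle rx, averaging over random
    access with probability p_access and path loss d^-alpha. If
    cs_range is given, vehicles within cs_range of the transmitter
    defer (idealized carrier sensing) and contribute nothing."""
    total = 0.0
    for j, pos in enumerate(positions):
        if j in (tx, rx):
            continue
        if cs_range is not None and abs(pos - positions[tx]) <= cs_range:
            continue  # suppressed by carrier sensing
        total += p_access * abs(pos - positions[rx]) ** (-alpha)
    return total

positions = [i * 25.0 for i in range(20)]  # hypothetical 20-vehicle chain
i_aloha = mean_interference(positions, rx=1, tx=0, p_access=0.1, alpha=3.0)
i_csma = mean_interference(positions, rx=1, tx=0, p_access=0.1, alpha=3.0,
                           cs_range=100.0)
```

Because the carrier-sensing model removes the strongest nearby interferers, the two models predict different interference levels at the receiver, and hence different packet success and collision probabilities.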

Finally, the relation between the intensity of an ad hoc network (the number of vehicles in a given area) and the performance of the system is not well understood. Hence, we study a reverse approach in which the geometry (intensity) of the unmanned aerial vehicles varies and requirements such as safety and coverage need to be satisfied. The numerical results show that the safety and interference requirements limit the coverage of the network, and only a relatively small range of intensities satisfies all three requirements.

In this work, the existing small-signal noise modeling approach is extended to capture the weakly nonlinear properties of the transistors that are commonly used in cryogenic amplification. Indium phosphide high-electron-mobility transistors and silicon-germanium heterojunction bipolar transistors are considered. The goal of this work is to identify the fundamental dynamic range limitations of these transistors in a way that is not device-specific but applicable to the corresponding device families.

Identifying the fundamental limitations of dynamic range in a semiconductor device requires a broad understanding of the physical properties of the transistors. To this end, a theoretical analysis is first presented as a function of temperature. Small-signal noise modeling is then discussed using techniques that are well established in the literature. This is followed by an explanation of the nonlinear modeling approach used in this work, which relies on the Taylor series expansion coefficients of the dominant nonlinear mechanisms of the transistors. The modeling results are interpreted with respect to the initially presented theoretical framework. Finally, dynamic range performance is studied as a function of source and load terminations. In addition to this systematic approach to understanding the physical limitations of dynamic range, model-to-measurement agreement for broadband cryogenic amplifiers is presented to verify the accuracy of the modeling approach.
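As a sketch of the Taylor-series view of weak nonlinearity, the code below numerically extracts the first three expansion coefficients g1, g2, g3 of a memoryless I–V characteristic and evaluates the standard two-tone input IP3 amplitude for a cubic nonlinearity, A_IIP3 = sqrt(4|g1| / (3|g3|)). The exponential characteristic is a toy example for illustration only, not a model of the HEMTs or HBTs studied here.

```python
import math

def taylor_coeffs(f, v0=0.0, h=1e-4):
    """Estimate the Taylor coefficients g1, g2, g3 of i = f(v) around
    bias v0 (i.e., f', f''/2, f'''/6) via central finite differences."""
    g1 = (f(v0 + h) - f(v0 - h)) / (2 * h)
    g2 = (f(v0 + h) - 2 * f(v0) + f(v0 - h)) / (2 * h * h)
    g3 = (f(v0 + 2 * h) - 2 * f(v0 + h)
          + 2 * f(v0 - h) - f(v0 - 2 * h)) / (12 * h ** 3)
    return g1, g2, g3

def iip3_amplitude(g1, g3):
    """Two-tone input IP3 amplitude for a memoryless cubic nonlinearity."""
    return math.sqrt(4.0 / 3.0 * abs(g1 / g3))

# Toy exponential I-V characteristic (hypothetical, illustration only).
f = lambda v: 1e-3 * (math.exp(v / 0.05) - 1.0)
g1, g2, g3 = taylor_coeffs(f)
a_iip3 = iip3_amplitude(g1, g3)
```

The ratio g1/g3 is what ties the small-signal (gain) and weakly nonlinear (distortion) descriptions together, which is why defining these coefficients per device family, rather than per device, supports general dynamic range conclusions.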

Graphs have been widely used in image-related applications. Traditional graph-based image analysis models include pixel-based graph-cut techniques for image segmentation, low-level and high-level image feature extraction based on graph statistics, and other related approaches that rely on graph similarity testing. To compare images through their graph representations, a graph similarity testing algorithm is essential. Most existing graph similarity measures are not designed for generic tasks such as image classification and retrieval, and other models are either not scalable or not consistently effective. Graph spectral theory is a powerful analytical tool for capturing and representing the structural information of a graph, but applying it to image understanding remains a challenge.

In this dissertation, we focus on developing fast and effective image analysis models based on spectral graph theory and other graph-related mathematical tools. We first propose a fast graph similarity testing method based on the idea of heat content and the mathematical theory of diffusion over manifolds, and demonstrate its ability by comparing random graphs and power-law graphs. Building on this graph analysis model, we develop a graph-based image representation and understanding framework. We first propose the image heat content feature and then discuss several approaches to further improve the model. The first component of the improved framework is a novel graph generation model, which greatly reduces the size of the traditional pixel-based image graph representation while remaining effective in representing an image. We also propose and discuss several low-level and high-level image features based on spectral graph information, including the oscillatory image heat content, weighted eigenvalues, and the weighted heat content spectrum. Experiments show that the proposed models are invariant to non-structural changes in images and perform well on standard image classification benchmarks. Furthermore, our image features are robust to small distortions and changes of viewpoint. The model also captures important structural information in an image and performs well alone or in combination with traditional techniques. We then present two real-world software development projects that use graph-based image processing techniques. Finally, we discuss the pros, cons, and intuition of the proposed model by demonstrating the properties of the proposed image features and the correlations between different image features.
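As a simplified illustration of a spectral heat-content computation, the sketch below evaluates Q(t) = Σ_i a_i² e^(−λ_i t) for a Dirichlet Laplacian, where (λ_i, φ_i) are its eigenpairs and a_i is the projection of the all-ones initial condition onto φ_i; Q(0) equals the number of interior vertices and decays as heat escapes through the boundary. The path graph and Dirichlet boundary here are illustrative choices, not the image-graph construction used in this dissertation.

```python
import numpy as np

def heat_content(L_dirichlet, ts):
    """Heat content Q(t) of the diffusion u' = -L u started from the
    all-ones vector on the interior vertices, computed spectrally."""
    lam, phi = np.linalg.eigh(L_dirichlet)
    # Coefficients of the all-ones initial condition in the eigenbasis.
    a = phi.T @ np.ones(L_dirichlet.shape[0])
    return np.array([np.sum(a ** 2 * np.exp(-lam * t)) for t in ts])

def path_dirichlet_laplacian(n):
    """Combinatorial Laplacian of a path on n+2 vertices, restricted to
    the n interior vertices (Dirichlet boundary at both endpoints)."""
    return (2.0 * np.eye(n)
            - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1))

L = path_dirichlet_laplacian(8)
q = heat_content(L, ts=[0.0, 0.5, 1.0])  # monotonically decaying curve
```

Sampling Q(t) at a few time points gives a short, permutation-invariant signature of the graph, which is the kind of quantity a heat-content-based similarity test can compare across graphs.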

This research aims to explore new forms of iterative computation that reduce unnecessary work so as to accelerate large-scale data processing in a distributed environment. We propose iterative computation transformations for well-known data mining and machine learning algorithms, such as expectation-maximization, nonnegative matrix factorization, belief propagation, and graph algorithms (e.g., PageRank), which are used in a wide range of application domains. First, we show how to accelerate expectation-maximization algorithms with frequent updates in a distributed environment. Then, we show how to efficiently scale distributed nonnegative matrix factorization with block-wise updates. Next, we present our approach to scaling distributed belief propagation with prioritized block updates. Finally, we show how to efficiently perform distributed incremental computation on evolving graphs.
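As a minimal illustration of reusing prior state to avoid unnecessary iterations, the sketch below runs power-iteration PageRank on a small hypothetical graph, then warm-starts the computation from the previous ranks after a small change to the graph; the warm start converges in fewer sweeps than recomputing from the uniform vector. This is a toy sequential analogue of incremental computation on an evolving graph, not the distributed framework described here.

```python
def pagerank(links, d=0.85, tol=1e-10, ranks=None):
    """Power-iteration PageRank. `links` maps each node to its out-link
    list. Passing a previous `ranks` dict warm-starts the iteration;
    returns (converged ranks, number of sweeps used)."""
    nodes = list(links)
    n = len(nodes)
    if ranks is None:
        ranks = {u: 1.0 / n for u in nodes}  # cold start: uniform
    iters = 0
    while True:
        new = {u: (1 - d) / n for u in nodes}
        for u, outs in links.items():
            if outs:
                share = d * ranks[u] / len(outs)
                for v in outs:
                    new[v] += share
            else:  # dangling node: spread its rank uniformly
                for v in nodes:
                    new[v] += d * ranks[u] / n
        iters += 1
        if max(abs(new[u] - ranks[u]) for u in nodes) < tol:
            return new, iters
        ranks = new

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["a"]}
r, cold_iters = pagerank(links)      # initial computation
links["d"] = ["a", "c"]              # the graph evolves slightly
_, warm_iters = pagerank(links, ranks=r)  # incremental: reuse old ranks
_, cold2 = pagerank(links)                # recompute from scratch
```

Because the old fixed point is much closer to the new one than the uniform vector is, the warm-started run saves the sweeps that a from-scratch recomputation would spend rediscovering mostly unchanged ranks.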

We elaborate on how to implement these transformed iterative computations on existing distributed programming models, such as the MapReduce-based model, and develop new scalable and efficient distributed programming models and frameworks where necessary. The goal of these supporting frameworks is to relieve programmers of the burden of specifying the transformation of iterative computations and the communication mechanisms, and to automatically optimize the execution of the computation. Our techniques are evaluated extensively to demonstrate their efficiency. While the techniques we propose are presented in the context of specific algorithms, they address challenges commonly faced in many other algorithms.

In this dissertation, we present our recent work on PUF-based secure computation from three aspects: variability, modeling, and noise sensitivity, which we consider the foundations of our study. Moreover, we find that the three factors interact with one another: for example, modeling techniques can be used to improve the limited reliability caused by noise sensitivity, quantifying the variability can effectively mitigate the impact of noise, and modeling helps characterize the physical variability precisely.
