Learning object models for adaptive perceptual systems
Real-world perceptual environments are characterized by objects that often co-occur, occlude one another, and display time-variant behavior. In addition, the signal-to-noise ratio may vary. Successful object recognition depends on the extraction of adequate disambiguating features, which are neither easily identifiable nor stationary in such environments. In an effort to improve recognition accuracy, and to do so efficiently, Adaptive Perceptual Systems have emerged that reconfigure their signal processing in response to variations in the signal to ensure extraction of adequate features. Key to adaptive signal processing is determining when and in what manner to modify signal processing. Symbolic object models play a pivotal role in this process by serving to interpret data, predict signal behavior, and account for interference from objects simultaneously present. Unfortunately, symbolic object models are typically hand-crafted, a tedious and error-prone task that constitutes a knowledge-acquisition bottleneck, which limits object database size and impedes deployment in new and changing environments. This thesis explores the integration of feature extraction with model construction, viewing the two processes as driving one another until the goal of producing unambiguous symbolic object models is satisfied. The paradigm has been fielded to acquire acoustic-event models for a sound understanding system.
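The closed loop described above, in which extraction parameters are re-tuned whenever the current object models cannot disambiguate the signal, can be sketched in miniature. This is a hypothetical illustration, not the thesis's implementation: the toy feature (a strided mean), the prototype-based models, and all function names are assumptions introduced purely to show the adapt-until-unambiguous control flow.

```python
def extract_features(signal, stride):
    """Toy feature extractor: mean of the signal sampled at the given stride.
    A smaller stride stands in for more expensive, finer-grained processing."""
    samples = signal[::stride]
    return sum(samples) / len(samples)

def classify(feature, models, tolerance):
    """Return labels of all object models whose prototype value is within
    tolerance of the extracted feature; more than one label = ambiguity."""
    return [label for label, proto in models.items()
            if abs(feature - proto) <= tolerance]

def adaptive_recognize(signal, models, tolerance, max_steps=4):
    """Reconfigure extraction (halve the stride) until the symbolic object
    models yield exactly one interpretation, or give up after max_steps."""
    stride = 8  # start with cheap, coarse processing
    for _ in range(max_steps):
        feature = extract_features(signal, stride)
        matches = classify(feature, models, tolerance)
        if len(matches) == 1:
            return matches[0]          # unambiguous: done
        stride = max(1, stride // 2)   # adapt: extract finer features
    return None                        # still ambiguous

# Two hypothetical acoustic-event models with nearby prototype values.
models = {"event_A": 0.4, "event_B": 0.6}
signal = [0.4, 0.6, 0.6, 0.6, 0.8, 0.6, 0.6, 0.6,
          0.6, 0.6, 0.6, 0.6, 0.6, 0.6, 0.6, 0.6]
# Coarse extraction matches both models; the loop refines and settles on one.
result = adaptive_recognize(signal, models, tolerance=0.15)
```

The point of the sketch is the direction of control: the object models do not merely label the output of a fixed front end, they decide when the front end's output is inadequate and trigger its reconfiguration.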
Bhandaru, Malini Krishnan, "Learning object models for adaptive perceptual systems" (1998). Doctoral Dissertations Available from Proquest. AAI9823719.