A hybrid architecture for adaptive robot control
Autonomous robot systems operating in an uncertain environment pose many challenges to their control architecture. Such systems must be reactive with respect to local disturbances and uncertainties and have to adapt to more persistent changes in environmental conditions and task requirements. In autonomous systems, this adaptation often has to occur without outside intervention and within a single trial while avoiding catastrophic failure.

This dissertation develops a hybrid control architecture that combines methods from control theory, discrete event systems, and reinforcement learning to address these challenges and to permit the operation of complex robots at various levels of autonomy. Control is derived from a set of stable, continuous control elements which provide local reactivity and allow system behavior to be modeled as a Discrete Event Dynamic System (DEDS). Methods for automatic synthesis of a DEDS supervisor are applied to this model to impose safety constraints on the supervisory control structure, which is interpreted as a Markov Decision Process (MDP). This structure directs an exploratory learning component and permits the introduction of prior control knowledge to a reinforcement learning algorithm.

Actions in this framework are temporally extended, closed-loop controllers which suppress local, bounded disturbances without introducing additional control decisions. This leads to a novel, abstract state representation based on the convergence of actions and reduces the amount of robot operation required to autonomously acquire task-specific policies. Moreover, previously learned policies can be re-used as control actions to acquire hierarchical solutions for subsequent tasks. This enables the robot to address increasingly complex tasks over time and extends the scope of reinforcement learning techniques.
Throughout this dissertation, the operation of the integrated control architecture is illustrated with a range of example tasks performed on a set of real and simulated robot platforms.
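The core learning loop described in the abstract can be illustrated with a minimal sketch. This is not the dissertation's code: the state space, the hand-written SAFE table (standing in for an automatically synthesized DEDS supervisor), and the run_controller function (standing in for executing a closed-loop controller until it converges) are all hypothetical toy stand-ins. It shows only the general shape of Q-learning over an MDP whose actions are temporally extended controllers and whose exploration is restricted to supervisor-admissible actions.

```python
import random

# Toy abstract state/action spaces (illustrative only; the dissertation's
# abstract states are defined by the convergence of controller activations).
N_STATES, N_ACTIONS = 4, 3

# Stand-in for a synthesized DEDS supervisor: for each abstract state,
# the subset of controller activations that satisfies the safety constraints.
SAFE = {0: [0, 1, 2], 1: [0, 1], 2: [0, 2], 3: [0, 1, 2]}

def run_controller(state, action):
    """Stand-in for running a closed-loop controller to convergence.
    Returns (next abstract state, reward); toy deterministic dynamics."""
    next_state = (state + action + 1) % N_STATES
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

def q_learning(episodes=500, steps=20, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    for _ in range(episodes):
        s = 0
        for _ in range(steps):
            admissible = SAFE[s]  # supervisor constrains exploration
            if rng.random() < eps:
                a = rng.choice(admissible)          # explore safely
            else:
                a = max(admissible, key=lambda x: Q[s][x])  # exploit
            s2, r = run_controller(s, a)
            # Bootstrap only over actions admissible in the next state.
            best_next = max(Q[s2][x] for x in SAFE[s2])
            Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
            s = s2
    return Q

Q = q_learning()
```

Because unsafe activations are never selected or bootstrapped over, the learner explores only within the supervisor's admissible language, which is the mechanism by which the architecture avoids catastrophic failure during within-trial adaptation.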
Huber, Manfred, "A hybrid architecture for adaptive robot control" (2000). Doctoral Dissertations Available from Proquest. AAI9988799.