Computational explorations of the evolution of artificial neural networks in Pavlovian environments
The present work initiates a research line in the study of artificial life through a preliminary characterization of a computational approach to evolutionary interpretations of Pavlovian-conditioning phenomena. The approach was implemented through a Neuro-Computational/Genetic-Algorithm (NC-GA) hybrid model. The NC model described the functioning of neuron-like processing elements that were interconnected to form artificial neural networks (ANNs). The GA consisted of a set of rules for selecting ANNs for mating and reproduction. ANNs were developed from virtual chromosomes encoding variables that determined the course of a neurodevelopmental program motivated by general concepts from developmental neuroscience. All chromosomes had a fixed length and encoded the same set of variables. The program also involved a one-to-many relation, because most of the variables encoded by the chromosomes were probabilistic.

The NC-GA model was characterized through computer simulations. Four simulation experiments were performed, each consisting of two kinds of simulations: evolution and test. In the evolution simulations, the GA was used to evolve ANNs that were trained in a Pavlovian procedure. ANNs with higher conditional-response proportions had a higher probability of being selected for mating and reproduction. In the test simulations, the behavioral competence of the evolved ANNs was determined by exposing them to conditions different from the ancestral ones.

In general, the results from the evolution simulations demonstrated that the mean chromosomal overlap and the mean population fitness increased as negatively accelerated functions of generations. Also, most ANNs at the beginning of evolution showed no learning, whereas ANNs by the end of evolution showed learning.

In Experiment 1, ANNs were selected for increased responding under forward-delay procedures in which the interstimulus interval (ISI) and the kind of CS were manipulated.
After evolution, ANN sizes increased as a function of ISIs, and ANN performances in the test simulations were consistent with ISI functions, optimal-ISI noninvariance, and CS nonequipotentiality. In Experiment 2, two CSs were independently paired with the US. After evolution, ANN sizes increased as a nonmonotonic function of the ISI, and ANN performances in the test simulations showed generalization, discrimination, and blocking. In Experiments 3 and 4, ANNs were selected for orthogonal and nonorthogonal discrimination, respectively. Performances in the test simulations of both experiments also showed generalization, discrimination, and blocking. Collectively, these results are consistent with the pursuit of general-process approaches to learning. However, such approaches also allow for interpretations in which biological constraints on learning are seen as emerging from variations in general neurobiological processes, and as imposing limits on the range of variation of general biobehavioral processes.
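The selection rule the abstract describes, in which ANNs with higher conditional-response proportions are more likely to be chosen for mating and reproduction, corresponds to standard fitness-proportional (roulette-wheel) selection in genetic algorithms. A minimal sketch follows; the function name and the illustrative fitness values are hypothetical and not taken from the dissertation itself.

```python
import random

def select_parent(fitnesses, rng):
    """Return the index of one parent, chosen with probability
    proportional to its fitness (roulette-wheel selection).

    Here each fitness is an ANN's conditional-response
    proportion, a value in [0, 1]."""
    total = sum(fitnesses)
    pick = rng.uniform(0.0, total)
    cumulative = 0.0
    for i, fitness in enumerate(fitnesses):
        cumulative += fitness
        if pick <= cumulative:
            return i
    return len(fitnesses) - 1  # guard against floating-point rounding

# Illustrative population of four ANNs (hypothetical fitness values).
rng = random.Random(42)
fitnesses = [0.05, 0.10, 0.60, 0.25]
counts = [0, 0, 0, 0]
for _ in range(10_000):
    counts[select_parent(fitnesses, rng)] += 1
# Over many draws, the fittest network (index 2) is selected most often,
# so its chromosome contributes disproportionately to the next generation.
```

Under this rule, low-fitness networks are not eliminated outright; they merely reproduce less often, which is consistent with the gradual, negatively accelerated rise in mean population fitness reported above.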
Subject categories: Psychology, Behavioral; Artificial Intelligence; Computer Science
Jose Enrique Burgos, "Computational explorations of the evolution of artificial neural networks in Pavlovian environments" (January 1, 1996). Electronic Doctoral Dissertations for UMass Amherst.