
FUNCTION AND DISSIPATION IN FINITE STATE AUTOMATA - FROM COMPUTING TO INTELLIGENCE AND BACK

Abstract
Society has benefited from the technological revolution and the tremendous growth in computing powered by Moore's law. However, we are fast approaching the ultimate physical limits in terms of both device size and the associated energy dissipation. It is important to characterize these limits in a physically grounded and implementation-agnostic manner, in order to capture the fundamental energy dissipation costs of performing computing operations with classical information in nano-scale quantum systems. It is also necessary to identify and understand the effects of quantum indistinguishability, noise, and device variability on these dissipation limits. Identifying these parameters is crucial to designing more energy-efficient computing systems moving forward. In this dissertation, we provide a physical description of the finite state automaton, an abstract tool commonly used to describe computational operations, under the Referential Approach to physical information theory. We derive the fundamental limits of dissipation associated with a state transition in deterministic and probabilistic finite state automata, and propose efficacy measures to capture how well a particular state transition has been physically realized. We use these dissipation bounds to understand the limits of dissipation during learning, across the training and testing phases of feed-forward and recurrent neural networks. This study of dissipation in neural networks provides key hints at how dissipation is fundamentally intertwined with learning in physical systems. These ideas connecting energy dissipation, entropy, and physical information provide the perfect toolkit to critically analyze the very foundations of computing and our computational approaches to artificial intelligence.

In the second part of this dissertation, we derive the non-equilibrium reliable low dissipation condition for predictive inference in self-organized systems. This brings together the central ideas of homeostasis, prediction, and energy efficiency under a single non-equilibrium constraint. The work is further extended to study the relationship between adaptive learning and the reliable high dissipation conditions, and the exploitation-exploration trade-offs in active agents. Using these results, we discuss the differences between observer-dependent and observer-independent computing, and propose an alternative descriptive framework of intelligence in physical systems grounded in thermodynamics. This framework, called thermodynamic intelligence, is used to guide the engineering methodologies (devices and architectures) required to implement these descriptions.
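As orientation for the dissipation bounds discussed above, the familiar Landauer-type limit on the heat dissipated by a single logically irreversible state transition can be sketched as follows. This is the standard textbook form, stated here only for context; it is not the dissertation's own referential-approach bound, and the symbols (k_B for Boltzmann's constant, T for the temperature of the thermal environment, H for Shannon entropy measured in bits) are standard notation rather than quantities defined in the thesis.

\[
E_{\mathrm{diss}} \;\ge\; k_B T \ln 2 \,\bigl[\, H(S_{\mathrm{before}}) - H(S_{\mathrm{after}}) \,\bigr]
\]

Here \(H(S_{\mathrm{before}})\) and \(H(S_{\mathrm{after}})\) are the Shannon entropies of the automaton's state distribution before and after the transition. For complete erasure of one unbiased bit, the bracketed term equals 1 and the bound reduces to the familiar \(k_B T \ln 2\) per bit.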
Type: Dissertation (open access)