The increasingly complex demands placed on control systems have resulted in a
need for intelligent control, an approach that attempts to meet these demands by emulating
the capabilities found in biological systems. The ability to exploit existing knowledge is a
desirable feature of any intelligent control system, and this leads to the relearning problem.
The problem arises when a control system is required to effectively learn new knowledge
whilst exploiting still-useful knowledge from past experiences. This thesis describes the
adaptive critic system, a reinforcement learning framework that can effectively address
many of the demands of intelligent control but is less effective at addressing the
relearning problem. The thesis argues that biological mechanisms
of reinforcement learning (and relearning) may provide inspiration for developing artificial
intelligent control mechanisms that can better address the relearning problem. A conceptual
model of biological reinforcement learning and relearning is presented, and the thesis
shows how inspiration derived from this model can be used to modify the adaptive critic.
The performance of the modified adaptive critic system on the relearning problem is
investigated in simulations of the pole-balancing problem and compared to that of the
original adaptive critic system. The thesis analyses the results of these simulations and
discusses their significance for addressing the relearning problem.
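The adaptive critic architecture referred to above pairs a critic, which learns to evaluate states, with an actor, which learns a control policy; a temporal-difference (TD) error from the critic drives both updates. A minimal sketch on a pole-balancing task is given below, in the spirit of the classic Barto, Sutton and Anderson setup. It is not the thesis's implementation: the physical constants, the coarse `box` discretisation, and the learning rates (`ALPHA`, `BETA`, `GAMMA`) are all illustrative assumptions.

```python
import math
import random

# Cart-pole constants: common textbook values, not taken from the thesis.
GRAVITY, MASS_CART, MASS_POLE = 9.8, 1.0, 0.1
HALF_LEN, FORCE, DT = 0.5, 10.0, 0.02
FAIL_X, FAIL_THETA = 2.4, 12 * math.pi / 180

def step(state, action):
    """One Euler step of the cart-pole dynamics; action is 0 (left) or 1 (right)."""
    x, x_dot, th, th_dot = state
    force = FORCE if action == 1 else -FORCE
    cos_t, sin_t = math.cos(th), math.sin(th)
    total = MASS_CART + MASS_POLE
    temp = (force + MASS_POLE * HALF_LEN * th_dot ** 2 * sin_t) / total
    th_acc = (GRAVITY * sin_t - cos_t * temp) / (
        HALF_LEN * (4.0 / 3.0 - MASS_POLE * cos_t ** 2 / total))
    x_acc = temp - MASS_POLE * HALF_LEN * th_acc * cos_t / total
    nxt = (x + DT * x_dot, x_dot + DT * x_acc,
           th + DT * th_dot, th_dot + DT * th_acc)
    failed = abs(nxt[0]) > FAIL_X or abs(nxt[2]) > FAIL_THETA
    return nxt, failed

def box(state):
    """Coarse discretisation of the continuous state into one of 54 boxes."""
    x, x_dot, th, th_dot = state
    i = 0 if x < -0.8 else 1 if x <= 0.8 else 2
    j = 0 if x_dot < -0.5 else 1 if x_dot <= 0.5 else 2
    k = 0 if th < -0.01 else 1 if th <= 0.01 else 2
    l = 0 if th_dot < 0 else 1
    return ((i * 3 + j) * 3 + k) * 2 + l

N_BOXES = 54
ALPHA, BETA, GAMMA = 0.5, 0.1, 0.95   # critic rate, actor rate, discount

def sigmoid(z):
    """Numerically safe logistic function."""
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

def train(episodes=200, max_steps=500, seed=0):
    rng = random.Random(seed)
    v = [0.0] * N_BOXES   # critic: estimated value of each box
    p = [0.0] * N_BOXES   # actor: preference for pushing right in each box
    lengths = []
    for _ in range(episodes):
        state = (0.0, 0.0, rng.uniform(-0.05, 0.05), 0.0)
        for t in range(max_steps):
            s = box(state)
            prob_right = sigmoid(p[s])
            action = 1 if rng.random() < prob_right else 0
            state, failed = step(state, action)
            reward = -1.0 if failed else 0.0
            target = reward if failed else reward + GAMMA * v[box(state)]
            delta = target - v[s]                         # TD error
            v[s] += ALPHA * delta                         # critic update
            p[s] += BETA * delta * (action - prob_right)  # actor update
            if failed:
                break
        lengths.append(t + 1)
    return v, p, lengths
```

The single scalar `delta` playing both roles, as the critic's learning signal and as the reinforcement for the actor, is the defining feature of the adaptive critic; the biologically-inspired modifications investigated in the thesis concern how such a system retains and reuses what it has learned.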
Reinforcement learning in intelligent control: a biologically-inspired approach to the relearning problem
D'Cruz, B. (Author). 1998
Student thesis: PhD

Date of Award: 1998
Original language: English
Awarding Institution: