Original Article

Stochastic dynamics of reinforcement learning

Pages 289-307 | Received 20 Oct 1994, Published online: 09 Jul 2009
 

Abstract

We present a continuous-time master-equation formulation of reinforcement learning. Both non-associative (stochastic learning automaton) and associative (neural network) cases are considered. A Fokker–Planck equation for the stochastic dynamics of the learning process is derived using a small-fluctuation expansion of the master equation. We then show how the Fokker–Planck approximation can be used to determine the global asymptotic behaviour of ergodic learning schemes such as linear reward–penalty (LR−P) and associative reward–penalty (AR−P), in the limit of small learning rates. A simple example of reinforcement learning in a non-stationary environment is studied.
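For readers unfamiliar with the learning schemes named in the abstract, the following is a minimal sketch of the standard linear reward–penalty (LR−P) update for a stochastic learning automaton. The function name, the two-rate parameterization (`a`, `b`), and the two-action usage example are illustrative assumptions, not the paper's notation; the paper analyses the small-learning-rate limit of schemes of this general form.

```python
def lrp_update(p, chosen, reward, a=0.05, b=0.05):
    """One linear reward-penalty (LR-P) update of action probabilities.

    p      : list of action probabilities (sums to 1)
    chosen : index of the action just taken
    reward : True if the environment rewarded the action
    a, b   : reward and penalty learning rates (small, matching the
             small-learning-rate limit studied in the paper)
    """
    r = len(p)
    q = [0.0] * r
    if reward:
        # shrink all probabilities, then move the freed mass
        # onto the rewarded action
        for j in range(r):
            q[j] = (1 - a) * p[j]
        q[chosen] += a
    else:
        # shrink the penalized action's probability and spread
        # the freed mass evenly over the other actions
        for j in range(r):
            q[j] = b / (r - 1) + (1 - b) * p[j]
        q[chosen] = (1 - b) * p[chosen]
    return q
```

In either branch the updated probabilities still sum to one, so the state remains on the probability simplex; iterating this stochastic update is what gives rise to the master-equation dynamics the abstract describes. For example, starting from `p = [0.5, 0.5]`, a rewarded choice of action 0 raises `p[0]` to 0.525, while a penalized choice lowers it to 0.475.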
