Abstract
In this article we experiment with a 2-player strategy board game whose playing models are developed using reinforcement learning and neural networks. The models are developed to speed up automatic game development by introducing human involvement at varying levels of sophistication and density, compared to fully autonomous play. The experimental results suggest a clear and measurable association between the ability to win games and the ability to do so quickly, while also demonstrating that there is a minimum level of human involvement below which no learning really occurs.
Acknowledgments
During the past year Christos Kalantzis, a student of the author, designed and carried out a long series of experiments. Numerous conversations on the suitability of these experiments and the interpretation of their results have influenced this work.
Notes
1. For traceability reasons, we start indexing at 9, as we have used numbers 1..8 for experiments referred to in earlier work.