Research Article

Ancillary mechanism for autonomous decision-making process in asymmetric confrontation: a view from Gomoku

Pages 1141-1159 | Received 07 Mar 2021, Accepted 06 Apr 2022, Published online: 02 May 2022
 

ABSTRACT

This paper investigates how agents learn and perform efficient strategies by trying different actions in an asymmetric confrontation setting. Firstly, we use Gomoku as an example to analyse the causes and impacts of asymmetric confrontation: the first mover gains greater power than the second mover. We find that the first mover quickly learns how to attack, while it is difficult for the second mover to learn how to defend, since it cannot beat the first mover and always receives negative rewards. As such, the game becomes stuck in a deadlock in which the first mover cannot make further advances to learn how to defend, and the second mover learns nothing. Secondly, we propose an ancillary mechanism (AM) that adds two principles to the agent’s actions to overcome this difficulty. AM provides guidance for the agents, reducing the learning difficulty and improving their behavioural quality. To the best of our knowledge, this is the first study to define asymmetric confrontation in reinforcement learning and to propose approaches to tackle such problems. In the numerical tests, we first conduct a simple human-vs-AI experiment to calibrate the learning process in asymmetric confrontation. Then, an experiment on the 15*15 Gomoku game, in which two agents (with AM and without AM) compete, is conducted to check the potential of AM. Results show that adding AM makes both the first and the second movers stronger with almost the same amount of computation.
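The deadlock described in the abstract follows from the zero-sum reward structure of self-play. The following minimal Python sketch is purely illustrative (it is not the authors' implementation; the reward scheme, values, and names are assumptions): once the first mover's attack dominates, the second mover only ever observes negative rewards and gets no signal pointing toward a useful defence.

```python
# Minimal sketch (not the paper's code): zero-sum self-play rewards in Gomoku.
# All names and reward values here are hypothetical.

def assign_rewards(winner):
    """Return (first_mover_reward, second_mover_reward) for one finished game."""
    if winner == "first":
        return +1.0, -1.0
    elif winner == "second":
        return -1.0, +1.0
    return 0.0, 0.0  # draw

# If the first mover's attack policy dominates, every game yields (+1, -1):
# the second mover sees nothing but -1, which is the deadlock the abstract describes.
history = [assign_rewards("first") for _ in range(100)]
second_mover_rewards = [r2 for _, r2 in history]
print(sum(second_mover_rewards) / len(second_mover_rewards))  # -1.0
```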

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1. Usually the most valuable one.

2. For example, the last node in the graph reached the value 1.

3. This process corresponds to step 1 in ; because the value of the actual situation (the parent node) is meaningless, it is ignored in the pattern.

4. Positive reward for the winner, negative reward for the loser.

5. Assuming we conduct 400 MCTS searches, nearly half of the computing resources would be wasted on such useless searches in the first layer of the board.

6. About 500 games.

7. In the search and in the actual game, there can be deviations in the agent’s behaviour pattern. To ensure stability, 100 MCTS simulations are performed during the game, and the move with the most visits is selected when a stone is placed. During the search, especially on the first exploration of a branch, the specific choice of grid placement depends entirely on the prior probability.
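Note 7 distinguishes two selection rules: at the actual game step the most-visited move is played, while inside the search an unvisited branch is chosen purely from its prior probability. The Python sketch below is a hedged illustration of that behaviour (not the authors' code; the PUCT form and all names are assumptions).

```python
# Hedged sketch of the behaviour in note 7 (not the authors' implementation).
import math

def select_child(children, c_puct=1.0):
    """children: list of dicts with 'prior', 'visits', 'value_sum'."""
    total_visits = sum(ch["visits"] for ch in children)

    def puct(ch):
        q = ch["value_sum"] / ch["visits"] if ch["visits"] > 0 else 0.0
        u = c_puct * ch["prior"] * math.sqrt(total_visits + 1) / (1 + ch["visits"])
        return q + u

    # With all visit counts at zero, puct() reduces to a constant times the prior,
    # so the first exploration of a branch is decided by the prior alone.
    return max(children, key=puct)

def choose_move(root_children):
    """At the actual game step, pick the most-visited child for stability."""
    return max(root_children, key=lambda ch: ch["visits"])
```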

8. See Appendix.

9. For an agent that is not well trained, its self-play winning rate is about 6:4 (black:white).
