Multi-robot formation control: a comparison between model-based and learning-based methods

Pages 90-108 | Received 31 Aug 2019, Accepted 22 Nov 2019, Published online: 06 Dec 2019

References

  • Amato, C., Chowdhary, G., Geramifard, A., Üre, N. K., & Kochenderfer, M. J. (2013). Decentralized control of partially observable Markov decision processes. IEEE conference on decision and control (pp. 2398–2405).
  • Amato, C., Konidaris, G., Anders, A., Cruz, G., How, J. P., & Kaelbling, L. P. (2016). Policy search for multi-robot coordination under uncertainty. International Journal of Robotics Research, 35(14), 1760–1778. doi: 10.1177/0278364916679611
  • Aykin, C., Knopp, M., & Diepold, K. (2018). Deep reinforcement learning for formation control. IEEE international symposium on robot and human interactive communication (pp. 1–5).
  • Barrett, S., & Stone, P. (2015). Cooperating with unknown teammates in complex domains: A robot soccer case study of ad hoc teamwork. AAAI conference on artificial intelligence (pp. 2010–2016).
  • Chen, Z., Jiang, C., & Guo, Y. (2019). Distance-based formation control of a three-robot system. Chinese control and decision conference (pp. 5574–5580).
  • Dequaire, J., Ondrúška, P., Rao, D., Wang, D., & Posner, I. (2018). Deep tracking in the wild: End-to-end tracking using recurrent neural networks. The International Journal of Robotics Research, 37(4-5), 492–512. doi: 10.1177/0278364917710543
  • Dimarogonas, D. V., & Johansson, K. H. (2008). On the stability of distance-based formation control. IEEE conference on decision and control (pp. 1200–1205).
  • Foerster, J., Nardelli, N., Farquhar, G., Afouras, T., Torr, P. H., Kohli, P., & Whiteson, S. (2017). Stabilising experience replay for deep multi-agent reinforcement learning. Proceedings of international conference on machine learning (pp. 1146–1155).
  • Guo, Y. (2017). Distributed cooperative control: Emerging applications. Hoboken, NJ: John Wiley & Sons.
  • Gustavi, T., & Hu, X. (2008). Observer-based leader-following formation control using onboard sensor information. IEEE Transactions on Robotics, 24(6), 1457–1462. doi: 10.1109/TRO.2008.2006244
  • Hüttenrauch, M., Šošić, A., & Neumann, G. (2019). Deep reinforcement learning for swarm systems. Journal of Machine Learning Research, 20(54), 1–31.
  • Jiang, C., Chen, Z., & Guo, Y. (2019). Learning decentralised control policies for multi-robot formation. IEEE/ASME international conference on advanced intelligent mechatronics (pp. 758–765).
  • Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  • Li, S., Kong, R., & Guo, Y. (2014). Cooperative distributed source seeking by multiple robots: Algorithms and experiments. IEEE/ASME Transactions on Mechatronics, 19(6), 1810–1820. doi: 10.1109/TMECH.2013.2295036
  • Liang, X., Wang, H., Liu, Y. H., Chen, W., & Liu, T. (2017). Formation control of nonholonomic mobile robots without position and velocity measurements. IEEE Transactions on Robotics, 34(2), 434–446. doi: 10.1109/TRO.2017.2776304
  • Liu, M., Amato, C., Anesta, E. P., Griffith, J. D., & How, J. P. (2016). Learning for decentralised control of multiagent systems in large, partially-observable stochastic environments. AAAI conference on artificial intelligence (pp. 2523–2529).
  • Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., …Hassabis, D. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540), 529–533. doi: 10.1038/nature14236
  • Oh, K. K., & Ahn, H. S. (2011). Formation control of mobile agents based on inter-agent distance dynamics. Automatica, 47(10), 2306–2312. doi: 10.1016/j.automatica.2011.08.019
  • Oh, K. K., Park, M. C., & Ahn, H. S. (2015). A survey of multi-agent formation control. Automatica, 53, 424–440. doi: 10.1016/j.automatica.2014.10.022
  • Qu, Z. (2009). Cooperative control of dynamical systems: Applications to autonomous vehicles. London: Springer Science & Business Media.
  • Rausch, V., Hansen, A., Solowjow, E., Liu, C., Kreuzer, E., & Hedrick, J. K. (2017). Learning a deep neural net policy for end-to-end control of autonomous vehicles. American control conference (pp. 4914–4919).
  • Rohmer, E., Singh, S. P., & Freese, M. (2013). V-REP: A versatile and scalable robot simulation framework. IEEE/RSJ international conference on intelligent robots and systems (pp. 1321–1326).
  • Sukhbaatar, S., Szlam, A., & Fergus, R. (2016). Learning multiagent communication with backpropagation. Advances in neural information processing systems (pp. 2244–2252).
  • Tuyls, K., & Weiss, G. (2012). Multiagent learning: Basics, challenges, and prospects. AI Magazine, 33(3), 41–52. doi: 10.1609/aimag.v33i3.2426
  • Vidal, R., Shakernia, O., & Sastry, S. (2004). Following the flock [formation control]. IEEE Robotics & Automation Magazine, 11(4), 14–20. doi: 10.1109/MRA.2004.1371604
  • Wang, H., Guo, D., Liang, X., Chen, W., Hu, G., & Leang, K. K. (2016). Adaptive vision-based leader–follower formation control of mobile robots. IEEE Transactions on Industrial Electronics, 64(4), 2893–2902. doi: 10.1109/TIE.2016.2631514
  • Wulfmeier, M., Rao, D., Wang, D. Z., Ondruska, P., & Posner, I. (2017). Large-scale cost function learning for path planning using deep inverse reinforcement learning. The International Journal of Robotics Research, 36(10), 1073–1087. doi: 10.1177/0278364917722396
