Research Article

Coupled trajectory optimization and tuning of tracking controllers for parafoil generator

Pages 803-815 | Received 19 Apr 2022, Accepted 17 Aug 2022, Published online: 06 Sep 2022