2012 | Special Issue S1

Article title

Reinforcement learning in discrete and continuous domains applied to ship trajectory generation

Authors

Title variants

Publication languages

EN

Abstracts

EN
This paper presents the application of reinforcement learning algorithms to the task of autonomous determination of a ship trajectory during in-harbour and harbour-approach manoeuvres. The authors used the Markov decision process formalism as the background for presenting the algorithms. Two versions of RL algorithms were tested in simulations: a discrete form (Q-learning) and a continuous form (Least-Squares Policy Iteration). The results show that in both cases a ship trajectory can be found. However, the discrete Q-learning algorithm suffered from many limitations (mainly the curse of dimensionality) and is practically inapplicable to the examined task. The LSPI algorithm, on the other hand, gave promising results. To be fully operational, the proposed solution should be extended to take into account ship heading and velocity, and coupled with an advanced multi-variable controller.
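The discrete variant named in the abstract is tabular Q-learning (ref. 11). As a rough illustration only, not the authors' implementation, the sketch below applies the standard Q-learning update to a toy problem in which the ship position is discretized into a small grid and a single goal cell stands in for the berthing point; the grid size, reward values and hyperparameters are assumptions made for the example.

# Generic tabular Q-learning sketch (Watkins & Dayan, 1992), NOT the authors' code.
# Assumptions for illustration: ship position discretized to an N x N grid,
# one goal cell as the berthing point, step penalty -1, goal reward +10.
import numpy as np

N = 10                                           # assumed grid resolution
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]     # four course alternatives on the grid
GOAL = (N - 1, N - 1)                            # assumed berthing cell
alpha, gamma, epsilon = 0.1, 0.95, 0.1           # assumed learning hyperparameters

Q = np.zeros((N, N, len(ACTIONS)))               # tabular action-value function

def step(state, action):
    """Deterministic toy transition: move within the grid, -1 per step, +10 at goal."""
    r, c = state
    dr, dc = ACTIONS[action]
    nr = min(max(r + dr, 0), N - 1)
    nc = min(max(c + dc, 0), N - 1)
    next_state = (nr, nc)
    reward = 10.0 if next_state == GOAL else -1.0
    return next_state, reward, next_state == GOAL

rng = np.random.default_rng(0)
for episode in range(500):
    state = (0, 0)                               # assumed starting cell
    done = False
    while not done:
        # epsilon-greedy action selection
        if rng.random() < epsilon:
            action = int(rng.integers(len(ACTIONS)))
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, done = step(state, action)
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        td_target = reward + gamma * np.max(Q[next_state]) * (not done)
        Q[state][action] += alpha * (td_target - Q[state][action])
        state = next_state

Even this toy table already holds N x N x 4 entries; discretizing heading and velocity as well multiplies the table size rapidly, which illustrates the curse of dimensionality mentioned in the abstract and motivates the continuous LSPI approach.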

Keywords

Publisher

-

Year

Physical description

p. 31-36, fig., ref.

Contributors

author
  • Faculty of Marine Electrical Engineering, Gdynia Maritime University, Morska 81-87, 81-225 Gdynia, Poland
author
  • Faculty of Marine Electrical Engineering, Gdynia Maritime University, Morska 81-87, 81-225 Gdynia, Poland

Bibliography

  • 1. Busoniu L., Babuska R., De Schutter B., Ernst D.: Reinforcement Learning and Dynamic Programming Using Function Approximators. CRC Press, Automation and Control Engineering Series, 2010.
  • 2. Cichosz P.: Learning Systems. WNT, Warszawa, 2000 (in Polish).
  • 3. Gierusz W.: Synthesis of Multi-variable Systems of Precise Ship Motion Control Using Selected Robust Control Design Methods. Gdynia Maritime University, 2005 (in Polish).
  • 4. Gierusz W., Nguyen Cong V., Rak A.: Maneuvering Control and Trajectory Tracking of Very Large Crude Carrier. Ocean Engineering, 2007, Vol. 34, pp. 932-945.
  • 5. Kudrewicz J.: Functional Analysis for Control and Electronics Engineers. PWN, Warszawa, 1976 (in Polish).
  • 6. Lagoudakis M. G., Parr R.: Least-Squares Policy Iteration. Journal of Machine Learning Research, 2003, Vol. 4, pp. 1107-1149.
  • 7. Mitsubori K., Kamio T., Tanaka T.: On a Course Determination Based on the Reinforcement Learning in Maneuvering Motion of a Ship with the Tidal Current Effect. International Symposium on Nonlinear Theory and its Applications, Xi'an, 2002.
  • 8. Morawski L., Nguyen Cong V., Rak A.: Full-Mission Marine Autopilot Based on Fuzzy Logic Techniques. Gdynia Maritime University, 2008.
  • 9. Rak A.: Application of Reinforcement Learning to Ship Motion Control Systems. Zeszyty Naukowe AM w Gdyni, 2009, No. 62, pp. 133-140 (in Polish).
  • 10. Sutton R. S., Barto A. G.: Reinforcement Learning: An Introduction. MIT Press, 1998.
  • 11. Watkins C. J. C. H., Dayan P.: Q-learning. Machine Learning, 1992, Vol. 8, No. 3-4, pp. 279-292.
  • 12. Zhipeng S., Chen G., Jianbo S.: Reinforcement Learning Control for Ship Steering Based on General Fuzzified CMAC. Proc. of the 5th Asian Control Conference, 2005, Vol. 3, pp. 1552-1557.

Document type

Bibliography

Identifiers

YADDA identifier

bwmeta1.element.agro-b1b52b3a-ca8d-4ab5-8e7c-bb7d71eef0fa