Research Article

Optimize taxi driving strategies based on reinforcement learning

Pages 1677-1696 | Received 10 Jul 2017, Accepted 27 Mar 2018, Published online: 03 Apr 2018
 

ABSTRACT

The efficiency of taxi services in big cities influences not only the convenience of people's travel but also urban traffic and profits for taxi drivers. To balance taxicab demand and supply, spatio-temporal knowledge mined from historical trajectories is recommended both for passengers finding an available taxicab and for cabdrivers estimating the location of the next passenger. However, taxi trajectories are long sequences in which single-step optimization cannot guarantee a global optimum. Taking long-term revenue as the goal, a novel method based on reinforcement learning is proposed to optimize taxi driving strategies for global profit maximization. The optimization problem is formulated as a Markov decision process over the whole taxi driving sequence. The state in this model is defined as the taxi location and operation status. The action set includes the operation choices of empty driving, carrying passengers or waiting, and the subsequent driving behaviors. The reward, serving as the objective function for evaluating driving policies, is defined as the effective driving ratio, which measures the total profit of a cabdriver in a working day. The optimal choice for cabdrivers at any location is learned by the Q-learning algorithm so as to maximize cumulative rewards. Using historical trajectory data from Beijing, experiments were conducted to test the accuracy and efficiency of the method. The results show that the method improves profits and efficiency for cabdrivers and also increases the opportunities for passengers to find taxis. By replacing the reward function with other criteria, the method can also be used to discover and investigate novel spatial patterns. The new model is prior-knowledge-free and globally optimal, giving it advantages over previous methods.
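The abstract does not give implementation details, but the described formulation (state = taxi location plus operation status, actions = empty driving / carrying passengers / waiting, reward = a profit-based signal, learned with Q-learning) can be illustrated with a minimal tabular sketch. All concrete names below (the environment interface, grid-cell state encoding, and hyperparameter values) are assumptions for illustration, not the authors' implementation.

```python
# Minimal tabular Q-learning sketch for the taxi-driving MDP described above.
# The environment `env` is hypothetical: it would replay historical trajectories,
# returning the next (location, status) state and a profit-based reward.
import random
from collections import defaultdict

ACTIONS = ["empty_driving", "carry_passenger", "wait"]  # operation choices from the abstract
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1                  # learning rate, discount, exploration (assumed)

Q = defaultdict(float)  # Q[(state, action)] -> estimated cumulative reward


def choose_action(state):
    """Epsilon-greedy choice over the operation set."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])


def update(state, action, reward, next_state):
    """Standard Q-learning backup toward the best-valued next action."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])


def train(env, episodes=1000):
    """Hypothetical training loop over one working day per episode."""
    for _ in range(episodes):
        state = env.reset()            # e.g. (grid_cell_id, "vacant")
        done = False
        while not done:
            action = choose_action(state)
            next_state, reward, done = env.step(action)
            update(state, action, reward, next_state)
            state = next_state
```

The learned table then yields the recommended operation at any location and status via `choose_action` with exploration disabled; swapping the reward signal for another criterion, as the abstract notes, would change which spatial patterns the learned policy emphasizes.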

Disclosure statement

No potential conflict of interest was reported by the authors.

Additional information

Funding

This work was supported by the National Natural Science Foundation of China [41625003].
