Research Article

A reinforcement learning framework for improving parking decisions in last-mile delivery

Article: 2337216 | Received 28 Mar 2023, Accepted 26 Mar 2024, Published online: 08 Apr 2024

