Enhanced QL-Based Dynamic Routing Protocol for Urban VANETs
P. Bugarcic, N. Jevtic, M. Malnar, M. Stojanovic
Author keywords
computer simulation, intelligent transportation systems, machine learning, routing protocols, vehicular ad hoc networks.
About this article
Date of Publication: 2024-11-30
Volume 24, Issue 4, Year 2024, On page(s): 27 - 36
ISSN: 1582-7445, e-ISSN: 1844-7600
Digital Object Identifier: 10.4316/AECE.2024.04003
Abstract
Choosing an optimal data forwarding route is crucial for improving network performance in mobile ad hoc networks (MANETs). This process becomes very complex when the network topology changes frequently and quickly, as is the case in vehicular ad hoc networks (VANETs). Under these conditions, the routing process can be significantly improved by including machine learning in optimal route selection. The type of learning that best suits highly dynamic networks is reinforcement learning (RL), and one of the most important RL methods for dynamic MANETs is Q-learning (QL). In this study, an enhanced QL-based dynamic routing algorithm for urban VANETs (Q-DRAV) is proposed, which significantly improves the overall network performance of VANETs by including relevant network parameters in the RL process. Simulation analysis and comparison with other routing protocols are performed in the NS-3 simulator, and the protocol implementation code is publicly available. Simulation results show that the proposed protocol reduces the packet loss ratio, average packet end-to-end delay, and jitter, while increasing the achieved application throughput in the network.
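The abstract describes next-hop selection driven by Q-learning, with relevant network parameters folded into the learning process. As a rough illustrative sketch only (this is not the published Q-DRAV implementation; the node/neighbor state model, the reward terms, and the weights below are all assumptions for illustration), the standard tabular QL update applied to routing looks like this:

```python
# Illustrative sketch of tabular Q-learning for next-hop selection.
# NOT the Q-DRAV protocol code: state = current node, action = chosen
# neighbor, and the reward terms/weights are hypothetical.

def q_update(Q, node, neighbor, reward, next_neighbors, alpha=0.5, gamma=0.8):
    """Standard QL rule: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    # Best estimated value reachable from the chosen neighbor's own neighbors.
    best_next = max((Q.get((neighbor, n), 0.0) for n in next_neighbors), default=0.0)
    old = Q.get((node, neighbor), 0.0)
    Q[(node, neighbor)] = old + alpha * (reward + gamma * best_next - old)
    return Q[(node, neighbor)]

def link_reward(delivery_ok, delay_s, queue_fill):
    # Hypothetical reward combining network parameters: delivery success,
    # per-hop delay (seconds), and queue occupancy (0..1). Weights are assumed.
    return (1.0 if delivery_ok else -1.0) - 0.5 * delay_s - 0.2 * queue_fill

Q = {}
# Node A forwarded via neighbor B; B can in turn reach C and D.
q_update(Q, "A", "B", link_reward(True, 0.02, 0.1), ["C", "D"])
```

In a protocol like this, each vehicle would keep its own Q-table and pick the neighbor with the highest Q-value as next hop; the actual Q-DRAV reward and update details are in the paper and its public code repository.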
References
[1] R. S. Sutton, A. G. Barto, Reinforcement Learning: An Introduction, 2nd ed. Cambridge, MA, USA: MIT Press, 2018, pp. 1-4.
[2] T. K. Saini, S. C. Sharma, "Recent advancements, review analysis, and extensions of the AODV with the illustration of the applied concept," Ad Hoc Networks, vol. 103, pp. 1-20, Jun. 2020.
[3] M. Malnar, N. Jevtic, "An improvement of AODV protocol for the overhead reduction in scalable dynamic wireless ad hoc networks," Wireless Networks, vol. 28, no. 3, pp. 1039-1051, Feb. 2022.
[4] D. R. de Assis, E. C. G. Wille, J. Alves Junior, "New results on the IC_AOMDV protocol for vehicular ad hoc networks in urban areas," Advances in Electrical and Computer Engineering, vol. 23, no. 3, pp. 21-28, 2023.
[5] J. Alves Junior, E. C. G. Wille, "Exploiting the inherent connectivity of urban mobile backbones using the P-DSDV routing protocol," Advances in Electrical and Computer Engineering, vol. 20, no. 1, pp. 83-90, 2020.
[6] R. A. Nazib, S. Moh, "Reinforcement learning-based routing protocols for vehicular ad hoc networks: A comparative survey," IEEE Access, vol. 9, pp. 27552-27587, Feb. 2021.
[7] J. Lansky, A. M. Rahmani, M. Hosseinzadeh, "Reinforcement learning-based routing protocols in vehicular ad hoc networks for intelligent transport system (ITS): A survey," Mathematics, vol. 10, no. 24, pp. 4673-4717, Dec. 2022.
[8] C. Wu, K. Kumekawa, T. Kato, "Distributed reinforcement learning approach for vehicular ad hoc networks," IEICE Transactions on Communications, vol. 93, no. 6, pp. 1431-1442, Jun. 2010.
[9] J. Wu, M. Fang, X. Li, "Reinforcement learning based mobility adaptive routing for vehicular ad-hoc networks," Wireless Personal Communications, vol. 101, pp. 2143-2171, May 2018.
[10] NS-3. Accessed: Jul. 9, 2024. [Online]. Available: https://www.nsnam.org/
[11] GitHub. Accessed: Jul. 9, 2024. [Online]. Available: https://github.com/pavlebugarcic/qdrav
[12] L. Rui et al., "An intersection-based QoS routing for vehicular ad hoc networks with reinforcement learning," IEEE Transactions on Intelligent Transportation Systems, vol. 24, no. 9, pp. 9068-9083, Sep. 2023.
[13] O. Sarker, H. Shen, M. A. Babar, "Reinforcement learning based neighbour selection for VANET with adaptive trust management," in Proc. 22nd International Conference on Trust, Security and Privacy in Computing and Communications, Exeter, United Kingdom, 2023, pp. 585-594.
[14] A. Nahar, D. Das, "Adaptive reinforcement routing in software defined vehicular networks," in Proc. International Wireless Communications and Mobile Computing, Limassol, Cyprus, 2020, pp. 2118-2123.
[15] Q. Yang, S. Y. Yoo, "Hierarchical reinforcement learning-based routing algorithm with grouped RSU in urban VANETs," IEEE Transactions on Intelligent Transportation Systems (Early Access), Jan. 2024.
[16] Y. Jiang, J. Zhu, K. Yang, "Environment-aware adaptive reinforcement learning-based routing for vehicular ad hoc networks," Sensors, vol. 24, no. 1, pp. 40-70, Dec. 2023.
[17] M. Saravanan, P. Ganeshkumar, "Routing using reinforcement learning in vehicular ad hoc networks," Computational Intelligence, vol. 36, no. 2, pp. 682-697, Jan. 2020.
[18] B. Liu, G. Xu, G. Xu, C. Wang, P. Zuo, "Deep reinforcement learning-based intelligent security forwarding strategy for VANET," Sensors, vol. 23, no. 3, pp. 1204-1218, Jan. 2023.
[19] D. Zhang, T. Zhang, X. Liu, "Novel self-adaptive routing service algorithm for application in VANET," Applied Intelligence, vol. 49, pp. 1866-1879, May 2019.
[20] J. Wu, M. Fang, H. Li, X. Li, "RSU-assisted traffic-aware routing based on reinforcement learning for urban VANETs," IEEE Access, vol. 8, pp. 5733-5748, Jan. 2020.
[21] C. Wu, T. Yoshinaga, Y. Ji, Y. Zhang, "Computational intelligence inspired data delivery for vehicle-to-roadside communications," IEEE Transactions on Vehicular Technology, vol. 67, no. 12, pp. 12038-12048, Sep. 2018.
[22] O. Jafarzadeh, M. Dehghan, H. Sargolzaey, M. M. Esnaashari, "A model based reinforcement learning protocol for routing in vehicular ad hoc network," Wireless Personal Communications, vol. 123, no. 1, pp. 975-1001, Mar. 2022.
[23] D. Zhang, F. R. Yu, R. Yang, "Blockchain-based distributed software-defined vehicular networks: A dueling deep Q-learning approach," IEEE Transactions on Cognitive Communications and Networking, vol. 5, no. 4, pp. 1086-1100, Sep. 2019.
[24] S. Jiang, Z. Huang, Y. Ji, "Adaptive UAV-assisted geographic routing with Q-learning in VANET," IEEE Communications Letters, vol. 25, no. 4, pp. 1358-1362, Apr. 2021.
[25] L. Luo, L. Sheng, H. Yu, G. Sun, "Intersection-based V2X routing via reinforcement learning in vehicular ad hoc networks," IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 6, pp. 5446-5459, Jun. 2021.
[26] S. Ye, L. Xu, X. Li, "Vehicle-mounted self-organizing network routing algorithm based on deep reinforcement learning," Wireless Communications and Mobile Computing, vol. 2021, pp. 1-9, Jul. 2021.
[27] P. Bugarcic, N. Jevtic, M. Malnar, "Reinforcement learning-based routing protocols in vehicular and flying ad hoc networks: A literature survey," Promet, vol. 34, no. 6, pp. 893-906, Dec. 2022.
[28] SUMO. Accessed: Jul. 9, 2024. [Online]. Available: https://eclipse.dev/sumo
Faculty of Electrical Engineering and Computer Science
Stefan cel Mare University of Suceava, Romania
All rights reserved: Advances in Electrical and Computer Engineering is a registered trademark of the Stefan cel Mare University of Suceava. No part of this publication may be reproduced, stored in a retrieval system, photocopied, recorded or archived, without the written permission from the Editor. When authors submit their papers for publication, they agree that the copyright for their article be transferred to the Faculty of Electrical Engineering and Computer Science, Stefan cel Mare University of Suceava, Romania, if and only if the articles are accepted for publication. The copyright covers the exclusive rights to reproduce and distribute the article, including reprints and translations.
Permission for other use: The copyright owner's consent does not extend to copying for general distribution, for promotion, for creating new works, or for resale. Specific written permission must be obtained from the Editor for such copying. Direct linking to files hosted on this website is strictly prohibited.
Disclaimer: Whilst every effort is made by the publishers and editorial board to see that no inaccurate or misleading data, opinions or statements appear in this journal, they wish to make it clear that all information and opinions formulated in the articles, as well as linguistic accuracy, are the sole responsibility of the author.