Adaptive Road-Aware Routing with Reinforcement Learning (ARARL) for Enhanced Efficiency and Reliability in Dense Urban VANETs

Authors

  • Arvind Kumar
  • Shobha Tyagi
  • Prashant Dixit
  • S.S. Tyagi

DOI:

https://doi.org/10.64252/79rnpm65

Keywords:

VANETs (Vehicular Ad Hoc Networks), Reinforcement Learning, Roadside Units (RSUs), Routing Protocols, Urban Mobility.

Abstract

Vehicular ad hoc networks (VANETs) play a key role in smart transportation, but routing data in busy cities is challenging. Traffic jams, fast-moving vehicles, and signal disruption caused by physical obstacles often lead to dropped packets and frequent breaks in communication. In this paper, we introduce ARARL (Adaptive Road-Aware Routing with Reinforcement Learning), a reinforcement-learning-based approach for making VANET communication more reliable. ARARL uses reinforcement learning to select the best data paths, adapting to real-time changes such as vehicle speeds and road layouts. Unlike static protocols, ARARL keeps learning as vehicles move and chooses routes based on factors such as signal strength and traffic flow. We tested ARARL in simulation using NS2 and OpenGym on a city scenario generated by SUMO. In the results, it outperformed AODV, GPSR, D-LAR, and Q-Learning-AODV: it delivered more packets and reduced delay and network overhead, especially under heavy traffic. These results suggest ARARL is better at keeping communication steady. By ensuring information is shared reliably, our work could help make self-driving cars safer.
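
The route-selection idea described in the abstract can be pictured as a small Q-learning loop in which each vehicle keeps a value estimate for forwarding through each neighbour and updates it from link-quality and congestion feedback. The sketch below is illustrative only: the state/action definitions, the reward weights, and all names (choose_next_hop, ALPHA, and so on) are assumptions made for exposition, not the paper's actual implementation.

```python
import random
from collections import defaultdict

# Illustrative Q-learning next-hop selector (not the paper's code).
# State = current node/road segment, action = candidate neighbour.
# The reward mixes link quality, traffic load, and hop delay.

ALPHA = 0.3    # learning rate (assumed value)
GAMMA = 0.8    # discount factor (assumed value)
EPSILON = 0.1  # exploration probability (assumed value)

q_table = defaultdict(float)  # (node, neighbour) -> estimated route quality


def reward(link_quality, traffic_load, delay):
    """Higher link quality and lower congestion/delay give a higher reward."""
    return link_quality - 0.5 * traffic_load - 0.2 * delay


def choose_next_hop(node, neighbours):
    """Epsilon-greedy choice among the node's current neighbours."""
    if not neighbours:
        return None
    if random.random() < EPSILON:
        return random.choice(neighbours)
    return max(neighbours, key=lambda n: q_table[(node, n)])


def update(node, next_hop, r, next_neighbours):
    """Standard Q-learning update applied after forwarding one packet."""
    best_next = max((q_table[(next_hop, n)] for n in next_neighbours), default=0.0)
    q_table[(node, next_hop)] += ALPHA * (r + GAMMA * best_next - q_table[(node, next_hop)])


if __name__ == "__main__":
    # Toy usage: forward a packet from "A" through a tiny static neighbourhood.
    neighbours = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
    node = "A"
    while neighbours[node]:
        nxt = choose_next_hop(node, neighbours[node])
        r = reward(link_quality=random.uniform(0.5, 1.0),
                   traffic_load=random.uniform(0.0, 1.0),
                   delay=random.uniform(0.0, 0.5))
        update(node, nxt, r, neighbours[nxt])
        node = nxt
    print("Packet reached", node)
```

In a real VANET deployment the reward inputs would come from measured signal strength, queue occupancy, and observed hop delay (as the abstract suggests) rather than the random samples used in this toy example.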

Published

2025-09-02

Issue

Section

Articles

How to Cite

Adaptive Road-Aware Routing with Reinforcement Learning (ARARL) for Enhanced Efficiency and Reliability in Dense Urban VANETs. (2025). International Journal of Environmental Sciences, 566-574. https://doi.org/10.64252/79rnpm65