
AI for Good Innovate for Impact



               (continued)

                Item                      Details
                Code repositories         Code Repo and model (in progress)[4].



               2      Use Case Description


               2.1     Description

               The rapid growth of autonomous and connected vehicles poses major challenges to road
               safety, network scalability, and environmental sustainability. Existing V2X communication
               systems suffer from network interference, high mobility, latency, and connectivity challenges,
               especially under high traffic density in urban areas, which makes handover management
               particularly difficult.

               The proposed use case explores an AI-driven 6G V2X architecture[5] designed to enable
               scalable, reliable, and energy-efficient vehicular communication. Using Deep Reinforcement
               Learning (DRL), specifically the Proximal Policy Optimization (PPO) algorithm, the system
               dynamically allocates vehicles to Roadside Units (RSUs), jointly optimizing energy
               consumption, latency, and quality of service. A custom simulation environment built on
               Gymnasium trains the DRL agent to respond in real time to uncertain traffic and network
               states. Integer Linear Programming (ILP), the conventional approach, serves as a benchmark
               against which the DRL performance is evaluated. In this formulation, the RSU acts as the
               agent that makes allocation decisions through interaction with the vehicular environment.
               When selecting an appropriate RSU for a moving vehicle, the system considers several
               important factors to achieve the best performance: signal strength (RSSI or SINR), the
               distance between the vehicle and the RSU, available resource blocks, the current RSU load,
               processing capacity, and latency guarantees. The DRL agent, deployed in the RSU, learns
               from these factors which RSU offers the best trade-off between energy efficiency and quality
               of service. This multi-objective decision-making is responsible for providing a steady,
               low-latency link to each vehicle without compromising load balance across the network. The
               trained agent is intended to be deployed on the RSU or on Multi-access Edge Computing
               infrastructure. Learning is federated with central coordination: every newly learned
               scenario is aggregated and updated on a centralized system.
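The centrally coordinated federated update can be sketched as a FedAvg-style aggregation, where each RSU uploads its locally trained policy weights and the central server averages them. This is a minimal sketch; the weighting by local sample count and the per-layer weight layout are assumptions:

```python
import numpy as np

def fedavg(client_weights, client_samples):
    """Average per-RSU policy weights, weighted by local sample counts.

    client_weights: one list per RSU, one np.ndarray per policy layer.
    client_samples: number of local experiences each RSU trained on.
    """
    total = sum(client_samples)
    n_layers = len(client_weights[0])
    averaged = []
    for layer in range(n_layers):
        acc = np.zeros_like(client_weights[0][layer], dtype=np.float64)
        for weights, n in zip(client_weights, client_samples):
            acc += (n / total) * weights[layer]
        averaged.append(acc)
    return averaged  # broadcast back to every RSU

# Example: two RSUs with a single-layer "policy"
w_a = [np.array([1.0, 3.0])]
w_b = [np.array([3.0, 5.0])]
global_w = fedavg([w_a, w_b], client_samples=[1, 1])  # → [array([2., 4.])]
```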

               The entire scenario (RSU rollout, vehicle movement, and decision criteria) can be fully
               simulated. Vehicles can be simulated with realistic or random movement in urban or highway
               settings, while RSUs are randomly distributed on the map. The environment computes useful
               metrics such as SINR, proximity, and RSU availability at each timestep so that the DRL
               agent can decide on the optimal RSU allocation. Tools such as Gymnasium, SUMO, and
               OMNeT++ are used to simulate realistic behaviour so that AI models can be trained and
               tested robustly in a safe, controlled virtual environment. Gymnasium can be integrated with
               SUMO through existing Python toolkits such as Sumo-gym or Sumo-rl; with these toolkits,
               the agent can be trained and previewed in a 2D simulation concurrently. The optimal RSU
               choice depends on factors such as the coverage radius (distance), the latency, and the speed
               of the vehicle. The reward function for the model is shown in equation (1), where:
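Equation (1) itself is not reproduced in this excerpt. As an illustration only, a weighted-sum reward over the factors named above (signal quality, coverage distance, latency, vehicle speed) might be sketched as follows; the weights and normalisation constants are assumptions, not the use case's actual values:

```python
def rsu_reward(sinr_db, distance_m, latency_ms, speed_mps,
               w=(0.4, 0.3, 0.2, 0.1),
               norms=(40.0, 1000.0, 100.0, 40.0)):
    """Hypothetical multi-objective reward: reward strong signal;
    penalise distance (energy proxy), latency, and high vehicle
    speed (a rough proxy for imminent handover)."""
    return (w[0] * sinr_db / norms[0]
            - w[1] * distance_m / norms[1]
            - w[2] * latency_ms / norms[2]
            - w[3] * speed_mps / norms[3])

# A nearby, low-latency RSU with good SINR scores higher:
rsu_reward(30.0, 100.0, 10.0, 20.0)   # 0.3 - 0.03 - 0.02 - 0.05 = 0.2
```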






