               Once the long-term network configurations are optimized using GA, the twin configurations
               are sent to the RAN Intelligent Controller (RIC) within the OpenRAN stack. The RIC then
               communicates these optimal settings to the Distributed Unit (DU) and Radio Unit (RU), ensuring
               that the network operates efficiently while meeting coverage and capacity requirements.
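
               As a purely illustrative sketch of this step, the fragment below mimics a small GA searching
               over candidate long-term cell settings and handing the winning twin configuration to the RIC.
               The parameters (antenna tilt, transmit power), the toy fitness function, and the send_to_ric
               call are assumptions for illustration only; the use case does not specify these interfaces.

               import random

               # Hypothetical long-term parameters for one cell: antenna tilt and transmit power.
               BOUNDS = {"tilt_deg": (0.0, 12.0), "tx_power_dbm": (30.0, 46.0)}

               def random_config():
                   return {k: random.uniform(lo, hi) for k, (lo, hi) in BOUNDS.items()}

               def fitness(cfg):
                   # Toy proxy; the real system would query the digital twin for predicted
                   # coverage and capacity under the GIS-derived environment.
                   coverage = 1.0 - abs(cfg["tilt_deg"] - 6.0) / 12.0
                   energy_penalty = (cfg["tx_power_dbm"] - 30.0) / 16.0
                   return coverage - 0.3 * energy_penalty

               def evolve(pop_size=20, generations=30, elite=4, mut_sigma=0.5):
                   pop = [random_config() for _ in range(pop_size)]
                   for _ in range(generations):
                       pop.sort(key=fitness, reverse=True)
                       parents = pop[:elite]
                       children = []
                       while len(children) < pop_size - elite:
                           a, b = random.sample(parents, 2)
                           child = {k: random.choice((a[k], b[k])) for k in BOUNDS}  # crossover
                           for k, (lo, hi) in BOUNDS.items():                        # mutation
                               child[k] = min(hi, max(lo, child[k] + random.gauss(0, mut_sigma)))
                           children.append(child)
                       pop = parents + children
                   return max(pop, key=fitness)

               best = evolve()
               # send_to_ric(best)  # hypothetical RIC call; the actual interface is not specified
               print("best twin configuration:", best)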
               2)   Proximal Policy Optimization (PPO)

               While Genetic Algorithms (GA) focus on global optimization and long-term planning, Proximal
               Policy Optimization (PPO) is employed for real-time dynamic adjustments within the 5G base
               station. PPO is a reinforcement learning (RL) algorithm that enables the network to adapt its
               parameters in response to ongoing Key Performance Indicators (KPIs) such as signal strength
               and energy consumption. Once the KPIs are optimized, the current configuration is sent back
               to GA as feedback for future GIS-driven changes. PPO thus helps build a closed-loop system
               for the digital twin.
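
               The defining ingredient of PPO, independent of the radio setting, is its clipped surrogate
               objective, which keeps each policy update close to the previous policy and makes the loop
               stable enough for live adjustments. The PyTorch fragment below computes that loss for a toy
               batch; the tensor names and sizes are illustrative, not taken from the use case.

               import torch

               def ppo_clip_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
                   """Clipped surrogate objective of PPO (Schulman et al., 2017)."""
                   ratio = torch.exp(log_probs_new - log_probs_old)   # pi_new(a|s) / pi_old(a|s)
                   unclipped = ratio * advantages
                   clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
                   # Maximize the surrogate, hence minimize its negation.
                   return -torch.min(unclipped, clipped).mean()

               # Toy batch: in this use case the states would be live KPIs (signal strength,
               # load, energy draw) and the actions the parameter adjustments PPO applies.
               lp_new = torch.randn(8, requires_grad=True)
               lp_old = lp_new.detach() + 0.1 * torch.randn(8)
               adv = torch.randn(8)
               loss = ppo_clip_loss(lp_new, lp_old, adv)
               loss.backward()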

               The functionalities of PPO are:

               Adjusting Transmission Power and Beamforming: In real time, PPO continuously monitors
               network conditions based on the configuration established by GA and adjusts parameters such
               as transmission power. This ensures that signal strength is maintained across areas of high
               demand while minimizing energy wastage in areas with lower user density or signal requirements.

               Efficient Resource Allocation: PPO dynamically manages resource allocation, ensuring that
               available spectrum and time slots are used efficiently. For example, it reduces transmission
               power or offloads traffic during periods of low demand, thus optimizing energy consumption.
               A minimal sketch of a reward signal capturing both objectives follows below.
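
               A natural reward for such an agent trades signal quality against energy draw. The function
               below is one hypothetical shaping consistent with the two functionalities above; the KPI
               names and weights are assumptions, not the authors' actual reward design.

               def reward(signal_dbm, target_dbm, energy_kwh, demand_load,
                          w_signal=1.0, w_energy=0.5):
                   """Hypothetical PPO reward: penalize signal shortfall, over-serving
                   low-demand areas, and energy draw. Weights are illustrative."""
                   shortfall = max(0.0, target_dbm - signal_dbm)
                   overprovision = max(0.0, signal_dbm - target_dbm) * (1.0 - demand_load)
                   return -w_signal * (shortfall + overprovision) - w_energy * energy_kwh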

               PPO operates within the RIC in the OpenRAN stack, which communicates with the DU and RU in
               real time. PPO adjusts network parameters based on live data and sends these updates
               immediately to the OpenRAN components to ensure energy efficiency. At the same time, it
               sends the resulting parameters back to the GA algorithm as a feedback loop for future
               optimization.
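
               Read as pseudocode, this closed loop is a fast inner PPO loop pushing updates to the DU/RU
               through the RIC, plus a slow outer channel feeding KPI summaries back to GA. Every class and
               method below is a hypothetical stub standing in for interfaces the use case leaves unspecified.

               import random

               class RICStub:
                   """Stand-in for the RIC; real O-RAN interfaces (E2/A1) are not modelled."""
                   def read_kpis(self):
                       return {"signal_dbm": -80 + 10 * random.random(),
                               "energy_kwh": random.random()}
                   def apply(self, action):
                       pass  # would push the update to the DU and RU

               class AgentStub:
                   """Stand-in for the PPO agent."""
                   def act(self, state):
                       return {"tx_power_delta_db": random.uniform(-1.0, 1.0)}
                   def observe(self, state, action, next_state):
                       pass  # a real agent would store the transition and update periodically

               class GABufferStub:
                   """Slow feedback channel towards the GA planner."""
                   def submit(self, kpi_log):
                       print(f"forwarding {len(kpi_log)} KPI samples to GA")

               def control_loop(agent, ric, ga_buffer, steps_per_report=100):
                   """Fast inner PPO loop; KPI summaries flow back to GA each report period."""
                   kpi_log = []
                   state = ric.read_kpis()
                   for _ in range(steps_per_report):
                       action = agent.act(state)       # e.g. a transmission-power tweak
                       ric.apply(action)               # applied at the DU/RU in real time
                       next_state = ric.read_kpis()
                       agent.observe(state, action, next_state)
                       kpi_log.append(next_state)
                       state = next_state
                   ga_buffer.submit(kpi_log)           # feedback for the next GA run

               control_loop(AgentStub(), RICStub(), GABufferStub())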

               The proposed solution enables significant reductions in energy consumption and CO₂ emissions
               by avoiding wasteful static configurations and overprovisioning. Moreover, its autonomous,
               scalable nature positions it as a key enabler for green, sustainable 6G networks, supporting
               global climate goals and sustainable smart city development.


               Partners

               Amrita University, ESRI ArcGIS, Indian Institute of Science, Indian Open Source Mobile Congress


               2.2     Benefits of the use case

               This use case strengthens digital infrastructure by enabling 5G networks to dynamically adapt
               to environmental changes through the integration of AI technologies and ArcGIS geospatial
               data. This enhances the responsiveness, efficiency, and sustainability of telecom systems.

               By leveraging real-time ArcGIS data, the solution supports the seamless evolution of 5G
               networks alongside urban growth. It helps reduce connectivity gaps and congestion, directly
               benefiting smart city initiatives such as Internet of Things (IoT) deployments, autonomous
               transportation systems, and emergency response networks.





