Page 40 - AI Ready – Analysis Towards a Standardized Readiness Framework
6 Future work and conclusion
This report captures the analysis of the Artificial Intelligence (AI) readiness study, whose goal is to develop a framework for assessing AI readiness, i.e. the ability to reap the benefits of AI integration. By studying the different actors and characteristics in different domains, our bottom-up approach allows us to identify common patterns, metrics, and evaluation mechanisms for the integration of AI.
The main AI readiness factors identified in this report are: availability of open data, access to research, deployment capability along with infrastructure, stakeholder buy-in enabled by standards, a developer ecosystem created via open source, and data collection and model validation via sandbox pilot experimental setups.
In future work, the number of case studies, use cases, and scenarios would be scaled up to cover a diverse set of domains and regions. Specifically, the following future steps are proposed:
Step-1: An open data repository would be set up to address the corresponding AI readiness factor, availability of open data. In combination with existing ITU initiatives such as the AI/ML Challenges, this repository would be mapped to pre-standard research from ITU partners.
Step-2: Creation of an experimentation sandbox with pre-populated, standard-compliant toolsets and simulators, curated by ITU experts, would help in studying the readiness factors and measuring their impact in specific case studies, use cases, and scenarios.
Step-3: Derivation of open metrics and open-source reference toolsets for measuring and validating AI readiness in specific domain-wise case studies would further contribute to the AI readiness ecosystem. In addition, a Pilot AI Readiness Plugfest is planned to give stakeholders an opportunity to understand the AI readiness factors and to "plug in" various regional factors such as data, models, standards, toolsets, and training.
These steps would not only help us evaluate AI readiness along the dimensions of (1) domains, (2) regions, and (3) AI technologies, but also create a live ecosystem where this measurement and evaluation can be validated in the real world.
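As an illustration of how such an evaluation along these dimensions might be structured, the following sketch aggregates per-factor scores for one (domain, region) cell of an evaluation matrix. The factor names follow this report, but the scoring scheme, weights, and example values are hypothetical, not outputs of the study:

```python
# Hypothetical sketch: aggregating AI readiness factor scores for one
# (domain, region) cell. Factor names follow the report; all scores and
# weights are illustrative placeholders.

READINESS_FACTORS = [
    "open_data",
    "access_to_research",
    "deployment_infrastructure",
    "stakeholder_buy_in",
    "developer_ecosystem",
    "sandbox_validation",
]

def readiness_score(scores, weights=None):
    """Weighted average of per-factor scores, each in [0, 1]."""
    if weights is None:
        weights = {f: 1.0 for f in READINESS_FACTORS}
    total = sum(weights[f] for f in READINESS_FACTORS)
    return sum(scores[f] * weights[f] for f in READINESS_FACTORS) / total

# Example cell of the matrix: an agriculture use case in one region.
agriculture_region_a = {
    "open_data": 0.8,
    "access_to_research": 0.6,
    "deployment_infrastructure": 0.4,
    "stakeholder_buy_in": 0.7,
    "developer_ecosystem": 0.5,
    "sandbox_validation": 0.3,
}
print(round(readiness_score(agriculture_region_a), 3))  # prints 0.55
```

Keeping the score as a weighted average makes it easy to re-weight factors per region or domain, which matches the report's notion of "plugging in" regional focus.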
The inputs from these steps would help in two types of decisions:
1) Macro decisions: controls on the type of factor-centricity (e.g., based on policy: providers and users). These include the "plug-in" of various regional focus on factors such as data, models, standards, toolsets, and training, e.g., increasing the number of providers. Additionally, the metadata regarding the providers is to be described (so that micro decisions can be taken, as below).
2) Micro decisions: internal decisions on the combination of characteristics (e.g., based on technology choices). These include the "plug-in" of various regional focus on factors such as technology choices, e.g., Wi-Fi vs. 5G, or satellite data vs. drone-collected images. Additionally, providers are selected based on these characteristics, using the metadata described above.
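A minimal sketch of how provider metadata might support such micro decisions is shown below. The metadata schema, provider names, and characteristic values are all hypothetical placeholders:

```python
# Hypothetical sketch: selecting providers for a case study based on
# characteristics recorded in provider metadata (schema is illustrative).

providers = [
    {"name": "ProviderA", "connectivity": "wifi", "imagery": "satellite"},
    {"name": "ProviderB", "connectivity": "5g",   "imagery": "drone"},
    {"name": "ProviderC", "connectivity": "5g",   "imagery": "satellite"},
]

def select_providers(providers, **required):
    """Return providers whose metadata matches all required characteristics."""
    return [p for p in providers
            if all(p.get(k) == v for k, v in required.items())]

# Micro decision: a scenario that needs 5G connectivity and drone imagery.
chosen = select_providers(providers, connectivity="5g", imagery="drone")
print([p["name"] for p in chosen])  # prints ['ProviderB']
```

This is why the macro step asks for provider metadata to be described first: once characteristics are recorded in a common schema, technology-level choices reduce to simple filters over that metadata.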
The next version of this report along with the results of the Pilot AI Readiness Plugfest would
be released at the AI for Good Summit in July 2025.