


                   7.1.2  Group 2: Standards, Best Practices and Conformity Assessment

                   This group focused on technical and methodological aspects of AI testing collaboration:

                   Priorities and gaps for collaboration

                   a)   Technical aspects of model testing including testing environment, data requirements, and
                        reproducibility standards
                   b)   Comprehensive risk assessment covering misuse risks, AI-cyber intersections, AI-bio risks,
                        and broader socio-technical challenges

                   Approaches to close the gaps

                   a)   Development of technical reports on standards mapping for trustworthy AI testing
                   b)   Establishment of pre-standardization discussions, potentially leveraging ITU platforms (e.g.
                        ITU-T Focus Groups)

                   7.1.3  Group 3: Institutional frameworks


                   The institutional frameworks group examined governance and coordination mechanisms:

                   Key institutional gaps

                   a)   Need for identifying areas requiring global alignment in AI testing approaches
                   b)   Requirements for agile governance structures that can adapt quickly to technological
                        developments
                   c)   Inadequacy of current strategic information-sharing, knowledge-building, and
                        co-production mechanisms

                   Solutions and coordination mechanisms

                   a)   Establishing comprehensive coordination frameworks with effective feedback loops
                   b)   Emphasizing complementary roles among international organizations like ITU while
                        avoiding duplication of efforts


                   7.2  Future directions

                   Where do we go from here? Participants agreed to continue the dialogue on collaboration for
                   trustworthy AI testing initiated at the AI for Good Global Summit, with a view to enacting some
                   of the proposals made. The key actions proposed included:

                   a)   Continued dialogue on trustworthy AI testing: Establishing a regular dialogue among
                        the stakeholders involved in the event was emphasized as an important need in the
                        space. Group 2 highlighted the need for dialogue focused on frontier model security
                        testing, while Group 1 emphasized collaboration on capacity building to enable AI
                        testing across different jurisdictions.
                   b)   Development of technical reports: Working towards technical reports on topics related to
                        trustworthy AI testing, such as testing environments, protocols, and risk management
                        frameworks, was highlighted as a valuable next step. Group 3 raised the importance of
                        strategic information-sharing, knowledge-building, and co-production mechanisms for
                        building institutional capacity around the world.









