scenarios for AI testing. The outcomes of this session served as the basis for the Open Dialogue on Trustworthy AI Testing taking place on 9 July.

AI technologies present opportunities, such as revolutionizing science, as well as risks arising from democratizing the ability to do harm, potential loss of control, and unreliable systems in domains such as healthcare. Well-coordinated standards development will be key to realizing AI's benefits while guarding against its risks, and a faster pipeline is also needed from research to pre-standardization and standardization.

As part of the G7 Hiroshima AI Process², the G7 launched a voluntary Reporting Framework³ to encourage transparency and accountability among organizations developing advanced AI systems. The main objectives of the Hiroshima AI Process were to promote:
•    A standardized way to track implementation
•    Transparency and comparability
•    Alignment with international AI governance initiatives

The framework aims to facilitate transparency and comparability of risk-mitigation measures and contribute to identifying and disseminating good practices. The OECD supported the G7 in developing this reporting framework to facilitate the application of the Hiroshima AI Process International Code of Conduct for organizations developing advanced AI systems⁴.
Clear quality metrics for AI testing are essential for comparing and assessing AI systems and models. By way of analogy, consider the current state of AI testing as being like trying to compare car models without enough information to make a choice. Imagine your frustration if every conversation with a car salesperson went like this:
You: How fast is this car?

Dealer: It is really a great car. In fact, it is probably the fastest. All our customers are happy with how fast it is.

You: Can you give me a few numbers?

Dealer: Trust me, the car gets you to your destination in no time at all. It is that awesome!

You: Hmm - I also care about safety. What does the car offer in terms of safety?

Dealer: It is a fantastic car. It is very, very safe. It fulfils all the regulations. I can show you the certifications to prove it meets all the regulations. So it is completely safe.

You: What about the cost of running the car? How much gas does it need? What are the insurance rates?

Dealer: It is really cheap to run. You can trust me that it is very, very cheap. I can't give you any figures but you will be amazed how cheap it is.

When buying a car, this scenario is certainly far-fetched. We fully expect to make our choice based on clear, understandable indicators of quality such as fuel consumption, top speed, acceleration, noise level, space for passengers, space for luggage, safety rating, stopping distance, theft protection,


2   See https://digital-strategy.ec.europa.eu/en/library/g7-leaders-statement-hiroshima-ai-process
3   See https://transparency.oecd.ai/
4   See https://www.soumu.go.jp/hiroshimaaiprocess/pdf/document05_en.pdf


