Page 88 - ITU KALEIDOSCOPE, ATLANTA 2019





In contrast to the growing interest and impressive advancements, the healthcare sector has so far adopted these powerful innovations only hesitantly, because any technical fault can affect people's health, privacy, and lives [25]. Conclusive evidence about the performance, reliability, and limits of ML/AI models is required to harness the benefits of trustworthy solutions while avoiding the risks of inadequate implementations. Due to the high complexity of the ML/AI models and of the health tasks addressed, it is not trivial to demonstrate conclusively whether a particular implementation solves a task adequately and reliably under realistic conditions. For safe usage, it is of paramount importance that future international standards give clear recommendations on how to validate the models. These standards are also expected to promote interoperability and dismantle trade barriers. Moreover, the development of these standards is in line with the Sustainable Development Goals (SDG) of the United Nations (UN), in particular "SDG 3: Ensure healthy lives and promote well-being for all at all ages" [26].

2. INTERNATIONAL STANDARDIZATION ACTIVITIES RELATED TO AI

Several standardization bodies have begun addressing the subject area of AI over the past two years. The International Telecommunication Union (ITU) and the WHO are two specialized agencies of the UN authorized to create global standards. ITU establishes standards ("Recommendations") for information and communication technologies, which include ML and AI. WHO considers the "development of global guidelines ensuring the appropriate use of evidence" a "core function" [27], e.g. "recommendations on the diagnosis and treatment of malaria" [28]. Standard-setting organizations are aware that the multidisciplinary field of health AI requires cooperation. Therefore, ITU and WHO have joined forces and created a focus group on "AI for Health" in July 2018 [29]. The group has begun working towards a rigorous evaluation process for AI solutions for health that is supported by a global community of experts from health, ML, AI, regulation, ethics, industry, and academia; this constitutes an important first step towards international standards for AI in health. A dedicated section below presents this joint global standardization activity in more detail. The authors are members of the focus group.

The International Organization for Standardization (ISO) subcommittee ISO/IEC JTC 1/SC 42 "Artificial intelligence" [31] has been developing a framework for AI systems using ML (ISO/IEC WD 23053) and addressing AI concepts and terminology (ISO/IEC WD 22989) as well as AI risk management (ISO/IEC AWI 23894). Furthermore, the subcommittee is working on robustness (ISO/IEC NP TR 24029-1), trustworthiness (ISO/IEC PDTR 24028), bias (ISO/IEC NP TR 24027), and use cases (ISO/IEC NP TR 24030) in AI. While these documents address AI in more general terms, the use cases include healthcare applications too. Again, standard-setting organizations are beginning to cooperate: ISO/IEC JTC 1/SC 42 "AI" and the ITU/WHO focus group have recently exchanged liaison statements, in view of the common use cases addressed.

The Institute of Electrical and Electronics Engineers (IEEE) has established an "Artificial Intelligence Medical Device Working Group" that started working on two projects for new IEEE standards in 2018: "P2802" is a "Standard for the Performance and Safety Evaluation of Artificial Intelligence Based Medical Device: Terminology", and "P2801" concerns the "Recommended Practice for the Quality Management of Datasets for Medical Artificial Intelligence" [30].

The U.S. Consumer Technology Association (CTA) started a working group on "Artificial Intelligence in Health Care (R13 WG1)" in April 2019, with the participation of AT&T, Google, IBM, Philips, Samsung, and other companies [32]. This initiative has "launched a new standards effort addressing The Use of Artificial Intelligence in Health Care: Trustworthiness". Moreover, CTA released a "White Paper on Use Cases in Artificial Intelligence" in December 2018, which includes use cases in healthcare [33].

The U.S. National Institute of Standards and Technology (NIST) was directed by the President in February 2019, through Executive Order 13859, to "issue a plan for Federal engagement in the development of technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI technologies" [34]. NIST submitted the plan in August 2019; it recommends that the government "commit to deeper, consistent, long-term engagement in AI standards development activities (…) to speed the pace of reliable, robust, and trustworthy AI technology development". The plan advises to "promote focused research to advance (…) understanding of how aspects of trustworthiness can be practically incorporated within standards and standards-related tools". In particular, it recommends that the government "spur benchmarking efforts to assess the reliability, robustness, and trustworthiness of AI systems" and "ensure that these benchmarks are widely available, result in best practices, and improve AI evaluations and methods for verification and validation" [35, 36].

In China, "a joint effort by more than 30 academic and industry organizations overseen by the Chinese Electronics Standards Institute" published an "Artificial Intelligence Standardization White Paper" in January 2018 [37, 38]. "Clinical medical imaging diagnosis" is mentioned as one of ten "real-world AI commercial application cases" according to a review available in English [39].

The European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC) "launched a new Focus Group on Artificial Intelligence" in April 2019 [40] as a "starting point to support the identification of specific European Standardization needs". Additionally, the EU High-Level Expert Group on AI published "Ethics Guidelines for Trustworthy Artificial Intelligence" in April 2019, with "technical robustness and safety" as one of seven key requirements for trustworthy AI [41]. In Germany, the



