Page 144 - ITU Journal - ICT Discoveries - Volume 1, No. 2, December 2018 - Second special issue on Data for Good




simulations. Before getting good at its job, the machine often 'got it' wrong, and was told so: it (virtually) crashed into trees, crushed people, and fled when seeing the police; all things considered bad. Through these trials, errors, and feedback, it started being able to drive autonomously. Another machine looked at the picture of a cat and, when prompted, concluded it was a dog. It was told "wrong!" and asked to try again with different photographs, many times over.

Through these iterations, these machines learned which features, and combinations of features, of what they were seeing were most systematically associated with the right result. The algorithm, the series of steps classifying, organizing, and ranking information, tasked with concluding "cat!" or "dog!", figured out that the longer the nose, the more likely the "thing" was to be a "dog", whereas considering whether it had long or short hair was not a very valuable use of its neurons. It was learning how to "connect the dots". The machine was learning. The gist of big data and current AI(s) is machine(s) learning.

Of course, there are many more caveats and complexities than these, but for most intents and purposes it suffices to understand that current 'narrow' AI (as opposed to a 'general' AI, which fuels the most vivid fears about robots taking over the world but does not seem a realistic prospect in the foreseeable future) is about this: taking lots of data as inputs and learning how to connect them to output data in the form of desirable or observed outcomes. Through training, testing, and learning based on past cases, the machine is able to land on the "right results".

The applications and implications of this are already far-reaching. Is this person going to like this book because someone just like him or her (including him or her last month) did? Is this teenager on the verge of dropping out of school? Is this person Kieran McKay or Abigail Adeyemi? Should he or she get a loan? Should the driverless car kill a pregnant woman or five elderly people if it has no choice but to run over either? Several tough related questions come to mind and fuel ongoing debates. If algorithms seem racist, is it because their developers embedded their biases, or rather because their predictions repeat past biases? What happens when the algorithm encounters cases it has not seen before (a dog with a flat face, or a human with darker skin than in the data set it was trained on)? Fundamentally, how should those estimations, predictions, and prescriptions be used, and by whom, when, and if at all?

These risks are real. They need to be known and addressed to limit the worst typical side effects of technological change, at least in the short run, including widening inequities. But big data and AIs are not 'black magic', nor are the algorithms running them complete 'black boxes'. Given their ubiquity and power, it is important to understand how they work and what insights we could glean from them to promote positive social change. Critically, it is not (just) about using AI to optimize supply chains (and more), which will continue to have major impacts on societies and economies, but about being inspired and supported by AI to improve human systems.

What is the 'good magic' of current AIs? In short, the good magic is the "credit assignment (or reward) function": the ability to assign credit for what "works", in other words for what allows an algorithm to get the right (intended) result. In the example above, the computer tasked with telling a dog from a cat will extract millions of features from the image it sees, then assemble them in millions of ways, take guesses and, over time, learn which combinations of paths allow it to get the right answer (assuming everyone "calls a cat a cat", as the French say¹) almost all the time. The reward function, and the ability to learn through iterations, reinforce the combinations of features to look for and use. In contrast, those that lead to the wrong result will be weakened. The machine will grow an incentive not to use them.

As it turns out, or so we think, applying the core principles and requirements of AI to entire human systems in a consistent, careful manner to design and deploy "human-machine (eco)systems" could be quite transformative, for the better.
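The learning loop sketched in this article — guess, be told "wrong!", adjust, repeat — can be illustrated with a tiny perceptron-style learner. Everything in this sketch (the single nose-length feature, the numbers, the function names) is made up for illustration and is not from the article; real systems extract millions of features, but the credit-assignment idea is the same.

```python
# Illustrative sketch (hypothetical data): a one-feature learner that
# discovers "longer nose => more likely dog" purely from labeled examples.

def train(examples, epochs=20, lr=0.1):
    """Learn a weight and bias linking nose length to 'dog' (1) or 'cat' (0)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for nose_length, label in examples:
            guess = 1 if w * nose_length + b > 0 else 0
            error = label - guess  # the "wrong!" feedback signal
            # Credit assignment: strengthen the feature's weight when it led
            # to the right answer being missed, weaken it when it misled.
            w += lr * error * nose_length
            b += lr * error
    return w, b

# Made-up training data: (nose length in cm, 1 = dog, 0 = cat)
data = [(7.0, 1), (6.5, 1), (8.2, 1), (2.1, 0), (2.8, 0), (1.9, 0)]
w, b = train(data)

def predict(nose_length):
    return "dog" if w * nose_length + b > 0 else "cat"

print(predict(7.5))  # -> dog
print(predict(2.0))  # -> cat
```

This is the simplest possible form of the reward dynamic the article describes: combinations of features that yield the right answer are reinforced through the weight update, and those that yield the wrong answer are weakened.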


¹ From the French phrase "Appeler un chat un chat": literally, "to call a cat a cat", the equivalent of "calling a spade a spade".



© International Telecommunication Union