
191   Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press 2015); Viktor Mayer-Schönberger and Thomas Ramge, Reinventing Capitalism in the Age of Big Data (Basic Books 2018); Sandra Wachter, Brent Mittelstadt and Chris Russell, ‘Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR’ [2017] arXiv preprint arXiv:1711.00399; Accountable Algorithms, at footnote 129.
192   Sandra Wachter, Brent Mittelstadt and Chris Russell, Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR, Harvard Journal of Law & Technology, 2018, https://arxiv.org/abs/1711.00399.
193   See Accountable Algorithms, at footnote 129; David Lehr and Paul Ohm, Playing with the Data: What Legal Scholars Should Learn About Machine Learning, UC Davis Law Review, 2017, available at https://lawreview.law.ucdavis.edu/issues/51/2/Symposium/51-2_Lehr_Ohm.pdf.
194   See Ethically Aligned Design (see footnote 224), at p159.
            195   Ibid at p152.
            196   Ibid at p159.
197   Article 29 Data Protection Working Party, ‘Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016/679’, see footnote 56, at p10.
            198   IEEE Global Initiative (see footnote 224) at p153.
            199   GDPR, Article 22(3).
200   Andrea Roth, Trial by Machine, 104 GEO. L.J. 1245 (2016).
            201   Wachter & Mittelstadt, footnote 57.
            202   Joel Feinberg, Wrongful Life and the Counterfactual Element in Harming, in FREEDOM AND FULFILLMENT 3 (1992).
203   For example, the US Supreme Court in Clapper v. Amnesty International USA, 133 S. Ct. 1138 (2013), rejected claims against the US Government over increased collection of data for surveillance purposes on the basis that the plaintiffs had not shown “injury in fact.”
204   Spokeo (see footnote 115).
            205   Daniel J. Solove and Danielle Keats Citron, Risk and Anxiety: A Theory of Data Breach Harms, 96 Texas Law Review 737
                (2018).
206   In Remijas v. Neiman Marcus Group, LLC, 794 F.3d 688, 693-94 (7th Cir. 2015), the US Court of Appeals for the Seventh Circuit held that the plaintiffs’ knowledge that their personal credit card information had been stolen by individuals who planned to misuse it (other plaintiffs’ cards having already been used fraudulently) was sufficient harm to give them standing to sue.

207   In Spokeo (see footnote 115), the US Supreme Court found that a people search engine’s incorrect description of a person could potentially create a sufficient risk of harm to give him standing to sue.
208   For a lively description of these, see Cathy O’Neil, Weapons of Math Destruction (2016). For a useful taxonomy of potential harms from automated decision-making, see Future of Privacy Forum, Unfairness by Algorithm: Distilling the Harms of Automated Decision-Making, December 2017.
            209   IEEE Global Initiative (see footnote 224) at p156.
            210   IEEE Global Initiative (see footnote 224) at p156.
            211   Ibid.
212   See https://www.sv-europe.com/crisp-dm-methodology/.
213   See https://www.nist.gov/privacy-framework.
214   See, for example, Guidance on Model Risk Management, Board of Governors of the Federal Reserve System & Office of the Comptroller of the Currency, April 2011, available at https://www.federalreserve.gov/supervisionreg/srletters/sr1107a1.pdf; and the European frameworks: Directive 2013/36/EU of the European Parliament and of the Council of 26 June 2013 on access to the activity of credit institutions and the prudential supervision of credit institutions and investment firms; Regulation No. 575/2013 of the European Parliament and of the Council of 26 June 2013 on prudential requirements for credit institutions and investment firms; and the European Central Bank guide to the Targeted Review of Internal Models (the TRIM Guide).
            215   Thus the IEEE’s Global Initiative (see footnote 224) recommends that “Automated systems should generate audit trails
                recording the facts and law supporting decisions.”
            216   The following summary of risk management in machine learning is drawn from Future of Privacy Forum, Beyond
                Explainability: A Practical Guide to Managing Risk in Machine Learning Models (2018).


