
ed decision that has been made. At most, they typically require notifying a person that a future decision will be automated, and perhaps offer an opportunity to opt out of it.183

Some countries go a little further. For instance, Brazil’s Data Protection Act 2018 provides the consumer with the right to request a review of decisions taken solely on the basis of automated processing of personal data affecting their interests. This includes decisions designed to define their profile or evaluate aspects of their personality, and the right to request clear and relevant information on the criteria and procedures used for the automated decision.184

Some policy makers do lean towards greater scrutiny of automated decisions under data protection and privacy law. The EU’s Article 29 Data Protection Working Party, for instance, advised that data controllers should avoid over-reliance on correlations, and should provide meaningful information to the concerned individual about the logic involved in automated decision-making.185 Such disclosures might include the main characteristics considered in reaching the decision, the source of this information and its relevance. In the same vein, data controllers may be required to show that their models are reliable by verifying their statistical accuracy and correcting inaccuracies, particularly to prevent discriminatory decisions.186
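To make this kind of disclosure more concrete, the sketch below (not taken from the report) shows one way a provider using a simple scorecard-style model might generate the "main characteristics considered in reaching the decision" for an individual applicant. The input names, weights and threshold are hypothetical, and a real disclosure would also need to describe the source and relevance of each input:

```python
# Illustrative sketch only: generating the "main characteristics considered"
# behind an automated credit decision. The input names, weights and threshold
# are hypothetical and are not drawn from the report or any cited law.
import math

WEIGHTS = {                                # scorecard-style weight per input
    "months_since_last_missed_payment": 0.9,
    "debt_to_income_ratio": -1.4,
    "account_age_years": 0.5,
    "recent_credit_applications": -0.7,
}
INTERCEPT = 0.2
APPROVAL_THRESHOLD = 0.5

def decide_and_explain(applicant: dict) -> dict:
    """Return the decision together with the inputs that drove it."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    score = INTERCEPT + sum(contributions.values())
    probability = 1 / (1 + math.exp(-score))           # logistic link
    # Rank inputs by how strongly each pushed the outcome either way.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return {
        "decision": "approve" if probability >= APPROVAL_THRESHOLD else "decline",
        "main_characteristics": [
            {"input": name, "weighed": "in favour" if c > 0 else "against"}
            for name, c in ranked
        ],
    }

# Example: standardised inputs for one applicant; the output lists the
# characteristics in order of influence on the outcome.
print(decide_and_explain({
    "months_since_last_missed_payment": 1.2,
    "debt_to_income_ratio": 0.8,
    "account_age_years": -0.3,
    "recent_credit_applications": 1.5,
}))
```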
The Future of Privacy Forum has suggested that explaining machine learning models should include documenting how the model was chosen, providing a legal and technical analysis to support this. This would include identifying the trade-offs between explainability and accuracy. It would record decisions to make a model more complex despite the impact of diminished explainability, and take account of the materiality of the output to individuals and third parties (e.g., there is more at stake in medical treatment than movie recommendations).187
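One way of picturing such documentation is the minimal sketch below. The record structure, field names and example values are hypothetical (they are not a format proposed by the Future of Privacy Forum); the point is simply that the choice of model, the explainability-accuracy trade-off accepted, and the materiality of the output are written down and reviewable:

```python
# Hypothetical model-selection record; not a format required by any law or
# guidance cited in this report.
from dataclasses import dataclass, field

@dataclass
class ModelSelectionRecord:
    model_name: str
    candidate_models: list        # alternatives that were considered
    accuracy: float               # performance on a held-out test set
    explainability_notes: str     # how individual decisions can be explained
    tradeoff_rationale: str       # why extra complexity was (or was not) accepted
    output_materiality: str       # stakes for individuals and third parties
    reviews: list = field(default_factory=list)   # legal and technical sign-offs

record = ModelSelectionRecord(
    model_name="gradient_boosted_trees_v3",
    candidate_models=["logistic_regression_v1", "gradient_boosted_trees_v3"],
    accuracy=0.87,
    explainability_notes="Per-decision input attributions are disclosed to applicants.",
    tradeoff_rationale="Accuracy gain over the linear model accepted because "
                       "attribution tooling preserves per-decision explanations.",
    output_materiality="High: decisions affect individuals' access to credit.",
    reviews=["data protection impact assessment 2024-02", "model risk sign-off MR-114"],
)
```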
Some argue that the lack of effective explanations presents an accountability gap, and that data protection and privacy laws should confer on consumers an effective “right to reasonable inferences.”188

Where inferences carry a high risk of rendering adverse decisions, harming reputation or invading privacy, such a right could require a data controller to explain before processing (ex ante) the relevance of certain data for the inferences to be drawn, the relevance of the inferences for the type of automated decision and processing, and the accuracy and statistical reliability of the method used. Such explanations could be supported by an opportunity to challenge decisions after they are made (ex post).

This would permit, in addition to contesting an automated decision on the basis of the accuracy of its inputs, challenging verifiable inferences on which it is based, such as the individual’s level of income or assets, health, or relationship status. Non-verifiable inferences might be challenged by provision of supplemental data that might alter their conclusions.

Efforts to introduce regulation that intrudes into the substance of decisions or the process of decision-making, as opposed to the mere collection, use and sharing of data, may be viewed by some as burdening a nascent innovative sector that should be left to develop products that benefit consumers, and refine them under competitive pressure. Others will view it as seeking to rebalance the disempowerment of consumers resulting from the removal of human elements in key stages of decision-making (see further in section 7.3). In a human interaction, the individual may have an opportunity to meet or speak with a decision-maker or someone who can influence the decision-maker, and to explain where inferences were erroneous. For the right to human intervention in automated decisions to have substance, it may be necessary to flesh out the integrity of process that the human intervention is meant to achieve.

Data protection laws do not typically guarantee the accuracy of decision-making, and this likely extends to the accuracy of inference data, so that even where incorrect inferences have been drawn from accurate data, the individual may not have a right to rectify such inferences.189

This would more typically be the remit of sector-specific laws, such as a financial services law, but in most countries such laws will only prohibit decision-making that is discriminatory according to specified criteria (such as race, gender or religion) and will not prescribe the correctness of the decision itself. In this sense, a poor algorithm is similar to a poor bank clerk who fails to make a good decision due to poor judgment or inexperience: it may be poor business practice but is not unlawful.

However, a financial services law may prescribe certain procedures intended to ensure that decisions are more likely to be good ones. For instance, it may require a financial service provider to carry out an assessment of the customer’s needs that will make it more likely that a product suits him or her.190 It could also require risk assessments that will ensure that risks are considered, including in the algorithms themselves.


