increasing number of data protection and privacy laws, including the GDPR, provide the right to obtain human intervention, to express one's views and to contest the decision.199

Such a right originates from notions of due process, which may be undermined if decisions are made by a machine without further recourse. It also originates from the view that treating people with respect and dignity includes ensuring that important decisions over their lives involve not merely a machine but another human being. This concern is amplified by the risk of machines producing erroneous results or behaving discriminatorily.200

The ability to contest an automated decision is not merely a matter of clicking a request for reconsideration and receiving another, final automated decision; that would simply produce a further automated decision, itself subject to a right to contest. Ultimately, if an automated decision is to be reviewed, it is necessary to ensure that it is subject to some form of human intervention, in which the individual has an opportunity to present their point of view to another human being, who will consider whether the automated decision should be revised.

Such human intervention may vary in its degree of involvement, from a full right of appeal on the entire substance of the matter to a mere check that the algorithm at least received accurate data inputs, without any verification of its functioning. Overall, however, it is likely that such rights to contest decisions with human intervention will be limited to cases where the input data was incorrect or incomplete, the requisite consent of the individual was not obtained, or there was some other infringement of data protection principles. One might describe these as procedural rather than substantive matters. The "reasoning" behind the substance of decisions, which inhabits the design and functioning of algorithms, would likely not be subject to contest under data protection laws.

This does not mean that sector-specific laws, regulations and standards cannot require providers to modify or nullify, for substantive reasons, decisions generated by machine learning models. However, it does mean that until such laws, regulations or standards are introduced, consumers have limited recourse to challenge an automated decision.201
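To make the narrow, procedural scope of these contest rights concrete, the sketch below shows how a provider might triage contest requests, routing only the recognised procedural grounds to a human reviewer. This is a minimal illustration under assumptions of our own: the grounds, names and routing rules are hypothetical, not drawn from any particular law or provider's system.

# Hypothetical triage of consumer contests against automated decisions.
# Grounds and routing rules are illustrative assumptions only.
from dataclasses import dataclass
from enum import Enum, auto

class Ground(Enum):
    INACCURATE_INPUT_DATA = auto()   # input data incorrect or incomplete
    MISSING_CONSENT = auto()         # requisite consent not obtained
    OTHER_DP_INFRINGEMENT = auto()   # other data protection principle breached
    SUBSTANTIVE_REASONING = auto()   # challenge to the model's "reasoning" itself

# Procedural grounds that, per the discussion above, data protection
# law would typically allow a consumer to raise.
REVIEWABLE = {
    Ground.INACCURATE_INPUT_DATA,
    Ground.MISSING_CONSENT,
    Ground.OTHER_DP_INFRINGEMENT,
}

@dataclass
class Contest:
    decision_id: str
    ground: Ground
    consumer_statement: str  # the individual's point of view, for the reviewer

def route_contest(contest: Contest) -> str:
    """Send procedural grounds to a human reviewer; flag substantive ones."""
    if contest.ground in REVIEWABLE:
        # A human being, not another model, must consider the statement
        # and decide whether the automated decision should be revised.
        return "human_review_queue"
    # Absent sector-specific rules, a challenge to the algorithm's
    # substantive reasoning has no route under data protection law.
    return "no_recourse_under_dp_law"

if __name__ == "__main__":
    c = Contest("dec-001", Ground.MISSING_CONSENT, "I never consented to this data use.")
    print(route_contest(c))  # -> human_review_queue

The point of the sketch is that the human review queue is reached only on procedural grounds; a complaint about the model's substantive "reasoning" falls through unless sector-specific rules provide otherwise.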
While individuals may be protected from proscribed collection, use and sharing of their personal data (particularly sensitive or special categories of data), and with regard to the accuracy and completeness of the data used in automated decisions about them, they have little protection when it comes to the way decisions are actually made.

5.4 Evaluating harm and liability to consumers

Accountability depends ultimately on being held responsible in law, including compensating for harm that has been caused. One difficulty in developing policy, legal obligations and remedies for consumers in the area of data protection arises from the intangible nature of the harm against which the consumer needs to be protected, or for which they need to be compensated.

This can undermine a consumer's claim from the outset. To have standing in a court to bring a claim to recover compensation, it is typically necessary to allege that one has been harmed. Courts have struggled to identify harm from violations of data protection and privacy law, often producing very different legal views, and many claims have been dismissed because consumers failed to show the harm they had suffered.

Whether or not a person has suffered harm is often considered against a counterfactual, i.e., whether the person is put in a worse position than if the event had not happened.202 Demonstrating harm is particularly challenging where there has not yet been any pecuniary or physical loss, for instance where a system has been breached and data has been obtained without permission but has not (yet) been used to steal money. Harm may be viewed as conjectural, whereas in some legal systems plaintiffs must show that they have in fact suffered injury.203

Theories of harm from personal data being obtained unlawfully include the risk of fraud or identity theft, and the anxiety the individual may experience about such risks. While intangible injuries are more difficult to recognise and analyse, they can be just as real and concrete as pecuniary damage.204 Indeed, not only may intangible harms be genuine; it is increasingly argued that the very risk of harm (i.e., where damage has not yet materialised but the risk is present) should be treated as legitimate harm for the purpose of consumer claims.

Such harm may be evaluated according to the likelihood and magnitude of future injury, the sensitivity of the data exposed, the possibility of mitigating harms and the reasonableness of preventative measures.205 Courts have tended to be more sympathetic to plaintiffs in cases of identity theft, due to the risk of fraud,206 or where inaccurate information about a person is published.207
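As an illustration only, the factors just listed can be read as a rough expected-harm assessment: the likelihood of future injury weighed against its magnitude, adjusted upwards for sensitive data and downwards where mitigation remains possible. The scales, weights and example values below are hypothetical assumptions, not a legal test.

# Hypothetical scoring of risk-based harm along the factors discussed above.
# All scales and weights are illustrative assumptions, not a legal standard.
from dataclasses import dataclass

@dataclass
class BreachAssessment:
    likelihood_of_injury: float   # probability of future injury, 0.0-1.0
    magnitude_of_injury: float    # severity if it occurs, 0.0-1.0
    data_sensitivity: float       # special-category data scores higher, 0.0-1.0
    mitigation_available: float   # how far harm can still be prevented, 0.0-1.0

def expected_harm_score(a: BreachAssessment) -> float:
    """Expected harm = likelihood x magnitude, scaled up for sensitive
    data and scaled down where effective mitigation remains possible."""
    base = a.likelihood_of_injury * a.magnitude_of_injury
    sensitivity_factor = 1.0 + a.data_sensitivity           # 1.0-2.0
    mitigation_factor = 1.0 - 0.5 * a.mitigation_available  # 0.5-1.0
    return base * sensitivity_factor * mitigation_factor

# Example: breached payment-card data, not yet used to steal money.
card_breach = BreachAssessment(
    likelihood_of_injury=0.4,   # fraud plausible but not yet attempted
    magnitude_of_injury=0.7,    # pecuniary loss if it happens
    data_sensitivity=0.8,       # financial data is highly sensitive
    mitigation_available=0.6,   # cards can be cancelled and reissued
)
print(f"{expected_harm_score(card_breach):.2f}")  # -> 0.35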
In the case of automated decision-making, there are various potential types of harm.208 These may