217 https://ainowinstitute.org/.
218 Dillon Reisman, Jason Schultz, Kate Crawford, Meredith Whittaker, Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability, April 2018, https://ainowinstitute.org/aiareport2018.pdf.
219 IEEE Global Initiative (see footnote 224) at p. 156.
220 Ann Cavoukian, Privacy by Design: The Seven Foundational Principles (Information and Privacy Commissioner of Ontario, 2011), https://www.ipc.on.ca/wp-content/uploads/Resources/7foundationalprinciples.pdf; and David Medine, Privacy by Design for Financial Services, https://www.livemint.com/Opinion/1ShpKAOC59VlXiwgCkVv8O/Privacy-by-design-for-financial-services.html.
221 The Carnegie Mellon programme includes: 1) Design cutting-edge products and services that leverage big data while preserving privacy; 2) Propose and evaluate solutions to mitigate privacy risks; 3) Understand how privacy-enhancing technologies can be used to reduce privacy risks; 4) Use techniques to aggregate and de-identify data, and understand the limits of de-identification; 5) Understand current privacy regulatory and self-regulatory frameworks; 6) Understand current technology-related privacy issues; 7) Conduct privacy-related risk assessments and compliance reviews, respond to incidents, and integrate privacy into the software engineering lifecycle phases; 8) Conduct basic usability evaluations to assess the usability and user acceptance of privacy-related features and processes; and 9) Serve as an effective privacy subject-matter expert, working with interdisciplinary teams. Master of Science in Information Technology – Privacy Engineering program. See also https://bigid.com/the-advent-of-privacy-engineering/.
222 See, e.g., OECD Moves Forward on Developing Guidelines for Artificial Intelligence (AI), OECD (Feb. 20, 2019), http://www.oecd.org/going-digital/ai/oecd-moves-forward-on-developing-guidelines-for-artificial-intelligence.htm; and Institute of Business Ethics, Business Ethics and Artificial Intelligence 2–3 (2018).
223 When Computers Decide: European Recommendations on Machine-Learned Automated Decision Making, Informatics Europe & EUACM (2018), http://www.informatics-europe.org/news/435-ethics_adm.html.
224 IEEE, Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems, version 2 (2018), https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_v2.pdf. A previous version 1 was published for consultation in 2016, http://standards.ieee.org/develop/indconn/ec/ead_v1.pdf.
225 Tenets, Partnership on AI, available at https://www.partnershiponai.org/tenets/.
226 Ethical Principles for Artificial Intelligence and Data Analytics, Software & Information Industry Association (Sep. 15, 2017), available at http://www.siia.net/Portals/0/pdf/Policy/Ethical%20Principles%20for%20Artificial%20Intelligence%20and%20Data%20Analytics%20SIIA%20Issue%20Brief.pdf?ver=2017-11-06-160346-990.
227 AI at Google: our principles, Google (June 7, 2018), available at https://www.blog.google/technology/ai/ai-principles/. Also, Google, Perspectives on Issues in AI Governance, available at https://ai.google/static/documents/perspectives-on-issues-in-ai-governance.pdf.
228 Our approach, Microsoft, available at https://www.microsoft.com/en-us/ai/our-approach-to-ai.
229 Principles for Accountable Algorithms and a Social Impact Statement for Algorithms, Fairness, Accountability, and Transparency in Machine Learning, available at https://www.fatml.org/resources/principles-for-accountable-algorithms.
230 Privacy International, Privacy and Freedom of Expression in the Age of Artificial Intelligence (2018), available at https://privacyinternational.org/sites/default/files/2018-04/Privacy%20and%20Freedom%20of%20Expression%20%20In%20the%20Age%20of%20Artificial%20Intelligence.pdf.
231 Asilomar AI Principles, Future of Life Institute (2017), available at https://futureoflife.org/ai-principles/. This group of
academics, AI industry leaders, actors and others formulated 13 principles: (1) Safety: AI systems should be safe and
secure throughout their operational lifetime, and verifiably so where applicable and feasible; (2) Failure Transparency:
If an AI system causes harm, it should be possible to ascertain why; (3) Judicial Transparency: Any involvement by an
autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent
human authority; (4) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral
implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications; (5)
Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured
to align with human values throughout their operation; (6) Human Values: AI systems should be designed and operated
so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity; (7) Personal Privacy:
People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze
and utilize that data; (8) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail
people’s real or perceived liberty; (9) Shared Benefit: AI technologies should benefit and empower as many people
as possible; (10) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all
of humanity; (11) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to
accomplish human-chosen objectives; (12) Non-subversion: The power conferred by control of highly advanced AI