AI for Good
Addressing AI’s challenges
AI’s development and implementation bring a range of social, environmental, and technical challenges, such as data privacy concerns and the significant energy consumption required to support AI systems. These challenges are closely tied to the evolving nature of the technology, making it critical for policymakers to develop effective, forward-thinking legislation. This section identifies key risks associated with AI and presents solutions to address them. For each challenge, the report provides a clear, high-level overview, followed by examples of governmental or intergovernmental initiatives that illustrate either established practices or emerging approaches to managing these risks.
Policy and Governance
Focus area: Data Privacy and Security
Data privacy and security are critical challenges in the rapidly advancing field of AI, with 80% of data experts surveyed saying that AI is making data security more challenging. 113 AI tools are trained on vast amounts of data from various sources, often including personal data, and there is a risk that this personal data becomes embedded in the model and is shared with other users. Additionally, AI models can be tampered with and could therefore expose the content and personal data of end users. 114
Data privacy and security safeguards are pivotal to ensuring that the technology is designed with minimal risk of data breaches. Governments have a role to play in requiring companies to integrate the principle of privacy by design into AI systems and to limit the retention of unnecessary data. The following approaches are examples of such practices being used by governments:
Established practice: General Data Protection Regulation of the European Union
The European Union's General Data Protection Regulation (GDPR) has set a global standard for data privacy and security. One of its key principles is privacy by design, which requires organizations to integrate data protection into the development of their products and services from the very beginning. 115 Additionally, under GDPR, organizations must practice data minimization, meaning they should collect and process only the personal data that is necessary for a specific purpose. This reduces the amount of data at risk if a breach occurs and lowers the chances of data misuse.
While GDPR was not designed solely for AI, many of the principles mentioned above remain relevant for this technology. Yet those principles may conflict with the nature of AI, which requires large quantities of data for training. 116 Hence, specific AI considerations should inform how the requirements are implemented. For data minimization, this could mean removing personal elements from training data; for data retention, it could mean reusing data only in ways compatible with the purpose for which it was originally collected. Adapting GDPR practices to the specificities of AI is valuable, as GDPR has already led to important benefits for governance, monitoring, and decision-making around personal data. 117
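To make the data-minimization point concrete, the following is a minimal illustrative sketch, not a procedure prescribed by GDPR, of how personal fields might be stripped or pseudonymized from records before they are used to train a model. All field names (name, email, user_id, query_text) are hypothetical examples introduced for illustration only.

```python
# Illustrative sketch only: a data-minimization step that drops direct
# identifiers and pseudonymizes a user identifier before training.
# All field names below are hypothetical examples.
import hashlib

PERSONAL_FIELDS = {"name", "email", "phone", "address"}  # drop entirely
PSEUDONYMIZE_FIELDS = {"user_id"}                        # keep only a salted hash


def minimize_record(record: dict, salt: str) -> dict:
    """Return a copy of the record containing only what training needs."""
    cleaned = {}
    for key, value in record.items():
        if key in PERSONAL_FIELDS:
            continue  # data minimization: do not retain direct identifiers
        if key in PSEUDONYMIZE_FIELDS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            cleaned[key] = digest  # pseudonymized, not readable without the salt
        else:
            cleaned[key] = value
    return cleaned


if __name__ == "__main__":
    raw = {"user_id": 42, "name": "Jane Doe", "email": "jane@example.com",
           "age_band": "25-34", "query_text": "best local hiking trails"}
    print(minimize_record(raw, salt="rotate-this-salt-regularly"))
```

Note that salted hashing is pseudonymization rather than full anonymization; whether such a step satisfies a given legal requirement depends on context and remains a question for legal and data-protection experts.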
Emerging practice: California Consumer Privacy Act of the State of California, United States
The California Consumer Privacy Act (CCPA), enacted in 2018, aims to enhance privacy rights and consumer protection. 118 CCPA provides California residents with the ability to control how