Page 264 - Kaleidoscope Academic Conference Proceedings 2024
research seeks to advance our understanding of how standardization efforts can contribute to achieving sustainable development goals while mitigating AI-related risks. Through collaborative efforts and international cooperation, stakeholders can harness the transformative potential of AI to create a more sustainable and inclusive future for all.

Specific contributions of this study include:

1. It identifies nine gaps in existing AI incident reporting practices, offering insights into areas for improvement.

2. It proposes nine actionable recommendations to enhance standardization efforts in AI incident reporting, addressing the identified gaps.

3. It facilitates the development of strategies and mechanisms to prevent similar incidents from occurring in the future, thereby promoting trustworthy AI and aligning with the UN SDGs.

The paper is structured as follows: Section 2 reviews the existing literature, delves into the definitions of AI incidents, and reviews available AI incident repositories. Section 3 elaborates on the methodology employed in this study. Observations and results are presented in Section 4, while Section 5 analyses these observations, identifies gaps, draws inferences, and offers corresponding recommendations. Finally, Section 6 provides a summary of the recommendations and conclusions drawn.

2. LITERATURE REVIEW

2.1 AI incident definitions

The review shows that multiple definitions of “AI incident” are available.

OECD [15] defines an “AI incident” as “an event where the development or use of an AI system: (i) caused harm to person(s), property, or the environment; or (ii) infringed upon human rights, including privacy and non-discrimination”.

According to the AI Incident Database (AIID), an “AI incident” is “an alleged harm or near harm event to people, property, or the environment where an AI system is implicated” [16].

‘AI, Algorithmic, and Automation Incidents and Controversies’ (AIAAIC) considers an “incident” in the context of AI as “a sudden known or unknown event (or ‘trigger’) that becomes public and which takes the form of a disruption, loss, emergency, or crisis” [17].

The review thus reveals a gap: the lack of standard terms, definitions, and taxonomies.

2.2 The need for AI incident reporting

Recording AI incidents is crucial for understanding their impact on people, infrastructure, and technology, allowing the development of flexible regulations that evolve with new information and ensure the safe and effective use of AI technologies [18]. Sharing AI incidents improves the verifiability of claims in AI development, highlights overlooked risks, and enhances the effectiveness of external scrutiny by increasing common knowledge of potential AI system behaviors [19]. The AI community is starting to recognize incident sharing as vital to preventing vulnerabilities, biases, and privacy concerns in AI systems, ensuring their trustworthiness and enhancing user experience [20]. Public databases cataloging global AI incidents promote awareness of potential AI harms among policymakers, researchers, and the public, which is essential for developing safe AI systems [21]. Collecting real-world failures in incident databases, as is done in mature industrial sectors such as aviation, is crucial for informing safety improvements and preventing repeated mistakes in designing and deploying intelligent systems [22]. The collected AI incident data highlights unethical AI use, with top-ranking applications including language and computer vision models, intelligent robots, and autonomous driving, revealing issues such as misuse, racism, and bias [23].

2.3 AI incident repositories

The AI Incident Database (AIID) [16] is among the earliest initiatives solely focused on documenting AI incidents. It compiles real-world harms or near harms caused by AI systems. Inspired by similar databases in aviation and cybersecurity, AIID aims to draw insights from past incidents to prevent or minimize future adverse outcomes. Another notable repository is the AIAAIC Repository [17], which compiles incidents and controversies driven by and relating to AI, algorithms, and automation. The AI Vulnerability Database (AVID) [24] is an open-source repository that aims to catalog failure modes for AI models, datasets, and systems. Its objectives include constructing a comprehensive taxonomy of potential AI harms spanning security, ethics, and performance dimensions, and storing detailed information on evaluation use cases and mitigation techniques for each harm category. Another database, the AI Litigation Database (AILD) [25], compiles ongoing and completed legal cases concerning artificial intelligence, machine learning, and related fields, offering comprehensive coverage from complaints to verdicts. Further, the OECD.AI expert group is developing the AI Incidents Monitor (AIM) [26] to track real-time AI incidents for informing policy discussions. Unlike AIID and AIAAIC, AIM currently does not accept open submissions.

Existing AI incident repositories rely on media coverage and voluntary public submissions, lacking robust mechanisms for technical input [18]. Taxonomies prioritize policy and ethics over technical details, while definitions of AI incidents remain inconsistent [21]. Moreover, there is a notable absence of federally operated databases, leaving incident reporting reliant on public sources and lacking mandatory legal disclosure and validation processes [21, 27].
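The OECD definition surveyed in Section 2.1 amounts to a two-prong test. As a purely illustrative sketch, one could encode it as follows; the record fields and function names here are hypothetical and are not drawn from AIID, AIAAIC, or any other cited repository.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical minimal incident record; the fields are illustrative only.
@dataclass
class IncidentReport:
    description: str
    harmed_entities: List[str] = field(default_factory=list)   # e.g. "person", "property", "environment"
    rights_infringed: List[str] = field(default_factory=list)  # e.g. "privacy", "non-discrimination"

def is_oecd_incident(report: IncidentReport) -> bool:
    """Two-prong test after OECD [15]: (i) harm to person(s), property,
    or the environment; or (ii) infringement of human rights."""
    caused_harm = any(e in ("person", "property", "environment")
                      for e in report.harmed_entities)
    infringed_rights = len(report.rights_infringed) > 0
    return caused_harm or infringed_rights

example = IncidentReport(
    "Facial-recognition system misidentifies a suspect",
    rights_infringed=["non-discrimination"],
)
print(is_oecd_incident(example))  # True: prong (ii) is satisfied
```

Note that the same record could fail the AIID or AIAAIC tests quoted above (for instance, if the event never became public), which is precisely the kind of definitional inconsistency the review identifies.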