Responsible AI governance and the challenges of general purpose AI
EY
Session 262
How to apply AI Risk Management when application context is minimally defined
Building on our 2022 workshop panel, which focused on assurance and conformity assessment of emerging digital products and services such as AI, we will continue the discussion of practical considerations for regulations, standards and corporate governance guidelines, but focus this time on the specific challenges posed by large generative AI models (e.g. ChatGPT, DALL-E 2, Synthesia, Stable Diffusion). Specifically, we will discuss how to fit these systems within AI Risk Management frameworks, given that the core model lacks use-case specificity while those who do specify the use case lack control over that model. In light of these challenges, we will ask how existing efforts towards responsible AI governance might be adapted to incorporate the demands of large generative AI models.

Dr. Ansgar Koene is Global AI Ethics and Regulatory Leader at EY (Ernst & Young) where he leads the AI related public policy team and contributes to the work on AI governance and EY’s Trusted AI framework. As part of this work, he represents EY at the OECD Working Party on AI and Governance and the Business at OECD Committee on Digital Economic Policy (BIAC CDEP).
Ansgar chaired the IEEE P7003 Standard for Algorithmic Bias Considerations working group and is a co-convener for the CEN-CENELEC JTC21 “AI” committee’s work on AI conformity assessment. He is a trustee for the 5Rights foundation for the Rights of Young People Online and advises on AI and Data Ethics for the pan-African NGO AfroLeadership and the smart-mobility start-up Hayden AI.
Ansgar has a multi-disciplinary research background, having worked and published on topics ranging from Policy and Governance of Algorithmic Systems (AI), data-privacy, AI Ethics, AI Standards, bio-inspired Robotics, AI and Computational Neuroscience to experimental Human Behaviour/Perception studies. He holds an MSc in Electrical Engineering and a PhD in Computational Neuroscience.

Dr. Ahmed Imran is an Information Systems researcher at the University of Canberra, where he leads the interdisciplinary research cluster on Digital Inequality and Social Change (RC-DISC). His research interests include the strategic use of IT, eGovernment, ICT4D, the socio-cultural impact of ICT, and organizational transformation driven by ICT. Before moving to UC, Dr. Imran gained over a decade of academic experience at two of Australia's leading universities (ANU and UNSW) and has published in top-tier journals. His extensive experience as an IT manager, together with his work in developing countries, has proven invaluable to his research, providing rich insight into socio-cultural contexts through multiple lenses and opening up interdisciplinary research opportunities. Dr. Imran's research is noted for its real-world applications, which have cemented its importance and relevance within the research community.
- C5. Building confidence and security in use of ICTs
- C7. ICT applications: benefits in all aspects of life — E-business
- C10. Ethical dimensions of the Information Society
- Goal 8: Promote inclusive and sustainable economic growth, employment and decent work for all
- Goal 11: Make cities inclusive, safe, resilient and sustainable
- Goal 16: Promote just, peaceful and inclusive societies
This session builds on the outcomes of our panel last year on "Assurance and conformity assessment of digital products and services" (session 410) and our report on AI Risk Assessment methodologies.