Assurance and conformity assessment of digital products and services
Ernst & Young
Session 410
The panel will discuss the challenges that arise when performing assurance or conformity assessment of digital products and services. We will review what can be learnt from the existing audit profession, and the work that governments, standards associations, technical bodies and NGOs are doing to provide methodologies that give citizens and governments the confidence they need to know where, when and how to use these technologies in a way that maximizes their benefits and minimizes negative impacts.
Dr. Ansgar Koene is Global AI Ethics and Regulatory Leader at EY (Ernst & Young), where he leads the AI-related public policy team and contributes to the work on AI governance and EY’s Trusted AI framework. He is also a Senior Research Fellow at the University of Nottingham, chairs the IEEE P7003 Standard for Algorithmic Bias Considerations working group, and is a convener for the CEN-CENELEC JTC21 “AI” committee’s work on AI conformity assessment. He is a trustee of the 5Rights Foundation for the Rights of Young People Online and advises on AI and data ethics for AfroLeadership, a pan-African NGO.
Ansgar has a multi-disciplinary research background, having worked and published on topics ranging from Policy and Governance of Algorithmic Systems (AI), data privacy, AI Ethics, AI Standards, bio-inspired Robotics, and AI and Computational Neuroscience, to experimental Human Behaviour/Perception studies. He holds an MSc in Electrical Engineering and a PhD in Computational Neuroscience.
Kai Zenner is Head of Office and Digital Policy Adviser for MEP Axel Voss (European People’s Party Group) in the European Parliament. Describing himself as a digital enthusiast, he focuses on emerging technologies, data sharing and the EU’s digital transition. He is currently involved in the political negotiations on the AI Act and the ePrivacy Regulation, as well as the upcoming AI Liability Directive. He graduated in politics (University of Edinburgh) and in law (University of Münster). Before moving to the European Parliament, he worked as a research associate at the European office of the Konrad Adenauer Foundation in Brussels.
Ghazi Ahamat is a Team Lead at the Centre for Data Ethics and Innovation (CDEI), an expert team within the UK government that works on enabling responsible innovation in AI and data-driven technologies. Ghazi leads the CDEI’s work on AI assurance and is currently developing an AI assurance roadmap, which sets out the CDEI’s view of the current AI assurance ecosystem in the UK. He was previously a co-author of the CDEI’s Review into Bias in Algorithmic Decision Making.
Ghazi previously founded the Victorian Centre for Data Insights, a state government central analytics team in Australia, and as a consultant with BCG he advised governments in Australia and the Middle East on data science, strategy and transformation. He holds a Master’s in Technology Policy (with Distinction) from the University of Cambridge, where he focused on the policy and strategic implications of AI. He also studied Economics and Pure Mathematics at the University of Melbourne.
Ashley Casovan currently serves as the Executive Director of the Responsible AI Institute (RAII), a multi-stakeholder non-profit dedicated to mitigating harm and unintended consequences of AI systems. RAII’s current objective is completing the architecture for the world’s first independent, accredited certification for responsible AI systems. Previously, Ashley served as the Director of Data and Digital for the Government of Canada, where she led the development of the first national government policy for responsible AI.
Actively working to influence change, Ashley is a member of the OECD Network of Experts on AI, Chair of the Responsible AI Certification Program with the World Economic Forum’s Global AI Action Alliance, an executive board member of the International Centre of Expertise in Montréal on Artificial Intelligence (CEIMIA), and a member of the IFIP/IP3 Global Industry Council (GIC) within the UN. She also lends her time as an expert advisory committee member for the Global Index on Responsible AI.
Aurelie works on leading global initiatives for the implementation of Responsible AI. Among these, she is:
- Chair of the standards committee representing Australia in international AI standardization;
- Co-chair of the first accredited global certification program for AI, developed under the Global AI Action Alliance for the World Economic Forum; and
- An expert for the Institute of Electrical and Electronics Engineers (IEEE), working with them on various AI standards initiatives.
As a consultant she advises ASX 20 companies on the responsible implementation of AI, and she also works as Principal Research Consultant on Responsible AI for CSIRO’s Data61, Australia’s national science agency.
- C5. Building confidence and security in use of ICTs
- C10. Ethical dimensions of the Information Society
- C11. International and regional cooperation
Well-defined and consistently implemented methodologies for performing assurance and conformity assessment of digital services provide a governance infrastructure for establishing that digital services comply with the quality expectations defined in standards or regulations. This quality-safeguarding mechanism is key to building public confidence and security in the use of ICTs (C5) and, when assessing against ethics-based standards and regulations, supports the ethical dimensions of the information society (C10). Due to the international nature of most digital service provision, successful best practice for assurance and conformity assessment of digital services both supports and depends on international and regional cooperation (C11).
- Goal 8: Promote inclusive and sustainable economic growth, employment and decent work for all
- Goal 9: Build resilient infrastructure, promote sustainable industrialization and foster innovation
- Goal 16: Promote just, peaceful and inclusive societies
Survey of AI Risk Assessment Methodologies (EY and Trilateral Research)
REPORT on artificial intelligence in a digital age (europa.eu)
The roadmap to an effective AI assurance ecosystem - GOV.UK (www.gov.uk)
AI assurance guide (cdeiuk.github.io)
Responsible AI Institute - RAII Certification
An Artificial Intelligence Standards Roadmap: Making Australia’s Voice Heard
Pragmatic digital policy | Brussels | Digitizing Europe (kaizenner.eu)