Page 20 - U4SSC Guiding principles for artificial intelligence in cities
• Communicating AI systems’ end-to-end development process transparently to boost trust;
• Assuring users of human autonomy over AI systems; and
• Creating awareness and better understanding of AI systems among their users and city
stakeholders at large.
3.2.5 Accountable
Non-AI systems tend to have accountability mechanisms that enable human beings to question and remedy inaccurate results and their related adverse consequences and impacts. These mechanisms provide assurance and build trust in non-AI systems. It is desirable to maintain the same level of accountability for AI systems.
Consequently, this principle calls on cities to procure, develop, deploy and use accountable AI systems.
Implementation Considerations: Cities can adopt various mechanisms to help enhance the
accountability of their AI systems. These mechanisms include:
• Implementation of appeal and redress processes;
• Verification of results from AI systems (e.g., independent auditing, replicability of results and
decisions);
• Instituting human accountability across the entire AI system for results and decisions during AI
systems’ operations; and
• Instituting human accountability across the entire design and development processes of AI
systems.
3.2.6 Safe and secure
AI systems should function as intended in a reliable and consistent manner; they should also withstand impairment and damage, and avoid causing harm.
Hence, this principle calls on cities to develop, deploy and use safe and secure AI systems.
Implementation Considerations: Cities can adopt various mechanisms to achieve safety and
security in AI systems. These mechanisms include:
• Avoiding malfunctions and harm through extensive testing and identification of vulnerabilities, or requiring such testing by third-party developers contracted by the city;
• Ensuring confidentiality, integrity and availability of AI systems;
• Safeguarding AI systems against cyberattacks and threats;