Page 91 - AI Standards for Global Impact: From Governance to Action
f) Sustainability. Business is keen to use AI for profit, but AI is costly in both energy and water.
New ways of using AI and constraining its negative impacts are needed.
g) Good governance can resemble good regulation and security, but approaches differ
between jurisdictions: US legislators are currently more focused on AI than on security,
with more than 40 US states already working on AI legislation. The EU, China, and Brazil,
for example, also have AI legislation.
h) Standards are only as good as their application and adoption. Business leaders should be
educated, and businesses need budget to spend on governance and security. Internal
committees need to be set up.
i) While AI greatly enhances cybersecurity capabilities, it also introduces new vulnerabilities
and ethical dilemmas. Effective cybersecurity strategies increasingly rely on AI-driven
solutions, but they must also incorporate measures to mitigate AI-related risks; balancing
innovation with responsibility is key to harnessing AI's potential for good. Users also
need internal governance when deploying AI.
j) An intuitive and adaptive cyber posture, defined by zero-latency networks and quantum
leaps, will be needed across industries. "Cyber immunity" at every layer will create networks
and infrastructures that are inherently secure and self-learning. AI-induced digital intuition
is one of the pillars of a cybersecurity strategy that allows intelligent adaptation.
k) The ability of AI systems to out-innovate malicious attacks by mimicking aspects
of human immunity will be the line of defence for attaining cyber resilience, based on both
supervised and unsupervised machine learning. Such systems can be designed to make
the right decisions from context-based data, pre-empt attacks on the basis of initial
indicators of compromise or attack, and take intuitive remediation measures, allowing any
digital infrastructure and organization to become more resilient and immune to cyber threats.
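The combination of supervised and unsupervised learning described above can be sketched in a minimal form: an unsupervised stage learns a baseline of normal behaviour and flags outliers as initial indicators of compromise, while a supervised stage learns a decision boundary from labelled incidents. All function names, metrics, and thresholds below are hypothetical illustrations, not part of the workshop's output.

```python
import statistics

def fit_unsupervised(baseline):
    """Unsupervised stage: learn normal behaviour (mean/stdev of a
    traffic metric, e.g. requests per second) from unlabeled data."""
    return statistics.mean(baseline), statistics.stdev(baseline)

def is_anomalous(value, mean, stdev, z_threshold=3.0):
    """Flag values that fall far outside the learned baseline as
    initial indicators of compromise."""
    return abs(value - mean) > z_threshold * stdev

def fit_supervised(labeled):
    """Supervised stage: learn a decision threshold from labelled
    (value, is_attack) pairs -- the midpoint between class means,
    i.e. a minimal one-dimensional classifier."""
    attacks = [v for v, is_attack in labeled if is_attack]
    benign = [v for v, is_attack in labeled if not is_attack]
    return (statistics.mean(attacks) + statistics.mean(benign)) / 2

def classify(value, threshold):
    """Supervised verdict: above the learned threshold counts as an attack."""
    return value >= threshold

def should_remediate(value, mean, stdev, threshold):
    """Combine both stages: remediate if either the unsupervised
    detector or the supervised classifier raises a flag."""
    return is_anomalous(value, mean, stdev) or classify(value, threshold)
```

A production system would replace these one-dimensional heuristics with multivariate models, but the division of labour (baseline learning, outlier flagging, labelled classification, automated remediation) is the same.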
Key Takeaways:
a) Leveraging both supervised and unsupervised machine learning for cyber defence.
b) Enhancing security with knowledge of cybersecurity and resilience, particularly through
developing cyber immunity.
c) The data used by AI systems should be transparent and of sufficient quantity.
d) AI sustainability is rooted in resource constraints and data scarcity.
e) Responsible AI should align with transparency, explainability, fairness, bias mitigation,
security, resilience, and privacy.
f) Watermarking could be made mandatory.
g) Hallucination attacks pose an existential threat and make the standardization of agentic AI
uniquely dependent on the trust model.
h) Organizations should consider establishing an internal committee for AI within their
“Enterprise AI Governance” scope.
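As an illustration of the watermarking takeaway, one simple approach embeds an invisible provenance tag in generated text using zero-width Unicode characters. This is a minimal sketch of one possible technique, not a method endorsed by the workshop; all names are illustrative, and real watermarking schemes for AI output are considerably more robust.

```python
# Zero-width characters used as invisible bit carriers:
# ZERO WIDTH NON-JOINER encodes 0, ZERO WIDTH JOINER encodes 1.
ZW = {"0": "\u200c", "1": "\u200d"}
REV = {v: k for k, v in ZW.items()}

def embed(text, tag):
    """Append the tag's bits as zero-width characters after the text."""
    bits = "".join(f"{ord(c):08b}" for c in tag)
    return text + "".join(ZW[b] for b in bits)

def extract(text):
    """Recover the embedded tag, if any; empty string if none present."""
    bits = "".join(REV[c] for c in text if c in REV)
    usable = len(bits) - len(bits) % 8
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, usable, 8))
```

This kind of scheme is trivially stripped by normalizing the text, which is precisely why mandatory, standardized watermarking (rather than ad hoc tagging) was raised as a takeaway.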
12.6 Next steps
It was highlighted that the workshop outlined the contours of the standardization work and that
the effort is at the stage the OSI model was at 40 years ago.