Page 28 - AI for Good - Impact Report
Infobox: The European Union AI Act in a nutshell (Continued)
At the national level, Member States are required to designate supervisory authorities
to enforce the AI Act's obligations. These authorities will ensure that AI systems
comply with the established standards and requirements; for instance, they will verify
the accuracy of conformity assessments conducted by providers of high-risk AI
systems. During investigations, market surveillance authorities will
have the authority to access necessary documentation, including training, validation,
and testing datasets used in the development of high-risk AI systems, as well as the
source code of such systems. Providers of high-risk AI are obligated to cooperate fully
with these authorities, ensuring that AI technologies adhere to the rigorous standards
set forth by the EU AI Act.
The Council of Europe, an international organization established in 1949 to promote human
rights, democracy, and the rule of law across Europe, currently includes 46 member states,
extending beyond the European Union’s 27 Member States. In the context of global AI
frameworks, the Council of Europe has taken a leading role by adopting the first-ever legally
binding international treaty on AI in May 2024: the "Framework Convention on Artificial
Intelligence and Human Rights, Democracy and the Rule of Law." The Framework Convention
applies to the entire lifecycle of AI systems utilized by both private entities and public authorities,
with a clear focus on aligning AI with core human rights principles and democratic values. Its
primary goal is to ensure that AI systems are developed, designed, and deployed in accordance
with existing international standards and European values while addressing potential legal gaps
arising from rapid technological advancements.
Unlike the EU AI Act, the Framework Convention is technology-neutral, meaning it does not
regulate specific AI technologies but instead mandates adherence to overarching principles
that prioritize a human rights-centered approach. It also imposes procedural safeguards,
guaranteeing the protection of individuals impacted by AI systems. This includes the right to
access sufficient information to challenge decisions made or heavily influenced by AI, as well as
ensuring transparency in interactions with AI systems themselves. Furthermore, the Framework
Convention mandates the right to lodge complaints with relevant authorities and emphasizes
the importance of conducting risk and impact assessments to mitigate potential threats to
human rights, democracy, and the rule of law. Importantly, it allows authorities to impose bans
or moratoria on certain high-risk AI applications when necessary.
In September 2024, the Council of Europe opened the treaty for signature to other states and
organizations. The Framework Convention has since been signed by Andorra, Georgia, Iceland,
Norway, the Republic of Moldova, San Marino, and the United Kingdom, as well as Israel,
the United States, and the EU. Its broader human rights focus complements the EU AI Act, providing a
shared ethical and legal foundation upon which further AI regulation within the EU and other
jurisdictions can be built.
To allow organizations within its scope sufficient time to meet the requirements of the EU AI
Act, the full application of the Act is scheduled for August 2027, with a few minor exceptions
extending to 2030. Due to the Act's extraterritorial reach, AI providers based outside the EU will
also need to comply with these rules if they offer their products or services in the EU market. It