
AI for Good






                       Infobox: The European Union AI Act in a nutshell

                       The development of the EU AI Act has been a carefully orchestrated process, beginning
                       with the formation of a ‘High-Level Expert Group on AI’ by the European Commission.
                       This group was tasked with drafting policy recommendations focused on advancing
                       trustworthy AI. Following these initial efforts, the European Commission released
                       its European approach to AI in February 2020 and subsequently presented the first
                       proposal for the EU AI Act in April 2021. The AI Act represents the result of a five-
                       year political process aimed at balancing innovation with the need for secure and
                       reliable AI systems. Its primary objective is to enhance the functioning of the single
                       market concerning AI products and services, while also promoting a human-centric
                       approach to AI development and deployment, putting the protection of EU citizens
                       at the forefront of this regulation. The Act applies to a broad range of stakeholders,
                       including providers, deployers, importers, and distributors of AI systems within the
                       EU, as well as non-EU entities whose AI systems are used within the EU. This approach
                       reflects the regulatory framework seen in the General Data Protection Regulation
                       (GDPR), emphasizing the importance of safety and innovation in equal measure.

                       The EU AI Act establishes a comprehensive framework for regulating the deployment
                       and use of AI within the EU, creating a standardized process for placing AI systems
                       on the market and putting them into service. This framework drives a harmonized
                       approach across all EU Member States. Serving as a product safety regulation, the
                       Act employs a risk-based classification system, categorizing AI systems according
                       to their use cases and assigning compliance requirements in proportion to the level
                       of risk they pose to users. This includes prohibiting certain AI applications deemed
                       unethical or harmful, as well as imposing stringent requirements on high-risk AI
                       applications to manage potential threats effectively. Additionally, the Act sets out
                       transparency obligations for AI systems across risk categories, ensuring that the
                       regulation remains adaptable to future developments in AI technology.

                       Given the widespread adoption of general-purpose AI technologies, the Act
                       distinguishes between single-purpose AI, designed for specific tasks, and general-
                       purpose AI, which can perform a wide range of functions. Regardless of the risk
                       associated with specific use cases, the AI Act lays down comprehensive rules
                       governing the market entry, oversight, and enforcement of general-purpose AI models,
                       in order to foster public trust and safeguard the integrity of AI innovations.

                       To support the implementation of the AI Act, a new governance structure has been
                       established at both the EU and Member State levels. At the EU level, the European
                       Commission created the European AI Office in February 2024 to oversee the Act's
                       implementation. The AI Office will be responsible for enforcing obligations related
                       to general-purpose AI models. This includes developing tools, methodologies, and
                       benchmarks in collaboration with academia and industry to evaluate these models
                       and identify those with systemic risks. The AI Office will also be responsible for
                       developing guidelines on the application of the EU AI Act and for preparing delegated
                       and implementing acts that apply to all providers and deployers in scope, for example
                       by defining criteria for high-risk AI systems and overseeing conformity assessments.





