1.2 Agent Governance Frameworks


New governance structures are emerging to ensure AI agents operate safely and within regulatory boundaries.

Governance gap. Large-scale deployment of autonomous agents could unlock productivity gains but also introduce systemic risks, from labour disruption to cascading security failures. Yet the field of agent governance is still ‘in its infancy,’ with only a handful of researchers working on interventions while investment in building agents accelerates. This mismatch leaves both governments and industry poorly prepared to steer what could quickly become billions of autonomous digital actors. Even basic information about where and how AI agents are deployed, for instance, remains scarce.

Legal-framework adaptation. Legal scholars are repurposing principal-agent theory and common-law agency, which traditionally govern human agents acting on behalf of others, to tackle the problems of information asymmetry, discretionary authority, and loyalty posed by AI systems that now operate with similar delegated authority. Yet classical legal fixes (bonuses, monitoring, and punitive sanctions) assume understandable actions and human-paced decision-making.⁶ Proposals now range from visibility-based safe-harbour regimes that reduce liability for deployers who log and disclose agent activity⁷ to mandates for “law-following” models, yet courts still struggle with foreseeability and deterrence when an opaque agent acts contrary to its designer’s intent at machine speed.⁸

Governance principles. Three pillars of governance are important to consider when it comes to AI agents: visibility, liability, and inclusivity. Visibility means making what agents do, and what trained them, legible; liability allocates responsibility and redress; inclusivity gives all affected communities a meaningful say. Technically, visibility can be delivered through agent identifiers, real-time monitoring, and tamper-evident activity logs (sketched below), giving regulators and civil society a live audit trail without freezing innovation.⁹ Legally, liability could be calibrated by tying safe-harbour protection or reduced fines to compliance with these visibility standards, mirroring precedents in cybersecurity and health-data law. Such dual infrastructures aim to give innovators room to experiment while ensuring harms are traceable and compensable. Inclusivity means that the same rails that make agents visible and liable must also give voice and leverage to the people and regions they will affect, including workers, civil-society groups and governments in the Global South, so that they can shape what data are logged, who can query them and when agents may operate. Scholars point to participatory data trusts¹⁰ and ‘democratising AI’ oversight boards¹¹ as concrete mechanisms for achieving this shared control, so that agent governance does not become the sole domain of a few cloud hubs in the Global North but instead reflects a variety of social interests and contexts.
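
To make the visibility pillar concrete, the sketch below shows one way a tamper-evident activity log could be built: each record carries the agent’s identifier and a SHA-256 hash that commits to the previous record, so any retroactive edit breaks the chain on verification. This is a minimal illustrative design in Python, assuming a simple hash-chain scheme; the AgentLog class, its field names and the example actions are hypothetical, not drawn from the proposals cited above.

    # Minimal sketch of a tamper-evident agent activity log (hypothetical
    # AgentLog class; a hash chain links each record to its predecessor).
    import hashlib
    import json
    import time

    class AgentLog:
        def __init__(self, agent_id: str):
            self.agent_id = agent_id       # unique agent identifier
            self.records = []
            self._last_hash = "0" * 64     # genesis value for the chain

        def append(self, action: str, detail: dict) -> dict:
            record = {
                "agent_id": self.agent_id,
                "timestamp": time.time(),
                "action": action,
                "detail": detail,
                "prev_hash": self._last_hash,
            }
            # Hash the canonical JSON form so verification is reproducible.
            payload = json.dumps(record, sort_keys=True).encode()
            record["hash"] = hashlib.sha256(payload).hexdigest()
            self._last_hash = record["hash"]
            self.records.append(record)
            return record

        def verify(self) -> bool:
            # Recompute the whole chain; any edited record breaks a link.
            prev = "0" * 64
            for record in self.records:
                if record["prev_hash"] != prev:
                    return False
                body = {k: v for k, v in record.items() if k != "hash"}
                payload = json.dumps(body, sort_keys=True).encode()
                if hashlib.sha256(payload).hexdigest() != record["hash"]:
                    return False
                prev = record["hash"]
            return True

    log = AgentLog(agent_id="agent-7f3a")
    log.append("tool_call", {"tool": "web_search", "query": "supplier quotes"})
    log.append("transaction", {"amount_usd": 120.0, "counterparty": "acme"})
    assert log.verify()              # chain is intact
    log.records[0]["action"] = "x"   # simulate retroactive tampering
    assert not log.verify()          # verification now fails

In practice, records would stream to an external append-only store, so the operator cannot rewrite its own history, and would carry cryptographic signatures; the in-memory list here only illustrates the chaining and verification logic that agent identifiers and tamper-evident logs rely on.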


6   Kraprayoon, J. (2025, April 17). AI Agent Governance: A Field Guide. Institute for AI Policy and Strategy.
7   Chan, A., Ezell, C., Kaufmann, M., Wei, K., Hammond, L., Bradley, H., Bluemke, E., Rajkumar, N., Krueger, D., Kolt, N., Heim, L., & Anderljung, M. (2024). Visibility into AI Agents. 2024 ACM Conference on Fairness, Accountability, and Transparency, 958–973.
8   O’Keefe, C., Ramakrishnan, K., Tay, J., & Winter, C. (2025, May 2). Law-Following AI: Designing AI Agents to Obey Human Laws. 94 Fordham L. Rev. (forthcoming 2025).
9   Chan, A., Ezell, C., Kaufmann, M., Wei, K., Hammond, L., Bradley, H., Bluemke, E., Rajkumar, N., Krueger, D., Kolt, N., Heim, L., & Anderljung, M. (2024). Visibility into AI Agents. 2024 ACM Conference on Fairness, Accountability, and Transparency, 958–973.
10  Delacroix, S., & Lawrence, N. D. (2019). Bottom-up data trusts: Disturbing the ‘one size fits all’ approach to data governance. International Data Privacy Law.
11  Delacroix, S., Pineau, J., & Montgomery, J. (2021). Democratising the digital revolution: The role of data governance. In Lecture Notes in Computer Science (pp. 40–52).


