Page 76 - AI Governance Day - From Principles to Implementation
Figure 47: Lane Dilg, Head of Strategic Partnerships at OpenAI
"That is a standard that we have integrated in our image generation capabilities and
that we also have committed to integrating into our video generation capabilities
before deployment." (Lane Dilg)
Dilg also discussed OpenAI's response to cyber risks, highlighting its publication of six
critical measures for AI security. She additionally mentioned the establishment of a Safety
and Security Committee within OpenAI to oversee safety measures and ensure accountability.
Rumman Chowdhury on effective regulation
Ms. Rumman Chowdhury addressed the effectiveness of current regulations in mitigating
AI risks. She acknowledged the challenges of evaluating AI models, given their probabilistic
nature, and called for more robust benchmarks and evaluation methods.
She also highlighted the role of bias bounty programs and red teaming in identifying and
mitigating risks, underscoring the importance of independent scrutiny.
"Red teaming is the practice of bringing in external individuals to stress test the
negative capabilities of AI models. Again, it's an inexact science. How many people
should be red teaming? How do you know you're done red teaming? Figuring some
of these things out will only happen as we perform more of these tests." (Rumman
Chowdhury)