AI Governance Day - From Principles to Implementation
Figure 43: Stuart Russell, Professor of Computer Science at the University of California, Berkeley
"With aircrafts, there has to be an airworthiness certificate before the airplane can be
sold. With medicines, another area that is now safe bud did not use to be safe, the
medicine has to go through extensive clinical trials before it is allowed to be sold.”
(Stuart Russell)
He pointed out the challenges of applying similar safety standards to AI, particularly given the
opaque nature of deep learning and transformer models, which are often described as "black boxes."
Mr. Russell also warned about the consequences of insufficient safety measures, citing the
Chernobyl disaster as a stark parallel to the potential risks posed by AI.
"Despite all that effort, we had Chernobyl, and Chernobyl ended up wiping out the
global nuclear industry." (Stuart Russell)
Lane Dilg on balancing innovation and safety
Ms. Lane Dilg of OpenAI discussed the organization's approach to balancing innovation with
safety. She emphasized that safety and innovation are inextricably linked and that OpenAI is
committed to both.