



Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models


With the availability of large databases and recent improvements in deep learning methodology, the performance of AI systems is reaching, or even exceeding, the human level on an increasing number of complex tasks. Impressive examples of this development can be found in domains such as image classification, sentiment analysis, speech understanding or strategic game playing. However, because of their nested non-linear structure, these highly successful machine learning and artificial intelligence models are usually applied in a black-box manner, i.e., no information is provided about what exactly makes them arrive at their predictions. Since this lack of transparency can be a major drawback, e.g., in medical applications, the development of methods for visualizing, explaining and interpreting deep learning models has recently attracted increasing attention. This paper summarizes recent developments in this field and makes a plea for more interpretability in artificial intelligence. Furthermore, it presents two approaches to explaining predictions of deep learning models: one method computes the sensitivity of the prediction with respect to changes in the input, and the other meaningfully decomposes the decision in terms of the input variables. These methods are evaluated on three classification tasks.
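The two explanation approaches mentioned above can be sketched in a few lines. In the spirit of sensitivity analysis, the relevance of an input is the squared partial derivative of the prediction with respect to that input; in the spirit of layer-wise relevance propagation, the prediction is redistributed to the inputs in proportion to their contributions. The toy one-layer network, random weights, and the basic stabilized z-rule below are illustrative assumptions for this sketch, not the paper's implementation:

```python
import numpy as np

# Toy one-layer "network": f(x) = sum of ReLU(W @ x).
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))

def f(x):
    return np.maximum(W @ x, 0.0).sum()

x = np.array([1.0, -0.5, 2.0])

# Sensitivity analysis: relevance of input i is the squared partial
# derivative of the prediction w.r.t. x_i (estimated here with
# central finite differences for self-containedness).
def sensitivity(f, x, eps=1e-5):
    grad = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        grad[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return grad ** 2

R_sens = sensitivity(f, x)

# Layer-wise relevance propagation, basic z-rule for one linear layer:
# each input receives relevance in proportion to its contribution
# x_i * W[j, i] to the pre-activation z_j of each output unit.
def lrp_linear(W, x, R_out, eps=1e-9):
    z = W @ x                           # pre-activations z_j
    s = R_out / (z + eps * np.sign(z))  # stabilized relevance ratio
    return x * (W.T @ s)                # relevance of each input x_i

a = np.maximum(W @ x, 0.0)  # relevance at the output = activation terms
R_lrp = lrp_linear(W, x, a)

# The z-rule is (approximately) conservative: the input relevances
# sum to the prediction f(x).
print(np.allclose(R_lrp.sum(), f(x), rtol=1e-4))
```

Note the qualitative difference the paper builds on: the sensitivity scores explain what change would make the prediction move, while the decomposition explains what made the prediction take its current value.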


Keywords: Artificial intelligence, black-box models, deep neural networks, interpretability, layer-wise relevance propagation, sensitivity analysis


Wojciech Samek
(Fraunhofer Heinrich Hertz Institute, Germany)

Wojciech Samek is head of the Machine Learning Group at Fraunhofer Heinrich Hertz Institute, Berlin, Germany. He studied computer science at Humboldt University of Berlin, Germany, Heriot-Watt University, UK, and University of Edinburgh, UK, from 2004 to 2010 and received the Dr. rer. nat. degree (summa cum laude) from the Technical University of Berlin, Germany, in 2014. In 2009, he was visiting researcher at NASA Ames Research Center, Mountain View, CA, and, in 2012 and 2013, he had several short-term research stays at ATR International, Kyoto, Japan. Dr. Samek was awarded scholarships from the European Union's Erasmus Mundus programme, the German National Academic Foundation and the DFG Research Training Group GRK 1589/1. He is associated with the Berlin Big Data Center and is a member of the editorial board of Digital Signal Processing. He was a co-chair of the 2017 Workshop on Deep Learning: Theory, Algorithms, and Applications and organizer of workshops on interpretable AI and machine learning at ICANN'16, ACCV'16 and NIPS'17. In 2016, he received the best paper prize at the ICML Workshop on Visualization for Deep Learning. He has authored or co-authored more than 75 peer-reviewed journal and conference papers, predominantly in the areas of deep learning, interpretable artificial intelligence, robust signal processing and computer vision.
Thomas Wiegand
(Technische Universität Berlin and Fraunhofer Heinrich Hertz Institute, Germany)

Thomas Wiegand is a professor in the department of Electrical Engineering and Computer Science at the Technical University of Berlin and is jointly heading the Fraunhofer Heinrich Hertz Institute, Berlin, Germany. He received the Dipl.-Ing. degree in Electrical Engineering from the Technical University of Hamburg-Harburg, Germany, in 1995 and the Dr.-Ing. degree from the University of Erlangen-Nuremberg, Germany, in 2000. As a student, he was a Visiting Researcher at Kobe University, Japan, the University of California at Santa Barbara and Stanford University, USA, where he also returned as a visiting professor. He was a consultant to Skyfire, Inc., Mountain View, CA, and is currently a consultant to Vidyo, Inc., Hackensack, NJ, USA. Since 1995, he has been an active participant in standardization for multimedia with many successful submissions to ITU-T and ISO/IEC. In 2000, he was appointed as the Associated Rapporteur of ITU-T VCEG and, from 2005 to 2009, he was Co-Chair of ISO/IEC MPEG Video. The projects that he co-chaired for the development of the H.264/MPEG-AVC standard have been recognized by an ATAS Primetime Emmy Engineering Award and a pair of NATAS Technology & Engineering Emmy Awards. For his research in video coding and transmission, he received numerous awards including the Vodafone Innovations Award, the EURASIP Group Technical Achievement Award, the Eduard Rhein Technology Award, the Karl Heinz Beckurts Award, the IEEE Masaru Ibuka Technical Field Award, and the IMTC Leadership Award. He received multiple best paper awards for his publications. Thomson Reuters named him in their list of "The World's Most Influential Scientific Minds 2014" as one of the most cited researchers in his field. He is a recipient of the ITU150 Award.
Klaus-Robert Müller
(Technische Universität Berlin, Germany, Korea University, Korea (Rep. of) and Max Planck Institute for Informatics, Germany)

Klaus-Robert Müller studied physics at Karlsruhe Institute of Technology, Karlsruhe, Germany, from 1984 to 1989 and received the Ph.D. degree in computer science from Karlsruhe Institute of Technology in 1992. He has been a Professor of computer science at Berlin Institute of Technology, Berlin, Germany, since 2006. At the same time, he has been the Director of the Bernstein Focus on Neurotechnology Berlin. After completing a postdoctoral position at GMD FIRST in Berlin, he was a Research Fellow at the University of Tokyo from 1994 to 1995. In 1995, he founded the Intelligent Data Analysis group at GMD FIRST (later Fraunhofer FIRST) and directed it until 2008. From 1999 to 2006, he was a Professor at the University of Potsdam. Dr. Müller was awarded the 1999 Olympus Prize by the German Pattern Recognition Society, DAGM, and, in 2006, he received the SEL Alcatel Communication Award. In 2014, he received the Berliner Wissenschaftspreis des regierenden Bürgermeisters and, in 2017, the Vodafone Innovation Award. In 2012, he was elected a member of the German National Academy of Sciences - Leopoldina; in 2017, he became a member of the Berlin Brandenburg Academy of Sciences and was appointed external scientific member of the Max Planck Society. His research interests are machine learning and artificial intelligence and their application in the sciences (neuroscience, brain-computer interfaces, physics and medicine) and industry. Most recently, he has focused on interpretable AI and machine learning.