idalab talks: “Explaining Deep Neural Networks Classification Decisions”

As part of our regular idalab talks, we will host Grégoire Montavon, who will discuss how deep neural network classification decisions can be explained. The talk is scheduled for March 18, 2:30 pm.

Deep neural networks have become state-of-the-art on highly complex machine learning problems such as image classification. However, they have so far been difficult to interpret. Various methods have been proposed to visualize how a neural network represents a certain concept, such as an image class; other methods explain which pixels of an image are responsible for a particular classification decision.

This talk focuses on the decomposition of a neural network prediction in terms of input variables. Such a decomposition can be obtained by viewing the neural network output as a quantity that needs to be redistributed onto the input variables by backward propagation. More precisely, a local propagation rule is defined for each neuron in the network, and a backward pass is performed on the whole network by applying the local rules iteratively. The decomposition method results in a “heatmap” that indicates which input variables (e.g. pixels) contribute most to the neural network prediction.
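
To make the backward pass concrete, here is a minimal Python/NumPy sketch of one possible local propagation rule for a fully-connected layer, the epsilon-stabilized rule from the layer-wise relevance propagation paper listed below. The function names, the network representation and the stabilizer value are illustrative assumptions, not code from the talk.

    import numpy as np

    def lrp_epsilon(a, W, b, R_out, eps=1e-6):
        # Redistribute the relevance R_out of a fully-connected layer's outputs
        # onto its inputs a, proportionally to each input's contribution.
        z = a @ W + b                              # pre-activations of the layer
        z = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabilize small denominators
        s = R_out / z                              # relevance per unit of pre-activation
        return a * (W @ s)                         # relevance of each input variable

    def explain(activations, layers, R_output):
        # Backward pass: apply the local rule iteratively, from the output layer
        # down to the input. activations[l] is the stored input to layer l,
        # layers is a list of (W, b) pairs, and R_output is the network output
        # for the class of interest, taken as the quantity to redistribute.
        R = R_output
        for a, (W, b) in zip(reversed(activations), reversed(layers)):
            R = lrp_epsilon(a, W, b, R)
        return R  # one relevance score per input variable, e.g. a pixel heatmap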

It will then be shown how certain propagation rules can be derived analytically by performing a Taylor expansion of the neuron function. This view provides guidance on which propagation rules to choose at specific layers of a deep network. The propagation rules can also be validated empirically by “pixel-flipping”, a method that checks whether the prediction performance drops quickly when the most contributing pixels are removed from the image.
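
As an illustration of this empirical test, the following minimal pixel-flipping sketch assumes a flattened image, a relevance heatmap of the same shape, and a predict function that returns the classifier score for the explained class; the flip value and step count are arbitrary choices for the sake of the example.

    import numpy as np

    def pixel_flipping(x, heatmap, predict, steps=100, flip_value=0.0):
        # Remove the most relevant pixels first and record how the class score
        # evolves; a steep drop indicates that the heatmap found the pixels the
        # classifier actually relies on.
        order = np.argsort(-heatmap)      # pixel indices, most relevant first
        x_flipped = x.copy()
        scores = [predict(x_flipped)]     # score before any pixel is removed
        for i in order[:steps]:
            x_flipped[i] = flip_value     # "flip" the next most relevant pixel
            scores.append(predict(x_flipped))
        return np.array(scores)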


Grégoire Montavon is currently a postdoc at the Institute of Software Engineering and Theoretical Computer Science at TU Berlin and an expert in neural networks and machine learning.

Selected Papers

  • S. Bach, A. Binder, G. Montavon, F. Klauschen, K.-R. Müller, W. Samek. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation. PLOS ONE, 2015.
  • W. Samek, A. Binder, G. Montavon, S. Bach, K.-R. Müller. Evaluating the Visualization of What a Deep Neural Network Has Learned. arXiv (preprint), 2015.
  • G. Montavon, S. Bach, A. Binder, W. Samek, K.-R. Müller. Explaining Nonlinear Classification Decisions with Deep Taylor Decomposition.

About idalab talks

We frequently invite leading scholars, data scientists, business experts and big data thought leaders to discuss their work, gain new perspectives and generate fresh insights. idalab talks are hosted on an irregular basis and are open to friends and family. Information about upcoming talks will always be posted on this blog. If you would like to attend, feel free to send us an email.

Contact the author
Serena Rota
+49 (30) 814 513-15
