...only the evidence for the actual class.</p>
R = [None]*L + [A[L]*(T[:,None]==numpy.arange(10))]
<p>
The LRP-0, LRP-ϵ, and LRP-γ rules described in the <a
href="https://link.springer.com/chapter/10.1007/978-3-030-28954-6_10">LRP
overview paper</a> (Section 10.2.1) for propagating relevance on the lower
layers are special cases of the more general propagation rule
</p>
<img src="http://latex.codecogs.com/svg.latex?R_j = \sum_k \frac{a_j \rho(w_{jk})}{\epsilon + \sum_{0,j} a_j \rho(w_{jk})} R_k">
<p>
(cf. Section 10.2.2), where ρ is a function that transforms the weights, and ϵ
is a small positive increment. We define below two helper functions that perform
the weight transformation and the incrementation. In practice, we would like to
apply different rules at different layers (cf. Section 10.3). Therefore, we also
give the layer index "<code>l</code>" as argument to these functions.
</p>
def rho(w,l):  return w + [None,0.1,0.0,0.0][l] * numpy.maximum(0,w)        # γ-transformation of the weights
def incr(z,l): return z + [None,0.0,0.1,0.0][l] * (z**2).mean()**.5 + 1e-9  # ϵ-incrementation of the denominator
<p>
In particular, these functions and the layer they receive as a parameter let
us reduce the general rule to LRP-0 for the top layer, to LRP-ϵ with ϵ = 0.1 std
for the layer just below, and to LRP-γ with γ = 0.1 for the layer before. We now
come to the practical implementation of this general rule. It can be decomposed
as a sequence of four computations:
</p>
<p>
<img src="http://latex.codecogs.com/svg.latex?
...layers, and at each layer, applying this sequence of computations.</p>
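<p>
For concreteness, the backward iteration over layers can be sketched as
follows (a minimal sketch, assuming the weight and bias arrays
<code>W</code>, <code>B</code>, the activations <code>A</code>, the relevance
list <code>R</code>, and the helpers <code>rho</code> and <code>incr</code>
defined above; not necessarily the tutorial's exact code):
</p>
for l in range(1,L)[::-1]:
    w = rho(W[l],l)              # transform the weights for this layer
    b = rho(B[l],l)
    z = incr(A[l].dot(w)+b,l)    # step 1: forward pass (with stabilizer)
    s = R[l+1] / z               # step 2: element-wise division
    c = s.dot(w.T)               # step 3: backward pass
    R[l] = A[l]*c                # step 4: element-wise product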
<p>
Note that the loop above stops one layer before reaching the pixels. To
propagate relevance scores until the pixels, we need to apply an alternate
propagation rule that properly handles pixel values received as input (cf.
Section 10.3.2). In particular, we apply for this layer the
z<sup>B</sup>-rule given by:
</p>
<img src="http://latex.codecogs.com/svg.latex?R_i = \sum_j \frac{a_i w_{ij} - l_i w_{ij}^+ - h_i w_{ij}^-}{\sum_{i} a_i w_{ij} - l_i w_{ij}^+ - h_i w_{ij}^-} R_j">
<p>
In this rule, <i>l<sub>i</sub></i> and <i>h<sub>i</sub></i> are the lower and
upper bounds of pixel values, i.e. "−1" and "+1", and (·)<sup>+</sup> and
(·)<sup>−</sup> are shortcut notations for max(0,·) and min(0,·). The
z<sup>B</sup>-rule can again be implemented with a four-step procedure similar
to the one used in the layers above. Here, we need to create two copies of the
weights, and also create arrays of pixel values set to <i>l<sub>i</sub></i>
and <i>h<sub>i</sub></i> respectively:
</p>
w = W[0]
...
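<p>
The elided code then carries out the four steps of the z<sup>B</sup>-rule. A
minimal sketch of how this could look (assuming the numpy arrays
<code>A</code>, <code>R</code>, <code>W</code> and the pixel bounds −1 and +1
from above; not necessarily the tutorial's exact code):
</p>
wp = numpy.maximum(0,w)            # positive part of the weights
wm = numpy.minimum(0,w)            # negative part of the weights
lb = A[0]*0 - 1                    # array of lower-bound pixel values
hb = A[0]*0 + 1                    # array of upper-bound pixel values
z = A[0].dot(w) - lb.dot(wp) - hb.dot(wm) + 1e-9    # step 1
s = R[1] / z                                        # step 2
c,cp,cm = s.dot(w.T), s.dot(wp.T), s.dot(wm.T)      # step 3
R[0] = A[0]*c - lb*cp - hb*cm                       # step 4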
...
<img src="http://latex.codecogs.com/svg.latex?c_j = \big[\nabla~\big({\textstyle \sum_k}~z_k(\boldsymbol{a}) \cdot s_k\big)\big]_j">
<p>
where <i>s<sub>k</sub></i> is treated as constant.
</p>
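<p>
In a framework with automatic differentiation, this identity lets step 3 be
computed with a single gradient call. A minimal PyTorch sketch (assuming
<code>A[l]</code> is a tensor with gradients enabled and <code>z</code> was
computed from it in step 1; variable names are illustrative):
</p>
s = (R[l+1]/z).data                # step 2: element-wise division; .data detaches s so it is treated as constant
(z*s).sum().backward()             # step 3: gradient of ∑ z_k·s_k with respect to the activations
c = A[l].grad
R[l] = (A[l]*c).data               # step 4: element-wise product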
<p><b>
Pooling layers:
</b>
It is suggested in Section 10.3.2 of the paper to
treat max-pooling layers as average pooling layers in the backward pass.
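<p>
One simple way to achieve this in PyTorch (a sketch, assuming the network's
layers are collected in a list <code>layers</code> and pooling windows of
size 2, as in VGG-16):
</p>
import torch
for i,layer in enumerate(layers):
    if isinstance(layer, torch.nn.MaxPool2d):
        layers[i] = torch.nn.AvgPool2d(2)   # same window, but averaging in the backward pass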
...dimensional maps are shown for a selection of VGG-16 layers.
<p>
We observe that the explanation becomes increasingly resolved spatially. Note
that, like for the MNIST example, we have stopped the propagation procedure one
layer before the pixels because the rule we have used is not applicable to pixel
layers. Like for the MNIST case, we need to apply the pixel-specific
z<sup>B</sup>-rule for this last layer. This rule can again be implemented in
terms of forward passes and gradient computations.
</p>
A[0] = (A[0].data).requires_grad_(True)
...
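<p>
The elided code applies the z<sup>B</sup>-rule to the first layer using
forward passes and gradient computations. A minimal sketch of the idea
(hypothetical names: <code>layers[0]</code> for the first layer,
<code>layer_pos</code>/<code>layer_neg</code> for copies of it with weights
clamped to be positive/negative, and <code>lb</code>/<code>hb</code> for
gradient-enabled tensors holding the lower/upper pixel bounds):
</p>
z  = layers[0].forward(A[0]) + 1e-9   # step 1: forward pass through the first layer
z -= layer_pos.forward(lb)            #   subtract the bound contributions through the
z -= layer_neg.forward(hb)            #   clamped-weight copies (assumed helpers)
s = (R[1]/z).data                     # step 2: element-wise division, detached
(z*s).sum().backward()                # step 3: gradients w.r.t. A[0], lb and hb
c, cp, cm = A[0].grad, lb.grad, hb.grad
R[0] = (A[0]*c + lb*cp + hb*cm).data  # step 4: element-wise products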