zennit.rules
Rules based on Hooks
Classes
AlphaBeta | LRP AlphaBeta rule [Bach et al., 2015].
Epsilon | LRP Epsilon rule [Bach et al., 2015].
Flat | LRP Flat rule [Lapuschkin et al., 2019].
Gamma | LRP Gamma rule [Montavon et al., 2019].
Norm | Normalize and weight by input contribution.
Pass | Unmodified pass-through rule.
ReLUDeconvNet | DeconvNet ReLU rule [Zeiler and Fergus, 2014].
ReLUGuidedBackprop | GuidedBackprop ReLU rule [Springenberg et al., 2015].
WSquare | LRP WSquare rule [Montavon et al., 2017].
ZBox | LRP ZBox rule [Montavon et al., 2017].
ZPlus | LRP ZPlus rule [Bach et al., 2015, Montavon et al., 2017].
- class zennit.rules.AlphaBeta(alpha=2.0, beta=1.0)[source]
Bases: BasicHook
LRP AlphaBeta rule [Bach et al., 2015]. The AlphaBeta rule weights positive (alpha) and negative (beta) contributions. Most common parameters are (alpha=1, beta=0) and (alpha=2, beta=1). It is most commonly used for lower layers [Montavon et al., 2019].
- Parameters
alpha (float, optional) – Multiplier for the positive output term.
beta (float, optional) – Multiplier for the negative output term.
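Rules are hooks and can be attached directly to a single layer. A minimal sketch, assuming an arbitrary convolutional layer and input (shapes are placeholders), where the modified gradient with the hooks attached is the relevance:

import torch
from torch.nn import Conv2d
from zennit.rules import AlphaBeta

# an arbitrary convolutional layer and input, purely for illustration
conv = Conv2d(3, 8, 3, padding=1)
input = torch.randn(1, 3, 8, 8, requires_grad=True)

# registering the rule attaches its hooks to the layer
rule = AlphaBeta(alpha=1.0, beta=0.0)
handles = rule.register(conv)

# with the hooks attached, the gradient computes the relevance
output = conv(input)
relevance, = torch.autograd.grad(output, input, torch.ones_like(output))

# detach the hooks again
handles.remove()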
- class zennit.rules.Epsilon(epsilon=1e-06)[source]
Bases: BasicHook
LRP Epsilon rule [Bach et al., 2015]. Setting (epsilon=0) produces the LRP-0 rule [Bach et al., 2015]. LRP Epsilon is most commonly used in middle layers, while LRP-0 is most commonly used in upper layers [Montavon et al., 2019]. Higher values of epsilon are sometimes used deliberately, so epsilon is not always only a stabilizer value.
- Parameters
epsilon (float, optional) – Stabilization parameter.
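In full models, rules are usually assigned through a composite rather than registered by hand. A short sketch, assuming a toy model, which uses LayerMapComposite to apply Epsilon to every linear layer:

import torch
from torch import nn
from zennit.composites import LayerMapComposite
from zennit.rules import Epsilon

# a toy model and input, purely for illustration
model = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 4))
input = torch.randn(1, 16, requires_grad=True)

# map the Epsilon rule to all Linear layers
composite = LayerMapComposite(layer_map=[(nn.Linear, Epsilon(epsilon=1e-6))])

# inside the context, the hooks are registered on all matching layers
with composite.context(model) as modified_model:
    output = modified_model(input)
    relevance, = torch.autograd.grad(output, input, torch.ones_like(output))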
- class zennit.rules.Flat[source]
Bases: BasicHook
LRP Flat rule [Lapuschkin et al., 2019]. It is essentially the same as the LRP WSquare rule, but with all parameters set to ones.
- class zennit.rules.Gamma(gamma=0.25)[source]
Bases: BasicHook
LRP Gamma rule [Montavon et al., 2019].
- Parameters
gamma (float, optional) – Multiplier for added positive weights.
- class zennit.rules.Norm[source]
Bases: BasicHook
Normalize and weight by input contribution. This is essentially the same as the LRP Epsilon rule [Bach et al., 2015] with a fixed epsilon used only as a stabilizer, and without requiring the attached layer to have weight and bias parameters.
- class zennit.rules.Pass[source]
Bases: Hook
Unmodified pass-through rule. If a layer is element-wise and its relevance should neither be computed by any other rule nor be the gradient, the Pass rule simply passes the upper layer's relevance through to the lower layer unchanged, as shown in the sketch below.
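A small sketch of the pass-through behavior on a single ReLU (values are arbitrary); with Pass registered, the upstream relevance is forwarded unchanged instead of being masked by the ReLU's gradient:

import torch
from torch.nn import ReLU
from zennit.rules import Pass

relu = ReLU()
handles = Pass().register(relu)

input = torch.randn(1, 4, requires_grad=True)
output = relu(input)
# the upstream relevance of ones is passed through unchanged,
# even at positions where the ReLU was inactive
relevance, = torch.autograd.grad(output, input, torch.ones_like(output))
handles.remove()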
- class zennit.rules.ReLUDeconvNet[source]
Bases: Hook
DeconvNet ReLU rule [Zeiler and Fergus, 2014].
- backward(module, grad_input, grad_output)[source]
Modify ReLU gradient according to DeconvNet [Zeiler and Fergus, 2014].
- class zennit.rules.ReLUGuidedBackprop[source]
Bases: Hook
GuidedBackprop ReLU rule [Springenberg et al., 2015].
- backward(module, grad_input, grad_output)[source]
Modify ReLU gradient according to GuidedBackprop [Springenberg et al., 2015].
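A sketch comparing both ReLU rules on a single layer with a mixed-sign upstream gradient (values are arbitrary), which is where they differ from the plain gradient:

import torch
from torch.nn import ReLU
from zennit.rules import ReLUDeconvNet, ReLUGuidedBackprop

relu = ReLU()
input = torch.randn(1, 8, requires_grad=True)
# a mixed-sign upstream gradient
upstream = torch.randn(1, 8)

for rule in (ReLUDeconvNet(), ReLUGuidedBackprop()):
    handles = rule.register(relu)
    output = relu(input)
    # DeconvNet keeps positive upstream gradients regardless of the
    # ReLU's input mask; GuidedBackprop additionally zeroes entries
    # where the ReLU was inactive in the forward pass
    grad, = torch.autograd.grad(output, input, upstream)
    handles.remove()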
- class zennit.rules.WSquare[source]
Bases: BasicHook
LRP WSquare rule [Montavon et al., 2017]. It is most commonly used in the first layer when the values are not bounded [Montavon et al., 2019].
- class zennit.rules.ZBox(low, high)[source]
Bases: BasicHook
LRP ZBox rule [Montavon et al., 2017]. The ZBox rule is intended for “boxed” input pixel space. Generally, the lowest and highest possible values are used, i.e. (low=0., high=1.) for raw image data in the float data type. Neural network inputs are often normalized to match an isotropic Gaussian distribution with mean 0 and variance 1, in which case the lowest and highest values also need to be adapted. For image data, this generally happens per channel, in which case low and high can be passed as tensors with shape (1, 3, 1, 1), which will be broadcast as expected.
- Parameters
low (torch.Tensor or float) – Lowest pixel values of input. Subject to broadcasting.
high (torch.Tensor or float) – Highest pixel values of input. Subject to broadcasting.
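A sketch of constructing per-channel bounds for normalized RGB data; the mean and std below are the common ImageNet statistics, used here only as placeholder values:

import torch
from zennit.rules import ZBox

# per-channel normalization statistics (placeholder values)
mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

# raw pixels in [0., 1.] map to these bounds after normalization
low = (0. - mean) / std
high = (1. - mean) / std

rule = ZBox(low=low, high=high)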
- class zennit.rules.ZPlus[source]
Bases: BasicHook
LRP ZPlus rule [Bach et al., 2015, Montavon et al., 2017]. It is the same as using AlphaBeta with (alpha=1, beta=0).
Notes
Note that the original Deep Taylor Decomposition (DTD) specification of the ZPlus rule [Montavon et al., 2017] only considers positive inputs, as they are used in ReLU networks. This implementation is effectively (alpha=1, beta=0), where negative inputs are allowed.
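A small sketch checking the stated equivalence against AlphaBeta on a single linear layer (shapes and seed are arbitrary):

import torch
from torch.nn import Linear
from zennit.rules import AlphaBeta, ZPlus

torch.manual_seed(0)
linear = Linear(8, 4)
input = torch.randn(1, 8)

relevances = []
for rule in (ZPlus(), AlphaBeta(alpha=1.0, beta=0.0)):
    inp = input.clone().requires_grad_()
    handles = rule.register(linear)
    output = linear(inp)
    relevance, = torch.autograd.grad(output, inp, torch.ones_like(output))
    handles.remove()
    relevances.append(relevance)

# per the equivalence above, both results should (approximately) agree
assert torch.allclose(relevances[0], relevances[1], atol=1e-6)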