zennit.rules
Rules based on Hooks
Classes
| Class | Description |
|---|---|
| AlphaBeta | AlphaBeta LRP rule. |
| Epsilon | Epsilon LRP rule. |
| Flat | Flat LRP rule. |
| Gamma | Gamma LRP rule. |
| Norm | Normalize and weigh relevance by input contribution. |
| Pass | Passes upper layer relevance through to the lower layer unchanged. |
| ReLUDeconvNet | Hook to modify the ReLU gradient according to DeconvNet. |
| ReLUGuidedBackprop | Hook to modify the ReLU gradient according to GuidedBackprop. |
| WSquare | WSquare LRP rule. |
| ZBox | ZBox LRP rule for input pixel space. |
| ZPlus | ZPlus (or alpha=1, beta=0) LRP rule. |
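In typical use, these rules are not attached by hand but mapped to layer types through a composite. The following is a minimal sketch of that pattern using zennit's `LayerMapComposite`; the model, input, and chosen attribution target are illustrative.

```python
import torch
from zennit.composites import LayerMapComposite
from zennit.rules import Epsilon, Pass

# illustrative model; any torch.nn.Module works
model = torch.nn.Sequential(
    torch.nn.Linear(16, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 4),
)

# map layer types to rules; unmatched layers keep their plain gradient
composite = LayerMapComposite(layer_map=[
    (torch.nn.Linear, Epsilon(epsilon=1e-6)),
    (torch.nn.ReLU, Pass()),
])

input = torch.randn(1, 16, requires_grad=True)

# inside the context, the rules' hooks are registered on the model
with composite.context(model) as modified_model:
    output = modified_model(input)
    # relevance is computed as a modified gradient; here the first output
    # is selected as the attribution target (illustrative choice)
    relevance, = torch.autograd.grad(output, input, grad_outputs=torch.eye(4)[[0]])
```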
- class zennit.rules.AlphaBeta(alpha=2.0, beta=1.0)[source]
Bases: BasicHook
AlphaBeta LRP rule.
- Parameters
alpha (float, optional) – Multiplier for the positive output term.
beta (float, optional) – Multiplier for the negative output term.
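A rule can also be applied to a single layer on its own. The sketch below assumes the register/remove interface of zennit's hooks and uses an illustrative convolution layer.

```python
import torch
from zennit.rules import AlphaBeta

conv = torch.nn.Conv2d(3, 8, 3, padding=1)
rule = AlphaBeta(alpha=2.0, beta=1.0)

# register the rule's forward/backward hooks on this single layer
handles = rule.register(conv)

input = torch.randn(1, 3, 32, 32, requires_grad=True)
output = conv(input)

# backpropagating through the hooked layer yields AlphaBeta relevance
relevance, = torch.autograd.grad(output, input, grad_outputs=output)

# detach the rule again
handles.remove()
```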
- class zennit.rules.Epsilon(epsilon=1e-06)[source]
Bases: BasicHook
Epsilon LRP rule.
- Parameters
epsilon (float, optional) – Stabilization parameter.
- class zennit.rules.Flat[source]
Bases: BasicHook
This is the Flat LRP rule. It is essentially the same as the WSquare Rule, but with all parameters set to ones.
- class zennit.rules.Gamma(gamma=0.25)[source]
Bases: BasicHook
Gamma LRP rule.
- Parameters
gamma (float, optional) – Multiplier for added positive weights.
- class zennit.rules.Norm[source]
Bases: BasicHook
Normalize and weigh relevance by input contribution. This is essentially the same as the LRP Epsilon rule with a fixed epsilon used only as a stabilizer, and without requiring the attached layer to have weight and bias parameters.
- class zennit.rules.Pass[source]
Bases: Hook
For an element-wise layer whose rule should be neither any other rule nor the plain gradient, the Pass rule simply passes the upper layer's relevance through to the lower layer unchanged.
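In a layer map, Pass is typically assigned to element-wise activations, while Norm (above) handles parameter-free pooling layers; a hedged sketch of such a mapping, with the layer types chosen for illustration:

```python
import torch
from zennit.composites import LayerMapComposite
from zennit.rules import Gamma, Norm, Pass

# activations pass relevance through unchanged, pooling is normalized by
# input contribution, convolutions use the Gamma rule (illustrative choices)
composite = LayerMapComposite(layer_map=[
    (torch.nn.ReLU, Pass()),
    (torch.nn.Tanh, Pass()),
    (torch.nn.AvgPool2d, Norm()),
    (torch.nn.Conv2d, Gamma(gamma=0.25)),
])
```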
- class zennit.rules.ReLUDeconvNet[source]
Bases: Hook
Hook to modify ReLU gradient according to DeconvNet.
- class zennit.rules.ReLUGuidedBackprop[source]
Bases: Hook
Hook to modify ReLU gradient according to GuidedBackprop.
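To obtain GuidedBackprop-style gradients, this hook can be mapped to all ReLU modules of a model; a minimal sketch, assuming the same LayerMapComposite pattern as above (zennit's composites module also provides ready-made composites for common cases):

```python
import torch
from zennit.composites import LayerMapComposite
from zennit.rules import ReLUGuidedBackprop

composite = LayerMapComposite(layer_map=[
    (torch.nn.ReLU, ReLUGuidedBackprop()),
])

# illustrative model with ReLU modules (functional relu is not hooked)
model = torch.nn.Sequential(
    torch.nn.Linear(16, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 4),
)
input = torch.randn(1, 16, requires_grad=True)

with composite.context(model) as modified_model:
    output = modified_model(input)
    # the gradient is now masked at the ReLUs as in GuidedBackprop
    guided_grad, = torch.autograd.grad(output, input, grad_outputs=torch.ones_like(output))
```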
- class zennit.rules.ZBox(low, high)[source]
Bases: BasicHook
ZBox LRP rule for input pixel space.
- Parameters
low (torch.Tensor) – Lowest pixel values of the input.
high (torch.Tensor) – Highest pixel values of the input.
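low and high describe the bounds of the (possibly normalized) input space and should be broadcastable against the input. A sketch for inputs normalized from the pixel range [0, 1], where the mean and standard deviation values are illustrative:

```python
import torch
from zennit.rules import ZBox

# illustrative per-channel normalization statistics (not part of zennit)
mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

# bounds of the normalized pixel space, for raw pixels in [0, 1]
low = (torch.zeros(1, 3, 1, 1) - mean) / std
high = (torch.ones(1, 3, 1, 1) - mean) / std

# ZBox is typically assigned to the first (input) layer only
first_layer_rule = ZBox(low=low, high=high)
```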
- class zennit.rules.ZPlus[source]
Bases: BasicHook
ZPlus (or alpha=1, beta=0) LRP rule.
Notes
Note that the original Deep Taylor Decomposition (DTD) specification of the ZPlus rule (https://doi.org/10.1016/j.patcog.2016.11.008) only considers positive inputs, as they occur in ReLU networks. This implementation is effectively alpha=1, beta=0, where negative inputs are also allowed.
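A common configuration applies ZBox to the first layer and ZPlus to the remaining convolutions. The sketch below assumes zennit's SpecialFirstLayerMapComposite and Gradient attributor; the model, bounds, and attribution target are illustrative.

```python
import torch
from zennit.attribution import Gradient
from zennit.composites import SpecialFirstLayerMapComposite
from zennit.rules import Epsilon, Pass, ZBox, ZPlus

# illustrative convolutional model and input
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(8, 4, 3, padding=1),
    torch.nn.Flatten(),
    torch.nn.Linear(4 * 8 * 8, 10),
)
input = torch.randn(1, 3, 8, 8)

composite = SpecialFirstLayerMapComposite(
    layer_map=[
        (torch.nn.ReLU, Pass()),
        (torch.nn.Conv2d, ZPlus()),
        (torch.nn.Linear, Epsilon()),
    ],
    # only the first convolution gets the ZBox rule, here with [0, 1] bounds
    first_map=[
        (torch.nn.Conv2d, ZBox(low=torch.zeros_like(input), high=torch.ones_like(input))),
    ],
)

# attribute output class 3 (illustrative target) back to the input
with Gradient(model=model, composite=composite) as attributor:
    output, relevance = attributor(input, torch.eye(10)[[3]])
```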