zennit.rules

Rules based on Hooks

Functions

zero_bias

Add 'bias' to zero_params, where zero_params is a string or a list of strings.

Classes

AlphaBeta

LRP AlphaBeta rule [Bach et al., 2015].

ClampMod

ParamMod to clamp module parameters.

Epsilon

LRP Epsilon rule [Bach et al., 2015].

Flat

LRP Flat rule [Lapuschkin et al., 2019].

Gamma

Generalized LRP Gamma rule [Andéol et al., 2021, Montavon et al., 2019].

GammaMod

ParamMod to modify module parameters as in the Gamma rule.

NoMod

ParamMod that does not modify the parameters.

Norm

Normalize and weight by input contribution.

Pass

Unmodified pass-through rule.

ReLUBetaSmooth

Modify ReLU gradient to smooth softplus gradient [Dombrowski et al., 2019].

ReLUDeconvNet

DeconvNet ReLU rule [Zeiler and Fergus, 2014].

ReLUGuidedBackprop

GuidedBackprop ReLU rule [Springenberg et al., 2015].

WSquare

LRP WSquare rule [Montavon et al., 2017].

ZBox

LRP ZBox rule [Montavon et al., 2017].

ZPlus

LRP ZPlus rule [Bach et al., 2015, Montavon et al., 2017].

class zennit.rules.AlphaBeta(alpha=2.0, beta=1.0, stabilizer=1e-06, zero_params=None)[source]

Bases: BasicHook

LRP AlphaBeta rule [Bach et al., 2015]. The AlphaBeta rule weights positive (alpha) and negative (beta) contributions. Most common parameters are (alpha=1, beta=0) and (alpha=2, beta=1). It is most commonly used for lower layers [Montavon et al., 2019].

Parameters:
  • alpha (float, optional) – Multiplier for the positive output term.

  • beta (float, optional) – Multiplier for the negative output term.

  • stabilizer (callable or float, optional) – Stabilization parameter. If stabilizer is a float, it will be added to the denominator with the same sign as each respective entry. If it is callable, a function (input: torch.Tensor) -> torch.Tensor is expected, whose output corresponds to the stabilized denominator.

  • zero_params (list[str], optional) – A list of parameter names that shall be set to zero. If None (default), no parameters are set to zero.
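
As a usage sketch, rules are typically attached to layer types through a composite. The example below assumes LayerMapComposite from zennit.composites (not part of this module) and follows the common assignment of AlphaBeta to lower convolutional layers and Epsilon to upper linear layers.

import torch.nn as nn
from zennit.composites import LayerMapComposite
from zennit.rules import AlphaBeta, Epsilon

# AlphaBeta for (lower) convolutional layers, Epsilon for (upper) linear layers
composite = LayerMapComposite(layer_map=[
    (nn.Conv2d, AlphaBeta(alpha=2.0, beta=1.0)),
    (nn.Linear, Epsilon(epsilon=1e-6)),
])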

class zennit.rules.ClampMod(min=None, max=None, **kwargs)[source]

Bases: ParamMod

ParamMod to clamp module parameters.

Parameters:
  • min (float or None, optional) – Minimum float value to which the parameters are clamped, or None if no lower clamping should be done.

  • max (float or None, optional) – Maximum float value to which the parameters are clamped, or None if no upper clamping should be done.

  • kwargs (dict[str, object]) – Additional keyword arguments used for ParamMod.

class zennit.rules.Epsilon(epsilon=1e-06, zero_params=None)[source]

Bases: BasicHook

LRP Epsilon rule [Bach et al., 2015]. Setting (epsilon=0) produces the LRP-0 rule [Bach et al., 2015]. LRP Epsilon is most commonly used in middle layers, while LRP-0 is most commonly used in upper layers [Montavon et al., 2019]. Since higher values of epsilon are sometimes chosen deliberately, epsilon is not always only a stabilizer value.

Parameters:
  • epsilon (callable or float, optional) – Stabilization parameter. If epsilon is a float, it will be added to the denominator with the same sign as each respective entry. If it is callable, a function (input: torch.Tensor) -> torch.Tensor is expected, whose output corresponds to the stabilized denominator. Note that this parameter is called stabilizer in all other rules.

  • zero_params (list[str], optional) – A list of parameter names that shall be set to zero. If None (default), no parameters are set to zero.
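
A minimal sketch of the callable form of epsilon described above, assuming the callable receives the raw denominator and returns the stabilized denominator:

import torch
from zennit.rules import Epsilon

def scaled_epsilon(denominator):
    # add a fraction of the mean magnitude, keeping each entry's sign
    # (entries that are exactly zero are shifted by the positive term)
    sign = (denominator >= 0.).to(denominator) * 2. - 1.
    return denominator + sign * 0.1 * denominator.abs().mean()

rule = Epsilon(epsilon=scaled_epsilon)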

class zennit.rules.Flat(stabilizer=1e-06, zero_params=None)[source]

Bases: BasicHook

LRP Flat rule [Lapuschkin et al., 2019]. It is essentially the same as the LRP WSquare rule, but with all parameters set to ones.

Parameters:
  • stabilizer (callable or float, optional) – Stabilization parameter. If stabilizer is a float, it will be added to the denominator with the same sign as each respective entry. If it is callable, a function (input: torch.Tensor) -> torch.Tensor is expected, whose output corresponds to the stabilized denominator.

  • zero_params (list[str], optional) – A list of parameter names that shall be set to zero. If None (default), no parameters are set to zero.
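
A sketch of assigning Flat to the input layer by module name, assuming NameMapComposite from zennit.composites and a VGG-style model whose first convolution is named 'features.0' (both the composite and the module names are assumptions, not part of this module):

from zennit.composites import NameMapComposite
from zennit.rules import Epsilon, Flat

composite = NameMapComposite(name_map=[
    (['features.0'], Flat()),                       # flat rule for the input layer
    (['classifier.0', 'classifier.3'], Epsilon()),  # epsilon for the dense layers
])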

class zennit.rules.Gamma(gamma=0.25, stabilizer=1e-06, zero_params=None)[source]

Bases: BasicHook

Generalized LRP Gamma rule [Andéol et al., 2021, Montavon et al., 2019]. The gamma parameter scales the added positive/negative parts of the weights. The original Gamma rule [Montavon et al., 2019] may only be used with positive inputs. The generalized version is equivalent to the original Gamma when there are only positive inputs, but may also be used for negative inputs.

Parameters:
  • gamma (float, optional) – Multiplier for added positive weights.

  • stabilizer (callable or float, optional) – Stabilization parameter. If stabilizer is a float, it will be added to the denominator with the same sign as each respective entry. If it is callable, a function (input: torch.Tensor) -> torch.Tensor is expected, whose output corresponds to the stabilized denominator.

  • zero_params (list[str], optional) – A list of parameter names that shall be set to zero. If None (default), no parameters are set to zero.
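
A sketch of a complete attribution pass using Gamma, assuming LayerMapComposite from zennit.composites and the Composite.context context manager from zennit.core; the toy model and shapes are purely illustrative:

import torch
import torch.nn as nn
from zennit.composites import LayerMapComposite
from zennit.rules import Gamma

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 8 * 8, 10),
)
composite = LayerMapComposite(layer_map=[
    (nn.Conv2d, Gamma(gamma=0.25)),
    (nn.Linear, Gamma(gamma=0.25)),
])

data = torch.randn(1, 3, 8, 8, requires_grad=True)
with composite.context(model) as modified_model:
    output = modified_model(data)
    # backpropagate the relevance of class 3; the input gradient holds the attribution
    output.backward(gradient=torch.eye(10)[[3]])
relevance = data.grad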

class zennit.rules.GammaMod(gamma=0.25, min=None, max=None, **kwargs)[source]

Bases: ParamMod

ParamMod to modify module parameters as in the Gamma rule. Adds the scaled, clamped parameters to the parameters themselves.

Parameters:
  • gamma (float, optional) – Gamma scaling parameter, by which the clamped parameters are multiplied.

  • min (float or None, optional) – Minimum float value to which the parameters are clamped, or None if no lower clamping should be done.

  • max (float or None, optional) – Maximum float value to which the parameters are clamped, or None if no upper clamping should be done.

  • kwargs (dict[str, object]) – Additional keyword arguments used for ParamMod.

class zennit.rules.NoMod(**kwargs)[source]

Bases: ParamMod

ParamMod that does not modify the parameters. Allows other modification flags.

Parameters:

kwargs (dict[str, object]) – Additional keyword arguments used for ParamMod.

class zennit.rules.Norm(stabilizer=1e-06)[source]

Bases: BasicHook

Normalize and weight by input contribution. This is essentially the same as the LRP Epsilon rule [Bach et al., 2015] with a fixed epsilon used only as a stabilizer, and without requiring the attached layer to have the parameters weight and bias.
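
Because Norm does not require the attached layer to have weight and bias, it suits parameter-free operations such as pooling. A sketch, again assuming LayerMapComposite from zennit.composites:

import torch.nn as nn
from zennit.composites import LayerMapComposite
from zennit.rules import Epsilon, Norm

composite = LayerMapComposite(layer_map=[
    (nn.AvgPool2d, Norm()),  # parameter-free layer: normalize by input contribution
    (nn.Linear, Epsilon()),
])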

class zennit.rules.Pass[source]

Bases: Hook

Unmodified pass-through rule. For elementwise layers that should neither use any other rule nor the plain gradient, the Pass rule simply passes the upper-layer relevance through to the lower layer unchanged.

backward(module, grad_input, grad_output)[source]

Pass through the upper gradient, skipping the one for this layer.
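
A sketch of mapping elementwise activations to Pass, assuming LayerMapComposite from zennit.composites:

import torch.nn as nn
from zennit.composites import LayerMapComposite
from zennit.rules import Epsilon, Pass

composite = LayerMapComposite(layer_map=[
    (nn.Tanh, Pass()),       # relevance passes through the activation unchanged
    (nn.Linear, Epsilon()),
])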

class zennit.rules.ReLUBetaSmooth(beta_smooth=10.0)[source]

Bases: Hook

Modify ReLU gradient to smooth softplus gradient [Dombrowski et al., 2019]. Used to obtain meaningful surrogate gradients to compute higher order gradients with ReLUs. Equivalent to changing the gradient to be the (scaled) logistic function (sigmoid).

Parameters:

beta_smooth (float, optional) – The beta parameter for the softplus gradient (i.e. sigmoid(beta * input)). Defaults to 10.

backward(module, grad_input, grad_output)[source]

Modify ReLU gradient to the smooth softplus gradient [Dombrowski et al., 2019].

copy()[source]

Return a copy of this hook with the same beta parameter.

forward(module, input, output)[source]

Remember the input for the backward pass.
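
A sketch of replacing the hard ReLU gradient with the smooth surrogate, assuming LayerMapComposite from zennit.composites, e.g. to obtain usable higher-order gradients:

import torch.nn as nn
from zennit.composites import LayerMapComposite
from zennit.rules import ReLUBetaSmooth

# the ReLU gradient becomes sigmoid(beta_smooth * input)
composite = LayerMapComposite(layer_map=[
    (nn.ReLU, ReLUBetaSmooth(beta_smooth=10.0)),
])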

class zennit.rules.ReLUDeconvNet[source]

Bases: Hook

DeconvNet ReLU rule [Zeiler and Fergus, 2014].

backward(module, grad_input, grad_output)[source]

Modify ReLU gradient according to DeconvNet [Zeiler and Fergus, 2014].

class zennit.rules.ReLUGuidedBackprop[source]

Bases: Hook

GuidedBackprop ReLU rule [Springenberg et al., 2015].

backward(module, grad_input, grad_output)[source]

Modify ReLU gradient according to GuidedBackprop [Springenberg et al., 2015].

class zennit.rules.WSquare(stabilizer=1e-06, zero_params=None)[source]

Bases: BasicHook

LRP WSquare rule [Montavon et al., 2017]. It is most commonly used in the first layer when the values are not bounded [Montavon et al., 2019].

Parameters:
  • stabilizer (callable or float, optional) – Stabilization parameter. If stabilizer is a float, it will be added to the denominator with the same sign as each respective entry. If it is callable, a function (input: torch.Tensor) -> torch.Tensor is expected, whose output corresponds to the stabilized denominator.

  • zero_params (list[str], optional) – A list of parameter names that shall be set to zero. If None (default), no parameters are set to zero.

class zennit.rules.ZBox(low, high, stabilizer=1e-06, zero_params=None)[source]

Bases: BasicHook

LRP ZBox rule [Montavon et al., 2017]. The ZBox rule is intended for “boxed” input pixel space. Generally, the lowest and highest possible values are used, i.e. (low=0., high=1.) for raw image data in the float data type. Neural network inputs are often normalized to match an isotropic Gaussian distribution with mean 0 and variance 1, which means that the lowest and highest values also need to be adapted. For image data, this generally happens per channel, in which case low and high can be passed as tensors with shape (1, 3, 1, 1), which will be broadcast as expected.

Parameters:
  • low (torch.Tensor or float) – Lowest pixel values of input. Subject to broadcasting.

  • high (torch.Tensor or float) – Highest pixel values of input. Subject to broadcasting.

  • stabilizer (callable or float, optional) – Stabilization parameter. If stabilizer is a float, it will be added to the denominator with the same sign as each respective entry. If it is callable, a function (input: torch.Tensor) -> torch.Tensor is expected, whose output corresponds to the stabilized denominator.

  • zero_params (list[str], optional) – A list of parameter names that shall be set to zero. If None (default), no parameters are set to zero.
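
A sketch of computing per-channel low and high bounds from normalization statistics, as described above; the mean and std values are the common ImageNet statistics and serve only as an illustration:

import torch
from zennit.rules import ZBox

mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

# raw pixels lie in [0., 1.]; after normalization the box bounds become
# per-channel tensors of shape (1, 3, 1, 1), broadcast over the input
low = (0. - mean) / std
high = (1. - mean) / std
rule = ZBox(low=low, high=high)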

class zennit.rules.ZPlus(stabilizer=1e-06, zero_params=None)[source]

Bases: BasicHook

LRP ZPlus rule [Bach et al., 2015, Montavon et al., 2017]. It is equivalent to using AlphaBeta with (alpha=1, beta=0).

Parameters:
  • stabilizer (callable or float, optional) – Stabilization parameter. If stabilizer is a float, it will be added to the denominator with the same sign as each respective entry. If it is callable, a function (input: torch.Tensor) -> torch.Tensor is expected, whose output corresponds to the stabilized denominator.

  • zero_params (list[str], optional) – A list of parameter names that shall be set to zero. If None (default), no parameters are set to zero.

Notes

Note that the original Deep Taylor Decomposition (DTD) specification of the ZPlus rule [Montavon et al., 2017] only considers positive inputs, as they occur in ReLU networks. This implementation is effectively alpha=1, beta=0, and also allows negative inputs.

zennit.rules.zero_bias(zero_params=None)[source]

Add ‘bias’ to zero_params, where zero_params is a string or a list of strings.

Parameters:

zero_params (str or list of str, optional) – Name or names to which 'bias' should be added.

Returns:

Supplied zero_params, with the string 'bias' appended.

Return type:

list of str
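
A sketch of using zero_bias to exclude bias terms from the attribution; the expected return values follow from the description above:

from zennit.rules import Epsilon, zero_bias

rule = Epsilon(epsilon=1e-6, zero_params=zero_bias())  # zero_bias() -> ['bias']

# an existing name (or list of names) is extended with 'bias'
params = zero_bias('weight')  # expected: ['weight', 'bias']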