zennit.composites
Composites, registered in a global composite dict.
Functions
- layer_map_base – Return a basic layer map (list of 2-tuples) shared by all built-in LayerMapComposites.
- register_composite – Register a composite in the global COMPOSITES dict under name.
Classes
- BetaSmooth – Explicit composite to modify ReLU gradients to smooth softplus gradients [Dombrowski et al., 2019].
- DeconvNet – An explicit composite modifying the gradients of all ReLUs according to DeconvNet [Zeiler and Fergus, 2014].
- EpsilonAlpha2Beta1 – An explicit composite using the alpha2-beta1 rule for all convolutional layers and the epsilon rule for all fully connected layers.
- EpsilonAlpha2Beta1Flat – An explicit composite using the flat rule for any linear first layer, the alpha2-beta1 rule for all other convolutional layers and the epsilon rule for all other fully connected layers.
- EpsilonGammaBox – An explicit composite using the ZBox rule for the first convolutional layer, the gamma rule for all following convolutional layers, and the epsilon rule for all fully connected layers.
- EpsilonPlus – An explicit composite using the zplus rule for all convolutional layers and the epsilon rule for all fully connected layers.
- EpsilonPlusFlat – An explicit composite using the flat rule for any linear first layer, the zplus rule for all other convolutional layers and the epsilon rule for all other fully connected layers.
- ExcitationBackprop – An explicit composite implementing ExcitationBackprop [Zhang et al., 2016].
- GuidedBackprop – An explicit composite modifying the gradients of all ReLUs according to GuidedBackprop [Springenberg et al., 2015].
- LayerMapComposite – A Composite for which hooks are specified by a mapping from module types to hooks.
- MixedComposite – A Composite for which hooks are specified by a list of composites.
- NameLayerMapComposite – A Composite for which hooks are specified by mappings from both module names and module types to hooks.
- NameMapComposite – A Composite for which hooks are specified by a mapping from module names to hooks.
- SpecialFirstLayerMapComposite – A Composite for which hooks are specified by a mapping from module types to hooks, with a special mapping for the first layer.
- class zennit.composites.BetaSmooth(beta_smooth=10.0, layer_map=None, zero_params=None, canonizers=None)[source]
Bases: LayerMapComposite
Explicit composite to modify ReLU gradients to smooth softplus gradients [Dombrowski et al., 2019]. Used to obtain meaningful surrogate gradients to compute higher-order gradients with ReLUs. Equivalent to changing the gradient to be the (scaled) logistic function (sigmoid).
- Parameters:
  - beta_smooth (float, optional) – The beta parameter for the softplus gradient (i.e. sigmoid(beta * input)). Defaults to 10.
  - layer_map (list[tuple[tuple[torch.nn.Module, ...], Hook]]) – A mapping as a list of tuples, with a tuple of applicable module types and a Hook. This will be prepended to the layer_map defined by the composite.
  - zero_params (list[str], optional) – A list of parameter names that shall be set to zero. If None (default), no parameters are set to zero.
  - canonizers (list[zennit.canonizers.Canonizer], optional) – List of canonizer instances to be applied before applying hooks.
- class zennit.composites.DeconvNet(layer_map=None, canonizers=None)[source]
Bases: LayerMapComposite
An explicit composite modifying the gradients of all ReLUs according to DeconvNet [Zeiler and Fergus, 2014].
- Parameters:
  - layer_map (list[tuple[tuple[torch.nn.Module, ...], Hook]]) – A mapping as a list of tuples, with a tuple of applicable module types and a Hook. This will be prepended to the layer_map defined by the composite.
  - canonizers (list[zennit.canonizers.Canonizer], optional) – List of canonizer instances to be applied before applying hooks.
- class zennit.composites.EpsilonAlpha2Beta1(epsilon=1e-06, stabilizer=1e-06, layer_map=None, zero_params=None, canonizers=None)[source]
Bases: LayerMapComposite
An explicit composite using the alpha2-beta1 rule for all convolutional layers and the epsilon rule for all fully connected layers.
- Parameters:
  - epsilon (callable or float, optional) – Stabilization parameter for the Epsilon rule. If epsilon is a float, it will be added to the denominator with the same sign as each respective entry. If it is callable, a function (input: torch.Tensor) -> torch.Tensor is expected, of which the output corresponds to the stabilized denominator. Note that this parameter is called stabilizer for all other rules.
  - stabilizer (callable or float, optional) – Stabilization parameter for rules other than Epsilon. If stabilizer is a float, it will be added to the denominator with the same sign as each respective entry. If it is callable, a function (input: torch.Tensor) -> torch.Tensor is expected, of which the output corresponds to the stabilized denominator.
  - layer_map (list[tuple[tuple[torch.nn.Module, ...], Hook]]) – A mapping as a list of tuples, with a tuple of applicable module types and a Hook. This will be prepended to the layer_map defined by the composite.
  - zero_params (list[str], optional) – A list of parameter names that shall be set to zero. If None (default), no parameters are set to zero.
  - canonizers (list[zennit.canonizers.Canonizer], optional) – List of canonizer instances to be applied before applying hooks.
- class zennit.composites.EpsilonAlpha2Beta1Flat(epsilon=1e-06, stabilizer=1e-06, layer_map=None, first_map=None, zero_params=None, canonizers=None)[source]
Bases: SpecialFirstLayerMapComposite
An explicit composite using the flat rule for any linear first layer, the alpha2-beta1 rule for all other convolutional layers and the epsilon rule for all other fully connected layers.
- Parameters:
  - epsilon (callable or float, optional) – Stabilization parameter for the Epsilon rule. If epsilon is a float, it will be added to the denominator with the same sign as each respective entry. If it is callable, a function (input: torch.Tensor) -> torch.Tensor is expected, of which the output corresponds to the stabilized denominator. Note that this parameter is called stabilizer for all other rules.
  - stabilizer (callable or float, optional) – Stabilization parameter for rules other than Epsilon. If stabilizer is a float, it will be added to the denominator with the same sign as each respective entry. If it is callable, a function (input: torch.Tensor) -> torch.Tensor is expected, of which the output corresponds to the stabilized denominator.
  - layer_map (list[tuple[tuple[torch.nn.Module, ...], Hook]]) – A mapping as a list of tuples, with a tuple of applicable module types and a Hook. This will be prepended to the layer_map defined by the composite.
  - first_map (list[tuple[tuple[torch.nn.Module, ...], Hook]]) – Applicable mapping for the first layer, in the same format as layer_map. This will be prepended to the first_map defined by the composite.
  - zero_params (list[str], optional) – A list of parameter names that shall be set to zero. If None (default), no parameters are set to zero.
  - canonizers (list[zennit.canonizers.Canonizer], optional) – List of canonizer instances to be applied before applying hooks.
- class zennit.composites.EpsilonGammaBox(low, high, epsilon=1e-06, gamma=0.25, stabilizer=1e-06, layer_map=None, first_map=None, zero_params=None, canonizers=None)[source]
Bases: SpecialFirstLayerMapComposite
An explicit composite using the ZBox rule for the first convolutional layer, the gamma rule for all following convolutional layers, and the epsilon rule for all fully connected layers.
- Parameters:
  - low (torch.Tensor) – A tensor with the same size as the input, describing the lowest possible pixel values.
  - high (torch.Tensor) – A tensor with the same size as the input, describing the highest possible pixel values.
  - epsilon (callable or float, optional) – Stabilization parameter for the Epsilon rule. If epsilon is a float, it will be added to the denominator with the same sign as each respective entry. If it is callable, a function (input: torch.Tensor) -> torch.Tensor is expected, of which the output corresponds to the stabilized denominator. Note that this parameter is called stabilizer for all other rules.
  - gamma (float, optional) – Gamma parameter for the gamma rule.
  - stabilizer (callable or float, optional) – Stabilization parameter for rules other than Epsilon. If stabilizer is a float, it will be added to the denominator with the same sign as each respective entry. If it is callable, a function (input: torch.Tensor) -> torch.Tensor is expected, of which the output corresponds to the stabilized denominator.
  - layer_map (list[tuple[tuple[torch.nn.Module, ...], Hook]]) – A mapping as a list of tuples, with a tuple of applicable module types and a Hook. This will be prepended to the layer_map defined by the composite.
  - first_map (list[tuple[tuple[torch.nn.Module, ...], Hook]]) – Applicable mapping for the first layer, in the same format as layer_map. This will be prepended to the first_map defined by the composite.
  - zero_params (list[str], optional) – A list of parameter names that shall be set to zero. If None (default), no parameters are set to zero.
  - canonizers (list[zennit.canonizers.Canonizer], optional) – List of canonizer instances to be applied before applying hooks.
- class zennit.composites.EpsilonPlus(epsilon=1e-06, stabilizer=1e-06, layer_map=None, zero_params=None, canonizers=None)[source]
Bases: LayerMapComposite
An explicit composite using the zplus rule for all convolutional layers and the epsilon rule for all fully connected layers.
- Parameters:
  - epsilon (callable or float, optional) – Stabilization parameter for the Epsilon rule. If epsilon is a float, it will be added to the denominator with the same sign as each respective entry. If it is callable, a function (input: torch.Tensor) -> torch.Tensor is expected, of which the output corresponds to the stabilized denominator. Note that this parameter is called stabilizer for all other rules.
  - stabilizer (callable or float, optional) – Stabilization parameter for rules other than Epsilon. If stabilizer is a float, it will be added to the denominator with the same sign as each respective entry. If it is callable, a function (input: torch.Tensor) -> torch.Tensor is expected, of which the output corresponds to the stabilized denominator.
  - layer_map (list[tuple[tuple[torch.nn.Module, ...], Hook]]) – A mapping as a list of tuples, with a tuple of applicable module types and a Hook. This will be prepended to the layer_map defined by the composite.
  - zero_params (list[str], optional) – A list of parameter names that shall be set to zero. If None (default), no parameters are set to zero.
  - canonizers (list[zennit.canonizers.Canonizer], optional) – List of canonizer instances to be applied before applying hooks.
- class zennit.composites.EpsilonPlusFlat(epsilon=1e-06, stabilizer=1e-06, layer_map=None, first_map=None, zero_params=None, canonizers=None)[source]
Bases: SpecialFirstLayerMapComposite
An explicit composite using the flat rule for any linear first layer, the zplus rule for all other convolutional layers and the epsilon rule for all other fully connected layers.
- Parameters:
  - epsilon (callable or float, optional) – Stabilization parameter for the Epsilon rule. If epsilon is a float, it will be added to the denominator with the same sign as each respective entry. If it is callable, a function (input: torch.Tensor) -> torch.Tensor is expected, of which the output corresponds to the stabilized denominator. Note that this parameter is called stabilizer for all other rules.
  - stabilizer (callable or float, optional) – Stabilization parameter for rules other than Epsilon. If stabilizer is a float, it will be added to the denominator with the same sign as each respective entry. If it is callable, a function (input: torch.Tensor) -> torch.Tensor is expected, of which the output corresponds to the stabilized denominator.
  - layer_map (list[tuple[tuple[torch.nn.Module, ...], Hook]]) – A mapping as a list of tuples, with a tuple of applicable module types and a Hook. This will be prepended to the layer_map defined by the composite.
  - first_map (list[tuple[tuple[torch.nn.Module, ...], Hook]]) – Applicable mapping for the first layer, in the same format as layer_map. This will be prepended to the first_map defined by the composite.
  - zero_params (list[str], optional) – A list of parameter names that shall be set to zero. If None (default), no parameters are set to zero.
  - canonizers (list[zennit.canonizers.Canonizer], optional) – List of canonizer instances to be applied before applying hooks.
- class zennit.composites.ExcitationBackprop(stabilizer=1e-06, layer_map=None, zero_params=None, canonizers=None)[source]
Bases: LayerMapComposite
An explicit composite implementing ExcitationBackprop [Zhang et al., 2016].
- Parameters:
  - stabilizer (callable or float, optional) – Stabilization parameter. If stabilizer is a float, it will be added to the denominator with the same sign as each respective entry. If it is callable, a function (input: torch.Tensor) -> torch.Tensor is expected, of which the output corresponds to the stabilized denominator.
  - layer_map (list[tuple[tuple[torch.nn.Module, ...], Hook]]) – A mapping as a list of tuples, with a tuple of applicable module types and a Hook. This will be prepended to the layer_map defined by the composite.
  - zero_params (list[str], optional) – A list of parameter names that shall be set to zero. If None (default), no parameters are set to zero.
  - canonizers (list[zennit.canonizers.Canonizer], optional) – List of canonizer instances to be applied before applying hooks.
- class zennit.composites.GuidedBackprop(layer_map=None, canonizers=None)[source]
Bases: LayerMapComposite
An explicit composite modifying the gradients of all ReLUs according to GuidedBackprop [Springenberg et al., 2015].
- Parameters:
  - layer_map (list[tuple[tuple[torch.nn.Module, ...], Hook]]) – A mapping as a list of tuples, with a tuple of applicable module types and a Hook. This will be prepended to the layer_map defined by the composite.
  - canonizers (list[zennit.canonizers.Canonizer], optional) – List of canonizer instances to be applied before applying hooks.
- class zennit.composites.LayerMapComposite(layer_map, canonizers=None)[source]
Bases: Composite
A Composite for which hooks are specified by a mapping from module types to hooks.
- Parameters:
  - layer_map (list[tuple[tuple[torch.nn.Module, ...], Hook]]) – A mapping as a list of tuples, with a tuple of applicable module types and a Hook.
  - canonizers (list[zennit.canonizers.Canonizer], optional) – List of canonizer instances to be applied before applying hooks.
- mapping(ctx, name, module)[source]
Get the appropriate hook given a mapping from module types to hooks.
- Parameters:
ctx (dict) – A context dictionary to keep track of previously registered hooks.
name (str) – Name of the module.
module (torch.nn.Module) – Instance of the module to find a hook for.
- Returns:
The hook found with the module type in the given layer map, or None if no applicable hook was found.
- Return type:
Hook or None
- class zennit.composites.MixedComposite(composites, canonizers=None)[source]
Bases: Composite
A Composite for which hooks are specified by a list of composites.
Each composite defines a mapping from layer property to a specific Hook. The list order of composites defines their matching order.
- Parameters:
  - composites (list[Composite]) – A list of Composites. The list order of composites defines their matching order.
  - canonizers (list[zennit.canonizers.Canonizer], optional) – List of canonizer instances to be applied before applying hooks.
- mapping(ctx, name, module)[source]
Get the appropriate hook given a list of composites.
- Parameters:
ctx (dict) – A context dictionary to keep track of previously registered hooks.
name (str) – Name of the module.
module (torch.nn.Module) – Instance of the module to find a hook for.
- Returns:
The hook found by the first match in the composite list, or None if no applicable hook was found.
- Return type:
Hook or None
- class zennit.composites.NameLayerMapComposite(name_map=None, layer_map=None, canonizers=None)[source]
Bases: MixedComposite
A Composite for which hooks are specified by mappings from both module names and module types to hooks.
This implicitly creates instances of NameMapComposite and LayerMapComposite. The layer-name mapping will be matched before the layer-type mapping.
- Parameters:
  - name_map (list[tuple[tuple[str, ...], Hook]]) – A mapping as a list of tuples, with a tuple of applicable module names and a Hook.
  - layer_map (list[tuple[tuple[torch.nn.Module, ...], Hook]], optional) – A mapping as a list of tuples, with a tuple of applicable module types and a Hook.
  - canonizers (list[zennit.canonizers.Canonizer], optional) – List of canonizer instances to be applied before applying hooks.
- class zennit.composites.NameMapComposite(name_map, canonizers=None)[source]
Bases: Composite
A Composite for which hooks are specified by a mapping from module names to hooks.
- Parameters:
  - name_map (list[tuple[tuple[str, ...], Hook]]) – A mapping as a list of tuples, with a tuple of applicable module names and a Hook.
  - canonizers (list[zennit.canonizers.Canonizer], optional) – List of canonizer instances to be applied before applying hooks.
- mapping(ctx, name, module)[source]
Get the appropriate hook given a mapping from module names to hooks.
- Parameters:
ctx (dict) – A context dictionary to keep track of previously registered hooks.
name (str) – Name of the module.
module (torch.nn.Module) – Instance of the module to find a hook for.
- Returns:
The hook found with the module name in the given name map, or None if no applicable hook was found.
- Return type:
Hook or None
- class zennit.composites.SpecialFirstLayerMapComposite(layer_map, first_map, canonizers=None)[source]
Bases: LayerMapComposite
A Composite for which hooks are specified by a mapping from module types to hooks, with a special mapping for the first layer.
- Parameters:
  - layer_map (list[tuple[tuple[torch.nn.Module, ...], Hook]]) – A mapping as a list of tuples, with a tuple of applicable module types and a Hook.
  - first_map (list[tuple[tuple[torch.nn.Module, ...], Hook]]) – Applicable mapping for the first layer, in the same format as layer_map.
  - canonizers (list[zennit.canonizers.Canonizer], optional) – List of canonizer instances to be applied before applying hooks.
- mapping(ctx, name, module)[source]
Get the appropriate hook given a mapping from module types to hooks and a special mapping for the first occurrence in another mapping.
- Parameters:
ctx (dict) – A context dictionary to keep track of previously registered hooks.
name (str) – Name of the module.
module (torch.nn.Module) – Instance of the module to find a hook for.
- Returns:
The hook found with the module type in the given layer map, in the first layer map if this was the first layer, or None if no applicable hook was found.
- Return type:
Hook or None
- zennit.composites.layer_map_base(stabilizer=1e-06)[source]
Return a basic layer map (list of 2-tuples) shared by all built-in LayerMapComposites.
- Parameters:
  - stabilizer (callable or float, optional) – Stabilization parameter for rules other than Epsilon. If stabilizer is a float, it will be added to the denominator with the same sign as each respective entry. If it is callable, a function (input: torch.Tensor) -> torch.Tensor is expected, of which the output corresponds to the stabilized denominator.
- Returns:
Basic layer map shared by all built-in LayerMapComposites.
- Return type:
list[tuple[tuple[torch.nn.Module, ...], Hook]]