AdapterLayer

class transformers.AdapterLayer(location_key: str, config)
adapter_fusion(adapter_setup: transformers.adapters.composition.Fuse, hidden_states, input_tensor, layer_norm, lvl=0)

    Performs adapter fusion with the given adapters on the given input.
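Conceptually, fusion runs several adapters on the same input and combines their outputs with attention-style weights. The sketch below illustrates only that weighted combination with plain Python lists and fixed weights; the real fusion layer learns its weights through query/key/value projections, and all names here are illustrative.

```python
def fuse(adapter_outputs, weights):
    """Weighted sum over adapter outputs, position by position."""
    dim = len(adapter_outputs[0])
    return [sum(w * out[i] for w, out in zip(weights, adapter_outputs))
            for i in range(dim)]

outputs = [[1.0, 0.0], [0.0, 1.0]]  # outputs of two adapters on the same input
weights = [0.75, 0.25]              # stand-in fusion attention weights (sum to 1)
fused = fuse(outputs, weights)      # -> [0.75, 0.25]
```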
adapter_layer_forward(hidden_states, residual_input, layer_norm)

    Forward pass through the adapter layer. Note: this method should only be called if the calling module directly inherits from AdapterLayer; otherwise, call the regular forward() method.
    Parameters:
        hidden_states (torch.Tensor) – Input hidden states to the adapter layer.
        residual_input (torch.Tensor) – Residual input to the adapter layer.
        layer_norm (torch.nn.Module) – Transformer layer normalization module to be used by the adapter layer.

    Returns:
        Output hidden states of the adapter layer.

    Return type:
        torch.Tensor
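The computation behind this forward pass is, at its core, a bottleneck: down-project the hidden states, apply a nonlinearity, up-project, add the residual, and normalize. The sketch below shows that pattern with plain Python lists instead of torch tensors; the weight shapes, the ReLU stand-in, and the identity "layer norm" are all illustrative assumptions, not the library's actual implementation.

```python
def adapter_forward(hidden_states, residual_input, down_w, up_w, layer_norm):
    # Down-project into the bottleneck (each column of down_w is one output unit).
    down = [sum(h * w for h, w in zip(hidden_states, col)) for col in down_w]
    activated = [max(0.0, x) for x in down]  # ReLU as a stand-in nonlinearity
    # Up-project back to the hidden size.
    up = [sum(a * w for a, w in zip(activated, col)) for col in up_w]
    # Add the residual input, then apply the (stand-in) layer norm.
    summed = [u + r for u, r in zip(up, residual_input)]
    return layer_norm(summed)

identity_norm = lambda xs: xs   # stand-in for torch.nn.LayerNorm

hidden = [1.0, 2.0]
residual = [0.5, 0.5]
down_w = [[1.0, 0.0]]           # hidden size 2 -> bottleneck size 1
up_w = [[1.0], [1.0]]           # bottleneck size 1 -> hidden size 2
out = adapter_forward(hidden, residual, down_w, up_w, identity_norm)  # -> [1.5, 1.5]
```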
adapter_parallel(adapter_setup: transformers.adapters.composition.Parallel, hidden_states, input_tensor, layer_norm, lvl=0)

    Executes the adapters in parallel on the same input. The input is repeated N times before being fed to the adapters, where N is the number of adapters.
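The input repetition described above can be sketched in one line: the batch is tiled once per adapter so each adapter receives its own copy of the same examples. Plain Python strings stand in for tensors here; in the library this is a tensor repeat along the batch dimension.

```python
def repeat_for_parallel(batch, n_adapters):
    # A batch [x1, x2] with 3 adapters becomes [x1, x2, x1, x2, x1, x2].
    return batch * n_adapters

batch = ["x1", "x2"]
expanded = repeat_for_parallel(batch, 3)  # len(expanded) == 6
```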
adapter_split(adapter_setup: transformers.adapters.composition.Split, hidden_states, input_tensor, layer_norm, lvl=0)

    Splits the given input between the given adapters.
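A minimal sketch of the split idea: the sequence is cut at an index and each slice is routed to a different adapter. The two-way split and the exact index semantics below are illustrative assumptions; the library's Split block handles this over tensors.

```python
def split_input(sequence, split_index):
    # The first adapter processes positions [0, split_index),
    # the second adapter processes the rest.
    return sequence[:split_index], sequence[split_index:]

tokens = ["t0", "t1", "t2", "t3"]
first, second = split_input(tokens, 2)  # -> (["t0", "t1"], ["t2", "t3"])
```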
adapter_stack(adapter_setup: transformers.adapters.composition.Stack, hidden_states, input_tensor, layer_norm, lvl=0)

    Forwards the given input through the given stack of adapters.
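Stacking means applying the adapters sequentially, each one consuming the previous adapter's output. The sketch below shows that chaining with simple stand-in functions instead of real adapter modules.

```python
def stack_forward(hidden, adapters):
    # Each adapter's output becomes the next adapter's input.
    for adapter in adapters:
        hidden = adapter(hidden)
    return hidden

add_one = lambda xs: [x + 1 for x in xs]  # stand-in adapter 1
double = lambda xs: [2 * x for x in xs]   # stand-in adapter 2
result = stack_forward([1.0, 2.0], [add_one, double])  # -> [4.0, 6.0]
```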
add_fusion_layer(adapter_names: Union[List, str])

    See BertModel.add_fusion_layer.
enable_adapters(adapter_setup: transformers.adapters.composition.AdapterCompositionBlock, unfreeze_adapters: bool, unfreeze_fusion: bool)

    Unfreezes a given list of adapters, the adapter fusion layer, or both.

    Parameters:
        adapter_setup – the adapters to unfreeze (or the adapters that are part of the fusion layer to unfreeze)
        unfreeze_adapters – whether the adapter weights should be activated
        unfreeze_fusion – whether the adapter fusion layer for the given adapters should be activated
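Conceptually, this method flips a trainable flag (requires_grad in torch) on the selected adapter and/or fusion parameters while everything else stays frozen. The sketch below models parameters as a plain dict; the parameter-naming scheme and matching logic are illustrative assumptions, not the library's internals.

```python
# Stand-in parameter table: all weights start out frozen.
params = {
    "adapters.task_a.down": {"requires_grad": False},
    "adapters.task_a.up": {"requires_grad": False},
    "adapter_fusion.task_a,task_b.query": {"requires_grad": False},
}

def enable_adapters(params, adapter_names, unfreeze_adapters, unfreeze_fusion):
    for name, p in params.items():
        # Unfreeze weights belonging to the selected adapters.
        if unfreeze_adapters and any(f"adapters.{n}." in name for n in adapter_names):
            p["requires_grad"] = True
        # Optionally unfreeze the fusion layer as well.
        if unfreeze_fusion and name.startswith("adapter_fusion."):
            p["requires_grad"] = True

# Unfreeze only task_a's adapter weights; leave fusion frozen.
enable_adapters(params, ["task_a"], unfreeze_adapters=True, unfreeze_fusion=False)
```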
forward(hidden_states, residual_input, layer_norm)

    Forward pass through the adapter layer.

    Parameters:
        hidden_states (torch.Tensor) – Input hidden states to the adapter layer.
        residual_input (torch.Tensor) – Residual input to the adapter layer.
        layer_norm (torch.nn.Module) – Transformer layer normalization module to be used by the adapter layer.

    Returns:
        Output hidden states of the adapter layer.

    Return type:
        torch.Tensor