DeBERTa-v2¶
Overview¶
The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. It is based on Google’s BERT model released in 2018 and Facebook’s RoBERTa model released in 2019.
It builds on RoBERTa with disentangled attention and an enhanced mask decoder, and is trained with half of the data used for RoBERTa.
The abstract from the paper is the following:
Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks. In this paper we propose a new model architecture DeBERTa (Decoding-enhanced BERT with disentangled attention) that improves the BERT and RoBERTa models using two novel techniques. The first is the disentangled attention mechanism, where each word is represented using two vectors that encode its content and position, respectively, and the attention weights among words are computed using disentangled matrices on their contents and relative positions. Second, an enhanced mask decoder is used to replace the output softmax layer to predict the masked tokens for model pretraining. We show that these two techniques significantly improve the efficiency of model pretraining and performance of downstream tasks. Compared to RoBERTa-Large, a DeBERTa model trained on half of the training data performs consistently better on a wide range of NLP tasks, achieving improvements on MNLI by +0.9% (90.2% vs. 91.1%), on SQuAD v2.0 by +2.3% (88.4% vs. 90.7%) and RACE by +3.6% (83.2% vs. 86.8%). The DeBERTa code and pre-trained models will be made publicly available at https://github.com/microsoft/DeBERTa.
The following information is taken directly from the [original implementation repository](https://github.com/microsoft/DeBERTa). DeBERTa v2 is the second version of the DeBERTa model. It includes the 1.5B model used for the SuperGLUE single-model submission, which achieved a score of 89.9 versus the human baseline of 89.8. You can find more details about this submission in the authors’ [blog](https://www.microsoft.com/en-us/research/blog/microsoft-deberta-surpasses-human-performance-on-the-superglue-benchmark/).
New in v2:
Vocabulary: In v2, the tokenizer is changed to use a new vocabulary of size 128K built from the training data. Instead of a GPT2-based tokenizer, the tokenizer is now [sentencepiece-based](https://github.com/google/sentencepiece) (a minimal usage sketch follows this list).
nGiE (nGram Induced Input Encoding): The DeBERTa-v2 model uses an additional convolution layer alongside the first transformer layer to better learn the local dependencies of input tokens.
Sharing the position projection matrix with the content projection matrix in the attention layer: Based on previous experiments, this saves parameters without affecting performance.
Applying buckets to encode relative positions: The DeBERTa-v2 model uses log buckets to encode relative positions, similar to T5.
900M and 1.5B models: Two additional model sizes are available, 900M and 1.5B, which significantly improve the performance of downstream tasks.
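As an illustration of the new vocabulary, the following is a minimal sketch of loading the v2 sentencepiece tokenizer, assuming the `microsoft/deberta-v2-xlarge` checkpoint and an environment with `transformers` and `sentencepiece` installed:

```python
from transformers import DebertaV2Tokenizer

# Load the sentencepiece-based v2 tokenizer from a pre-trained checkpoint.
tokenizer = DebertaV2Tokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
print(tokenizer.vocab_size)  # the new ~128K vocabulary built from the training data
```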
This model was contributed by DeBERTa. The TF 2.0 implementation of this model was contributed by kamalkraj. The original code can be found here.
DebertaV2AdapterModel¶
- class transformers.adapters.DebertaV2AdapterModel(config)¶ Deberta v2 Model transformer with the option to add multiple flexible heads on top.
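A minimal sketch of instantiating the model from a pre-trained checkpoint, assuming the adapter-transformers library is installed; the later snippets on this page reuse this `model` object:

```python
from transformers.adapters import DebertaV2AdapterModel

# Load the base DeBERTa-v2 weights; adapters and heads are added separately.
model = DebertaV2AdapterModel.from_pretrained("microsoft/deberta-v2-xlarge")
```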
- property active_head¶ The active prediction head configuration of this model. Can be either the name of a single available head (string) or a list of multiple available heads. In case of a list of heads, the same base model is forwarded through all specified heads.
- Returns
A string or a list of strings describing the active head configuration.
- Return type
Union[str, List[str]]
- adapter_summary(as_dict=False) → Union[str, dict]¶ Returns a string summary of all adapters currently added to the model. Each entry in the summary table has the following attributes:
name: the name of the adapter
architecture: the architectural base of the adapter
#param: the number of parameters of the adapter
%param: the number of parameters of the adapter relative to the full model
active: whether the adapter is active
train: whether the adapter weights are enabled for training
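For example, on a model that already has adapters added, the summary can be printed or consumed programmatically (a sketch reusing the `model` object from above):

```python
# Human-readable table of all adapters on the model:
print(model.adapter_summary())

# Structured form for programmatic inspection:
summary = model.adapter_summary(as_dict=True)
```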
- add_adapter(adapter_name: str, config=None, overwrite_ok: bool = False, set_active: bool = False)¶ Adds a new adapter module of the specified type to the model.
- Parameters
adapter_name (str) – The name of the adapter module to be added.
config (str or dict, optional) –
The adapter configuration. Can be either:
- the string identifier of a pre-defined configuration dictionary
- a configuration dictionary specifying the full config
If not given, the default configuration for this adapter type will be used.
overwrite_ok (bool, optional) – Overwrite an adapter with the same name if it exists. By default (False), an exception is thrown.
set_active (bool, optional) – Set the adapter to be the active one. By default (False), the adapter is added but not activated.
If self.base_model is self, the class must inherit from a class that implements this method, to preclude infinite recursion.
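A short sketch of adding an adapter, continuing from the `model` created above; the adapter name "sst-2" is illustrative, while "pfeiffer" is one of the library's pre-defined configuration strings:

```python
# Add a bottleneck adapter using a pre-defined configuration string
# and activate it immediately.
model.add_adapter("sst-2", config="pfeiffer", set_active=True)
```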
- add_adapter_fusion(adapter_names: Union[transformers.adapters.composition.Fuse, list, str], config=None, overwrite_ok: bool = False, set_active: bool = False)¶ Adds AdapterFusion to the model with all the necessary configurations and weight initializations.
- Parameters
adapter_names (Fuse or list or str) –
AdapterFusion layer to add. Can be either:
- a Fuse composition block
- a list of adapter names to fuse
- a comma-separated string of adapter names to fuse
config (str or dict) –
Adapter fusion configuration. Can be either:
- a string identifying a pre-defined adapter fusion configuration
- a dictionary representing the adapter fusion configuration
- the path to a file containing the adapter fusion configuration
overwrite_ok (bool, optional) – Overwrite an AdapterFusion layer with the same name if it exists. By default (False), an exception is thrown.
set_active (bool, optional) – Activate the added AdapterFusion. By default (False), the AdapterFusion is added but not activated.
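A sketch of the three equivalent ways to specify the fused adapters; the adapter names are hypothetical and must already have been added to the model:

```python
from transformers.adapters.composition import Fuse

# 1) a Fuse composition block
model.add_adapter_fusion(Fuse("mnli", "qnli", "sst-2"))
# 2) a list of adapter names (overwriting the layer added above)
model.add_adapter_fusion(["mnli", "qnli", "sst-2"], overwrite_ok=True)
# 3) a comma-separated string of adapter names
model.add_adapter_fusion("mnli,qnli,sst-2", overwrite_ok=True)
```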
- add_classification_head(head_name, num_labels=2, layers=2, activation_function='tanh', overwrite_ok=False, multilabel=False, id2label=None, use_pooler=False)¶ Adds a sequence classification head on top of the model.
- Parameters
head_name (str) – The name of the head.
num_labels (int, optional) – Number of classification labels. Defaults to 2.
layers (int, optional) – Number of layers. Defaults to 2.
activation_function (str, optional) – Activation function. Defaults to ‘tanh’.
overwrite_ok (bool, optional) – Force overwrite if a head with the same name exists. Defaults to False.
multilabel (bool, optional) – Enable multilabel classification setup. Defaults to False.
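A sketch of adding a binary classification head whose name matches the adapter added earlier; the name and label mapping are illustrative:

```python
model.add_classification_head(
    "sst-2",
    num_labels=2,
    id2label={0: "negative", 1: "positive"},
)
```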
- add_masked_lm_head(head_name, activation_function='gelu', overwrite_ok=False)¶ Adds a masked language modeling head on top of the model.
- Parameters
head_name (str) – The name of the head.
activation_function (str, optional) – Activation function. Defaults to ‘gelu’.
overwrite_ok (bool, optional) – Force overwrite if a head with the same name exists. Defaults to False.
- add_multiple_choice_head(head_name, num_choices=2, layers=2, activation_function='tanh', overwrite_ok=False, id2label=None, use_pooler=False)¶ Adds a multiple choice head on top of the model.
- Parameters
head_name (str) – The name of the head.
num_choices (int, optional) – Number of choices. Defaults to 2.
layers (int, optional) – Number of layers. Defaults to 2.
activation_function (str, optional) – Activation function. Defaults to ‘tanh’.
overwrite_ok (bool, optional) – Force overwrite if a head with the same name exists. Defaults to False.
- add_tagging_head(head_name, num_labels=2, layers=1, activation_function='tanh', overwrite_ok=False, id2label=None)¶ Adds a token classification head on top of the model.
- Parameters
head_name (str) – The name of the head.
num_labels (int, optional) – Number of classification labels. Defaults to 2.
layers (int, optional) – Number of layers. Defaults to 1.
activation_function (str, optional) – Activation function. Defaults to ‘tanh’.
overwrite_ok (bool, optional) – Force overwrite if a head with the same name exists. Defaults to False.
- apply_to_adapter_layers(fn)¶ Applies a function to all adapter layers of the model.
- delete_adapter(adapter_name: str)¶ Deletes the adapter with the specified name from the model.
- Parameters
adapter_name (str) – The name of the adapter.
- delete_adapter_fusion(adapter_names: Union[transformers.adapters.composition.Fuse, list, str])¶ Deletes the AdapterFusion layer of the specified adapters.
- Parameters
adapter_names (Union[Fuse, list, str]) – AdapterFusion layer to delete.
- delete_head(head_name: str)¶ Deletes the prediction head with the specified name from the model.
- Parameters
head_name (str) – The name of the prediction head to delete.
- eject_prefix_tuning(name: str)¶ Converts the prefix tuning with the given name from the reparameterized form into the flat form.
- Parameters
name (str) – The name of the prefix tuning.
- forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None, head=None, output_adapter_gating_scores=False, output_adapter_fusion_attentions=False, **kwargs)¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
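A sketch of a full forward pass through the model and a specific head, reusing the `model`, adapter, and head from the earlier snippets:

```python
import torch
from transformers import DebertaV2Tokenizer

tokenizer = DebertaV2Tokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
inputs = tokenizer("AdapterHub is great!", return_tensors="pt")

with torch.no_grad():
    # Route the output through the "sst-2" head added above.
    outputs = model(**inputs, head="sst-2")
print(outputs.logits.shape)  # torch.Size([1, 2])
```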
- forward_context(context: transformers.adapters.context.ForwardContext, *args, **kwargs)¶ This method is called by the ForwardContext at the beginning of the forward pass.
- forward_head(all_outputs, head_name=None, cls_output=None, attention_mask=None, return_dict=False, **kwargs)¶ The forward pass through a prediction head configuration. There are three ways to specify the used prediction head configuration (in order of priority):
- If a head_name is passed, the head with the given name is used.
- If the forward call is executed within an AdapterSetup context, the head configuration is read from the context.
- If the active_head property is set, the head configuration is read from there.
- Parameters
all_outputs (dict) – The outputs of the base model.
head_name (str, optional) – The name of the prediction head to use. If None, the active head is used.
cls_output (torch.Tensor, optional) – The classification output of the model.
attention_mask (torch.Tensor, optional) – The attention mask of the model.
return_dict (bool) – Whether or not to return a ModelOutput instead of a plain tuple.
**kwargs – Additional keyword arguments passed to the forward pass of the head.
- freeze_model(freeze=True)¶ Freezes all weights of the model.
- get_adapter(name)¶ If self.base_model is self, the class must inherit from a class that implements this method, to preclude infinite recursion.
- get_labels(head_name=None)¶ Returns the labels the given head is assigning/predicting.
- Parameters
head_name (str, optional) – The name of the head whose labels should be returned. Default is None. If the name is None, the labels of the active head are returned.
Returns: labels
- get_labels_dict(head_name=None)¶ Returns the id2label dict for the given head.
- Parameters
head_name (str, optional) – The name of the head whose labels should be returned. Default is None. If the name is None, the labels of the active head are returned.
Returns: id2label
- iter_layers() → Iterable[Tuple[int, torch.nn.modules.module.Module]]¶ Iterates over all layers of the model.
- load_adapter(adapter_name_or_path: str, config: Union[dict, str] = None, version: str = None, model_name: str = None, load_as: str = None, source: str = None, with_head: bool = True, custom_weights_loaders: Optional[List[transformers.adapters.loading.WeightsLoader]] = None, leave_out: Optional[List[int]] = None, id2label=None, set_active: bool = False, **kwargs) → str¶ Loads a pre-trained pytorch adapter module from the local file system or a remote location.
- Parameters
adapter_name_or_path (str) –
Can be either:
- the identifier of a pre-trained task adapter to be loaded from Adapter Hub
- a path to a directory containing adapter weights saved using model.save_adapter()
- a URL pointing to a zip folder containing a saved adapter module
config (dict or str, optional) – The requested configuration of the adapter. If not specified, will be either the default adapter config for the requested adapter (if specified) or the global default adapter config.
version (str, optional) – The version of the adapter to be loaded.
model_name (str, optional) – The string identifier of the pre-trained model.
load_as (str, optional) – Load the adapter using this name. By default, the name with which the adapter was saved will be used.
source (str, optional) –
Identifier of the source(s) from where to load the adapter. Can be:
- "ah" (default): search on AdapterHub
- "hf": search on the HuggingFace Model Hub
- None: search on all sources
leave_out (List[int], optional) – Dynamically drop adapter modules in the specified Transformer layers when loading the adapter.
set_active (bool, optional) – Set the loaded adapter to be the active one. By default (False), the adapter is loaded but not activated.
- Returns
The name with which the adapter was added to the model.
- Return type
str
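A sketch of loading an adapter from the Hub; the identifier below follows the AdapterHub naming scheme but is used here purely as an illustration:

```python
# Load a pre-trained task adapter and activate it under a local name.
adapter_name = model.load_adapter(
    "sentiment/sst-2@ukp",  # illustrative AdapterHub identifier
    load_as="sst-2-loaded",
    set_active=True,
)
print(adapter_name)  # "sst-2-loaded"
```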
- load_adapter_fusion(adapter_fusion_name_or_path: str, load_as: str = None, custom_weights_loaders: Optional[List[transformers.adapters.loading.WeightsLoader]] = None, set_active: bool = False, with_head: bool = True, **kwargs) → str¶ Loads a pre-trained AdapterFusion layer from the local file system.
- Parameters
adapter_fusion_name_or_path (str) – a path to a directory containing AdapterFusion weights saved using model.save_adapter_fusion().
load_as (str, optional) – Load the AdapterFusion using this name. By default, the name with which the AdapterFusion layer was saved will be used.
set_active (bool, optional) – Activate the loaded AdapterFusion. By default (False), the AdapterFusion is loaded but not activated.
- Returns
The name with which the AdapterFusion was added to the model.
- Return type
str
- merge_adapter(name: str)¶ Merges the weights of the given LoRA module with the Transformer weights as described in the paper.
- Parameters
name (str) – LoRA module to merge.
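A sketch of merging a LoRA adapter for inference without the extra LoRA matrix multiplications; the adapter name is hypothetical, while "lora" is one of the library's pre-defined configuration strings (see also reset_adapter() below):

```python
# Add a LoRA module, then fold its weights into the Transformer weights.
model.add_adapter("lora_sst-2", config="lora")
model.merge_adapter("lora_sst-2")
# ... run inference with no LoRA overhead ...
model.reset_adapter()  # undo the merge, restoring the separate LoRA weights
```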
- push_adapter_to_hub(repo_name: str, adapter_name: str, organization: Optional[str] = None, adapterhub_tag: Optional[str] = None, datasets_tag: Optional[str] = None, local_path: Optional[str] = None, commit_message: Optional[str] = None, private: Optional[bool] = None, use_auth_token: Union[bool, str] = True, overwrite_adapter_card: bool = False, create_pr: bool = False, adapter_card_kwargs: Optional[dict] = None)¶ Upload an adapter to HuggingFace’s Model Hub.
- Parameters
repo_name (str) – The name of the repository on the model hub to upload to.
adapter_name (str) – The name of the adapter to be uploaded.
organization (str, optional) – Organization in which to push the adapter (you must be a member of this organization). Defaults to None.
adapterhub_tag (str, optional) – Tag of the format <task>/<subtask> for categorization on https://adapterhub.ml/explore/. See https://docs.adapterhub.ml/contributing.html#add-a-new-task-or-subtask for more. If not specified, datasets_tag must be given in case a new adapter card is generated. Defaults to None.
datasets_tag (str, optional) – Dataset identifier from https://huggingface.co/datasets. If not specified, adapterhub_tag must be given in case a new adapter card is generated. Defaults to None.
local_path (str, optional) – Local path used as clone directory of the adapter repository. If not specified, will create a temporary directory. Defaults to None.
commit_message (str, optional) – Message to commit while pushing. Will default to "add config", "add tokenizer" or "add model" depending on the type of the class.
private (bool, optional) – Whether or not the repository created should be private (requires a paying subscription).
use_auth_token (bool or str, optional) – The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running transformers-cli login (stored in huggingface). Defaults to True.
overwrite_adapter_card (bool, optional) – Overwrite an existing adapter card with a newly generated one. If set to False, will only generate an adapter card if none exists. Defaults to False.
create_pr (bool, optional) – Whether or not to create a PR with the uploaded files or directly commit.
- Returns
The url of the adapter repository on the model hub.
- Return type
str
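A sketch of an upload call; the repository name is hypothetical, and a valid Hub login (e.g. via transformers-cli login) is assumed:

```python
url = model.push_adapter_to_hub(
    "deberta-v2-xlarge-pfeiffer-sst-2",  # hypothetical repo name
    "sst-2",                             # local adapter to upload
    adapterhub_tag="sentiment/sst-2",    # categorization on adapterhub.ml
    datasets_tag="sst2",                 # dataset identifier on the HF Hub
)
```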
- reset_adapter()¶ Resets weights of a LoRA module merged using model.merge_adapter(name).
- save_adapter(save_directory: str, adapter_name: str, with_head: bool = True, meta_dict: dict = None, custom_weights_loaders: Optional[List[transformers.adapters.loading.WeightsLoader]] = None)¶ Saves an adapter and its configuration file to a directory so that it can be shared or reloaded using load_adapter().
- Parameters
save_directory (str) – Path to a directory where the adapter should be saved.
adapter_name (str) – Name of the adapter to be saved.
- Raises
ValueError – If the given adapter name is invalid.
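A sketch of a save/reload round trip; the paths and names are illustrative:

```python
# Save the adapter together with its matching prediction head ...
model.save_adapter("./checkpoints/sst-2", "sst-2", with_head=True)
# ... and reload it later, possibly under a different name.
model.load_adapter("./checkpoints/sst-2", load_as="sst-2-reloaded")
```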
- save_adapter_fusion(save_directory: str, adapter_names: Union[transformers.adapters.composition.Fuse, list, str], meta_dict: dict = None, custom_weights_loaders: Optional[List[transformers.adapters.loading.WeightsLoader]] = None, with_head: Union[bool, str] = False)¶ Saves an AdapterFusion layer and its configuration file to a directory so that it can be shared or reloaded using load_adapter_fusion().
- Parameters
save_directory (str) – Path to a directory where the AdapterFusion should be saved.
adapter_names (Union[Fuse, list, str]) – AdapterFusion to be saved.
with_head (Union[bool, str]) – If True, will save a head with the same name as the AdapterFusionLayer. If a string, this will be used as the name of the head to be saved.
- Raises
ValueError – If the given AdapterFusion name is invalid.
- save_all_adapter_fusions(save_directory: str, meta_dict: dict = None, custom_weights_loaders: Optional[List[transformers.adapters.loading.WeightsLoader]] = None)¶ Saves all AdapterFusion layers of this model together with their configuration to subfolders of the given location.
- Parameters
save_directory (str) – Path to a directory where the AdapterFusion layers should be saved.
- save_all_adapters(save_directory: str, with_head: bool = True, meta_dict: dict = None, custom_weights_loaders: Optional[List[transformers.adapters.loading.WeightsLoader]] = None)¶ Saves all adapters of this model together with their configuration to subfolders of the given location.
- Parameters
save_directory (str) – Path to a directory where the adapters should be saved.
- set_active_adapters(adapter_setup: Union[list, transformers.adapters.composition.AdapterCompositionBlock], skip_layers: Optional[List[int]] = None)¶ Sets the adapter modules to be used by default in every forward pass. This setting can be overridden by passing the adapter_names parameter in the forward() pass. If no adapter with the given name is found, no module of the respective type will be activated. In case the calling model class supports named prediction heads, this method will attempt to activate a prediction head with the name of the last adapter in the list of passed adapter names.
- Parameters
adapter_setup (list) – The list of adapters to be activated by default. Can be a fusion or stacking configuration.
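A sketch of activating a single adapter or a composition block; the names are illustrative, and Stack is one of several composition blocks provided by the library:

```python
from transformers.adapters.composition import Stack

# Activate a single adapter by name ...
model.set_active_adapters("sst-2")
# ... or a composition, e.g. stacking two adapters on top of each other.
model.set_active_adapters(Stack("mnli", "sst-2"))
```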
- tie_weights()¶ Tie the weights between the input embeddings and the output embeddings.
If the torchscript flag is set in the configuration, TorchScript can’t handle parameter sharing, so we clone the weights instead.
- train_adapter(adapter_setup: Union[list, transformers.adapters.composition.AdapterCompositionBlock], train_embeddings=False)¶ Sets the model into the mode for training the given adapters. If self.base_model is self, the class must inherit from a class that implements this method, to preclude infinite recursion.
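A sketch of putting the model into adapter training mode, continuing the earlier snippets; the pre-trained weights are frozen and only the named adapter (and its newly initialized head) remains trainable:

```python
model.train_adapter("sst-2")

# Verify: only adapter/head parameters require gradients now.
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(len(trainable))
```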
- train_adapter_fusion(adapter_setup: Union[list, transformers.adapters.composition.AdapterCompositionBlock], unfreeze_adapters=False)¶ Sets the model into the mode for training adapter fusion, determined by a list of adapter names. If self.base_model is self, the class must inherit from a class that implements this method, to preclude infinite recursion.
- train_fusion(adapter_setup: Union[list, transformers.adapters.composition.AdapterCompositionBlock], unfreeze_adapters=False)¶ Sets the model into the mode for training adapter fusion, determined by a list of adapter names.