CLIP

Note

Adapter implementation notes:
  • CLIP consists of two separate Transformer encoder models: a ViT-style Transformer for visual features and a language model for textual features. Both encoders can be fitted with adapters. As usual, the leave_out parameter can be used to specify the layers in which adapters should be added. For CLIP, layer IDs are counted globally across both encoders, starting from the text encoder. E.g., for a CLIP model with 12 layers in each Transformer encoder, the text encoder will have IDs 0-11 and the vision encoder will have IDs 12-23 (see the sketch after this note).

  • As CLIP does not come with pre-supported task-specific prediction heads, there is currently no CLIPAdapterModel class. Use CLIPModel instead.
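
The following is a minimal sketch of the global layer-ID convention described above, assuming the adapter-transformers package; the adapter name "text_only" is a hypothetical choice.

```python
from transformers import CLIPModel
from transformers.adapters import AdapterConfig

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")

# Layer IDs are counted globally: 0-11 = text encoder, 12-23 = vision encoder.
# Leaving out IDs 12-23 restricts the adapter to the text encoder.
config = AdapterConfig.load("pfeiffer", leave_out=list(range(12, 24)))
model.add_adapter("text_only", config=config)
model.set_active_adapters("text_only")
```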

The CLIP model was proposed in Learning Transferable Visual Models From Natural Language Supervision by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. CLIP (Contrastive Language-Image Pre-Training) is a neural network trained on a variety of (image, text) pairs. It can be instructed in natural language to predict the most relevant text snippet, given an image, without directly optimizing for the task, similarly to the zero-shot capabilities of GPT-2 and GPT-3.

The abstract from the paper is the following:

State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at this https URL.

CLIPTextModel

class transformers.CLIPTextModel(config: transformers.models.clip.configuration_clip.CLIPTextConfig)

The text model from CLIP without any head or projection on top. This model inherits from [PreTrainedModel]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config ([CLIPTextConfig]) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [~PreTrainedModel.from_pretrained] method to load the model weights.

adapter_summary(as_dict=False) → Union[str, dict]

Returns a string summary of all adapters currently added to the model. Each entry in the summary table has the following attributes:

  • name: the name of the adapter

  • architecture: the architectural base of the adapter

  • #param: the number of parameters of the adapter

  • %param: the number of parameters of the adapter relative to the full model

  • active: whether the adapter is active

  • train: whether the adapter weights are enabled for training
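
As a brief usage sketch (assuming at least one adapter has already been added to model):

```python
# Human-readable summary table of all adapters.
print(model.adapter_summary())

# The same information as a machine-readable dictionary.
summary = model.adapter_summary(as_dict=True)
```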

add_adapter(adapter_name: str, config=None, overwrite_ok: bool = False, set_active: bool = False)

Adds a new adapter module of the specified type to the model.

Parameters
  • adapter_name (str) – The name of the adapter module to be added.

  • config (str or dict or AdapterConfigBase, optional) –

    The adapter configuration, can be either:

    • the string identifier of a pre-defined configuration dictionary

    • a configuration dictionary specifying the full config

    • if not given, the default configuration for this adapter type will be used

  • overwrite_ok (bool, optional) – Overwrite an adapter with the same name if it exists. By default (False), an exception is thrown.

  • set_active (bool, optional) – Set the adapter to be the active one. By default (False), the adapter is added but not activated.
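
A hedged sketch of the configuration forms listed above; the adapter names and the reduction_factor override are illustrative.

```python
from transformers.adapters import AdapterConfig

# Config given as a pre-defined string identifier.
model.add_adapter("adapter_a", config="pfeiffer")

# Config given as a full configuration object, activated on creation.
config = AdapterConfig.load("houlsby", reduction_factor=8)
model.add_adapter("adapter_b", config=config, set_active=True)
```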

add_adapter_fusion(adapter_names: Union[transformers.adapters.composition.Fuse, list, str], config=None, overwrite_ok: bool = False, set_active: bool = False)

Adds AdapterFusion to the model with all the necessary configurations and weight initializations.

Parameters
  • adapter_names (Fuse or list or str) –

    AdapterFusion layer to add. Can be either:

    • a Fuse composition block

    • a list of adapter names to fuse

    • a comma-separated string of adapter names to fuse

  • config (str or dict) –

    adapter fusion configuration, can be either:

    • a string identifying a pre-defined adapter fusion configuration

    • a dictionary representing the adapter fusion configuration

    • the path to a file containing the adapter fusion configuration

  • overwrite_ok (bool, optional) – Overwrite an AdapterFusion layer with the same name if it exists. By default (False), an exception is thrown.

  • set_active (bool, optional) – Activate the added AdapterFusion. By default (False), the AdapterFusion is added but not activated.
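
The three accepted forms of adapter_names are equivalent; a sketch assuming adapters "a" and "b" have already been added:

```python
from transformers.adapters.composition import Fuse

model.add_adapter_fusion(Fuse("a", "b"))                 # Fuse composition block
model.add_adapter_fusion(["a", "b"], overwrite_ok=True)  # list of adapter names
model.add_adapter_fusion("a,b", overwrite_ok=True)       # comma-separated string
```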

add_embeddings(name, tokenizer, reference_embedding=None, reference_tokenizer=None, embedding_dim=None)

Add a new embedding to the model. If a reference embedding and reference tokenizer are provided, tokens present in both tokenizers are initialized from their corresponding values in reference_embedding.

Parameters
  • name – the name of the embedding

  • tokenizer – the tokenizer determining the vocab of the embedding

  • reference_embedding – the reference embedding to use for initializing the embeddings of tokens present in the newly created embedding

  • reference_tokenizer – the tokenizer providing the vocab for the reference embedding

  • embedding_dim – the dimension of the embeddings (if None the embedding_size, or if this doesn’t exist the hidden_size, from the config is used)
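
A sketch of initializing a new embedding from the pre-trained one; the tokenizer path is hypothetical, and "default" is assumed to be the name of the model's original embedding.

```python
from transformers import AutoTokenizer

new_tokenizer = AutoTokenizer.from_pretrained("path/to/domain-tokenizer")  # hypothetical
ref_tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32")

# Tokens present in both vocabularies are copied from the reference embedding.
model.add_embeddings(
    "domain",
    new_tokenizer,
    reference_embedding="default",  # assumed name of the original embedding
    reference_tokenizer=ref_tokenizer,
)
model.set_active_embeddings("domain")
```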

add_invertible_adapter(adapter_name: str)

Adds an invertible adapter module for the adapter with the given name. If the given adapter does not specify an invertible adapter config, this method does nothing.

Parameters

adapter_name (str) – The name of the adapter for which to add an invertible adapter module.

apply_to_adapter_layers(fn)

Applies a function to all adapter layers of the model.

config_class

alias of transformers.models.clip.configuration_clip.CLIPTextConfig

delete_adapter(adapter_name: str)

Deletes the adapter with the specified name from the model.

Parameters

adapter_name (str) – The name of the adapter.

delete_adapter_fusion(adapter_names: Union[transformers.adapters.composition.Fuse, list, str])

Deletes the AdapterFusion layer of the specified adapters.

Parameters

adapter_names (Union[Fuse, list, str]) – AdapterFusion layer to delete.

delete_embeddings(name)

Deletes the embedding with the given name

Parameters

name – The name of the embedding that should be deleted

eject_prefix_tuning(name: str)

Converts the prefix tuning with the given name from the reparameterized form into the flat form.

Parameters

name (str) – The name of the prefix tuning.

forward(input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None) → Union[Tuple, transformers.modeling_outputs.BaseModelOutputWithPooling]

The [CLIPTextModel] forward method overrides the __call__ special method.

<Tip>

Although the recipe for forward pass needs to be defined within this function, one should call the [Module] instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

</Tip>

Parameters
  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.

    Indices can be obtained using [AutoTokenizer]. See [PreTrainedTokenizer.encode] and [PreTrainedTokenizer.__call__] for details.

    [What are input IDs?](../glossary#input-ids)

  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,

    • 0 for tokens that are masked.

    [What are attention masks?](../glossary#attention-mask)

  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) –

    Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

    [What are position IDs?](../glossary#position-ids)

  • output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.

  • output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

  • return_dict (bool, optional) – Whether or not to return a [~utils.ModelOutput] instead of a plain tuple.

  • Returns

    [transformers.modeling_outputs.BaseModelOutputWithPooling] or tuple(torch.FloatTensor): A [transformers.modeling_outputs.BaseModelOutputWithPooling] or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration ([CLIPTextConfig]) and inputs.

    • last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) – Sequence of hidden-states at the output of the last layer of the model.

    • pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) – Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.

    • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

      Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

    • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

      Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Examples

```python
>>> from transformers import AutoTokenizer, CLIPTextModel

>>> model = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")
>>> tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32")

>>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")

>>> outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state
>>> pooled_output = outputs.pooler_output  # pooled (EOS token) states
```

forward_context(context: transformers.adapters.context.ForwardContext, *args, **kwargs)

This method is called by the ForwardContext at the beginning of the forward pass.

freeze_model(freeze=True)

Freezes all weights of the model.

get_adapter(name) → dict

Returns a dictionary with all weights of the adapter with the specified name.

Parameters

name (str) – The adapter name.

Returns

A nested dictionary containing the weights of the adapter. The dictionary is structured as follows: {<layer id>: {<module location>: <nn.Module>}}. <layer id> = -1 indicates global/shared weights.

Return type

dict

get_input_embeddings() → torch.nn.modules.module.Module

Returns the model’s input embeddings.

Returns

A torch module mapping vocabulary to hidden states.

Return type

nn.Module

iter_layers() → Iterable[Tuple[int, torch.nn.modules.module.Module]]

Iterates over all layers of the model.

This abstract method has to be implemented by every implementing model.

load_adapter(adapter_name_or_path: str, config: Union[dict, str] = None, version: str = None, model_name: str = None, load_as: str = None, source: str = None, custom_weights_loaders: Optional[List[transformers.adapters.loading.WeightsLoader]] = None, leave_out: Optional[List[int]] = None, id2label=None, set_active: bool = False, **kwargs) → str

Loads a pre-trained pytorch adapter module from the local file system or a remote location.

Parameters
  • adapter_name_or_path (str) –

    can be either:

    • the identifier of a pre-trained task adapter to be loaded from Adapter Hub

    • a path to a directory containing adapter weights saved using model.save_adapter()

    • a URL pointing to a zip folder containing a saved adapter module

  • config (dict or str, optional) – The requested configuration of the adapter. If not specified, will be either: the default adapter config for the requested adapter, if specified; otherwise, the global default adapter config.

  • version (str, optional) – The version of the adapter to be loaded.

  • model_name (str, optional) – The string identifier of the pre-trained model.

  • load_as (str, optional) – Load the adapter using this name. By default, the name with which the adapter was saved will be used.

  • source (str, optional) –

    Identifier of the source(s) from where to load the adapter. Can be:

    • "ah" (default): search on AdapterHub.

    • "hf": search on the HuggingFace model hub.

    • None: search on all sources.

  • leave_out (List[int], optional) – Dynamically drop adapter modules in the specified Transformer layers when loading the adapter.

  • set_active (bool, optional) – Set the loaded adapter to be the active one. By default (False), the adapter is loaded but not activated.

Returns

The name with which the adapter was added to the model.

Return type

str
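
A hedged sketch of the two most common call forms; the local path and Hub identifier are hypothetical.

```python
# From a local directory previously written by model.save_adapter().
name = model.load_adapter("./checkpoints/my_adapter", load_as="my_adapter")

# From the HuggingFace model hub, activated right away.
name = model.load_adapter("someuser/clip-adapter", source="hf", set_active=True)
model.set_active_adapters(name)
```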

load_adapter_fusion(adapter_fusion_name_or_path: str, load_as: str = None, custom_weights_loaders: Optional[List[transformers.adapters.loading.WeightsLoader]] = None, set_active: bool = False, **kwargs) → str

Loads a pre-trained AdapterFusion layer from the local file system.

Parameters
  • adapter_fusion_name_or_path (str) – a path to a directory containing AdapterFusion weights saved using model.save_adapter_fusion().

  • load_as (str, optional) – Load the AdapterFusion using this name. By default, the name with which the AdapterFusion layer was saved will be used.

  • set_active (bool, optional) – Activate the loaded AdapterFusion. By default (False), the AdapterFusion is loaded but not activated.

Returns

The name with which the AdapterFusion was added to the model.

Return type

str

load_embeddings(path: str, name: str)

Load a saved embedding from the given path. If the embedding was saved together with a tokenizer, the tokenizer is returned as well.

Parameters
  • path – the path to the saved embedding

  • name – the name the embedding should be loaded as

Returns: the tokenizer if one was saved with the embedding, otherwise None

merge_adapter(name: str)

Merges the weights of the given LoRA module with the Transformer weights as described in the LoRA paper.

Parameters

name (str) – LoRA module to merge.
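
For LoRA, merging folds the low-rank update into the base weights so that inference incurs no adapter overhead; a sketch assuming a LoRA module named "my_lora" has been added.

```python
model.merge_adapter("my_lora")  # fold the low-rank update into the base weights
# ... run inference on the merged model ...
model.reset_adapter()           # restore the original, unmerged weights
```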

push_adapter_to_hub(repo_name: str, adapter_name: str, organization: Optional[str] = None, adapterhub_tag: Optional[str] = None, datasets_tag: Optional[str] = None, local_path: Optional[str] = None, commit_message: Optional[str] = None, private: Optional[bool] = None, use_auth_token: Union[bool, str] = True, overwrite_adapter_card: bool = False, create_pr: bool = False, adapter_card_kwargs: Optional[dict] = None)

Upload an adapter to HuggingFace’s Model Hub.

Parameters
  • repo_name (str) – The name of the repository on the model hub to upload to.

  • adapter_name (str) – The name of the adapter to be uploaded.

  • organization (str, optional) – Organization in which to push the adapter (you must be a member of this organization). Defaults to None.

  • adapterhub_tag (str, optional) – Tag of the format <task>/<subtask> for categorization on https://adapterhub.ml/explore/. See https://docs.adapterhub.ml/contributing.html#add-a-new-task-or-subtask for more. If not specified, datasets_tag must be given in case a new adapter card is generated. Defaults to None.

  • datasets_tag (str, optional) – Dataset identifier from https://huggingface.co/datasets. If not specified, adapterhub_tag must be given in case a new adapter card is generated. Defaults to None.

  • local_path (str, optional) – Local path used as clone directory of the adapter repository. If not specified, will create a temporary directory. Defaults to None.

  • commit_message (str, optional) – Message to commit while pushing. Will default to "add config", "add tokenizer" or "add model" depending on the type of the class.

  • private (bool, optional) – Whether or not the repository created should be private (requires a paying subscription).

  • use_auth_token (bool or str, optional) – The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running transformers-cli login (stored in ~/.huggingface). Defaults to True.

  • overwrite_adapter_card (bool, optional) – Overwrite an existing adapter card with a newly generated one. If set to False, will only generate an adapter card if none exists. Defaults to False.

  • create_pr (bool, optional) – Whether or not to create a PR with the uploaded files or directly commit.

Returns

The url of the adapter repository on the model hub.

Return type

str
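
A hedged sketch of an upload; the repository name, adapter name, and dataset tag are hypothetical, and a valid Hub token is assumed.

```python
url = model.push_adapter_to_hub(
    "clip-text-adapter",          # hypothetical repository name on the Hub
    "my_adapter",                 # adapter to upload
    datasets_tag="imagenet-1k",   # hypothetical dataset identifier for the adapter card
    commit_message="Add CLIP adapter",
)
```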

reset_adapter()

Resets weights of a LoRA module merged using model.merge_adapter(name).

save_adapter(save_directory: str, adapter_name: str, meta_dict: dict = None, custom_weights_loaders: Optional[List[transformers.adapters.loading.WeightsLoader]] = None)

Saves an adapter and its configuration file to a directory so that it can be shared or reloaded using load_adapter().

Parameters
  • save_directory (str) – Path to a directory where the adapter should be saved.

  • adapter_name (str) – Name of the adapter to be saved.

Raises

ValueError – If the given adapter name is invalid.
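
save_adapter() and load_adapter() form a round trip; a sketch with a hypothetical directory:

```python
model.save_adapter("./saved/my_adapter", "my_adapter")
# ... later, possibly in a fresh process ...
model.load_adapter("./saved/my_adapter", load_as="my_adapter", set_active=True)
```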

save_adapter_fusion(save_directory: str, adapter_names: Union[transformers.adapters.composition.Fuse, list, str], meta_dict: dict = None, custom_weights_loaders: Optional[List[transformers.adapters.loading.WeightsLoader]] = None)

Saves an AdapterFusion layer and its configuration file to a directory so that it can be shared or reloaded using load_adapter_fusion().

Parameters
  • save_directory (str) – Path to a directory where the AdapterFusion should be saved.

  • adapter_names (Union[Fuse, list, str]) – AdapterFusion to be saved.

Raises

ValueError – If the given AdapterFusion name is invalid.

save_all_adapter_fusions(save_directory: str, meta_dict: dict = None, custom_weights_loaders: Optional[List[transformers.adapters.loading.WeightsLoader]] = None)

Saves all AdapterFusion layers of this model together with their configuration to subfolders of the given location.

Parameters

save_directory (str) – Path to a directory where the AdapterFusion layers should be saved.

save_all_adapters(save_directory: str, meta_dict: dict = None, custom_weights_loaders: Optional[List[transformers.adapters.loading.WeightsLoader]] = None)

Saves all adapters of this model together with their configuration to subfolders of the given location.

Parameters

save_directory (str) – Path to a directory where the adapters should be saved.

save_embeddings(path, name, tokenizer=None)

Saves the embedding with the given name. If a tokenizer is passed as well the tokenizer is saved together with the embedding.

Parameters
  • path – The path where the embedding should be saved

  • name – The name of the embedding that should be saved

  • tokenizer – optionally a tokenizer to save with the embedding (default is None)

set_active_adapters(adapter_setup: Union[list, transformers.adapters.composition.AdapterCompositionBlock], skip_layers: Optional[List[int]] = None)

Sets the adapter modules to be used by default in every forward pass. If no adapter with the given name is found, no module of the respective type will be activated.

Parameters

adapter_setup (list) – The list of adapters to be activated by default. Can be a fusion or stacking configuration.
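
The setup may be a single adapter name or a composition block; a sketch assuming adapters "a" and "b" exist (the Fuse variant additionally requires a matching AdapterFusion layer):

```python
from transformers.adapters.composition import Fuse, Stack

model.set_active_adapters("a")              # single adapter
model.set_active_adapters(Stack("a", "b"))  # pass through "a", then "b", in every layer
model.set_active_adapters(Fuse("a", "b"))   # fuse "a" and "b" via AdapterFusion
```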

set_active_embeddings(name)

Sets the active embedding for the forward pass of the model

Parameters

name – The name of the embedding that should be used

set_input_embeddings(value)

Set model’s input embeddings.

Parameters

value (nn.Module) – A module mapping vocabulary to hidden states.

train_adapter(adapter_setup: Union[list, transformers.adapters.composition.AdapterCompositionBlock], train_embeddings=False)

Sets the model in training mode for the given adapters, freezing all other model weights.
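
A sketch of a typical training setup, assuming an adapter named "a" has been added; base-model weights are frozen so that only adapter parameters receive gradients.

```python
model.train_adapter("a")

# Sanity check: only adapter parameters should be trainable now.
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
```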

train_adapter_fusion(adapter_setup: Union[list, transformers.adapters.composition.AdapterCompositionBlock], unfreeze_adapters=False)

Sets the model in training mode for the AdapterFusion layer determined by a list of adapter names.

train_fusion(adapter_setup: Union[list, transformers.adapters.composition.AdapterCompositionBlock], unfreeze_adapters=False)

Sets the model in training mode for the AdapterFusion layer determined by a list of adapter names.

CLIPVisionModel

class transformers.CLIPVisionModel(config: transformers.models.clip.configuration_clip.CLIPVisionConfig)

The vision model from CLIP without any head or projection on top. This model inherits from [PreTrainedModel]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config ([CLIPVisionConfig]) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [~PreTrainedModel.from_pretrained] method to load the model weights.

adapter_summary(as_dict=False) → Union[str, dict]

Returns a string summary of all adapters currently added to the model. Each entry in the summary table has the following attributes:

  • name: the name of the adapter

  • architecture: the architectural base of the adapter

  • #param: the number of parameters of the adapter

  • %param: the number of parameters of the adapter relative to the full model

  • active: whether the adapter is active

  • train: whether the adapter weights are enabled for training

add_adapter(adapter_name: str, config=None, overwrite_ok: bool = False, set_active: bool = False)

Adds a new adapter module of the specified type to the model.

Parameters
  • adapter_name (str) – The name of the adapter module to be added.

  • config (str or dict or AdapterConfigBase, optional) –

    The adapter configuration, can be either:

    • the string identifier of a pre-defined configuration dictionary

    • a configuration dictionary specifying the full config

    • if not given, the default configuration for this adapter type will be used

  • overwrite_ok (bool, optional) – Overwrite an adapter with the same name if it exists. By default (False), an exception is thrown.

  • set_active (bool, optional) – Set the adapter to be the active one. By default (False), the adapter is added but not activated.

add_adapter_fusion(adapter_names: Union[transformers.adapters.composition.Fuse, list, str], config=None, overwrite_ok: bool = False, set_active: bool = False)

Adds AdapterFusion to the model with all the necessary configurations and weight initializations.

Parameters
  • adapter_names (Fuse or list or str) –

    AdapterFusion layer to add. Can be either:

    • a Fuse composition block

    • a list of adapter names to fuse

    • a comma-separated string of adapter names to fuse

  • config (str or dict) –

    adapter fusion configuration, can be either:

    • a string identifying a pre-defined adapter fusion configuration

    • a dictionary representing the adapter fusion configuration

    • the path to a file containing the adapter fusion configuration

  • overwrite_ok (bool, optional) – Overwrite an AdapterFusion layer with the same name if it exists. By default (False), an exception is thrown.

  • set_active (bool, optional) – Activate the added AdapterFusion. By default (False), the AdapterFusion is added but not activated.

apply_to_adapter_layers(fn)

Applies a function to all adapter layers of the model.

config_class

alias of transformers.models.clip.configuration_clip.CLIPVisionConfig

delete_adapter(adapter_name: str)

Deletes the adapter with the specified name from the model.

Parameters

adapter_name (str) – The name of the adapter.

delete_adapter_fusion(adapter_names: Union[transformers.adapters.composition.Fuse, list, str])

Deletes the AdapterFusion layer of the specified adapters.

Parameters

adapter_names (Union[Fuse, list, str]) – AdapterFusion layer to delete.

eject_prefix_tuning(name: str)

Converts the prefix tuning with the given name from the reparameterized form into the flat form.

Parameters

name (str) – The name of the prefix tuning.

forward(pixel_values: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None) → Union[Tuple, transformers.modeling_outputs.BaseModelOutputWithPooling]

The [CLIPVisionModel] forward method overrides the __call__ special method.

<Tip>

Although the recipe for forward pass needs to be defined within this function, one should call the [Module] instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

</Tip>

Parameters
  • pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) – Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using [AutoImageProcessor]. See [CLIPImageProcessor.__call__] for details.

  • output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.

  • output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

  • return_dict (bool, optional) – Whether or not to return a [~utils.ModelOutput] instead of a plain tuple.

  • Returns

    [transformers.modeling_outputs.BaseModelOutputWithPooling] or tuple(torch.FloatTensor): A [transformers.modeling_outputs.BaseModelOutputWithPooling] or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration ([CLIPVisionConfig]) and inputs.

    • last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) – Sequence of hidden-states at the output of the last layer of the model.

    • pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) – Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.

    • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

      Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

    • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

      Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Examples

```python
>>> from PIL import Image
>>> import requests
>>> from transformers import AutoProcessor, CLIPVisionModel

>>> model = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32")
>>> processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> inputs = processor(images=image, return_tensors="pt")

>>> outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state
>>> pooled_output = outputs.pooler_output  # pooled CLS states
```

forward_context(context: transformers.adapters.context.ForwardContext, *args, **kwargs)

This method is called by the ForwardContext at the beginning of the forward pass.

freeze_model(freeze=True)

Freezes all weights of the model.

get_adapter(name) → dict

Returns a dictionary with all weights of the adapter with the specified name.

Parameters

name (str) – The adapter name.

Returns

A nested dictionary containing the weights of the adapter. The dictionary is structured as follows: {<layer id>: {<module location>: <nn.Module>}}. <layer id> = -1 indicates global/shared weights.

Return type

dict

get_input_embeddings() → torch.nn.modules.module.Module

Returns the model’s input embeddings.

Returns

A torch module mapping vocabulary to hidden states.

Return type

nn.Module

iter_layers() → Iterable[Tuple[int, torch.nn.modules.module.Module]]

Iterates over all layers of the model.

This abstract method has to be implemented by every implementing model.

load_adapter(adapter_name_or_path: str, config: Union[dict, str] = None, version: str = None, model_name: str = None, load_as: str = None, source: str = None, custom_weights_loaders: Optional[List[transformers.adapters.loading.WeightsLoader]] = None, leave_out: Optional[List[int]] = None, id2label=None, set_active: bool = False, **kwargs) → str

Loads a pre-trained pytorch adapter module from the local file system or a remote location.

Parameters
  • adapter_name_or_path (str) –

    can be either:

    • the identifier of a pre-trained task adapter to be loaded from Adapter Hub

    • a path to a directory containing adapter weights saved using model.save_adapter()

    • a URL pointing to a zip folder containing a saved adapter module

  • config (dict or str, optional) – The requested configuration of the adapter. If not specified, will be either: the default adapter config for the requested adapter, if specified; otherwise, the global default adapter config.

  • version (str, optional) – The version of the adapter to be loaded.

  • model_name (str, optional) – The string identifier of the pre-trained model.

  • load_as (str, optional) – Load the adapter using this name. By default, the name with which the adapter was saved will be used.

  • source (str, optional) –

    Identifier of the source(s) from where to load the adapter. Can be:

    • "ah" (default): search on AdapterHub.

    • "hf": search on the HuggingFace model hub.

    • None: search on all sources.

  • leave_out (List[int], optional) – Dynamically drop adapter modules in the specified Transformer layers when loading the adapter.

  • set_active (bool, optional) – Set the loaded adapter to be the active one. By default (False), the adapter is loaded but not activated.

Returns

The name with which the adapter was added to the model.

Return type

str

load_adapter_fusion(adapter_fusion_name_or_path: str, load_as: str = None, custom_weights_loaders: Optional[List[transformers.adapters.loading.WeightsLoader]] = None, set_active: bool = False, **kwargs) → str

Loads a pre-trained AdapterFusion layer from the local file system.

Parameters
  • adapter_fusion_name_or_path (str) – a path to a directory containing AdapterFusion weights saved using model.save_adapter_fusion().

  • load_as (str, optional) – Load the AdapterFusion using this name. By default, the name with which the AdapterFusion layer was saved will be used.

  • set_active (bool, optional) – Activate the loaded AdapterFusion. By default (False), the AdapterFusion is loaded but not activated.

Returns

The name with which the AdapterFusion was added to the model.

Return type

str

merge_adapter(name: str)

Merges the weights of the given LoRA module with the Transformer weights as described in the LoRA paper.

Parameters

name (str) – LoRA module to merge.

push_adapter_to_hub(repo_name: str, adapter_name: str, organization: Optional[str] = None, adapterhub_tag: Optional[str] = None, datasets_tag: Optional[str] = None, local_path: Optional[str] = None, commit_message: Optional[str] = None, private: Optional[bool] = None, use_auth_token: Union[bool, str] = True, overwrite_adapter_card: bool = False, create_pr: bool = False, adapter_card_kwargs: Optional[dict] = None)

Upload an adapter to HuggingFace’s Model Hub.

Parameters
  • repo_name (str) – The name of the repository on the model hub to upload to.

  • adapter_name (str) – The name of the adapter to be uploaded.

  • organization (str, optional) – Organization in which to push the adapter (you must be a member of this organization). Defaults to None.

  • adapterhub_tag (str, optional) – Tag of the format <task>/<subtask> for categorization on https://adapterhub.ml/explore/. See https://docs.adapterhub.ml/contributing.html#add-a-new-task-or-subtask for more. If not specified, datasets_tag must be given in case a new adapter card is generated. Defaults to None.

  • datasets_tag (str, optional) – Dataset identifier from https://huggingface.co/datasets. If not specified, adapterhub_tag must be given in case a new adapter card is generated. Defaults to None.

  • local_path (str, optional) – Local path used as clone directory of the adapter repository. If not specified, will create a temporary directory. Defaults to None.

  • commit_message (str, optional) – Message to commit while pushing. Will default to "add config", "add tokenizer" or "add model" depending on the type of the class.

  • private (bool, optional) – Whether or not the repository created should be private (requires a paying subscription).

  • use_auth_token (bool or str, optional) – The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running transformers-cli login (stored in ~/.huggingface). Defaults to True.

  • overwrite_adapter_card (bool, optional) – Overwrite an existing adapter card with a newly generated one. If set to False, will only generate an adapter card if none exists. Defaults to False.

  • create_pr (bool, optional) – Whether or not to create a PR with the uploaded files or directly commit.

Returns

The url of the adapter repository on the model hub.

Return type

str

reset_adapter()

Resets weights of a LoRA module merged using model.merge_adapter(name).

save_adapter(save_directory: str, adapter_name: str, meta_dict: dict = None, custom_weights_loaders: Optional[List[transformers.adapters.loading.WeightsLoader]] = None)

Saves an adapter and its configuration file to a directory so that it can be shared or reloaded using load_adapter().

Parameters
  • save_directory (str) – Path to a directory where the adapter should be saved.

  • adapter_name (str) – Name of the adapter to be saved.

Raises

ValueError – If the given adapter name is invalid.

save_adapter_fusion(save_directory: str, adapter_names: Union[transformers.adapters.composition.Fuse, list, str], meta_dict: dict = None, custom_weights_loaders: Optional[List[transformers.adapters.loading.WeightsLoader]] = None)

Saves an AdapterFusion layer and its configuration file to a directory so that it can be shared or reloaded using load_adapter_fusion().

Parameters
  • save_directory (str) – Path to a directory where the AdapterFusion should be saved.

  • adapter_names (Union[Fuse, list, str]) – AdapterFusion to be saved.

Raises

ValueError – If the given AdapterFusion name is invalid.

save_all_adapter_fusions(save_directory: str, meta_dict: dict = None, custom_weights_loaders: Optional[List[transformers.adapters.loading.WeightsLoader]] = None)

Saves all AdapterFusion layers of this model together with their configuration to subfolders of the given location.

Parameters

save_directory (str) – Path to a directory where the AdapterFusion layers should be saved.

save_all_adapters(save_directory: str, meta_dict: dict = None, custom_weights_loaders: Optional[List[transformers.adapters.loading.WeightsLoader]] = None)

Saves all adapters of this model together with their configuration to subfolders of the given location.

Parameters

save_directory (str) – Path to a directory where the adapters should be saved.

set_active_adapters(adapter_setup: Union[list, transformers.adapters.composition.AdapterCompositionBlock], skip_layers: Optional[List[int]] = None)

Sets the adapter modules to be used by default in every forward pass. If no adapter with the given name is found, no module of the respective type will be activated.

Parameters

adapter_setup (list) – The list of adapters to be activated by default. Can be a fusion or stacking configuration.

train_adapter(adapter_setup: Union[list, transformers.adapters.composition.AdapterCompositionBlock], train_embeddings=False)

Sets the model in training mode for the given adapters, freezing all other model weights.

train_adapter_fusion(adapter_setup: Union[list, transformers.adapters.composition.AdapterCompositionBlock], unfreeze_adapters=False)

Sets the model in training mode for the AdapterFusion layer determined by a list of adapter names.

train_fusion(adapter_setup: Union[list, transformers.adapters.composition.AdapterCompositionBlock], unfreeze_adapters=False)

Sets the model in training mode for the AdapterFusion layer determined by a list of adapter names.

CLIPModel

class transformers.CLIPModel(config: transformers.models.clip.configuration_clip.CLIPConfig)

This model inherits from [PreTrainedModel]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config ([CLIPConfig]) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [~PreTrainedModel.from_pretrained] method to load the model weights.

adapter_summary(as_dict=False) → Union[str, dict]

Returns a string summary of all adapters currently added to the model. Each entry in the summary table has the following attributes:

  • name: the name of the adapter

  • architecture: the architectural base of the adapter

  • #param: the number of parameters of the adapter

  • %param: the number of parameters of the adapter relative to the full model

  • active: whether the adapter is active

  • train: whether the adapter weights are enabled for training

add_adapter(adapter_name: str, config=None, overwrite_ok: bool = False, set_active: bool = False)

Adds a new adapter module of the specified type to the model.

Parameters
  • adapter_name (str) – The name of the adapter module to be added.

  • config (str or dict or AdapterConfigBase, optional) –

    The adapter configuration, can be either:

    • the string identifier of a pre-defined configuration dictionary

    • a configuration dictionary specifying the full config

    • if not given, the default configuration for this adapter type will be used

  • overwrite_ok (bool, optional) – Overwrite an adapter with the same name if it exists. By default (False), an exception is thrown.

  • set_active (bool, optional) – Set the adapter to be the active one. By default (False), the adapter is added but not activated.

add_adapter_fusion(adapter_names: Union[transformers.adapters.composition.Fuse, list, str], config=None, overwrite_ok: bool = False, set_active: bool = False)

Adds AdapterFusion to the model with all the necessary configurations and weight initializations.

Parameters
  • adapter_names (Fuse or list or str) –

    AdapterFusion layer to add. Can be either:

    • a Fuse composition block

    • a list of adapter names to fuse

    • a comma-separated string of adapter names to fuse

  • config (str or dict) –

    adapter fusion configuration, can be either:

    • a string identifying a pre-defined adapter fusion configuration

    • a dictionary representing the adapter fusion configuration

    • the path to a file containing the adapter fusion configuration

  • overwrite_ok (bool, optional) – Overwrite an AdapterFusion layer with the same name if it exists. By default (False), an exception is thrown.

  • set_active (bool, optional) – Activate the added AdapterFusion. By default (False), the AdapterFusion is added but not activated.

add_invertible_adapter(adapter_name: str)

Adds an invertible adapter module for the adapter with the given name. If the given adapter does not specify an invertible adapter config, this method does nothing.

Parameters

adapter_name (str) – The name of the adapter for which to add an invertible adapter module.

apply_to_adapter_layers(fn)

Applies a function to all adapter layers of the model.

config_class

alias of transformers.models.clip.configuration_clip.CLIPConfig

delete_adapter(adapter_name: str)

Deletes the adapter with the specified name from the model.

Parameters

adapter_name (str) – The name of the adapter.

delete_adapter_fusion(adapter_names: Union[transformers.adapters.composition.Fuse, list, str])

Deletes the AdapterFusion layer of the specified adapters.

Parameters

adapter_names (Union[Fuse, list, str]) – AdapterFusion layer to delete.

eject_prefix_tuning(name: str)

Converts the prefix tuning with the given name from the reparameterized form into the flat form.

Parameters

name (str) – The name of the prefix tuning.

forward(input_ids: Optional[torch.LongTensor] = None, pixel_values: Optional[torch.FloatTensor] = None, attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.LongTensor] = None, return_loss: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None) → Union[Tuple, transformers.models.clip.modeling_clip.CLIPOutput]

The [CLIPModel] forward method overrides the __call__ special method.

<Tip>

Although the recipe for forward pass needs to be defined within this function, one should call the [Module] instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

</Tip>

Parameters
  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.

    Indices can be obtained using [AutoTokenizer]. See [PreTrainedTokenizer.encode] and [PreTrainedTokenizer.__call__] for details.

    [What are input IDs?](../glossary#input-ids)

  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,

    • 0 for tokens that are masked.

    [What are attention masks?](../glossary#attention-mask)

  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) –

    Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

    [What are position IDs?](../glossary#position-ids)

  • pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) – Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using [AutoImageProcessor]. See [CLIPImageProcessor.__call__] for details.

  • return_loss (bool, optional) – Whether or not to return the contrastive loss.

  • output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.

  • output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

  • return_dict (bool, optional) – Whether or not to return a [~utils.ModelOutput] instead of a plain tuple.

  • Returns

    [transformers.models.clip.modeling_clip.CLIPOutput] or tuple(torch.FloatTensor): A [transformers.models.clip.modeling_clip.CLIPOutput] or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration ([CLIPConfig]) and inputs.

    • loss (torch.FloatTensor of shape (1,), optional, returned when return_loss is True) – Contrastive loss for image-text similarity.

    • logits_per_image (torch.FloatTensor of shape (image_batch_size, text_batch_size)) – The scaled dot product scores between image_embeds and text_embeds. This represents the image-text similarity scores.

    • logits_per_text (torch.FloatTensor of shape (text_batch_size, image_batch_size)) – The scaled dot product scores between text_embeds and image_embeds. This represents the text-image similarity scores.

    • text_embeds (torch.FloatTensor of shape (batch_size, output_dim)) – The text embeddings obtained by applying the projection layer to the pooled output of [CLIPTextModel].

    • image_embeds (torch.FloatTensor of shape (batch_size, output_dim)) – The image embeddings obtained by applying the projection layer to the pooled output of [CLIPVisionModel].

    • text_model_output (BaseModelOutputWithPooling) – The output of the [CLIPTextModel].

    • vision_model_output (BaseModelOutputWithPooling) – The output of the [CLIPVisionModel].

Examples

```python
>>> from PIL import Image
>>> import requests
>>> from transformers import AutoProcessor, CLIPModel

>>> model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
>>> processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> inputs = processor(
...     text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True
... )

>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image  # this is the image-text similarity score
>>> probs = logits_per_image.softmax(dim=1)  # we can take the softmax to get the label probabilities
```

forward_context(context: transformers.adapters.context.ForwardContext, *args, **kwargs)

This method is called by the ForwardContext at the beginning of the forward pass.

freeze_model(freeze=True)

Freezes all weights of the model.

get_adapter(name) → dict

Returns a dictionary with all weights of the adapter with the specified name.

Parameters

name (str) – The adapter name.

Returns

A nested dictionary containing the weights of the adapter. The dictionary is structured as follows: {<layer id>: {<module location>: <nn.Module>}}. <layer id> = -1 indicates global/shared weights.

Return type

dict

get_image_features(pixel_values: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None) → torch.FloatTensor

The [CLIPModel] get_image_features method overrides the __call__ special method.

<Tip>

Although the recipe for forward pass needs to be defined within this function, one should call the [Module] instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

</Tip>

Parameters
  • pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) – Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using [AutoImageProcessor]. See [CLIPImageProcessor.__call__] for details.

  • output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.

  • output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

  • return_dict (bool, optional) – Whether or not to return a [~utils.ModelOutput] instead of a plain tuple.

  • Returns – image_features (torch.FloatTensor of shape (batch_size, output_dim)) – The image embeddings obtained by applying the projection layer to the pooled output of [CLIPVisionModel].

Examples

```python
>>> from PIL import Image
>>> import requests
>>> from transformers import AutoProcessor, CLIPModel

>>> model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
>>> processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> inputs = processor(images=image, return_tensors="pt")

>>> image_features = model.get_image_features(**inputs)
```

get_text_features(input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None) → torch.FloatTensor

The [CLIPModel] get_text_features method overrides the __call__ special method.

<Tip>

Although the recipe for forward pass needs to be defined within this function, one should call the [Module] instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

</Tip>

Parameters
  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.

    Indices can be obtained using [AutoTokenizer]. See [PreTrainedTokenizer.encode] and [PreTrainedTokenizer.__call__] for details.

    [What are input IDs?](../glossary#input-ids)

  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,

    • 0 for tokens that are masked.

    [What are attention masks?](../glossary#attention-mask)

  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) –

    Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

    [What are position IDs?](../glossary#position-ids)

  • output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.

  • output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

  • return_dict (bool, optional) – Whether or not to return a [~utils.ModelOutput] instead of a plain tuple.

  • Returns – text_features (torch.FloatTensor of shape (batch_size, output_dim)) – The text embeddings obtained by applying the projection layer to the pooled output of [CLIPTextModel].

Examples

```python
>>> from transformers import AutoTokenizer, CLIPModel

>>> model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
>>> tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32")

>>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
>>> text_features = model.get_text_features(**inputs)
```

iter_layers() → Iterable[Tuple[int, torch.nn.modules.module.Module]]

Iterates over all layers of the model.

This abstract method has to be implemented by every implementing model.

load_adapter(adapter_name_or_path: str, config: Union[dict, str] = None, version: str = None, model_name: str = None, load_as: str = None, source: str = None, custom_weights_loaders: Optional[List[transformers.adapters.loading.WeightsLoader]] = None, leave_out: Optional[List[int]] = None, id2label=None, set_active: bool = False, **kwargs) → str

Loads a pre-trained pytorch adapter module from the local file system or a remote location.

Parameters
  • adapter_name_or_path (str) –

    can be either:

    • the identifier of a pre-trained task adapter to be loaded from Adapter Hub

    • a path to a directory containing adapter weights saved using model.save_adapter()

    • a URL pointing to a zip folder containing a saved adapter module

  • config (dict or str, optional) – The requested configuration of the adapter. If not specified, will be either: the default adapter config for the requested adapter, if specified; otherwise, the global default adapter config.

  • version (str, optional) – The version of the adapter to be loaded.

  • model_name (str, optional) – The string identifier of the pre-trained model.

  • load_as (str, optional) – Load the adapter using this name. By default, the name with which the adapter was saved will be used.

  • source (str, optional) –

    Identifier of the source(s) from where to load the adapter. Can be:

    • "ah" (default): search on AdapterHub.

    • "hf": search on the HuggingFace model hub.

    • None: search on all sources.

  • leave_out (List[int], optional) – Dynamically drop adapter modules in the specified Transformer layers when loading the adapter.

  • set_active (bool, optional) – Set the loaded adapter to be the active one. By default (False), the adapter is loaded but not activated.

Returns

The name with which the adapter was added to the model.

Return type

str

load_adapter_fusion(adapter_fusion_name_or_path: str, load_as: str = None, custom_weights_loaders: Optional[List[transformers.adapters.loading.WeightsLoader]] = None, set_active: bool = False, **kwargs) → str

Loads a pre-trained AdapterFusion layer from the local file system.

Parameters
  • adapter_fusion_name_or_path (str) – a path to a directory containing AdapterFusion weights saved using model.save_adapter_fusion().

  • load_as (str, optional) – Load the AdapterFusion using this name. By default, the name with which the AdapterFusion layer was saved will be used.

  • set_active (bool, optional) – Activate the loaded AdapterFusion. By default (False), the AdapterFusion is loaded but not activated.

Returns

The name with which the AdapterFusion was added to the model.

Return type

str

merge_adapter(name: str)

Merges the weights of the given LoRA module with the Transformer weights as described in the LoRA paper.

Parameters

name (str) – LoRA module to merge.

push_adapter_to_hub(repo_name: str, adapter_name: str, organization: Optional[str] = None, adapterhub_tag: Optional[str] = None, datasets_tag: Optional[str] = None, local_path: Optional[str] = None, commit_message: Optional[str] = None, private: Optional[bool] = None, use_auth_token: Union[bool, str] = True, overwrite_adapter_card: bool = False, create_pr: bool = False, adapter_card_kwargs: Optional[dict] = None)

Upload an adapter to HuggingFace’s Model Hub.

Parameters
  • repo_name (str) – The name of the repository on the model hub to upload to.

  • adapter_name (str) – The name of the adapter to be uploaded.

  • organization (str, optional) – Organization in which to push the adapter (you must be a member of this organization). Defaults to None.

  • adapterhub_tag (str, optional) – Tag of the format <task>/<subtask> for categorization on https://adapterhub.ml/explore/. See https://docs.adapterhub.ml/contributing.html#add-a-new-task-or-subtask for more. If not specified, datasets_tag must be given in case a new adapter card is generated. Defaults to None.

  • datasets_tag (str, optional) – Dataset identifier from https://huggingface.co/datasets. If not specified, adapterhub_tag must be given in case a new adapter card is generated. Defaults to None.

  • local_path (str, optional) – Local path used as clone directory of the adapter repository. If not specified, will create a temporary directory. Defaults to None.

  • commit_message (str, optional) – Message to commit while pushing. Will default to "add config", "add tokenizer" or "add model" depending on the type of the class.

  • private (bool, optional) – Whether or not the repository created should be private (requires a paying subscription).

  • use_auth_token (bool or str, optional) – The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running transformers-cli login (stored in ~/.huggingface). Defaults to True.

  • overwrite_adapter_card (bool, optional) – Overwrite an existing adapter card with a newly generated one. If set to False, will only generate an adapter card if none exists. Defaults to False.

  • create_pr (bool, optional) – Whether or not to create a PR with the uploaded files or directly commit.

Returns

The url of the adapter repository on the model hub.

Return type

str

reset_adapter()

Resets weights of a LoRA module merged using model.merge_adapter(name).

save_adapter(save_directory: str, adapter_name: str, meta_dict: dict = None, custom_weights_loaders: Optional[List[transformers.adapters.loading.WeightsLoader]] = None)

Saves an adapter and its configuration file to a directory so that it can be shared or reloaded using load_adapter().

Parameters
  • save_directory (str) – Path to a directory where the adapter should be saved.

  • adapter_name (str) – Name of the adapter to be saved.

Raises

ValueError – If the given adapter name is invalid.

save_adapter_fusion(save_directory: str, adapter_names: Union[transformers.adapters.composition.Fuse, list, str], meta_dict: dict = None, custom_weights_loaders: Optional[List[transformers.adapters.loading.WeightsLoader]] = None)

Saves an AdapterFusion layer and its configuration file to a directory so that it can be shared or reloaded using load_adapter_fusion().

Parameters
  • save_directory (str) – Path to a directory where the AdapterFusion should be saved.

  • adapter_names (Union[Fuse, list, str]) – AdapterFusion to be saved.

Raises

ValueError – If the given AdapterFusion name is invalid.

save_all_adapter_fusions(save_directory: str, meta_dict: dict = None, custom_weights_loaders: Optional[List[transformers.adapters.loading.WeightsLoader]] = None)

Saves all AdapterFusion layers of this model together with their configuration to subfolders of the given location.

Parameters

save_directory (str) – Path to a directory where the AdapterFusion layers should be saved.

save_all_adapters(save_directory: str, meta_dict: dict = None, custom_weights_loaders: Optional[List[transformers.adapters.loading.WeightsLoader]] = None)

Saves all adapters of this model together with their configuration to subfolders of the given location.

Parameters

save_directory (str) – Path to a directory where the adapters should be saved.

set_active_adapters(adapter_setup: Union[list, transformers.adapters.composition.AdapterCompositionBlock], skip_layers: Optional[List[int]] = None)

Sets the adapter modules to be used by default in every forward pass. If no adapter with the given name is found, no module of the respective type will be activated.

Parameters

adapter_setup (list) – The list of adapters to be activated by default. Can be a fusion or stacking configuration.

train_adapter(adapter_setup: Union[list, transformers.adapters.composition.AdapterCompositionBlock], train_embeddings=False)

Sets the model in training mode for the given adapters, freezing all other model weights.

train_adapter_fusion(adapter_setup: Union[list, transformers.adapters.composition.AdapterCompositionBlock], unfreeze_adapters=False)

Sets the model in training mode for the AdapterFusion layer determined by a list of adapter names.

train_fusion(adapter_setup: Union[list, transformers.adapters.composition.AdapterCompositionBlock], unfreeze_adapters=False)

Sets the model in training mode for the AdapterFusion layer determined by a list of adapter names.