
Getting Started

  • Installation
  • Quickstart
  • Adapter Training

Adapter Methods

  • Overview and Configuration
  • Adapter Methods
  • Method Combinations

Advanced

  • Adapter Activation and Composition
  • Prediction Heads
  • Embeddings
  • Extending the Library
  • Transitioning from Earlier Versions

Loading and Sharing

  • Loading Pre-Trained Adapters
  • Contributing Adapters to the Hub
  • Integration with Hugging Face's Model Hub

Supported Models

  • Model Overview
  • ALBERT
  • Auto Classes
  • BART
  • Bidirectional Encoder Representation from Image Transformers (BEiT)
  • BERT
  • BertGeneration
  • CLIP
  • DeBERTa
  • DeBERTa-v2
  • DistilBERT
  • Encoder Decoder Models
  • OpenAI GPT-2
  • EleutherAI GPT-J-6B
  • MBart
  • RoBERTa
  • T5
  • Vision Transformer (ViT)
  • XLM-RoBERTa

Adapter-Related Classes

  • Adapter Configuration
  • Model Adapters Config
  • Adapter Modules
  • AdapterLayer
  • Model Mixins
  • Adapter Training
  • Adapter Utilities

Contributing

  • Contributing to AdapterHub
  • Adding Adapter Methods
  • Adding Adapters to a Model


© Copyright 2020-2022, Adapter-Hub Team
