Longformer
Year: 2020
Institution: Allen Institute for Artificial Intelligence
Languages: English
Programming languages: Python
Input data:
text (tokens/characters)
Project website: https://github.com/allenai/longformer
Transformer-based models are unable to process long sequences because their self-attention operation scales quadratically with the sequence length. To address this limitation, we introduce the Longformer, whose attention mechanism scales linearly with sequence length, making it easy to process documents of thousands of tokens or longer. Longformer’s attention mechanism is a drop-in replacement for the standard self-attention and combines a local windowed attention with a task-motivated global attention.
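The combined local-plus-global attention pattern can be illustrated with a small mask-building sketch. This is not the paper's implementation (which uses custom CUDA/TVM kernels for efficiency); it is a minimal NumPy illustration where `window` is the one-sided local window size and `global_idx` lists the positions given task-motivated global attention (both names are assumptions for this sketch). An entry of 1 means the row token may attend to the column token:

```python
import numpy as np

def longformer_attention_mask(seq_len, window, global_idx):
    """Build a combined local windowed + global attention mask (1 = attend)."""
    mask = np.zeros((seq_len, seq_len), dtype=int)
    # Local sliding window: each token attends to neighbors within `window`
    # positions on either side, so nonzeros per row are bounded by 2*window + 1.
    for i in range(seq_len):
        lo, hi = max(0, i - window), min(seq_len, i + window + 1)
        mask[i, lo:hi] = 1
    # Global attention: the chosen tokens attend to every position and
    # every position attends back to them (symmetric, as in the paper).
    for g in global_idx:
        mask[g, :] = 1
        mask[:, g] = 1
    return mask

# Example: 8 tokens, window of 1, global attention on the first token
# (e.g. a [CLS]-style token for classification tasks).
mask = longformer_attention_mask(seq_len=8, window=1, global_idx=[0])
```

For a fixed window size and a constant number of global tokens, the number of nonzero entries grows linearly with `seq_len`, in contrast to the dense `seq_len × seq_len` mask of standard self-attention.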