Longformer

Year: 2020
Journal: Allen Institute for Artificial Intelligence
Languages: English
Programming languages: Python
Input data:

text (tokens/characters)

Transformer-based models are unable to process long sequences due to their self-attention operation, which scales quadratically with the sequence length. To address this limitation, we introduce the Longformer with an attention mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or longer. Longformer's attention mechanism is a drop-in replacement for the standard self-attention and combines a local windowed attention with a task-motivated global attention.
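The sketch below illustrates this attention pattern, not the authors' implementation: each token attends only to a fixed local window around itself, while a few designated "global" tokens (for example, a classification token) attend to and are attended by every position. The window size, the choice of global positions, and the function names are illustrative assumptions.

```python
# Minimal sketch (assumed example, not the official Longformer code) of the
# combined local-window + global attention pattern described in the abstract.
import numpy as np

def longformer_attention_mask(seq_len, window, global_positions):
    """Boolean mask: mask[i, j] is True if query i may attend to key j."""
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    # Local sliding-window attention: each token sees neighbours within the window.
    for i in range(seq_len):
        lo = max(0, i - window // 2)
        hi = min(seq_len, i + window // 2 + 1)
        mask[i, lo:hi] = True
    # Global attention: global tokens see everything and are seen by everything.
    for g in global_positions:
        mask[g, :] = True
        mask[:, g] = True
    return mask

def masked_attention(q, k, v, mask):
    """Scaled dot-product attention restricted to the allowed positions."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(mask, scores, -1e9)          # block disallowed pairs
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Toy usage: 16 tokens, window of 4, token 0 (e.g. a [CLS] token) made global.
rng = np.random.default_rng(0)
q = k = v = rng.standard_normal((16, 8))
mask = longformer_attention_mask(seq_len=16, window=4, global_positions=[0])
out = masked_attention(q, k, v, mask)
print(out.shape)  # (16, 8)
```

Note that this dense sketch still builds a full n-by-n mask, so its memory use is quadratic; the actual Longformer computes only the banded and global entries, which is what makes the cost scale linearly with sequence length.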
