Longformer-Encoder-Decoder

Year: 2020
Journal: Allen Institute for Artificial Intelligence
Languages: English
Programming languages: Python
Input data: text

Finally, we introduce the Longformer-Encoder-Decoder (LED), a Longformer variant supporting long-document generative sequence-to-sequence tasks, and demonstrate its effectiveness on the arXiv summarization dataset.

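For illustration, here is a minimal sketch of how an LED checkpoint could be applied to long-document summarization. It assumes the Hugging Face `transformers` implementation of LED and the publicly released `allenai/led-large-16384-arxiv` weights; the checkpoint name, input text, and generation settings are assumptions for the example, not part of the catalog entry.

```python
# Minimal sketch: summarizing a long document with LED via Hugging Face
# transformers. Assumes the `allenai/led-large-16384-arxiv` checkpoint.
import torch
from transformers import LEDForConditionalGeneration, LEDTokenizer

checkpoint = "allenai/led-large-16384-arxiv"  # assumed checkpoint for this sketch
tokenizer = LEDTokenizer.from_pretrained(checkpoint)
model = LEDForConditionalGeneration.from_pretrained(checkpoint)

document = "..."  # a long scientific article (LED accepts up to 16,384 tokens)
inputs = tokenizer(document, return_tensors="pt", truncation=True, max_length=16384)

# LED uses sparse local (windowed) attention; global attention is typically
# placed on the first token so every position can attend to it.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

summary_ids = model.generate(
    inputs["input_ids"],
    global_attention_mask=global_attention_mask,
    max_length=256,
    num_beams=4,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```

The global attention mask is the main addition relative to a standard encoder-decoder: without it, every position would rely only on its local attention window.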