SpanBERT

Year: 2020
Journal: Transactions of the Association for Computational Linguistics
Languages: English
Programming languages: Python
Input data: sequence of words

We present SpanBERT, a pre-training method that is designed to better represent and predict spans of text. Our approach extends BERT by (1) masking contiguous random spans, rather than random tokens, and (2) training the span boundary representations to predict the entire content of the masked span, without relying on the individual token representations within it. 
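Below is a minimal PyTorch sketch of the two ideas named in the abstract: sampling contiguous spans to mask, and a span boundary head that predicts a masked token from the representations just outside the span plus a relative position embedding. The helper names (`sample_span_mask`, `SpanBoundaryObjective`), the 15% masking budget, the geometric span-length distribution, and the exact layer sizes are assumptions for illustration; the released SpanBERT implementation differs in details such as word-level masking and budget bookkeeping.

```python
import random

import torch
import torch.nn as nn


def sample_span_mask(seq_len, mask_budget_ratio=0.15, p=0.2, max_span_len=10):
    """Sample contiguous spans until roughly 15% of positions are masked.

    Span lengths are drawn from a clipped geometric distribution; this is a
    simplified approximation of the sampling scheme described in the paper.
    """
    budget = int(seq_len * mask_budget_ratio)
    masked = set()
    while len(masked) < budget:
        # Geometric(p) gives failures-before-success, so add 1 for length >= 1.
        span_len = torch.distributions.Geometric(probs=p).sample().int().item() + 1
        span_len = min(span_len, max_span_len, seq_len)
        start = random.randint(0, seq_len - span_len)
        masked.update(range(start, start + span_len))
    return sorted(masked)[:budget]


class SpanBoundaryObjective(nn.Module):
    """Predict each masked token from the span's boundary representations
    and a relative position embedding (a sketch of the SBO head)."""

    def __init__(self, hidden_size, vocab_size, max_span_len=10):
        super().__init__()
        self.pos_emb = nn.Embedding(max_span_len, hidden_size)
        self.mlp = nn.Sequential(
            nn.Linear(3 * hidden_size, hidden_size),
            nn.GELU(),
            nn.LayerNorm(hidden_size),
            nn.Linear(hidden_size, vocab_size),
        )

    def forward(self, left_boundary, right_boundary, rel_pos):
        # left_boundary / right_boundary: (batch, hidden) encoder outputs for
        # the tokens just outside the masked span; rel_pos: (batch,) position
        # of the target token within the span.
        h = torch.cat([left_boundary, right_boundary, self.pos_emb(rel_pos)], dim=-1)
        return self.mlp(h)  # (batch, vocab_size) logits for the masked token
```

The key design point the sketch illustrates is that the SBO head never reads the hidden states inside the masked span, only the two boundary representations and the target position, which forces the boundary tokens to summarize the span's content.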
