DeBERTa

Year: 2021
Journal: International Conference on Learning Representations
Programming languages: Python
Input data:

text (each word represented by two vectors encoding its content and its position)

In this paper we propose a new model architecture, DeBERTa (Decoding-enhanced BERT with disentangled attention), that improves the BERT and RoBERTa models using two novel techniques. The first is the disentangled attention mechanism, in which each word is represented by two vectors that encode its content and its position, and the attention weights among words are computed using disentangled matrices over their contents and relative positions, respectively. The second is an enhanced mask decoder, which incorporates absolute positions in the decoding layer to predict the masked tokens during model pre-training.
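To make the disentangled attention idea concrete, below is a minimal single-head sketch in NumPy. It is not the paper's implementation: the names (`seq_len`, `d`, `k`, `H`, `P`, `rel_index`) and the random initialization are illustrative assumptions. It only shows how the content-to-content, content-to-position, and position-to-content terms combine into the attention score, with the position-to-position term dropped as in the paper.

```python
# Minimal sketch of DeBERTa-style disentangled attention (single head).
# All names and shapes are illustrative assumptions, not the official code.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d, k = 8, 16, 4              # k = maximum relative distance considered

# Content vectors H and a table of relative-position embeddings P (2k buckets).
H = rng.standard_normal((seq_len, d))
P = rng.standard_normal((2 * k, d))

# Separate projections for content and for relative position.
W_q, W_k, W_v = rng.standard_normal((3, d, d))
W_qr, W_kr = rng.standard_normal((2, d, d))

def rel_index(i, j, k):
    """Bucket the relative distance i - j into [0, 2k)."""
    return int(np.clip(i - j + k, 0, 2 * k - 1))

Qc, Kc, V = H @ W_q, H @ W_k, H @ W_v   # content queries/keys/values
Qr, Kr = P @ W_qr, P @ W_kr             # relative-position queries/keys

# Attention scores: content-to-content + content-to-position + position-to-content.
A = np.zeros((seq_len, seq_len))
for i in range(seq_len):
    for j in range(seq_len):
        A[i, j] = (Qc[i] @ Kc[j]
                   + Qc[i] @ Kr[rel_index(i, j, k)]
                   + Qr[rel_index(j, i, k)] @ Kc[j])

A /= np.sqrt(3 * d)                      # scaled by sqrt(3d), matching the three terms
attn = np.exp(A - A.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)
out = attn @ V                           # (seq_len, d) attended output
```

The sketch uses explicit loops for readability; a practical implementation would vectorize the relative-position lookups and add multiple heads, but the three-term score structure is the part specific to disentangled attention.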
