MPNet

Year: 2020
Journal: Conference on Neural Information Processing Systems
Languages: English
Programming languages: Python
Input data:

sentences

In this paper, we propose MPNet, a novel pre-training method that inherits the advantages of BERT and XLNet and avoids their limitations. MPNet leverages the dependency among predicted tokens through permuted language modeling (vs. MLM in BERT), and takes auxiliary position information as input so that the model sees a full sentence, thus reducing the position discrepancy (vs. PLM in XLNet).
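The input construction described above can be sketched in plain Python. This is an illustrative simplification, not the official MPNet implementation: the function name, the permutation scheme, and the 15% prediction ratio are assumptions for the sketch. The key idea it shows is that masked placeholders still carry the predicted tokens' position indices, so the position stream covers the full sentence (unlike XLNet's PLM).

```python
import random

MASK = "[MASK]"

def mpnet_inputs(tokens, predict_ratio=0.15, seed=0):
    """Sketch of MPNet-style input construction (hypothetical helper).

    The token positions are permuted; the last `predict_ratio` fraction
    become prediction targets. Non-predicted tokens are fed with their
    original positions, and [MASK] placeholders carrying the predicted
    tokens' positions are appended, so the model receives position
    information for the whole sentence.
    """
    rng = random.Random(seed)
    positions = list(range(len(tokens)))
    rng.shuffle(positions)
    n_pred = max(1, int(len(tokens) * predict_ratio))
    non_pred, pred = positions[:-n_pred], positions[-n_pred:]
    # content stream: visible tokens followed by mask placeholders
    input_tokens = [tokens[p] for p in non_pred] + [MASK] * n_pred
    # position stream: every original position appears exactly once,
    # so there is no position discrepancy with the full sentence
    input_positions = non_pred + pred
    targets = [tokens[p] for p in pred]
    return input_tokens, input_positions, targets

tokens = "the cat sat on the mat".split()
inp, pos, tgt = mpnet_inputs(tokens)
```

Because every original position index appears in `input_positions`, the model can attend over full-sentence positional information even for tokens it must predict.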
