MPNet
Year: 2020
Venue: Conference on Neural Information Processing Systems (NeurIPS)
Languages: English
Programming languages: Python
Input data:
sentences
Project website: https://github.com/microsoft/MPNet
In this paper, we propose MPNet, a novel pre-training method that inherits the advantages of BERT and XLNet while avoiding their limitations. MPNet leverages the dependency among predicted tokens through permuted language modeling (vs. MLM in BERT) and takes auxiliary position information as input so that the model sees a full sentence, thereby reducing the position discrepancy (vs. PLM in XLNet).
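The idea described above can be illustrated with a minimal sketch (not the official implementation; function name, token format, and the prediction ratio are illustrative assumptions): the sentence is permuted, the last few tokens in the permuted order become prediction targets, and `[MASK]` placeholders carrying the targets' position ids are appended so that position information for the full sentence remains visible to the model.

```python
# Minimal sketch of MPNet-style input construction (illustrative, not the
# official implementation): permuted order supplies dependency among
# predicted tokens, and mask placeholders supply full position information.
import random

MASK = "[MASK]"

def build_mpnet_inputs(tokens, pred_ratio=0.15, seed=0):
    """Return (input_tokens, position_ids, target_tokens) for one sentence."""
    rng = random.Random(seed)
    n = len(tokens)
    perm = list(range(n))
    rng.shuffle(perm)                      # permuted language modeling order
    c = max(1, int(n * pred_ratio))        # number of tokens to predict
    non_pred, pred = perm[:-c], perm[-c:]  # predicted tokens come last

    # Non-predicted part: real tokens with their original positions.
    input_tokens = [tokens[i] for i in non_pred]
    position_ids = list(non_pred)

    # Mask part: [MASK] placeholders that still carry the positions of the
    # predicted tokens, so the model "sees" the full sentence's positions.
    input_tokens += [MASK] * c
    position_ids += list(pred)

    # Content part: the predicted tokens themselves, visible to later
    # predictions (the dependency that BERT's MLM does not model).
    input_tokens += [tokens[i] for i in pred]
    position_ids += list(pred)

    targets = [tokens[i] for i in pred]
    return input_tokens, position_ids, targets
```

In this sketch the input is `n + c` tokens long (the `c` extra mask placeholders), and every original position id appears in `position_ids`, which is how the position discrepancy of pure permuted language modeling is reduced.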