Efficiently Learning an Encoder that Classifies Token Replacements Accurately

Year: 2020
Journal: International Conference on Learning Representations
Languages: English
Programming languages: Python
Input data: Plain text

ELECTRA is a method for self-supervised language representation learning that can pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens from "fake" tokens generated by another neural network, similar to the discriminator of a GAN; a sketch of this objective is given below. At small scale, ELECTRA achieves strong results even when trained on a single GPU; at large scale, it achieves state-of-the-art results on the SQuAD 2.0 question answering dataset.
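To make the pre-training objective concrete, here is a minimal sketch of replaced-token detection in Python (using PyTorch). The toy model sizes, masking rate, and names such as TinyEncoder and electra_step are illustrative assumptions, not the released ELECTRA code; the λ = 50 weighting of the discriminator loss follows the paper.

```python
# Toy sketch of ELECTRA-style replaced-token detection; not the official code.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB_SIZE, HIDDEN, MASK_ID = 1000, 64, 0   # hypothetical toy settings
MASK_PROB = 0.15                            # fraction of positions to corrupt

class TinyEncoder(nn.Module):
    """Stand-in for a transformer encoder: embeddings + one encoder layer."""
    def __init__(self, out_dim):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, HIDDEN)
        layer = nn.TransformerEncoderLayer(HIDDEN, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)
        self.head = nn.Linear(HIDDEN, out_dim)

    def forward(self, ids):
        return self.head(self.encoder(self.embed(ids)))

generator = TinyEncoder(out_dim=VOCAB_SIZE)  # small MLM that proposes tokens
discriminator = TinyEncoder(out_dim=1)       # per-token real/fake classifier

def electra_step(ids):
    # 1. Mask a random subset of input positions.
    mask = torch.rand(ids.shape) < MASK_PROB
    masked = ids.masked_fill(mask, MASK_ID)

    # 2. Generator is trained with ordinary masked language modeling.
    gen_logits = generator(masked)
    gen_loss = F.cross_entropy(gen_logits[mask], ids[mask])

    # 3. Sample plausible replacements; sampling is not differentiated through.
    with torch.no_grad():
        sampled = torch.distributions.Categorical(logits=gen_logits).sample()
    corrupted = torch.where(mask, sampled, ids)

    # 4. Discriminator labels every token: 1 = replaced, 0 = original.
    #    A sampled token that happens to equal the original counts as original.
    labels = (corrupted != ids).float()
    disc_logits = discriminator(corrupted).squeeze(-1)
    disc_loss = F.binary_cross_entropy_with_logits(disc_logits, labels)

    # Joint objective; the paper weights the discriminator loss by lambda = 50.
    return gen_loss + 50.0 * disc_loss

loss = electra_step(torch.randint(1, VOCAB_SIZE, (8, 32)))  # batch of 8, length 32
loss.backward()
```

After pre-training, the generator is discarded and the discriminator's encoder is fine-tuned on downstream tasks. Because the discriminator's loss is defined over every input position rather than only the ~15% that are masked, ELECTRA extracts more training signal per example, which is the source of its compute efficiency.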
