Bidirectional Encoder Representations from Transformers (BERT)
Year: 2019
Journal: Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)
Languages: Afrikaans, Albanian, Arabic, Aragonese, Armenian, Asturian, Azerbaijani, Bashkir, Basque, Bavarian, Belarusian, Bengali, Bishnupriya Manipuri, Bosnian, Breton, Bulgarian, Burmese, Catalan, Cebuano, Chechen, Chinese (simplified), Chinese (traditional), Chuvash, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian, Hebrew, Hindi, Hungarian, Icelandic, Ido, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Kirghiz, Korean, Latin, Latvian, Lithuanian, Lombard, Low Saxon, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Marathi, Minangkabau, Mongolian, Nepali, Newar, Norwegian (Bokmal), Norwegian (Nynorsk), Occitan, Persian (Farsi), Piedmontese, Polish, Portuguese, Punjabi, Romanian, Russian, Scots, Serbian, Serbo-Croatian, Sicilian, Slovak, Slovenian, South Azerbaijani, Spanish, Sundanese, Swahili, Swedish, Tagalog, Tajik, Tamil, Tatar, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Volapük, Waray-Waray, Welsh, West Frisian, Western Punjabi, Yoruba
Programming languages: Python
Input data:
concatenation of two segments (sequences of tokens)
A segment usually consists of more than one natural sentence. The two segments are presented to BERT as a single input sequence, delimited by the special tokens [CLS] and [SEP] (see the packing sketch after this field)
-> pair of sentences packed into one token sequence
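A minimal plain-Python sketch of how two already-tokenized segments might be packed into one BERT-style input sequence; the `pack_segments` helper is hypothetical and only illustrates the [CLS]/[SEP] layout and the per-token segment IDs.

```python
from typing import List, Tuple

def pack_segments(tokens_a: List[str], tokens_b: List[str]) -> Tuple[List[str], List[int]]:
    """Pack two already-tokenized segments into one BERT-style input sequence.

    Layout: [CLS] tokens_a [SEP] tokens_b [SEP]
    Segment IDs: 0 for [CLS], segment A and its [SEP]; 1 for segment B and its [SEP].
    """
    tokens = ["[CLS]"] + tokens_a + ["[SEP]"] + tokens_b + ["[SEP]"]
    segment_ids = [0] * (len(tokens_a) + 2) + [1] * (len(tokens_b) + 1)
    return tokens, segment_ids

# Example: two short segments (each could span several natural sentences).
tokens, segment_ids = pack_segments(
    ["the", "man", "went", "to", "the", "store"],
    ["he", "bought", "a", "gallon", "of", "milk"],
)
print(tokens)       # ['[CLS]', 'the', ..., '[SEP]', 'he', ..., '[SEP]']
print(segment_ids)  # [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]
```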
Output data:
contextualized representations (hidden vectors) for each input token, computed from the sum of token, segment, and position embeddings (see the sketch below)
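To make the embedding arithmetic concrete, here is a toy NumPy sketch (an illustration, not code from the BERT release) of how the three lookup tables combine into per-token input vectors; the table sizes match BERT-base (vocabulary 30,522, hidden size 768, 512 positions, 2 segments), but the random values stand in for learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

vocab_size, max_position, num_segments, hidden_size = 30522, 512, 2, 768

# Illustrative random lookup tables; in BERT these are learned parameters.
token_embeddings = rng.normal(size=(vocab_size, hidden_size))
segment_embeddings = rng.normal(size=(num_segments, hidden_size))
position_embeddings = rng.normal(size=(max_position, hidden_size))

token_ids = np.array([101, 1996, 2158, 102, 2002, 102])  # e.g. [CLS] the man [SEP] he [SEP]
segment_ids = np.array([0, 0, 0, 0, 1, 1])
position_ids = np.arange(len(token_ids))

# Per-token input representation: element-wise sum of the three embeddings.
input_embeddings = (
    token_embeddings[token_ids]
    + segment_embeddings[segment_ids]
    + position_embeddings[position_ids]
)
print(input_embeddings.shape)  # (6, 768)
```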
Project website: https://github.com/google-research/bert
BERT is a method of pre-training language representations, meaning that we train a general-purpose “language understanding” model on a large text corpus (like Wikipedia), and then use that model for downstream NLP tasks that we care about (like question answering). BERT outperforms previous methods because it is the first unsupervised, deeply bidirectional system for pre-training NLP.
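As a usage illustration of this pre-train-then-fine-tune pattern, the sketch below loads a pre-trained BERT checkpoint behind a sentence-pair classification head. It assumes the third-party Hugging Face `transformers` and `torch` packages, which are not part of the original google-research/bert release, and the classification head here is freshly initialized rather than fine-tuned on task data.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# A sentence pair is packed into one sequence with [CLS]/[SEP] automatically.
inputs = tokenizer(
    "The man went to the store.",
    "He bought a gallon of milk.",
    return_tensors="pt",
)

with torch.no_grad():
    logits = model(**inputs).logits  # task head on top of the pre-trained encoder
print(logits.shape)  # torch.Size([1, 2])

# Fine-tuning would then update all parameters (encoder and head) on labeled task data.
```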