French Language Understanding via Bidirectional Encoder Representations from Transformers
the input is a concatenation of two segments (sequences of tokens)
Segments usually consist of more than one natural sentence. The two segments are presented to FlauBERT as a single input sequence, with special tokens delimiting them (cf. BERT)
-> pair of sentences
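A minimal sketch of this input format, assuming the Hugging Face transformers library and its flaubert/flaubert_base_cased checkpoint (the checkpoint name and exact delimiter tokens are assumptions, not details given in these notes): two multi-sentence segments are joined into one token sequence with the tokenizer's special tokens around and between them.

# Minimal sketch; assumes Hugging Face transformers and the
# flaubert/flaubert_base_cased checkpoint (illustrative choice).
from transformers import FlaubertTokenizer

tokenizer = FlaubertTokenizer.from_pretrained("flaubert/flaubert_base_cased")

# Each segment may span several natural sentences.
segment_a = "Le chat dort. Il fait beau."
segment_b = "Nous partons demain. Le train est à huit heures."

# The two segments become a single input sequence; the tokenizer
# inserts the special tokens that delimit them.
encoded = tokenizer(segment_a, segment_b)
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))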
output representation depends on the entire input sequence (i.e. each token instance has a vector representation that depends on its left and right context)
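A small sketch of this property, again assuming the transformers library with PyTorch and the flaubert/flaubert_base_cased checkpoint (an illustrative assumption): the encoder returns one contextual vector per token, computed from the whole sequence rather than from a left-to-right prefix only.

# Minimal sketch; assumes Hugging Face transformers / PyTorch and an
# illustrative checkpoint name not taken from the notes above.
import torch
from transformers import FlaubertModel, FlaubertTokenizer

tokenizer = FlaubertTokenizer.from_pretrained("flaubert/flaubert_base_cased")
model = FlaubertModel.from_pretrained("flaubert/flaubert_base_cased")

inputs = tokenizer("Le chat dort sur le canapé.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One vector per token instance, conditioned on both left and right context.
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)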
In this paper, we introduce and share FlauBERT, a model learned on a very large and heterogeneous French corpus. Models of different sizes are trained using the new CNRS (French National Centre for Scientific Research) Jean Zay supercomputer. We apply our French language models to diverse NLP tasks (text classification, paraphrasing, natural language inference, parsing, word sense disambiguation) and show that most of the time they outperform other pre-training approaches.