Multi-lingual language model Fine-Tuning
Year: 2019
Journal: EMNLP-IJCNLP (Conference on Empirical Methods in Natural Language Processing and the International Joint Conference on Natural Language Processing)
Languages: All Languages
Programming languages: Python
Input data: words
Project website: https://github.com/n-waves/multifit, https://nlp.fast.ai
We propose Multi-lingual language model Fine-Tuning (MultiFiT) to enable practitioners to train and fine-tune language models efficiently in their own language.
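The idea of fine-tuning a pretrained language model on target-language text can be illustrated with a toy example. The sketch below is not the MultiFiT method itself (which trains neural language models); it uses a hypothetical pure-Python bigram model with add-one smoothing only to show the pretrain-then-fine-tune pattern: continuing training on in-language data lowers perplexity on that language.

```python
import math
from collections import Counter

class BigramLM:
    """Toy bigram language model with add-one (Laplace) smoothing."""

    def __init__(self):
        self.bigrams = Counter()
        self.unigrams = Counter()
        self.vocab = set()

    def train(self, tokens):
        # Counts accumulate across calls, so a second call to train()
        # acts as fine-tuning on top of the earlier "pretraining".
        self.vocab.update(tokens)
        for prev, cur in zip(tokens, tokens[1:]):
            self.bigrams[(prev, cur)] += 1
            self.unigrams[prev] += 1

    def log_prob(self, prev, cur):
        v = len(self.vocab) or 1
        return math.log((self.bigrams[(prev, cur)] + 1)
                        / (self.unigrams[prev] + v))

    def perplexity(self, tokens):
        lp = sum(self.log_prob(p, c) for p, c in zip(tokens, tokens[1:]))
        return math.exp(-lp / max(len(tokens) - 1, 1))

# Hypothetical corpora: "pretraining" text and target-language text.
source = "the cat sat on the mat the dog sat on the rug".split()
target = "le chat dort le chien dort le chat mange".split()

lm = BigramLM()
lm.train(source)               # pretrain on source-language text
before = lm.perplexity(target)
lm.train(target)               # fine-tune on target-language text
after = lm.perplexity(target)
print(before, after)           # perplexity drops after fine-tuning
```

A real system replaces the bigram counts with a neural language model, but the workflow is the same: reuse what was learned during pretraining and continue training on the practitioner's own language.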