Multi-lingual language model Fine-Tuning

Year: 2019
Journal: EMNLP-IJCNLP (Conference on Empirical Methods in Natural Language Processing and the International Joint Conference on Natural Language Processing)
Languages: All Languages
Programming languages: Python
Input data: words

We propose Multi-lingual language model Fine-Tuning (MultiFiT) to enable practitioners to train and fine-tune language models efficiently in their own language.
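MultiFiT itself builds on ULMFiT-style transfer learning (the paper uses an efficient recurrent architecture pretrained on target-language text), but the core idea of fine-tuning a pretrained language model on in-domain text can be illustrated with a toy count-based bigram model. The corpora, vocabulary, and add-one smoothing below are purely illustrative assumptions, not the paper's actual setup:

```python
import math
from collections import Counter

class BigramLM:
    """Toy add-one-smoothed bigram language model (illustration only)."""
    def __init__(self, vocab):
        self.vocab = set(vocab)
        self.bigrams = Counter()
        self.unigrams = Counter()

    def train(self, tokens):
        # Accumulate counts; calling this again on new text "fine-tunes"
        # the model by updating the same counts.
        for a, b in zip(tokens, tokens[1:]):
            self.bigrams[(a, b)] += 1
            self.unigrams[a] += 1

    def logprob(self, a, b):
        # Add-one smoothing over the vocabulary.
        num = self.bigrams[(a, b)] + 1
        den = self.unigrams[a] + len(self.vocab)
        return math.log(num / den)

    def perplexity(self, tokens):
        lp = sum(self.logprob(a, b) for a, b in zip(tokens, tokens[1:]))
        return math.exp(-lp / (len(tokens) - 1))

# "Pretraining" corpus (generic) and "fine-tuning" corpus (target domain);
# both are made-up examples.
generic = "the cat sat on the mat the dog sat on the rug".split()
domain = "the model reads the text the model tags the text".split()
test = "the model reads the text".split()

lm = BigramLM(vocab=set(generic) | set(domain) | set(test))
lm.train(generic)
ppl_before = lm.perplexity(test)
lm.train(domain)  # fine-tune on in-domain text
ppl_after = lm.perplexity(test)
assert ppl_after < ppl_before  # in-domain fine-tuning lowers perplexity
```

In practice the same pattern applies to neural language models: pretrain on large general-domain text in the target language, then continue training on task or domain data so the model's distribution shifts toward the text it will actually see.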
