Berkeley Neural Parser

Year: 2018
Journal: Annual Meeting of the Association for Computational Linguistics
Languages: Arabic, Basque, Chinese, English, French, German, Hebrew, Hungarian, Korean, Polish, Swedish
Programming languages: Python
Input data:

sentence (natural-language text)

Output data:

parse tree

We demonstrate that replacing an LSTM encoder with a self-attentive architecture can lead to improvements to a state-of-the-art discriminative constituency parser. The use of attention makes explicit the manner in which information is propagated between different locations in the sentence, which we use to both analyze our model and propose potential improvements.
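The parser's output is a constituency parse tree, conventionally written in bracketed Penn Treebank style, e.g. `(S (NP (PRP She)) (VP (VBZ runs)))`. As an illustration of this output format (the helper below is a minimal sketch, not part of the Berkeley Neural Parser's own API), a bracketed string can be read into nested `(label, children)` tuples:

```python
def read_tree(s):
    """Parse a bracketed constituency tree string into nested
    (label, children) tuples, where leaves are plain word strings."""
    # Pad parentheses with spaces so split() tokenizes them cleanly.
    tokens = s.replace("(", " ( ").replace(")", " ) ").split()
    pos = 0

    def parse():
        nonlocal pos
        assert tokens[pos] == "("
        pos += 1
        label = tokens[pos]          # constituent or POS label, e.g. S, NP, PRP
        pos += 1
        children = []
        while tokens[pos] != ")":
            if tokens[pos] == "(":
                children.append(parse())   # nested constituent
            else:
                children.append(tokens[pos])  # leaf word
                pos += 1
        pos += 1                     # consume the closing ")"
        return (label, children)

    return parse()

tree = read_tree("(S (NP (PRP She)) (VP (VBZ runs)))")
print(tree)
```

The nested-tuple form makes it easy to walk the tree, e.g. to extract spans or labels for downstream analysis.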
