Structural Biases for Improving Transformers on Translation into Morphologically Rich Languages, by Paul Soulos and 11 other authors

Abstract: Machine translation has seen rapid progress with the advent of Transformer-based models. These models have no explicit linguistic structure built into them, yet they may still implicitly learn structured relationships by attending to relevant tokens. We hypothesize that this structural learning could be made more robust by explicitly endowing Transformers with a structural bias, and we investigate two methods for building in such a bias. One method, the TP-Transformer, augments the traditional Transformer architecture to include an additional component to represent structure. The second method imbues structure at the data level by segmenting the data with morphological tokenization. We test these methods on translating from English into morphologically rich languages, Turkish and Inuktitut, and consider both automatic metrics and human evaluations. We find that each of these two approaches allows the network to achieve better performance, but this improvement is dependent on the size of the dataset.
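To make the second method concrete, the sketch below illustrates what morphological tokenization of the source data might look like. This is a toy, hypothetical segmenter (the abstract does not name the actual tool): a hand-written morpheme table for two Turkish surface forms stands in for a real morphological analyzer, and `@@` is used as a continuation marker so word boundaries can be restored.

```python
# Toy illustration of a data-level structural bias via morphological
# segmentation. The morpheme table and the segment/tokenize helpers are
# hypothetical stand-ins for a real morphological analyzer, not the
# paper's actual pipeline.

# Tiny hand-written morpheme table for two Turkish surface forms (toy data).
MORPHEMES = {
    "evlerimizde": ["ev", "ler", "imiz", "de"],    # "in our houses"
    "kitaplardan": ["kitap", "lar", "dan"],        # "from the books"
}

def segment(word):
    """Return the morpheme sequence for a word, or the word itself
    as a single token when no analysis is available."""
    return MORPHEMES.get(word, [word])

def tokenize(sentence):
    """Whitespace-split, then replace each word with its morphemes,
    prefixing non-initial morphemes with '@@' so the original word
    boundary can be recovered after translation."""
    tokens = []
    for word in sentence.split():
        pieces = segment(word)
        tokens.append(pieces[0])
        tokens.extend("@@" + p for p in pieces[1:])
    return tokens

print(tokenize("kitaplardan evlerimizde"))
# → ['kitap', '@@lar', '@@dan', 'ev', '@@ler', '@@imiz', '@@de']
```

Because each agglutinative word is split into its stem and affixes, the model receives structure-aligned subword units rather than opaque whole words or frequency-driven byte-pair pieces.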