A comparative study of neural machine translation models for Turkish language
dc.authorid | Velioglu, Riza/0000-0002-2160-4976 | |
dc.contributor.author | Ozdemir, Ozgur | |
dc.contributor.author | Akin, Emre Salih | |
dc.contributor.author | Velioglu, Riza | |
dc.contributor.author | Dalyan, Tugba | |
dc.date.accessioned | 2024-07-18T20:49:19Z | |
dc.date.available | 2024-07-18T20:49:19Z | |
dc.date.issued | 2022 | |
dc.department | İstanbul Bilgi Üniversitesi | en_US |
dc.description.abstract | Machine translation (MT) is an important challenge in the field of computational linguistics. In this study, we conducted neural machine translation (NMT) experiments on two different architectures. First, the Sequence to Sequence (Seq2Seq) architecture, along with a variant that utilizes an attention mechanism, is applied to the translation task. Second, an architecture that is fully based on the self-attention mechanism, namely the Transformer, is employed to enable a comprehensive comparison. In addition, the contributions of Byte Pair Encoding (BPE) and the Gumbel Softmax distribution are examined for both architectures. The experiments are conducted on two different datasets: the TED Talks corpus, one of the popular benchmark datasets for NMT, especially for morphologically rich languages such as Turkish, and the WMT18 News dataset, provided by the Third Conference on Machine Translation (WMT) for its shared tasks on various aspects of machine translation. The evaluation of the Turkish-to-English translation results demonstrates that the Transformer model with the combination of BPE and Gumbel Softmax achieved a 22.4 BLEU score on TED Talks and a 38.7 BLEU score on the WMT18 News dataset. The empirical results support that using the Gumbel Softmax distribution improves translation quality for both architectures. | en_US |
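The abstract credits the Gumbel Softmax distribution with improving translation quality in both architectures. As a minimal sketch of that technique (not the authors' implementation, whose code is not part of this record), the following assumes PyTorch; the function name gumbel_softmax_sample, the temperature value, and the toy vocabulary size are illustrative.

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    # Draw Gumbel(0, 1) noise via the inverse-CDF trick: -log(-log(U)), U ~ Uniform(0, 1).
    # The small epsilons guard against log(0).
    u = torch.rand_like(logits)
    gumbel_noise = -torch.log(-torch.log(u + 1e-20) + 1e-20)
    # Perturb the logits with the noise and apply a temperature-scaled softmax.
    # Low temperatures push the output toward a one-hot sample while staying differentiable.
    return F.softmax((logits + gumbel_noise) / temperature, dim=-1)

# Example: a differentiable, near-one-hot sample over a toy 5-token vocabulary,
# as a decoder might use in place of a hard argmax during training.
logits = torch.randn(1, 5)
probs = gumbel_softmax_sample(logits, temperature=0.5)
print(probs)
```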
dc.identifier.doi | 10.3233/JIFS-211453 | |
dc.identifier.endpage | 2113 | en_US |
dc.identifier.issn | 1064-1246 | |
dc.identifier.issn | 1875-8967 | |
dc.identifier.issue | 3 | en_US |
dc.identifier.scopus | 2-s2.0-85124646682 | en_US |
dc.identifier.scopusquality | Q2 | en_US |
dc.identifier.startpage | 2103 | en_US |
dc.identifier.uri | https://doi.org/10.3233/JIFS-211453 | |
dc.identifier.uri | https://hdl.handle.net/11411/8170 | |
dc.identifier.volume | 42 | en_US |
dc.identifier.wos | WOS:000752849700054 | en_US |
dc.identifier.wosquality | Q4 | en_US |
dc.indekslendigikaynak | Web of Science | en_US |
dc.indekslendigikaynak | Scopus | en_US |
dc.language.iso | en | en_US |
dc.publisher | IOS Press | en_US |
dc.relation.ispartof | Journal of Intelligent & Fuzzy Systems | en_US |
dc.relation.publicationcategory | Article - International Peer-Reviewed Journal - Institutional Faculty Member | en_US |
dc.rights | info:eu-repo/semantics/closedAccess | en_US |
dc.subject | Neural Machine Translation | en_US |
dc.subject | Gumbel Softmax | en_US |
dc.subject | Sequence To Sequence | en_US |
dc.subject | Transformer | en_US |
dc.title | A comparative study of neural machine translation models for Turkish language | en_US |
dc.type | Article | en_US |