4 results for “topic:gleu-score”
Python code for various NLP metrics
Several methods for evaluating machine translation, some of which we implemented in full
We at the Telecommunication Research Center tested and evaluated the Tergoman machine translation system using six algorithms
Grammatical error correction using fine-tuned T5 and mT5 models on English (JFLEG) and Greek corpora. Includes COLING 2025 submission.