import os

running_in_colab = 'google.colab' in str(get_ipython())

if running_in_colab:
    from google.colab import drive
    drive.mount('/content/drive')
    homedir = "/content/drive/MyDrive"
else:
    homedir = os.getenv('HOME')
mteval
Introduction
This library enables easy, automated machine translation evaluation using the evaluation tools sacreBLEU and COMET. While these tools readily provide command line access, they lack dataset handling and translation of datasets with the major online machine translation services. The mteval library provides both, along with code that logs evaluation results and makes it easier to automate evaluation of multiple datasets and MT systems from Python.
Install
Installing the library from PyPI
pip install mteval
Setting up Cloud authentication and parameters in the environment
This library currently supports the cloud translation services Amazon Translate, DeepL, Google Translate and Microsoft Translator. To authenticate with these services and configure them, you need to set the following environment variables (a quick Python check of these variables is sketched after the export statements):
export GOOGLE_APPLICATION_CREDENTIALS='/path/to/google/credentials/file.json'
export GOOGLE_PROJECT_ID=''
export MS_SUBSCRIPTION_KEY=''
export MS_REGION=''
export AWS_DEFAULT_REGION=''
export AWS_ACCESS_KEY_ID=''
export AWS_SECRET_ACCESS_KEY=''
export DEEPL_API_KEY=''
export MMT_API_KEY=''
How to obtain subscription credentials
You can set these environment variables by adding the above export statements to your .bashrc file on Linux, or, for Jupyter notebooks, by adding the variables to the kernel configuration file kernel.json (a short sketch for locating kernel.json follows below).
This library has only been tested on Linux, not Windows or macOS.
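If you are unsure where kernel.json lives for your kernel, the sketch below lists the installed kernel spec directories with jupyter_client; the kernel.json file sits inside each of these directories, and its "env" section can hold the variables above. This is an illustration, not part of mteval itself.

from jupyter_client.kernelspec import KernelSpecManager

# Print the directory of each installed kernel spec;
# kernel.json lives inside these directories
for name, path in KernelSpecManager().find_kernel_specs().items():
    print(name, "->", path)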
On Google Colab: Loading the environment from a .env file
Google Colab, which is a hosted cloud solution for Jupyter notebooks with GPU runtimes, doesn't support persistent environment variables. The environment variables can instead be stored in a .env file on Google Drive and loaded at each start of a notebook using mteval.
Run the following cell to install mteval from PyPI
!pip install mteval
Run the following cell to install mteval from the GitHub repository
!pip install git+https://github.com/polyglottech/mteval.git
from dotenv import load_dotenv

if running_in_colab:
    # Colab doesn't have a mechanism to set environment variables other than python-dotenv
    env_file = homedir + '/secrets/.env'
    # Load the variables from the .env file into the environment
    load_dotenv(env_file)
Also make sure to store the Google Cloud credentials JSON file on Google Drive, e.g. in the /content/drive/MyDrive/secrets/ folder.
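For reference, the .env file read above is a plain text file with one KEY=value pair per line, mirroring the export statements from the setup section. The file names and placeholder values below are assumptions for illustration; substitute your own credentials and add the remaining variables the same way.

GOOGLE_APPLICATION_CREDENTIALS=/content/drive/MyDrive/secrets/credentials.json
GOOGLE_PROJECT_ID=my-project-id
MS_SUBSCRIPTION_KEY=xxxxxxxx
MS_REGION=westeurope
DEEPL_API_KEY=xxxxxxxx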
How to use
This is a short example of how to translate a few sentences and how to score the machine translations with BLEU against human reference translations. See the reference documentation for a complete list of functions.
from mteval.microsoftmt import *
from mteval.bleu import *
import json

sources = ["Puissiez-vous passer une semaine intéressante et enrichissante avec nous.",
           "Honorables sénateurs, je connais, bien entendu, les références du ministre de l'Environnement et je pense que c'est une personne admirable.",
           "Il est certain que le renforcement des forces de maintien de la paix et l'envoi d'autres casques bleus ne suffiront pas, compte tenu du mauvais fonctionnement des structures de contrôle et de commandement là-bas."]
references = ["May you have an interesting and useful week with us.",
              "Honourable senators, I am, of course, familiar with the credentials of the Minister of the Environment and consider him an admirable person.",
              "Surely, strengthening and adding more peacekeepers is not sufficient when we know the command and control structures are not working."]
hypotheses = []

# Translate each French source sentence to English with Microsoft Translator
msmt = microsofttranslate()
for source in sources:
    translation = msmt.translate_text("fr", "en", source)
    print(translation)
    hypotheses.append(translation)

# Score the machine translations against the references with BLEU
score = json.loads(measure_bleu(hypotheses, references, "en"))
print(score)
The source texts and references are from the Canadian Hansard corpus. For real-world evaluation, the set would have to be at least 100-200 segments long.
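Building on the example above, the same two calls can be wrapped in a small loop to evaluate several test sets in one run, which is the kind of automation the introduction refers to. The file layout and dataset names below are assumptions for illustration; only microsofttranslate and measure_bleu are taken from the example above.

import json
from mteval.microsoftmt import *
from mteval.bleu import *

# Hypothetical test sets: one source and one reference file per set,
# stored as plain text with one segment per line
datasets = {
    "hansard_dev": ("data/hansard_dev.fr", "data/hansard_dev.en"),
    "hansard_test": ("data/hansard_test.fr", "data/hansard_test.en"),
}

msmt = microsofttranslate()

for name, (source_path, reference_path) in datasets.items():
    with open(source_path, encoding="utf-8") as f:
        sources = [line.strip() for line in f if line.strip()]
    with open(reference_path, encoding="utf-8") as f:
        references = [line.strip() for line in f if line.strip()]

    # Translate and score exactly as in the short example above
    hypotheses = [msmt.translate_text("fr", "en", source) for source in sources]
    score = json.loads(measure_bleu(hypotheses, references, "en"))
    print(name, score)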