PET is a stand-alone, open-source (LGPL) tool written in Java that helps you post-edit and assess machine or human translations while gathering detailed statistics on post-editing time, among other effort indicators.
Tool, documentation and examples:
Source code on GitHub
If you are interested in evaluating translations through post-editing, this is an easy and cheap solution: to set up an experiment, you only need to provide source and translation segments (from one or more MT systems; the tool does not depend on any particular MT system). Translators then post-edit the translations while implicit quality indicators such as post-editing time, keystrokes, edit operations, edit distance and possibly others are recorded for each segment. Explicit quality assessments can also be collected, and monolingual and bilingual dictionaries can be provided.
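To illustrate one of the implicit effort indicators mentioned above, the sketch below computes a standard Levenshtein (edit) distance between a raw MT segment and its post-edited version. This is a generic textbook implementation in Java, not PET's own code; the class and method names are hypothetical.

```java
// Illustrative sketch only: character-level Levenshtein distance, the
// classic measure behind the "edit distance" indicator. NOT PET's code.
public class EditDistance {

    // Two-row dynamic-programming Levenshtein distance between a and b:
    // the minimum number of insertions, deletions and substitutions
    // needed to turn a into b.
    public static int levenshtein(String a, String b) {
        int[] prev = new int[b.length() + 1];
        int[] curr = new int[b.length() + 1];
        for (int j = 0; j <= b.length(); j++) prev[j] = j;
        for (int i = 1; i <= a.length(); i++) {
            curr[0] = i;
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                curr[j] = Math.min(Math.min(curr[j - 1] + 1,  // insertion
                                            prev[j] + 1),     // deletion
                                   prev[j - 1] + cost);       // substitution
            }
            int[] tmp = prev; prev = curr; curr = tmp;
        }
        return prev[b.length()];
    }

    public static void main(String[] args) {
        String mt = "kitten";   // stand-in for a raw MT segment
        String pe = "sitting";  // stand-in for its post-edited form
        System.out.println(EditDistance.levenshtein(mt, pe)); // prints 3
    }
}
```

In a post-editing experiment this number would be computed per segment, between the machine translation shown to the translator and the segment as it stands after editing; a distance of zero means the translator accepted the output unchanged.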
The tool also supports monolingual revision, can show reference translations, can render HTML for special markup, and allows constraints to be set per segment (for example, a maximum time or length for a given post-edited segment).
Our plan is to maintain and further develop the tool, so if you have any comments/suggestions on how to improve it or ideas for interesting experiments, let us know!
Wilker Aziz (University of Wolverhampton)
Lucia Specia (University of Sheffield)
Aziz, W.; Sousa, S. C. M.; Specia, L. (2012). PET: a tool for post-editing and assessing machine translation. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), Istanbul, Turkey, May 2012. (pdf, bibtex)