Yaraku Inc., provider of the AI-powered computer-assisted translation (CAT) tool Yaraku Translate, is pleased to announce that a research paper authored by our natural language processing team has been published in Machine Translation (機械翻訳).
The journal is Japan’s only academic publication dedicated to machine translation and is issued by the Asia-Pacific Association for Machine Translation (AAMT).
The paper, titled Quality Estimation Reranking for Document-level Translation, presents research showing that a separate AI model can automatically select the best translation from multiple AI-generated candidates, resulting in more natural and coherent document-level translations. Its publication reflects external recognition of Yaraku’s ongoing research in translation technology.

Background and Contributions
The study investigates Quality Estimation (QE) reranking, a method in which multiple translation candidates produced by a neural machine translation (NMT) model are evaluated by a separate AI model that predicts which candidate is the most accurate and natural.
While most previous studies focused on sentence-level evaluation, this research is notable for analyzing QE reranking at the document level, accounting for context across multiple sentences.
To evaluate translation quality, the authors compared several approaches:
- CometKiwi — a reference-free neural quality-estimation metric
- SLIDE — a document-level extension of the CometKiwi family
- GEMBA-DA — an LLM-based direct assessment method
The results show a promising trend: translation quality improves as the number of translation candidates grows, with gains continuing up to 32 candidates.
This demonstrates how reranking can be integrated into translation engines to strengthen performance.
Because the approach is simple and computationally efficient, these findings suggest strong applicability to real-world business documents and other long-form text.
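The reranking loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `toy_score` is a hypothetical stand-in scorer, whereas a real system would call a QE model such as CometKiwi to score each candidate against the source without needing a reference translation.

```python
def qe_rerank(source, candidates, score_fn):
    """Return the candidate the QE scorer ranks highest for this source text."""
    return max(candidates, key=lambda cand: score_fn(source, cand))

# Hypothetical stand-in scorer for illustration only:
# a real QE model would estimate translation quality, not length.
def toy_score(source, candidate):
    return len(candidate)

best = qe_rerank(
    "こんにちは、世界。",
    ["Hello world.", "Hello, world.", "Hi."],
    toy_score,
)
print(best)
```

In practice, the N-best candidates would come from the NMT engine's beam search or sampling, and the same selection step applies unchanged; only the scoring function is swapped for a trained QE model.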

Relevance to Yaraku’s Development Approach
Although this research is not a feature directly implemented in Yaraku Translate, it provides academic validation for Yaraku’s technology direction, including:
- Enhancing document-level coherence and naturalness
- Designing systems that choose the best output among multiple translation candidates
- Improving translation through objective, data-driven evaluation rather than intuition
Yaraku continues to strengthen translation quality by combining stable NMT with complementary use of large language models (LLMs) such as GPT and Claude, ensuring more natural and contextually appropriate translations.

Value for Users of Yaraku Translate
The research offers several meaningful insights for organizations using Yaraku Translate:
• Validated direction for translation technology
Confirms that Yaraku’s focus on document-level quality aligns with academically supported methodologies.
• A foundation for continuous product improvement
Indicates that Yaraku’s translation models are developed with scientific rigor, supporting ongoing enhancements.
• Higher consistency for enterprise documents
Benefits use cases such as contracts, manuals, and IR materials where cross-sentence consistency is essential.
• Evidence of a data-driven development approach
Demonstrates Yaraku’s commitment to evaluating translation quality with objective, research-based methods.
• Confidence in long-term product reliability
Shows Yaraku’s sustained investment in translation research and technology.

Paper Details
Title:
Quality Estimation Reranking for Document-level Translation
Published in:
AAMT Journal Machine Translation (機械翻訳), No. 83 (November 29, 2025)
(Publisher: Asia-Pacific Association for Machine Translation)
Key Topics:
- Document-level QE reranking
- Reranking N-best outputs from NMT models
- Comparison of SLIDE, CometKiwi, and GEMBA-DA
- Quality improvements achieved with automatic evaluation of translation candidates
- Computational efficiency for practical use

Authors


LINK
AAMT Journal Machine Translation (機械翻訳), No. 83
※ The paper begins on page 50 of the PDF.
