# Translation Complexity Scorer

A tool for scoring translation complexity using multiple metrics and machine learning approaches. It helps translators, language service providers, and researchers assess text complexity to optimize translation workflows and resource allocation.
## Features

### Multiple Scoring Methods
- **Traditional Readability Metrics**
  - Flesch-Kincaid Grade Level
  - Coleman-Liau Index
  - Gunning Fog Index
  - SMOG Index
  - Flesch Reading Ease
- **Linguistic Feature Analysis**
  - Average sentence length
  - Lexical diversity (type-token ratio)
  - Syntactic complexity (dependency tree depth)
  - Vocabulary rarity analysis
- **Translation-Specific Metrics**
  - Semantic complexity using transformer embeddings
  - Idiomatic expression density
  - Domain-specific terminology detection
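To make the linguistic features concrete, here is a rough, self-contained sketch of two of them. It uses naive regex tokenization rather than the spaCy/NLTK pipelines the tool itself relies on, and the helper names (`avg_sentence_length`, `type_token_ratio`) are illustrative, not this package's API:

```python
import re

def _words(text):
    # Naive word tokenizer; the real tool uses spaCy/NLTK tokenization.
    return re.findall(r"[A-Za-z']+", text)

def avg_sentence_length(text):
    # Mean number of words per sentence, splitting on terminal punctuation.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return len(_words(text)) / max(len(sentences), 1)

def type_token_ratio(text):
    # Lexical diversity: unique word forms divided by total tokens.
    tokens = [w.lower() for w in _words(text)]
    return len(set(tokens)) / max(len(tokens), 1)
```

Higher type-token ratios indicate more varied vocabulary, which generally makes a text harder to translate consistently.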
### Technical Highlights
- GPU acceleration support (CUDA-compatible)
- Normalized scores (0-1 range)
- Configurable weights for different metrics
- Batch processing support
- Detailed score breakdown
- Comprehensive test suite
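The normalization and weighting steps could look something like the sketch below (hypothetical helpers, not the package's actual API): each raw metric is clamped and rescaled into the 0-1 range, then the component scores are combined as a weighted average.

```python
def normalize(value, low, high):
    # Clamp and rescale a raw metric (e.g. a grade level) into 0-1.
    scaled = (value - low) / (high - low)
    return min(max(scaled, 0.0), 1.0)

def weighted_overall(scores, weights):
    # Weighted average of normalized component scores.
    total = sum(weights.values())
    return sum(scores[name] * w for name, w in weights.items()) / total
```

With group weights like those shown in the Configuration section (0.3 readability, 0.4 linguistic, 0.3 translation), the overall complexity is simply the weighted mean of the three group scores.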
## Requirements
- Python 3.8+
- CUDA-compatible GPU (optional, for faster processing)
- 4GB+ RAM
## Installation

```bash
# Clone the repository
git clone https://github.com/ethanke/translation-complexity-score.git
cd translation-complexity-score

# Create a virtual environment (recommended)
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Download required models
python -m spacy download en_core_web_sm
python -m nltk.downloader punkt averaged_perceptron_tagger
```

## Quick Start
```python
from translation_complexity import TranslationComplexityScorer

# Initialize the scorer
scorer = TranslationComplexityScorer()

# Score a single text
text = "Your text to analyze here."
scores = scorer.score_text(text)

print(f"Overall Complexity: {scores['overall_complexity']:.2f}")
print(f"Complexity Level: {scorer.config.get_complexity_level(scores['overall_complexity'])}")

# Batch processing
texts = ["Text 1", "Text 2", "Text 3"]
batch_scores = scorer.batch_score(texts)
```

## Example Output
```json
{
  "flesch_kincaid": 0.45,
  "coleman_liau": 0.38,
  "gunning_fog": 0.52,
  "smog": 0.41,
  "flesch_reading_ease": 0.65,
  "avg_sentence_length": 0.48,
  "lexical_diversity": 0.72,
  "syntactic_complexity": 0.55,
  "vocabulary_rarity": 0.33,
  "semantic_complexity": 0.61,
  "idiomatic_density": 0.25,
  "domain_specificity": 0.44,
  "overall_complexity": 0.48
}
```

## Project Structure
```
translation_complexity/
├── metrics/
│   ├── readability.py    # Traditional readability metrics
│   ├── linguistic.py     # Linguistic feature analysis
│   └── translation.py    # Translation-specific metrics
├── utils/
│   └── helpers.py        # Utility functions
├── config.py             # Configuration settings
└── scorer.py             # Main scoring interface
```
## Configuration
You can customize the scoring weights and thresholds:
```python
from translation_complexity import Config, TranslationComplexityScorer

config = Config(
    READABILITY_WEIGHT=0.3,
    LINGUISTIC_WEIGHT=0.4,
    TRANSLATION_WEIGHT=0.3,
    LOW_COMPLEXITY_THRESHOLD=0.25,
    MEDIUM_COMPLEXITY_THRESHOLD=0.45,
    HIGH_COMPLEXITY_THRESHOLD=0.65
)

scorer = TranslationComplexityScorer(config)
```

## Contributing
Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
## Citation
If you use this tool in your research, please cite:
```bibtex
@software{translation_complexity_scorer,
  title  = {Translation Complexity Scorer},
  author = {Ethan Kerdelhue},
  year   = {2025},
  url    = {https://github.com/ethanke/translation-complexity-score}
}
```

## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments
- The Hugging Face team for their transformer models
- The spaCy team for their excellent NLP library
- NLTK contributors for their linguistic analysis tools