AI Reviewers#

AI reviewers are automated quality checks that evaluate translations produced by your localization engine. After each translation request, selected AI reviewers run an independent LLM evaluation against your custom criteria — producing pass/fail verdicts or percentage scores, automatically, without manual review.


How it works#

When the localization engine completes a translation request, it checks which AI reviewers match the request's locale pair. Each matching AI reviewer with a passing sampling check is queued for asynchronous evaluation — review never blocks the translation response.

The review LLM receives the source text, translated output, locale pair, and your custom instruction. It returns a structured result: either a boolean pass/fail or a percentage score, with reasoning for imperfect results.
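The structured result can be pictured as a small discriminated union. The shape below is an illustrative sketch only — the field names are assumptions, not the actual Lingo.dev API schema:

```typescript
// Illustrative sketch of a review result (field names are assumptions,
// not the actual Lingo.dev API schema).
type ReviewResult =
  | { kind: "boolean"; passed: boolean; reasoning: string | null }
  | { kind: "percentage"; score: number; reasoning: string | null }
  | { kind: "na"; reasoning: null };

// An imperfect percentage score carries a short explanation.
const example: ReviewResult = {
  kind: "percentage",
  score: 82,
  reasoning: "Two phrases read as overly literal translations.",
};
```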


AI reviewer types#

Boolean AI reviewers#

Return a binary verdict: pass or fail. Use these for rules that are either met or not.

Examples:

  • "Does the translation preserve all HTML tags and attributes?"
  • "Are pluralization rules applied correctly for the target language?"
  • "Does the translation use formal address (Sie) in German?"

Results are aggregated as pass rates — 75% means 3 out of 4 evaluated translations passed.
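That aggregation is plain arithmetic; a minimal sketch (not Lingo.dev code):

```typescript
// Aggregate boolean verdicts into a pass rate (percentage).
function passRate(verdicts: boolean[]): number {
  const passed = verdicts.filter(Boolean).length;
  return (passed / verdicts.length) * 100;
}

passRate([true, true, true, false]); // 3 of 4 passed → 75
```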

Percentage AI reviewers#

Return a score from 0 to 100. Use these for quality dimensions that exist on a spectrum.

Examples:

  • "Rate the naturalness of the translation for a native speaker (0–100)"
  • "Score how well the translation preserves the original tone and intent (0–100)"
  • "Evaluate grammatical correctness on a scale of 0–100"

Results are aggregated as averages across the evaluation period.


AI reviewer configuration#

| Field | Description |
| --- | --- |
| Name | A label identifying the AI reviewer (e.g., "Pluralization check") |
| Instruction | The evaluation criteria, written in natural language |
| Type | `boolean` (pass/fail) or `percentage` (0–100) |
| Source locale | The source locale to match, or `*` for any |
| Target locale | The target locale to match, or `*` for any |
| Provider / Model | The LLM used for evaluation (independent of the translation model) |
| Sampling | Percentage of requests to evaluate (0–100%) |
| Allow N/A | Whether the AI reviewer can return "not applicable" for irrelevant pairs |
| Enabled | Toggle review on or off without deleting the configuration |
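Put together, a configuration might look like the object below. This is a hypothetical sketch: the property names mirror the fields above but are not the exact API schema.

```typescript
// Hypothetical AI reviewer configuration (property names are illustrative,
// not the exact Lingo.dev API schema).
const reviewer = {
  name: "Pluralization check",
  instruction:
    "Are pluralization rules applied correctly for the target language?",
  type: "boolean" as const, // or "percentage"
  sourceLocale: "en",       // "*" would match any source locale
  targetLocale: "*",        // review every target language
  provider: "anthropic",    // evaluation LLM, independent of translation
  model: "claude-sonnet",
  sampling: 50,             // evaluate roughly half of matching requests
  allowsNA: true,           // the criterion may not apply to every pair
  enabled: true,
};
```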

Writing AI reviewer instructions#

The instruction field is the core of an AI reviewer. It tells the evaluation LLM exactly what to check. Write it as a specific, testable criterion.

Good instructions#

Boolean:

```text
Check whether all HTML tags in the source text are preserved
exactly in the translation. Tags must not be added, removed,
modified, or reordered. Pass if all tags are preserved, fail
if any tag is missing or altered.
```

Percentage:

```text
Rate the fluency of the translation on a scale of 0-100.
100 means a native speaker would find it completely natural.
0 means it reads like machine output. Deduct points for
awkward phrasing, unnatural word order, or overly literal
constructions.
```

What makes a good instruction#

  • Specific criteria — define exactly what pass/fail means, or what 0 and 100 represent
  • Observable outcomes — the LLM should be able to evaluate by reading the text, not guessing intent
  • One concern per AI reviewer — split multi-dimensional quality checks into separate AI reviewers

Locale matching#

AI reviewers match translation requests by source and target locale. Wildcard * matches any locale.

| Source locale | Target locale | Matches |
| --- | --- | --- |
| en | de | Only English → German translations |
| en | * | Any translation from English |
| * | ja | Any translation into Japanese |
| * | * | All translations |

A single translation request can trigger multiple AI reviewers if several match its locale pair.
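The matching rule is simple enough to sketch. This is an assumption about how `*` behaves, consistent with the table above:

```typescript
// Wildcard locale matching: "*" matches any locale, otherwise exact match.
function localeMatches(pattern: string, locale: string): boolean {
  return pattern === "*" || pattern === locale;
}

// A reviewer matches a request only if both sides of the pair match.
function reviewerMatches(
  reviewer: { sourceLocale: string; targetLocale: string },
  request: { source: string; target: string },
): boolean {
  return (
    localeMatches(reviewer.sourceLocale, request.source) &&
    localeMatches(reviewer.targetLocale, request.target)
  );
}

reviewerMatches({ sourceLocale: "en", targetLocale: "*" },
                { source: "en", target: "ja" }); // true
```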


Sampling#

Not every translation needs to be reviewed. The sampling rate controls what percentage of matching requests get evaluated.

| Sampling | Behavior |
| --- | --- |
| 100% | Every matching request is reviewed (thorough but higher cost) |
| 50% | Roughly half of matching requests are reviewed |
| 10% | One in ten; useful for high-volume engines where trends matter more than individual scores |
| 0% | AI reviewer is effectively paused without disabling it |

Sampling is applied at request time using a random check. Over a sufficient volume of requests, the actual evaluation rate converges to the configured percentage.
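The per-request check amounts to a single uniform random draw; a minimal sketch:

```typescript
// Decide whether to review a request, given a sampling rate of 0-100.
// At 100 the check always passes (Math.random() is always below 1);
// at 0 it never does.
function shouldReview(samplingPercent: number): boolean {
  return Math.random() * 100 < samplingPercent;
}
```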


N/A support#

When allowsNA is enabled, the review LLM can return "not applicable" instead of a score. This is useful for AI reviewers whose criteria don't apply to every locale pair.

Example: An AI reviewer checking formal address conventions returns N/A for English → English translations (English has no formal/informal distinction), but returns a score for English → German.

N/A results are excluded from averages and pass rates in reporting — they don't pull scores down or inflate them.
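That exclusion amounts to filtering N/A results out before aggregating. A minimal sketch, representing N/A as `null`:

```typescript
// Average percentage scores, excluding N/A (null) results entirely.
function averageScore(results: Array<number | null>): number | null {
  const scored = results.filter((r): r is number => r !== null);
  if (scored.length === 0) return null; // nothing applicable to report
  return scored.reduce((sum, s) => sum + s, 0) / scored.length;
}

averageScore([90, null, 70]); // → 80; the N/A neither pulls down nor inflates
```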


Reasoning#

AI reviewers provide reasoning for imperfect results to help you understand what went wrong:

  • Perfect score (pass or 100%) — reasoning is null (nothing to explain)
  • N/A — reasoning is null
  • Imperfect score — a brief one-sentence explanation

This keeps the review results actionable: when a translation fails a check, the reasoning tells you why without manual investigation.


Review model#

Each AI reviewer has its own LLM provider and model configuration, independent of the translation model. This separation is intentional — the model that produces the translation should not be the same model that evaluates it.

Model independence

Using a different model for review than for translation provides an independent assessment. If GPT-4o produces the translation, evaluating with Claude Sonnet gives you a second opinion rather than self-assessment.


AI reviewer reports#

Review results are visualized in the dashboard under the AI reviewer reports section, showing:

  • Pass rates over time — for boolean AI reviewers, plotted as daily percentages
  • Average scores over time — for percentage AI reviewers, plotted as daily averages
  • Per-locale-pair breakdown — see how each source → target pair performs independently
  • Aggregate view — combine all locale pairs into a single trend line

AI reviewer reports complement the volume-focused Reports — together they give you a complete picture of both throughput and quality.


Managing AI reviewers via MCP#

If you use the Lingo.dev MCP server, your AI coding assistant can create and configure AI reviewers directly:

```text
"Create a boolean AI reviewer for all locale pairs that checks
whether HTML tags are preserved in translations."
```

```text
"Add a percentage AI reviewer for English to German that rates
translation fluency on a 0-100 scale, sampling 50% of requests."
```

Next steps#

  • Reports: Monitor translation volume, token usage, and locale coverage
  • LLM Models: Configure the translation models that AI reviewers evaluate
  • Glossaries: Set up terms that glossary compliance AI reviewers can check against
  • API Reference: Integrate the localization API into your workflow