Localization Engines#
A localization engine is a stateful translation API you build and configure on Lingo.dev. Instead of sending strings to a generic LLM and hoping the output matches your expectations, you configure an API that produces the translations you actually expect — consistently, across every locale, on every request.
What a localization engine does#
Each engine combines five configurable layers. When a translation request arrives, the engine applies all of them automatically — no prompt engineering per request, no manual intervention.
| Layer | What it controls | Docs |
|---|---|---|
| LLM Models | Which model handles each locale pair, with ranked fallback chains | LLM Models → |
| Brand Voice | How your product speaks in each language — tone, formality, style | Brand Voices → |
| Instructions | Discrete linguistic rules for specific locales — testable and debuggable individually | Instructions → |
| Glossary | Exact term mappings per locale, with semantic matching — highest precedence in the engine | Glossaries → |
| AI Reviewers | Automated evaluation using an independent LLM, after each translation | AI Reviewers → |
How the layers interact#
The engine applies layers in a defined order, with a clear precedence hierarchy:
- Glossary — highest precedence. If a glossary rule matches, it overrides model judgment.
- Instructions — medium precedence. Locale-specific linguistic rules guide the model.
- Brand voice — sets overall context. Tone, formality, and style for the locale.
The model config determines which LLM processes the request, with automatic fallback if the primary model fails. AI reviewers run asynchronously after the translation completes — they never block the response.
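The ranked fallback behavior can be sketched in a few lines. This is a minimal illustration, not the actual Lingo.dev engine internals; the type names and `translate` signature are hypothetical.

```typescript
// Hypothetical sketch of a ranked fallback chain (not the real Lingo.dev API).
type TranslateFn = (text: string, targetLocale: string) => Promise<string>;

interface ModelConfig {
  name: string; // illustrative provider/model identifier
  translate: TranslateFn;
}

// Try each model in ranked order; the first success wins.
// Only if every model in the chain fails does the request error out.
async function translateWithFallback(
  chain: ModelConfig[],
  text: string,
  targetLocale: string,
): Promise<string> {
  let lastError: unknown;
  for (const model of chain) {
    try {
      return await model.translate(text, targetLocale);
    } catch (err) {
      lastError = err; // record the failure and fall through to the next model
    }
  }
  throw new Error(`All models in the chain failed: ${String(lastError)}`);
}
```

Because the loop only advances on failure, a healthy primary model handles every request and the fallbacks add no latency.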
Complementary, not competing
Design glossary, instructions, and brand voice to complement each other. Glossary handles exact terms, instructions handle locale-specific rules, and brand voice sets the overall voice. If a glossary item conflicts with an instruction, the glossary wins.
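The precedence hierarchy above can be illustrated as a simple term resolver. This is a conceptual sketch only; the real engine works on full translations, and the names here (`Layers`, `resolveTerm`) are invented for illustration.

```typescript
// Conceptual sketch of layer precedence (not the real engine internals).
interface Layers {
  glossary: Map<string, string>; // exact term mappings: highest precedence
  instructionOverrides: Map<string, string>; // terms shaped by locale-specific rules
}

function resolveTerm(term: string, layers: Layers, modelChoice: string): string {
  // 1. Glossary is checked first: an exact match overrides everything else.
  const fromGlossary = layers.glossary.get(term);
  if (fromGlossary !== undefined) return fromGlossary;

  // 2. Instructions come next: locale rules guide the result.
  const fromInstructions = layers.instructionOverrides.get(term);
  if (fromInstructions !== undefined) return fromInstructions;

  // 3. Otherwise the model's own, brand-voice-guided output stands.
  return modelChoice;
}
```

If the same term appears in both the glossary and an instruction, the glossary mapping wins, which is exactly why the two layers should be designed to cover different ground rather than overlap.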
Defaults#
When you create a new engine, it comes pre-configured with model defaults — primary models and fallbacks selected from three years of weekly localization research, optimized for quality across common and low-resource languages alike. Most teams won't need to change them.
These defaults are designed to work well out of the box. You can edit any model config, swap providers, add fallbacks, or override specific locale pairs — but the defaults already reflect what we've found works best across hundreds of language pairs. Brand voice, instructions, and glossary start empty — you add them as you learn what your product needs in each locale.
Using engines#
Engines are accessible through every Lingo.dev integration:
| Integration | How it connects |
|---|---|
| CLI | Set engineId in i18n.json — every lingo.dev run routes through your engine |
| API | Call the localize endpoint with your API key — the engine applies all layers automatically |
| CI/CD | Same CLI config — translations run through your engine on every pull request |
| MCP | AI coding assistants can configure and use engines directly from the conversation |
If you omit engineId, requests route through your organization's default engine.
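For the CLI and CI/CD integrations, pointing a project at an engine means setting engineId in i18n.json. The snippet below is illustrative: the engineId value is a placeholder, and the surrounding fields are an assumption about the config shape — check the CLI documentation for the exact schema.

```json
{
  "engineId": "engine_abc123",
  "locale": {
    "source": "en",
    "targets": ["de", "ja", "pt-BR"]
  }
}
```

With this in place, every lingo.dev run — locally or in a pull-request pipeline — routes through the same engine, so all five layers apply identically everywhere.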
Observability#
Every translation request is logged: model used, tokens consumed, whether a fallback handled it, which glossary items and instructions were applied. Monitor engine performance in Reports and translation quality in AI Reviewers.
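The logged fields listed above can be pictured as a record per request. The interface below is a hypothetical shape for illustration, not the actual Reports schema; all field names are assumptions.

```typescript
// Hypothetical shape of one logged translation request
// (field names are illustrative, not the actual Reports schema).
interface TranslationLogEntry {
  modelUsed: string; // which LLM handled the request
  tokensConsumed: number; // tokens spent on this request
  usedFallback: boolean; // true if a fallback model handled it
  glossaryItemsApplied: string[]; // glossary entries that matched
  instructionsApplied: string[]; // instructions that fired
}
```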
Before a configuration goes live, test it in the Playground — compare your engine against a raw model, or compare two engines side by side.