
Localization Engines

A localization engine is a stateful translation API you build and configure on Lingo.dev. Instead of sending strings to a generic LLM and hoping the output matches your expectations, you build an API that produces the translations you actually expect — consistently, across every locale, on every request.


What a localization engine does

Each engine combines five configurable layers. When a translation request arrives, the engine applies all of them automatically — no prompt engineering per request, no manual intervention.

| Layer | What it controls | Docs |
| --- | --- | --- |
| LLM Models | Which model handles each locale pair, with ranked fallback chains | LLM Models → |
| Brand Voice | How your product speaks in each language — tone, formality, style | Brand Voices → |
| Instructions | Discrete linguistic rules for specific locales — testable and debuggable individually | Instructions → |
| Glossary | Exact term mappings per locale, with semantic matching — highest precedence in the engine | Glossaries → |
| AI Reviewers | Automated evaluation using an independent LLM, after each translation | AI Reviewers → |

How the layers interact

The engine applies layers in a defined order, with a clear precedence hierarchy:

  1. Glossary — highest precedence. If a glossary rule matches, it overrides model judgment.
  2. Instructions — medium precedence. Locale-specific linguistic rules guide the model.
  3. Brand voice — sets overall context. Tone, formality, and style for the locale.

The model config determines which LLM processes the request, with automatic fallback if the primary model fails. AI reviewers run asynchronously after the translation completes — they never block the response.

Complementary, not competing

Design glossary, instructions, and brand voice to complement each other. Glossary handles exact terms, instructions handle locale-specific rules, and brand voice sets the overall voice. If a glossary item conflicts with an instruction, the glossary wins.
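The precedence order above can be sketched as a small context builder: brand voice first (broad context), then instructions, then matching glossary entries last, so they override everything before them. The types and function below are illustrative assumptions, not Lingo.dev's actual internals.

```typescript
// Illustrative only — assumed types, not Lingo.dev's real schema.
type EngineConfig = {
  brandVoice?: string;               // overall tone and style for the locale
  instructions: string[];            // locale-specific linguistic rules
  glossary: Record<string, string>;  // exact source-term -> target-term mappings
};

// Assemble prompt context in ascending precedence: brand voice sets broad
// context, instructions add rules, and matching glossary entries are emitted
// last as hard constraints that override model judgment.
function buildContext(cfg: EngineConfig, sourceText: string): string[] {
  const parts: string[] = [];
  if (cfg.brandVoice) parts.push(`Voice: ${cfg.brandVoice}`);
  for (const rule of cfg.instructions) parts.push(`Rule: ${rule}`);
  for (const [term, target] of Object.entries(cfg.glossary)) {
    if (sourceText.includes(term)) {
      parts.push(`Hard constraint: translate "${term}" as "${target}"`);
    }
  }
  return parts;
}

const parts = buildContext(
  {
    brandVoice: "Friendly but precise",
    instructions: ["Use the formal usted register"],
    glossary: { dashboard: "tablero" },
  },
  "Open the dashboard to see reports",
);
// The glossary constraint lands last, so it takes priority over the
// brand-voice and instruction context that precede it.
```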


Defaults

When you create a new engine, it comes pre-configured with model defaults — primary models and fallbacks chosen through three years of weekly localization research and optimized for quality across common and low-resource languages alike. Most teams won't need to change them.

These defaults are designed to work well out of the box. You can edit any model config, swap providers, add fallbacks, or override specific locale pairs — but the defaults already reflect what we've found works best across hundreds of language pairs. Brand voice, instructions, and glossary start empty — you add them as you learn what your product needs in each locale.


Using engines

Engines are accessible through every Lingo.dev integration:

| Integration | How it connects |
| --- | --- |
| CLI | Set engineId in i18n.json — every lingo.dev run routes through your engine |
| API | Call the localize endpoint with your API key — the engine applies all layers automatically |
| CI/CD | Same CLI config — translations run through your engine on every pull request |
| MCP | AI coding assistants can configure and use engines directly from the conversation |
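For the CLI and CI/CD integrations, wiring up an engine is a one-line addition to i18n.json. The snippet below is a sketch: the engineId value is a placeholder, and the locale fields are assumptions about a typical config rather than a verified schema.

```json
{
  "engineId": "eng_0123456789",
  "locale": {
    "source": "en",
    "targets": ["es", "de", "ja"]
  }
}
```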

If you omit engineId, the default engine in your organization is used.


Observability

Every translation request is logged: model used, tokens consumed, whether a fallback handled it, which glossary items and instructions were applied. Monitor engine performance in Reports and translation quality in AI Reviewers.
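The logged fields listed above suggest a per-request record shaped roughly like the following. This is a hypothetical sketch for illustration — the field names are assumptions, not the actual Reports schema.

```typescript
// Hypothetical log record — field names are assumptions based on the
// fields described above, not Lingo.dev's real schema.
interface TranslationLogEntry {
  model: string;                  // which model handled the request
  tokensConsumed: number;         // total tokens used
  usedFallback: boolean;          // whether a fallback model handled it
  glossaryItemsApplied: string[]; // glossary entries that matched
  instructionsApplied: string[];  // instructions that were in effect
}

const entry: TranslationLogEntry = {
  model: "primary-model",         // placeholder model name
  tokensConsumed: 412,
  usedFallback: false,
  glossaryItemsApplied: ["dashboard"],
  instructionsApplied: ["formal-register-es"],
};
```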

Before a configuration goes live, test it in the Playground — compare your engine against a raw model, or compare two engines side by side.


Next Steps

Connect Your Engine
Wire your CLI and codebase to a localization engine
LLM Models
Configure per-locale model selection and fallbacks
Brand Voices
Define how your product speaks in each language
Glossaries
Map source terms to exact translations per locale