LLM Models

Max Prilutskiy · Updated 13 days ago · 4 min read

Every localization engine on Lingo.dev uses LLM models to produce translations. You choose which model handles each locale pair, configure fallbacks for reliability, and use wildcard locales to set defaults - all without managing API keys or provider accounts.

Available models

Lingo.dev provides access to 400+ models from every major provider through a single platform:

Provider    Notable models
OpenAI      GPT-4o, GPT-4 Turbo, o3, o4-mini
Anthropic   Claude Opus, Claude Sonnet, Claude Haiku
Google      Gemini 2.0 Flash, Gemini Pro (up to 1M token context)
Meta        Llama 3.3 70B, Llama 3.1 405B
Mistral     Mistral Large, Mixtral
DeepSeek    DeepSeek V3

The full model catalog - with context window sizes - is available on the LLM Models page.

No provider accounts needed

You don't need API keys from individual providers. Lingo.dev handles authentication, billing, and routing to all models through a unified infrastructure.

Model configs

A model config assigns a specific model to a source-target locale pair within a localization engine.

Field           Description
Provider        The model provider (e.g., openai, anthropic, google)
Model           The specific model (e.g., gpt-4o, claude-sonnet-4-5-20250514)
Source locale   The source locale, or * for any source
Target locale   The target locale, or * for any target

When the engine receives a translation request, it selects the most specific matching config based on the source and target locales.
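To make the selection rule concrete, here is a minimal TypeScript sketch. The field names mirror the table above but are illustrative, not the actual Lingo.dev API schema:

```typescript
// Illustrative shape of a model config; field names follow the table
// above, not necessarily the real Lingo.dev schema.
interface ModelConfig {
  provider: string; // e.g. "openai"
  model: string;    // e.g. "gpt-4o"
  source: string;   // source locale, or "*" for any
  target: string;   // target locale, or "*" for any
}

// Most specific match wins: an exact target outranks a wildcard,
// then an exact source outranks a wildcard.
function selectConfig(
  configs: ModelConfig[],
  source: string,
  target: string,
): ModelConfig | undefined {
  const specificity = (c: ModelConfig) =>
    (c.target === target ? 2 : 0) + (c.source === source ? 1 : 0);
  return configs
    .filter(
      (c) =>
        (c.source === source || c.source === "*") &&
        (c.target === target || c.target === "*"),
    )
    .sort((a, b) => specificity(b) - specificity(a))[0];
}
```

With a catch-all `*/*` config and an exact `en/ja` config, an en → ja request picks the exact one, while any other pair falls through to the catch-all.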

Defaults and customization

The Lingo.dev team has been researching which models produce the best translations for each language pair since 2023. Every new localization engine comes pre-configured with sensible defaults - primary models and fallbacks chosen from that research and optimized for quality across common and low-resource languages alike. Most teams won't need to change them.

That said, the engine's model configuration is fully yours to control: you can edit any model config, swap providers, add fallbacks, or override specific locale pairs with models you prefer.

Fallback models

LLMs evolve fast - new models ship weekly, capabilities improve with each generation, and pricing drops as competition intensifies. But that velocity comes at a cost: provider outages, rate limits, content filter changes, and model deprecations are routine. A production localization pipeline that depends on a single model is a pipeline that will eventually break.

Lingo.dev's localization engine is purpose-built for production-grade translation workflows. Each locale pair supports a fallback model - if the primary model fails, the engine automatically and transparently tries the next fallback model, without any intervention or failed requests reaching your users.
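Conceptually, the fallback behavior is a retry loop over the configured model order. The sketch below is illustrative (the function name and shape are hypothetical, not the engine's actual internals):

```typescript
// Try each model in its configured order until one succeeds.
// The caller never sees the intermediate failures.
async function translateWithFallback(
  orderedModels: string[],
  translate: (model: string) => Promise<string>,
): Promise<string> {
  let lastError: unknown = new Error("no models configured");
  for (const model of orderedModels) {
    try {
      return await translate(model); // success: earlier failures stay invisible
    } catch (err) {
      lastError = err; // outage, rate limit, etc. - move on to the next model
    }
  }
  throw lastError; // only surfaces if every configured model failed
}
```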

How fallback ordering works

The engine sorts available configs by specificity, then by priority:

  1. Target locale specificity - exact target locale beats wildcard *
  2. Source locale specificity - exact source locale beats wildcard *
  3. Priority - default, then fallback

Example

Given these configs for an engine:

Source   Target   Model           Priority
en       de       GPT-4o          Default
en       de       Claude Sonnet   Fallback
*        de       Gemini Flash    Default
*        *        GPT-4o-mini     Default

A request translating en → de tries models in this order:

  1. GPT-4o - exact match, default
  2. Claude Sonnet - exact match, fallback
  3. Gemini Flash - wildcard source, exact target, default
  4. GPT-4o-mini - wildcard both, default

A request translating fr → de skips the first two (source doesn't match) and starts at Gemini Flash.
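The three sorting rules can be sketched in TypeScript. The `fallbackOrder` helper below is hypothetical (not the engine's actual code) but reproduces the ordering from the example configs:

```typescript
type Priority = "default" | "fallback";

// Illustrative config shape matching the example table above.
interface ModelConfig {
  source: string; // locale or "*"
  target: string; // locale or "*"
  model: string;
  priority: Priority;
}

// Sort matching configs by target specificity, then source
// specificity, then default before fallback.
function fallbackOrder(
  configs: ModelConfig[],
  source: string,
  target: string,
): string[] {
  return configs
    .filter(
      (c) =>
        (c.source === source || c.source === "*") &&
        (c.target === target || c.target === "*"),
    )
    .sort((a, b) => {
      const t = Number(b.target !== "*") - Number(a.target !== "*");
      if (t !== 0) return t; // rule 1: exact target beats wildcard
      const s = Number(b.source !== "*") - Number(a.source !== "*");
      if (s !== 0) return s; // rule 2: exact source beats wildcard
      // rule 3: default before fallback
      return (
        Number(a.priority === "fallback") - Number(b.priority === "fallback")
      );
    })
    .map((c) => c.model);
}

const configs: ModelConfig[] = [
  { source: "en", target: "de", model: "GPT-4o", priority: "default" },
  { source: "en", target: "de", model: "Claude Sonnet", priority: "fallback" },
  { source: "*", target: "de", model: "Gemini Flash", priority: "default" },
  { source: "*", target: "*", model: "GPT-4o-mini", priority: "default" },
];
```

Running `fallbackOrder(configs, "en", "de")` yields the four-model order listed above, and `fallbackOrder(configs, "fr", "de")` starts at Gemini Flash because the en-source configs are filtered out.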

Fallback tracking

When a fallback model handles a request, the engine records it in the request log. Monitor fallback usage in Reports to identify unreliable primary models.

Wildcard locales

Set source or target locale to * to create default configs that apply when no locale-specific config exists.

Common patterns:

Source   Target   Model           Purpose
*        *        GPT-4o          Catch-all default for any locale pair
en       *        Claude Sonnet   Default for all English-source translations
*        ja       GPT-4o          Use a specific model for Japanese targets
en       de       Mistral Large   Override default for this specific pair

Specific configs always take priority over wildcard configs. Use wildcards to set sensible defaults, then override for locale pairs that need special handling.

Managing model configs via MCP

If you use the Lingo.dev MCP server, your AI coding assistant can configure models directly:

  "Set GPT-4o as the primary model for English to German,
  with Claude Sonnet as fallback."

  "Add a catch-all model config using GPT-4o-mini for
  all locale pairs."

Next Steps

  • Brand Voices - Define overall tone and formality per locale
  • AI Reviewers - Monitor translation quality per model
  • Reports - Track model usage, fallbacks, and token consumption
  • API Reference - Integrate the localization API into your workflow
