
The localization engineering platform.

Teams configure localization engines that persist glossaries, brand voice, and per-locale model chains – then call them from backend code, CLI, CI/CD, or MCP. Terminology stays consistent across every locale, every release.
Create localization engine · Talk to a localization expert

Words processed on Lingo.dev: 236,779,253

Mistral AI · Freenet · Edyoucated · Solana · Laurel · Cal.com · Truely

A shift in where the work lives.

Localization used to sit outside the product – weeks of coordination, translators new to the domain, fresh context every request. Teams that moved it inside the product found it compounds.

Before

Translation as a vendor relationship.

Requests leave the product. Vendors translate without knowing the domain. Every turnaround arrives fresh, so terminology drifts release over release. Quality lives in reviewer opinions no one can audit.

After

Translation as infrastructure you configure.

A localization engine holds your glossary, brand voice, and per-locale model chain. Every request enters the product with that context already attached. Terminology locks. Quality is scored per dimension and tracked over time.

“Lingo.dev's approach reminds me of how Stripe revolutionized payments - making a complex process invisible to developers.”
Pete Koomen

Group Partner, Y Combinator

The primitive

What lives in a localization engine.

A stateful translation API. Create it, configure it, call it.

A localization engine persists domain context across every request. Your glossary enforces product terminology. Your brand voice defines register per locale. Your model chain selects providers with ranked fallback. Every rule your team adds persists, preventing terminology drift.

What lives in a localization engine

Glossary
Per locale pair, matched by semantic similarity
Brand voice
Tone and formality per locale
Instructions
Locale-specific rules (formats, conventions)
Model chain
Ranked providers with automatic fallback
AI Reviewers
Independent scorers per quality dimension
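The ranked model chain above is the simplest of these to picture: try providers in order, fall through on failure. A minimal sketch, with stub providers standing in for real LLM backends (the names and call signature here are illustrative, not Lingo.dev's API):

```python
def translate_with_chain(chain, text):
    """Try each provider in rank order; return the first successful result."""
    errors = []
    for provider in chain:
        try:
            return provider(text)
        except Exception as exc:  # a real chain would catch provider-specific errors
            errors.append(exc)
    raise RuntimeError(f"all {len(chain)} providers failed: {errors}")

# Stub providers standing in for ranked LLM backends.
def primary(text):
    raise TimeoutError("primary provider down")

def fallback(text):
    return f"übersetzt: {text}"

print(translate_with_chain([primary, fallback], "Hello"))  # übersetzt: Hello
```

The point of ranking is that the caller never sees the outage: the engine absorbs the failed attempt and answers from the next provider down.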
POST /process/localize
curl -X POST https://api.lingo.dev/process/localize \
  -H "X-API-Key: $LINGO_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "engineId": "eng_abc123",
    "sourceLocale": "en",
    "targetLocale": "de",
    "data": {
      "greeting": "Hello, world!",
      "cta": "Get started"
    }
  }'

200 OK

{
  "data": {
    "greeting": "Hallo, Welt!",
    "cta": "Jetzt starten"
  }
}
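The curl call maps to a few lines in any language. A minimal stdlib-only Python sketch that mirrors the request shape shown above (this is an illustration of the HTTP call, not an official SDK):

```python
import json
import urllib.request

API_URL = "https://api.lingo.dev/process/localize"

def build_localize_request(engine_id, source_locale, target_locale, data, api_key):
    """Assemble the POST request shown in the curl example."""
    body = json.dumps({
        "engineId": engine_id,
        "sourceLocale": source_locale,
        "targetLocale": target_locale,
        "data": data,
    }).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

def localize(engine_id, source_locale, target_locale, data, api_key):
    """Send the request and return the localized payload."""
    req = build_localize_request(engine_id, source_locale, target_locale, data, api_key)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["data"]
```

Note that the engine ID is the only localization-specific state the caller carries; glossary, brand voice, and model chain all live server-side.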
“Developer experience is everything in developer tools. Lingo.dev nails it - they've turned complex localization into a few lines of code.”
Paul Copplestone

CEO & Co-Founder, Supabase

Re-localize what changed. Retrieve what didn't.

Re-localizing the entire product every release is wasteful and overwrites approved content. Re-localizing only what changed is the right approach – but each change reaches the model alone, without the rest of the product around it, and terminology drifts. Retrieval-augmented localization is the third way: re-localize only what changed, and retrieve the matching context per request.

Only what changed, no context

Every change is a fresh guess.

Input
"Users can switch providers at any time."
Context
(none)
Output (pt)
"fornecedores"

Without surrounding content, the model doesn't know whether "provider" is a vendor, a service, or a supplier. It picks the everyday word. Next release it might pick a different one.

648 terminology errors – top-tier model on the hardest case for terminology (regulatory prose).

Only what changed, with RAL

Only the terms that matter for this change.

Input
"Users can switch providers at any time."
Context
provider → prestador
Output (pt)
"prestadores"

The localization engine decomposes the input, runs similarity search against your glossary, and injects only the matching terms at inference time. The context window stays small. The translation stays consistent.

266 errors with RAL – same model, same articles. 59% fewer on the hardest case. Easier cases close further.
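The retrieval step can be sketched with toy vectors. Real engines use learned embeddings over a much larger glossary, but the mechanics are the same (the vectors, threshold, and glossary entries here are illustrative):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy glossary: term -> (embedding, approved pt translation).
GLOSSARY = {
    "provider": ([0.9, 0.1, 0.2], "prestador"),
    "invoice":  ([0.1, 0.8, 0.3], "fatura"),
}

def retrieve_terms(source_embedding, threshold=0.8):
    """Return only the glossary entries whose embedding is close to the source."""
    return {
        term: translation
        for term, (emb, translation) in GLOSSARY.items()
        if cosine(source_embedding, emb) >= threshold
    }

# A sentence about switching providers embeds near "provider", far from "invoice".
matches = retrieve_terms([0.88, 0.15, 0.18])
print(matches)  # {'provider': 'prestador'}
```

Only the matched pair reaches the prompt, which is why the context window stays small no matter how large the glossary grows.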

Professional localization infrastructure, step by step.

Every async localization runs this pipeline, with per-step configuration and per-locale failure isolation. Your glossary, brand voice, and instructions retrieved and injected before the model generates a token.

1

Source Refinement

An LLM pre-processes the source text before translation. Resolves ambiguous references, normalizes concatenated strings, and flags placeholder syntax issues. Prevents the garbage-in / garbage-out problem at the source.

2

Context Enrichment

Cosine similarity over vector embeddings matches source terms to glossary entries. Per-locale brand voice profiles inject tone and formality. Locale-specific instructions handle date formats, number separators, and currency conventions.

3

LLM Translation

The model receives enriched context: glossary matches, brand voice profile, instructions, and the source text. Translation happens with the full context of your product, not in isolation.

4

Human Post-Editing

Optional step, managed entirely behind the API. Our network of qualified translators reviews LLM output when your quality rules require it. Useful for regulated content, legal copy, or high-visibility launches. You configure the trigger - we handle the rest.

5

LLM Post-Review

A second, independent model evaluates the human-edited translation. Boolean checks for tag preservation. Percentage scores for fluency and naturalness. Catches inconsistencies that single-pass review misses.

6

Delivery

Results arrive via webhook with HMAC signature verification, or stream over WebSocket for real-time UIs. Each locale completes independently.
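Verifying an HMAC-signed webhook generally looks like the following sketch; the header name, secret format, and hash algorithm here are assumptions for illustration – consult the delivery docs for the exact scheme:

```python
import hashlib
import hmac

def verify_webhook(payload: bytes, signature_hex: str, secret: bytes) -> bool:
    """Recompute the HMAC over the raw body and compare in constant time."""
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

secret = b"whsec_example"
body = b'{"data": {"greeting": "Hallo, Welt!"}}'
sig = hmac.new(secret, body, hashlib.sha256).hexdigest()

assert verify_webhook(body, sig, secret)
assert not verify_webhook(body + b" ", sig, secret)  # any tampering fails
```

The constant-time comparison (`compare_digest`) matters: a naive `==` leaks timing information an attacker can use to forge signatures byte by byte.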

Research

Retrieval Augmented Localization.

A controlled evaluation across multiple LLM providers and multiple languages on terminology-dense content. Per-paragraph terminology errors compared against official human reference translations using GEMBA-MQM – tested with a paired Wilcoxon signed-rank test, Holm-Bonferroni corrected, p < 0.001 on every provider.

17–45%
Terminology error reduction
42,000+
Paired quality judgments
Read the paper
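For reference, the Holm-Bonferroni step-down procedure mentioned above holds the smallest p-value to the strictest threshold. A stdlib sketch applied to a set of per-provider p-values (the numbers are illustrative, not the paper's):

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Return a reject/accept decision per hypothesis, Holm step-down."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        # The k-th smallest p-value is compared against alpha / (m - k).
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # step-down: once one fails, all larger p-values fail too
    return reject

print(holm_bonferroni([0.001, 0.06, 0.0005]))  # [True, False, True]
```

The correction controls the family-wise error rate across providers, so a result like p < 0.001 on every provider survives even the strictest adjusted threshold.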
Quality

Quality you can measure, in a language you don't speak.

Cross-model quality scoring, glossary enforcement, and human review when it matters. Per-dimension scores your team can report upward, even for locales nobody on staff reads.

Automated Quality Scoring

Cross-model evaluation: one model translates, a different model scores. MQM dimensions for fluency, accuracy, terminology, and style. Per-locale trends and regression alerts in the dashboard.

Glossary and Brand Voice

Your glossary locks product terminology across locales. Brand voice configures tone and formality per language. Every release uses the terms your team already approved – no drift, no fresh guesses.

Human Review When It Matters

When a score drops below your threshold, the translation routes to our network of qualified translators behind the API - without vendor management or manual handoff coordination.
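The scoring-to-review handoff reduces to a simple gate over per-dimension scores. A sketch (dimension names follow the MQM dimensions listed above; the threshold value is illustrative):

```python
MQM_DIMENSIONS = ("fluency", "accuracy", "terminology", "style")

def needs_human_review(scores: dict, threshold: float = 0.90) -> bool:
    """Route to human post-editing if any quality dimension falls below threshold."""
    return any(scores.get(dim, 0.0) < threshold for dim in MQM_DIMENSIONS)

assert not needs_human_review({"fluency": 0.97, "accuracy": 0.95,
                               "terminology": 0.99, "style": 0.93})
assert needs_human_review({"fluency": 0.97, "accuracy": 0.95,
                           "terminology": 0.81, "style": 0.93})
```

Because the gate runs per translation, only the strings that actually score low reach human reviewers; everything else ships straight through.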

“With Lingo.dev, Dutch reads naturally, Russian fits our UI perfectly, and our brand voice stays consistent.”
Sebastiaan van Leeuwen

Product Manager, Truely

Developers

Four surfaces into your localization engine.

Your localization owner configures quality rules. You call the API, run the CLI, or add the GitHub Action.

Official GitHub Technology Partner

Lingo API

Translate via HTTP. Synchronous and asynchronous modes with webhook delivery.

Read the docs

Lingo React MCP

AI agents configure localization engines and translate directly in your IDE.

Read the docs

Lingo CLI

First translated build in 4 minutes. 14 file formats supported.

Read the docs

Lingo GitHub Action

Every push triggers translation. PRs include localized strings.

Read the docs
“Just like Dependabot made package updates effortless through automation, Lingo.dev is doing the same for localization.”
Grey Baker

Creator of Dependabot

Platforms

A localization engine per tenant. Per product. Per team.

Lingo.dev hosts localization engines. Create as many as your architecture needs – each configured independently, provisioned via API, observable in isolation.

Per tenant

One localization engine per customer.

Provision a localization engine for every workspace or account via one API call. Each tenant gets their own glossary, brand voice, and model chain. Usage tracks per tenant.

Per surface

One localization engine per content surface.

Marketing copy, product UI, legal boilerplate, help docs – each has different terminology. Run a separate localization engine per surface. Glossary and quality scores stay isolated.

At scale

As many localization engines as you need.

Localization engines are first-class infrastructure primitives on Lingo.dev. Provision via API. Call like any service. Run one or thousands – the platform handles model routing, storage, and scoring. Usage tracks per localization engine.

Enterprise

Professional localization, delivered as infrastructure.

The full professional localization workflow - source pre-editing, retrieval-augmented translation, human post-editing, AI quality gates - encapsulated behind a single API. SOC 2 Type II audited.

Engineering

Professional localization pipeline

Source pre-editing, context enrichment, LLM translation, human post-editing, AI quality gates, and delivery - orchestrated behind a single API call with per-locale failure isolation.

Retrieval-augmented translation

Glossary terms matched by semantic similarity via vector retrieval, with brand voice and product context pulled automatically from your localization engine's state.

Stateful localization engines

Your localization engine accumulates glossary, brand voice, translation memory, and model configuration across requests - context persists and improves with every job.

Localization

Centralized governance

Localization team defines glossaries, brand voice, quality thresholds, and review checkpoints per localization engine - product teams self-serve through CI/CD.

Quality and cost reporting

Per-localization-engine dashboards with quality scores, token consumption, glossary coverage, change rates from GitHub commits, and per-model LLM cost breakdowns.

Team autonomy

Translations arrive in pull requests - Localization team owns governance and quality standards while localization scales without proportional team growth.

Security

SOC 2 Type II

Audited security controls with encryption at rest and in transit, DPA available on request.

Data encryption

AES-256 at rest, TLS in transit, API keys hashed at the application layer, with data residency options for EU and US regions.

99.9% Uptime SLA

Multi-region infrastructure with ranked model fallback chains protecting against individual provider outages.

Talk to a Localization Expert
SOC 2 Type II · 99.9% Uptime SLA · GitHub Technology Partner · Dedicated Support

Create Your First Localization Engine.

One API call. Glossary, brand voice, model config, quality rules - all encapsulated in a stateful localization engine that improves with every job.

Read the docs · Talk to a Localization Expert

HQ'd in San Francisco + worldwide
SOC 2 Type II · CCPA · GDPR
Backed by Y Combinator & Initialized Capital & our customers

© 2026 Lingo.dev (Replexica, Inc).
