

Async Localization Pipeline

Max Prilutskiy · Updated 9 days ago · 5 min read

An async localization job can run through an optional pipeline of stages that wrap the core translate step - AI source cleanup, human review, AI post-edit, and a back-translation drift check. Each stage is independently toggled on the localization engine, runs only for jobs created via the async API, and appears as its own step record in the job's status response.

Async API only

Pipeline stages only apply to jobs created through the Async Localization API. The synchronous /localize endpoint runs the core translate step only - pipeline settings on the engine are ignored.


Stages at a glance

The pipeline wraps the core localization step. Any combination of stages can be enabled - disabled stages are skipped entirely, and a failure in any non-critical stage does not fail the job.

1. Pre-localization AI edit - Optional. An AI agent cleans the source payload - fixing typos, grammar, and spelling errors - before translation. Non-critical: if it fails, the original source is used.
2. Core localization - Always runs. Your localization engine applies its model config, glossary, brand voice, and instructions to produce the translation.
3. Post-localization human review - Optional. The translation is sent to a qualified human translator, who edits it for clarity, consistency, and accuracy. The workflow waits for their output before continuing.
4. Post-localization AI review - Optional, requires human review. An AI agent reviews the human output and adjusts it to adhere to your engine configuration.
5. Back-translation check - Optional. The final translation is translated back into the source locale, compared against the original, and adjusted if semantic drift is detected.
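The stage ordering and skip rules above can be sketched as a small planning function. This is an illustrative sketch, not the engine's actual code - only the stepId values (`preEdit`, `localize`, `humanEdit`, `postEdit`, `backTranslation`) come from this page; the function and type names are assumptions.

```typescript
// Given a pipeline config, return the ordered stepIds that would run.
// Hypothetical helper for illustration; not the real engine implementation.
type StageToggles = {
  preEdit?: { enabled: boolean };
  humanEdit?: { enabled: boolean };
  postEdit?: { enabled: boolean };
  backTranslation?: { enabled: boolean };
};

function plannedSteps(config: StageToggles): string[] {
  const steps: string[] = [];
  if (config.preEdit?.enabled) steps.push("preEdit");
  steps.push("localize"); // the core step always runs
  if (config.humanEdit?.enabled) {
    steps.push("humanEdit");
    // postEdit is only available when human review is enabled
    if (config.postEdit?.enabled) steps.push("postEdit");
  }
  if (config.backTranslation?.enabled) steps.push("backTranslation");
  return steps;
}
```

Note that with every stage disabled the plan collapses to the single core `localize` step, matching the synchronous endpoint's behavior.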

Pre-localization AI edit

An AI agent reviews the source payload before translation, correcting typos, grammar mistakes, and spelling errors. Cleaner source text produces more consistent translations - especially across many target locales, where a single error in the source propagates to every output.

This stage is non-critical. If the pre-edit call fails or times out, the original source is passed through unchanged and the job continues.

When to enable: source content is user-generated, scraped, or otherwise likely to contain noise. Skip it for curated content that has already passed editorial review.
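The non-critical fallback behavior can be expressed as a simple wrapper. A minimal sketch under assumed names - the real pipeline's internals are not exposed; this just illustrates "on failure, pass the original source through":

```typescript
// Illustrative only: run a cleanup callback, but fall back to the untouched
// source payload if the cleanup throws (e.g. the pre-edit call fails or times out).
function preEditOrPassthrough(
  source: Record<string, string>,
  cleanup: (s: Record<string, string>) => Record<string, string>,
): Record<string, string> {
  try {
    return cleanup(source);
  } catch {
    return source; // non-critical: the job continues with the original source
  }
}
```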

Post-localization human review

The AI translation is submitted to a qualified human translator, who edits it for clarity, consistency, and accuracy while adhering to your engine configuration. The async job pauses until the human output is returned, then carries that output forward to any remaining stages.

Tiers

Two tiers are available:

| Tier     | Best for                                                                                 |
|----------|------------------------------------------------------------------------------------------|
| Standard | Accurate translation with a human voice - marketing copy, UI strings, help content       |
| Pro      | Professional use with an even higher accuracy level - legal, medical, and regulated content |

Timeout

You configure how long the workflow waits for the human translator. If the timeout expires with no response, the stage is marked skipped and the job continues with the AI translation as the final output. Default: 48 hours.

Event-driven wait

The workflow resumes on a webhook event from the translation provider - it is not polling on a tight loop. Setting a long timeout does not consume compute in the background.
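The timeout semantics described above can be sketched as a status decision. The constant and function names here are assumptions for illustration; only the 48-hour default and the completed/skipped outcomes come from this page.

```typescript
// Hypothetical sketch of the human-review wait outcome.
const DEFAULT_HUMAN_REVIEW_TIMEOUT_MS = 48 * 60 * 60 * 1000; // documented default: 48 hours

function humanEditOutcome(
  responseReceived: boolean,
  elapsedMs: number,
  timeoutMs: number = DEFAULT_HUMAN_REVIEW_TIMEOUT_MS,
): "completed" | "skipped" | "waiting" {
  if (responseReceived) return "completed";
  // deadline passed with no webhook event: mark skipped, keep the AI translation
  return elapsedMs >= timeoutMs ? "skipped" : "waiting";
}
```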

Post-localization AI review

After the human translator returns their edit, an AI agent reviews the human output and adjusts it to align with your engine's configuration - glossary terms, brand voice, and instructions. Human translators sometimes introduce variations that conflict with engine-level rules; this stage reconciles the two.

This stage is only available when post-localization human review is enabled, and only runs when the human stage actually produces output. If the human stage times out or is skipped, post-edit is also skipped - even when postEdit.enabled is true.

Not the same as AI Reviewers

Post-localization AI review modifies the translation output to conform with your engine rules. AI Reviewers are a separate feature that scores translation quality asynchronously without altering the output. You can use both together - post-edit refines the text, reviewers evaluate the result.
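The runtime gating described above - enabled in config *and* backed by actual human output - reduces to one predicate. A sketch under assumed names, not the real scheduler:

```typescript
// Illustrative: postEdit runs only when it is enabled AND the human review
// stage actually completed with output; a skipped or failed human stage
// skips postEdit too, regardless of postEdit.enabled.
function shouldRunPostEdit(
  postEditEnabled: boolean,
  humanStepStatus: "completed" | "skipped" | "failed",
): boolean {
  return postEditEnabled && humanStepStatus === "completed";
}
```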

Back-translation check

A multi-step quality check that verifies the translation preserves the meaning of the source.

1. Reverse translate - The translated output is translated back into the source locale using the same engine configuration, adapted for the reverse direction.
2. Detect drift - An AI agent compares the back-translation against the original source and flags issues by severity - minor, major, or critical.
3. Correct if needed - If major or critical issues are detected, another AI pass adjusts the forward translation to resolve them. Minor issues are recorded for observability but not auto-corrected.

Back-translation is a classic QA technique in human translation workflows. The pipeline automates it using LLMs, surfacing semantic drift that would otherwise require a second pair of human eyes.
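The correction rule from step 3 is a one-line severity check. A minimal sketch - the severity levels come from this page, the function name is an assumption:

```typescript
// Only major or critical drift triggers an auto-correction pass;
// minor issues are recorded for observability but left unchanged.
type DriftSeverity = "minor" | "major" | "critical";

function needsCorrection(issues: DriftSeverity[]): boolean {
  return issues.some((s) => s === "major" || s === "critical");
}
```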

Configuring the pipeline

Pipeline configuration has two layers: an engine-level default and an optional per-request override.

Engine-level configuration

Open the engine's Pipeline tab in the dashboard and toggle each stage independently. The engine's configuration applies to every async job routed to it when no override is provided.

  • Post-localization AI review is disabled until human review is enabled.
  • Back-translation check is independent of the other stages.

Per-request override

Pass a pipelineConfig object in the POST /jobs/localization body to override stages for a single submission. Stages you omit inherit from the engine config.

```json
{
  "sourceLocale": "en",
  "targetLocales": ["de", "fr"],
  "data": { "title": "Hello" },
  "pipelineConfig": {
    "preEdit": { "enabled": true },
    "backTranslation": { "enabled": false }
  }
}
```

Use this for one-off submissions that need a different pipeline than the engine's default, without creating a new engine.
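The inheritance rule - stages present in the request win, omitted stages fall back to the engine default - is a shallow per-stage merge. An illustrative sketch, not the server's code:

```typescript
// A stage key present in the request override wins; omitted stages
// inherit the engine-level value unchanged.
function effectiveConfig(
  engineDefaults: Record<string, { enabled: boolean }>,
  requestOverride: Record<string, { enabled: boolean }> = {},
): Record<string, { enabled: boolean }> {
  return { ...engineDefaults, ...requestOverride };
}
```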

Observability

Every enabled stage produces a step record on the localization job. Retrieve the job with GET /jobs/localization/:jobId and inspect the steps array:

```json
{
  "id": "ljb_...",
  "status": "completed",
  "outputData": { "title": "Hallo" },
  "steps": [
    {
      "stepId": "preEdit",
      "type": "action",
      "status": "completed",
      "errorMessage": null,
      "createdAt": "2026-04-17T10:00:00Z",
      "startedAt": "2026-04-17T10:00:01Z",
      "completedAt": "2026-04-17T10:00:04Z"
    },
    {
      "stepId": "localize",
      "type": "action",
      "status": "completed",
      "errorMessage": null,
      "createdAt": "2026-04-17T10:00:04Z",
      "startedAt": "2026-04-17T10:00:04Z",
      "completedAt": "2026-04-17T10:00:12Z"
    }
  ]
}
```

Each step's stepId identifies the stage:

| Stage                          | stepId          |
|--------------------------------|-----------------|
| Pre-localization AI edit       | preEdit         |
| Core localization              | localize        |
| Post-localization human review | humanEdit       |
| Post-localization AI review    | postEdit        |
| Back-translation check         | backTranslation |

Each step is marked completed, failed, or skipped independently. Non-critical failures do not fail the job - the last successful output carries forward to the final outputData, and the job status becomes completed_with_warnings with per-step messages in the job's warnings array. See Reports for aggregate stage success rates.
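A client can fold the steps array into a quick health summary. The field names (`stepId`, `status`, `errorMessage`) match the response shown above; the summary shape and function name are assumptions for illustration:

```typescript
// Summarize a job's steps: any non-completed step becomes a warning line.
type Step = {
  stepId: string;
  status: "completed" | "failed" | "skipped";
  errorMessage: string | null;
};

function summarizeSteps(steps: Step[]): { ok: boolean; warnings: string[] } {
  const warnings = steps
    .filter((s) => s.status !== "completed")
    .map((s) => `${s.stepId}: ${s.status}${s.errorMessage ? ` (${s.errorMessage})` : ""}`);
  return { ok: warnings.length === 0, warnings };
}
```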

Next Steps

  • Async Localization API - Submit jobs that run through the pipeline
  • Engines - The engine layers each pipeline stage wraps around
  • AI Reviewers - Score translation quality independently; works with or without the pipeline
  • Reports - Monitor stage success rates and job throughput
