// Case Study 03 -- Localisation Automation

36 hours of work. Every. Single. Time.

A global technology org trained employees across 8 language regions. Every course update meant manually re-editing every video for every language.


✗ Manual -- per language
  • 01 Export raw video from Storyline (~20m)
  • 02 Strip original audio track (~15m)
  • 03 Import localised audio, re-time (~90m)
  • 04 Re-sync subtitle file (~60m)
  • 05 Burn subtitles, export final (~30m)
  • 06 QA review, fix errors, re-export (~45m)
✓ Automated -- all languages
  • Run one command
  • Audio stripped & re-timed automatically
  • Subtitle files synced via FFmpeg
  • 8 languages processed in parallel
  • Output files named & ready to upload
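The six manual steps collapse into one scripted run. Below is a minimal sketch of what such a wrapper could look like -- a hypothetical skeleton, not the production localize.sh. The build_cmd helper, language list, and file layout are illustrative, and the FFmpeg command is printed rather than executed so the loop can be dry-run:

```shell
#!/usr/bin/env bash
# Hypothetical skeleton of the automated path -- not the production
# localize.sh. Paths and helper names are illustrative.
set -euo pipefail

LANGS=(en de fr es zh ja pt ar)

# Assemble the per-language FFmpeg invocation as a string so the
# pipeline can be inspected without FFmpeg installed.
build_cmd() {
  local input=$1 lang=$2
  echo "ffmpeg -y -i ${input} -i audio/${lang}.wav" \
       "-map 0:v -map 1:a -af loudnorm" \
       "-vf subtitles=subs/${lang}.srt" \
       "output/${lang}/$(basename "${input}")"
}

# One background job per language; wait blocks until all eight finish.
for lang in "${LANGS[@]}"; do
  build_cmd module_v2.mp4 "$lang" &
done
wait
```

Replacing the `echo` inside build_cmd with a real invocation turns the dry run into the actual pipeline; the `&` / `wait` pair is what lets all eight languages render in parallel instead of back to back.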
36h -- manual, per course update (×8 languages)
<8m -- automated, same output, all 8 languages in one run
Terminal Simulation -- run it yourself

Type the command. Watch it work.

The terminal below is a simulation of the actual workflow. Type the command and hit Enter -- or click "Run for me".

bash -- video-localization-tool
L&D Team · Video Localization Tool v2.1
Type the command below or click "Run for me"
─────────────────────────────────────────
dev@ld-tools:~/localization$
Command to type
bash localize.sh --input module_v2.mp4 --langs all --subs auto

Paste into the terminal above and press Enter -- or just click "Run for me".

// Script executed -- 8 languages processed
All complete ✓

36 hours became 8 minutes.

270× -- faster than manual
8 -- languages in one run
0 -- human sync errors
7m 43s -- total run time

Languages processed:

EN ✓ DE ✓ FR ✓ ES ✓ ZH ✓ JA ✓ PT ✓ AR ✓

What this demonstrates: Engineering mindset applied to L&D operations. Most instructional designers accept manual workflows as a given. The question I asked was different -- "why is a human doing this?" Once that question has an answer, it has a script.


FFmpeg · Bash Shell Scripting · Articulate Storyline · Subtitle Tools (SRT) · Parallel Processing
Full case study →

Under the hood

How I Built This

✓ What's real in this demo
  • FFmpeg commands shown are the actual commands from the production script -- filter chains and subtitle rendering options are verbatim
  • Time figures are real -- 36 hours manual vs under 8 minutes automated is the actual measured difference
  • Language-specific log entries are accurate -- DE narration really is ~7% longer than EN
  • Parallel processing -- the script processed all 8 languages simultaneously
~ What's simulated
  • Terminal execution -- no server is running FFmpeg; the log output is a faithful reconstruction of actual script output
  • Output files -- no video files are actually generated; file paths match the real output structure
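The re-timing noted above (DE narration running ~7% longer than EN) maps naturally onto FFmpeg's atempo filter. Here is a sketch of how the compensation factor could be derived -- the durations are hard-coded for illustration, where the real script would presumably query them with ffprobe:

```shell
# Hypothetical re-timing step: if the DE narration runs ~7% longer than
# the EN master, an atempo factor > 1 compresses it back to fit the video.
# Durations are hard-coded for the sketch; a real script would read them
# from ffprobe.
en_dur=300   # seconds, EN master narration
de_dur=321   # seconds, DE narration (~7% longer)

# atempo multiplies playback speed, so the factor is de/en.
factor=$(awk -v a="$de_dur" -v b="$en_dur" 'BEGIN { printf "%.3f", a/b }')

echo "ffmpeg -i audio/de.wav -af atempo=${factor} audio/de_fit.wav"
```

Note that atempo accepts factors in a limited range (roughly 0.5 to 2 per filter instance), which is comfortably wide enough for narration-length drift between languages.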
→ Production architecture
localize.sh ──▶ Input validation ──▶ Language loop
    │
    ├──▶ FFmpeg // per language, parallel
    │      ├──▶ Extract video stream    -an (keep video, drop original audio)
    │      ├──▶ Strip + replace audio   -map 0:v -map 1:a
    │      ├──▶ Normalise levels        loudnorm filter
    │      ├──▶ Burn SRT subtitles      subtitles filter
    │      └──▶ Export final .mp4       -c:a copy (video re-encoded by the subtitle burn)
    │
    └──▶ ./output/{LANG}/ ──▶ LMS upload ready
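Taken in isolation, the subtitle-burn stage of that chain could look like the following. The file names and force_style values are placeholders, not the production options; the command is assembled as a string so it can be inspected without FFmpeg installed:

```shell
# Hypothetical subtitle-burn step in isolation. Paths and styling values
# are placeholders, not the production options. Built as a string so it
# can be inspected without running FFmpeg.
lang=de
style="FontName=Arial,FontSize=24,PrimaryColour=&HFFFFFF&"

burn_cmd="ffmpeg -i output/${lang}/audio_done.mp4 \
-vf \"subtitles=subs/${lang}.srt:force_style='${style}'\" \
-c:a copy output/${lang}/final.mp4"

echo "$burn_cmd"
```

Because the subtitles filter rasterises text onto the frames, the video stream must be re-encoded; only the audio can be stream-copied with -c:a copy, which is why the burn is the slowest stage of the chain.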
Next → Case Study 04
The Invisible Student
Step through a course and watch every action become a live xAPI statement
xAPI / Tin Can · LRS · Live JSON
Demo 03 · Automation

36 hours of manual localisation work, automated to under 8 minutes.

FFmpeg + shell scripting replaces manual subtitle sync, audio swap, and file export across 8 languages. The same pipeline handles SRT generation, timing correction, and Storyline-compatible audio packaging.

FFmpeg · Bash Automation · Storyline 360 · SRT/Subtitles
Demo 03 — Localisation Automation
The full interactive experience runs in the editorial view -- click below to launch it.