Translation management systems used to be the center of everything in localization. If you wanted to translate, review, manage, or ship content, you went through your TMS. However, in The Agile Localization Podcast, Jourik Ciesielski, CTO at ELAN Languages, makes it clear that this model is starting to break. AI hasn’t just added new capabilities. It’s exposed deeper limitations in how TMS platforms are designed and how teams think about them.
Listen to the new episode on:
From feature checklists to workflow-first thinking
Most TMS buying decisions still look the same. Teams build a spreadsheet, list features, and tick boxes. Does it have integrations? Yes. Does it support MT? Yes. Does it have QA? Yes. Decision made.
Jourik argues this is exactly the wrong approach now. When you choose based on features, you end up shaping your processes around what the tool can do. In the AI era, that logic flips. You start with the workflow you actually need, then build or configure technology around it.
That shift is subtle but powerful. It moves localization away from static systems and toward flexible, evolving processes that can keep up with modern content demands.
Why enterprise buyers and LSPs want different things
Not all TMS users are solving the same problem.
Enterprise buyers want specificity. They care about clean integrations with their CMS, smooth parsing of their content, and the ability to plug in the AI models they prefer. Anything beyond that is nice to have.
LSPs, on the other hand, need flexibility at scale. They serve multiple clients, industries, and content types. That means strong APIs, advanced reporting, and the ability to connect across systems matter much more.
This difference is critical. It explains why one-size-fits-all TMS products often fall short. The real opportunity for vendors is not adding more features, but enabling different users to build what they need.
Building blocks, not locked-in products
One of Jourik’s strongest points is that TMS vendors should stop packaging everything into predefined products. Instead, they should provide building blocks.
That means giving users access to models, APIs, prompts, and workflows they can configure themselves. It means enabling teams to design processes based on their own business goals, not forcing them into rigid structures.
"If I were a TMS provider today, I would basically give all the building blocks to my users: the use cases (translation, post-editing, content creation, QA, whatever) and the models (OpenAI, Anthropic, Gemini). I would really give the power to my users so they can write their own prompts and really build and design the processes themselves."
In practice, this looks like more openness, better connectors, and less reliance on feature bundling. The winner in this space won’t be the vendor with the longest feature list. It’ll be the one that makes it easiest to build.
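To make the building-blocks idea concrete, here is a minimal sketch of what user-composed workflows could look like. Every name, task label, and model identifier below is invented for illustration; no specific TMS API is implied:

```python
# Hypothetical sketch: the TMS exposes primitives (tasks, models, prompts)
# and the user composes them into a workflow, instead of buying a
# predefined product. All identifiers here are illustrative.

def build_workflow(steps):
    """Validate and return an ordered, user-defined localization workflow."""
    allowed = {"translation", "post_editing", "qa",
               "content_creation", "human_review"}
    for step in steps:
        if step["task"] not in allowed:
            raise ValueError(f"unknown task: {step['task']}")
    return steps

workflow = build_workflow([
    {"task": "translation", "model": "openai:gpt-4o",        # user-chosen model
     "prompt": "Translate to {target_lang}; keep product names in English."},
    {"task": "qa", "model": "anthropic:claude",              # different model per task
     "prompt": "Flag terminology and consistency issues."},
    {"task": "human_review"},                                # final sign-off stays human
])

print([step["task"] for step in workflow])
```

The point of the sketch is the inversion of control: the vendor supplies validated primitives, and the buyer decides which model and prompt each step uses.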
Why architecture now matters more than UI
There was a time when UI and usability were major differentiators for TMS platforms. That still matters, but it’s no longer the main thing.
As Jourik puts it, the plumbing matters more. For AI to actually deliver value, it needs to be deeply integrated across systems. That includes CMS platforms, ERPs, ticketing tools, repositories, and more. It also needs to support multiple tasks, not just translation.
None of that works without strong architecture. APIs, middleware, orchestration layers, and connectivity are what make modern localization systems function. Without them, even the best-looking platform falls apart under real-world complexity.
Where custom AI workflows already work
One of the most practical examples Jourik shares is in after-sales support.
He describes building an AI workflow that reads incoming emails, identifies the issue, searches for answers in internal systems, drafts a response, and sends it to a human for final approval.
It’s multilingual, end-to-end, and grounded in a real business need. This is where AI shines today. Not in abstract features, but in solving specific, high-impact workflows.
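The shape of that after-sales workflow can be sketched in a few lines. This is a toy illustration, not Jourik’s actual implementation: the keyword classifier and dictionary lookup stand in for LLM calls and internal-system searches, and all names are hypothetical:

```python
# Toy sketch of the after-sales pipeline described above:
# classify -> retrieve -> draft -> queue for human approval.
# In a real system, classification and drafting would be LLM calls
# and the knowledge base would be an internal search integration.

def classify_issue(email_body):
    """Stand-in classifier: keyword match instead of an LLM call."""
    if "refund" in email_body.lower():
        return "refund_request"
    return "general_inquiry"

def search_knowledge_base(issue, kb):
    """Look up answers in an internal 'system' (here, a plain dict)."""
    return kb.get(issue, [])

def draft_reply(issue, answers):
    """Stand-in drafting step; in practice an LLM would write the reply."""
    body = " ".join(answers) or "We are looking into your request."
    return f"[{issue}] {body}"

def handle_support_email(email_body, kb, review_queue):
    """Run the pipeline end-to-end; nothing is sent without human sign-off."""
    issue = classify_issue(email_body)
    answers = search_knowledge_base(issue, kb)
    draft = draft_reply(issue, answers)
    review_queue.append(draft)   # human reviews before anything goes out
    return draft

kb = {"refund_request": ["Refunds are processed within 5 business days."]}
queue = []
reply = handle_support_email("I would like a refund for order 123.", kb, queue)
```

Notice that the human-approval step is structural, not optional: the draft lands in a review queue rather than being sent directly, which is what makes the workflow safe to automate.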
"For me, AI only starts to make sense when certain conditions are met. First of all, I want it to be deeply customized. Number two, I want it to be available across different tasks, not just machine translation."
Final thoughts
The message from this conversation is clear. TMS vendors don’t just need to add AI features. They need to rethink their role.
That means moving away from closed systems and toward open ecosystems. It means prioritizing architecture over surface-level functionality. And it means giving users the tools to build their own workflows instead of forcing them into predefined ones.
Because in the AI era, the value isn’t in the system itself. It’s in what you can build with it.
Jourik’s Background
Jourik Ciesielski is Chief Technology Officer at ELAN Languages, a top-20 global Language Service Provider and the largest LSP in the Benelux region (Belgium, the Netherlands, and Luxembourg). With 15 years of experience in language technology, Jourik brings deep technical expertise in translation management systems, neural machine translation, and enterprise localization architecture. Following the acquisition of his consulting firm by ELAN Languages, he now spearheads technology development and AI strategy, positioning him as a thought leader on how modern enterprises should approach TMS selection and custom AI workflows.
Yuliia Makarenko
Yuliia Makarenko is a marketing specialist with over a decade of experience, and she’s all about creating content that readers will love. She’s a pro at using her skills in SEO, research, and data analysis to write useful content. When she’s not diving into content creation, you can find her reading a good thriller, practicing some yoga, or simply enjoying playtime with her little one.