It’s fascinating to watch how LLMs are taking over the translation industry. In the last few months, we’ve seen many great examples of machines assisting with and performing translation tasks. LLMs seem to be taking over because of the amount of context they can accept and leverage when translating.
Today, we’re very excited to announce immediate access to the latest version of Crowdin AI with the fine-tuning feature! It’s a way to feed translation memories (TMs) and glossaries of any size to the LLM before asking it to translate your content.
If you have tried Crowdin AI before, you may have noticed that when you request a translation in the editor, it uses your TM and glossary to produce higher-quality translations. However, if you used Crowdin AI to pre-translate an entire project, it wouldn’t use your glossary or TM; you could only provide context and instructions to the machine.
It was not possible to provide your entire TM or termbase because of the maximum request size of LLM services: they simply cannot accept a 5 GB TM, a large termbase, and your file for translation in a single request.
With the fine-tuning feature we are introducing today, you can pre-train the LLM with your translation assets, such as your TM and glossary. No matter how big your localization assets are, they can all be “shown” to the machine before you ask it to translate your file.
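To make the idea concrete, here is a minimal sketch of how TM entries could be turned into a fine-tuning dataset. The `tm_pairs` data and the chat-style JSONL layout are illustrative assumptions, not Crowdin’s actual pipeline; many LLM fine-tuning services accept training examples in a format along these lines.

```python
import json

# Hypothetical TM entries: (source, target) pairs exported from a translation memory.
tm_pairs = [
    ("Save changes", "Guardar cambios"),
    ("Delete account", "Eliminar cuenta"),
]

def tm_to_finetune_jsonl(pairs, src_lang="en", tgt_lang="es"):
    """Convert TM pairs into chat-style fine-tuning examples, one JSON object per line."""
    lines = []
    for source, target in pairs:
        example = {
            "messages": [
                {"role": "system",
                 "content": f"Translate from {src_lang} to {tgt_lang}."},
                {"role": "user", "content": source},
                {"role": "assistant", "content": target},
            ]
        }
        lines.append(json.dumps(example, ensure_ascii=False))
    return "\n".join(lines)

print(tm_to_finetune_jsonl(tm_pairs))
```

Because the TM is converted into training data ahead of time, the model can absorb assets of any size without ever hitting a per-request limit.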
Fine-tuning traditional MT engines required a lot of work and effort, and was primarily done by large corporations. With the latest version of Crowdin AI, fine-tuning of Translation AI is available to everyone and is super easy to use.
Even though it’s still in beta, we’re eager to show it to as many users as possible so we can gather feedback and improve the service.