Google’s latest AI update can now recognize 9 different Asian languages, including Hindi and Urdu

Hundreds of languages are spoken around the world, and tech giants including Google, Apple, Amazon and even social media platforms like Facebook have been trying to support more of them every day. The problem most of these companies face is that many languages do not exist as large collections of written text, which makes it difficult for their AI-enabled systems to recognize them. That is why Google researchers have been exploring new strategies for handling a wider variety of languages. They have now trained a model that can recognize nine Asian/Indian languages and distinguish among them.

According to two Google software engineers, Arindrima Datta and Anjuli Kannan, Google researchers recently unveiled a multilingual speech recognizer that can transcribe multiple languages. At Interspeech 2019, they showed that a single end-to-end model can recognize nine Asian languages, namely Hindi, Marathi, Urdu, Bengali, Tamil, Telugu, Kannada, Malayalam and Gujarati, while also demonstrating improvements in automatic speech recognition quality.

According to the Google researchers, they focused on India because it is an inherently multilingual society, with more than 20 languages spoken by millions of people. Many of these languages overlap in acoustic and lexical content because of the geographic proximity and shared cultural history of their native speakers. Most Indians are bilingual or trilingual, so using multiple languages within a single conversation is a common phenomenon in India, and this makes it a natural setting for training multilingual models.


In the multilingual model, the researchers combine the acoustic, pronunciation and language components into a single model. To keep the languages from being confused with one another, they modified the system architecture to include an additional language identifier input, an external signal derived from the language locale of the training data. To fine-tune the global model for each language and improve overall performance, they allocated extra per-language parameters in the form of residual adapter modules. The result is a multilingual system that outperforms the corresponding single-language recognizers while simplifying training and serving, and it meets the requirements of products like Google Assistant.
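The article does not include code, but the two ideas described above, an explicit language-identifier input and small per-language residual adapters on top of a shared encoder, can be sketched roughly as follows. This is a minimal PyTorch illustration under assumed names and sizes (FEATURE_DIM, ADAPTER_DIM, MultilingualEncoder and so on), not Google's actual implementation.

```python
# Illustrative sketch: (1) a one-hot language-ID vector appended to each acoustic
# frame as an external signal, and (2) lightweight per-language residual adapter
# modules applied after a shared encoder. All sizes and names are assumptions.
import torch
import torch.nn as nn

NUM_LANGUAGES = 9     # Hindi, Marathi, Urdu, Bengali, Tamil, Telugu, Kannada, Malayalam, Gujarati
FEATURE_DIM = 80      # e.g. log-mel features per audio frame (assumed)
HIDDEN_DIM = 512
ADAPTER_DIM = 64      # small bottleneck, so per-language parameters stay cheap


class ResidualAdapter(nn.Module):
    """Per-language bottleneck layer whose output is added back to its input."""

    def __init__(self, hidden_dim: int, adapter_dim: int):
        super().__init__()
        self.down = nn.Linear(hidden_dim, adapter_dim)
        self.up = nn.Linear(adapter_dim, hidden_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(torch.relu(self.down(x)))  # residual connection


class MultilingualEncoder(nn.Module):
    """Shared encoder conditioned on a language-ID one-hot vector."""

    def __init__(self):
        super().__init__()
        # Acoustic features are concatenated with the language-ID vector.
        self.rnn = nn.LSTM(FEATURE_DIM + NUM_LANGUAGES, HIDDEN_DIM, batch_first=True)
        # One small adapter per language, fine-tuning the shared model cheaply.
        self.adapters = nn.ModuleList(
            ResidualAdapter(HIDDEN_DIM, ADAPTER_DIM) for _ in range(NUM_LANGUAGES)
        )

    def forward(self, features: torch.Tensor, lang_id: int) -> torch.Tensor:
        # features: (batch, time, FEATURE_DIM)
        batch, time, _ = features.shape
        lang_vec = torch.zeros(batch, time, NUM_LANGUAGES)
        lang_vec[:, :, lang_id] = 1.0                       # external language signal
        encoded, _ = self.rnn(torch.cat([features, lang_vec], dim=-1))
        return self.adapters[lang_id](encoded)              # per-language refinement


# Usage: encode 100 frames of dummy audio features tagged as language 0 (e.g. Hindi).
encoder = MultilingualEncoder()
out = encoder(torch.randn(1, 100, FEATURE_DIM), lang_id=0)
print(out.shape)  # torch.Size([1, 100, 512])
```

In this sketch the shared encoder carries almost all of the parameters, while each language adds only a small bottleneck adapter, which reflects why the per-language fine-tuning described by the researchers stays inexpensive.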

Bottom Line

Google is one of the largest companies that consistently tries to bring new technology for its users' comfort, and based on the feedback on this system, we hope Google continues its research on multilingual ASR so it can recognize a variety of other languages and assist its diverse users. The aim behind this multilingual system is not just to organize the world's information but also to make it accessible in as many languages as possible by ensuring that Google's products work across them. The new system is likely to be introduced into Google Assistant soon, and according to Google, Interpreter Mode will translate dozens of new languages with nine new AI-generated voices as well.


Photo: iStock
