Google’s New AI Update Is Ready to Blow Your Mind

Ever since computers went mainstream, people have been asking whether machines can rival the human mind, and whether there will come a time when they can comprehend and process information the way we do. One of the basic challenges facing language-based machine learning models has been understanding context and reference. Google now seems to have come quite close to a solution: it recently introduced a new model named Reformer. This artificial-intelligence-based model can comprehend a text of up to 1 million words using just 16 GB of memory. Reformer is an improved version of the Transformer, which used neural networks to compare the words in a passage against each other and learn the relationships between them. Reformer handles this information far more efficiently, and it can draw on much more surrounding context, whole paragraphs beyond the text currently in focus, when making sense of a passage.

The older language-based AI model, the Transformer, used pair matching to understand sentences: every word is compared with every other word, so a 10,000-word article already requires on the order of 100 million comparisons, and the memory cost balloons whenever a text runs past a few thousand words. In short, it was quite difficult for the older AI to process even a long article, let alone a book. To resolve this, Google built an improved version of its language-based AI model and named it Reformer. The new model was designed to solve both the memory problem and the attention problem. For attention, Reformer uses locality-sensitive hashing, or LSH.
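To make the bucketing idea concrete, here is a minimal sketch of the classic random-projection flavor of LSH in Python. It illustrates the general technique rather than Google's exact scheme, and the function name, toy vectors, and hash count are all invented for the example.

```python
import numpy as np

def lsh_buckets(vectors, n_hashes=4, seed=0):
    """Assign each vector a bucket id via random-projection (angular) LSH.

    Vectors pointing in similar directions tend to land in the same
    bucket, so attention only has to compare words inside a bucket
    instead of comparing every word with every other word.
    """
    rng = np.random.default_rng(seed)
    # Random hyperplanes; the sign pattern of the projections is the hash.
    planes = rng.normal(size=(vectors.shape[1], n_hashes))
    signs = (vectors @ planes) > 0                    # one bit per hyperplane
    # Pack the sign bits into a single integer bucket id per word.
    return (signs * (2 ** np.arange(n_hashes))).sum(axis=1)

# Toy word vectors: the first two point roughly the same way.
words = np.array([[1.0, 0.1], [0.9, 0.2], [-1.0, 0.05]])
print(lsh_buckets(words))  # similar vectors usually share a bucket id
```

Grouping by hash bits like this is what lets the cost grow roughly linearly with text length instead of quadratically, since each word only looks at the handful of words that hash near it.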

With locality-sensitive hashing, the model no longer compares every word with every other word. Instead, a hash function groups identical or similar words into the same bucket; words are then compared with the other words in their own bucket, and only afterwards with words in neighboring buckets. This cuts processing time and prevents processing overload. To solve the memory issue, the researchers used reversible residual layers, in which the activations of one layer can be recomputed from the layer above it, so intermediate activations do not have to be stored during training (a toy version is sketched after this paragraph). For testing purposes, Google fed Reformer partial images and had it generate full-frame images from them. With the new model, processing entire books becomes practical, which will open up huge opportunities in the future.
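The reversible trick is easy to demonstrate as well. Below is a toy RevNet-style coupling in Python, with tanh standing in for Reformer's attention and feed-forward sub-layers; the functions and sample inputs are illustrative, not Google's actual code. The point is that the inverse recovers the inputs exactly, so a training framework can recompute activations on the backward pass instead of storing them.

```python
import numpy as np

def f(x):  # stand-in for the attention sub-layer
    return np.tanh(x)

def g(x):  # stand-in for the feed-forward sub-layer
    return np.tanh(x)

def reversible_forward(x1, x2):
    # One reversible residual block: the outputs determine the inputs exactly.
    y1 = x1 + f(x2)
    y2 = x2 + g(y1)
    return y1, y2

def reversible_inverse(y1, y2):
    # Recover the inputs by re-running the same sub-layers forward.
    x2 = y2 - g(y1)
    x1 = y1 - f(x2)
    return x1, x2

x1, x2 = np.array([0.5, -0.3]), np.array([0.1, 0.7])
y1, y2 = reversible_forward(x1, x2)
r1, r2 = reversible_inverse(y1, y2)
print(np.allclose(x1, r1), np.allclose(x2, r2))  # True True
```

Because every block can be inverted this way, activation memory stops growing with the number of layers, which is part of what lets Reformer run on a single 16 GB accelerator.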
