Google’s annual Google I/O event is in full swing. The virtual developer conference began on May 18 and is showcasing a range of announcements, such as Android 12, the new Wear OS, and various privacy and transparency improvements. But one of the things that most caught our attention from yesterday’s session was the presentation of MUM, a new multimodal AI model that makes accessing and interacting with information easier, and with which the Google search engine will become much more intelligent.


Search has evolved constantly since the creation of the Internet. From a model that indexed by categories, we have moved to an environment that reacts to our circumstances and blends global information with our private information.

In 2019, Google launched BERT, an artificial intelligence model designed to better understand the queries directed at its search engine: that is, a system focused on understanding the language users employ in their searches. Two years later, Google is replacing BERT with MUM, its Multitask Unified Model, a layer of understanding that goes beyond plain text results and offers more comprehensive search results.


What was BERT?

BERT – short for Bidirectional Encoder Representations from Transformers – was a language representation technique based on a neural network architecture for Natural Language Processing that made it possible to train state-of-the-art question answering systems.

The aim of BERT was to improve the understanding of users’ searches by relating each word of a phrase to all the others, capturing the way we naturally express ourselves in order to deliver better and more precise results.

BERT processes each word of a query in relation to all the other words, instead of one by one in order as earlier systems did. In this way, BERT models obtain a more precise picture of the context and the search intent.
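As a rough illustration of that bidirectional behavior, the sketch below uses the publicly released bert-base-uncased checkpoint from the Hugging Face transformers library (not the production model inside Google Search, which was never published). It shows that the same word receives a different vector depending on the words around it, precisely because every token attends to every other token:

```python
# A minimal sketch of BERT's bidirectional context, using the public
# bert-base-uncased checkpoint as a stand-in for Google's search models.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embedding_for(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual embedding BERT assigns to `word` in `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, 768)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    return hidden[tokens.index(word)]

# The same word gets a different vector depending on its neighbors,
# because each token is encoded in relation to the whole query.
a = embedding_for("she sat on the river bank", "bank")
b = embedding_for("she deposited cash at the bank", "bank")
print(torch.cosine_similarity(a, b, dim=0))  # noticeably below 1.0
```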


MUM, the new Multitask Unified Model

But MUM goes much further. It is not only able to understand a question made up of several different conditions; it also takes the question as a whole and delivers images, video, text, or any other format that the search engine detects as relevant.

As Prabhakar Raghavan, Senior Vice President at Google, explained during the Google I/O 2021 event: “Let’s imagine a complex, conversational query, like ‘I’ve hiked Mt. Adams and now want to hike Mt. Fuji next fall, what should I do differently to prepare?’ This would stump today’s search engines, but in the future, MUM could understand this complicated question and generate a well-organized answer, pointing to highly relevant results. We have started internal pilots with MUM and are excited about its potential to improve Google products.”


In the future, MUM might understand that two mountains are being compared, so elevation and trail information may be relevant. It might also understand that, in the context of hiking, “getting ready” could include things like physical training, as well as finding the right equipment. Because it can identify and understand these nuances, MUM has the potential to display more relevant information for questions like these.

Another aspect that makes MUM unique is its ability to generate responses based on its in-depth knowledge of the world. For example, the system could generate a response highlighting that the user may need different clothing depending on the time of year in which they plan to climb each mountain.

Furthermore, this technology has the potential to break the language barrier by transferring knowledge between languages: it can draw on sources that are not written in the language in which the user performed the search, process all that data, and still surface the information.
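MUM itself is not publicly available, but the underlying idea of mapping text from different languages into one shared representation can be sketched with an open multilingual encoder. The model name and the example documents below are illustrative assumptions, not anything Google has published:

```python
# A hedged sketch of cross-language matching, using a public multilingual
# sentence encoder as a stand-in for MUM's cross-lingual understanding.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

query = "What should I pack to hike Mount Fuji in autumn?"
documents = [
    "秋の富士山登山では防寒着と雨具が必須です。",          # Japanese: warm clothes and rain gear are essential
    "El monte Fuji cierra sus rutas oficiales en otoño.",  # Spanish: official trails close in autumn
    "Recipe for a traditional Japanese breakfast.",        # off-topic distractor
]

# Query and documents land in one shared embedding space, so relevance
# can be scored across languages with simple cosine similarity.
query_emb = model.encode(query, convert_to_tensor=True)
doc_embs = model.encode(documents, convert_to_tensor=True)
scores = util.cos_sim(query_emb, doc_embs)[0]
for doc, score in zip(documents, scores):
    print(f"{score.item():.2f}  {doc}")
```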

Lastly, being multimodal, MUM can process information in different formats – web pages, images, videos, etc. – simultaneously. Over time, a user might take a picture of their hiking boots and ask Google, “Can I use them to hike a hilly trail?” MUM’s artificial intelligence would understand the image, relate it to the question, answer whether that footwear is suitable for the task and, if not, include a list of suggestions.
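Again, MUM’s multimodal pipeline is private, but a rough sense of how a model can relate a photo to a text question can be sketched with OpenAI’s public CLIP checkpoint. The image file name and the candidate descriptions here are hypothetical:

```python
# A minimal stand-in for multimodal understanding: CLIP scores one image
# against several candidate text descriptions in a shared embedding space.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("hiking_boots.jpg")  # hypothetical user photo
candidates = [
    "sturdy hiking boots for mountain trails",
    "lightweight running shoes",
    "formal leather dress shoes",
]

# Encode the image and all candidate texts together, then turn the
# image-to-text similarity logits into probabilities.
inputs = processor(text=candidates, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)[0]
for text, p in zip(candidates, probs):
    print(f"{p.item():.2f}  {text}")
```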


The future of MUM

At the presentation of this new technology, Prabhakar Raghavan explained that MUM is still in a testing phase and that it will be a while before it is incorporated into the search engine: “Just as we have carefully tested the many applications of BERT launched since 2019, MUM will undergo the same process as we apply these models to Search over the coming months and years. Specifically, we will look for patterns that may indicate bias in machine learning, to avoid introducing bias into our systems.”


MUM is trained to understand 75 languages. Being multitask, it gains a deeper understanding of the information it processes; being multimodal, it understands information in both text and images and will, in the future, expand to other formats such as video and audio.

Francesc is in charge of Content Marketing at Sinapsis. With more than ten years devoted to copywriting, he has accumulated broad experience across many topics, although his greatest passion remains online marketing. A geek at heart, he has found in SEO a new way to keep “playing”.