New Google 'Multisearch' Feature Combines Images and Text

You may already have heard of Google Lens, an AI-powered image search technology that lets you find out more about something by capturing it with your camera. But did you know that Google is introducing a new feature that allows you to search using both images and text? We take a look at the new Google ‘multisearch’ feature and what it could mean for the world of search.

What is multisearch?

Google multisearch is a feature that will allow you to search the web using a combination of text and images. It will be a function within Google Lens and is designed for those user queries that can’t be expressed with a single image alone.

For example, if there is an object in front of you that you want to search for, but you need to add some descriptive detail to your query, multisearch can help.

Google gives the following examples of situations where you might want to use the multisearch feature in Google Lens:

1) Screenshot a stylish orange dress and add the query “green” to find it in another colour

2) Snap a photo of your dining set and add the query “coffee table” to find a matching table

3) Take a picture of your rosemary plant and add the query “care instructions”

These are just a few scenarios, but it sounds like a feature that could prove useful in plenty of situations! This is just another way that Google is encouraging people to ‘go beyond the search box’, and it demonstrates how the way we search is evolving.

Related article: The Future of Search: Beyond Google?

Will people actually use multisearch?

The novelty is certainly exciting, but will people use the multisearch feature once the excitement of trying it wears off?

Well, voice search was a novelty once, but more and more people are using it, and the number of digital voice assistants in use worldwide is projected to grow from 3.25 billion in 2019 to 8.4 billion in 2024.

Plus, the examples Google gives are good ones, because they describe situations where you can’t easily express what you need with a text search alone.

For example, if you come across an image of a piece of clothing online and want to search for similar items in a specific colour or from a particular brand, multisearch lets you do that in just a few steps, rather than searching around with different text descriptions of the item.

However, for more general queries, it seems unlikely that multisearch will be that relevant at first. It’s also unlikely that this will be the sole way of searching in the future, but it does give people the option to find more information about a product they would like in another colour or style or to find similar examples to browse.

It seems like it will be especially useful in the visual and creative spaces, for things like fashion, arts and crafts, interior styling and food, as well as for online shopping.

When will multisearch be available?

At the moment, a beta version of multisearch is available in the US in English, and Google has indicated that, at this stage, the best results will be for shopping searches. It is not yet clear when the feature will be rolled out globally, as it sounds like improvements still need to be made.

How might the feature evolve in the future?

Multisearch hasn’t even been fully rolled out yet, but it’s already time to talk about its future.

When we first discussed Google Lens, we mentioned how the image recognition technology would need to evolve to understand user intent.

Google has hinted that it is “exploring ways in which this feature might be enhanced by MUM – our latest AI model in search” (MUM being its Multitask Unified Model).

This suggests that the image recognition technology still needs to evolve to understand user intent, but much as the traditional search engine has become more sophisticated over time, it will get there in the end.

Most likely, Google Lens will begin to return far more relevant image search results as it learns more about the context of particular image searches and your personal search history, allowing it to determine the intention behind your search.

As we’ve discussed previously, imagine a world where you could take a picture of the ingredients in your fridge and find a recipe for dinner! 

With multisearch, this may soon be possible with a single ingredient: take a photo of it and add ‘+recipe’ to the image. Then, in the future, it may be possible to do this with multiple items in the same image, and perhaps Google Lens will eventually become sophisticated enough to understand user intent without needing the clarifying text. We will see!

 

That’s just a brief guide to the new multisearch feature on Google Lens, and in many ways it raises more questions than it answers! We will see how the feature adapts over time to meet the needs of users and return more relevant results, and, in turn, whether that will change our search habits.

 

SB.