Google has announced that it is making visual search more natural with multi-search, a new tool that lets people search using images and text simultaneously.

The company introduced multi-search earlier this year as a beta in the US, and will now expand it to more than 70 languages in the coming months.

“We’re taking this capability even further with ‘multi-search near me,’ enabling you to take a picture of an unfamiliar item, such as a dish or plant, then find it at a local place nearby, like a restaurant or gardening shop,” said Prabhakar Raghavan, Senior Vice President, Google Search.

The company will start rolling out “multi-search near me” in English in the US this fall.


People are using Google to translate text in images over 1 billion times a month, across more than 100 languages.

“We’re now able to blend translated text into the background image thanks to a machine learning technology called Generative Adversarial Networks (GANs),” said Raghavan.

With the new Lens translation update, people will see translated text blended realistically into the underlying image, rather than overlaid in solid blocks.
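The idea behind this kind of blending can be sketched in a few lines: before drawing the translation, the pixels where the original text sat are repainted to match the surrounding background, so the new text appears to sit on the photo itself. The sketch below is my own toy illustration, not Google's implementation; it represents an image as a grid of RGB tuples and uses a flat average-colour fill where Lens uses GAN-based inpainting to reconstruct textures.

```python
def blend_translation(img, box, text):
    """Erase the text region `box` using the background colour sampled
    just above it, then write `text` into the cleared area.

    `img` is a list of rows of (r, g, b) tuples; `box` is
    (top, left, bottom, right). Toy stand-in for GAN inpainting.
    """
    top, left, bottom, right = box
    # Estimate the background from the strip of rows directly above the box.
    sample = [img[r][c]
              for r in range(max(top - 2, 0), top)
              for c in range(left, right)]
    avg = tuple(sum(ch) // len(sample) for ch in zip(*sample))
    # Repaint the whole region with the background estimate.
    for r in range(top, bottom):
        for c in range(left, right):
            img[r][c] = avg
    # "Draw" the translation as one black pixel per character -- a toy
    # stand-in for real glyph rendering.
    for i, _ in enumerate(text):
        img[top][left + i] = (0, 0, 0)
    return img

# Build a 40x20 green background with fake source text ("Hola") at row 10.
W, H = 40, 20
image = [[(180, 200, 160)] * W for _ in range(H)]
for i in range(4):
    image[10][10 + i] = (0, 0, 0)

blend_translation(image, (9, 9, 13, 20), "Hello")
```

After the call, the original black pixels inside the box have been repainted with the sampled background colour and the translated string occupies the cleared region; a production system would additionally match fonts, perspective, and texture.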

“Just as live traffic in navigation made Google Maps dramatically more helpful, we’re making another significant advancement in mapping by bringing helpful insights –like weather and how busy a place is — to life with an immersive view in Google Maps,” the company announced.