Google now lets you search for things you can’t describe by starting with a picture – The Verge

Posted: April 11, 2022 at 6:48 am

You like the way that dress looks, but you'd rather have it in green. You want those shoes, but prefer flats to heels. What if you could have drapes with the same pattern as your favorite notebook? I don't know how to Google for these things, but Google Search product manager Belinda Zeng showed me real-world examples of each earlier this week, and the answer was always the same: take a picture, then type a single word into Google Lens.

Today, Google is launching a US-only beta of the Google Lens multisearch feature it teased last September at its Search On event, and while I've only seen a rough demo so far, you shouldn't have to wait long to try it for yourself: it's rolling out in the Google app on iOS and Android.

While it's mostly aimed at shopping to start (it was one of the most common requests), Google's Zeng and the company's search director Lou Wang suggest it could do a lot more than that. "You could imagine you have something broken in front of you, don't have the words to describe it, but you want to fix it... you can just type 'how to fix,'" says Wang.

In fact, it might already work with some broken bicycles, Zeng adds. She says she also learned about styling nails by screenshotting pictures of beautiful nails on Instagram, then typing the keyword "tutorial" to get the kind of video results that weren't automatically coming up on social media. You may also be able to take a picture of, say, a rosemary plant and get instructions on how to care for it.

"We want to help people understand questions naturally," says Wang, explaining how multisearch will expand to more videos, images in general, and even the kinds of answers you might find in a traditional Google text search.

It sounds like the intent is to put everyone on an even footing, too: rather than partnering with specific shops or even limiting video results to Google-owned YouTube, Wang says it'll surface results from "any platform we're able to index from the open web."

But it won't work with everything, just like your voice assistant doesn't work with everything, because there are infinite possible requests and Google is still figuring out intent. Should the system pay more attention to the picture or to your text search if the two seem to contradict each other? Good question. For now, you do have one additional bit of control: if you'd rather match a pattern, like the leafy notebook, get up close to it so that Lens can't see it's a notebook. Because remember, Google Lens is trying to recognize your image: if it thinks you want more notebooks, you might have to tell it that you actually don't.

Google is hoping AI models can drive a new era of search, and there are big open questions about whether context, not just text, can take it there. This experiment seems limited enough (it doesn't even use Google's latest MUM AI models) that it probably won't give us the answer. But it does seem like a neat trick that could go fascinating places if it became a core Google Search feature.
