At its Paris event last night (2/8), Google announced that it will launch its own AI chatbot "Bard," which offers real-time information, humanlike automated question answering, and automatic text generation to compete with ChatGPT. It also went a step further, unveiling a "visual" evolution of its search function that integrates advanced AI technology deeply and broadly, built around the core idea that "anything you can see, you can search."

Google also emphasized that these new search features will officially roll out to users worldwide in the coming months of this year.

This major update to "visual search" centers on three key features: a "search your screen" function that uses the phone's smart camera technology to search the content of photos or videos seen on websites and in apps; a "multisearch" function that combines text and images in a single query; and a "local search" function that pairs multisearch with nearby businesses.


In addition, the "indoor AR live view" feature in Google Maps will expand to more countries and cities in its largest rollout so far, covering more than 1,000 airports, train stations, and shopping centers in Taipei, Singapore, Sydney, Melbourne, London, Berlin, Paris, Barcelona, Frankfurt, Madrid, Prague, and São Paulo.

The three new functions of Google's AI-enhanced search will bring a brand-new search experience. The key points are as follows:

1. Search photo and video content on your phone screen

Google Lens can already search image content captured by your phone's camera or stored in your photo album to find relevant information. Built on advanced AI models with a more comprehensive understanding of many forms of real-world information, it now works directly on your phone screen: whether you are on a website or in an app (such as a messaging or video app), you can search image content directly without leaving the current window, for a more natural and intuitive search experience. For now, however, this feature is limited to Android phones; iPhones running iOS are not yet supported.

The newly launched "search your screen" feature lets Android phone users directly search content that appears on screen when they receive a message, are sent a picture or video, or open a video app.

(Image: Google)

For example, when you receive a video from a friend traveling in Paris and want to identify a landmark in the background, you can long-press the power or home button on your Android phone (which also activates Google Assistant), then tap "search screen." Google Lens will recognize the landmark as the Luxembourg Palace, and you can tap through to learn more about it. A minimal illustration of this kind of recognition follows.
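Google has not published the model behind this screen-search feature, but its publicly documented Cloud Vision API exposes a comparable landmark-recognition capability. The sketch below is only an illustration of that capability, not the Lens pipeline itself; it assumes the google-cloud-vision Python client is installed, GCP credentials are configured, and a hypothetical video frame has been saved as paris_video_frame.jpg:

```python
# A minimal sketch of landmark recognition using the publicly documented
# Google Cloud Vision API. This is NOT the internal pipeline behind Google
# Lens or "search your screen"; it only illustrates a comparable capability.
# Assumes GCP credentials are configured (e.g. GOOGLE_APPLICATION_CREDENTIALS).
from google.cloud import vision


def identify_landmark(image_path: str) -> None:
    client = vision.ImageAnnotatorClient()

    # Load the frame (e.g. a screenshot taken from a friend's travel video).
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())

    # Ask the API for landmark annotations.
    response = client.landmark_detection(image=image)

    for landmark in response.landmark_annotations:
        # e.g. "Luxembourg Palace" plus a confidence score and coordinates.
        print(f"{landmark.description} (score: {landmark.score:.2f})")
        for location in landmark.locations:
            coords = location.lat_lng
            print(f"  lat/lng: {coords.latitude}, {coords.longitude}")


if __name__ == "__main__":
    identify_landmark("paris_video_frame.jpg")  # hypothetical example frame
```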

2. Text + image multisearch

Multisearch lets you search with a picture and text at the same time. Through Google Lens, users can take a photo, add a text question to it, and get search results that combine both.

Multisearch is now available on mobile phones wherever the user's region and language support Google Lens.

Google gives this example: if you see a yellow chair you like and want to find a similar style in a different color, you can photograph it with Google Lens and type "light brown" into the search box to search.

3. Image-based local search

The newly added "local search" function builds on multisearch to deliver a more localized and convenient search experience.

Just take a picture with Google Lens and add the phrase "near me," and you can find local businesses near your location that offer the product in the picture or the item you need right now.

This feature is currently available only in the United States and only in English, and is expected to roll out to global users in the coming months.