Google continues to invest heavily in its artificial intelligence efforts, and during its recent showcase, one of the features that stood out the most was Google Lens.
In developing the new feature, Google merged the prowess of its AI Assistant with the image-recognition algorithms behind Google Photos, resulting in a capability that could shape the future of AI.
Essentially, the feature lets users point their camera at any physical object and have the AI identify it and surface relevant information about it through the Assistant.
During I/O, the company's annual developer conference, the Google Lens demo took center stage and proved to be quick and efficient as well as groundbreaking.
While highly innovative, Google Lens can be considered an evolution of 2011's Google Goggles, especially because it can provide a wealth of information without requiring the user to perform a search query first.
Thanks to recent updates to Google's Assistant, which allow it to run on almost any Android device, Google Lens will also be available to most users when it rolls out in the coming weeks.
With the soon-to-launch Google Lens feature and the improvements to Google's Assistant, the race for AI supremacy is heating up, and Google is taking an early lead.
Don't forget to keep up with the latest tech news on our social media!