Google’s first announcement at its I/O 2017 keynote was a new product called Google Lens. The software uses machine learning to interpret information from images and video. It "understands what you're looking at and helps you take action," according to Google CEO Sundar Pichai.

In an effort to solve a number of common pain points in photography and video, Google Lens integrates with other Google features such as Assistant and Photos to help users complete tasks more seamlessly.

For example, users can capture a series of photos in a dark area that come out noisy, and Google Lens will combine them into a single, clearer image. Similarly, if a user takes a photo with an obstruction in the frame, Google Lens can remove the obstruction and fill in the missing portion of the image.
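The low-light feature described above is based on a general technique called multi-frame noise reduction: random sensor noise differs from shot to shot, so averaging several aligned frames of the same scene suppresses it. The sketch below is a simplified illustration of that principle only, not Google Lens's actual pipeline, which would also need to align frames and handle motion; all function names here are hypothetical.

```python
import random
import statistics

def merge_burst(frames):
    """Average corresponding pixels across aligned frames of the same scene.

    Random noise is independent per frame, so averaging N frames cuts the
    noise standard deviation by roughly a factor of sqrt(N).
    """
    return [sum(pixels) / len(frames) for pixels in zip(*frames)]

# Simulate a burst: one clean scene plus independent Gaussian noise per shot.
random.seed(0)
scene = [random.uniform(0, 255) for _ in range(4096)]      # flattened "image"
burst = [[p + random.gauss(0, 25) for p in scene] for _ in range(8)]

merged = merge_burst(burst)

# Measure residual noise before and after merging.
noise_before = statistics.pstdev(a - b for a, b in zip(burst[0], scene))
noise_after = statistics.pstdev(a - b for a, b in zip(merged, scene))
```

With eight frames, the residual noise drops by roughly a factor of sqrt(8), about 2.8x, which is why a burst of dim, grainy shots can yield one usable photo.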

More machine learning features are expected in Google Lens as Google Assistant expands its compatibility to the iPhone.

Among its more practical capabilities, Google Lens can read the network name and password printed on a Wi-Fi router and automatically connect the device to that network.

Users can also point their phone at a business while out and about to pull up details about it, or combine Google Lens with Google Translate to interpret signs in other languages.