Last week, Google showed off smarter Assistant features, including its Google Lens computer vision technology. Now, developers have discovered that Google Lens is actually live in the latest version of the Google Photos app.

XDA Developers performed an APK teardown of the latest version of the Google Photos app for Android and quickly discovered that Google Lens can actually be triggered to fully function. For the uninitiated, Lens can identify objects, text and even landmarks through an Android smartphone’s camera.

The feature was first announced by Google last week during its Developer Days event in Poland. Google Lens will be able to translate languages, provide additional information on places and events, and even convert paper money to different currencies, according to VentureBeat.

[Image: One of Google’s examples of how Google Lens will work. Photo: Google]

Google Lens was buried deep within the latest version of Google Photos, which is currently rolling out to all Android smartphone users. Although some developers were able to try out Google Lens, Google appears to have already switched it off, making it impossible for anyone to try now.

Luckily, the folks over at XDA and Android Police were able to try it out, if only for a short time. Both sites concluded that development on Google Lens isn't finished yet, since it failed to identify certain objects, including even the Google Home smart speaker. When Lens did work, however, it delivered on what Google promised.

Android Police first tested Lens by having it identify books and Blu-ray disc cases. Lens identified three out of the four products, and it even provided an option to view Google search results. It failed on one of the Blu-ray covers, however, possibly because the product's photo was taken at an offset angle.

Next, Android Police had Lens pull information from written text. Google Lens identified a written invitation almost perfectly, but it got the time wrong, listing 1:30 p.m. when the invitation said 10:30 a.m.

The next part of the test had Google Lens identify landmarks. It accurately identified two specific locations in Disney World from photos taken on a previous trip. However, Lens was unable to identify Space Mountain, since the attraction was far in the background.

One of Google’s previous demos had Lens identify a router from its product sticker, copying the login information to Google Keep after scanning the sticker on the bottom of the unit. Android Police tried the same trick, but Lens only returned the URL of the manufacturer’s legal page.

The last part of the test involved identifying random objects in a room. Google Lens was unable to identify most of them, which included an LCD writing tablet, an iPod nano, an old-school Apple USB mouse and the Google Home smart speaker.

It’s pretty clear that Google Lens still needs work before it’s officially released to all users. However, the technology behind this new feature is promising and could become a must-have for Android smartphone users.