The Google Pixel phone is displayed during the presentation of new Google hardware in San Francisco, Oct. 4, 2016. Reuters/Beck Diefenbach

The Google Pixel 2 and Pixel 2 XL devices, which were revealed earlier this month, are heavily focused on imaging, just like their predecessors, the first-generation Pixel devices. On Wednesday, Google released the first developer preview of Android 8.1 Oreo, which will make use of the Pixel 2 series’ high-end camera capabilities.

The Pixel 2 series builds on the success of the first Pixel devices, which DxOMark rated the best smartphone camera of 2016. Over the past year, the company worked with DxOMark on the Pixel 2 cameras, and it revealed the devices on Oct. 4.

One of the new features is the Neural Networks API, artificial intelligence-based software that the company says will "enhance the on-device machine intelligence." In addition to the AI software, the device comes with a dedicated imaging chipset called the Pixel Visual Core, an eight-core processor that works in coordination with the machine learning-assisted HDR+ mode and can process photos five times faster than the device’s main chipset.
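The Neural Networks API itself is exposed as a C library in the Android NDK, available from Android 8.1 (API level 27) onward. As a rough sketch of the call flow only — the trivial element-wise addition model, its 2x2 shapes and the runAddModel wrapper below are invented for illustration, not code Google has published:

```cpp
#include <android/NeuralNetworks.h>  // NNAPI, Android 8.1+ (API level 27)
#include <cstdint>

// Sketch: build a one-operation model computing out = a + b over 2x2
// float tensors, compile it, run it once, and clean up.
// Error checks are elided for brevity.
bool runAddModel(const float a[4], const float b[4], float out[4]) {
    uint32_t dims[2] = {2, 2};
    ANeuralNetworksOperandType tensor = {
        ANEURALNETWORKS_TENSOR_FLOAT32, 2, dims, 0.0f, 0};
    ANeuralNetworksOperandType fuse = {
        ANEURALNETWORKS_INT32, 0, nullptr, 0.0f, 0};

    ANeuralNetworksModel* model = nullptr;
    ANeuralNetworksModel_create(&model);
    ANeuralNetworksModel_addOperand(model, &tensor);  // operand 0: input a
    ANeuralNetworksModel_addOperand(model, &tensor);  // operand 1: input b
    ANeuralNetworksModel_addOperand(model, &fuse);    // operand 2: fused activation
    ANeuralNetworksModel_addOperand(model, &tensor);  // operand 3: output

    int32_t none = ANEURALNETWORKS_FUSED_NONE;
    ANeuralNetworksModel_setOperandValue(model, 2, &none, sizeof(none));

    uint32_t opIn[3] = {0, 1, 2}, opOut[1] = {3};
    ANeuralNetworksModel_addOperation(model, ANEURALNETWORKS_ADD, 3, opIn, 1, opOut);
    uint32_t modelIn[2] = {0, 1};
    ANeuralNetworksModel_identifyInputsAndOutputs(model, 2, modelIn, 1, opOut);
    ANeuralNetworksModel_finish(model);

    ANeuralNetworksCompilation* comp = nullptr;
    ANeuralNetworksCompilation_create(model, &comp);
    ANeuralNetworksCompilation_finish(comp);

    ANeuralNetworksExecution* exec = nullptr;
    ANeuralNetworksExecution_create(comp, &exec);
    ANeuralNetworksExecution_setInput(exec, 0, nullptr, a, 4 * sizeof(float));
    ANeuralNetworksExecution_setInput(exec, 1, nullptr, b, 4 * sizeof(float));
    ANeuralNetworksExecution_setOutput(exec, 0, nullptr, out, 4 * sizeof(float));

    ANeuralNetworksEvent* done = nullptr;
    ANeuralNetworksExecution_startCompute(exec, &done);
    int status = ANeuralNetworksEvent_wait(done);

    ANeuralNetworksEvent_free(done);
    ANeuralNetworksExecution_free(exec);
    ANeuralNetworksCompilation_free(comp);
    ANeuralNetworksModel_free(model);
    return status == ANEURALNETWORKS_NO_ERROR;
}
```

Notably, the caller never names the hardware: the runtime decides whether a model runs on the CPU, the GPU or a dedicated accelerator.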

In short, the Pixel Visual Core delivers better camera performance by drawing on the device’s machine learning capabilities, and it outperforms a general-purpose chipset because it has finer-grained control over the camera hardware.

The AI-based software uses Halide, a programming language designed for image processing and computational photography that makes image processing work considerably faster. It also uses TensorFlow, an open-source software library designed for machine learning. With the Android 8.1 update, the feature works as a cross-platform machine learning framework for mobile devices, meaning it can serve third-party apps built not on TensorFlow but on other frameworks such as Caffe2.
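Halide’s central idea is to separate what an image pipeline computes from how that computation is scheduled onto hardware, so the same algorithm can be retargeted to a CPU, a GPU or an accelerator. A minimal sketch of that separation, using a generic 3x3 box blur rather than anything from Google’s actual HDR+ pipeline:

```cpp
#include "Halide.h"
using namespace Halide;

int main() {
    // Algorithm: a separable 3x3 box blur, written as pure definitions.
    // This half of a Halide program says only *what* is computed.
    ImageParam input(UInt(16), 2, "input");
    Var x("x"), y("y"), xi("xi"), yi("yi");
    Func blur_x("blur_x"), blur_y("blur_y");
    blur_x(x, y) = (input(x, y) + input(x + 1, y) + input(x + 2, y)) / 3;
    blur_y(x, y) = (blur_x(x, y) + blur_x(x, y + 1) + blur_x(x, y + 2)) / 3;

    // Schedule: *how* it runs -- tiled, vectorized and parallelized here.
    // Retargeting the pipeline to other hardware means changing only
    // these lines, not the definitions above.
    blur_y.tile(x, y, xi, yi, 256, 32)
          .vectorize(xi, 8)
          .parallel(y);
    blur_x.compute_at(blur_y, x).vectorize(x, 8);

    // Ahead-of-time compile to a static library an app can link against.
    blur_y.compile_to_static_library("blur3x3", {input}, "blur3x3");
    return 0;
}
```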

Google’s new chipset can process images faster than a typical smartphone chipset because it can perform many more mathematical operations in the same amount of time. This means regular camera tasks such as noise reduction, color processing, sharpening and other data processing will run more efficiently on the Pixel 2 than on a typical smartphone.
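To put rough numbers on that claim: even a basic 3x3 sharpening filter, sketched below in plain C++ purely for illustration, costs about nine multiply-adds per pixel, so at the Pixel 2’s 12.2-megapixel resolution a single frame takes on the order of 100 million operations — exactly the kind of arithmetic a dedicated parallel core handles better than a general-purpose CPU.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Illustrative only: a 3x3 sharpening convolution over a grayscale image.
// Each interior pixel costs nine multiply-adds, so total work scales
// directly with resolution -- the arithmetic an image core parallelizes.
std::vector<uint8_t> sharpen(const std::vector<uint8_t>& img, int w, int h) {
    static const int k[3][3] = {{0, -1, 0}, {-1, 5, -1}, {0, -1, 0}};
    std::vector<uint8_t> out(img);  // border pixels are left unmodified
    for (int y = 1; y < h - 1; ++y) {
        for (int x = 1; x < w - 1; ++x) {
            int acc = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    acc += k[dy + 1][dx + 1] * img[(y + dy) * w + (x + dx)];
            out[y * w + x] = static_cast<uint8_t>(std::clamp(acc, 0, 255));
        }
    }
    return out;
}
```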

In addition, Google is using computationally intensive image processing algorithms to provide more photography features to users, and these algorithms can run even better on the Pixel Visual Core than they did on the processor in the first-generation Pixel handsets. This means tasks such as estimating the depth of field in an image and selectively blurring parts of it to create a bokeh effect can now take place much faster.
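As an illustration of the principle only — real portrait modes combine dual-pixel depth estimation with learned subject segmentation, and the per-pixel depth map, names and thresholds below are hypothetical — a depth-based background blur can be sketched as:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Hypothetical sketch: keep pixels whose estimated depth is near the
// subject sharp, and replace background pixels with a local average
// (a crude blur). The depth map is assumed given, one value per pixel.
std::vector<uint8_t> fakeBokeh(const std::vector<uint8_t>& img,
                               const std::vector<float>& depth,
                               int w, int h,
                               float subjectDepth, float tolerance) {
    std::vector<uint8_t> out(img);
    const int r = 4;  // blur radius for out-of-focus regions
    for (int y = r; y < h - r; ++y) {
        for (int x = r; x < w - r; ++x) {
            const int i = y * w + x;
            if (std::abs(depth[i] - subjectDepth) <= tolerance)
                continue;  // in-focus pixel: left untouched
            int sum = 0, n = 0;
            for (int dy = -r; dy <= r; ++dy)
                for (int dx = -r; dx <= r; ++dx) {
                    sum += img[(y + dy) * w + (x + dx)];
                    ++n;
                }
            out[i] = static_cast<uint8_t>(sum / n);
        }
    }
    return out;
}
```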

According to the company’s blog post on the launch of the Android 8.1 developer preview, the chipset now supports third-party apps, which means the Pixel 2’s HDR+ mode is available to any third-party camera app and no longer only to the Pixel camera.