Pixel 3: Why a single camera lens takes such good portrait photos

Nowadays, when you buy a mobile phone, you often look at the number of built-in lenses. The more, the better, right? With its Pixel devices, Google shows that more cameras do not necessarily produce better pictures. In a blog post, the company explains the technology behind this.

While most modern smartphones come with two or more lenses on the back, Google continues to rely on a single camera for its Pixel phones. Multiple lenses can, at least in theory, capture more depth information and separate the background from the foreground, which looks especially nice in portrait mode. Nevertheless, the Google phones manage to shoot terrific portraits time and again, and the new Pixel 3 even tops its predecessor in some respects.

Why is that? Google explains this in a new (very technical) blog post and also shows why its current flagship can almost completely eliminate some of the errors that occurred on the Pixel 2. For shots in portrait mode, the predecessor relied on a phase-detection system: the individual PDAF pixels of an image were compared with each other, and by detecting a parallax shift the system could calculate the actual depth information.
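The parallax idea behind this can be sketched in a few lines. The following is a toy illustration, not Google's pipeline: two one-dimensional "PDAF views" are shifted copies of each other, a brute-force search for the shift that minimizes the sum of absolute differences recovers the disparity, and depth then follows from the standard relation depth = focal length × baseline / disparity (all numbers here are invented).

```python
# Toy illustration of depth from parallax between two PDAF-like views.
# All constants are made up for the example; this is not Google's code.

def best_disparity(left, right, max_shift):
    """Find the integer shift of `left` that best matches `right`
    by minimizing the sum of absolute differences (SAD)."""
    best, best_cost = 0, float("inf")
    for shift in range(max_shift + 1):
        # Compare the overlapping region after shifting.
        cost = sum(abs(l - r) for l, r in zip(left[shift:], right))
        if cost < best_cost:
            best, best_cost = shift, cost
    return best

# Two views of the same 1-D "scene", offset by a parallax of 3 pixels.
scene = [0, 0, 5, 9, 5, 0, 0, 0, 0, 0, 0, 0]
left = scene
right = scene[3:] + [0, 0, 0]   # same scene, shifted by 3 pixels

d = best_disparity(left, right, max_shift=5)

# Depth is inversely proportional to disparity: depth = f * B / d.
focal_px, baseline_mm = 1000, 1.0   # hypothetical camera constants
depth_mm = focal_px * baseline_mm / d
print(d, depth_mm)
```

Because the baseline between PDAF sub-pixels on a single sensor is tiny (around a millimetre), the resulting disparities are small and noisy, which is exactly why the naive approach produces artifacts.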

Okay, that sounds quite complicated, and it is precisely this complexity that causes the Pixel 2 to occasionally produce artifacts in portrait shots. To fix this, Google now uses machine learning to correct the depth estimated by the PDAF system.

"Specifically, we trained a convolutional neural network written in TensorFlow that takes the PDAF pixels as input and learns to predict depth. It is this new and improved ML-based method of depth estimation that powers the portrait mode on the Pixel 3."
Rahul Garg, research scientist at Google
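To get a feel for what "a network that takes the PDAF pixels as input" means, here is a minimal sketch of a single convolutional layer over a two-channel (left/right view) signal. The filter weights are hand-picked rather than learned, and the architecture is far simpler than the network Google describes; it merely shows how a convolution over both PDAF channels can respond to left/right disagreement, which is the raw parallax cue a real model would learn from.

```python
# Minimal sketch of one 1-D convolutional layer over two-channel PDAF input.
# Hand-picked weights, toy data; not Google's model or training code.

def conv1d_2ch(left, right, w_left, w_right, bias=0.0):
    """1-D convolution over a two-channel signal (left/right PDAF views).
    Returns one output value per valid window position."""
    k = len(w_left)
    out = []
    for i in range(len(left) - k + 1):
        acc = bias
        for j in range(k):
            acc += w_left[j] * left[i + j] + w_right[j] * right[i + j]
        out.append(acc)
    return out

# A filter that responds to left/right disagreement: a crude parallax cue.
w_l = [1.0, 1.0, 1.0]
w_r = [-1.0, -1.0, -1.0]

left  = [0, 0, 4, 4, 0, 0]
right = [0, 4, 4, 0, 0, 0]   # the same edge, shifted by one pixel

print(conv1d_2ch(left, right, w_l, w_r))
```

The output is zero where the two views agree and non-zero around the shifted edge; a trained network stacks many such layers and learns filters that turn these cues into a full depth map.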

The fascinating thing about this approach is the way Google trained the algorithm. The engineers built a rig of five Pixel 3 phones, and a Wi-Fi-based software tool ensured that all devices captured images simultaneously. The smartphones were lined up in a row, so a subject was recorded from several slightly different angles at once. This allowed Google's engineers to train the algorithm to predict depth in pictures more accurately.
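Why does a row of phones help? Each pair of cameras in the rig has a much larger baseline than the PDAF sub-pixels, and each pair yields an independent depth estimate via depth = focal × baseline / disparity; combining the pairs gives a more reliable reference depth to train against. A toy version of that idea, with invented numbers standing in for real measurements:

```python
# Toy multi-view depth: each camera pair in a rig gives an independent
# estimate depth = focal_px * baseline_mm / disparity_px, and averaging
# the estimates reduces noise. All numbers are invented for illustration.

focal_px = 1000.0
true_depth_mm = 500.0

# Five cameras in a row, 10 mm apart: baselines of cameras 1-4 vs. camera 0.
baselines_mm = [10.0, 20.0, 30.0, 40.0]

# Ideal disparities would be f * B / depth; add small "measurement" errors.
errors_px = [0.4, -0.3, 0.2, -0.1]
disparities_px = [focal_px * b / true_depth_mm + e
                  for b, e in zip(baselines_mm, errors_px)]

# Each pair's depth estimate, then a simple average as the fused reference.
estimates = [focal_px * b / d for b, d in zip(baselines_mm, disparities_px)]
fused = sum(estimates) / len(estimates)
print(estimates, fused)
```

Even with per-pair errors of a few pixels' worth of disparity, the fused estimate lands close to the true depth, which is the kind of reference signal a training pipeline needs.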

Google has released a digital photo album comparing the old and new technology. Here you can see how the camera technology has improved over the years.
