When the Pixel 2 launched, Google used a neural network together with the camera's phase-detect autofocus to determine what's in the foreground. That approach doesn't hold up in every scene, however. Google tackled this with the Pixel 3 by teaching a neural network to predict depth from a range of cues, including the typical size of objects and the sharpness of different points in the scene.
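The phase-detect cue mentioned above comes from two slightly offset views of the scene, and the horizontal shift (parallax) between them hints at depth. As a rough, hypothetical illustration of that idea only (not Google's actual pipeline), here is a toy block-matching sketch; `naive_disparity` is a made-up helper name:

```python
import numpy as np

def naive_disparity(left, right, patch=5, max_shift=8):
    """Toy block matching: for each pixel, find the horizontal shift
    that best aligns a small patch from the left view with the right
    view. A larger shift suggests a closer object."""
    h, w = left.shape
    half = patch // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half - max_shift):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            # Sum-of-squared-differences cost for each candidate shift
            costs = [
                np.sum((ref - right[y - half:y + half + 1,
                                    x + d - half:x + d + half + 1]) ** 2)
                for d in range(max_shift + 1)
            ]
            disp[y, x] = int(np.argmin(costs))
    return disp
```

A real phone fuses a cue like this with learned priors (object size, defocus blur) in a neural network; this sketch only shows why two offset views carry depth information at all.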
Training this neural network required some creative technology, Google said.
While Pixel phones still rely on a single rear camera for photos (which limits their photographic options), this illustrates the advantage of using AI: it can improve image quality without tying those improvements to hardware upgrades.