The potential of a dual-camera system in the iPhone 7 Plus

I’ve been seeing tweets implying that the dual-camera system on the iPhone 7 Plus would be great for 3D photography and could possibly feed into a VR system at some point in the future. Ever since, I’ve wanted to read more about how the dual-camera system would help usher in 3D, and I came across this blog post by Shutterstock’s CEO, Jon Oringer, that sheds some light on the matter:

A flat lens right in front of a sensor (like a typical camera phone lens) doesn’t optically produce [Depth Of Field]. Today’s camera phones don’t have the ability to measure distance, so they can’t digitally re-create the DOF drama that a conventional lens does on its own. This next photo is more like one taken with a camera phone: Most of the image is in focus and there is little depth or drama to the image.

Just as our two eyes work together to detect depth, two lenses do the same. By using the disparity of pixels between two lenses, the camera processor can figure out how far away parts of the image are. […]

The magic is how software takes information from the two lenses and processes it into an image. Between the extra data collected from this new hardware, and the advancement of machine vision technology, the new iPhone camera is going to be incredible. Depth of Field is one of the last features necessary to complete the full migration from handheld camera to camera phone. Soon both amateur and professional photographers will only need to carry their mobile devices.
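
To make the idea of pixel disparity a little more concrete, here’s a rough sketch (in Swift, since we’re talking iPhones) of the textbook triangulation behind it: the farther away a point is, the less it shifts between the two images, and the distance falls out of the focal length, the separation between the lenses, and that shift. The focal length, baseline, and disparity numbers below are entirely made up for illustration, not the iPhone’s actual specs.

```swift
/// Estimated distance (in metres) to a point seen by both cameras,
/// under a simple pinhole-camera model.
/// - Parameters:
///   - focalLengthPx: focal length expressed in pixels
///   - baselineM: separation between the two lenses, in metres
///   - disparityPx: how far the point shifts between the two images, in pixels
func depth(focalLengthPx: Double, baselineM: Double, disparityPx: Double) -> Double? {
    guard disparityPx > 0 else { return nil } // no shift means effectively at infinity
    return focalLengthPx * baselineM / disparityPx
}

// Made-up numbers: a 2,800 px focal length and a lens separation of about 1 cm.
// A bigger shift between the two images means a closer object.
let near = depth(focalLengthPx: 2800, baselineM: 0.01, disparityPx: 90) // roughly 0.31 m
let far  = depth(focalLengthPx: 2800, baselineM: 0.01, disparityPx: 9)  // roughly 3.1 m
print(near ?? .nan, far ?? .nan)
```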

Let me illustrate what I’ve understood with an example: Say there are two poles in front of you, one a foot away and the other ten feet away. In a dual-camera system, one camera can approximate the (relative) distance of the pole a foot away from you and the other can do the same for the one ten feet away. The first camera (the one with the shorter focus) focuses on the first pole, so that pole must be the closer one, implying the second is farther away, which is verified by the fact that the second camera can easily focus on it.

Now, if you were to place more poles between these two, each a foot apart, working out how far away each pole is becomes a question of how in-focus or out-of-focus it appears to each camera. (Allow me to cook up some arbitrary numbers here.) A pole that is 90% out of focus in the first, near-focused camera and 10% out of focus in the second camera is the second-furthest pole.
In a single-camera system you could only measure the fact that an object is 10% out of focus. Whether that means the object is further away from you or nearer to you (by the equivalent distance of that 10%) wouldn’t be as easy to determine¹. A dual-camera system, in effect, gives you the ability to measure in the third dimension: length and breadth, and now depth.
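
Here’s a toy sketch of that reasoning, under the same over-simplification: it pretends blur grows linearly with how far an object sits from a camera’s focal plane (real systems work off pixel disparity rather than defocus), and the focus distances and blur readings are as arbitrary as the percentages above. The point is only that a single blur reading is ambiguous, while two readings from differently focused cameras pin the distance down.

```swift
/// Distances (in feet) that are consistent with one camera's blur reading:
/// the object could sit that far in front of or behind the focal plane.
func candidates(focusDistance: Double, blur: Double) -> [Double] {
    [focusDistance - blur, focusDistance + blur].filter { $0 > 0 }
}

/// Keeps the distance both cameras agree on (the ambiguity a single camera can't resolve).
func estimateDistance(nearCamBlur: Double, farCamBlur: Double,
                      nearFocus: Double = 1, farFocus: Double = 10) -> Double? {
    let nearCandidates = candidates(focusDistance: nearFocus, blur: nearCamBlur)
    let farCandidates = candidates(focusDistance: farFocus, blur: farCamBlur)
    // Of all candidate pairings, keep the one where the two readings (nearly) coincide.
    let pairs = nearCandidates.flatMap { a in farCandidates.map { b in (a, b) } }
    guard let best = pairs.min(by: { abs($0.0 - $0.1) < abs($1.0 - $1.1) }) else { return nil }
    return (best.0 + best.1) / 2
}

// A pole 8 ft away: the far-focused camera reads a blur of 2, which on its own
// could mean 8 ft or 12 ft; the near-focused camera's reading of 7 is only
// consistent with 8 ft, so together they settle it.
print(estimateDistance(nearCamBlur: 7, farCamBlur: 2) ?? .nan) // 8.0
```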

Again, this is an over-simplified illustration of what I’ve understood. I could be wrong; I’m not very familiar with the academic side of optics. I’d be happy to correct any bit that I got wrong.

(Back in 2011, HTC released a phone called the HTC Evo 3D that had a dual-camera system and allowed you to capture a 3D image and view that image on its auto-stereoscopic display. I suppose it was a clunky experience; it never really took off.)


  1. Perhaps you could, by calculating the time light takes to bounce off an object: the further the distance, the longer the light takes to reach the camera. I assume this is how laser-assisted focus systems, such as the one on LG’s G3, work.
    Also note that Google’s camera app used to figure out the relative distance of objects through a single-camera system too, but you needed to move the camera around your object as you would for a panorama. ↩︎
