These Aren’t The Cameras You’re Looking For: How Apple Just Killed Smartphones With A Single New Feature

 

“There are 5 catastrophic failures in 3D production”

Around 2010/2011, I was advising the renowned director Ang Lee from across a small conference table in the Beverly Hills office of the 3D technology startup RealD as he was going into production on Life of Pi 3D. It was an honor to be there with him and his team.

We discussed technical elements of 3D production throughout the day, specifically focusing on framing and depth.

Life of Pi 3D went on to win multiple Academy Awards. Mr. Lee won the Oscar for Best Director.

I was at RealD for 6 years, first raising capital ($36M of $90M total), then building RealD PRO – the professional division of RealD that serviced customers from Disney to Lockheed Martin, BMW and the NSA with a suite of visual technology.

Most people know RealD for their 3D technology and – like it or not – 3D movies.

RealD grew from $0 in 2007 to $280M and an IPO on the NYSE in 2010 by taking industrial technology developed at two little companies – Stereographics (San Mateo, CA / Lenny Lipton), and ColorLink (Boulder, CO / Gary Sharp) – and applying the technology to entertainment applications.

What most people don’t know is RealD is not a 3D company.

RealD is a light technology company. Their scientists and technologies manage, manipulate and filter light in a variety of ways. The company’s products are used in everything from F/A-18 Fighter Jet HUDs to virtual reality CAVEs. Customers include Ford, Boeing, Pfizer, the US Navy, Lockheed Martin, BMW, Exxon, BP, and many others. The technology has been deployed into many classified applications (if you think drones are cool, you wouldn’t believe what is actually in use by some of these companies/organizations), and used extensively in highly elaborate software from companies like BAE Systems.

I was involved in all of these industries at RealD, working on confidential projects involving photogrammetry, virtual reality, augmented reality, and other visual imaging solutions. During this time I became intimately familiar with the operations of multi-aperture cameras.

I deployed RealD technology for stereoscopic neurosurgery imaging where doctors use VR/AR and 3D in real time performing laparoscopic neurosurgery. I’ve walked through US Navy aircraft carrier cabins – virtually building the most advanced aircraft carrier in VR prior to welding a single sheet of steel. I’ve seen Ford redesigning vehicle interiors using haptic technology, Pfizer building molecular models of the next great pharmaceutical in stereo 3D, theme parks around the world deploying RealD technology for virtual/interactive rides, among others (some confidential, related to classified operations).

Across swaths of industries, 3D/VR/AR has been in use for over 30 years and I was in the center of it, working with many of these companies directly, advising, selling and deploying some of the most advanced imaging technology the world has ever seen (and has never seen publicly).

This is why I am shocked that the new iPhone 7+, with the most groundbreaking visual technology feature I’ve ever seen – dual camera lenses – has not erupted in the press.


(the iPhone 7+ dual camera lens feature is hugely significant)

Deploying dual camera lenses in a consumer product is significant because the cameras can act as a visual sensor beyond what a single imaging chip can do.

With two camera sensors, an iPhone will be able to:

-measure distance
-map objects
-capture 3D images
-create parallax
-simulate depth of field
-stream VR

This puts millions of dollars of technology in the hands of developers and consumers everywhere, for practically pennies.

This technology enables almost infinite possibilities.

How?

With dual imaging chips (behind separate lenses), each chip can capture a separate view of the same scene – more pixels, but different pixels, from slightly different angles. By comparing the two views, developers can derive all of the information listed above.

For example, knowing the distance between the lenses (the interaxial), a developer could write a comparison program in which the pixel offset of an object between the two frames is measured, and the distance to the object determined very accurately (in Boy Scouts we learned to do this on maps with a compass – it’s called triangulation).
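That triangulation can be sketched in a few lines. This is a toy illustration assuming an idealized pinhole-camera model; the focal length, lens spacing, and pixel offset below are made-up example values, not actual iPhone 7+ specifications:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic stereo triangulation: distance Z = f * B / d,
    where f is focal length in pixels, B is the interaxial
    (lens-to-lens spacing) in meters, and d is the pixel
    offset (disparity) of the object between the two frames."""
    if disparity_px <= 0:
        raise ValueError("zero/negative disparity: object at infinity or bad match")
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: 2000 px focal length, 10 mm between lenses,
# and an object that shifts 20 px between the left and right images:
z = depth_from_disparity(focal_px=2000, baseline_m=0.010, disparity_px=20)
print(f"estimated distance: {z:.2f} m")  # 2000 * 0.010 / 20 = 1.00 m
```

Note how the nearer the object, the larger the pixel offset – which is exactly why close objects can be measured more precisely than distant ones.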

Now, imagine that you can activate both camera lenses simultaneously, program the imaging software to correct for the catastrophic issues that plague stereo capture, and boom – you have a VR streaming device.

As Apple has touted, the additional lens also enhances images by creating user-chosen variable depth (depth of field). Well, if you can create a depth field, you can also use that depth for judgment – your phone becomes a third-eye sensor with capabilities similar to LIDAR. In other words, Apple just MVP’d imaging to leapfrog LIDAR and make mapping with sensors a standard feature in a phone. Think about that for a second. The iPhone could literally be used to calculate and advise you on movement-oriented decisions using these dual imaging chips.
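Building that depth field boils down to finding, for every pixel in one image, its match in the other – the pixel offsets form a disparity map, which the triangulation above converts to distances. Here is a deliberately crude one-scanline sketch of the idea (block matching by sum of absolute differences); real pipelines like Apple’s work on calibrated, rectified full images with far more sophisticated matching:

```python
def disparity_scanline(left, right, window=1, max_d=8):
    """Brute-force block matching along one row of pixels:
    for each pixel in the left image, find the horizontal shift
    into the right image that minimizes the sum of absolute
    differences over a small window. The shift is the disparity."""
    n = len(left)
    disp = [0] * n
    for x in range(window, n - window):
        best_d, best_cost = 0, float("inf")
        for d in range(0, min(max_d, x - window) + 1):
            cost = sum(abs(left[x + k] - right[x - d + k])
                       for k in range(-window, window + 1))
            if cost < best_cost:
                best_cost, best_d = cost, d
        disp[x] = best_d
    return disp

# A bright "object" that appears 3 px further right in the left view:
left  = [0] * 5 + [9, 9, 9] + [0] * 5
right = [0] * 2 + [9, 9, 9] + [0] * 8
print(disparity_scanline(left, right))
# pixels covering the object report a disparity of 3
```

Run the disparity map through the triangulation formula and every pixel gets a distance – that per-pixel distance is what lets a phone judge motion and space the way a LIDAR unit does.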

Clip an iPhone to some wheels and it becomes a robot with depth vision. Clip the iPhone to a household vacuum cleaner and it could clean your home. It could cut your lawn as it drives an automated lawn mower. It could look at and more accurately diagnose a medical condition, identify objects, build virtual models of objects that could be 3D printed accurately…the sky is the limit.

This is not a simple feature addition, this is the most revolutionary imaging technology ever to be added to an Apple product. Perhaps the most advanced imaging technology to be added to any consumer product ever.

Deploying technology, like Oculus is doing with VR, into the hands of millions of people reduces costs drastically, enabling access to otherwise cost-prohibitive technologies for infinite applications. Apple has just done this with their dual-camera feature.

The possibilities are endless when consumer products can mimic capabilities of expensive industrial systems.

Industries currently using VR, AR & 3D include:

– Education
– Medicine
– Geosciences
– Government & Defense
– Entertainment
– Design/Development

Right now, people are coming to work every day, donning VR headsets and 3D goggles and stepping into CAVEs where they use heavy technology to execute industrial applications.

This has been the case since the early 1980s, when Lenny Lipton – who co-wrote “Puff The Magic Dragon” – used his royalties to invent the first digital 3D technology, using a CRT monitor frame-synced to LCD shutter glasses via an infrared emitter. The government and military quickly picked up the technology from his company Stereographics, and a totally new business was born. This is the next generation of that first flame.

As people realize the potential of VR/AR and 3D, we will see unexpected applications change the way we live, which is why the September 7th, 2016 announcement of the Apple iPhone 7+ is so significant.

Over the next 2 years we’re going to see an explosion of these sensing capabilities brought about by the addition of the second imaging chip. I believe this is as important as, if not more important than, the launch of the first iPhone. After all, if a picture is worth a thousand words, a 3D virtual augmented sensor image must be worth at least 10X more.