Until recently, demand for 3D sensors and 3D data processing technology has been driven largely by the consumer electronics sector, where 3D sensors are increasingly bundled with smartphones. Demand has also been rising in the automotive industry, where 3D technologies are used as the main enabler of the emergent self-driving vehicle segment. But, after being somewhat of a niche area in enterprise applications, the use of 3D data capture and analytics is beginning to boom across many industries, including healthcare, retail, and transportation and logistics (T&L). As a result, the global 3D sensor market is estimated to exceed $5 billion (US) by 2024.
The fast-dropping prices of 3D and other edge artificial intelligence (AI) sensors, stemming from increasing competition among 3D sensor providers, are making enterprise utilization more feasible. For example, Waymo, Alphabet Inc.’s self-driving technology subsidiary, announced in March 2019 that it will start selling its proprietary lidars (a type of 3D sensor and the main enabler of self-driving vehicles) to partners, a move that should help further depress prices. There do not appear to be any fundamental technical barriers to bringing the cost of long-range lidars down from $75,000 per sensor to less than $1,000 in the next few years.
In other words, 3D sensors are quickly becoming a very real option for industrial applications. But should you invest in 3D technologies now? Or wait a little longer? Let’s first examine what 3D sensors can do and how those capabilities differ from camera technologies that may be in place today.
How do optical 3D sensors work?
Cameras generate a 2D array of pixels, where each pixel represents the grayscale or color value of the corresponding area in the scene. In contrast, a 3D sensor generates a 2D array where each pixel represents the distance of the corresponding point in the scene to the sensor. But not all 3D sensors work the same way, just as not all cameras are the same.
One common way to extract 3D information from a scene is to use stereo vision. In fact, this is how our eyes work. In stereo vision, two cameras are used to obtain two differing views of a scene. By matching a pixel from one camera to the corresponding pixel obtained from the other camera and using information about how the cameras are set up, the distance to the actual point can be calculated. One drawback of 3D sensing using stereo vision is that the pixel matching is computationally expensive, which makes it unsuitable for edge applications where computational power is limited. Another is that stereo systems may not work well in environments that are cluttered or contain many objects with similar colors, e.g. parcels, which generally tend to be brown.
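Once two pixels have been matched, the depth calculation itself is simple triangulation. Here is a minimal sketch in Python, assuming an idealized, rectified camera pair; the focal length, baseline, and disparity values are purely illustrative:

```python
# Depth from stereo disparity for a rectified camera pair:
#   Z = f * B / d
# where f is the focal length in pixels, B is the baseline (distance
# between the two cameras) in meters, and d is the disparity (how far
# apart the matched pixel appears in the two images) in pixels.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Return depth in meters for a matched pixel pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: a 700-pixel focal length, a 12 cm baseline, and a point
# matched 40 pixels apart puts the point about 2.1 m from the cameras.
z = depth_from_disparity(focal_px=700, baseline_m=0.12, disparity_px=40)
print(z)  # 2.1
```

Note that the formula is the easy part; the computational expense mentioned above comes from finding the matching pixel pairs in the first place.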
Another type of 3D sensor uses structured light illumination. In this approach, a specific pattern of infrared light, generally consisting of dots, is projected onto the scene. Objects in the scene distort the pattern which is then captured using an infrared camera. Comparing the distorted pattern to the known projected one allows the sensor to calculate the depth of each point. This approach is sometimes called active stereo vision because it is similar to using stereo vision but uses a single camera and a light source.
The third common type of 3D sensing methodology is time of flight (ToF). ToF sensors capture 3D information by measuring the round-trip time of an artificial light signal provided by a laser or an LED from the sensor to a point in the scene and back.
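The ToF principle reduces to one equation: since the light travels to the object and back, distance is half the round-trip time multiplied by the speed of light. A toy illustration (real ToF sensors measure phase shifts or very short pulses in hardware, so the numbers here are purely for intuition):

```python
# Time-of-flight distance: d = c * t / 2, where t is the measured
# round-trip time of the light signal and c is the speed of light.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_round_trip(round_trip_s):
    """Return distance in meters; divide by 2 because the light
    travels out to the point and back to the sensor."""
    return SPEED_OF_LIGHT * round_trip_s / 2.0

# Example: a 20-nanosecond round trip corresponds to roughly 3 meters.
d = distance_from_round_trip(20e-9)
print(round(d, 3))  # ~2.998
```

The tiny time scales involved (a few nanoseconds per meter) are why ToF sensors need specialized illumination and timing circuitry rather than an ordinary camera.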
Pros and cons of 3D sensors over cameras
There are many enterprise applications that stand to benefit from 3D sensor technologies today. (I’ll provide some specific examples in my next blog post in the coming weeks.) However, some may argue that standard cameras remain more accessible to many organizations for the following reasons:
However, I would argue that 3D sensors have a number of important advantages over cameras. 3D sensor technologies…
For those three reasons alone, I strongly recommend that you evaluate the potential 3D sensor applications for your business as you look for ways to improve operational planning or asset monitoring. If you’re worried about the current cost of 3D sensors compared to cameras, just remember that – as with any technology – you must consider the total cost of ownership (TCO) as well as the long-term ROI of the various technology options. This will often boil down to your individual business case. There are some very specific scenarios in which 3D sensors will almost always be the smartest option, especially in the transportation & logistics sector. I’ll discuss those in more detail in my next post.
###
Editor’s Note: Tune in to Your Edge in the coming weeks to find out if 3D sensors are a smart investment for your business, based on the proven 3D sensor enterprise applications that Cuneyt reveals.
###
For the last 14 years, Cuneyt Taskiran has been working on developing innovative machine learning and video analytics solutions that solve complex problems for Zebra customers. Over the past 10 years, he has focused on designing and productizing Zebra’s Enterprise Asset Intelligence solutions for Intelligent Automation in the logistics and retail sectors.
In his current role as Strategic Business Development Manager, Mr. Taskiran is responsible for developing and assessing new business opportunities, as well as incubating emerging Zebra technology, in Zebra’s Chief Technology Office.
Prior to Zebra, Mr. Taskiran worked as a technical project lead and a principal research engineer at Motorola Enterprise Solutions, where he led development of video analytics systems for consumer and public safety applications. Zebra acquired this Motorola business in 2014.
Mr. Taskiran holds a PhD in Electrical and Computer Engineering and an MA in Linguistics both from Purdue University.