After the year we’ve had, most people would love the ability to see into the future. Who doesn’t want to know what’s coming next so that they can make better decisions right now? But, short of a crystal ball, organizations must find a way to “live in the moment.” That’s where computer vision comes in.
If you can see what’s happening within your operation at this very moment – and actually understand why it’s happening – you can either stop it or amplify it, depending on whether it’s poised to lead to a negative or positive outcome. The challenge, of course, is that as humans, we can’t see or process everything that’s happening within the four walls in real time with our own eyes. But we can sense, analyze and act on nearly everything occurring around us if we have computer vision.
That’s part of the reason why Zebra acquired Cortexica one year ago today.
Of course, computer vision – like most technologies – has matured quite a bit in the last 12 months, and its value has burgeoned in light of the challenges and demands that nearly every industry is facing in the wake of the COVID-19 pandemic. So, we’ve asked Matt Hayes, Leader of Incubation in Zebra’s Chief Technology Office (CTO), to answer some of the frequently asked questions we receive about computer vision’s purpose, potential applications and contributing role in the realization of Zebra’s Enterprise Asset Intelligence vision. Read what he had to say:
Your Edge Blog Team: Can you start by answering probably the most fundamental question: what is computer vision?
Matt: In short, computer vision is the ability to analyze images and understand what might be in – or happening within – the image. A more nuanced answer starts to explain the different techniques by which computer vision is applied. One can think about this as classic computer vision (CV) and deep learning (DL) computer vision. Classic CV takes a rules-based approach, much like how other software is developed. It applies a variety of predefined rules to a problem or series of problems. Deep learning CV is a system that identifies and understands patterns as a way to learn the rules, without the same need for predefined code. In place of predefined rules, annotated images are given to the DL system so it understands what patterns, objects or people it should try to find. Both approaches have their place, depending on the use case. For instance, classic CV tends to solve industrial machine vision challenges in controlled scenarios, like checking specific product quality, whereas deep learning CV tends to solve challenges that involve a far wider and more variable set of images.
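To make the distinction concrete, here is a minimal, hypothetical sketch of the classic, rules-based approach Matt describes: a hand-written brightness-threshold rule that flags a "defect" in a grayscale image, with the deep-learning alternative noted in comments. The function names, threshold and pass criterion are illustrative assumptions for this sketch, not Zebra APIs or any real inspection rule.

```python
# Classic CV: a hand-coded, rules-based check (illustrative only).
# A "defect" here is any pixel brighter than a predefined threshold --
# the rule is written by an engineer, not learned from annotated images.

def defect_fraction(image, threshold=200):
    """Return the fraction of pixels exceeding the brightness threshold.

    `image` is a 2-D list of grayscale values (0-255).
    """
    total = sum(len(row) for row in image)
    bright = sum(1 for row in image for px in row if px > threshold)
    return bright / total

def passes_inspection(image, threshold=200, max_fraction=0.05):
    """Classic-CV rule: pass if at most 5% of pixels are 'hot spots'."""
    return defect_fraction(image, threshold) <= max_fraction

# A deep learning CV system would replace these hand-written rules with a
# model trained on annotated example images, learning what a defect looks
# like instead of being told.

good = [[50, 60], [55, 52]]      # uniformly dark: no hot spots
bad = [[50, 250], [240, 245]]    # three of four pixels above threshold

print(passes_inspection(good))   # True
print(passes_inspection(bad))    # False
```

The point of the contrast: the classic rule is transparent and works well in a controlled scenario, but every new variation (lighting, product, camera angle) means another hand-written rule, which is where the deep learning approach takes over.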
Your Edge Blog Team: Computer vision was named as one of the most mature technologies in the Gartner Hype Cycle for Artificial Intelligence (AI), 2019*, and other industry watchers have predicted that the global market for computer vision will be worth anywhere from $17.4 billion to $48.32 billion by 2023. In addition, a recent Deloitte survey confirmed growing interest in enterprise computer vision solutions, with 57% of U.S.-based companies reporting that they have already adopted computer vision. Why do you think computer vision is gaining such traction right now? And, why is computer vision so important in AI?
Matt: Well, let me just put it this way: it’s quite hard these days to discern fact from fiction. More and more enterprise leaders and industry experts are trying to push beyond their existing systems of record – where employees notate what happened – to systems of reality that sense what actually happened. As a result of this industry-wide dynamic, we have been seeing the confluence of AI and the Internet of Things (IoT) – also referred to as “AIoT” – which is enabling edge sensing to become smarter.
Gartner captures this wider dynamic in its analysis of the Hype Cycle for Artificial Intelligence, 2019, with many technologies ascending the curve in an effort to meet each industry’s need for smarter solutions. An important interpretive key to Gartner’s analysis is to focus on those technologies that it feels are poised to move into the “Slope of Enlightenment,” which only occurs once the facts are separated from the fiction – or should I say the “reality is distinguished from the hype” – and it becomes clear what a technology can truly solve today and how it can expand from there.
Interestingly, one of the first technologies to move into the “Slope of Enlightenment” was computer vision. CV is quite important to the broader superset of AI. As robots continue to advance and perceive their environments in order to navigate a busy retail store or gauge the dynamics of a warehouse, the ability to see, understand and adapt becomes table stakes – these are essential “skills” that must be acquired in order to conduct even basic workflows. There are some very exciting areas of AI technology, like “reinforcement learning,” which continue to show promise for helping robots with “visual servoing,” a top priority right now. In basic terms, this is the means by which we help robots gain the hand-eye coordination and other abilities that people have but that remain fundamental challenges for robots.
CV is also a great way to glean business information from scenarios such as retail shelf conditions or frictionless checkout. We’re also seeing CV used more with personal protective equipment (PPE) to help keep people safe with the gear they wear and use. And CV’s growing adoption is apropos right now for face covering detection and for understanding social distancing to help slow the spread of COVID-19.
Your Edge Blog Team: How is computer vision being used today in various industries? And are there other computer vision applications that we can expect to see emerge in the next 12-18 months?
Matt: As organizations aspire to understand their operations with real-time vision systems, the application areas are seemingly endless. Interestingly, some of Zebra’s estimates indicate that less than 1% of closed-circuit television (CCTV) frames, or images, are analyzed for enterprise understanding and benefit. So, there are some customers who want to harness the value of their sunk investment in CCTV to better observe operations at a distance and deduce new operational key performance indicators (KPIs).
Others may leverage custom applications for industrial product quality control. For example, computer vision can help closely measure tight tolerances of quality assurance of a product on a manufacturing or assembly line.
Add in the need for automation of repetitive tasks or, in some cases, dangerous tasks, and you quickly notice the value of intelligent automation systems (i.e. robots) that use computer vision to offer a unique vantage point of real-time operations whether in an industrial or retail environment.
Your Edge Blog Team: How do these computer vision applications fit into Zebra’s Intelligent Edge Solutions vision?
Matt: Front-line workers across a host of industries are beginning to interact and partner with smart robots in supply chain, retail and healthcare environments, to name a few, which is driving the operational edge to become quite dynamic. Operational orchestration is accelerating. So, Zebra is focused on the coordination of real-time intelligent technologies, enterprise assets and inventories, and well-equipped/well-informed employees – all of which combine into what we call Intelligent Edge Solutions.
Your Edge Blog Team: Is that why Zebra acquired Cortexica a year ago?
Matt: Yes. Cortexica gave us a host of new vision-based analytics and artificial intelligence (AI) solutions to drive new user experiences and greater operational efficiencies, specifically as they relate to object recognition, image and video analysis, and visual search. As such, we can accelerate our vision, no pun intended, of providing seamless workflow orchestration solutions for large enterprises and small businesses alike.
Your Edge Blog Team: So, the Cortexica acquisition significantly enhanced the “Sense” and “Analyze” layers of Zebra’s “Sense-Analyze-Act” solution framework, right?
Matt: Yes. We gained new computer vision-based AI capabilities that are now enabling us to address a range of emerging use cases that complement our core portfolio. In fact, Cortexica’s contributions to Zebra’s portfolio are palpable. Its computer vision technology has already been integrated into our SmartSight™/Enterprise Mobile Automation (EMA50) retail shelf analytics solution, and into SmartPack™, which provides a critical view of activity occurring both at warehouse and cross-dock doors and inside the shipping container or trailer. Customers are now receiving actionable intelligence from their back-end systems versus just dashboards of data that someone must dissect and interpret.

The operational reality being captured by these computer vision-based solutions is giving organizations – and especially their front-line workers – the definitive guidance they need on what is happening in their facilities, why it is happening and what to do right now to either resolve the issue or take advantage of the opportunity. Essentially, computer vision takes the responsibility of “sensing” off the human worker. It then works with solutions such as Zebra Prescriptive Analytics or the recently acquired Reflexis One™ intelligent work platform to provide simple “next best step” instructions to that worker – whether a manager or associate – so he or she can take fast, definitive action to achieve the desired outcome. That might be a fully stocked retail store shelf, a fully staffed restaurant, an on-time shipment or something as simple as a fully satisfied customer.
Your Edge Blog Team: Have customers already started to benefit from the Cortexica acquisition and Zebra’s computer vision solutions, then?
Matt: Yes, and our AI portfolio and roadmap continue to grow with new use cases across the variety of verticals that Zebra serves. We’re also seeing our partner community begin to recognize how Zebra AI can help their enterprise applications. It’s a very exciting time.
Your Edge Blog Team: Can you address the issue of privacy as it relates to computer vision? Some may have concerns that the heavy utilization of cameras might violate people’s rights, whether customers or workers. There may also be concerns that AI’s visual recognition capabilities could allow for easy tracking, even posing threats to people’s personal safety and security if a system were to be hacked. Someone could hypothetically be followed coming or going from locations. What is Zebra doing to prioritize privacy?
Matt: Great question. As you know, Europe has implemented GDPR and California has started to move in this direction as well. These new data privacy requirements actually increase the need for smarter technologies to be applied. An example of this is human blurring, namely pixelating people’s physical images to the point that an individual or group cannot be personally identified. While the blurring effect is really for image privacy, the underlying technologies are CV-driven because we have to know what – or in this case who – to protect, frame by frame. We have products that have already implemented these protections: one example is Zebra SmartPack which uses human blurring for worker privacy purposes.
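The human blurring Matt describes can be sketched in a few lines: assuming a person detector has already supplied a bounding box for a frame, the region inside the box is block-averaged (pixelated) so the individual can no longer be identified. This toy is an illustration of the pixelation idea only – it is not Zebra’s SmartPack implementation, and the hard-coded box stands in for the frame-by-frame output of a CV detection model.

```python
def pixelate_region(image, box, block=2):
    """Blur a region of a grayscale image in place by block-averaging.

    `image` is a 2-D list of pixel values; `box` is (top, left, bottom,
    right), exclusive of bottom/right. Every `block` x `block` tile inside
    the box is replaced by its average, destroying identifying detail
    while leaving the rest of the frame untouched.
    """
    top, left, bottom, right = box
    for y in range(top, bottom, block):
        for x in range(left, right, block):
            tile = [image[j][i]
                    for j in range(y, min(y + block, bottom))
                    for i in range(x, min(x + block, right))]
            avg = sum(tile) // len(tile)
            for j in range(y, min(y + block, bottom)):
                for i in range(x, min(x + block, right)):
                    image[j][i] = avg

# In a real pipeline, a person-detection model would supply `box` for each
# frame; here the coordinates are hard-coded for illustration.
frame = [[0, 10, 20, 30],
         [40, 50, 60, 70],
         [80, 90, 100, 110],
         [120, 130, 140, 150]]
pixelate_region(frame, (0, 0, 2, 2))
print(frame[0][:2], frame[1][:2])  # the 2x2 corner is now uniform: 25s
```

This is exactly the coupling Matt points out: the blurring itself is simple image processing, but knowing *where* to blur – frame by frame, as people move – is the CV-driven part.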
Your Edge Blog Team: One last question: we’re assuming that customers are heavily influencing Zebra’s innovation strategy around CV-based platforms. Can you talk about customers’ current needs or ambitions and how they directly correlate to the engineering work underway in our labs today?
Matt: Customers have truly become partners. The level of collaboration needed to think through, and even reconstruct, workflows requires very agile and dynamic cross-organizational teams. Focusing on problem solving at the user experience level makes an AI-driven solution quite practical, as we have to be sure the technology will deliver measurable gains to users, whether in the form of operational visibility, efficiency, productivity or something else. Oftentimes, our AI research and engineering staff are directly involved in conversations and actions facilitated by these cross-organizational teams. Such direct customer engagement drives our innovation and helps our customers see the art of the possible and the reality of the practical.
What we have been forecasting, and what customers continue to see, is that aggregating user experiences at the enterprise associate level starts to require ecosystem orchestration at the group and departmental levels. This is both an AI boon and a legacy system bane. It is a boon because the on-demand economy has driven consumer expectations higher and higher, thus creating new opportunities for manufacturers, supply chain organizations and retailers. Those opportunities require them to continue to compress their operations into smaller windows, tighter tolerances and far greater flexibility. Making good decisions across the whole organization matters – a big reason Zebra sees the intelligent edge of operations as fertile AI ground to help our customers and, consequently, their end-customers. It is a bane in that sweating older legacy assets longer – to get a wider economies-of-scale base – comes at an opportunity cost to realizing a better “top line” future. Driving near-term value to offset change management, while orienting to longer-term objectives and their opportunities, seems to achieve the right balance. This is made possible by Zebra, and specifically our AI engineering staff, working closely with our customers.
###
Want to learn more about Zebra’s customer-first approach to innovation? Listen to this podcast with Chief Technology Officer Tom Bianculli.
Then check out these other insights about how advanced technology, including computer vision, can be leveraged today to improve your operational visibility, increase staff and customer engagement and improve business outcomes:
Matthew (Matt) Hayes is the leader of Solution Incubation, where he is responsible for a team of advanced research engineers in Zebra's Chief Technology Office. Matthew has more than 20 years of experience in leadership and product management positions within the technology industry. He has also served non-profit organizations as a Treasurer and Board Director.
Matthew’s leadership experience includes multi-year strategy, corporate incubation, emerging P&L, and recurring-revenue services strategies and business models. Matthew is a frequent speaker on topics such as incubation and future technologies. He is a patent-holding inventor, has authored industry articles and blogs, and has participated in incubation and technology panels.