A woman inspects equipment on a manufacturing line
By Jim Witherspoon | December 7, 2023

Deep Learning Isn’t a “Bleeding Edge” Technology, but It Can Help Stop the Bleed at the Edge of Your Business. Here’s How.

If you’re wondering, “How is deep learning already at work in the real world (i.e., my world)?” or “How could deep learning help me work in new, more efficient ways?” then you’ve come to the right place. 

Which of the following is not true about deep learning?

  • It’s a “new” technology concept/capability.

  • It’s the same thing as machine learning.

  • It requires a significant on-premises infrastructure investment (and a big team of data scientists).

If you said, “All of the above,” you’re absolutely right.

The underpinnings of deep learning (i.e., machine learning and convolutional networks) have been around for a very long time – as in almost a century. However, deep learning as we know it today didn’t rise to hashtag status until about 10 years ago, which is when I think we as humans began to realize, or perhaps accept, our limitations. The more we were pushed to work faster and do everything perfectly, the faster we realized those two ambitions don’t jibe well. 

It doesn’t matter how many people on your team are focused on completing a task, nor does it matter if they are the most experienced specialists in the world. When you must rush to make a decision – and make the right decision – the odds of getting it wrong increase. As does the cost of making that (possibly wrong) decision. 

According to a McKinsey study conducted right before the pandemic, workers at all levels were “spending 37% of their time making decisions, and more than half of this time was thought to be spent ineffectively. For managers at an average Fortune 500 company, this could translate into more than 530,000 days of lost working time and roughly $250 million of wasted labor costs per year.”
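To make headline numbers like those concrete, here is the shape of the arithmetic behind them. Every input below except the two study percentages is a hypothetical pick of mine (headcount, workdays, daily labor cost), chosen only to illustrate how the figures compound; these are not McKinsey’s actual inputs.

```python
# Back-of-the-envelope sketch of how "lost decision time" adds up.
# Only deciding_share (37%) and wasted_share ("more than half") come
# from the study; every other input is an illustrative assumption.
managers = 13_000          # assumed manager headcount at a large company
workdays_per_year = 230    # assumed working days per manager per year
deciding_share = 0.37      # share of time spent making decisions (study)
wasted_share = 0.50        # share of that time spent ineffectively (study)
daily_labor_cost = 450     # assumed fully loaded cost per manager-day, USD

wasted_days = managers * workdays_per_year * deciding_share * wasted_share
wasted_cost = wasted_days * daily_labor_cost
print(f"{wasted_days:,.0f} manager-days, about ${wasted_cost:,.0f} per year")
```

Even with conservative assumptions, the waste compounds into hundreds of thousands of manager-days a year, which is why the study’s numbers are less surprising than they first appear.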

That’s just one of many reasons why more business leaders are keen to learn more about deep learning these days. Get this… 

“According to [McKinsey’s] results, the level of inefficiency [in decision making] does decrease with seniority. While 68% of middle managers say most of their decision-making time is inefficient, 57% of C-level executives report the same. Looking more closely at the data, there is little evidence of economies of scale. The respondents who dedicate most of their time to decision making rate themselves no better than their peers at using that time well (exhibit).”

A graphic from a McKinsey study of decision-makers

In other words, no one is perfect – not even executives who are trusted to have all the answers and steer the ship in the right direction. Perhaps that’s why we’re starting to see that, in some instances, AI (and deep learning specifically) should be used to compensate for our physical and computing limitations. 

As humans, we can only see so much, do so much, and think so fast before we start glitching. And no matter how hard we try to be perfect at work, we are always going to have biases or disadvantages. For example, we cannot always tell when a bottle is deformed or a pill’s markings are more orange than red. We may also struggle to make the right decision because we lack the full context of a situation. 

However, we can make these limitations a nonissue by training AI to see things, connect the dots between seemingly disparate data points, and make decisions in ways that we can’t. This reduces our risk of getting things wrong, which is huge when you think about what it takes to succeed in business.

That said, AI is only as smart as we allow it to be. It can’t help us if we don’t teach it how to help us. That’s why it’s so important that you get the AI model – and the training model – right when you decide it’s time to ask for AI assistance with certain business tasks. This is especially true when using AI to inform, or make, decisions (such as a pass/fail decision during quality inspections).

So, let’s talk about what you need to know before you spend any money on deep learning tools (or any AI-powered automation assistants).

 

Deep Learning 101

There are a lot of terms being tossed around in relation to AI, including deep learning, machine learning, and neural networks, among others. So, you’re probably wondering, “What’s the difference between machine learning and deep learning?” 

Technically, deep learning is a subset of machine learning, as Dr. Yan Zhang explained in this post, and it is more likely to be used when “the dimensionality of the data and the complexity of the model are too big to handle.” Take face detection, for instance. It is true you could reduce the dimensionality using traditional approaches such as principal component analysis (PCA), and this was done in the famous Eigenfaces approach. But PCA only offers a linear model, which cannot compete with the non-linearities of today’s deep networks when applied to megapixel images. 
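To make that linearity point concrete, here is a minimal, NumPy-only sketch of the Eigenfaces idea: PCA compresses each image into a handful of coefficients, but the projection is strictly linear. The random “images” are synthetic stand-ins for real face data; this is an illustration, not production code.

```python
import numpy as np

# Minimal Eigenfaces-style sketch: PCA is a *linear* projection, which
# is why it struggles with the non-linear variation deep networks learn.
rng = np.random.default_rng(0)
images = rng.normal(size=(100, 64))    # 100 flattened 8x8 "images"

mean_face = images.mean(axis=0)
centered = images - mean_face

# Principal components via SVD of the centered data matrix.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
k = 10                                 # keep the top-10 "eigenfaces"
eigenfaces = vt[:k]

# Dimensionality reduction: 64 pixels -> 10 linear coefficients each.
codes = centered @ eigenfaces.T
print(codes.shape)                     # (100, 10)
```

Every image becomes a weighted sum of the same 10 basis vectors; no amount of extra components changes the fact that the mapping is a single matrix multiply, which is exactly the limitation deep networks remove.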

For example, at Zebra, we use deep learning when we’re helping customers…

  • Focus a machine vision system on items – or certain qualities of items – for inspection.

  • Improve worker safety. In this case, we may use deep learning in conjunction with cameras to detect when workers enter unsafe areas, get too close to machinery or don’t have the proper personal protective equipment (PPE) on.

  • Predictively schedule maintenance action to prevent downtime of equipment and systems.

  • Determine which parts they need to make, when, and where, so they can more effectively schedule production and prevent either delivery delays or inventory waste.

In fact, we’re using a deep learning-based solution to help a customer in the fast-moving consumer goods (FMCG) space automate and improve the efficiency of their returns process. Before returned items can be put back into circulation, their lot number must be logged and expiry date verified by a worker. Small font sizes, poor mark quality, and the use of low-contrast text made this a time-consuming and unpopular job, creating bottlenecks and waste as products expired before they could be returned to the shelves. The deep learning-enabled solution we worked with them to implement now allows workers to verify these details automatically by showing the item to a camera, resulting in improved efficiency and reduced waste.

Now, because deep learning involves training neural networks, it can learn through either supervised or unsupervised processes – much like we do as humans. We learn both in a structured educational setting (i.e., school, professional development courses, etc.) and as we go about our day. Every new experience we have and every interaction with a person can amount to “training”: we take away more information or a different perspective. 

The difference between how the human brain learns and how deep learning occurs within a neural network is that the AI/neural network’s “learning” happens in a totally controlled environment. We (humans) are transferring what we’ve learned from our brains to the AI/neural network to help it understand right from wrong. It’s doing what we’re telling it to do, in a way. For example, we use deep learning to train the AI system that inspects semiconductors coming off a fab, feeding it “good” and “bad” images. We teach it what to look for so that it can eventually work autonomously. 
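For intuition on what that “good”/“bad” supervised training loop looks like, here is a deliberately tiny sketch. A logistic-regression classifier stands in for a real deep network, and random feature vectors stand in for inspection images; all names and numbers are illustrative assumptions, not our actual pipeline.

```python
import numpy as np

# Toy supervised training: labeled "good" (0) and "bad" (1) samples
# stand in for inspection images; logistic regression stands in for
# a deep network. The loop below is plain gradient descent.
rng = np.random.default_rng(42)
good = rng.normal(0.0, 1.0, size=(200, 16))  # features of passing parts
bad = rng.normal(2.0, 1.0, size=(200, 16))   # features of defective parts
X = np.vstack([good, bad])
y = np.array([0] * 200 + [1] * 200)

w = np.zeros(16)
b = 0.0
lr = 0.1
for _ in range(500):
    z = np.clip(X @ w + b, -30, 30)          # clip to avoid exp overflow
    p = 1.0 / (1.0 + np.exp(-z))             # predicted P(defect)
    w -= lr * (X.T @ (p - y) / len(y))       # gradient step on weights
    b -= lr * np.mean(p - y)                 # gradient step on bias

preds = (1.0 / (1.0 + np.exp(-np.clip(X @ w + b, -30, 30))) > 0.5)
accuracy = (preds.astype(int) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The point is the workflow, not the model: show the system labeled examples of right and wrong, let it adjust its parameters, and it eventually makes the pass/fail call on its own.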

Why use deep learning, or AI at all, for inspections when you can just teach a person what’s good or bad? Well, it all boils down to our physical and computing limitations and the need to make them a moot point. 

If you really want to feel confident that what you’re shipping to a customer is of the highest quality, or that the quality of a product hasn’t degraded along its supply chain journey, you’re going to need to scrutinize it like no human can. You’re going to need some form of AI to quickly and perfectly inspect it – most likely using a combination of cameras and AI-based software, such as a machine vision system or even a fixed industrial scanner with deep learning optical character recognition (OCR) capabilities.

  

Which Deep Learning Model is Best?

There isn’t a standard rule that says, “This type of deep learning model is going to be universally applicable in this type of workflow or business setting.” However, these are the models we typically lean into when we have certain objectives:

  • Deep Learning Optical Character Recognition (OCR): This is an easy way to automatically read the text on an image/item, such as a lot number, part number or expiration date. What’s cool about this capability is that it can be deployed out of the box within 5 minutes. It doesn’t have to be trained, and you don’t need a skilled data scientist to get it online. You can read more about how it works here and actually see it in action here.

If you’re in pharma, this video is also a must watch. 

And if you’re in the auto industry, watch this demo of how deep learning OCR can work for tire pressure sensor checks.

  • Anomaly Detection/Defect Detection: We might use images from your fixed industrial scanner or machine vision camera to teach the neural network to spot the difference between this image and that image. This is helpful for quality control. You can teach the neural network what is correct and then have it look for anything that appears even slightly different.

  • Object Location/Segmentation at the Pixel Level: The goal here is to identify and locate certain things, such as a screw placement, a marking, etc. 

  • Classification at the Global Level: In this scenario, the neural network is trained to look at the whole image and decipher what something is. (For example, is it a mop or a dog?)

There are also solution-based, pre-trained deep learning models in which we pretty much draw a box around something, teach the neural network to look for what’s inside that box, and call it a day. There is also traditional deep learning in which we typically annotate what to look for in a set of images and then build a model around the annotation. These are typically very customized to a particular business objective, so it would be impossible to talk about every example here. 
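To give a feel for the anomaly/defect detection approach described above (learn only what “correct” looks like, then flag anything that deviates), here is a toy sketch. A distance-from-the-mean score stands in for the novelty score a trained deep model would produce, and the data is synthetic; treat every name and threshold here as an assumption.

```python
import numpy as np

# Anomaly-detection sketch: model only what "good" looks like, then
# flag anything that deviates. A z-score distance stands in for the
# novelty score a real deep model would output.
rng = np.random.default_rng(7)
good_train = rng.normal(0.0, 1.0, size=(500, 32))  # known-good items

mean = good_train.mean(axis=0)
std = good_train.std(axis=0)

def anomaly_score(x):
    # Average absolute z-score across features: higher = more unusual.
    return np.mean(np.abs((x - mean) / std))

normal_item = rng.normal(0.0, 1.0, size=32)
defect_item = rng.normal(3.0, 1.0, size=32)        # simulated defect

threshold = 1.5                                    # assumed cutoff
print("defect?", anomaly_score(normal_item) > threshold)  # False
print("defect?", anomaly_score(defect_item) > threshold)  # True
```

Notice that the model never sees a single defect during training; that is exactly why this style works well on production lines, where “bad” examples are rare and varied but “good” examples are plentiful.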

However, the example I mentioned before about how one FMCG company is using deep learning for expedited returns processing and item reshelving is a great example of what’s possible. To give you another, we have a large automotive customer whose front-line team looks for anomalies on battery packs using deep learning. These anomalies include dirt, debris, scratches, contaminants, creases, folds, etc. And if you’re a sushi fan (or company), you’ll love that deep learning can help ensure that the packaged sushi combos being put on shelves at a store or restaurant are exactly as labeled and staged properly. (Check out the sushi pictures on page seven of this brochure, which, by the way, has a ton of examples of how deep learning could be used.)

Now, in the title of this post I said that deep learning can help stop bleeds within your business. What did I mean by that? 

In short, you need to eliminate waste. In many cases, waste occurs when products initially pass (human) inspection but a defect is later identified and a mass recall becomes necessary. Alternatively, waste stems from the delayed identification of a defect, leading to mass production of a faulty product that never makes it to market. There’s also significant time wasted when inspection is handled exclusively by humans, because we simply can’t work as fast as a machine can, no matter how hard we try. 

So, instead of fighting this fact, embrace it. Lean into deep learning. See it as a better version of yourself – a tool that can make you look like a rockstar when you’re able to pick up the pace of production or fulfillment, reduce error rates and set a new quality record (for the fewest discarded or recalled products).

If you are someone who needs to see to believe that something is as good as it sounds, then my colleagues and I would be happy to set up a demo for you. Trust me, deep learning is much cooler to witness in real life than to read about. But I’m happy you took the time to read this post all the way to the end. 

In the coming weeks, my colleagues and I are going to share with you everything we’re learning about deep learning, including the many ways it can help you, the mistakes that others have made (and you should not repeat), and how new deep learning models are teaching “old” OCR technology new tricks that will amaze your team. So, stay tuned into the Your Edge Blog for those insights, or reach out to us to learn more about deep learning’s potential benefit to you (if you just can’t wait).

You should also check out this newly released study about machine vision usage among automotive manufacturers if you haven’t seen it yet.

###

 

Editor's Note:

Want to learn more about deep learning or machine vision specifically? Tune into the Industrial Automation Insider podcast.

Jim Witherspoon

Jim Witherspoon is currently a Product Manager with Zebra’s Machine Vision and Fixed Industrial Scanning group where he is responsible for the Zebra Aurora™ software and helping to drive the best solutions to market. 

Jim has more than 18 years of experience with machine vision and fixed industrial scanning technology and has worked in application engineering, sales, and management.  Jim previously worked with many of the world’s top companies to install thousands of machine vision and fixed industrial scanning systems and solutions.
