Ask the Expert: Is There a Difference Between Machine Learning and Deep Learning?
Artificial Intelligence (AI), Machine Learning and Deep Learning are all related but not exactly the same.
In a recent Q&A with Yan Zhang, the Computer Vision tech lead for Zebra’s Chief Technology Office (CTO) Solution Incubation team, we learned that much of the “intelligence” we think is being delivered via Artificial Intelligence (AI) technology is actually the output of Machine Learning algorithms. (Shows how smart we humans are sometimes, right?)
We also learned that many people are incorrectly assuming AI and Machine Learning are interchangeable terms. They are not. (Another buzzword myth busted.)
Machine Learning is, in fact, a subset of AI. As Yan phrased it:
“We could use AI as a broad term when referring to an intelligent machine and then use Machine Learning to describe techniques and computer algorithms that extract patterns and build prediction models from existing data to complete prediction/inference actions on other data streams. But, in general, Machine Learning should be used when we refer to ANI—Artificial Narrow Intelligence.”
However, after this series of revelations around AI and Machine Learning, the Your Edge Blog Team started wondering what other common technology misperceptions exist in the mass market. We were also curious about what other similar-sounding tech terms could be easily confused or incorrectly used. Since it was our deep conversation around AI and Machine Learning that first spurred this curiosity, we naturally migrated toward the topic of "Machine Learning vs Deep Learning."
Fortunately, Yan was gracious enough to provide us with some clarity (once again):
Your Edge Blog Team: First and foremost, are Machine Learning and Deep Learning technically the same thing?
Yan: Deep Learning is a subset of Machine Learning and has gained huge success and popularity in the past several years due to its unprecedented performance. Deep Learning actually originated from the artificial neural network (NN), which – like Support Vector Machines (SVM), decision trees and ensembles (boosting) – is one of the popular supervised Machine Learning techniques.
Your Edge Blog Team: Is that why you think the terms Machine Learning and Deep Learning are so often used interchangeably?
Yan: Yes, absolutely. It's really important for everyone to understand, though, that even though there's only one key difference between Machine Learning and Deep Learning – the way they "learn" – the time and effort that goes into that training is quite significant. And quite different.
That’s why we, as technologists, want to do what we can to ensure others are using the right terminology to reference these technologies. Understanding what Machine Learning, Deep Learning and AI technologies are and are not – and acknowledging what they can and cannot do today – is imperative to selecting the right model for the right application.
Machine Learning and Deep Learning are probably used interchangeably most often in the application and solution domain, at a higher level, since both are essentially "Machine Learning." Differentiating them makes sense when specific algorithms and frameworks are the focus, as Machine Learning and Deep Learning apply to different scenarios in terms of the amount of data and computing resources available.
Specifically, Deep Learning requires more data and GPUs (graphics processing units) for training to accommodate the deep neural network architecture. Traditional Machine Learning techniques have their advantages when these preconditions can't be fully met. For example, boosted ensemble trees are still used predominantly in data science tasks and have won numerous Kaggle competitions, whereas Deep Learning approaches have excelled in many Computer Vision and natural language processing (NLP)-based solutions such as autonomous driving and smart home devices – environments where abundant data continuously flows in.
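To make the "boosted ensemble trees" idea concrete, here is a minimal sketch of the boosting principle – an AdaBoost-style ensemble of one-split "decision stumps," where each new stump concentrates on the examples the previous ones got wrong. The data is a toy 1-D set and the code is purely illustrative, not anything Zebra ships:

```python
import math

def train_stump(X, y, w):
    """Find the 1-D threshold and polarity with minimum weighted error."""
    best_err, best_thr, best_pol = float("inf"), None, None
    for thr in sorted(set(X)):
        for pol in (1, -1):
            err = sum(wi for wi, x, yi in zip(w, X, y)
                      if (pol if x >= thr else -pol) != yi)
            if err < best_err:
                best_err, best_thr, best_pol = err, thr, pol
    return best_err, best_thr, best_pol

def adaboost(X, y, rounds=3):
    """AdaBoost over decision stumps; returns [(alpha, thr, pol), ...]."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        err, thr, pol = train_stump(X, y, w)
        err = max(err, 1e-10)          # guard against division by zero
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, thr, pol))
        # Re-weight so the next stump focuses on the current mistakes.
        w = [wi * math.exp(-alpha * yi * (pol if x >= thr else -pol))
             for wi, x, yi in zip(w, X, y)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * (pol if x >= thr else -pol) for a, thr, pol in ensemble)
    return 1 if score >= 0 else -1

# Toy 1-D data: the positive labels sit in the middle, so no single
# threshold is perfect, but three boosted stumps classify it exactly.
X = [1, 2, 3, 4, 5, 6, 7, 8]
y = [-1, -1, 1, 1, 1, 1, -1, -1]
model = adaboost(X, y, rounds=3)
accuracy = sum(predict(model, x) == yi for x, yi in zip(X, y)) / len(X)
```

The point of the sketch: each weak learner alone is mediocre, but re-weighting the training examples makes the ensemble strong – the core trick behind the gradient-boosted trees that dominate tabular data science tasks.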
Your Edge Blog Team: You mentioned that Deep Learning originated from artificial neural networks, which are a supervised Machine Learning technique. Does that mean that all Deep Learning is also supervised?
Yan: Not necessarily. Depending on whether the training data need explicit labels, both Deep Learning and traditional Machine Learning algorithms can be categorized into supervised, unsupervised and reinforcement learning. It's also worth noting that "Deep Learning" is often used interchangeably with "deep neural networks" – networks with a dozen to a hundred layers, often convolutional, whose architecture differs from that of a traditional NN with a few fully connected layers.
Your Edge Blog Team: Can you elaborate a bit more on what goes into "training" or "teaching" a Machine Learning or Deep Learning-based system? How much time does it take? And how much human guidance?
Yan: Among the three major categories of Machine Learning techniques, supervised learning is the most commonly used and requires human training/teaching – often referred to as "data labeling" or "annotation." While traditional Machine Learning algorithms usually reach accuracy plateaus after several hours of training on moderate data volumes, Deep Learning approaches enjoy continuous accuracy increases through huge data sets and hours or days of training on GPUs. Human guidance is critical in early-stage data cleanup, annotation and training model selection, and in iterative fine-tuning as new data comes in. There's a common term for this human involvement: "Human-in-the-Loop Machine Learning." Machine Learning training is far from a plug-and-play process and does require experienced professionals to achieve success.
Your Edge Blog Team: There are many enterprises likely weighing the benefits of Deep Learning vs Machine Learning. But is one really “better” than the other? Is this really an “either-or” decision? Or is it more likely that they will co-exist within an organization’s architecture?
Yan: Both Machine Learning and Deep Learning have their own advantages and apply to different scenarios in terms of solution requirements, data and computing resource availability. Multiple algorithms in Machine Learning and Deep Learning should be kept in the toolbox, and the most appropriate one should be chosen for specific problems based on the return/benefits and cost analysis for each.
Your Edge Blog Team: Can you give some examples of scenarios in which Deep Learning techniques would be more beneficial?
Yan: Deep Learning usually requires ample data sets, powerful GPUs and longer training time to achieve high performance. In many Computer Vision and NLP solutions – such as image recognition – Deep Learning techniques have reached or exceeded human level performance. For example, the AI team working within Zebra’s Chief Technology Office (CTO) recently developed a human blurring feature for the Zebra SmartPack™ product to address a customer’s requirement. Deep Learning techniques showed clear advantages both in human detection rate and computation speed; the software we developed yielded state-of-the-art performance in these aspects. The performance would have been lower if only traditional Machine Learning methods were used. It’s indeed critical to identify the scenarios where Deep Learning excels and ensure the prerequisites are met.
Your Edge Blog Team: Are there organizations applying Deep Learning in similar capacities today? Or would you say these are ambitious use cases?
Yan: Tech giants like Google, Facebook, Microsoft, Amazon and Tesla have applied Deep Learning widely to their products such as search engines, customized ad/product recommendations, smart home assistants, Internet of Things (IoT) solutions and self-driving cars. Deep Learning is also starting to emerge in other industries such as retail, manufacturing and healthcare. While there is still a long way to go, Deep Learning does provide a promising path to near- or above-human-level perception.
Your Edge Blog Team: Which industries do you think will most benefit from Deep Learning in, say, the next year? And then in the next five years?
Yan: In the next few years, Machine Learning will benefit industries including retail, agriculture, transportation, healthcare, and finance in a wide range of solutions. Retail robots will likely start to emerge on a greater scale with functions from simple floor cleaning to intelligent shelf inventory scanning. In transportation, besides self-driving, plenty of efforts have been put forth to evolve cars into data hubs with real-time edge intelligence.
Using Deep Learning in healthcare environments to aid with medical diagnoses, medication customization for individual patients and drug discovery is still at a somewhat early stage, but breakthroughs are being made at a fast pace. In agriculture, AI-powered robotic solutions are expanding the application horizon, specifically in crop monitoring and diagnosis, where the aim is to increase yields significantly.
Your Edge Blog Team: We seem to hear quite frequently about the different ways businesses are using Machine Learning algorithms to more intelligently extract and analyze the volumes of data at their disposal – to better predict market trends and opportunities, understand customer behavior, and anticipate and pre-empt potentially disruptive events. Would you say that Machine Learning is more mature than Deep Learning? Or does Machine Learning just get a bit more love because it is the term most often associated with AI (even though we now know Machine Learning AND Deep Learning are both AI subsets)?
Yan: Machine Learning does apply more broadly, as it's less restrictive in terms of data volume and computation resources. On the other hand, Deep Learning delivers its breakthrough performance when the prerequisites can be met – when large data sets and GPUs are available.
Your Edge Blog Team: We spoke briefly last time about potential AI and more specifically ANI (i.e. Machine Learning) applications in retail, manufacturing, transportation and logistics. Do you feel as though Machine Learning is more prevalent in industry today? Is that the model organizations tend to adopt first? And, if so, why do you think that is?
Yan: Machine Learning has seen successful deployments in many applications across industries. Organizations in verticals such as manufacturing, retail and logistics are actively considering and evaluating Machine Learning's potential. During a recent customer site visit, the customer's rep asked specifically whether Zebra could apply Machine Learning to enhance its solutions. Machine Learning provides a comprehensive and systematic framework that analyzes data in various forms – images, voice and text – under a wide spectrum of scenarios. As a result, Machine Learning solutions are more robust to imperfect data and can be transferred and/or adapted among different domains with reasonable effort.
Your Edge Blog Team: There’s a lot of buzz about how Machine Learning and IoT are going to change the world and, specifically, how they will work together to transform the way we do business. Can you elaborate on that? How might IoT and Machine Learning work together?
Yan: IoT is a system of interconnected devices (sensors, digital equipment, etc.) with unique IDs. Initially, IoT devices provided basic data collection, tracking and monitoring of the surrounding environment. More recently, we've seen a surge of AI-enabled IoT (AIoT), in which AI or Machine Learning functions are deployed to edge devices for more intelligence and real-time analytics. As a result, AI running on premises, i.e. edge AI, has been forecasted as one of the top AI trends. Compared to AI services running remotely, edge AI provides greater privacy and low-latency inference with little extra network bandwidth required. In summary, Machine Learning brings more intelligence to IoT, provides more insight into its sensing environment and enables users to make better and quicker decisions.
Your Edge Blog Team: What about IoT and Deep Learning? Could they work in a complementary manner as well?
Yan: Definitely, IoT and Deep Learning augment each other. IoT devices provide continuous large data flow needed by Deep Learning to reach its peak performance. On the other hand, Deep Learning provides more intelligence and value to IoT devices. Tech giants such as Google, Amazon, and Microsoft have ramped up their AIoT investment and product offerings for multiple industries, e.g. Google IoT cloud, AWS IoT Greengrass and Microsoft Azure IoT.
An important clarification to make here related to edge AI, or Machine Learning on IoT devices: although Deep Learning requires hours or days to train good models, inference/prediction is usually much faster – at the level of milliseconds or seconds on GPUs. Moreover, Deep Learning model compression and optimization have seen significant advancement, which enables accelerated Deep Learning inference on computation-constrained edge devices. This helps further advance the adoption of Deep Learning in IoT.
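The model compression Yan mentions can be illustrated with its simplest variant: linear 8-bit quantization, where each float32 weight is mapped to an int8, cutting memory roughly 4x at the cost of a small rounding error. The weight values and single-scale scheme below are toy choices for illustration, not any specific framework's method:

```python
# Illustrative sketch of 8-bit linear quantization of a layer's weights.

def quantize(weights):
    """Map floats to int8 range [-127, 127] with a single linear scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

weights = [0.52, -1.30, 0.07, 0.91, -0.44]  # toy "layer" of float32 weights
q, scale = quantize(weights)
restored = dequantize(q, scale)

# Each weight shrinks from 4 bytes (float32) to 1 byte (int8): ~4x smaller,
# at the cost of a small per-weight rounding error bounded by scale / 2.
max_err = max(abs(w - r) for w, r in zip(weights, restored))
```

Production toolchains add calibration, per-channel scales and hardware-aware kernels on top of this idea, but the memory/accuracy trade-off is the same one sketched here – which is what makes millisecond-scale inference feasible on constrained edge devices.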
Your Edge Blog Team: One final question…if Machine Learning and Deep Learning are subsets of AI, does that mean they are technically prerequisites for achieving a true AI output? Would you consider them stepping stones to the "holy grail" active AI experience?
Yan: Machine Learning is an essential component of AI. Several other elements are also indispensable to a successful AI solution:
1. A viable business case where ML or AI brings value and solves a customer’s problem.
2. A large set of high-quality data with representative variations that facilitate Machine Learning model training.
3. Adequate computation resources, including GPUs.
4. Experienced practitioners with a pragmatic attitude on problem solving and solution refinement through fast iterations.
For all of these components, human effort plays an important role throughout the entire process, from selecting applicable use cases to solution development.
If you are interested in continuing the conversation about Machine Learning, Deep Learning, AI or IoT with Yan and other members of Zebra’s CTO team, leave us a note in the Comments section below or send us a message.