Five myths of MLOps
2021 will be a pivotal year for businesses, especially with regard to how they adopt and integrate machine learning technology into their existing infrastructure. If the economic impacts of the COVID-19 pandemic have taught us anything, it’s that success in business is inextricably tied to the ability to adapt to rapidly changing market conditions.
Just like with previous economic disruptions, the businesses that survive—and thrive—will be those that can successfully adopt technologies that enable business transformation within a new paradigm. Machine learning is one of those technologies, and more organizations are realizing the imperative to invest in ML if they want to remain competitive. Our 2021 enterprise trends in machine learning report found that 83% of organizations have increased AI/ML budgets year-on-year and that organizations are doubling down on use cases that can yield both top- and bottom-line benefits, such as customer experience improvements and automation. Indeed, machine learning is no longer a nice-to-have—it’s a fundamental commercial necessity for any business to prosper in today’s market.
The question then becomes, how can businesses that are new to machine learning get started—and how can those that are already building ML models ensure their success?
Businesses new to ML can often get caught up in the allure of trying to design a glitzy new model in order to eke out some elusive “advantage” over competitors. However, the fact of the matter is that for nearly every commonly encountered business problem in today’s world, there exists an off-the-shelf machine learning model or algorithm that can be applied. In fact, our 2021 report revealed that while enterprises are dramatically increasing their investments in ML, the vast majority of use cases remain focused on solving ordinary business problems. The key is simply to get started with ML, integrate it into your existing operational processes, and enable rapid iteration.
The main factor that differentiates the successful machine learning businesses from the failures is the speed and ease with which they can iterate on new models and gather new data within their pipeline. This is where MLOps comes in. Simply put, MLOps is the discipline of AI model delivery. It’s what enables organizations to scale production capacity to deliver faster results, thereby generating significant business value.
With all that in mind, this post will address the five most common misunderstandings we encounter about MLOps, and provide advice on how to best integrate a seamless model deployment process into your organization’s existing infrastructure. Your organization needs ML in 2021, and MLOps is the key to your success.
To learn more about MLOps, you can also download Gartner’s 3-stage MLOps framework.
Myth #1: MLOps is easy
Maybe if you’re Chuck Norris. For the rest of the mere mortals among us, MLOps is fantastically hard to build on your own. Many organizations want to take a do-it-yourself approach to MLOps, but this can incur a lot of unforeseen costs, risks, and operational overhead. One common question we get is, “Doesn’t deployment just mean using a container?” In reality, it entails a whole lot more than that—and MLOps is about a lot more than just deployment.
Why is MLOps so complicated? MLOps feels simple because it’s relatively easy to deploy a single model, but deployment is only one piece of a successful ML project. In reality, there are several components to MLOps that organizations often fail to consider up front. Taken together, they can quickly become too complicated for most teams to manage on their own. And even if a team could manage them, it wouldn’t be worth the opportunity cost to do so.
To give you a taste of just some of the factors at play, a full MLOps pipeline will comprise multiple steps including hardware orchestration, integration of language and framework SDKs, container management, model versioning, incorporation of advanced hardware such as multithreaded CPUs and GPUs, inference API management, load balancing, and the security and governance of users, models, data, and operations. And within each of these areas are layers of additional, complex considerations.
For example, once you select a cloud computing platform to host and serve your model—a complex task on its own—you then need to containerize all your models and manage their scaling using Kubernetes or a similar service. This usually takes an entire team of its own, or at least a few dedicated developers. On top of this, you need to layer the many interfacing processes which are crucial for handling the necessary minutiae associated with model deployment. This includes many important tasks such as versioning models, upgrading libraries, and monitoring resources. Even more importantly, the security of the system needs to be actively managed.
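The scaling piece alone hints at the hidden complexity. As a simplified sketch (not any particular platform’s implementation, and with illustrative metric names and thresholds), here is roughly the replica-count calculation that a Kubernetes-style horizontal autoscaler performs for a model-serving deployment:

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float,
                     min_replicas: int = 1,
                     max_replicas: int = 20,
                     tolerance: float = 0.1) -> int:
    """Simplified HPA-style scaling rule: adjust the replica count
    proportionally to how far the observed per-replica metric
    (e.g. requests/sec) is from its target."""
    ratio = current_metric / target_metric
    # Within tolerance of the target: hold steady to avoid flapping.
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas
    desired = math.ceil(current_replicas * ratio)
    return max(min_replicas, min(max_replicas, desired))

# 4 replicas each serving 150 req/s against a 100 req/s target
# should scale out to 6 replicas.
print(desired_replicas(4, 150.0, 100.0))  # → 6
```

And this is just the arithmetic; the real work is wiring such decisions to live metrics, rollouts, and budgets across every model you serve.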
The bottom line? What may seem like a simple task actually entails many complicated, interacting factors that must operate in lockstep to succeed, all while being constantly maintained. Organizations that take a DIY approach ultimately end up with:
- Long lead times
- Expensive talent
- High total cost of ownership
- Polyglot programming language and framework integration that’s time-consuming to build, add to, and update over time
- Authentication and security that’s difficult, expensive, and time-consuming to build and maintain
- Limited model governance with only basic model and API versioning and no integration with DevOps tooling
To many self-reliant organizations, the idea of outsourcing MLOps to a third-party platform can seem anathema. After all, it’s that resilient commitment to an entrepreneurial do-it-yourself spirit that often launches a business into its first successes. However, it’s those very organizations that outsource their MLOps that see the best results; our 2021 report revealed that companies using third-party MLOps solutions spend less on infrastructure, take less time to deploy models, and spend a smaller percentage of their data scientists’ time on model deployment when compared to those that build and maintain their own MLOps systems from scratch.
Myth #2: MLOps is only useful when you have lots of models
You would think that if you only have one or a few models, you’d be better off just bootstrapping their deployment as you go, right? Wrong. All the issues you have when deploying many models are present when you only have a few. You still need to create APIs, train on advanced hardware like GPUs and TPUs, version iterations of your models, scale inference in response to user demand, generate logs and other metadata, establish security practices, and create governance around the ways in which your model is used. It’s always best to develop with the future in mind, and having a robust MLOps pipeline in place begins paying dividends even with just the first model. You don’t want to do things in an ad-hoc manner and then be stymied and disorganized once you begin adding additional models into your pipeline.
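Versioning is a good example of a need that exists from model number one. As an illustrative sketch (the class and method names here are assumptions, not any specific tool’s API), even a minimal in-memory model registry establishes the discipline that later models slot into:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ModelRegistry:
    """Toy model registry: tracks immutable, versioned model
    artifacts with metadata, instead of ad-hoc files on disk."""
    _versions: dict = field(default_factory=dict)
    _latest: dict = field(default_factory=dict)

    def register(self, name: str, artifact: bytes, metadata: dict) -> int:
        # Each registration gets the next monotonically increasing version.
        version = self._latest.get(name, 0) + 1
        self._versions[(name, version)] = {"artifact": artifact,
                                           "metadata": metadata}
        self._latest[name] = version
        return version

    def get(self, name: str, version: Optional[int] = None) -> dict:
        # No version requested: serve the latest.
        if version is None:
            version = self._latest[name]
        return self._versions[(name, version)]

registry = ModelRegistry()
registry.register("churn-classifier", b"model-bytes-v1", {"auc": 0.81})
v2 = registry.register("churn-classifier", b"model-bytes-v2", {"auc": 0.84})
print(v2)                                            # → 2
print(registry.get("churn-classifier")["metadata"])  # → {'auc': 0.84}
```

A production registry adds storage, lineage, and access control on top, but the point stands: the structure is worth having before the second model arrives, not after.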
Myth #3: ML can use the same processes as regular software
This looks true on the surface, but in reality regular software and machine learning models serve very different purposes and have significantly divergent needs. What most differentiates machine learning from standard software is that data sits at the core of the application. This means that the code is built around servicing the data rather than the application behavior. That means more and larger databases, data lakes, and distributed file systems. The code for machine learning is usually modular, built up in containerized microservices and orchestrated around the interlocking data components. These require more versioning, more monitoring, and more frequent deployments. They also interact unpredictably at scale, leading to a need for frequent troubleshooting and inspection.
Machine learning also differs from the traditional software lifecycle due to developers themselves and the nature of the problems they tackle. In classic software development, teams attempt to stick to a single language and paradigm as much as possible. A firmware product might be entirely architected in C++ or a website back end in Go, with perhaps only minor variations in the language used for specialized tasks. However, machine learning is an ever-evolving ecosystem in which data scientists, who often don’t have the same background as traditional software engineers, look to adopt the latest framework, language, or library that will help them solve some data-intensive task. This is why machine learning code ranges across languages and frameworks like Java, Scala, Spark, Python, Julia, R, and many others, oftentimes all within a single codebase. Being able to integrate all these tools in a fail-safe way takes a much different mindset and process than building a standard full-stack app.
Finally, machine learning is an ever-evolving feedback loop in which models are created, trained, and deployed, then retrained and redeployed in response to feedback from the user. This is especially true in the case of reinforcement learning, but is present in some way in all ML subtypes. In traditional software development, code is launched in fixed, incremental releases. Some changes are made by the development team, the code is pushed live, and then feedback may or may not be gathered for future features or upgrades. However, in machine learning, a model may need to be retrained on a daily basis. As customers interact with the code and provide new data, model drift and reductions in accuracy can occur, and the model will need to be retrained. Thus it’s essential that effective and streamlined processes exist for doing so such that the deployment process doesn’t become a bottleneck in the system.
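That retraining loop can be made concrete with a hedged sketch (window size and accuracy floor here are purely illustrative assumptions): track the model’s accuracy over a sliding window of live predictions and flag it for retraining once it drifts below an acceptable floor.

```python
from collections import deque

class DriftMonitor:
    """Track recent prediction outcomes and signal when the model's
    rolling accuracy drifts below a retraining floor."""
    def __init__(self, window: int = 100, floor: float = 0.9):
        self.outcomes = deque(maxlen=window)
        self.floor = floor

    def record(self, correct: bool) -> None:
        self.outcomes.append(1 if correct else 0)

    def rolling_accuracy(self) -> float:
        if not self.outcomes:
            return 1.0
        return sum(self.outcomes) / len(self.outcomes)

    def needs_retraining(self) -> bool:
        # Only judge once the window is full, to avoid noisy
        # decisions on a handful of samples.
        if len(self.outcomes) < self.outcomes.maxlen:
            return False
        return self.rolling_accuracy() < self.floor

monitor = DriftMonitor(window=10, floor=0.9)
for correct in [True] * 8 + [False] * 2:   # 80% accuracy over a full window
    monitor.record(correct)
print(monitor.needs_retraining())  # → True
```

In a real pipeline the `needs_retraining` signal would kick off an automated retrain-validate-redeploy cycle; without MLOps, it kicks off a ticket and a week of manual work.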
Myth #4: Governance isn’t important for MLOps
AI/ML model governance refers to the policies that organizations set around their models and overall machine learning platforms. These may cover security, access rights, versioning, data gathering, and documentation. Governance is important because ML is often diffuse and slippery. It’s often unclear how a trained ML model might act, and models can drift and lose accuracy over time. Access to ML models and their code must also be rigorously protected, because customer trust is at the heart of every business. In granting you access to their data, customers implicitly confer that trust and expect that you will keep their details safe. At the end of the day, effective governance protects both your bottom line and your brand.
Governance may seem like a concern that’s only relevant for businesses with a lot of models, but it’s actually important from the very beginning. Whether you have one model or thousands, if you’re using machine learning then you need governance.
The good news is, governance doesn’t need to be a manual effort. MLOps gives you the tools you need to govern your ML models without having to create a piecemeal solution that will require constant maintenance.
Yet another reason why you need MLOps as soon as you begin investing in ML—not after.
Myth #5: I can’t afford to invest in MLOps right now
Actually, you can’t afford not to. If the economic impacts of COVID-19 have taught us anything, it’s that companies that can’t adapt to new and difficult circumstances will often get left in the dust. In a time where consumer behavior and concerns are rapidly changing, understanding your customer base using machine learning, data, and statistics is vital.
In times of economic uncertainty, many organizations respond by cutting costs. However, now is not the time to decrease your investment in ML. As we’ve seen, the businesses that survive economic disruptions are the ones that embrace the technological innovations needed to weather the storm.
You need to invest in ML now. And in order to get the most out of your ML investments, you need MLOps.
Get started with MLOps
Machine learning is not just a trend, it’s a fundamental shift in the way in which companies do business. MLOps is crucial to a smoothly functioning ML offering. The fact is that ML differs from traditional software in fundamental ways and requires a paradigm shift in terms of organizational processes. Do-it-yourself MLOps is inefficient and can introduce errors or unwanted behavior into an ML pipeline. In-sourcing your own MLOps team is expensive and laborious. Luckily, companies such as Algorithmia provide offerings to bridge this gap, taking the work out of MLOps and allowing you to focus on the important stuff, like building unique models using your proprietary data.
To learn more about MLOps, download Gartner’s 3-stage MLOps framework today.
Upcoming webinar: 7 steps to effective AI governance
Take actionable steps to improve your AI/ML governance in 2021. Join us for a webinar with H.P. Bunaes, founder of AI Powered Banking, on March 16 to learn about the 7 steps to put an effective AI governance strategy in place. Register now.