Algorithmia Blog - Deploying AI at scale

Customer churn prediction with machine learning


Why is churn prediction important? 

Defined loosely, churn is the process by which customers cease doing business with a company. Preventing a loss in profits is one clear motivation for reducing churn, but subtler forces are also at work. Most notably, the cost of acquiring a new customer usually far outweighs the cost of retaining an existing one, so stamping out churn is compelling from that financial angle as well. 

While churn presents an obvious difficulty to businesses, its remedy is not always immediately clear. In many cases, and without descriptive data, companies are at a loss as to what drives it. Luckily, machine learning provides effective methods for identifying churn’s underlying factors and prescriptive tools for addressing it.

Methods for solving high churn rate

As with any machine learning task, the first, and often the most crucial step, is gathering data. Typical datasets used in customer churn prediction tasks will often curate customer data such as time spent on a company website, links clicked, products purchased, demographic information of users, text analysis of product reviews, tenure of the customer-business relationship, etc. The key here is that the data be high quality, reliable, and plentiful. 

Good results can often still be obtained with sparse data, but obviously more data is usually better. Once the data has been chosen, the problem must be formulated and the data featurization chosen. It’s important that this stage be undertaken with an attention to detail, as churn can mean different things to different enterprises. 

Types of churn

For some, the problem is best characterized as predicting the ratio of churn to retention. For others, predicting the percentage risk that an individual customer will churn is desired. And for many more, identifying churn might constitute taking a global perspective and predicting the future rate at which customers might exit the business relationship. All of these are valid formulations, but one must be chosen and applied consistently across the customer churn prediction pipeline.

Once the data has been chosen, prepped, and cleaned, modeling can begin. While identifying the most suitable prediction model can be more of an art than a science, we’re usually dealing with a classification problem (predicting whether a given individual will churn) for which certain models are standards of practice. 

For classification problems such as this, both decision trees and logistic regression are desirable for their ease of use, training and inference speed, and interpretable outputs. These should be the go-to methods in any practitioner’s toolbox for establishing a baseline accuracy before moving onto more complex modeling choices. 
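As a concrete sketch, a logistic regression baseline might look like the following with scikit-learn. The data and feature meanings here are invented; a real churn table would carry features like those described earlier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical churn dataset: columns might represent tenure, monthly
# spend, and support tickets filed; label 1 = churned. All synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
# Synthetic rule: short tenure combined with many tickets raises churn risk.
y = ((X[:, 0] < 0) & (X[:, 2] > 0)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

baseline = LogisticRegression().fit(X_train, y_train)
acc = accuracy_score(y_test, baseline.predict(X_test))
print(f"baseline accuracy: {acc:.2f}")
```

A baseline like this sets the bar that any fancier model must clearly beat before it earns its added complexity.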

For decision trees, the model can be further improved by experimenting with ensemble techniques such as bagging, random forests, and boosting. Beyond these two choices, Convolutional Neural Networks, Support Vector Machines, Linear Discriminant Analysis, and Quadratic Discriminant Analysis can all serve as viable prediction models to try. 
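A quick way to weigh these tree-based choices is to cross-validate them side by side. The sketch below uses scikit-learn ensembles on synthetic data; in a real project, the prepared churn features would take its place.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (BaggingClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a prepared churn table (features are anonymous).
X, y = make_classification(n_samples=400, n_features=8, random_state=0)

candidates = {
    "single tree": DecisionTreeClassifier(random_state=0),
    "bagging": BaggingClassifier(random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
    "boosting": GradientBoostingClassifier(random_state=0),
}
# Mean 5-fold cross-validated accuracy per candidate model.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
for name, score in scores.items():
    print(f"{name}: {score:.3f}")
```

Ensembles typically edge out the single tree, but the gap (and the training cost) is worth checking on your own data rather than assumed.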

Defining metrics with customer data 

Once a model has been chosen, it needs to be evaluated against a consistent and measurable benchmark. One way to do this is to examine the model’s ROC (Receiver Operating Characteristic) curve when applied to a test set. Such a curve plots the True Positive rate against the False Positive rate. By looking to maximize the AUC (area under the curve), one can tune a model’s performance or assess tradeoffs between different models. 

Another useful metric is the Precision-Recall Curve, which, you guessed it, plots precision vs. recall. It’s useful in problems where one class is more qualitatively interesting than the other, which is the case with churn: we’re more interested in the smaller proportion of customers looking to leave than in those who aren’t (although we do care about them as well). 
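Both curve-based metrics are one call away in scikit-learn. The sketch below scores a simple model on synthetic, imbalanced data; `roc_auc_score` and `average_precision_score` summarize the ROC and Precision-Recall curves as single numbers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic churn-like data: the minority class (1) plays the churners.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 4))
y = (X[:, 0] + 0.5 * rng.normal(size=600) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)
probs = LogisticRegression().fit(X_train, y_train).predict_proba(X_test)[:, 1]

auc = roc_auc_score(y_test, probs)          # area under the ROC curve
ap = average_precision_score(y_test, probs)  # area under the PR curve
print(f"ROC AUC: {auc:.3f}")
print(f"average precision (PR AUC): {ap:.3f}")
```

On imbalanced problems like churn, average precision often tells a more honest story than ROC AUC, since it focuses on how cleanly the rare positive class is ranked.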

In this case, a business would hope to identify potential churners with high precision so as to target potential interventions at them. For example, one such intervention might involve an email blast offering coupons or discounts to those most likely to churn. By carefully selecting which customers to target, businesses can contain the cost of these retention measures and increase their effectiveness.

Sifting through insights from model output 

Once the selected model has been tuned, a post-hoc analysis can be conducted. An examination of which input data features were most informative to the model’s success could suggest areas to target and improve. The total pool of customers can even be divided into segments, perhaps by using a clustering algorithm such as k-means. 
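As an illustration of that segmentation step, a first pass with k-means might look like the following. The two behavioral features and their values are invented; a real analysis would cluster on actual customer attributes.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical post-hoc segmentation: two behavioral features per
# customer, e.g. average order value and visits per month (illustrative).
rng = np.random.default_rng(1)
low_value = rng.normal([20, 2], 1.0, size=(100, 2))
high_value = rng.normal([80, 10], 1.0, size=(100, 2))
customers = np.vstack([low_value, high_value])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
# Each customer now carries a segment label; churn rates can then be
# broken down per segment.
print(np.bincount(kmeans.labels_))
```

Choosing the number of clusters is itself a judgment call; elbow plots or silhouette scores can help, but business interpretability of the resulting segments matters most.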

This allows businesses to home in on the particular markets where they may be struggling and tailor their churn prevention approaches to meet those markets’ individual needs. They can also tap into the high interpretability of their prediction model (if such an interpretable model was selected) and use it to identify the decisions that led those customers to churn.

Combating churn with machine learning

While churn prediction can look like a daunting task, it’s actually not all that different from any machine learning problem. When looked at generally, the overall workflow looks much the same. However, special care must be given to the feature selection, model interpretation, and post-hoc analysis phases so that appropriate measures can be taken to alleviate churn. 

In this way, the key skill in adapting machine learning to churn prediction lies not in any particular model specialized to the task but in the domain knowledge of the practitioner and that person’s ability to make knowledgeable business decisions given the output of what may be a black-box model.

Perfect order fulfillment: a Tevec case study



Algorithmia is fortunate to work with companies across many industries with varied use cases as they develop machine learning programs. We are delighted to showcase the great work one of our customers is doing and how the AI Layer is able to power their machine learning lifecycle.

Tevec is a Brazil-based company that hosts Tevec.AI, a supply chain recommendation platform that uses machine learning to forecast demand and suggest optimized replenishment/fulfillment orders for logistics chains. Put simply, Tevec ensures retailers and goods transport companies deliver their products to the right place at the right time.

In founder Bento Ribeiro’s own words, the “Tevec Platform is a pioneer in the application of machine learning for the recognition of demand behavior patterns, automating the whole process of forecasting and calculation of ideal product restocking lots at points of sale and distribution centers, allowing sales planning control, service level, and regulatory stocks.”

Tevec runs forecasting and inventory-optimization models and customizes user permissions so they can adjust the parameters of their inventory routine, such as lead times, delivery dates, minimum inventory, and service levels. Users can fine-tune the algorithms and adapt for specific uses or priorities. 

The challenge: serving and managing at scale

Initially, Tevec was embedding ML models directly into its platform, causing several issues:

  • Updating: models and applications were on drastically different update cycles, with models changing many times between application updates
  • Versioning: model iterating and ensuring all apps were calling the most appropriate model was difficult to track and prone to error
  • Data integrations: manual integrations and multi-team involvement made customization difficult
  • Model management: models were interacting with myriad endpoints such as ERP, PoS systems, and internal platforms, which was cumbersome to manage

“Algorithmia provides the ability to not worry about infrastructure and guarantees that models we put in production will be versioned and production-quality.”

Luiz Andrade, CTO, Tevec

The solution: model hosting made simple with serverless microservices

Tevec decoupled model development from app development using the AI Layer so it can seamlessly integrate API endpoints, and users can maintain a callable library of every model version. Tevec’s architecture and data science teams now avoid costly and time-consuming DevOps tasks; that extra time can be spent on building valuable new models in Python, “the language of data science,” Andrade reasons. That said, with the AI Layer, Tevec can run models from any framework, programming language, or data connector—future-proofing Tevec’s ML program.

With Algorithmia in place, Tevec’s data scientists can test and iterate models with dependable product continuity, and can customize apps for customers without touching models, calling only the version needed for testing. 

Algorithmia’s serverless architecture ensures the scalability Tevec needs to meet its customers’ demands without the costs of other autoscaling systems, and Tevec only pays for the compute resources it actually uses.

Looking ahead

Tevec continues to enjoy 100-percent year-on-year growth, and as it scales so will its ML architecture deployed on Algorithmia’s AI Layer. Tevec is planning additional products beyond perfect order forecasts and it is evaluating new frameworks for specific ML use cases—perfect for the tool-agnostic AI Layer. Tevec will continue to respond to customer demands as it increases the scale and volume of its service so goods and products always arrive on time at their destinations.

“Algorithmia is the whole production system, and we really grabbed onto the concept of serverless microservices so we don’t have to wait for a whole chain of calls to receive a response.”

Luiz Andrade, CTO, Tevec

Read the full Tevec case study.

AI and the Cloud: Cloud Machine Learning

If you’ve been keeping informed of what’s happening in the AI and machine learning world, you’ve probably heard a lot of talk about this nebulous thing called the cloud. While the cloud is often used to describe a variety of offerings for decentralized computing, there’s an underlying similarity between all such services. 

Use cases for cloud machine learning

Simply put, the cloud consists of collections of anonymous servers housed by tech companies in server farms, and the use cases for the cloud are endless. These servers are used to do everything from running the latest high tech machine learning algorithms on your data to hosting your website to serving as cloud storage for your photography collection. 

Using the cloud is a vital component of most tech businesses in this new AI age, and whoever ends up dominating the market will stand to become entrenched for years to come.

Costs and benefits of a cloud AI platform

For AI and machine learning, the key benefit of the cloud to practitioners lies in the fact that for most people, setting up and hosting their own machine learning infrastructure is prohibitively expensive. Entry-level GPU cards for training machine learning models run close to $1,000, and the best cards run 2-4 times that. Of course, for many models you achieve greater training speeds by running cards in parallel, but doing so requires purchasing multiple cards and networking them together—no easy feat. 

On top of this, you need to house the cards in a desktop of some sort with sufficiently powerful cooling capabilities to prevent overheating. Then you need to factor in the costs of supplying power to the system, as training machine learning models is incredibly resource-intensive. After all is said and done, in order to build an elite machine learning hardware setup, you’re looking at startup costs of potentially over $10,000, and this isn’t even taking into account what would be involved if you were interested in using more specialized hardware such as TPUs or FPGAs.

Serverless ML architectures offer potentially infinite scalability when run on cloud services, and their real-time scaling produces minimal waste, provisioning only the resources needed to respond to demand. For these reasons, serverless is the clear choice for cloud-based machine learning. However, without proper configuration, organizations run the risk of underprovisioning resources in their quest for efficiency.

Using the cloud with trained models

Getting started with training models on the cloud is remarkably simple. Using a cloud provider, you can simply choose a machine with compute power sufficient for your task, spin up an instance, load your libraries and code, and be off to the races. 

Cloud instances cost anywhere from a few cents to a few dollars per hour, and you only pay for the time you use. You can shut off the machine whenever you like, and of course you don’t have to deal with all the costs involved in hardware setup, failure, and maintenance. 

Hardware for cloud AI platforms 

Certain cloud providers also give access to niche hardware that’s not available anywhere else. For example, using GCP you can train your machine learning models on TPUs, specialized processors designed to handle complex tensor arithmetic. Other platforms offer access to FPGAs. 

For most people and most workloads, it’s hard to beat the diversity of hardware options and affordable pay-as-you-go model that the cloud provides. That’s not to say that running applications on the cloud will always be inexpensive. For example, it costs OpenAI over $250/hr just to train their latest NLP language model, GPT-2. 

Hosting models in the cloud

The cloud isn’t just for training models—it’s used for hosting them too. Data scientists and developers can package their trained models as services and then deploy them to generate online predictions. Cloud services can also provide useful analytics to hosts about server load and how many times their model was queried.

Avoiding lock-in

For enterprises, choosing a cloud service is an important step in establishing a tech stack because switching providers downstream can often be difficult. Once an organization couples its code, developer team, and infrastructure to a specific framework or service, those choices can be hard to undo, simply due to how hierarchical the development process is. 

Code is built atop code, and making changes in the core libraries often involves rewriting and reworking a sizable portion of the code base. What’s more, many services have specific frameworks and AI platforms tied to their usage. AWS offers SageMaker, and GCP is optimized for use with TensorFlow. GCP also provides a service called Cloud AutoML, which automates the process of training a machine learning model for you. 

Algorithmia’s AI Layer supports any cloud-based model deployment and serving need so users can avoid vendor lock-in. We have built-in tools for versioning, serving, deployment, pipelining, and integrating with your current workflows. 

The AI Layer integrates with any data connectors your organization is currently using to make machine learning easier, taking you from data collection to model deployment and serving much faster. 

As AI research progresses and becomes more accessible, the only thing that’s clear is that the cloud is a key component of the evolving AI landscape and will continue to be for the foreseeable future. 

Interested in learning more about the AI Layer? Get a demo to see if the AI Layer is the right solution for your organization.

Harnessing Machine Learning for Data Science Projects 


We hear more and more every day that businesses are sitting on troves of valuable data. It’s compared to precious metals, unrefined oil, or cash in a vault. But those items aren’t valuable simply because they exist. Their value comes from what is created out of them. The same holds true for data. Rows full of numbers and text only become useful when you can tell stories and draw insights from them. 

For those less familiar with data-driven business initiatives, the path from raw data, to extracting insights, to making decisions based on those insights may seem like a black hole. But like any process of turning a raw material into a valuable product, there is a system to follow and a way to avoid the black hole. In the case of data, it comes in the form of data science projects. 

The intent of this article is to guide you through the process of creating and executing a data science project, including selecting machine learning models most appropriate for your goals. While this is written in the business context, this process is relevant to those working on personal projects as well. 

What is a data science project?

A data science project is the structured process of using data to answer a question or solve a business problem. Conducting data science projects is becoming more common as companies grow more proactive about finding value in the data they have been storing. Common goals for these projects include: 

  • Developing more targeted and effective marketing campaigns
  • Increasing internal operational efficiency 
  • Revenue forecasting 
  • Predicting likelihood of default (banking/financial services) 

Prompting a data science project

There are two common scenarios in which a data science project might start. The first begins at the top of an organization with directives from senior management. They may have outlined specific problems to be explored and are looking for employees to find opportunities for improvement through the use of data. It’s common for organizations like this to have data scientists or senior analysts embedded in divisions of the organization. This helps them obtain the relevant business knowledge, in addition to their technical skills, to draw out relevant insights. 

Data science projects can also begin at the individual level. It’s not uncommon for an employee to notice a problem or inefficiency and want to fix it. If they have access to the company’s data warehouse and analytics tools, they may begin their investigation alone before bringing others in on the project.  

An example of a data science project

A data scientist at a brick-and-mortar retailer may be tasked with developing a predictive model to judge the likely success of new store locations. The business goal of this project is for team members in the company’s real estate and facilities division to understand what makes established locations successful and use this knowledge to guide decision making in future transactions. 

Note that we will use this retail location example and variations on it for the entirety of this piece to further emphasize points. 

How does machine learning fit into a data science project?

Before getting too far into this discussion, we need to define a few terms. There is often some confusion between machine learning and data science, with some individuals believing that one is “better” than the other, or that they are somehow mutually exclusive. 

Data science is an encompassing term that refers to a discipline whose main pillars are: 

  • Mathematics, specifically statistics
  • Computer science 
  • Business acumen and domain knowledge

Machine learning is a subfield of artificial intelligence. It is the process of using algorithms to learn and understand large amounts of data and then make predictions based on specific questions asked. Machine learning modeling is where math and computer science intersect, as it takes compute power and a knowledge of programming to develop and build on these statistical models. 

From these definitions, it should be clear that machine learning is a vital component of data science. It is the bridge between raw data and solving business problems. You will need to build models and validate them before drawing any conclusions or providing recommendations.

The data science workflow and project process 

When beginning your data science project, it’s useful to frame it as a series of questions that we will discuss in detail. 

  1. What business problem am I trying to solve?

While your personal projects don’t necessarily require a specific focus, businesses are looking to reach certain targets like increasing revenue, cutting costs, operating more efficiently, decreasing customer churn, etc. 

With that in mind, consider how the answer to your project question would influence the business. Ideally, it would give the company the information it needs to develop a plan of action.  

Let’s illustrate this using our retail store example. Instead of asking “Which store brought in the most revenue during Q2?” frame it as “Why did store 123 bring in the most revenue in Q2?” The first question gives you a simple answer that probably can’t be acted upon without further research. The second question suggests that recommendations can easily be extracted out of the answer. 

If you are unsure of the question you want to ask, it’s helpful to first engage in exploratory analysis—making visualizations and small manipulations of the raw data, especially in your area of the business. If anything jumps out, or looks like an opportunity for further research, you can begin your question there. 

  2. Do I have all of the data I need to answer this question?

To develop a predictive model about retail store success, you probably need some of the following information: 

  • Store address
  • Type of location (In a mall? Standalone building?)
  • Revenue by period
  • Square footage
  • Daily traffic 
  • Number of employees per location 

Your company likely has all of this information, but it’s probably stored within various SaaS applications and databases. In addition, you may need some information from publicly available data sources like demographics, population, and weather trends, to round out your picture of the location. 

  3. How will I put everything together in a manageable form?

Combining data sources into a form that you can analyze usually involves the ETL (Extract, Transform, Load) process through the use of one or multiple tools. 

Here’s an overview of ETL:

  • Extraction – The process of pulling data from various sources (relational databases, SaaS applications, etc.).
  • Transformation – Data undergoes a series of changes based on rules that meet the requirements needed for analysis. This step includes data cleaning and normalization (putting numerical values in standard units).  
  • Load – Extracted and transformed data is sent to the end system, usually a data warehouse where it can be linked to an analytics tool. 
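The three steps above can be sketched in Python, with pandas standing in for extraction and transformation and an in-memory SQLite database standing in for the warehouse. The column names and values are illustrative, not a real schema.

```python
import sqlite3

import pandas as pd

# Extract: in practice this would pull from databases or SaaS exports;
# here a small hand-made frame plays that role.
raw = pd.DataFrame({
    "store_id": [1, 2, 3],
    "revenue": [120000.0, 95000.0, 143000.0],
    "sq_ft": [2500, 1800, 3200],
})

# Transform: normalize numeric columns to zero mean and unit variance.
for col in ["revenue", "sq_ft"]:
    raw[col + "_z"] = (raw[col] - raw[col].mean()) / raw[col].std()

# Load: write the cleaned table to the warehouse stand-in (SQLite).
conn = sqlite3.connect(":memory:")
raw.to_sql("store_features", conn, index=False)
loaded = pd.read_sql("SELECT COUNT(*) AS n FROM store_features", conn)
print(loaded["n"].iloc[0])
```

Dedicated ETL tools handle scheduling, incremental loads, and failure recovery; the flow itself, though, is exactly this extract-transform-load shape.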
  4. How will I approach the analysis?

Before deciding on the machine learning model you will use (we’ll get into some actual use cases in the next section), think about how you would frame the answer to your question. Maybe you’re going to make a prediction or possibly uncover segments. What you choose to do will depend on the type of data available to you and your business goals. 

  5. How will I communicate my results to a broader audience?

In other words, what do you plan on doing with the results of your data science project? Will you create a dashboard, send a report to interested parties once a month, or discuss results only when asked? Remember, you are trying to provide value to the business. This is a particularly important point to keep in mind for self-directed projects. 

Which algorithms are used for machine learning?

Machine learning algorithms can be broken down broadly into two methods: supervised learning and unsupervised learning. A supervised method requires a defined target, with labeled data to compare predictions against. An unsupervised method does not have any specific target. 

Let’s illustrate this difference with two questions related to retail stores in our hypothetical example. 

  • Unsupervised: Do our retail stores fall into natural groupings?
  • Supervised: How can we identify stores with a high likelihood of converting customers into store credit card holders?

The supervised question has an explicit target: we want to find stores that share a business-specific characteristic. The unsupervised grouping isn’t looking for anything in particular.

It’s important to note that neither of these methods is “better” or more useful than the other. Their value depends completely on business goals. An unsupervised method is particularly useful when trying to uncover segments that don’t appear obvious by just looking at data laid out in spreadsheets. 

In our retail store example, once stores are placed in natural groupings, business teams might be able to use their domain knowledge and intuition to infer something about these stores that is not explicitly laid out in the data. The supervised example is useful for a company that has a goal in mind and wants to bring all stores up to the level of the successful ones. 

Supervised machine learning methods

  • Regression – This is a predictive data science algorithm that explores the relationship between a dependent variable and one or more independent variables. The output is always a numeric value. Continuing with our example, you could use a linear regression to predict a new location’s potential revenue, given a set of numeric variables.  
  • Classification – This is a predictive method used to determine which category a new observation belongs to. The target output is two or more categories, often framed simply as “yes” or “no.” Example: Given the data we have about other store locations, and our definition of success, should we open a new store in this location? Yes/No. 
  • Class probability estimation – A binary classification is not always useful in every situation. Even our retail store example requires more nuance than a simple yes or no. This is the advantage of class probability estimation, which predicts the likelihood that a new observation belongs to a specific class. Example: Given the data we have about other stores, and our definition of success, what is the likelihood this new store will be successful? The output is a numeric estimate between 0 and 1. 
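The three methods above differ mainly in what they output. As a small illustration of class probability estimation, the sketch below fits a logistic regression to a handful of made-up store records and estimates the success probability of a new location (feature values are invented for illustration).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is a store: [square footage in thousands, daily traffic in
# hundreds]; label 1 = "successful" under some business definition.
X = np.array([[1.5, 2], [3.0, 9], [2.2, 5], [0.8, 1], [2.9, 8], [1.1, 2]])
y = np.array([0, 1, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Candidate new location: mid-size store with strong foot traffic.
new_store = np.array([[2.5, 7]])
p_success = model.predict_proba(new_store)[0, 1]
print(f"estimated success probability: {p_success:.2f}")
```

The numeric probability, rather than a bare yes/no, is what lets the business rank candidate sites or set its own risk threshold.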

Unsupervised machine learning methods

  • Clustering – The unsupervised question examples earlier would probably lead a data scientist to develop a clustering model. Clustering means grouping observations based on similarities. It’s also a form of exploratory data analysis. When interpreting clusters, you will need to look at the underlying components of each group, conduct summary statistics, and compare this information to other groups. It’s important to determine if these clusters have any significant meaning based on your knowledge of the business.  
  • Dimension reduction – When attempting to analyze multiple large data sets, you can run into the problem of having too many intercorrelated variables. Dimension reduction is the process of eliminating redundant variables, shrinking the number of variables in a data set. Breaking data down into its vital components can be an analysis in and of itself, or it can be a first step in refining linear regression models. A commonly used dimension reduction algorithm is principal components analysis (PCA).
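As a sketch of dimension reduction, the example below compresses six correlated synthetic variables down to two principal components with scikit-learn's PCA; because the six columns are built from two latent factors, two components recover nearly all of the variance.

```python
import numpy as np
from sklearn.decomposition import PCA

# Six observed variables constructed as noisy mixtures of two latent
# factors (all synthetic), mimicking intercorrelated business metrics.
rng = np.random.default_rng(2)
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 6))
observed = latent @ mixing + 0.05 * rng.normal(size=(200, 6))

pca = PCA(n_components=2).fit(observed)
reduced = pca.transform(observed)
explained = pca.explained_variance_ratio_.sum()
print(reduced.shape, round(explained, 3))
```

In practice you would inspect `explained_variance_ratio_` to decide how many components to keep, and examine the component loadings to interpret what each one represents.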

Neural networks and how they fit into data science algorithms

Neural networks have come in and out of fashion in the computer science and cognitive computing communities for the past seven decades. They have seen a resurgence recently because of an increase in compute power and more practical applications of the technology. Neural networks are also the underlying architecture of deep learning AI.  

While neural networks are really their own discipline, we’ll discuss them briefly here. Neural networks have three parts: the input layer, output layer, and hidden layer. The input and output layers are part of almost any algorithm—you provide data, and the computer returns some information. The hidden layer is the interesting part. You can think of it as a stack of transformations (supervised or unsupervised) that build on each other until a final output is reached. 
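A minimal illustration of this input/hidden/output structure is scikit-learn's `MLPClassifier`, a small feed-forward network. The dataset here is a synthetic non-linear problem that a linear model would struggle with.

```python
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

# Input layer: 2 features. One hidden layer of 16 units. Output layer:
# class predictions. Data is the synthetic "two moons" problem.
X, y = make_moons(n_samples=300, noise=0.15, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X, y)
train_acc = net.score(X, y)
print(f"training accuracy: {train_acc:.2f}")
```

The hidden layer is what lets the network bend its decision boundary around the interleaving moon shapes, something a single linear layer cannot do.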

Neural networks are often referred to as “black boxes,” meaning you don’t really have an understanding of the “thought” process. In some situations it may be fine not to know, but in other business contexts like financial services and credit scoring, this lack of transparency can be problematic. Keep this in mind if you are considering incorporating neural networks into your data science project.

An additional risk of neural networks is that they can fit training data too well, and become irrelevant when trying to analyze general population data. 
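This risk, known as overfitting, is not unique to neural networks. One quick check that works for any model is to compare training accuracy against held-out accuracy: a large gap signals memorization. The sketch below provokes the effect with a deliberately unconstrained decision tree on noisy synthetic data.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Noisy synthetic data: 20% of labels are randomized, so a perfect
# training score can only come from memorizing noise.
X, y = make_classification(n_samples=300, n_features=20, n_informative=3,
                           flip_y=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# An unconstrained tree grows until it fits the training set exactly.
deep_tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
train_acc = deep_tree.score(X_tr, y_tr)
test_acc = deep_tree.score(X_te, y_te)
print(f"train: {train_acc:.2f}  test: {test_acc:.2f}")
```

Regularization (limiting tree depth, weight penalties, early stopping for neural networks) is the standard remedy once a gap like this appears.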

The importance of data structures and algorithms in data science 

As we mentioned earlier, the technical component of data science skills is where math and computer science meet. Having a foundation in statistical methods is essential to data science, as is having an understanding of not just programming, but computer science itself. 

Data structures and algorithms are the foundation of computer science. A data structure is an organized way of storing data and using it efficiently. And as discussed, an algorithm is an unambiguous, finite, step-by-step procedure to reach a desired output. 

So why is this important to a data scientist? For one, developing algorithms for data science projects is not a one-time task. You will be constantly refining the model with new variables and rows of data. With more data come more demands on processors and records that take longer to access. Large-scale data science projects cannot be efficiently modified or replicated without a base understanding of how data is organized and processed in a computer. Data scientists should not be reinventing the wheel every time they develop an algorithm. Instead, they should be thinking about how an algorithm can be easily scaled and reproduced.

Data science and machine learning: their differences and how they are used in the enterprise

People often confuse data science and machine learning, but they are in fact separate entities, despite what the memes say. Let’s make clear what differences there are between data science and machine learning and give some examples of how each is used in business settings.

Is data science the same as machine learning?

Data science and machine learning are similar but not the same thing. Data science is a broad category of work that deals with data and computing; machine learning falls into that category, but not all data science is machine learning. It’s like how all squares are rectangles but not all rectangles are squares: all machine learning is data science, but not all data science is machine learning. 

What is data science?

Data science includes programming skills and knowledge of mathematics and statistics with the goal of gaining meaningful insights from data. Data analysis, information engineering, artificial intelligence, and machine learning all fall under the category of data science. 

What is machine learning?

Machine learning is actually a type of data analysis with automated analytical model building. As a branch of artificial intelligence, machine learning is based on the notion that systems can learn from datasets, identify patterns within them, and make decisions without human control. In machine learning projects, a data scientist builds a model programmed to find patterns with certain rules. Then, the model is fed training data to analyze in order to quality-control the results. Once it is properly trained, the machine learning model is ready to perform its function without the help of humans.

How is data science used in the enterprise?

Data science has a wide range of uses, involving all parts of the enterprise from marketing to finance. Data science has proved its value, and data scientists are always finding new ways to implement solutions in the enterprise. The most data-driven businesses tend to win, so companies today cannot expect to be successful without leveraging data. Here are a few of the ways data science is being used in the enterprise.

  • Product Development: There is a lot of information that needs to go into product decisions. Data science makes it easier to analyze all the relevant data to come to the best conclusion possible. Data science makes product development not only more efficient but also metrics-based, a smart way to conduct business. 
  • Price Optimization: Keeping prices competitive is crucial in industries such as ecommerce. Data science can be used to scrape prices from competing sites and implement dynamic pricing to keep prices lower than the competition. 
  • Product Recommendations: Recommended products often drive upsells on retail sites; these recommendations come from using data science to analyze customer interactions with the website and glean behavioral trends.
  • Customer Segmentation: Data analysis can be used to segment customers into different audiences. Companies have been segmenting customers for decades, but with data science, it is becoming a more robust practice.

How is machine learning used in the enterprise?

Machine learning is a more recent development in business. Some companies are just beginning to fully grasp the potential for machine learning at the enterprise level, and the possibilities for machine learning use cases are nearly endless. Business processes and decisions that until recently required humans to crunch numbers and review data can now be handled by artificial intelligence algorithms. Here are some of the popular ways companies are using machine learning, but remember, there are always new solutions being developed. 

  • Fraud Detection: Models can be trained to analyze transaction details in real time and classify them as either legitimate or fraudulent, alerting the team when there is suspicious activity. 
  • Medical Diagnosis: Machine learning is now being used in healthcare diagnostics to identify patterns in images and other data. ML models can analyze MRIs, CAT scans, physician notes, and more.
  • Demand Forecasting: Predictive models can make forecasts for future demand as well as other business metrics such as customer churn, customer retention, and sales forecasts.
  • Image and Speech Recognition: Companies like Google use image recognition to classify images and for reverse image search and speech recognition for their virtual digital assistants and voice activated applications. 

Algorithmia can help

Machine learning and data science are important innovations in the business world. Algorithmia understands the value of implementing machine learning at the enterprise level, which is why we created the AI Layer.

The AI Layer allows data scientists to focus on training models rather than infrastructure and deployment challenges. Machine learning models can be difficult to get into production, but with the AI Layer in place from the beginning, productionizing ML is painless.

The AI Layer empowers ML leadership, data scientists, and devops teams to deploy and serve machine learning models quickly, giving them valuable time back for focusing on evaluating model output and health. 

Currently, data scientists are spending the majority of their time on infrastructure tasks—not their core roles. The AI Layer is a serverless microservices architecture that makes deploying, serving, and scaling challenge-free.

Get a demo of the AI Layer to see how it can benefit your organization.

Continue learning

Best Practices in Machine Learning Infrastructure

Data Scientists Should be Able to Deploy and Iterate Their Own Models

Roadmap for ML: Navigating the Machine Learning Roadmap