All posts by Algorithmia

Customer churn prediction with machine learning


Why is churn prediction important? 

Defined loosely, churn is the process by which customers cease doing business with a company. Preventing lost revenue is one clear motivation for reducing churn, but other considerations often underlie a company’s effort to quell it. Most notably, the cost of acquiring a new customer usually far outweighs the cost of retaining an existing one, so reducing churn is also compelling from that financial perspective. 

While churn presents an obvious difficulty to businesses, its remedy is not always immediately clear. In many cases, and without descriptive data, companies are at a loss as to what drives it. Luckily, machine learning provides effective methods for identifying churn’s underlying factors and prescriptive tools for addressing it.

Methods for solving high churn rate

As with any machine learning task, the first, and often most crucial, step is gathering data. Typical datasets for customer churn prediction include customer attributes such as time spent on a company website, links clicked, products purchased, demographic information, text analysis of product reviews, and the tenure of the customer-business relationship. The key is that the data be high quality, reliable, and plentiful. 

Good results can often still be obtained with sparse data, but more data is usually better. Once the data has been gathered, the problem must be formulated and the featurization chosen. This stage demands attention to detail, as churn can mean different things to different enterprises. 

Types of churn

For some, the problem is best characterized as predicting the ratio of churn to retention. For others, the goal is predicting the percentage risk that an individual customer will churn. And for many more, identifying churn might mean taking a global perspective and predicting the future rate at which customers exit the business relationship. All of these are valid formulations, but whichever is chosen must be applied consistently across the customer churn prediction pipeline.

Once the data has been chosen, prepped, and cleaned, modeling can begin. While identifying the most suitable prediction model can be more of an art than a science, we’re usually dealing with a classification problem (predicting whether a given individual will churn) for which certain models are standards of practice. 

For classification problems such as this, both decision trees and logistic regression are desirable for their ease of use, training and inference speed, and interpretable outputs. These should be the go-to methods in any practitioner’s toolbox for establishing a baseline accuracy before moving onto more complex modeling choices. 
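As a concrete sketch, the following fits both baselines with scikit-learn on synthetic data standing in for a real customer feature matrix; the dataset, features, and hyperparameters here are illustrative assumptions, not a prescription:

```python
# Baseline churn classifiers: logistic regression and a decision tree.
# Synthetic stand-in for a customer feature matrix; y=1 means "churned".
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=8,
                           weights=[0.8], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

for model in (LogisticRegression(max_iter=1000),
              DecisionTreeClassifier(max_depth=4)):
    model.fit(X_train, y_train)
    print(type(model).__name__, "accuracy:",
          round(model.score(X_test, y_test), 3))
```

Either model trains in milliseconds and exposes interpretable structure (coefficients or splits), which is exactly why they make good baselines.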

For decision trees, the model can be further improved by experimenting with ensemble techniques such as random forests, bagging, and boosting. Beyond these two choices, Convolutional Neural Networks, Support Vector Machines, Linear Discriminant Analysis, and Quadratic Discriminant Analysis can all serve as viable prediction models to try. 

Defining metrics with customer data 

Once a model has been chosen, it needs to be evaluated against a consistent and measurable benchmark. One way to do this is to examine the model’s ROC (Receiver Operating Characteristic) curve when applied to a test set. Such a curve plots the True Positive rate against the False Positive rate. By looking to maximize the AUC (area under the curve), one can tune a model’s performance or assess tradeoffs between different models. 

Another useful metric is the Precision-Recall Curve, which, you guessed it, plots precision vs. recall. It’s useful in problems where one class is more qualitatively interesting than the other, which is the case with churn: we’re more interested in the smaller group of customers looking to leave than in those who aren’t (although we do care about them as well). 

In this case, a business would hope to identify potential churners with high precision so as to target potential interventions at them. For example, one such intervention might involve an email blast offering coupons or discounts to those most likely to churn. By carefully selecting which customers to target, businesses can contain the cost of these retention measures and increase their effectiveness.
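Both curves can be summarized by their areas under the curve. A minimal sketch with scikit-learn, again on synthetic imbalanced data (the dataset and model are placeholders for a real churn pipeline):

```python
# Evaluating a churn classifier with ROC AUC and the area under the
# precision-recall curve (average precision). Churners are the minority class.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.9], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]  # predicted churn probabilities

print("ROC AUC:", round(roc_auc_score(y_test, scores), 3))
print("PR  AUC:", round(average_precision_score(y_test, scores), 3))
```

Because churn is imbalanced, the PR AUC is often the more honest number: a model can post a flattering ROC AUC while still ranking few true churners near the top.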

Sifting through insights from model output 

Once the selected model has been tuned, a post-hoc analysis can be conducted. An examination of which input data features were most informative to the model’s success could suggest areas to target and improve. The total pool of customers can even be divided into segments, perhaps by using a clustering algorithm such as k-means. 

This allows businesses to home in on the particular markets where they may be struggling and tailor their churn prevention approaches to those markets’ individual needs. They can also tap into the high interpretability of their prediction model (if such a model was selected) and use it to identify the decisions that led those customers to churn.
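The segmentation step can be sketched with k-means on a couple of hypothetical behavioral features; monthly spend and weekly site visits are assumed here purely for illustration:

```python
# Segmenting customers with k-means on simple behavioral features.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Three synthetic customer groups with different spend/visit profiles.
customers = np.vstack([
    rng.normal([20, 1], 2, size=(50, 2)),   # low spend, infrequent
    rng.normal([60, 5], 2, size=(50, 2)),   # mid spend, regular
    rng.normal([120, 9], 2, size=(50, 2)),  # high spend, frequent
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(customers)
for label, center in enumerate(kmeans.cluster_centers_):
    print(f"segment {label}: avg spend ${center[0]:.0f}, "
          f"{center[1]:.1f} visits/week")
```

Each segment's churn rate and model explanations can then be examined separately, which is how the market-level insight described above is usually obtained.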

Combating churn with machine learning

While churn prediction can look like a daunting task, it’s actually not all that different from any machine learning problem. When looked at generally, the overall workflow looks much the same. However, special care must be given to the feature selection, model interpretation, and post-hoc analysis phases so that appropriate measures can be taken to alleviate churn. 

In this way, the key skill in adapting machine learning to churn prediction lies not in any model specialized to the task but in the domain knowledge of the practitioner and that person’s ability to make knowledgeable business decisions given the black-box nature of a model’s output.

Data science and machine learning: their differences and how they are used in the enterprise

People often confuse data science and machine learning, but they are in fact separate entities, despite what the memes say. Let’s clarify the differences between data science and machine learning and give some examples of how each is used in business settings.

Is data science the same as machine learning?

Data science and machine learning are related but not the same thing. Data science is a broad category of work that deals with data and computing; machine learning falls into that category, but not all data science is machine learning. It’s like how all squares are rectangles but not all rectangles are squares. 

What is data science?

Data science combines programming skills with knowledge of mathematics and statistics to extract meaningful insights from data. Data analysis, data engineering, artificial intelligence, and machine learning all fall under the category of data science. 

What is machine learning?

Machine learning is a type of data analysis built around automated analytical model building. As a branch of artificial intelligence, machine learning is based on the notion that systems can learn from datasets, identify patterns within them, and make decisions with minimal human intervention. In a machine learning project, a data scientist builds a model designed to find patterns under certain rules, then feeds it training data and checks the quality of the results. Once it is properly trained, the model is ready to perform its function without human help.

How is data science used in the enterprise?

Data science has a wide range of uses, involving all parts of the enterprise from marketing to finance. Data science has proved its value, and data scientists are always finding new ways to implement solutions in the enterprise. The most data-driven businesses tend to win, so companies today cannot expect to be successful without leveraging data. Here are a few of the ways data science is being used in the enterprise.

  • Product Development: There is a lot of information that needs to go into product decisions. Data science makes it easier to analyze all the relevant data to come to the best conclusion possible. Data science makes product development not only more efficient but also metrics-based, a smart way to conduct business. 
  • Price Optimization: Keeping prices competitive is crucial in industries such as ecommerce. Data science can be used to scrape prices from competing sites and implement dynamic pricing to keep prices lower than the competition. 
  • Product Recommendations: Recommended products often drive upsells on retail sites; they are generated by using data science to analyze customer interactions with the website, glean behavioral trends, and make recommendations.
  • Customer Segmentation: Data analysis can be used to segment customers into different audiences. Companies have been segmenting customers for decades, but with data science, it is becoming a more robust practice.
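To make the price-optimization bullet concrete, here is a toy dynamic-pricing rule that undercuts the lowest competitor price while respecting a margin floor; the prices, margin, and undercut factor are invented for illustration:

```python
# Toy dynamic pricing: undercut the cheapest competitor, but never
# price below a minimum margin over unit cost.
def dynamic_price(competitor_prices, unit_cost, min_margin=0.10, undercut=0.99):
    """Return a price 1% below the cheapest competitor, floored at cost + margin."""
    floor = unit_cost * (1 + min_margin)
    target = min(competitor_prices) * undercut
    return round(max(target, floor), 2)

# Scraped competitor prices (hypothetical) and our unit cost.
print(dynamic_price([24.99, 26.50, 23.80], unit_cost=18.00))
```

Real repricing engines add demand elasticity, inventory constraints, and rate limits on price changes, but the core trade-off (competitiveness vs. margin) is the same.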

How is machine learning used in the enterprise?

Machine learning is a more recent development in business, and some companies are just beginning to grasp its potential at the enterprise level. The possibilities for machine learning use cases are nearly endless. Business processes and decisions that until recently required humans to crunch numbers and review data can now be handled by artificial intelligence algorithms. Here are some of the popular ways companies are using machine learning, but remember, new solutions are always being developed. 

  • Fraud Detection: Models can be trained to analyze transaction details in real time and classify them as either legitimate or fraudulent, alerting the team when there is suspicious activity. 
  • Medical Diagnosis: Machine learning is now being used in healthcare diagnostics to identify patterns in images and other data. ML models can analyze MRIs, CAT scans, physician notes, and more.
  • Demand Forecasting: Predictive models can make forecasts for future demand as well as other business metrics such as customer churn, customer retention, and sales forecasts.
  • Image and Speech Recognition: Companies like Google use image recognition to classify images and for reverse image search and speech recognition for their virtual digital assistants and voice activated applications. 
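As a minimal illustration of the demand-forecasting bullet, a trailing moving average makes a simple baseline forecast; the sales figures are invented, and real systems would model seasonality, promotions, and other signals:

```python
# Naive demand forecast: predict next period as the mean of recent periods.
def moving_average_forecast(history, window=3):
    """Forecast the next period as the mean of the last `window` periods."""
    recent = history[-window:]
    return sum(recent) / len(recent)

monthly_units = [120, 135, 150, 160, 155, 170]  # hypothetical sales history
forecast = moving_average_forecast(monthly_units)
print(f"forecast for next month: {forecast:.1f} units")
```

Even in production settings, a baseline this simple is worth keeping around: a learned model that cannot beat it is not earning its complexity.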

Algorithmia can help

Machine learning and data science are important innovations in the business world. Algorithmia understands the value of implementing machine learning at the enterprise level, which is why we created the AI Layer.

The AI Layer allows data scientists to focus on training models rather than infrastructure and deployment challenges. Machine learning models can be difficult to get into production, but with the AI Layer in place from the beginning, productionizing ML is painless.

The AI Layer empowers ML leadership, data scientists, and devops teams to deploy and serve machine learning models quickly, giving them valuable time back for focusing on evaluating model output and health. 

Currently, data scientists are spending the majority of their time on infrastructure tasks—not their core roles. The AI Layer is a serverless microservices architecture that makes deploying, serving, and scaling challenge-free.

Get a demo of the AI Layer to see how it can benefit your organization.


Algorithmia is TensorFlow 2.0 Compatible


TensorFlow 2.0 shipped today, 30 September 2019, with new features such as faster debugging and iteration with Eager Execution, a TensorFlow-enhanced implementation of the Keras API, and simplification and compatibility improvements across its APIs. TensorFlow 2.0 is a major upgrade that should increase performance, streamline workflows, and improve compatibility for new or updated models.

We offer day 1 support

At Algorithmia, we believe data scientists should be able to deploy and serve models from any framework and keep up with the pace of tool development. To that end, we’re eager to announce that we support model deployments in the TensorFlow 2.0 framework—Google’s latest version that was released today. 

Effective immediately, our Cloud AI Layer customers can host, serve, and deploy pre-trained TensorFlow 2.0 models the same way they do with previous versions.

Our Enterprise customers will receive the same support in their next product update.

Accessing the TensorFlow 2.0 package set on Algorithmia

While TensorFlow 2.0 includes a conversion tool for existing 1.x models, those conversions will not be fully automatic. Rest assured that the AI Layer will remain fully backward-compatible with all previous versions of TensorFlow—and the more than 15 other frameworks we support.

What’s next

We won’t stop there. We want to provide users with the freedom to choose the best tool for every job, and that means immediate support for future versions of TensorFlow and other frameworks in development. If you have any questions about framework support or our rollout schedule, please contact your account manager or send us a message.

Happy model deployment!

Five Machine Learning Applications for the Back Office

While consumer-facing applications of machine learning (ML) have gotten a lot of attention (Netflix, Uber, and Amazon), the back office deserves some recognition. Enterprise-level systems that run the business—think finance, robo-advisors, accounting, operations, human resources, and procurement—tend to be large, complex, and process-centric. But they also use and produce large amounts of both structured and unstructured data that can be handled in new ways to save time and money. 

Machine learning combined with solution-specific software can dramatically improve the speed, accuracy, and effectiveness of back-office operations and help organizations reimagine how back-office work gets done. 

A current trend among mid-sized and large organizations is to implement Robotic Process Automation (RPA) in the back office to minimize manual tasks and achieve efficiencies. While there are specific use cases for which RPA is an appropriate technology, it differs significantly from a machine learning approach.

Robotic Process Automation and artificial intelligence

Robotic Process Automation is software that mimics human actions, while AI is built to simulate human intelligence. As an example, an RPA bot can be programmed to receive invoices via email (triggered on a specific subject line), download each invoice, and put it in a specific folder. An AI activity would be to “read” the invoice and extract the pertinent information, such as the amount, invoice number, supplier name, and due date.
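The “read the invoice” step can be approximated even with simple rules. The sketch below uses regular expressions on a made-up invoice; production systems would use trained NLP/ML models to handle varied layouts:

```python
# Rule-based field extraction from invoice text (a baseline for the
# ML-driven extraction described above). The invoice is invented.
import re

invoice_text = """
Invoice Number: INV-2041
Supplier: Acme Industrial Supply
Due Date: 2019-11-15
Amount Due: $4,310.50
"""

patterns = {
    "invoice_number": r"Invoice Number:\s*(\S+)",
    "supplier": r"Supplier:\s*(.+)",
    "due_date": r"Due Date:\s*([\d-]+)",
    "amount": r"Amount Due:\s*\$([\d,.]+)",
}

fields = {name: re.search(p, invoice_text).group(1).strip()
          for name, p in patterns.items()}
print(fields)
```

The limitation is apparent immediately: each new invoice layout needs new rules, which is exactly the maintenance burden that learned extraction models aim to remove.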

One of the more interesting downsides of RPA, as outlined by Gartner in its Magic Quadrant report on RPA vendors, is that RPA automations create long-term technical debt rather than overcoming it. 

As you overlay RPA onto current technology and tasks, you are locking yourself into those technologies instead of updating and evolving. 

Organizations must manually track the systems, screens, and fields that each automation touches in each third-party application and update the RPA scripts as those systems change. This is very challenging in a SaaS world, where product updates happen much more frequently than with on-premises software.

As such, the shift toward AI, and specifically ML, is to improve process, not just speed. Here are five specific applications of ML that can be used to improve back-office operations:

Account reconciliation (finance)

Account reconciliations are painful and error-prone. They are also critical to every business, ensuring the proper controls are in place to close the books accurately and on time. Many companies do this manually (which really means using Excel, macros, pivot tables, and Visual Basic), or have invested in RPA, which doesn’t get you very far, or in a Boolean rules-based system, which is expensive to set up and not especially accurate. 

Challenges in Account Reconciliation

An ML approach is ideal for account reconciliations, specifically matching reconciliations, because you have ground-truth data—previous successful matched transactions and consistent fields in subsequent reconciliations. The challenge has been that for large and complex data sets, the combinatorial problem of matching is really hard. Companies like Sigma IQ have focused on this problem and solved it with a combination of machine learning and point-solution software as a hosted platform.
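At its simplest, matching reconciliation pairs entries that agree on key fields. A toy sketch follows; the ledger and bank records are invented, and real matching handles tolerances, one-to-many matches, and learned similarity:

```python
# Toy matching reconciliation: pair ledger entries with bank transactions
# on exact (amount, date) keys; anything unmatched goes to review.
ledger = [
    {"id": "L1", "amount": 250.00, "date": "2019-09-01"},
    {"id": "L2", "amount": 99.95, "date": "2019-09-03"},
    {"id": "L3", "amount": 410.00, "date": "2019-09-04"},
]
bank = [
    {"id": "B1", "amount": 99.95, "date": "2019-09-03"},
    {"id": "B2", "amount": 250.00, "date": "2019-09-01"},
]

bank_index = {(t["amount"], t["date"]): t["id"] for t in bank}
matches = {e["id"]: bank_index.get((e["amount"], e["date"])) for e in ledger}
unmatched = [eid for eid, bid in matches.items() if bid is None]

print("matches:", matches)
print("needs review:", unmatched)
```

The combinatorial difficulty mentioned above appears as soon as amounts split across transactions or dates drift, which is where learned matching earns its keep.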

Invoice processing/accounts payable (accounting)

We introduced invoice processing earlier in this article as a use case for ML in the back office as a way to understand the differences between RPA and ML. The reality is that every business deals with invoices at some level, and as natural language processing (NLP) and ML advance, these improvements will roll down from the enterprise level to small businesses. 

Aberdeen Group indicates that well-implemented accounts payable systems can reduce time and costs by 75 percent, decrease error rates by 85 percent, and improve process efficiency by 300 percent, so it makes sense to pursue, right?

Using ML to augment accounting


Companies like AODocs are extending their NLP and ML capabilities to take some of the pain out of invoice management by automatically capturing information from invoices and triggering the appropriate workflow. These types of solutions can greatly reduce or eliminate manual data entry, increase accuracy, and match invoices to purchase orders. 

Employee attrition detection (HR)

There are many applications of AI in the HR function, including applicant tracking and recruiting (resume scanning and skills analysis), attracting talent before hiring, individual skills management/performance development (primarily via regular assessment analysis), and enterprise resource management.

Using ML to track employee satisfaction

One interesting use case from an ML/NLP perspective is employee attrition. Hiring is expensive, and retaining employees and keeping them happy is imperative to sustainable growth. Identifying attrition risk requires source data—like a consistently applied employee survey that uses unstructured data analysis for the open-field comments. Overlaying this data with factors such as tenure, time since last pay raise or promotion, sick days used, scores on performance reviews, skill set competitiveness with the market, and generally available employment market data can help assess the probability of attrition.
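A hand-weighted logistic score illustrates how such factors might combine into an attrition risk estimate. All weights here are invented for illustration; in practice they would be learned from labeled historical attrition data:

```python
# Illustrative attrition risk score: a weighted logistic combination of
# the factors listed above. Weights are made up, not learned.
import math

def attrition_risk(tenure_years, months_since_raise, sick_days,
                   review_score, market_demand):
    """Return a 0-1 risk score; market_demand is in [0, 1]."""
    z = (-1.5
         - 0.20 * tenure_years          # longer tenure -> lower risk
         + 0.10 * months_since_raise    # stale compensation -> higher risk
         + 0.15 * sick_days
         - 0.50 * review_score          # strong reviews -> more engaged
         + 0.80 * market_demand)        # hot market for skills -> higher risk
    return 1 / (1 + math.exp(-z))

print(round(attrition_risk(2, 18, 6, 3.0, 0.9), 2))   # higher-risk profile
print(round(attrition_risk(8, 3, 1, 4.5, 0.2), 2))    # lower-risk profile
```

Replacing the hand-set weights with coefficients fit by logistic regression on past departures turns this heuristic into the ML version described above.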

Predicting repairs and upkeep for machinery (operations)

The influx of sensors into all types of equipment, including trucks, oil rigs, assembly lines, and trains, means an explosion of data on the usage and wear of that equipment. Pairing this data with historical records of when certain types of equipment need preemptive maintenance means expensive machinery can be scheduled for downtime and repair based not just on hours used or miles driven, but on actual usage.

Predix, General Electric’s industrial IoT platform, powers industrial apps that process the historical performance data of equipment. Its sensor data and signals can be used to discern a variety of operational outcomes, such as when machinery might fail, so that you can plan for—or even prevent—major malfunctions and downtime.

Predictive analytics for stock in transit (procurement)

For companies that spend heavily on hard goods that must be moved, whether as inputs to manufacturing or for delivery to a retail shelf, stock in transit is a major opportunity for applying ML models to predict when goods will arrive at a destination.


Item tracking has improved dramatically with sensors, but it is only a point-in-time solution that doesn’t predict when the goods will arrive or when they should arrive. Weather, traffic, type of transport, risk probabilities, and historical performance are all part of the data that can help operations nail the flow of goods for optimal process timing.
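A linear model over a few shipment features gives a minimal sketch of arrival-time prediction; the features and the synthetic data below are assumptions for illustration, not a real carrier dataset:

```python
# Sketch of ETA prediction for stock in transit: linear regression over
# distance, a weather-delay flag, and historical carrier speed.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
distance_km = rng.uniform(100, 2000, n)
bad_weather = rng.integers(0, 2, n)        # 1 = storms on route
carrier_speed = rng.uniform(50, 90, n)     # historical avg km/h

# Synthetic ground truth: travel time plus a weather penalty plus noise.
hours = distance_km / carrier_speed + 6 * bad_weather + rng.normal(0, 1, n)

X = np.column_stack([distance_km, bad_weather, carrier_speed])
model = LinearRegression().fit(X, hours)

eta = model.predict([[800, 1, 70]])[0]
print(f"predicted transit time: {eta:.1f} hours")
```

Traffic, transport mode, and risk probabilities would enter the model the same way: as additional columns in the feature matrix.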


SAP S/4HANA has an entire module dedicated to making trade-off predictions between different options for stock in transit solutions to meet customer order objectives. 

Further opportunities for back office machine learning

These are just five of the hundreds of use cases in which ML, paired with solution-specific software, can be applied to improve the way the back office functions. Whether it is cutting down on manual tasks, improving accuracy, reducing costs, or helping teams change their critical processes wholesale, machine learning can augment nearly every back-office process.

The state of machine learning in financial services


The financial services industry has often been at the forefront of using new technology to solve business problems. It’s no surprise that many firms in this sector are embracing machine learning, especially now that increased compute power, network connectivity, and cloud infrastructure are cheaper and more accessible. 

This post will detail five important machine learning use cases that are currently providing value within financial services organizations. 

Fraud detection 

The cost of financial fraud for a financial services company jumped 9 percent between 2017 and 2018, resulting in a cost of $2.92 for every dollar of fraud. We have previously discussed machine learning applications in fraud detection in detail, but it’s worth mentioning some additional reasons why this is one of the most important applications for machine learning in this sector. 

Most fraud prevention models are based on a set of human-created rules that result in a binary classification of “fraud” or “not fraud.” The problem with these models is that they can create a high number of false positives. It’s not good for business when customers receive an abnormally high number of unnecessary fraud notifications. Trust is lost, and actual fraud may continue to go on undetected. 

Machine learning clustering and classification algorithms can help reduce the problem of false positives. They continually update a customer’s profile whenever the customer takes a new action. With these many points of data, the model can take a nuanced approach to determining what is normal and abnormal behavior. 
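A per-customer profile check can be as simple as a z-score rule on transaction amounts, a stand-in for the richer behavioral profiling described above; the transaction history and threshold are invented:

```python
# Minimal per-customer anomaly check: flag a new transaction whose amount
# deviates strongly from that customer's own history.
import statistics

def is_suspicious(history, new_amount, threshold=3.0):
    """Flag amounts more than `threshold` std devs from the customer's mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(new_amount - mean) / stdev > threshold

past = [42.0, 55.0, 48.0, 61.0, 50.0, 47.0]  # hypothetical card history
print(is_suspicious(past, 52.0))     # typical purchase
print(is_suspicious(past, 900.0))    # far outside the profile
```

Because the threshold is relative to each customer's own behavior, a $900 charge is flagged for this customer but would be routine for a big spender, which is precisely how profiling reduces false positives.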

Creditworthiness

Creditworthiness is a natural and obvious use of machine learning. For decades, banks have used fairly rudimentary logistic regression models with inputs like income and 30-, 60-, and 90-day payment histories to determine the likelihood of default, or the payment and interest terms of a loan. 

The logistic model can be problematic as it can penalize individuals with shorter credit histories or those who work outside of traditional banking systems. Banks also miss out on additional sources of revenue from rejected borrowers who would likely be able to pay.

With the growing number of alternative data points about individuals related to their financial histories (e.g., rent and utility bill payments or social media actions), lenders are able to use more advanced models to make more personalized decisions about creditworthiness. For example, a 2018 study suggests that a neural network machine learning model may be more accurate at predicting likelihood of default as compared to logistic regression or decision-tree modeling. 
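That kind of comparison can be mimicked in miniature: fit a logistic baseline and a small neural network on synthetic "default" data. The dataset and architecture here are placeholders, not the cited study's setup:

```python
# Comparing a logistic baseline with a small neural network on synthetic
# default-prediction data. Features stand in for payment histories and
# alternative data points.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=12,
                           n_informative=6, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

logit = LogisticRegression(max_iter=1000).fit(X_train, y_train)
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=1).fit(X_train, y_train)

print("logistic accuracy:  ", round(logit.score(X_test, y_test), 3))
print("neural net accuracy:", round(mlp.score(X_test, y_test), 3))
```

The trade-off noted below is visible even here: the logistic model's coefficients yield reason codes directly, while the network's weights do not.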

Despite the optimism around increased equitability for customers and a larger client base for banks, there is still some trepidation around using black box algorithms for making lending decisions. Regulations, including the Fair Credit Reporting Act, require creditors to give individuals specific reasons for an outcome. This has been a challenge for engineers working with neural networks. 

Credit bureau Equifax suggests that it has found a solution to this problem, releasing a “regulatory-compliant machine learning credit scoring system” in 2018. 

Algorithmic trading

Simply defined, algorithmic trading is automated trading using a defined set of rules. A basic example would be a trader setting up automatic buy and sell rules when a stock falls below or rises above a particular price point. More sophisticated algorithms exploit arbitrage opportunities or predict stock price fluctuations based on real-world events like mergers or regulatory approvals. 
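The basic example above translates directly into code; the price thresholds are hypothetical, and real systems add order sizing, risk limits, and execution logic:

```python
# Rule-based trading signal: buy below one price point, sell above another.
def trade_signal(price, buy_below=95.0, sell_above=105.0):
    """Return 'buy', 'sell', or 'hold' from fixed price thresholds."""
    if price < buy_below:
        return "buy"
    if price > sell_above:
        return "sell"
    return "hold"

for price in [92.5, 101.0, 108.3]:
    print(price, "->", trade_signal(price))
```

Machine learning replaces these hand-set thresholds with decisions learned from market data, which is what makes the more sophisticated arbitrage and event-driven strategies tractable.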

The previously mentioned models require thousands of lines of human-written code and have become increasingly unwieldy. Relying on machine learning makes trading more efficient and less prone to mistakes. It is particularly beneficial in high-frequency trading, where large volumes of orders need to be placed as quickly as possible. 

Automated trading has been around since the 1970s, but only recently have companies had access to the technological capabilities able to handle advanced algorithms. Many banks are investing heavily in machine learning-based trading. JPMorgan Chase recently launched a foreign exchange trading tool that bundles various algorithms including time-weighted average price and volume-weighted average price along with general market conditions to make predictions on currency values.

Robo-advisors

Robo-advisors have made investing and financial decision-making more accessible to the average person. Their investment strategies are derived from an algorithm based on a customer’s age, income, planned retirement date, financial goals, and risk tolerance. They typically follow traditional investment strategies and asset allocation based on that information. Because robo-advisors automate processes, they also eliminate the conflict of financial advisors not always working in a client’s best interest.
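A rule-driven allocation like those robo-advisors automate can be sketched as follows; the "110 minus age" heuristic and the risk adjustment are illustrative rules of thumb, not any firm's actual strategy:

```python
# Toy robo-advisor allocation: stock/bond split from age and risk tolerance.
def target_allocation(age, risk_tolerance):
    """risk_tolerance in [0, 1]; returns (stocks %, bonds %)."""
    stocks = 110 - age                     # classic rule-of-thumb baseline
    stocks += (risk_tolerance - 0.5) * 20  # shift up to +/-10 pts by appetite
    stocks = max(0, min(100, stocks))      # clamp to a valid percentage
    return stocks, 100 - stocks

for age, risk in [(30, 0.8), (60, 0.3)]:
    s, b = target_allocation(age, risk)
    print(f"age {age}, risk {risk}: {s:.0f}% stocks / {b:.0f}% bonds")
```

Production robo-advisors layer goals, time horizons, tax considerations, and rebalancing on top, but the core idea is the same: a deterministic policy applied uniformly, with no advisor discretion in the loop.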

While robo-advisors are still a small portion of assets under management by financial services firms ($426 billion in 2018), this value is expected to more than triple by 2023. Customers are enticed by lower account minimums (sometimes $0), and wealth management companies save on the costs of employing human financial advisors. 

Cybersecurity and threat detection 

Although not unique to the financial services industry, robust cybersecurity protocols are absolutely necessary to demonstrate asset safety to customers. This is also a good use case to demonstrate how machine learning can play a role in assisting humans rather than attempting to replace them. Specific examples of how machine learning is used in cybersecurity include: 

  • Malware detection: Algorithms can detect malicious files by flagging never-before-seen software attempting to run as unsafe. 
  • Insider attacks: Models monitor network traffic throughout an organization, looking for anomalies like repeated attempts to access unauthorized applications or unusual keystroke behavior.

In both cases, the tedious task of constant monitoring is taken out of the hands of an employee and given to the computer. Analysts can then devote their time to conducting thorough investigations and determining the legitimacy of the threats.

It will be important to watch the financial sector closely because its use of machine learning and other nascent applications will play a large role in determining those technologies’ use and regulation across countless other industries.