Algorithmia Blog - Deploying AI at scale

Instant, repeatable ML model deployment using DevOps principles

Algorithmia is DevOps for ML

DevOps engineers are delighted when they find a product or service that fits into their already refined CI/CD processes. When choosing a new pipeline tool, engineers weigh a number of factors: ease of use, repeatable processes, and a solid support model are key. Choosing a continuous integration tool for machine learning model deployment is no different.

Algorithmia puts DevOps up front

When a company starts with a DevOps perspective from the outset, the finished product can more easily adapt to the new deployment challenges that arise every day. Algorithmia recognizes model deployment obstacles and takes those key aspects of DevOps to heart in its ML platform, which offers instant, repeatable deployments, thereby removing the burden that typically falls on data scientists.

By using existing DevOps concepts centered on deployment, security, and scalability, Algorithmia overcomes many of the hurdles that slow ML models’ path to production.

Algorithmia also adheres to DevOps principles in providing a fully scalable production environment for hosting completed machine learning models. Since much of the underlying technology runs on tried-and-true products and processes, important functions like auditing and traceability are in place up front.

Future-proofing model management 

From the start you will notice that models are maintained in a familiar repository structure (Git). These models can be consumed from most major programming languages data scientists use, such as Python, R, and Java, as well as through Algorithmia’s own CLI client.

Most big data spans vast amounts of storage, and giving your models access to that data should not force you to move from one cloud to another. In addition to providing your own, secure area for Hosted Data Collections, the data you already host in Amazon S3, Google Cloud Storage, Azure, and even Dropbox is easily accessible by your models from within your Algorithmia account.

Model collaboration made easy

To complement its deployment capabilities, Algorithmia has included a number of other features meant to enable collaboration inside (or even outside) your organization. Publishing your models and sharing them with other users can help build more advanced applications and is paramount to preventing tech silos in decentralized organizations. When you manage your model portfolio on Algorithmia, you control how and where your data is shared.

DevOps engineers strive to remove barriers that block innovation in all aspects of software engineering. Now that they also face the task of deploying a myriad of AI models, that same attitude will ensure that data science products open up even more opportunities for data exploration and use.

Remove deployment barriers

Luckily, deploying machine learning models with Algorithmia can be just another process that fits the tenets of DevOps engineering. Without such a platform, model deployment poses significant challenges on both sides. Data scientists would take on unnecessary infrastructure work (easily handled by the platform) to ensure their models complement existing DevOps procedures, and DevOps engineers might find themselves deploying something foreign to them. For that reason, the Algorithmia platform is a natural approach to the new and ever-evolving field of machine learning for DevOps engineers and data scientists alike.

Further resources

The Algorithmia Learning and Training Center

Machine learning infrastructure best practices

Permissioning your algorithms on Algorithmia

Multiclass classification in machine learning

[Image: ice cubes sorted by opacity, with clear cubes in one tray and opaque cubes in another]

What is multiclass classification? 

Multiclass classification is a classification task with more than two classes (e.g., using a model to identify animal types in images from an encyclopedia). In multiclass classification, a sample can have only one class (e.g., an elephant is only an elephant; it is not also a lemur).

Outside of regression, multiclass classification is probably the most common machine learning task. In classification, we are presented with a number of training examples divided into K separate classes, and we build a machine learning model to predict which of those classes previously unseen data belongs to (e.g., the animal types from the example above). In seeing the training data, the model learns patterns specific to each class and uses those patterns to predict the membership of future data.

Multiclass classification use cases

For example, a cybersecurity company might want to monitor a user’s email inbox and classify incoming emails as potential phishing attempts or not. To do so, it might train a classification model on email texts and inbound email addresses, learning to predict which sorts of URLs threatening emails tend to originate from.

As another example, a marketing company might serve an online ad and want to predict whether a given customer will click on it. (This is a binary classification problem.)

How classifier machine learning works

Hundreds of models exist for classification. In fact, it’s often possible to take a model that works for regression and make it into a classification model. This is basically how logistic regression works. We model a linear response WX + b to an input and turn it into a probability value between 0 and 1 by feeding that response into a sigmoid function. We then predict that an input belongs to class 1 if the model outputs a probability greater than 0.5, and to class 0 otherwise.
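Here is a minimal sketch of that prediction step in Python, with made-up weights W and b standing in for a trained model:

```python
import numpy as np

def sigmoid(z):
    # squash the linear response into a probability between 0 and 1
    return 1.0 / (1.0 + np.exp(-z))

def predict(X, W, b, threshold=0.5):
    # linear response WX + b, turned into P(class 1 | X)
    probs = sigmoid(X @ W + b)
    # predict class 1 when the probability exceeds the threshold
    return (probs > threshold).astype(int)

# toy example with made-up weights
X = np.array([[0.5, 1.2], [-1.0, 0.3]])
W = np.array([0.8, -0.4])
b = 0.1
print(predict(X, W, b))  # [1 0]
```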

Another common model for classification is the support vector machine (SVM). An SVM works by projecting the data into a higher dimensional space and separating it into different classes using one or more hyperplanes. A single SVM does binary classification and can differentiate between two classes. To differentiate between K classes, a common strategy is one-vs-rest: train K SVMs, each predicting membership in one of the K classes. (Another is one-vs-one, which trains a classifier for each pair of classes.)
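In practice, libraries handle the multiclass strategy for you. Here is a quick scikit-learn sketch on the three-class iris dataset (SVC applies a one-vs-one scheme internally):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# a 3-class problem: scikit-learn handles the multiclass strategy internally
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf")
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # accuracy on held-out data
```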

Naive Bayes in ML classifiers

Within the realm of natural language processing and text classification, the Naive Bayes model is quite popular. Its popularity arises in large part from how simple it is and how quickly it trains. In the Naive Bayes classifier, we use Bayes’ Theorem to break down the joint probability of membership in a class into a series of conditional probabilities.

The model makes the naive assumption (hence Naive Bayes) that all the input features to the model are mutually independent. While this is rarely true in practice, it’s often a good enough approximation to get the results we want. The probability of class membership then breaks down into a product of probabilities, P(y = k | x) ∝ P(y = k) × P(x₁ | y = k) × … × P(xₙ | y = k), and we just classify an input x as class k if k maximizes this product.
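A short scikit-learn sketch of a Naive Bayes text classifier, on a tiny made-up corpus:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# tiny made-up corpus: class 1 = spam-like, class 0 = ordinary mail
texts = ["win money now", "meeting at noon",
         "claim your prize money", "lunch tomorrow?"]
labels = [1, 0, 1, 0]

vec = CountVectorizer()
X = vec.fit_transform(texts)  # word-count features

clf = MultinomialNB()  # models P(word | class) under the independence assumption
clf.fit(X, labels)
print(clf.predict(vec.transform(["free prize money"])))  # likely [1]
```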

Deep learning classification examples

There also exist plenty of deep learning models for classification. Almost any neural network can be made into a classifier by simply tacking a softmax function onto the last layer. The softmax function creates a probability distribution over K classes, and produces an output vector of length K. Each element of the vector is the probability that the input belongs to the corresponding class. The most likely class is chosen by selecting the index of that vector having the highest probability.
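A minimal NumPy softmax makes this concrete:

```python
import numpy as np

def softmax(logits):
    # subtract the max for numerical stability, then normalize
    exp = np.exp(logits - np.max(logits))
    return exp / exp.sum()

logits = np.array([2.0, 1.0, 0.1])  # raw scores for K = 3 classes
probs = softmax(logits)             # a probability distribution over the 3 classes
print(probs, probs.sum())           # probabilities summing to 1
print(np.argmax(probs))             # index of the most likely class -> 0
```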

While many neural network architectures can be used, some work better than others. Convolutional Neural Networks (CNNs) typically fare very well on classification tasks, especially for images and text. A CNN extracts useful features from data, particularly features that are robust to translation and, with pooling, to small shifts and distortions. This helps it detect images that may be off-center, shrunken, or slightly distorted, allowing it to achieve higher accuracy in image classification tasks.
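As a sketch, here is a minimal PyTorch CNN classifier for 28×28 grayscale images; the layer sizes are arbitrary choices for illustration:

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Minimal CNN classifier sketch: 28x28 grayscale input, K = 10 classes."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # learn local features
            nn.ReLU(),
            nn.MaxPool2d(2),  # pooling adds robustness to small shifts
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = x.flatten(1)
        return self.classifier(x)  # raw logits; apply softmax for probabilities

model = TinyCNN()
logits = model(torch.randn(1, 1, 28, 28))
print(logits.shape)  # torch.Size([1, 10])
```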

Unsupervised classification

While nearly all typical classification models are supervised, you can think of unsupervised classification as a clustering problem. In this setting, we want to assign data into one of K groups without having labeled examples ahead of time, which is what makes the problem unsupervised. Classic clustering algorithms such as k-means, k-medoids, or hierarchical clustering perform well at this task.
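For example, a k-means sketch with scikit-learn on synthetic, unlabeled data:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# unlabeled data drawn from 3 hidden groups
X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

km = KMeans(n_clusters=3, n_init=10, random_state=42)
cluster_ids = km.fit_predict(X)  # assigns each point to one of K = 3 groups
print(cluster_ids[:10])
```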

Keep learning

A guide to reinforcement learning

What is sentiment analysis

How do microservices work? 

Machine learning examples 

[Image: a basic neural network]

Machine learning is providing enterprise-level solutions, and new careers, at many companies. It is quite a broad field, however, comprising many subcategories and countless uses. So what exactly is machine learning, how and why is it used, and which companies are using it? Let’s dive into those questions, cut through some of the noise surrounding the ML space, and focus on specific ML examples.

What is machine learning and where is it used?

Machine learning is a branch of artificial intelligence that automates analytical model building. Machine learning systems learn from data, identify patterns, and then make decisions based on those patterns with minimal or no human intervention. It is used in every industry and across many different branches of an organization. ML has countless applications within enterprises and is a worthwhile investment toward automating previously time-consuming processes.


Some common examples of machine learning uses include sales or demand forecasting, product recommendation engines, speech recognition, and chatbots. These processes can be crucial to the success of a business, but time consuming for individual workers. Companies can save time and resources by training a model to perform these functions rather than relying on people.

What can machine learning be used for?

Machine learning can be used for any process that a model can be trained to perform. In other words, if training data is available to teach a model which patterns to look for, then machine learning can handle it. Many uses for machine learning center around labeling images, assigning meaning to words, and other pattern-identifying tasks.

What are some examples of machine learning?

The best way to understand a concept or technology is to learn from examples. So here is a list of 12 common enterprise machine learning examples. 

 

1. Demand forecasting

Demand forecasting is crucial for businesses that need to predict and prepare for future product demand. Manual demand forecasting is tedious and time consuming, so automating it with machine learning is a worthy investment for enterprises to make.

 

2. Sentiment analysis

It’s important for every company to understand its customers in order to provide the products or services they need in a satisfactory way. Sentiment analysis is the analysis of customer sentiment toward a brand and its offerings.

 

3. Customer churn prediction

Customer churn prediction is how companies estimate how many customers within their customer base will take their business elsewhere. Knowing when someone is likely to churn allows organizations to create marketing strategies to combat churn or to attract new customers to replace those who leave.

 

4. Customer retention analysis

Customer retention analysis is the counterpart to customer churn prediction. Rather than analyzing how many customers will churn, it analyzes customers who will likely stay with a brand.

 

5. Order fulfillment

Machine learning can be used to conduct order fulfillment, making it possible to fill orders at a faster pace and with fewer human resources than manual order fulfillment.

 

6. Account reconciliation

One of the ways the financial sector benefits from machine learning is account reconciliation. This simply requires a model that is trained to identify similarities and differences in order to flag inconsistencies in reports.

 

7. Invoice accounting

Invoice accounting can also be conducted using machine learning by programming an ML model to process invoices, thus saving time and resources typically spent performing this task.

 

8. Recruiting

ML has been used to review resumes for a while now. Applicant tracking system software is often used to reduce bias in the hiring process. It sorts candidates by credentials, accomplishments, experience, and skills, theoretically without the implicit biases of recruiters.

 

9. Predicting operations upkeep

Consider how helpful it would be for a company to know when maintenance will be needed on equipment before something breaks. Machine learning for operations upkeep relies on past maintenance records to identify patterns in usage, wear and tear, and strain, and uses those patterns to predict when maintenance will be needed.

 

10. Procurement tracking predictions

Machine learning can be used to predict deliveries and arrivals in procurement. The same technology that delivery companies already use to predict package arrival dates/times can be used for procurement tracking predictions in any enterprise.

 

11. Fraud detection

The financial sector relies heavily on ML models, which can be trained to detect anomalies that may be the result of fraud. This can be used to proactively decline transactions to prevent fraud from happening. 

 

12. Creditworthiness

Lenders use machine learning to determine the creditworthiness of potential borrowers or investors. They can use ML models to identify red flags in their credit history. Credit scores are already determined using algorithms.

A list of industries using machine learning

Most large, national, or global companies are likely using machine learning in some capacity, and mid-size companies and even small startups are leveraging ML to stay competitive. Here are a few examples of the types of companies utilizing machine learning.

Social media

Every social media platform uses machine learning in at least small ways. Machine learning determines what shows up on each user’s newsfeed, which suggestions for accounts to follow appear, and which ads each user sees. 

Search engines

Search engines use machine learning algorithms to determine search results. For example, Google uses hundreds of data points in its search algorithm to determine which pages to rank in its results. Other search engines work similarly.

Voice assistant companies

Voice interfaces use machine learning to recognize and assign meaning to speech in order to provide a response or perform the requested action. Speech recognition is a difficult task for machines to perform, but this technology is constantly improving.

Ecommerce sites

Many ecommerce companies use machine learning to serve product recommendations, improving the user experience and upselling customers while they browse the site.

If your company is ready to start a machine learning project, or is already using machine learning, see how Algorithmia can help with deployment and scaling.


Continue learning

Demand forecasting: what it is and why it’s important

Using machine learning for sentiment analysis: a deep dive

Customer churn prediction with machine learning

Deploying on Algorithmia with ONNX Runtime

[Image: topographic map with three trails leading to a central point, illustrating that many paths lead to productionization]

Simplifying model deployment 

Deploying models should be an uncomplicated endeavor. To that end, Algorithmia continually aims to ensure machine learning’s value is seen, which means getting to model deployment as quickly and as painlessly as possible. Fortunately, there are many paths to get there.

A lot of roads lead to productionization

As a data scientist or machine learning engineer, you’ve learned to use the best tool for the job. You might be using PyTorch for easy debugging and its handling of variable input lengths, or Caffe2 for deploying on mobile or edge devices. Unfortunately, these and other deep learning frameworks each use a different serialization format, so you must make inferences in the same framework in which you saved the model.

The Open Neural Network Exchange (ONNX) format aims to solve this issue by creating a unified serialization format for deep learning frameworks such as Caffe2, Microsoft Cognitive Toolkit, MXNet, and PyTorch with connectors to more languages and frameworks.
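For example, a trained PyTorch model can be exported to ONNX in a couple of lines; this sketch uses a torchvision model and a placeholder input shape:

```python
import torch
import torchvision.models as models

# export a pretrained torchvision model to the ONNX format
model = models.resnet18(pretrained=True)
model.eval()

# an example input that defines the graph's input shape
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "resnet18.onnx")
```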

Of course, with the Algorithmia platform, you can deploy your models that have been trained in various deep learning frameworks and serve them anywhere you can consume an API in the language of your choice! 

However, if you already have a model saved in the ONNX format, you can easily deploy it on Algorithmia using the ONNX Runtime library. The ONNX Runtime module was created by Microsoft in order to make it easy to utilize ONNX in a variety of languages.

The onnxruntime-gpu module is installed from PyPI, and by following the steps below, you can deploy your ONNX model in a Python algorithm in just a few minutes.

ONNX Runtime walkthrough

If you’ve never created an algorithm before, we suggest going through our Getting Started Guide. Once you work through that, or if you already have experience deploying your models on Algorithmia, you can create your Python algorithm; for this walkthrough, create it as a GPU-enabled algorithm.

And once that’s created, you can add `onnxruntime` to your dependency file in your algorithm:
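The dependency file is a pip requirements-style list. For this walkthrough it would contain something like the following (a sketch; pin versions as your project requires):

```
onnxruntime-gpu
```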

Note that because we created a GPU-enabled algorithm, we are installing the GPU version of ONNX Runtime from PyPI rather than the CPU version.

And now, import `onnxruntime` as usual in your algorithm:
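Here is a minimal sketch of what such an algorithm might look like. The collection path, image preprocessing, and input layout are hypothetical placeholders to adapt to your own model:

```python
import Algorithmia
import numpy as np
import onnxruntime
from PIL import Image

client = Algorithmia.client()

# hypothetical location of the .onnx file in a hosted data collection
MODEL_PATH = "data://your_username/your_collection/model.onnx"

def load_model():
    # download the model file via the Algorithmia Data API
    local_file = client.file(MODEL_PATH).getFile().name
    return onnxruntime.InferenceSession(local_file)

# load once, outside apply(), so the session is reused across API calls
session = load_model()

def apply(input):
    # assume the input is a data:// path pointing to an image
    image_path = client.file(input).getFile().name
    image = Image.open(image_path).convert("RGB").resize((224, 224))

    # NCHW layout with a batch dimension; adjust to match your model's input
    array = np.asarray(image, dtype=np.float32).transpose(2, 0, 1)[np.newaxis, ...]

    input_name = session.get_inputs()[0].name
    predictions = session.run(None, {input_name: array})[0]
    return int(np.argmax(predictions))
```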

Note that we added a few other imports to process our image data, but to run an ONNX model, all you need to add is `import onnxruntime` to your algorithm.

In the sketch above, you’ll notice that we pass the model file loaded via the Algorithmia Data API into `onnxruntime.InferenceSession()`.

Once you publish your algorithm, you can make inferences on data passed to your model via our API. Check out our client guides to learn how to serve your model via our API in the language of your choice.

To see this example running on our public instance of Algorithmia, check out the full demo for deploying your ONNX model on Algorithmia, as well as the docs in our Developer Center.

Algorithmia makes it simple to deploy your ONNX model into production in a few easy steps. We handle the scaling and dependency management, and we offer a centralized repository for your models. Let us know how we can help bridge the gap between your data scientists and DevOps teams and enable your data science teams to deploy their models today.

Six open-source machine learning tools you should know


As more organizations incorporate AI and machine learning into their processes, data scientists and analysts must continue to learn about how these techniques solve business problems. Previously on this blog we discussed how particular algorithms may be more appropriate for particular research questions and business problems. Another factor to keep in mind is the specific tools that data scientists use for their work.

Most data scientists are at least familiar with how the R and Python programming languages are used for machine learning, but the possibilities do not end there. Machine learning and AI tools are often software libraries, toolkits, or suites that aid in executing tasks. As with machine learning algorithms, there is no single “best” AI or machine learning tool; what you use will (and should) depend on the task you are trying to perform.

Machine learning tools

While there are a growing number of machine learning tools available, we’ve chosen a few open-source options that are popular with many data scientists. Below we’ve detailed several machine learning platforms and tools that any new or experienced data scientist should consider exploring. 

Scikit-learn

Scikit-learn is a fundamental tool for anyone performing machine learning tasks using the Python programming language. It is a machine learning library built to be used in conjunction with NumPy and SciPy, Python’s libraries for numerical and scientific computing, respectively. 

Supported algorithms in Scikit-learn include classification, regression, clustering, and dimensionality reduction. The library has extensive documentation and an active user base, and is a good machine learning tool for someone new to Python programming and machine learning. 
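A few lines are enough to train and evaluate a model. This sketch fits a random forest on the 10-class handwritten digits dataset bundled with the library:

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# 10-class handwritten digit dataset bundled with scikit-learn
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```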

TensorFlow

TensorFlow is an end-to-end machine learning tool developed by the Google Brain team meant for large-scale machine learning and numerical computation projects. The platform constructs deep neural networks to conduct tasks like natural language processing (NLP), image recognition, and translation. 

TensorFlow is known for being easy to use but also more powerful than many other machine learning libraries or toolkits. This is partially because it uses Python to provide a front-end API for developing applications, while actually running those applications in C++. 

One of TensorFlow’s greatest benefits is its ability to handle the detailed bookkeeping of executing each function within an application: TensorFlow automatically figures out the right order in which operations should run, so a data scientist can stay focused on conceptual questions and the overall purpose of the algorithm. TensorFlow is often used for more complex projects and workflows.
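A minimal Keras sketch shows the flavor: you declare the layers, and TensorFlow works out how to execute them:

```python
import tensorflow as tf

# declare the network; TensorFlow handles execution details behind the scenes
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(3, activation="softmax"),  # 3-class output
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```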

PyTorch

PyTorch is a machine learning library for NLP and computer vision. It is built on the Torch library (a scientific computing framework) and uses Python scripting language. 

Many data scientists like PyTorch as a tool because of its flexibility and speed. The tool helps users develop dynamic neural networks and can build and change graphs and visualizations as a user continues to work. PyTorch also allows for distributed training (users performing parallel computations), which reduces the time to complete actions. 

PyTorch is generally the go-to tool for projects that require quick development and need to be usable in a short period of time. 
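The dynamic-graph style is easy to see in a small sketch: the graph is built as the code runs, so even data-dependent control flow differentiates cleanly:

```python
import torch

# the graph is built on the fly, so ordinary Python control flow just works
x = torch.randn(3, requires_grad=True)
if x.sum() > 0:
    y = (x ** 2).sum()
else:
    y = (x ** 3).sum()

y.backward()      # gradients flow through whichever branch actually ran
print(x.grad)
```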

Uber Ludwig

Ludwig is one of many machine learning toolkits developed by Uber’s AI lab and made available to the open-source community in the past year. It is a toolbox built on TensorFlow and designed for deep learning AI projects. It differs from some of the other Uber tools in that it does not require knowledge of a programming language to use. Instead, users test and train models within a drag-and-drop interface, making the machine learning technology more accessible to all members of an analytics or data science team. 

One of Ludwig’s most notable features is its easy-to-understand visualizations, meant to explain the reasoning behind the results of a deep learning algorithm and avoid the “black box” problem. In addition, Ludwig is not meant to be a stand-alone tool and can be integrated with an organization’s other applications via its Python API.

Stanford CoreNLP 

Developed by the NLP Research Group at Stanford University, CoreNLP provides a set of tools specifically focused on analyzing human languages. It performs common NLP tasks like sentiment analysis and information extraction. In addition, it can also help data scientists perform more detailed tasks like understanding dependencies in a portion of text (e.g., how pronouns relate to each other in a passage), which may result in a clearer understanding of the text.

In addition to English, CoreNLP has NLP models for Arabic, Chinese, French, German, and Spanish, setting it apart from many other commercial NLP tools. 

CoreNLP is written in Java and requires Java to run, but can interface with multiple programming languages, including Python. According to some users, one drawback to CoreNLP is that it is optimized for use on local machines rather than in the cloud, and may be better suited for those working on individual projects. 

Weka

Weka was developed at the University of Waikato in New Zealand and is a popular tool among students and individuals who are just getting started with machine learning. The creators of the tool have curated a series of videos and have written a book on machine learning and data mining techniques. 

Weka has a simple GUI that lets users work through the basics of data mining tasks like data preprocessing, clustering, classification, regression, and visualization without having to focus too much on programming languages. Weka also has a deep learning package, allowing users to attempt more complex analyses while still using the simple interface. Users who prefer to write in a programming language can do that as well: Weka is a Java application but can be used with R or Python via API. Because of its origin in academia, Weka is most commonly used as a teaching tool or for smaller projects.

Apache Spark MLlib

MLlib is the machine learning library of Apache Spark, the open-source distributed computing framework, and is designed for large-scale computing environments. As a result, MLlib works best in larger enterprise environments.

MLlib contains multiple algorithms that fall under the following categories: regression, classification, dimensionality reduction, clustering, and recommendations. MLlib has significant language compatibility, allowing data scientists to write applications in Java, Scala, and Python.
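As a small illustration of the Python API, here is a minimal PySpark sketch that fits a logistic regression on a toy two-row DataFrame (real workloads would load a distributed dataset):

```python
from pyspark.sql import SparkSession
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.linalg import Vectors

spark = SparkSession.builder.appName("mllib-sketch").getOrCreate()

# a toy two-point training set with (label, features) rows
train = spark.createDataFrame(
    [(1.0, Vectors.dense([0.0, 1.1])),
     (0.0, Vectors.dense([2.0, 1.0]))],
    ["label", "features"],
)

lr = LogisticRegression(maxIter=10)
model = lr.fit(train)  # training is distributed across the Spark cluster
print(model.coefficients)
```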

The tool also benefits from being a part of the Spark framework, meaning that it allows for quick, iterative computing, and easy deployment that does not require any additional installation. Spark’s large and active community of contributors will also lead to more growth and adoption of MLlib and more resources to support it.