Algorithmia Blog - Deploying AI at scale

AI software adds exciting possibilities to established development practices

"All existing infrastructure will soon have AI software-specific requirements."

AI software enters business workflow

When we hear the term AI software, some of us think of a futuristic world where machine learning has taken artificial intelligence to extreme levels. Fortunately, today’s AI services provide tools for all types of businesses to interact with complex data. 

AI software examples 

AI software such as natural language processing (NLP) allows home automation devices to understand voice commands and provides the intelligence behind language translation.

Facial recognition is a machine learning use case used by social media platforms to accurately tag photos. Open-source facial recognition models are deep learning models that recognize not only that a face exists but also whom the face belongs to.

The open availability of these and other models allows data scientists to be immediately productive in their use of AI software for data analysis.

Infrastructure changes ahead for machine learning workflows

As more and more aspects of AI become mainstream, software and business services will include it as a critical part of their roadmaps. Existing infrastructure will take on additional requirements geared toward the new problems a business is trying to solve with an AI software implementation.

A centralized repository of machine learning models is forward-looking and highly adaptable, and it has already provided solutions to a large number of big data analytics problems.

Algorithmia is leading the way to a machine learning–oriented future by providing a scalable deployment infrastructure that handles critical aspects of the machine learning lifecycle: deployment, manageability, scalability, and security. In this way, data scientists and DevOps can focus on using their expertise to do their intended jobs while Algorithmia seamlessly handles the rest. Designed to complement existing processes, Algorithmia will easily become your central hub for ML developments.

Typical languages for AI software development

Many programming languages used for AI software development are familiar to those accustomed to using powerful programs and scripting tools to automate various tasks. For instance, DevOps engineers use Python to manipulate data beyond normal read, write, and update routines. 

Python is conducive to AI software development thanks to its familiar object-oriented design, extensive libraries, and fast development time, all of which support building neural networks and NLP solutions.
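
As a quick illustration of how little Python code a working model requires, here is a minimal, hedged sketch that trains a small neural network classifier on scikit-learn's bundled digits dataset (the dataset and model choice are illustrative, not from this article):

```python
# Minimal sketch: a small neural network classifier in a few lines of Python.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small feed-forward network; swapping in a different algorithm is one line.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```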

Scala is a prominent machine learning language and is gaining popularity because Spark, a big data processor, is written in Scala. Scala is a compiled language and offers flexibility and scalability, which lends itself well to big data projects.

Of course, Java is popular for its ease of use and the ability it gives data scientists to debug and package the models they use. Large-scale projects take advantage of Java’s simplified workflow, and it has aspects that make it desirable for graphical representations of data.

In addition to these languages, Algorithmia provides a treasure trove of pre-developed machine learning models in most major AI software languages, including Python, R, Rust, Go, Swift, and Scala.

AI software should “just work”

Before tools, processes, and infrastructure matured, DevOps engineers were busy pioneering methods to automate products and services all the way to production. Key aspects of this CI/CD pipeline include source code management, building, packaging, and deployment, all of which must be done in a secure, repeatable manner with little to no human interaction necessary. 

This usually involves loosely tying a number of different products and technologies together. The easiest approach is to use an existing AI platform; there is no need to reinvent the wheel.

Frictionless AI and ML model management

Algorithmia handles everything that would normally require close collaboration between data scientists and DevOps engineers. Oftentimes, data scientists serve dual purposes: developing new tools and workflows in addition to solving critical business problems.

Moreover, DevOps likely has never had to deploy an ML model. By incorporating an auto-scaling, serverless platform, Algorithmia allows for consistent deployment of your models for internal or external consumption.
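
As a rough sketch of what consuming such a deployment can look like (the API key and algorithm path below are placeholders, not real endpoints), invoking a model through the Algorithmia Python client takes only a few lines:

```python
# Hypothetical sketch: calling a model deployed on Algorithmia.
# The API key and algorithm path are placeholders.
import Algorithmia

client = Algorithmia.client("YOUR_API_KEY")
algo = client.algo("your_org/your_model/1.0.0")  # placeholder path and version

response = algo.pipe({"text": "example input"})  # send input to the hosted model
print(response.result)
```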

As with all problem-solving initiatives that involve large data sets, accessing that data quickly and without the need to migrate to alternate formats is paramount. In addition to data hosted in the AI Platform, data stored with major cloud providers connects to the project with ease using an intuitive interface. By using the concept of “collections,” the Algorithmia AI Platform’s Data Model Layer allows customer teams to work in a private subset of models, moderate model publishing, and organize models into logical groups based on teams.

Avoiding AI software engineering and infrastructure pitfalls

Another critical aspect of a successful AI model deployment pipeline is quality documentation. The need to achieve fast results while also gaining the confidence of stakeholders is only possible if the team is aware of the full capabilities of the AI platform they choose. 

The Algorithmia Developer Center has a plethora of documentation specific to our platform and other tutorials pertinent to the languages used for AI software engineering.

The scalability of the Algorithmia platform is the product of much development in cloud computing. After pushing your model’s code with Git, Algorithmia takes over. It not only handles the DevOps aspects of publishing your model as an API, it controls all aspects of preparing the model for scale. 

This advancement in AI software engineering enables data scientists to deliver solutions in a fraction of the time while providing tried and true DevOps processes that will not be foreign to an established team.

Start your machine learning journey on the right foot

Choosing the right AI platform for your team is probably the most influential factor in determining the direction in which your ML model development will mature. 

Many companies that offer solutions in the AI software realm also offer a myriad of other services; Algorithmia only does AI software. For a demo of what Algorithmia can do for your company’s ML program, sign up here.

Developing your own machine learning projects 


Data scientists and machine learning engineers often encounter a disconnect between what they learned (in school, a bootcamp, or independently) and how this knowledge is applied in their work. For instance, you may be proficient in R or Python, but still be unsure how the code or the libraries you’re pulling from relate to actual use cases.

Machine learning is more than its techniques

Machine learning techniques and principles are interesting to learn, but like many technical disciplines, they do not exist simply for the sake of existing. 

Machine learning is the technical foundation of data science; practitioners use their knowledge of statistics combined with computer science to develop predictive models and uncover patterns in data. These models help businesses with tasks like pricing, developing new products, or identifying the best customer for a service.  

So how does one go from coding in a classroom to advising executives on data-driven decision making? It starts with practice—designing and implementing your own machine learning models using real-world datasets. As you gain experience with more projects, you’ll begin to have a better understanding of which algorithms and methods are appropriate for specific types of questions. 

Read on to learn more about machine learning projects and how you design them. 

 


What are some machine learning projects?

Some classic machine learning projects that you may have already been exposed to as a student include: 

  • Boston Housing – This project is meant to teach simple linear regression. You predict the value of a home based on independent variables like the number of rooms in a dwelling, a town’s crime rate, and the number of owner-occupied homes nearby.
  • Iris Flowers – This project teaches basic classification techniques. The purpose is to classify an observation into one of three species of iris based on four attributes: sepal length, sepal width, petal length, and petal width.
  • Handwriting recognition – Using the Modified National Institute of Standards and Technology (MNIST) dataset, the goal of this project is to identify individual digits from handwritten images. This is a good project to attempt if you are interested in neural networks and deep learning AI.
  • Breakfast cereals – The breakfast cereal data set contains nutrition and consumer rating information from 80 products. Because of its breadth of categorical and numerical variables, you can develop projects that include many machine learning techniques including dimension reduction and clustering.

All of the previous projects have corresponding datasets on Kaggle, a community where you can compete to solve real-world machine learning problems. It also allows you to see different people’s approaches to the same question.
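
To make this concrete, here is a minimal sketch of the Iris project using scikit-learn's bundled copy of the dataset (the choice of a k-nearest-neighbors classifier is illustrative):

```python
# Minimal sketch of the classic Iris classification project.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

clf = KNeighborsClassifier(n_neighbors=5)  # classify by the 5 closest neighbors
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```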

Where do I find data?

While Kaggle is one place to start, there are plenty of other sources where you can find datasets. As you conduct more machine learning projects, you may find yourself wanting to analyze more obscure or lesser-known datasets. Here are a few places to look: 

  • UC Irvine Machine Learning Repository – The UCI repository maintains 488 datasets that range in topics from smartwatch activity to forest fire tracking. This is also the home of the Iris dataset we spoke about above.
  • Data.gov – Multiple US federal agencies house their data here. This is an open-source place to find databases related to social sciences, public health, economics, and education. 
  • Quandl – Quandl is a platform that houses finance and economics datasets created by hundreds of publishers. It offers free and paid services.
  • FiveThirtyEight – This data-driven reporting outlet makes all of its sources available on GitHub. Topics include sports, politics, and pop culture. 

How do you make a machine learning model from scratch?

If you really want to challenge yourself with a machine learning project, you can develop a model from scratch. As a reminder, a machine learning model is the equation or computation you develop by applying sample data to the parameters of an algorithm. An algorithm is a set of pre-defined rules. 

Although the specifics will be different based on your actual project, here are the general steps of developing a machine learning model: 

Find a problem to solve

Think about subjects that are interesting to you. Within that category find a problem that could be solved or at least initially approached through data analysis. For example, let’s say we’re interested in healthcare and we’d like to explore what it means to be a “good” hospital in the United States. 


Find relevant data and refine the question

It’s unlikely that we’ll find a dataset that would directly answer such a broad question, but we have an idea of where we should start. 

Medicare publishes data on hospital quality. One dataset in particular is a patient survey that lists an overall rating (1-5) based on dimensions like nurse communication, physician communication, and quietness of the hospitals.

Using this data, we can form more precise questions. For example: 

Which components of patient satisfaction influence a hospital’s overall score?

Or

Given a hospital’s nurse/physician/quietness/etc. data, what is its predicted overall quality score?

Import the data

Data can come in multiple formats, including JSON, XML, and CSV. How you choose to import the data will depend on whether you want to conduct your analysis in R, Python, or a proprietary platform.
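
For example, if the Medicare survey data were downloaded as a CSV file, loading it with pandas in Python is a one-liner (the filename below is hypothetical):

```python
# Hypothetical example: loading a downloaded survey CSV with pandas.
import pandas as pd

df = pd.read_csv("hospital_patient_survey.csv")  # placeholder filename
print(df.shape)   # rows and columns
print(df.head())  # first few records
```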

Explore and clean the data 

Now you’ll want to clean the data. This means getting rid of missing, null, and/or nonsensical values and possibly removing unnecessary columns. At this point you’ll also want to do some data visualizations to see if there are any interesting associations to explore further. 
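
Continuing the hypothetical DataFrame from the import step above (the column names are invented for illustration), a sketch of this cleanup plus one quick exploratory plot might look like:

```python
# Hypothetical cleanup: drop rows missing the target, drop an unneeded column,
# and plot one association to explore further.
import matplotlib.pyplot as plt

df = df.dropna(subset=["overall_rating"])        # remove rows missing the rating
df = df.drop(columns=["hospital_phone_number"])  # remove a column we won't model on

df.plot.scatter(x="nurse_communication", y="overall_rating")
plt.show()
```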

Visualizing the data may also help you figure out if your machine learning project is a supervised or unsupervised learning task. In brief, a supervised task has a target outcome (number or a category), while an unsupervised task does not. Unsupervised tasks are often used to find patterns and associations in the data.

If you are considering a supervised method, then you will need to determine if a classification or regression algorithm is most appropriate. This will depend on the question you are trying to answer.

Develop and refine the model

This is where you begin to experiment and use your outside knowledge and intuition to make adjustments to your model. To train the model and then validate it later, you will need to split the data into training, validation, and test sets.

Data scientists often start with the most basic algorithms when developing a model and move up from there: 

  • Linear regression for basic regression.
  • Logistic regression for basic classification.
  • K-means clustering for an unsupervised task. 

Once you think that you’ve developed the best model, you can evaluate its performance with metrics like r-squared values or an AUC-ROC curve.
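
Continuing the hypothetical hospital survey example, a bare-bones version of this step (the column names and the baseline choice of linear regression are illustrative) might look like:

```python
# Hypothetical sketch: split the data, fit a baseline regression model,
# and evaluate it with r-squared on held-out data.
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

features = ["nurse_communication", "physician_communication", "quietness"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["overall_rating"], test_size=0.2, random_state=0
)

model = LinearRegression().fit(X_train, y_train)
print("r-squared on test data:", r2_score(y_test, model.predict(X_test)))
```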

Communicate your results

Communication is the skill that sets the data tinkerers apart from those who influence business decisions. A crucial part of a data scientist’s job is communicating what they’ve uncovered to a company’s leaders. And they’re doing more than just reporting; they’re also offering recommendations based on what they observed in the data.

Say, for example (using our Medicare data scenario), you find that nurse communication is the variable most correlated with the overall patient satisfaction score. What do you do with that information? Your recommendation might be to brief healthcare leaders about the importance of conducting further research on the effects of nurse communication at the highest and lowest rated hospitals. 

It’s also good practice to post the results and the conclusions of your machine learning projects on your personal blog or to GitHub to share with the larger data science community. It’s an opportunity for you to help others learn, receive feedback, and possibly publicize a solution to a previously unsolved problem.

Protecting your machine learning system from attack


Machine learning model security is not discussed enough. In serverless, GPU-attached environments, object storage solutions like S3 are dependable for persisting your model files. Other than the URI, no other relevant information regarding the model file is saved in the source code. 

This exposes an interesting angle to potentially attack an ML system in production. An exposed API key or backdoor could allow hackers to replace a model file with their own. Below we’ll talk about why we can’t trust models coming from untrusted sources. We will also demonstrate a quick and simple process to authenticate models before loading them into memory.

What are the security implications?

Using open-source tools like TensorFlow potentially exposes your platform to cyber attacks. Even though open-source communities are known to quickly patch up bugs, this time delta may be more than enough time for hackers to initiate an attack or drastically affect business operations.

If your models are analyzing sensitive information like credit card information for fraud detection or scanning legal documents to help with discovery for a court case, hackers could use an exploit to export this information back to themselves.

The question of why models haven’t been better protected until now has a simple answer: as with most emerging technologies, most companies do not or cannot anticipate what these exploits might be. As the industry matures, these security measures need to be implemented quickly, especially for models that process highly sensitive data.

According to TensorFlow’s official documentation that can be found here, TensorFlow models are basically programs and aren’t sandboxed within TensorFlow. A sandboxed application wouldn’t have access to files outside its environment and wouldn’t be able to communicate over the network. As it is, TensorFlow can read and write files, send and receive data over the network, and spawn additional processes, all of which are potentially vulnerable to attack. 

The documentation summarizes that “TensorFlow models are programs, and need to be treated as such from a security perspective.”

Authenticating model metrics

Another place where authentication can take place is during continuous integration, where metrics like F1 scores are calculated. This ensures that the model being tested in CI and the model being deployed to production are the same. Authenticating models prevents data scientists from accidentally overwriting model files and prevents fraudulent models from getting into production.

Authentication by computing model hash

Model inference source code is version-controlled via Git. Since model files can be huge (several GBs), they must be stored in scalable object/blob storage systems, such as S3. Even though some object-storage services do keep track of file hashes, this isn’t standard across all services and may not be exposed through the service you’re using to deploy your model. 

Simple file-based hash authentication

Because of the potential for model tampering, it makes sense to calculate the model file’s hash right after training and hard-code it into the source code to prevent tampering in flight. This allows the inference service to verify the model file at runtime, before executing any TensorFlow model code. It is especially important when the only reference to the model file in the source code is its filename.
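
A minimal sketch of this idea (the model path and expected hash below are placeholders): record the SHA-256 hash of the model file immediately after training, hard-code it, and refuse to load the file at inference time if the hashes no longer match.

```python
# Hypothetical sketch of file-based model authentication with SHA-256.
import hashlib

EXPECTED_SHA256 = "3f2ac1..."  # placeholder: hash recorded right after training
MODEL_PATH = "model.pb"        # placeholder: model file fetched from object storage

def file_sha256(path, chunk_size=8192):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if file_sha256(MODEL_PATH) != EXPECTED_SHA256:
    raise RuntimeError("Model file hash mismatch; refusing to load the model.")
# Only load the model into memory after the check passes.
```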

Advanced weight-based hash authentication

Another way to calculate hashes is to use the weights that are provided in model files. The benefit to this approach is that it would be independent of model format and would work across different frameworks.

Fingerprinting models in this approach would provide consistency, reliability, and reproducibility, and protect an ML system from vulnerabilities. 
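
A minimal sketch of weight-based fingerprinting, assuming a Keras-style model that exposes its weights as NumPy arrays (the helper function below is made up for illustration):

```python
# Hypothetical sketch: fingerprint a model by hashing its weight arrays,
# independent of the file format it was serialized to. Assumes a Keras model.
import hashlib
import numpy as np

def weights_sha256(model):
    digest = hashlib.sha256()
    for layer_weights in model.get_weights():  # list of NumPy arrays
        digest.update(np.ascontiguousarray(layer_weights).tobytes())
    return digest.hexdigest()

# After training: store weights_sha256(model) alongside the model file.
# At inference: recompute it on the loaded model and compare before serving.
```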

Model authentication demonstration on Algorithmia


We have implemented the simple file-based hash authentication method into our Jupyter notebook. The example trains a simple MNIST model, saves and calculates the hash of the model, deploys the model with the hash, and runs model authentication at runtime before running the model.

Ensure your ML models in production haven’t been hot-swapped without anyone noticing. 

Model security with Algorithmia 

As machine learning becomes part of standard software development, vulnerabilities and new methods of attack are surfacing. Fortunately, Algorithmia has built-in security features to prevent model tampering and we are committed to stewarding these practices in the enterprise for everyone to benefit from. Algorithmia aims to empower every organization to achieve its full potential through the use of artificial intelligence and machine learning. 

Read more about ML security

Robust Physical-World Attacks on Deep Learning Visual Classification (arxiv) 

Adversarial Examples for Malware Protection (Patrick McDaniel)

Federated learning and securing data (Digitalist) 

Taking a closer look at machine learning techniques 


Analytic thinking has become a necessary skill for almost everyone working in a business environment. Although data scientists and analysts may be more intimately involved in handling and manipulating data, managers, executives, and other business leaders will be the ones making decisions based on a technical team’s insights and findings. 

Becoming a data-driven business requires everyone in an organization to understand the principles behind data science, including the machine learning techniques that transform raw data into insightful information. The purpose of this piece is to provide managers and aspiring data scientists an overview of the different methods that can be used to solve business questions.

What is a machine learning model?

In discussions about data science and machine learning, the term “model” is often thrown around. A machine learning model is the actual computation or equation that a data scientist develops after applying training data to an algorithm. It is the result of later steps in a data science project.

This piece will focus on machine learning techniques or ways of approaching problems along with examples of algorithms that fall into these categories. We’ll also mention a few real-world machine learning models for clearer examples of how algorithms have been applied in the enterprise. 

What data does machine learning use?

Machine learning models can be developed with almost any kind of data you can imagine, whether numerical or categorical. For example, text-based data can be used for sentiment analysis, while images can be used to develop facial emotion recognition models.  

Data for machine learning comes from various sources including internal databases within an organization (most likely proprietary), and public or open-source datasets. What kinds of data an individual will use to develop a machine learning model will depend on the business question they are trying to solve.

What are some popular machine learning methods?

Machine learning techniques are often broken down into three categories: supervised learning, unsupervised learning, and reinforcement learning. Supervised and unsupervised learning are more commonly used in the business context, so our focus will be on techniques from these two categories. 

Supervised learning methods are used to find a specific target (numerical or categorical), which must also exist in the data. Unsupervised methods are employed when there is no specific target in mind. They are often used to uncover patterns or natural groupings in the data. We’ll note that there are some algorithms that could fall into either category depending on the specificity of the question being asked. 

Supervised learning: classification vs regression 

Regression and classification are the two main subcategories of supervised learning. While both are predictive methods, regression has a numerical output, while classification predicts the category that a new observation would fall into. This is often a binary output, but you can create models for more than two categories. A variation of classification known as class probability estimation is a numerical output (from 0 to 1) of how likely it is that a new observation will fall into a particular category.

  • Linear regression With linear regression, you can predict an output variable using one or more input variables. This is represented in the form of a line: y=bx+c. Linear regression models are one of the most familiar types of models, as many people have been exposed to linear equations as a part of their math education.
  • Support vector machine (SVM) SVM can be used for regression or classification. Linear SVM works by maximizing the distance between classes and drawing a line down the middle. New data is categorized by how it falls along that line. Non-linear SVM is used for more complex relationships (for example, those involving exponents) to more accurately find the widest margin between classes.
  • Logistic regression Despite the name, logistic regression is a classification algorithm—more specifically, it performs a class probability estimation task. A logistic function is applied to a linear equation, and the output is interpreted as the log-odds (a number that ranges from −∞ to ∞) of a new event being a member of a particular class. The log-odds can then be translated into the probability (a number from 0 to 1) of a new item being a member of the class.
  • Decision tree Decision trees are a supervised segmentation technique that places observations in the data into subgroups. 
    • CART is a well-known version of a decision tree that can be used for classification or regression. Once the data scientist chooses a response variable, the computer program will make partitions through the predictor variables. The program automatically chooses the number of partitions to prevent underfitting or overfitting the data to the model. Decision trees are useful in situations where interested parties need to see the entire logical reasoning behind a decision.
  • Random forest Simply put, a random forest is a group of decision trees that all have the same response variable, but slightly different predictor variables. The output of a random forest model is calculated by taking a “vote” of the predicted classification for each tree and having the forest output the majority opinion.  
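
To illustrate how quickly several of the supervised methods above can be compared on the same problem, here is a hedged sketch using scikit-learn's bundled breast cancer dataset as a stand-in (the dataset and settings are illustrative):

```python
# Illustrative comparison of a few supervised classifiers on one dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (LogisticRegression(max_iter=5000),
              DecisionTreeClassifier(random_state=0),
              RandomForestClassifier(random_state=0)):
    model.fit(X_train, y_train)
    print(type(model).__name__, "accuracy:", round(model.score(X_test, y_test), 3))
```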

Unsupervised methods 

  • Clustering Clustering refers to machine learning techniques that are used to find natural groupings among observations in a dataset. Also known as unsupervised segmentation, clustering techniques have two main types: hierarchical and k-means.
    • Hierarchical clustering This method produces a tree-shaped structure known as a dendrogram. Each node in the dendrogram is a cluster based on the similarity of the observations in it. Agglomerative hierarchical clustering is a bottom-up approach that starts with each observation as its own cluster. As you move up the tree, the number of clusters becomes smaller until the top node contains every observation. The opposite is divisive clustering, in which all observations begin in one cluster, and then you divide downward until you reach the desired number of clusters. One of the most well-known hierarchical visualizations is the “Tree of Life” dendrogram that charts all life on earth.
    • K-means clustering K-means clustering is a machine learning algorithm that forms groups of observations around geometric centers called centroids. The “k” refers to the number of clusters, which is determined by the individual conducting the analysis based on domain knowledge. This type of clustering is often used in marketing and market research as an approach to uncover similarity among customers or to uncover a previously unknown segment.
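
A quick sketch of k-means in practice (using the bundled Iris data and k=3 purely for illustration):

```python
# Illustrative k-means clustering: group observations around 3 centroids.
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans

X, _ = load_iris(return_X_y=True)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

print("cluster sizes:", [int((kmeans.labels_ == i).sum()) for i in range(3)])
print("centroids:")
print(kmeans.cluster_centers_)
```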

Other machine learning techniques  

The following machine learning techniques can be applied to regression or classification problems. 

Data reduction 

Data reduction algorithms reduce the number of variables in a data set by grouping similar or correlated attributes.

  • Principal Component Analysis (PCA) PCA is a commonly used dimension reduction technique that groups together variables that are measured on the same scale and are highly correlated. Its purpose is to distill the dataset down to a new set of variables that can still explain most of its variability. PCA is often used in the analysis of large survey datasets. This technique makes interpreting these kinds of surveys much simpler and allows researchers to make assertions about behaviors.
  • Similarity matching A similarity matching algorithm attempts to find similar individuals or observations based on the information that is already known about them. For example, a bank might use a similarity matching algorithm to find customers best suited for a new credit card based on the attributes of customers who already have the card.
  • K-Nearest Neighbor (KNN) Nearest-neighbor reasoning can be used for classification or prediction depending on the variables involved. It is a comparison of distance (often Euclidean or Manhattan) between a new observation and those already in a dataset. The “k” is the number of neighbors to compare and is usually chosen (often via cross-validation) to minimize the chance of overfitting or underfitting the data. In a classification scenario, how close the new observation is to the majority of the neighbors of a particular class determines which class it is in. For this reason, k is often an odd number to prevent ties. For a prediction model, an average of the targeted attribute of the neighbors predicts the value for the new observation. In the previous banking and credit card scenario, a classification output might be a simple yes or no to extend an offer. A prediction output might be the initial credit card limit offered to the customer.
  • Link prediction This method tries to predict the possibility and strength between two connections. This is often used for recommendations for social networking and e-commerce platforms. For example, if two unconnected people share a large number of mutual connections, a link-prediction model may suggest that these two people connect.
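
As a small, hedged example of data reduction in code, here is a PCA sketch that distills the thirty correlated variables of scikit-learn's bundled breast cancer dataset down to two components (the dataset is a stand-in for survey-style data):

```python
# Illustrative dimension reduction with PCA: 30 features down to 2 components.
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

X, _ = load_breast_cancer(return_X_y=True)
X_scaled = StandardScaler().fit_transform(X)  # put variables on the same scale

pca = PCA(n_components=2).fit(X_scaled)
print("variance explained by 2 components:", pca.explained_variance_ratio_.sum())
```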

Combined methods 

Business problems are complex, and you may find that you’ll need to use multiple machine learning techniques to achieve your goal. An important part of data science is understanding how these algorithms work together to answer questions. For example, a data scientist might use PCA in the development of a regression model by first combining similar variables to make the analysis more manageable. 

What are the most important machine learning algorithms?

It’s hard to say what is the most important or best machine learning algorithm or whether there even is one. The methods you use will depend on your specific project needs and the data you have available. A critical skill for anyone interested in using machine learning in the business environment is knowing how to organize a data science project and thinking about which algorithms and techniques it should be approached with. 

Keep Learning

A deeper dive into supervised and unsupervised learning.


A look at random forest algorithms.


Open-source machine learning tools for use in data science projects.

How machine learning works


The early stages of machine learning saw experiments involving theories of computers recognizing patterns in data and learning from them. Today, after building upon those foundational experiments, machine learning is more complex. 

While machine learning algorithms have been around for a long time, the ability to apply complex algorithms to big data applications more rapidly and effectively is a more recent development. Being able to do these things with some degree of sophistication can set a company ahead of its competitors.   

How does machine learning work?

Machine learning is a form of artificial intelligence (AI) that teaches computers to think in a similar way to how humans do: learning from and improving upon past experiences. It works by exploring data and identifying patterns, and it involves minimal human intervention.

Almost any task that can be completed with a data-defined pattern or set of rules can be automated with machine learning. This allows companies to transform processes that were previously only possible for humans to perform—think responding to customer service calls, bookkeeping, and reviewing resumes. 


Machine learning uses two main techniques:

  • Supervised learning allows you to collect data or produce a data output from a previous ML deployment. Supervised learning is exciting because it works in much the same way humans actually learn. 

In supervised tasks, we present the computer with a collection of labeled data points called a training set (for example, a set of readouts from a system of train terminals along with markers indicating where delays occurred in the last three months). 

  • Unsupervised machine learning helps you find all kinds of unknown patterns in data. In unsupervised learning, the algorithm tries to learn some inherent structure in the data from unlabeled examples alone. Two common unsupervised learning tasks are clustering and dimensionality reduction.

In clustering, we attempt to group data points into meaningful clusters such that elements within a given cluster are similar to each other but dissimilar to those from other clusters. Clustering is useful for tasks such as market segmentation.

Dimension reduction models reduce the number of variables in a dataset by grouping similar or correlated attributes for better interpretation (and more effective model training).

How is machine learning used?

Machine learning has many applications, from automating tedious manual data entry to more complex use cases like insurance risk assessments or fraud detection. These include client-facing functions like customer service and product recommendations (see Amazon product suggestions or Spotify’s playlisting algorithms), as well as internal applications that help organizations speed up processes and reduce manual workloads.  

A major part of what makes machine learning so valuable is its ability to detect what the human eye misses. Machine learning models are able to catch complex patterns that would have been overlooked during human analysis. 

Thanks to cognitive technology like natural language processing, machine vision, and deep learning, machine learning is freeing up human workers to focus on tasks like product innovation and perfecting service quality and efficiency. 

You might be good at sifting through a massive but organized spreadsheet and identifying a pattern, but thanks to machine learning and artificial intelligence, algorithms can examine much larger sets of data and understand patterns much more quickly.


What is the best programming language for machine learning?

Most data scientists are at least familiar with how R and Python programming languages are used for machine learning, but of course, there are plenty of other language possibilities as well, depending on the type of model or project needs. Machine learning and AI tools are often software libraries, toolkits, or suites that aid in executing tasks. However, because of its widespread support and multitude of libraries to choose from, Python is considered the most popular programming language for machine learning. 

In fact, according to GitHub, Python is number one on the list of the top machine learning languages on their site. Python is often used for data mining and data analysis and supports the implementation of a wide range of machine learning models and algorithms. 

Supported algorithms in Python include classification, regression, clustering, and dimensionality reduction. Though Python is the leading language in machine learning, there are several others that are very popular. Because some ML applications use models written in different languages, frameworks like Algorithmia’s serverless microservices architecture come into play, allowing models to be built in multiple languages and seamlessly pipelined together.

The bottom line

Machine learning can provide value to consumers as well as to enterprises. An enterprise can gain insights into its competitive landscape and customer loyalty and forecast sales or demand in real time with machine learning. 

Our machine learning platform has built-in tools for versioning, deployment, pipelining, and integrating with customers’ current workflows. Algorithmia integrates with any technology your organization is currently using, fitting in seamlessly to make machine learning deployment a breeze, getting you from model building to productionization much faster. 

If you’re already implementing machine learning in your enterprise or you’d like to start, see how Algorithmia can help.


Further Reading

Machine Learning Use Cases

More on types of supervised and unsupervised models

Unsupervised learning and emotion recognition