All posts by Stephanie Kim

Algorithmia on VMware makes any-prem ML a reality

Algorithmia on VMware is our newest on-premises product.

Here at Algorithmia, we have written and talked extensively about the challenges of deploying machine learning models at scale and the importance of data access in general machine learning development and infrastructure. When it comes down to it, ML is critical for any modern company to remain competitive. 

Machine learning has the most impact on a company’s core line-of-business applications, which are often behind the firewall, particularly in regulated industries like financial services, insurance, health care, and laboratory sciences. 

For ML infrastructure to serve those industries, an on-premises product is a requirement. Users also need low-latency, high-throughput, data-driven applications. To those ends, we are thrilled to announce that Algorithmia Enterprise is now available on VMware!

Go where the data and users are 

Data and security go hand in hand, so any ML platform that touches sensitive data inevitably raises security concerns. 

One concern is the security implications of moving data between systems. Data is also expensive and difficult to move, so building and running ML models close to the data source is the preferred practice: it reduces costs, increases iteration speed, and satisfies the security, compliance, and privacy requirements many businesses have. 

Announcing Algorithmia Enterprise on VMware

The general availability of Algorithmia Enterprise on VMware, the next version of our enterprise on-premises product, means customers can run Algorithmia on their existing VMware infrastructure in their data center with the lowest latency and highest security for their ML-enabled applications. 

By providing a fully integrated solution for connecting, deploying, scaling, and managing ML models, we are enabling enterprises to leverage ML in a way they could not before.

Multi-cloud is in our DNA 

Customers faced with the challenges of multi-cloud sometimes try to build their own complex systems using native or in-house services across many cloud providers. This creates massive variability and volatility in deployment, upgrades, performance, and customer experience. And don’t forget that the engineering and support matrix grows with each variant. 

From the early days at Algorithmia, we knew that multi-cloud was critical to enabling our customers’ success, given the vastly different infrastructure choices one could make. So we focused on getting the foundation right, because we knew the speed and quality of the deployment experience are a crucial advantage for customers.

We also know that the customer experience must be consistent across every platform. By delivering a truly multi-cloud platform with UX, feature, and operational parity, we solve these problems for our customers and ensure a delightfully consistent experience.

Why VMware?

The market has spoken. VMware has won the on-premises cloud war and serves the majority of the private cloud/hypervisor market. The rest of the landscape is fractured, with too many variants and incompatibilities to navigate; the next-largest vendor’s adoption sits below 10 percent. 

VMware-based services are standard in nearly every IT environment. By choosing VMware as our preferred on-premises infrastructure platform, we are again enabling the greatest number of companies to achieve their full potential through the use of AI and ML.

Multi-cloud and any-prem

Now with Algorithmia Enterprise on VMware, multi-cloud ML deployment across public and private clouds is not just a wish; it is a reality. Companies that keep their ML workloads close to the data they need, and to the users who need them, will find that multi-cloud is a true differentiator for their business. 

Introducing GitHub Source Code Management for Algorithmia

As a data scientist or machine learning engineer, your specialty is building out robust machine learning models. Your purpose is to make a positive impact on your business by offering data insights, reducing costs, increasing revenue, or even generating customer delight.

As your collection of models gets larger, it quickly becomes difficult to manage your code and collaborate with other members of your team unless you are implementing best practices such as version control and using a code repository.

Model management through centralized repositories

A centralized repository increases the visibility of your models so there is less duplication of work and also provides other teams the opportunity to use those models to solve business problems quickly by not reinventing the wheel. 

Algorithmia already provides a centralized repository for your algorithms in production that are backed by Git and served via our REST API. Our platform offers flexibility in where you store your source code and how you interact with your algorithm in your development process.

You can easily host your source code on Algorithmia, either on the public version or on your private Algorithmia Enterprise instance, and you can use our Web IDE or take advantage of our CLI tool and APIs to stay in your local environment of choice.

Algorithmia is happy to announce that we have expanded our source code management offerings, adding to the benefits of having a centralized repository for increased model management. And there is more to come.

GitHub-hosted source code for model management

When multiple users contribute to an algorithm’s development, there can be many points of friction: conflicts can arise in the codebase, and it can be hard to track what changed and who changed it. To organize the development process, enterprises need a centralized source code repository, a set of controls over what code gets merged, and a way to track changes. 

Now, when you collaborate with other team members on the same algorithm, you can track who has contributed to your codebase, who updated the code and when, and take advantage of other important auditing capabilities as well as GitHub features like GitHub Actions. 

By connecting your Algorithmia and GitHub accounts, you can store your source code on GitHub and deploy it to an algorithm in production on Algorithmia. This way, multiple users can easily contribute to the same algorithm, collaborate on a centralized code base, and ensure code quality with best practices like code reviews through pull requests and issue tracking.

You can also take advantage of GitHub’s governance applications around dependency management to ensure that your models aren’t at risk of utilizing package versions with deprecated features. These governance features enhance Algorithmia’s current model management workflow for reproducibility of machine learning models in production.

Getting started

This guide will show you how easy it is to start using GitHub with Algorithmia.

First, click on “Create a New Algorithm” in the Algorithmia Web IDE. If you’ve never created an algorithm before, learn how in the Developer Center.

You’ll see a form (pictured below) to fill out, and if you have created algorithms on Algorithmia before, you’ll notice that there are now two options for Repository Host, which determines where your source code lives. Choose the GitHub option and then authorize your account: 

Create an Algorithm web page on Algorithmia

Once your accounts are linked, Algorithmia will be able to create repositories linked to that account. Note that you can create an algorithm under any organization you belong to in GitHub, or under the GitHub user account you connected to:

Create an Algorithm page on Algorithm with accounts successfully linked

After going through the configuration steps to create your algorithm, you’ll get a notification telling you that you’ve successfully created the repository and it will show you your new algorithm in the Web IDE:

Successfully connected GitHub to Algorithmia message

Be aware that when you click on “Source Code” for your new algorithm, you will be redirected to the GitHub repository for that algorithm so you can work in the environment you are most familiar with.

Source code hosted on GitHub

Linking other tools 

Now you can set up GitHub Actions, Azure Pipelines, or numerous other integrations for Algorithmia algorithms linked with GitHub repositories.

For a full step-by-step tutorial on how to get started with hosting your source code on GitHub, check out our guide for Source Code Management with GitHub.

Source code management and what’s next

Whether you use the GitHub integration or host your source code on Algorithmia, you can easily take advantage of ML model management best practices. 

We are constantly working on more integrations, including other version control systems and continuous integration pipelines that will enable our users to manage their codebases and deployments seamlessly with Algorithmia. 

Stay tuned for these and other new features that enhance your organization’s ability to connect, deploy, scale, and manage your machine learning pipelines.

Learn more about our other new features in our Release Notes.

Deploying on Algorithmia with ONNX Runtime

Simplifying model deployment 

Deploying models should be an uncomplicated endeavor. To that end, Algorithmia continually aims to ensure machine learning’s value is seen, which means getting to model deployment as quickly and as painlessly as possible. Fortunately, there are many paths to get there.

A lot of roads lead to productionization

As a data scientist or machine learning engineer, you’ve learned to use the best tool for the job. You might be using PyTorch for easy debugging and its handling of variable input lengths, or Caffe2 for deploying on mobile or edge devices. Unfortunately, these and other deep learning frameworks each use a different serialization format, so you must run inference with the same framework you used to save the model. 

The Open Neural Network Exchange (ONNX) format aims to solve this issue by creating a unified serialization format for deep learning frameworks such as Caffe2, Microsoft Cognitive Toolkit, MXNet, and PyTorch with connectors to more languages and frameworks.
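
For example, exporting a trained PyTorch model to ONNX typically looks something like the sketch below; the toy network, input shape, and file name are placeholders standing in for whatever you actually trained.

```python
import torch
import torch.nn as nn

# A toy model standing in for whatever network you trained
model = nn.Sequential(
    nn.Linear(4, 16),
    nn.ReLU(),
    nn.Linear(16, 1),
)
model.eval()

# Dummy input with the shape the model expects (batch_size, features)
dummy_input = torch.randn(1, 4)

# Serialize the graph and weights to a framework-neutral .onnx file
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
)
```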

Of course, with the Algorithmia platform, you can deploy your models that have been trained in various deep learning frameworks and serve them anywhere you can consume an API in the language of your choice! 

However, if you already have a model saved in the ONNX format, you can easily deploy it on Algorithmia using the ONNX Runtime library. The ONNX Runtime module was created by Microsoft in order to make it easy to utilize ONNX in a variety of languages.

The onnxruntime-gpu package is installed from PyPI, and by following the steps below, you can deploy your ONNX model in a Python algorithm in just a few minutes.

ONNX Runtime walkthrough

If you’ve never created an algorithm before, then we suggest going through our Getting Started Guide. Once you work through that, or if you already have experience deploying your models on Algorithmia, you can create your Python algorithm:

And once that’s created, you can add `onnxruntime-gpu` to the dependency file in your algorithm:

Note that because we created a GPU-enabled algorithm, we are installing the GPU version of ONNX Runtime from PyPI rather than the CPU version.
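
As an illustrative sketch, the line you add is the GPU build of the package (any default entries already in your dependency file stay as they are):

```
# add the GPU build of ONNX Runtime; keep any default entries already present
onnxruntime-gpu
```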

And now, import `onnxruntime` as usual in your algorithm:

Note that we added a few other imports to process our image data, but to run an ONNX model, all you need to add is `import onnxruntime` to your algorithm.

On line 13 in the above image, you’ll notice that we are passing in our model that we loaded using the Algorithmia Data API into `onnxruntime.InferenceSession()`.
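
Putting the pieces together, the core of such an algorithm might look roughly like the sketch below. The data collection path is a placeholder, the image preprocessing mentioned above is omitted for brevity, and the sketch assumes the caller sends a numeric array matching the model’s expected input shape.

```python
import Algorithmia
import numpy as np
import onnxruntime

# Created in global scope so it can be reused across calls
client = Algorithmia.client()

def load_model():
    # Placeholder path: replace with your own data collection path to the .onnx file
    file_path = "data://your_username/your_collection/model.onnx"
    model_file = client.file(file_path).getFile().name
    # Create the session once, outside of apply(), so the model is only
    # loaded into memory on the first call of the session
    return onnxruntime.InferenceSession(model_file)

session = load_model()

def apply(input):
    # Assumes the caller sends a nested list of floats matching the model's input shape
    data = np.array(input, dtype=np.float32)
    input_name = session.get_inputs()[0].name
    result = session.run(None, {input_name: data})
    # Return plain Python types so the response is JSON-serializable
    return result[0].tolist()
```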

Once you publish your algorithm, you can make inferences on data passed to your model via our API. Check out our client guides to learn how to serve your model via our API in the language of your choice.
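
For instance, calling the published algorithm from the Python client might look like the following sketch, where the API key, algorithm path, and input are placeholders:

```python
import Algorithmia

client = Algorithmia.client("YOUR_API_KEY")
algo = client.algo("your_username/onnx_demo/1.0.0")  # placeholder algorithm path
# The input format depends on what your apply() function expects
result = algo.pipe([[0.1, 0.2, 0.3, 0.4]]).result
print(result)
```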

To see this example running on our public instance of Algorithmia, check out the full demo for deploying your ONNX model on Algorithmia, as well as the docs in our Developer Center.

Algorithmia makes it simple to deploy your ONNX model into production in a few easy steps. We handle scaling and dependency management and offer a centralized repository for your models. Let us know how we can help bridge the gap between your data scientists and DevOps teams and enable your data science teams to deploy their models today.

Algorithmia at Big Data London

Recently, Algorithmia ventured from Seattle to London to discover what was happening at the Big Data London (BDL) conference in Kensington. We had great conversations with data engineers, data analysts, and business leaders about how Algorithmia makes it easy to deploy machine learning models into production. Our platform handles the MLOps portion of the data science pipeline, so data scientists can focus instead on solving business problems with data.

Highlights from the booth

At BDL, we got the opportunity to talk with many companies about where they are in their ML journeys. While some are just starting to evaluate use cases and consider infrastructure requirements, it’s very encouraging to hear about how they are planning to put their models into production. This is an important step and it is often overlooked. You don’t want to choose a training platform, for instance, that locks you into a specific ecosystem. It’s better to use the best possible platforms and services for each stage of your data science pipeline rather than get locked into one that tries to do everything, without excelling at any portion of it.

We also talked to many data scientists who are at the stage where they have several models sitting on laptops waiting to be utilized in production but don’t know where to go from there. This is a very common scenario, and Algorithmia has white glove customer support to help you get models off laptops and into operation.

There are also engineers and business owners experiencing the same friction points that Algorithmia helps address in the MLOps workflow: versioning, model updating, centralized repositories, and of course dependency management and scaling.

If any of these stages of the ML roadmap resonate with you, come talk to us at AWS re:Invent where we can go into more detail about getting your models deployed to a scalable, reliable infrastructure today.

Special topics in big data

There were several core themes at the conference, and ones that turned out to be very popular were: Data Governance, Self-Service Analytics, DataOps, Customer Experience Analytics, and of course Machine Learning and AI. 

Some crowd favorites included A GDPR Retrospective: Implementation by a Large-Scale Data Organization in Reality, which covered GDPR compliance from a technical standpoint rather than the business point of view taken by some of the other talks in that track. Another popular Data Governance talk, Data Governance as a Customer Service, focused on how data management is a customer service story, not just a technical one. Here at Algorithmia, we feel the same way about model management!

As expected, there were some standout talks in the Keynote Theater. One of our favorites was from EY’s Chief Scientist Harvey Lewis, a leader in applied ML in business, who talked about the need for humans in the loop in the AI pipeline. Lewis covered use cases that showed how important it is to combine humans with machine learning algorithms to ensure that inferences are accurate when it comes to safety, compliance, and risk in the realm of accounting, auditing, and consultancy firms.

Another big hit in the Keynote Theater was Making Everyone A Data Person At Lloyd’s. This talk focused on empowering all users across various teams within an organization to be more data-informed. The speakers talked about their initiative called the Data Lab within Lloyd’s Data, which focuses on making everyone within their company data-literate through mentorship and training.

Our tracks of interest

One of the tracks with the longest queues was the Self-Service Analytics track. We know because the Algorithmia booth was right next to it, so we got a chance to chat with many folks waiting in line. A crowd favorite came from our friends at Tableau, who served up a great talk on how to explore data and gain actionable insights with natural language processing.

And of course, our favorite track: AILab, which hosted talks on everything from ethics in AI, to extracting actionable insights from machine learning. It also covered infrastructure and scaling modern machine learning systems. 

We’ve thought a lot about these subjects too, so be sure to read up on racial bias in AI, gaining insights from customer churn data, and scaling your machine learning models.

What was missing from the talks was substance around the difficulty of the deployment cycle. While scaling is important, making sure you can automate your ML deployment lifecycle is crucial. We’ve covered everything from shortening your deployment time to what makes cloud infrastructure crucial to machine learning.

That wraps up our take on our first experience at the Big Data London conference. And if you’re going to re:Invent next month, check out Diego Oppenheimer’s talk on continuous deployment, and don’t forget to set up a meeting to see how Algorithmia can enable your model deployment, serving, and management at scale.

How To Deploy a Scikit-Learn Model on the AI Layer

For all the time that it takes to clean your data, train and test your model, tune your model hyperparameters, and then re-train and test your model again, you’re not done until it’s deployed into production. Because what good is a model that’s just sitting on your laptop?

In this tutorial, you’ll learn how to write a simple algorithm using a pre-trained Scikit-learn model we’ve provided. This way, once you’ve gotten your own Scikit-learn model’s accuracy in an acceptable range, you’ll be able to easily deploy it into production.

Background

The tutorial will walk through how to:

  • create an algorithm,
  • host data in Algorithmia’s data collections,
  • deploy a random forest regression model trained on a Boston housing prices dataset in order to predict the price of Boston houses that the model has not seen before.

Note that for any model you deploy on Algorithmia, you will need to train and save the serialized model. For Scikit-learn models, you can use the Python Pickle library to serialize, and later deserialize, your model file.
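
As a rough sketch of that training-side step for this tutorial’s model, serialization with Pickle might look like the following; it assumes a scikit-learn version in which the Boston housing loader is still available, and the hyperparameters are arbitrary.

```python
import pickle

from sklearn.datasets import load_boston
from sklearn.ensemble import RandomForestRegressor

# Train a simple random forest regressor on the Boston housing data
boston = load_boston()
model = RandomForestRegressor(n_estimators=100)
model.fit(boston.data, boston.target)

# Serialize the trained model so it can be uploaded to a data collection
with open("scikit-demo-boston-regression.pkl", "wb") as f:
    pickle.dump(model, f)
```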

Before you get started, you’ll need the files in the GitHub repository for this project.

Get Started

Next, you’ll need to upload the .csv file and the pre-trained pickled Scikit-learn model to a data collection.

If you’ve never created a data collection before, check out the docs for hosting your data on Algorithmia. Go ahead and create a data collection. Below, you’ll notice that we named our data collection “demo_files,” but you can name yours as you like.

Once you’ve created a data collection, you can click “Upload Files” from where you stored the scikit-demo-boston-regression.pkl and boston_test_data.csv files on your computer:

Take note of the path created that starts with “data://” and shows your username and the data collection name along with your file name.

You’ll want to use this path in your algorithm to point to your own data and model path, so we recommend keeping this data collection page open and also opening a new tab where you can create your algorithm. This way you can easily copy and paste the paths from your data collections when you’re ready to add them to the demo code sample.

Next, click the “Plus” icon in the navigation pane and create your Scikit-learn algorithm, naming it as you like. If you’ve never created an algorithm before and want to understand what the permissions mean, check out our docs on Getting Started, but note that for this example you’ll want to choose “Python 3.x” as your language. The rest of the permissions and execution environment settings can stay at their defaults.

Once you create your algorithm, you’ll be able to edit it either through the CLI tools or the Web IDE. It’s your choice how you want to interact with your algorithm, but this tutorial will demonstrate working in the Web IDE:

Before we touch our code, we need to add the required dependencies. Click “Dependencies,” found right above your source code. This will show a modal that is essentially a requirements.txt file that pulls the stated libraries from PyPI. If you state a package name without a version number, you’ll automatically get the latest version we support; otherwise, state the needed version number or range.

In this example, we have an older Scikit-learn model, so we need a range of versions for our model to work. Go ahead and add these to the libraries already in your dependency file:

numpy

scikit-learn>=0.14,<0.18

So your whole dependency file will look like this:
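
As an illustrative sketch (the default entries created with your algorithm may differ slightly, so treat the first lines as placeholders):

```
algorithmia>=1.0.0,<2.0
six
numpy
scikit-learn>=0.14,<0.18
```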

You’ll want to remove the boilerplate code that exists in your newly created algorithm and copy/paste the code from the “demo.py” file found in the GitHub repository for this project.

Here is what you’ll see when you paste in the code from demo.py:

Notice in the first few lines of our script that we are importing the Python packages required by our algorithm. Then on line 7, we are creating the variable “client” in global scope to use throughout our algorithm. This will enable us to access our data in data collections via the Data API.

On line 12, inside the “load_model()” function, you’ll want to replace that string with the path from your data collections for the pickled model file.

Then, notice on line 13 that we are passing in that data collection path to the data API using:

client.file(file_path).getFile().name

And then we use the Pickle library to open the file.

Notice that line 20 is where we call the function “load_model().” This is important because you’ll always want to load the model outside of the “apply()” function, so that the model file only gets loaded into memory during the initial call within that session. While the first call to your algorithm might take a bit of time depending on the size of your model, subsequent calls will be much faster. If you were to load your model inside the “apply()” function, the model would get loaded with each call of your algorithm.

Also, if you are tempted to add your model file as a Python module, and then import that module into your algorithm file, this will result in a loss of performance, and we don’t recommend it.

The next function, called “process_input(),” simply turns the .csv file into a numpy array, and we call that function within the “apply()” function where we will pass in the user provided “input.”

The “input” argument is the data or any other input from the user that gets passed into your algorithm. It’s important to support inputs from multiple data sources with exception handling, covering not only data collections as shown here, but also data files hosted in S3, Azure Blob Storage, or other data sources for which we have data connectors. For a great example of handling multiple types of files, or if you want to see a PyTorch algorithm in action, check out the Open Anomaly Detection algorithm in the Algorithmia Marketplace.

Notice that we are returning the predicted data as output from our Scikit-learn model in the apply() function.
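
Based on the description above, a sketch of demo.py might look roughly like this; the data collection paths are placeholders to replace with your own, the .csv is assumed to contain only numeric feature rows, and the exact line numbers referenced above may not match.

```python
import csv
import pickle

import Algorithmia
import numpy as np

# Created in global scope so it is shared across calls
client = Algorithmia.client()

def load_model():
    # Placeholder: replace with your own data collection path to the pickled model
    file_path = "data://your_username/demo_files/scikit-demo-boston-regression.pkl"
    model_file = client.file(file_path).getFile().name
    with open(model_file, "rb") as f:
        return pickle.load(f)

# Loaded once, outside of apply(), so the model is only read into memory
# on the first call of the session
model = load_model()

def process_input(input):
    # Download the user-supplied .csv file and turn it into a numpy array;
    # assumes rows of numeric features with no header
    data_file = client.file(input).getFile().name
    with open(data_file) as f:
        rows = [[float(value) for value in row] for row in csv.reader(f)]
    return np.array(rows)

def apply(input):
    # "input" is the data collection path to the test .csv file
    features = process_input(input)
    predictions = model.predict(features)
    return predictions.tolist()
```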

Now click the “Build” button on the top right of the Web IDE. This will commit our code to a Git repository. Every algorithm is backed by a Git repository, and as you develop your algorithm, each “Build” commits your code and returns a hash version of your algorithm, which you’ll see in the Algorithmia console.

You can use that hash version to call your algorithm locally using one of the language clients for testing purposes while you work on perfecting your algorithm.
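
For example, a test call with the Python client might look like the sketch below, where the API key, username, and hash version are placeholders:

```python
import Algorithmia

client = Algorithmia.client("YOUR_API_KEY")
# Use the hash version returned by "Build" to test before publishing
algo = client.algo("your_username/scikit_demo/4d5e6f...")  # placeholder hash version
result = algo.pipe("data://your_username/demo_files/boston_test_data.csv").result
print(result)
```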

Note that you’ll get a semantic version number once you publish your algorithm.

Now we are ready to test our algorithm. Go back to the data collections page where we got our model path, and copy/paste the path for our .csv file into the Algorithmia console (wrapping it in quotes so it’s a proper JSON formatted string) and hit return/enter:

Once you’re happy with the results, hit the “Publish” button. The publishing workflow will give you options for adding sample code (like the .csv data we tested on) so users of your algorithm can test out its inputs and outputs. You’ll also be able to choose whether you’re publishing a backwards-compatible version of your algorithm or making breaking changes. For more information on the publishing steps, check out our docs on Getting Started.

Once you publish, you can find the runnable code on your algorithm’s description page:

That’s it for deploying your Scikit-learn model into production!