All posts by Stephanie Kim

Algorithmia at Big Data London

People walking on the Millennium Bridge in London, designed by architect Norman Foster

Recently, Algorithmia ventured from Seattle to London to discover what was happening at the Big Data London (BDL) conference in Kensington. We had great conversations with data engineers, data analysts, and business leaders about how Algorithmia makes it easy to deploy machine learning models into production. Our platform handles the MLOps portion of the data science pipeline, so data scientists can focus instead on solving business problems with data.

Highlights from the booth

At BDL, we got the opportunity to talk with many companies about where they are in their ML journeys. While some are just starting to evaluate use cases and consider infrastructure requirements, it’s very encouraging to hear how they are planning to put their models into production. This is an important step, and one that is often overlooked. You don’t want to choose a training platform, for instance, that locks you into a specific ecosystem. It’s better to use the best possible platforms and services for each stage of your data science pipeline than to get locked into one that tries to do everything without excelling at any part of it.

We also talked to many data scientists who are at the stage where they have several models sitting on laptops waiting to be utilized in production but don’t know where to go from there. This is a very common scenario, and Algorithmia has white glove customer support to help you get models off laptops and into operation.

Of course, there are also engineers and business owners who are experiencing the same friction points that Algorithmia helps address in the MLOps workflow. These include versioning, model updating, centralized repositories, and of course dependency management and scaling.

If any of these stages of the ML roadmap resonate with you, come talk to us at AWS re:Invent where we can go into more detail about getting your models deployed to a scalable, reliable infrastructure today.

Special topics in big data

There were several core themes at the conference; the most popular turned out to be Data Governance, Self-Service Analytics, DataOps, Customer Experience Analytics, and of course Machine Learning and AI.

Some crowd favorites included “A GDPR Retrospective: Implementation by a Large-Scale Data Organization in Reality,” which covered GDPR compliance from a technical standpoint rather than the business point of view taken by some of the other talks in that track. Another popular Data Governance talk, “Data Governance as a Customer Service,” focused on how data management is a customer service story, not just a technical one. Here at Algorithmia, we feel the same way about model management!

As expected, there were some standout talks in the Keynote Theater. One of our favorites was from EY’s Chief Scientist Harvey Lewis, a leader in applied ML in business, who talked about the need for humans in the loop in the AI pipeline. Lewis covered use cases showing how important it is to combine humans with machine learning algorithms to ensure that inferences are accurate when it comes to safety, compliance, and risk in the realm of accounting, auditing, and consultancy firms.

Another big hit in the Keynote Theater was Making Everyone A Data Person At Lloyd’s. This talk focused on empowering all users across various teams within an organization to be more data-informed. The speakers talked about their initiative called the Data Lab within Lloyd’s Data, which focuses on making everyone within their company data-literate through mentorship and training.

Our tracks of interest

One of the tracks with the longest queues was the Self-Service Analytics track. We know because the Algorithmia booth was right next to it, so we got the chance to chat with many folks waiting in line. A crowd favorite came from our friends at Tableau, who served up a great talk on how to explore data and gain actionable insights with natural language processing.

And of course, our favorite track: AILab, which hosted talks on everything from ethics in AI to extracting actionable insights from machine learning, and also covered infrastructure and scaling modern machine learning systems.

We’ve thought a lot about these subjects too, so be sure to read up on racial bias in AI, gaining insights on customer churn data, and scaling your machine learning models.

What was missing from the talks was substance around the difficulty of the deployment cycle. While scaling is important, making sure you can automate your ML deployment lifecycle is just as crucial. We’ve covered everything from shortening your deployment time to what makes cloud infrastructure crucial to machine learning.

That wraps up our take on our first experience at Big Data London. If you’re going to re:Invent next month, check out Diego Oppenheimer’s talk on continuous deployment, and don’t forget to set up a meeting to see how Algorithmia can enable your model deployment, serving, and management at scale.

How To Deploy a Scikit-Learn Model on the AI Layer

For all the time it takes to clean your data, train and test your model, tune your model’s hyperparameters, and then re-train and test your model again, you’re not done until it’s deployed into production. Because what good is a model that’s just sitting on your laptop?

In this tutorial, you’ll learn how to write a simple algorithm using a pre-trained Scikit-learn model we’ve provided. This way, once you’ve gotten your own Scikit-learn model’s accuracy in an acceptable range, you’ll be able to easily deploy it into production.

Background

The tutorial will walk through how to:

  • create an algorithm,
  • host data in Algorithmia’s data collections,
  • deploy a random forest regression model trained on a Boston housing prices dataset in order to predict the price of Boston houses that the model has not seen before.

Note that for any model you deploy on Algorithmia, you will need to train and save the serialized model. For Scikit-learn models, you can use the Python Pickle library to serialize, and later deserialize, your model file.
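For example, training and serializing a model like the one in this tutorial might look like the following (a rough sketch; the provided .pkl file was created with its own training script, which may differ):

import pickle

from sklearn.datasets import load_boston
from sklearn.ensemble import RandomForestRegressor

# Train a random forest regression model on the Boston housing prices dataset
boston = load_boston()
model = RandomForestRegressor(n_estimators=100)
model.fit(boston.data, boston.target)

# Serialize the trained model; this is the .pkl file you'll upload later
with open("scikit-demo-boston-regression.pkl", "wb") as f:
    pickle.dump(model, f)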

Before you get started, you’ll need the files in the GitHub repository for this project.

Get Started

First, you’ll need to upload the .csv file and the pre-trained pickled Scikit-learn model to a data collection.

If you’ve never created a data collection before, check out the docs for hosting your data on Algorithmia. Go ahead and create a data collection. Below, you’ll notice that we named our data collection “demo_files,” but you can name yours as you like.

Once you’ve created a data collection, click “Upload Files” and select the scikit-demo-boston-regression.pkl and boston_test_data.csv files from where you stored them on your computer:

Take note of the path created that starts with “data://” and shows your username and the data collection name along with your file name.

You’ll want to use this path in your algorithm to point to your own data and model path, so we recommend keeping this data collection page open and also opening a new tab where you can create your algorithm. This way you can easily copy and paste the paths from your data collections when you’re ready to add them to the demo code sample.

Next, click the “Plus” icon in the navigation pane and create your Scikit-learn algorithm, naming it as you like. If you’ve never created an algorithm before and want to understand what the permissions mean, check out our docs on Getting Started, but note that for this example you’ll want to choose “Python 3.x” as your language so the demo runs properly. The rest of the permissions and execution environment settings can stay at their defaults.

Once you create your algorithm, you’ll be able to edit it either through the CLI tools or the Web IDE. It’s your choice how you want to interact with your algorithm, but this tutorial will demonstrate working in the Web IDE:

Before we touch our code, we need to add the required dependencies. Click “Dependencies,” found right above your source code. This will show a modal that is essentially a requirements.txt file pulling the stated libraries from PyPI. If you state a package name without a version number, you’ll automatically get the latest version we support; otherwise, state the exact version number or range you need.

In this example, we have an older Scikit-learn model, so we need a range of versions for our model to work. Go ahead and add these to the libraries already in your dependency file:

numpy

scikit-learn>=0.14,<0.18

So your whole dependency file will look like this:
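With our additions, it will read roughly as follows (the algorithmia client line comes from the default boilerplate, which may differ slightly on your account):

algorithmia>=1.0.0,<2.0
numpy
scikit-learn>=0.14,<0.18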

You’ll want to remove the boilerplate code that exists in your newly created algorithm and copy/paste the code from the “demo.py” file found in the GitHub repository for this project.

Here is what you’ll see when you paste in the code from demo.py:
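If you’d like to see it inline, here is a sketch of demo.py reconstructed to match the walkthrough below (the file in the repository may differ in small details, such as the exact .csv parsing):

import pickle
import numpy as np
import Algorithmia


# Create the Algorithmia client in global scope
client = Algorithmia.client()

def load_model():
    # Replace this path with the one from your own
    # data collection that points to your pickled model
    file_path = "data://YOUR_USERNAME/demo_files/scikit-demo-boston-regression.pkl"
    model_file = client.file(file_path).getFile().name
    with open(model_file, "rb") as f:
        return pickle.load(f)


# Load the model outside of apply() so it is loaded into
# memory only once, on the initial call in this session
model = load_model()

def process_input(input):
    # Download the user-provided .csv file and turn it into a numpy array
    data_file = client.file(input).getFile().name
    return np.genfromtxt(data_file, delimiter=",", skip_header=1)

def apply(input):
    # "input" is whatever the caller passes to the algorithm
    np_data = process_input(input)
    # Return the predictions from our Scikit-learn model
    return model.predict(np_data).tolist()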

Notice in the first few lines of our script that we are importing the Python packages required by our algorithm. Then on line 7, we are creating the variable “client” in global scope to use throughout our algorithm. This will enable us to access our data in data collections via the Data API.

On line 12, inside the “load_model()” function, you’ll want to replace that string with the path from your data collections for the pickled model file.

Then, notice on line 13 that we are passing in that data collection path to the data API using:

client.file(file_path).getFile().name

And then we use the Pickle library to open the file.

Notice that line 20 is where we call the function “load_model().” This is important: you always want to load the model outside of the “apply()” function so the model file gets loaded into memory only during the initial call within that session. While the first call to your algorithm might take a bit of time depending on the size of your model, subsequent calls will be much faster. If you were to load your model inside the “apply()” function, the model would be reloaded on every call to your algorithm.

Also, if you are tempted to add your model file as a Python module, and then import that module into your algorithm file, this will result in a loss of performance, and we don’t recommend it.

The next function, called “process_input(),” simply turns the .csv file into a numpy array, and we call that function within the “apply()” function where we will pass in the user provided “input.”

The “input” argument is the data or any other input from the user that gets passed into your algorithm. It’s important to support inputs from multiple data sources with exception handling, going beyond data collections as shown here to data files hosted in S3, Azure Blobs, or other data sources for which we have data connectors. For a great example of handling multiple types of files, or if you want to see a PyTorch algorithm in action, check out the Open Anomaly Detection algorithm in the Algorithmia Marketplace.

Notice that we are returning the predicted data as output from our Scikit-learn model in the apply() function.

Now click the “Build” button on the top right of the Web IDE. Every algorithm is backed by a git repository, and as you develop your algorithm, every “Build” commits your code and returns a hash version of your algorithm, which you’ll see in the Algorithmia console.

You can use that hash version to call your algorithm locally using one of the language clients for testing purposes while you work on perfecting your algorithm.
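With the Python client, for example, such a test call might look like this (a sketch; the hash is a placeholder for the version string your build returns, and YOUR_API_KEY is one of your account’s API keys):

import Algorithmia

client = Algorithmia.client("YOUR_API_KEY")

# Call the in-development algorithm at its build hash (placeholder shown)
algo = client.algo("YOUR_USERNAME/YOUR_ALGORITHM/4be0e18fba27")
print(algo.pipe("data://YOUR_USERNAME/demo_files/boston_test_data.csv").result)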

Note that you’ll get a semantic version number once you publish your algorithm.

Now we are ready to test our algorithm. Go back to the data collections page where we got our model path, copy the path for the .csv file, paste it into the Algorithmia console (wrapping it in quotes so it’s a proper JSON-formatted string), and hit return/enter:
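The input will look something like this, with your own username in the path:

"data://YOUR_USERNAME/demo_files/boston_test_data.csv"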

Once you’re happy with the results, hit the “Publish” button. The publishing workflow gives you options for adding sample code (like the .csv data we tested on) so users of your algorithm can test out its inputs and outputs. You’ll also be able to choose whether you’re publishing a backwards-compatible version of your algorithm or making breaking changes. For more information on the publishing steps, check out our docs on Getting Started.

Once you publish, you can find the runnable code on your algorithm’s description page:

That’s it for deploying your Scikit-learn model into production!

Deploying R Models Into Production

The language you use to create a model can make it easier or harder to deploy into production. While the R programming language makes it easy to compare different models against each other, and makes data exploration and visualization a breeze with RStudio, it’s not always the easiest language to deploy into production because of complex versioning: different packages can install and rely on different versions of the same dependency.

Luckily, Algorithmia has your back for working with R in production.

What is R Modeling?

R is a language often used for statistical computing. It provides a variety of statistical techniques, such as linear and nonlinear modeling, time-series analysis, classification, clustering, and classic statistical tests. For example, programmers can use linear regression analysis to create linear models (LM) that illustrate the relationship between two variables, and visualize the residuals in RStudio.

Not only is R a simple, effective, and popular programming language, it is also a suite of software that includes a data handling and storage facility, operators for calculations on matrices, intermediate-level tools for data analysis, and graphical tools for visualizing data analysis in digital or hardcopy formats.

Tutorial: How to Deploy R Models into Production

This introduction shows how to deploy a simple iris classification model using a Naive Bayes classifier; along the way, we’ll discuss the different ways to load dependencies in R models. To get all the necessary files so you can follow along, grab the repository for this tutorial, and for more information on deploying models in R and general algorithm creation in R, check out the Algorithm Development Guides. Note that we support the full R language and standard library, version 3.4.

Consider this the first chapter of your self-taught course on deploying R models into production.
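If you’d rather train and save the model yourself instead of using the .rds file from the repository, a minimal sketch looks like this (using the same e1071 Naive Bayes classifier and R’s built-in iris dataset):

library(e1071)

# Train a Naive Bayes classifier on the built-in iris dataset
model <- naiveBayes(Species ~ ., data = iris)

# Serialize the trained model; this is the naive_bayes_iris.rds file
# you'll upload to your data collection
saveRDS(model, "naive_bayes_iris.rds")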

Upload Your Data To Data Collections

In this demo, we are going to host our data on the Algorithmia platform in Data Collections.

You’ll want to create a data collection to host your saved model and your test data:

  1. Log in to your Algorithmia account and click your avatar, which will show a dropdown of choices. Click “Manage Data.”
  2. Then, in the left panel of the data collection options page, click “My Hosted Data.”
  3. Click “Add Collection” under the “My Collections” section of your data collections page. Let’s name ours “iris_r_demo”.
  4. After you create your collection, you can set the read and write access on it. We are going to select “Private,” since only you will be calling your algorithm in this instance. For more information on ACL permission types for Algorithmia hosted data, check out the docs.
  5. Now, let’s put some data into your newly created data collection. You can either drag and drop the file naive_bayes_iris.rds or click “Drop files here to upload” and select it from where you stored the repo on your computer.

Note that you can also upload your model or data files from Dropbox or Amazon S3 using the Data API.
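For instance, uploading the saved model from your local machine with the R client might look something like this (a sketch, assuming the client’s putFile method and one of your API keys):

library(algorithmia)

client <- getAlgorithmiaClient("YOUR_API_KEY")

# Upload the saved model file to your data collection
client$file("data://YOUR_USERNAME/iris_r_demo/naive_bayes_iris.rds")$putFile("naive_bayes_iris.rds")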

Create Your Algorithm

Now we are ready to deploy our model.

First, create an algorithm

  1. Click the “Plus” icon at the top right of the navbar.
  2. Check out Getting Started in algorithm development to learn about the various permissions in the form, and note that if you want to delete your algorithm later, you should set it to “Private.”
  3. Click the purple “Create Algorithm” button.

Now that you have created your algorithm, you’ll get a modal with information about using the CLI and Git. Every algorithm has a Git repo behind it so you can experiment with different I/O in development mode by calling the hash version.

Add Code Sample

  1. Click on the tab “Source” and you’ll notice boilerplate code for Hello World.
  2. Delete that code, then copy and paste in the code from the file demo.R.
  3. Note that you’ll need to change the data collection path to the one we created earlier.

Recall that our data collection is named “iris_r_demo,” and you’ll need to change “YOUR_USERNAME” to your own username in the model path:

file_path <- "data://YOUR_USERNAME/iris_r_demo/naive_bayes_iris.rds"

Here is the full algorithm:
library(algorithmia)
library(e1071)

client <- getAlgorithmiaClient()

read_data <- function(file_path) {
  # Use data api to process data passed in as user input
  csv_file <- client$file(file_path)$getFile()
  csv_data <- read.csv(csv_file,  stringsAsFactors=FALSE, check.names=FALSE, header=TRUE)
  return(csv_data)
}

load_model <- function() {
    # Load model that was saved as .rds file from data collections
    file_path <- "data://YOUR_USERNAME/iris_r_demo/naive_bayes_iris.rds"
    rds_file <- client$file(file_path)$getFile()
    loaded_model <- readRDS(rds_file, refhook = NULL)
    return(loaded_model)
}

# Load model outside of algorithm function - this way after the model is first
# loaded, subsequent calls will be much faster
model <- load_model()

prediction <- function(data) {
    # Using pre-trained Naive Bayes model make predictions on user data
    iris_pred_naive <- predict(model, data)
    return(iris_pred_naive)
}

# API calls will begin at the algorithm() method, with the request body passed as 'input'
# For more details, see algorithmia.com/developers/algorithm-development/languages
algorithm <- function(input) {
    example_data <- read_data(input)
    predictions <- prediction(example_data)
    return(predictions)
}

The code example above shows how we load the model we hosted in Data Collections, using client$file(file_path)$getFile() via our Data API.

Note you always want to initialize the model outside of the algorithm function. This way, after the model is initially loaded, subsequent calls will be much faster within that session.

Add Dependencies

  1. Click the “Dependencies” button in the grey navbar.

As you can see in the dependency file, there are four different ways you can load packages in R.

If you want the latest version from CRAN, then you simply type in the dependency name:

e1071

If you want that same package in an older version, all you have to do is install the exact version from CRAN by finding it in the package archive, which is linked in the package’s docs under “Old Sources.” There you’ll find the archived versions, so simply choose yours and load the dependency:

https://cran.r-project.org/src/contrib/Archive/e1071/e1071_1.6-6.tar.gz

Or, if you need to pull a dependency off of GitHub instead, all you need to do is install the package with:

-g /cran/e1071

This is similar to installing with devtools::install_github() in R.

And finally, if you’re having issues with version conflicts (for instance, package A requires one version of package B while package C requires a different version of package B), you can install in your dependency file using install.packages():

-e install.packages("e1071")

Note the last format will take longer to load the dependencies than the other formats.

Compile Code

  1. Click the “Compile” button in the top right of the grey navbar. This commits your code and provides a hash version of your algorithm that you can use to call and test it via our CLI or, in our case, the Web IDE.
  2. Now test your code in the console by passing in the data file we stored in our data collection.
  3. REMEMBER: Change YOUR_USERNAME to your own username in the model path on line 15 of the code example.

In this case, we simply passed in a string, but we recommend creating a more robust data structure such as an R list or Python dictionary. That way you can allow for various input types, output files, and other customizations.

Because every algorithm is backed by a Git repository, you have a hash version you can use to call your algorithm while developing it, but you’ll get a semantic version once you publish it.

Pass in the test file that we uploaded to our data collection:

"data://YOUR_USERNAME/iris_r_demo/iris_test_data.csv"

Then you’ll see the output, an array of iris species names predicted for our test data set.

["setosa","setosa","versicolor",...]

Finally, you can publish your algorithm by clicking on the “Publish” button on the top right of your screen.

This will open the modal that takes you through the workflow of publishing your model:

R Models R Production

First, you’ll notice your commit history, and you can write any release notes that make sense to include.

Once you’re done with the first tab, you can click “Next” and look at the Sample I/O, which is an important piece of your algorithm. For users to consume your model, you’ll need to supply a sample input, which will become a runnable example on your algorithm’s description page. That way users can test your model with your sample data or their own to see if it works with their use case and data. Even if you’re the only one consuming your model, it’s important to document for your future self!

Finally, clicking on the last tab, “Versioning,” will let you set how much you want to charge for your algorithm, whether it’s public or private, and whether it’s a breaking change or a minor revision.

And that’s it! Once you click “Publish” you’ve just deployed your first R model to production on Algorithmia.

If you haven’t installed the Algorithmia library from CRAN on your local machine, do that now. If you need help check out the R Client Guides.

Now, let’s call our model via the API:

library(algorithmia)

input <- "data://YOUR_USERNAME/iris_r_demo/iris_test_data.csv"

# Replace YOUR_API_KEY with an API key from your Algorithmia account
client <- getAlgorithmiaClient("YOUR_API_KEY")

# Change the algo path to your own algorithm under your username or team
algo <- client$algo("test_org/naive_bayes_iris/0.1.0")

result <- algo$pipe(input)$result

print(result)

When you run your algorithm, you’ll get the same result as you saw in the IDE while testing!

That’s it for deploying and calling your R model.


Exploring the Deep Learning Framework PyTorch

Anyone who is interested in deep learning has likely gotten their hands dirty at some point playing around with TensorFlow, Google’s open source deep learning framework. TensorFlow has a lot of benefits, like wide-scale adoption, deployment on mobile, and support for distributed computing, but it also has a somewhat challenging learning curve and is difficult to debug. It also doesn’t support variable input lengths and shapes due to its static graph architecture, unless you use external packages. PyTorch is a new deep learning framework that solves a lot of those problems.

PyTorch is only in beta, but users are rapidly adopting this modular deep learning framework. PyTorch supports tensor computation and dynamic computation graphs that allow you to change how the network behaves on the fly, unlike the static graphs used in frameworks such as TensorFlow. PyTorch also offers modularity, which makes it easier to debug and see inside the network. For many, PyTorch is more intuitive to learn than TensorFlow.

This talk will objectively look at PyTorch and why it might be the best fit for your deep learning use case. We’ll also look at use cases that showcase why you might want to consider using TensorFlow instead.

Read More…

Investigating User Experience with Natural Language Analysis

User experience and customer support are integral to every company’s success. But it’s not easy to understand what users are thinking or how they are feeling, even when you read every single user message that comes in through feedback forms or customer support software. With natural language processing and machine learning techniques, it becomes somewhat easier to understand trends in user sentiment, the main topics discussed, and anomalies in user message data.

A couple of weeks ago, we gave a talk about investigating user experience with natural language analysis at Sentiment Symposium and thought we’d share the talk, along with the speaker notes for anyone who is interested.

Read More…