All posts by Jon Peck

Accelerate MLOps: using CI/CD with machine learning models

Continuous Integration and Continuous Deployment (CI/CD) are key components of any mature software development environment. During CI, newly added code is merged into the codebase, kicking off builds and automated testing. If all tests succeed, the CD phase begins, deploying the changes automatically to production. In this way, developers can release changes to production simply by committing to, or merging into, the proper branch in their version control system.

Developers have a great deal of flexibility in how they build this pipeline, thanks to the wide variety of open, interoperable platforms for version control and CI/CD. The same is not always true in the world of machine learning: it can be difficult to properly version, track, and productionize machine learning models.

Challenges of CI/CD in machine learning

Some existing services provide this functionality effectively, but lock data scientists into a black-box silo where their models must be trained, tracked, and deployed on a closed technology stack. Even when open systems are available, they do not always interoperate cleanly, forcing data scientists to undergo a steep learning curve or bring in specialists to build novel deployment tools.

At Algorithmia, we believe ML teams should have the freedom to use any training platform, language, and framework they prefer, then easily and quickly deploy their models to production. To enable this, we provide CI/CD workflows for Jenkins and GitHub Actions that work out of the box but can be easily adapted to just about any CI/CD tool. These workflows continuously deploy your ML models into our scalable model-hosting platform, whether it runs in our public marketplace or in your own private-cloud or on-prem cluster.

This is made possible by our Algorithm Management API, which provides a code-only solution for deploying your model as a scalable, versioned API endpoint. Let’s take a look at a typical CI/CD pipeline for model deployment:

CI/CD workflow

The process kicks off when you train your initial model or modify the prediction code—or when an automatic process retrains the model as new data becomes available. The files are saved to network-available storage and/or a version control system such as Git. This triggers any tests that must be run, then kicks off the deployment process. Ideally, your new model will be live on your production servers within seconds or minutes, as a readily available API endpoint that apps and services can call. The endpoint should also support versioning, so dependent apps and services can access older versions of your model as easily as the latest one.

Algorithmia’s CI/CD tools provide the latter stages of that workflow: detecting the change in your saved model or code, and deploying your new model to a scalable, versioned Algorithmia API endpoint (an “algorithm” in our terminology). These are drop-in configurations: the only changes you need to make are in a single Python script, which specifies the settings to use (e.g. endpoint name and execution language) and which files to deploy.
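
As a rough illustration, a deploy script built on the Algorithm Management API might look something like the sketch below. The endpoint name is hypothetical, and the exact create/publish fields can vary by account and cluster, so treat this as a starting point rather than a canonical configuration:

import Algorithmia

client = Algorithmia.client("YOUR_API_KEY")
algo = client.algo("your_username/fraud_detector")  # hypothetical endpoint name

# Create the endpoint on the first deploy; field names follow the Algorithm
# Management API, but adjust them to your cluster's available settings
algo.create(
    details={"label": "Fraud Detector"},
    settings={
        "language": "python3-1",
        "source_visibility": "closed",
        "license": "apl",
        "network_access": "full",
        "pipeline_enabled": True,
        "environment": "cpu",
    },
)

# ...commit the model file(s) and prediction code to the algorithm's repo...

# Publish a new semantic version so dependent apps can pin to it
algo.publish(version_info={"version_type": "minor", "release_notes": "Automated CI/CD deploy"})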

If you’re using Jenkins or GitHub Actions, simply clone and customize the appropriate configuration. If you prefer a simple, notebook-driven deploy, check out our Jupyter Notebook example. If you’re using another tool, it should be fairly simple to adapt the examples, or you can contact us to request new ones!

Chaining machine learning models in production with Algorithmia

Workflow showing the tools needed at each stage

In software development, it makes sense to create reusable, portable, self-contained modules that can seamlessly plug into any application. As the old adages insist: rely on modular design, don’t repeat yourself (DRY), and write once, run anywhere. The rise of API-first design, containerization, and serverless functions has taken these lessons even further—allowing individual modules to be developed in separate languages but executed from anywhere in any context.

To reach its full potential, machine learning must follow the same principles. It’s important to create reusable abstractions around your models, keep them in a well-documented and searchable catalog, and encourage model reuse across your organization.

During model training, techniques such as transfer learning begin to address this need; but how can we benefit from reuse of shared models and utilities once they are already in production?

Architectural principles

Design with abstraction in mind: while you may be building a model for a specific, constrained business purpose, consider how it might be used in other contexts. If it only takes singular inputs, for instance, could you provide a simple wrapper to allow batches of inputs to be passed in as a list? If it expects a filename to be passed in, should you also allow for URLs or base64-encoded input?
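
For instance, a minimal batch-friendly wrapper might look like the sketch below, where predict is a hypothetical stand-in for your model’s single-input scoring function:

def predict(item):
    # hypothetical single-input scoring function for your model
    return {"input": item, "score": 0.0}

def apply(input):
    # accept either one item or a list of items, so callers never have to loop
    if isinstance(input, list):
        return [predict(item) for item in input]
    return predict(input)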

Document and centrally catalog your models: once you’ve put in the hours necessary to conceive and train a model, move it off your laptop and into a central repository where others can discover it. Provide simple, clean documentation, which describes the purpose, limitations, inputs, and outputs of the model.

Host your models in a scalable, serverless environment: downloading is tedious, limiting, and resource-wasteful. Instead, allow other developers to access your model directly via an API. This way, they’ll only need to add a few simple lines of code to their application, instead of duplicating your entire model and associated dependencies. Host that API endpoint in a serverless environment so it can scale indefinitely and satisfy calls from any number of applications.

Search for existing endpoints before creating your own: there’s no need to build your own code from scratch, or even add another large dependency to your project, if the functionality is already provided by an API you can call. By thinking API-first, you’ll decrease your own module’s weight while minimizing technical debt and maintenance effort.

Design with abstraction in mind: consider how [a model] might be used in other contexts.

Model reuse and scaling

Algorithmia’s public model marketplace and Enterprise AI Layer have been designed with these principles in mind. Every model is indexed in a central, searchable catalog (with the option for individual or team-level privacy) with documentation and live sample execution, so developers can understand and even live-test the model before integrating it into their codebase.

Every model is run in Algorithmia’s scalable serverless environment and automatically wrapped by a common API, with cut-and-paste code samples provided in any language. There is no need to dig through sprawling API documentation or change patterns based on which model is called: integrating a Java deep-learning model into a Python server feels and acts as seamless as calling a local method. Running an R package from frontend JavaScript is just a simple function call.

Screenshot from algorithmia.com of an R package running as a function call.

Chaining models

The benefits of Algorithmia’s design extend beyond executing models from end-user applications: it is equally simple to call one model from another model, a process known as model chaining or production model pipelining (not to be confused with training pipelines).

The core of this is the .pipe() call. UNIX users will already be familiar with the pipe “|” syntax, which sends the output of one application into another; on Algorithmia, .pipe() sends input into an algorithm (a model hosted on Algorithmia), and can be used to send the output of one model directly into another model or into a hosted utility function. For example, suppose we have a model called “ObjectDetection” that recognizes objects in a photo, a utility function called “SearchTweets” that searches Twitter by keyword, and a model called “GetSentiment” that uses NLP to analyze the sentiment of text. We can then write a line of code very similar to:

GetSentiment.pipe( SearchTweets.pipe( ObjectDetection.pipe(image).result ).result )

This runs an image through ObjectDetection, then sends the names of detected objects into SearchTweets, then gets the sentiment scores for the matching tweets.

Let’s implement this as an actual model pipeline, using the Algorithmia algorithms ObjectDetectionCOCO, AnalyzeTweets, UploadFileToCloudinary, and GetCloudinaryUrl. We’ll extend it a bit by picking one of the top sentiment-ranked tweets, overlaying the text on top of the image, and sending that image over to Cloudinary’s CDN for image hosting. Our full code looks something like this:

Object detection and tweet analysis chain code snippet

Line-by-line, here are the steps:

  1. Create a client for communicating with the Algorithmia service
  2. Send an image URL into ObjectDetectionCOCO v. 0.2.1, and extract all the labels found
  3. Search Twitter for tweets containing those labels via AnalyzeTweets v. 0.1.3, which also provides sentiment scores
  4. Sort the tweets based on sentiment score
  5. Upload the original image to Cloudinary
  6. Overlay the top-ranked tweet’s text on top of the image in Cloudinary’s CDN
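
In Python, those six steps might look roughly like the sketch below; the algorithm namespaces (beyond the names and versions mentioned above), the input formats, and the response field names are illustrative assumptions:

import Algorithmia

# 1. Create a client for communicating with the Algorithmia service
client = Algorithmia.client("YOUR_API_KEY")
image_url = "https://example.com/photo.jpg"

# 2. Detect objects in the image and extract the labels found
labels = client.algo("deeplearning/ObjectDetectionCOCO/0.2.1").pipe(image_url).result["labels"]

# 3. Search Twitter for tweets containing those labels (includes sentiment scores)
tweets = client.algo("nlp/AnalyzeTweets/0.1.3").pipe({"query": " ".join(labels)}).result

# 4. Sort the tweets by sentiment score
tweets.sort(key=lambda t: t["overall_sentiment"], reverse=True)

# 5. Upload the original image to Cloudinary
upload = client.algo("util/UploadFileToCloudinary").pipe(image_url).result

# 6. Overlay the top-ranked tweet's text on the image via Cloudinary's CDN
meme_url = client.algo("util/GetCloudinaryUrl").pipe(
    {"public_id": upload["public_id"], "overlay_text": tweets[0]["text"]}
).result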

Now, with just 6 lines of code, we’ve chained together two ML models and two external services to create a fun toy app! But let’s go further, making this an API of its own, so other developers can make use of the entire pipeline in a single call. We head to https://algorithmia.com (or to our own privately-hosted Enterprise instance) and click Create New Algorithm. Then place the same code into the algorithm body:

create new algorithm on algorithmia.com with memegenerator.py
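
The algorithm body follows Algorithmia’s convention of exposing an apply() entry point; a minimal skeleton, assuming the pipeline code sketched above:

import Algorithmia

client = Algorithmia.client()  # inside a hosted algorithm, no API key is needed

def apply(input):
    # input: an image URL; run the same six-step pipeline shown earlier
    # (detect, search, rank, upload, overlay) and return the finished meme's URL
    ...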

After we publish this, any other user will be able to make use of this pipeline by making a single function call, from any language:

making pipelines available for use by others
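
From Python, for instance, the whole pipeline is now one call (the API key and input URL are placeholders):

import Algorithmia

client = Algorithmia.client("YOUR_API_KEY")
meme_url = client.algo("jpeck/MemeGenerator").pipe("https://example.com/photo.jpg").result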

You can try this out yourself, and even inspect the source code (enhanced with some overlay formatting and random top-N tweet selection) at https://algorithmia.com/algorithms/jpeck/MemeGenerator!

Going further

This toy example was fun to develop, but every industry has its own specific needs and workflows that can be improved with model chaining. For a few more model-chaining examples, read how to:

Or, explore our Model Pipelining whitepaper which addresses the business-level benefits of model pipelining within your enterprise.

Thanks for taking the time to explore with Algorithmia; we look forward to seeing what great model pipelines you dream up!

Building an ML-enabled fullstack application with Vue, Flask, Mongo, and Algorithmia

Full stack enabled application with Algorithmia

Are you an experienced fullstack developer looking to bring machine learning to your apps? Or an ML expert who wants to build a website to show off your models? Either way, the process of bringing AI to applications can be laborious and confusing, but it doesn’t have to be!

Algorithmia has created a complete end-to-end tutorial to demonstrate how you can quickly build a modern ML-enabled web application using the following popular technologies: Vue.js on the frontend, Flask on the backend, MongoDB for data storage, and Algorithmia for serverless ML.

Course Specs

In our walkthrough example, we start from ground zero, showing you how to install and connect each of these technologies. From there, we build up each layer of the application, writing our backend logic, building out the presentation layer, and connecting to powerful serverless ML algorithms. 

By the end of the walkthrough, you’ll have an app skeleton for managing user profiles, enhanced by nudity-detection algorithms and auto-cropping models to create safe, automatic profile images. Use this as the basis for your next AI-powered app or take your newly acquired expertise and apply it to your own project with your favorite tech stack.
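
As a taste of that ML layer, screening an uploaded profile image might be a single call, as in the sketch below; the algorithm namespace and response fields are assumptions, so check the marketplace listing for the current nudity-detection algorithm:

import Algorithmia

client = Algorithmia.client("YOUR_API_KEY")

# Namespace and response fields are assumptions; see the marketplace listing
verdict = client.algo("sfw/NudityDetectioni2v").pipe("https://example.com/profile.jpg").result
if not verdict.get("nude", False):
    print("Image approved for the profile")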

screenshot from within Algorithmia application

You can continue building out your app by adding any of Algorithmia’s 9,000+ serverless functions, or build your own ML pipelines right on the Algorithmia platform, connecting multiple powerful components together to create complex workflows callable by your app or service. 

Detect objects in an image and then search Twitter for relevant quotes, ranking them by sentiment score; build a roommate-finder or dating tool that ensures stable matchups; or automatically detect age, gender, and even emotion in user profiles, to name only a few examples. You can also build your own machine learning model to work standalone or in combination with any algorithm on the platform.

Visit Algorithmia’s Learning Center

Ready to jump in? Start the free, interactive course today: Building a Fullstack App with Algorithmia. It is just one of the course offerings in Algorithmia’s new Learning Center.

Check back often as the Learning Center is always growing. Explore dozens of free courses and acquire skills to improve your dev capabilities. Right now you can learn how to add serverless ML to your applications, manage data, deploy your own ML models with hands-on Scikit-learn and TensorFlow walkthroughs, and a lot more!

The Learning Center is housed within algorithmia.com and offers training on using Algorithmia’s AI Layer—a machine learning model deployment and management platform. The AI Layer makes it easy to deploy models as scalable microservices, regardless of framework, language, or data source.

Enrich Data in Tableau with Machine Learning using Algorithmia

World map showing where earthquakes occur and the time of occurrence

Tableau combines the ease of drag-and-drop visual analytics with the flexibility to dynamically integrate data from analytical systems. Algorithmia lets analysts go even further, extending worksheets with machine learning (ML) and allowing for the execution of Java, Node.js, Python, R, Ruby, Rust, and Scala code directly from within Tableau. 

Take advantage of Algorithmia’s broad catalog of prebuilt algorithms or upload your own ML model or utility code into Algorithmia’s public or enterprise private cloud offerings, then integrate them right into your Tableau Worksheets.

In this blog, we’ll explore a concrete example to show you how to leverage Algorithmia algorithms in your Tableau workflow. 

How to Leverage Algorithmia Algorithms in Tableau

Let’s dive in with an example. The United States Geological Survey (USGS) provides an excellent database of global earthquake data. With Tableau, we can quickly display the location, time, and magnitude of these events on a map. But earthquakes that occur at night may carry a higher risk of injuries and fatalities, since it is harder to escape or take shelter in a collapsing building when visibility is low.

Time of day can act as a rough proxy for visibility, but a much better measure is the angle of the sun, which is affected by geolocation, date, and time of day. As an extreme example, the North Pole is sunless at high noon on December 21.

To determine visibility during an earthquake, we will enrich our worksheet using the SunMoonCalculator algorithm hosted on Algorithmia.com. This algorithm wraps SunCalc, a Node.js package; using TabPy and the Algorithmia client, we can easily connect it to our worksheet.

PART 1: Using Data From USGS, Create a Map in Tableau

  1. Download the data: The USGS maintains an excellent Earthquake Catalog API that we can use to retrieve earthquake event data, including the geolocation, date/time, and magnitude of each event. This data is available in a variety of formats, including GeoJSON, but for now we’ll use a simple CSV export. Let’s start by downloading a small initial dataset: all magnitude 6+ quakes in 2019 (you can also fetch it directly from the API, as sketched below).
  2. Open Tableau and create a new Worksheet: Select the menu item Data → New Data Source, and click “Text File.” Select the file you just downloaded and take a minute to explore the raw data. When you’re ready, click the “Sheet 1” tab in the bottom left to continue building the Worksheet.

Connect data source in Tableau

Earthquakes data USGS
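
If you’d prefer to script the download from step 1, the same dataset can be fetched straight from the Earthquake Catalog API; for example:

import urllib.request

# All magnitude 6+ events in 2019, exported as CSV
url = ("https://earthquake.usgs.gov/fdsnws/event/1/query"
       "?format=csv&starttime=2019-01-01&endtime=2020-01-01&minmagnitude=6")
urllib.request.urlretrieve(url, "earthquakes_2019_mag6.csv")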

  3. Assign geographic roles: Under “Measures” on the left, you should now see the fields “Longitude” and “Latitude.” Tableau automatically assigns these fields Geographic Roles. You can confirm this by clicking on the pill and picking “Geographic Role” to see how it was assigned.

Define latitude and longitude

  4. Convert dimensions: Click the drop-down menu next to “Longitude,” but this time, select “Convert to Dimension.” Repeat this for Latitude.

Convert to dimension

  5. Create a map: Double-click Longitude, then Latitude. Tableau will add these fields to Columns and Rows and automatically render the points on a geographic map.

Create a map

  6. Show size of quakes: To get a feel for the size of these quakes, drag the “Mag” measure onto the “Size” box under “Marks.” This should cause each point to size itself according to the magnitude of the earthquake, though the size differences won’t be particularly extreme since these are all magnitude 6-9 events.

Show earthquake size

PART 2: Locally Install and Connect to TabPy, Tableau’s Python Server

Tableau supports a set of functions that you can use to pass expressions to external services. One of these external services, Tableau Python Server (TabPy), makes it possible to use Python scripts in Tableau calculated fields. This can be run as a local server or installed centrally for your organization. For this example, we’ll use the local server.

  1. Download and install TabPy according to the instructions found at: https://github.com/tableau/TabPy/blob/master/README.md.
  2. Ensure that the Algorithmia library is available in the Python environment under which TabPy will be running (‘pip install algorithmia’ in the relevant Python env or venv).
  3. Start the TabPy server and make note of the port on which it is listening (e.g. the console will read “Web service listening on port 9004”).
  4. Return to Tableau and connect it to the TabPy server: from the menu, pick Help → Settings and Performance → Manage External Service Connections. Choose “TabPy/External API” and make sure the port matches the one shown in the prior step. Test the connection and click OK.

Connect to TabPy server

PART 3: Use Algorithmia to Enrich the View

Now that we have a way to execute Python code from Tableau, we can use Algorithmia’s Python client to run jhurliman/SunMoonCalculator on our data.

  1. If you do not already have a free Algorithmia account, sign up at algorithmia.com using promo code “tabpy.” This will put an initial $50 into your account, on top of the 5,000 free credits all users receive every month.
  2. On algorithmia.com, click on your avatar, then “Manage API Keys.” Copy your default API key beginning with “sim…”
  3. Back in Tableau, click on Analysis → Create Calculated Field: name it “Sun Altitude.”
  4. Paste the following code, replacing ALGORITHMIA_API_KEY with the key from step 2:
SCRIPT_INT("
import Algorithmia
import math
client = Algorithmia.client('ALGORITHMIA_API_KEY')
algo = client.algo('jhurliman/SunMoonCalculator/0.1.0')
if _arg1[0] and _arg2[0]:
    input = {'lat': _arg1, 'lon': _arg2, 'time': _arg3}
    response = algo.pipe(input)
    rads = response.result['sun_altitude']
    return int(math.degrees(rads))
else:
    return 0
",
ATTR([Latitude]),
ATTR([Longitude]),
ATTR([Time])
)

Before continuing, let’s examine the code to understand what’s going on. 

SCRIPT_INT() is Tableau’s wrapper function for executing Python code via TabPy; there are others, such as SCRIPT_REAL(), but in this case we’ll be returning an integer. The first argument is a String containing the actual Python that should be run. The remaining arguments—ATTR([Measure])—extract data from our Worksheet and pass it into Python as _arg1, _arg2, etc. All passed arguments are provided as List objects to the Python script.

Inside the Python code, we begin by bringing in the Algorithmia client library (as well as the standard math library). Next, we construct an Algorithmia Client instance using our user-specific API Key, which can be used to call Algorithmia functions. Then, we create an Algorithm instance, which is a callable reference to a function hosted on Algorithmia…in this case, jhurliman/SunMoonCalculator.

Then, after checking that we’ve been provided non-empty inputs, we assemble the argument into a single dict corresponding to the expected input format shown in jhurliman/SunMoonCalculator’s documentation, and pass this input to the Algorithm via .pipe(input).

All responses from Algorithmia’s functions contain a .result and a .metadata property, so we’ll descend into the .result and grab the ‘sun_altitude’ value (again, as shown in jhurliman/SunMoonCalculator’s documentation). This is provided in radians, but Tableau renders integers a bit faster, so we’ll convert to degrees and truncate the return value to a whole integer.

  5. Click OK to save the script, then mouseover the “Sun Altitude” measure on the left and click the drop-down arrow. Pick “Convert to Continuous.” This lets Tableau know the resulting angles represent a range of numbers, so it will colorize them on a continuous range in the next step.
  6. Now that we have a way of calculating sun altitude from Lat/Long/Time values, we want to display them on our map. Tableau makes this easy: just drag “Sun Altitude” onto Color. Wait for the calculations to complete.
  7. Our points are now colorized according to time of day, but the colors aren’t quite right. Click Color → Edit Colors, pick “Sunrise-Sunset Diverging,” and check “use full color range.”

Adjust color

  8. If all has gone well, your map points will now be blue for nighttime (very negative) values, yellow near dawn/dusk (values near zero), and red for midday (highly positive):

Time of day plotted by color

PART 4 (optional): Making the Integration More Secure, Efficient, and Ergonomic

While this method of embedding scripts is functional, it has a few flaws: the API Key is exposed inside the Worksheet, the Algorithmia client is re-created on each execution, and the script itself is longer than necessary. Fortunately, we can resolve all these problems by making use of a powerful feature in TabPy—deployed functions.

Open up a Python environment, modify the following code to use your ALGORITHMIA_API_KEY (and your local TabPy server port if not 9004), and run it:

import Algorithmia
import tabpy_client

ALGORITHMIA_API_KEY = 'YOUR_API_KEY'  # replace with your key, beginning with "sim..."
TABPY_SERVER_URL = 'http://localhost:9004/'
DEBUG = True

def algorithmia(algorithm_name, input):
    if DEBUG: print("algorithm: %s\ninput: %s\n" % (algorithm_name, input))
    try:
        client = Algorithmia.client(ALGORITHMIA_API_KEY)
        algo = client.algo(algorithm_name)
        result = algo.pipe(input).result
    except Exception as x:
        if DEBUG: print(x)
        raise Exception(str(x))
    if DEBUG: print("result: %s" % result)
    return result

tabpy_conn = tabpy_client.Client(TABPY_SERVER_URL)
tabpy_conn.deploy('algorithmia', algorithmia, 'Run a function on Algorithmia: algorithmia(algorithm_name, input)', override=True)

Also note the DEBUG value: keeping this on will print some useful debugging information to your TabPy console, but you’ll probably want to re-run this with DEBUG=False in your production environment.

Head back to Tableau and change your Sun Altitude function to read:

SCRIPT_INT("
import math
algoname = 'jhurliman/SunMoonCalculator/0.1.0'
if _arg1[0] and _arg2[0]:
    input = {'lat': _arg1, 'lon': _arg2, 'time': _arg3}
    rads = tabpy.query('algorithmia', algoname, input)['response']['sun_altitude']
    return int(math.degrees(rads))
else:
    return 0
",
ATTR([Latitude]),
ATTR([Longitude]),
ATTR([Time])
)

Now, the API Key is embedded in your TabPy server instead of being exposed in your script, Algorithmia endpoints are easier to call because they don’t require manual construction of the Algorithmia client each time, and errors will be logged to the TabPy console. If desired, you can modify this code further to use a single global Client instance, or to acquire the API Key from the environment instead of hard-coding it into the deployed function.
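
For example, the environment-variable variant is a one-line change in the deployed-function script:

import os

# Read the key from the TabPy server's environment instead of hard-coding it
ALGORITHMIA_API_KEY = os.environ["ALGORITHMIA_API_KEY"]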

WRAP UP

Tableau allows the rapid development and deployment of visualizations so you can take insights generated by advanced analytics and put them into the hands of decision makers. With Algorithmia, you can take those visualizations to another level, enhancing them with machine learning models developed in-house or provided by independent developers on the Algorithmia.com marketplace.

Note: the code samples included in this post can also be found at: https://github.com/algorithmiaio/integrations/tree/master/Tableau.

Algorithmia’s AI Layer makes it easy to deploy models as scalable services, regardless of framework, language, or data source. The AI Layer empowers organizations to:

  • Deploy models from a variety of frameworks, languages, and platforms.
  • Connect popular data sources, orchestration engines, and step functions.
  • Scale model inference on multiple infrastructure providers.
  • Manage the ML life cycle with tools to iterate, audit, secure, and govern.

Developer Spotlight: Benjamin Kyan

Developers are the heart of Algorithmia’s marketplace: every day, you create and share amazing algorithms, build upon and remix each others’ work, and provide critical feedback which helps us to improve as a service. Thanks to you, we have over 4,500 algorithms and 60,000 individuals working together on the Algorithmia platform — making AI accessible to any developer, anywhere, anytime.

We owe you a huge debt, and try to give back a little with programs such as our free-forever promise which delivers 5k credits monthly to every user. But it’s also important to publicly recognize individuals who contribute to the ecosystem, so today we’re shining a spotlight on Benjamin Kyan.

Ben’s algorithms cover a wide range, from utility algos such as URL-2-PDF through deep learning tools like StyleThief. We caught up with him to learn a bit more about how he got into programming, and what piques his interest.