We're excited to announce a partnership between Arthur and Algorithmia. Learn how to get started with the Arthur and Algorithmia integration, so you can complete your AI stack.

Deploy, serve, monitor, and maintain AI at scale with Arthur and Algorithmia

Editor’s note: Today’s post is a guest post by Adam Wenchel, CEO of Arthur. It was originally published on the Arthur blog.

At Arthur, we are obsessed with making sure that our customers have complete AI observability for all of their machine learning models, no matter where they are deployed. But we know that production AI systems need more than just monitoring—and that’s why today, we are excited to announce our partnership with Algorithmia, the industry leader in machine learning operations (MLOps) and model management.

Algorithmia’s enterprise MLOps platform manages all stages of the machine learning lifecycle within existing operational processes, enabling you to put models into production quickly, securely, and cost-effectively. Unlike inefficient and expensive do-it-yourself MLOps management solutions that lock users into specific technology stacks, Algorithmia automates ML deployment, optimizes collaboration between operations and development, leverages existing SDLC and CI/CD systems, and provides enterprise-grade security and governance across the lifecycle.

Arthur is the proactive model monitoring platform that gives organizations the confidence and peace of mind that their AI deployments are performing at their peak. Arthur provides a layer of performance monitoring, algorithmic bias detection, and explainability, even for black-box models, so data science teams can detect, diagnose, and fix any issues in production.

We are working with the Algorithmia team to make it easier than ever to use their top-notch deployment, serving, and management tools and our leading monitoring capabilities. Together, Arthur and Algorithmia provide a powerful set of tools that give you complete control over your production AI. This blog post will demonstrate just how easy it is to get started with the Arthur and Algorithmia integration, so you can complete your AI stack.

Arthur and Algorithmia: Helping you reach AI maturity faster

Algorithmia’s 2021 enterprise trends in machine learning report revealed that 56% of organizations struggle with governance, security, and auditability issues—making it the #1 challenge for model deployment and management. Arthur extends Algorithmia’s monitoring capabilities to give you rich insights into model performance, data integrity, and AI fairness, helping you manage AI governance more completely. If you’re already using Algorithmia for MLOps, it takes just minutes to add Arthur monitoring. With our integration, Algorithmia customers can take full advantage of advanced ML lifecycle management workflows enabled by Arthur’s runtime monitoring capabilities—think automated refits based on data drift, horse racing (A/B testing) different versions of a model, and more.

Let’s walk through what it looks like to use Arthur and Algorithmia together to monitor and maintain AI at scale.

Deploy your model behind an endpoint with Algorithmia

We’ll use Algorithmia to serve our trained model behind an Algorithmia endpoint. Here is an example Algorithmia project where we’re doing just that: we have a saved model (a pkl file), and we load this model into memory. Our implementation of Algorithmia’s apply() function simply invokes the model’s predict_proba() method on any new inputs to the endpoint.
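A minimal sketch of such an algorithm module is below. The file paths and names are illustrative, and a stub class stands in for the pickled scikit-learn classifier so the snippet is self-contained; a real project would load the saved pkl from an Algorithmia data collection instead.

```python
# Illustrative Algorithmia-style algorithm module (names are assumptions).
# In a real project the model would be unpickled from a data collection, e.g.:
#   model = joblib.load(client.file("data://<user>/<collection>/model.pkl").getFile().name)

class StubModel:
    """Stand-in for a pickled scikit-learn classifier with predict_proba()."""
    def predict_proba(self, rows):
        # Return a fixed two-class probability for each input row.
        return [[0.3, 0.7] for _ in rows]

# Load the model once, at algorithm load time, so every call reuses it.
model = StubModel()

def apply(input):
    """Algorithmia entry point: score incoming rows with the loaded model."""
    return model.predict_proba(input)
```

Calling `apply([[1.0, 2.0]])` here returns one probability pair per input row.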

This model is now deployed behind an endpoint that we can hit over REST, or with any of Algorithmia’s SDKs, which are available in several languages.
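Over REST, a call amounts to a POST with an API key in the Authorization header. The helper below only constructs the request pieces; the username, algorithm name, and key are placeholders, and you would send the result with any HTTP client (e.g., requests.post(url, headers=headers, data=body)).

```python
import json

API_BASE = "https://api.algorithmia.com/v1/algo"  # Algorithmia's REST base path

def build_scoring_request(username, algo_name, version, api_key, rows):
    """Build the URL, headers, and JSON body for calling an Algorithmia endpoint."""
    url = f"{API_BASE}/{username}/{algo_name}/{version}"
    headers = {
        "Authorization": f"Simple {api_key}",  # Algorithmia's simple-key auth scheme
        "Content-Type": "application/json",
    }
    body = json.dumps(rows)
    return url, headers, body
```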

Log your model inputs and outputs with ArthurAI

Next, let’s integrate this project with Arthur so we can monitor this model. Let’s assume we’ve already registered a model with Arthur, and will fetch this object by its model ID. We’ll import the ArthurAI Python library and set up a connection with the platform. We will use that connection to fetch our model by its ID.
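In code, that setup is just a few lines. This is a hedged sketch: the exact client API (the ArthurAI constructor arguments and get_model) may differ across arthurai SDK versions, the model ID is a placeholder, and the import is guarded so the module loads even where the SDK isn’t installed.

```python
try:
    from arthurai import ArthurAI  # Arthur's Python SDK (assumed client name)
except ImportError:
    ArthurAI = None  # SDK not installed; the call below would require it

MODEL_ID = "<your-arthur-model-id>"  # placeholder for the registered model's ID

def fetch_arthur_model(url, access_key, model_id):
    """Connect to the Arthur platform and fetch a registered model by its ID."""
    if ArthurAI is None:
        raise RuntimeError("The arthurai package is required for this step")
    connection = ArthurAI(url=url, access_key=access_key)
    return connection.get_model(model_id)
```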

The last step is to wrap our model’s predict_proba() method with the log_inferences decorator provided in the arthurai library. To do this, we define a simple function called model_predict() for this wrapping, and apply the decorator to that function.

Finally, our implementation of Algorithmia’s apply() function is simply to call our model_predict() wrapper for any inputs to the endpoint. Each time the endpoint is invoked, the model’s input and output are logged with ArthurAI for automated analytics and alerting.
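The wrapping pattern end to end can be sketched as follows. Note this log_inferences is a stand-in written for illustration; the real decorator ships with the arthurai library and sends each record to the Arthur platform rather than to an in-memory list, and the stub model again replaces the pickled classifier.

```python
import functools

logged_inferences = []  # stand-in for records sent to the Arthur platform

def log_inferences(func):
    """Minimal stand-in for arthurai's decorator: record each input/output pair."""
    @functools.wraps(func)
    def wrapper(rows):
        predictions = func(rows)
        logged_inferences.append({"inputs": rows, "outputs": predictions})
        return predictions
    return wrapper

class StubModel:
    """Stand-in for the pickled classifier served behind the endpoint."""
    def predict_proba(self, rows):
        return [[0.3, 0.7] for _ in rows]

model = StubModel()

@log_inferences
def model_predict(rows):
    """Wrapped prediction call: every invocation is logged for monitoring."""
    return model.predict_proba(rows)

def apply(input):
    """Algorithmia entry point: delegate to the wrapped predictor."""
    return model_predict(input)
```

After one call to `apply([[1.0, 2.0]])`, `logged_inferences` holds a single record pairing that input with its predicted probabilities.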

With a couple of extra lines of Python, we can easily add Arthur’s monitoring tools to any models we have deployed with Algorithmia.

Unlock model optimization, versioning, and fairness with ArthurAI and Algorithmia

The Arthur-Algorithmia integration enables you to deploy, serve, manage, and maintain your AI at scale with advanced monitoring and governance. Ready to get started today?

If you’re already using Algorithmia and Arthur, it takes minutes to implement the integration as demonstrated above. If you’re not already using Algorithmia, sign up to watch a demo today and learn how enterprise MLOps can accelerate your time-to-value for ML.

And if you’re not already using Arthur, get in touch with us to schedule a demo and learn more about how we can help you get started with proactive model monitoring.

Diego Oppenheimer