AI LAYER

The fastest and most powerful way to operationalize your machine learning life cycle

Algorithmia AI Layer Life Cycle

Deploy your models.

Get your models into production and ready to scale with ease.
Git-push your pre-trained model, function, or algorithm, and the AI Layer automatically creates a versioned, permissioned, scalable API endpoint any application or model can call.
Publishing a Model

Deploy with minimal effort

Git-push your trained model or upload with our GUI and we’ll create a versioned, permissioned, scalable API endpoint that’s ready in seconds.
Algorithm Versions List

Version everything

Automatic model versioning enforces governance and consistency. Call any version of a model at any time.
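
As a rough sketch of what calling one of these versioned endpoints looks like from the Algorithmia Python client (the algorithm path, version, and input below are hypothetical):

    import Algorithmia

    # An API key scoped to your account or organization.
    client = Algorithmia.client("YOUR_API_KEY")

    # Pin a specific published version of the model (org, name, and version are hypothetical).
    algo = client.algo("your_org/churn_classifier/1.2.0")

    # Call the endpoint exactly as any application would.
    response = algo.pipe({"customer_id": 42, "tenure_months": 18})
    print(response.result)    # model output
    print(response.metadata)  # e.g. content type and execution duration

Pinning the version in the path is what gives callers the consistency described above.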

Work with any popular framework

TensorFlow or PyTorch, scikit-learn or MXNet, or hundreds of other combinations. Choose the frameworks and workbenches you need.
TensorFlow
Keras
scikit-learn
PyTorch
Caffe
OpenCV

Write in multiple languages

Python, Scala, R, Java, JavaScript, Ruby, and more. Choose the language that works best for each project.
Python
JavaScript
R
Ruby
Scala
Java

Use GPUs or CPUs

Tune your ML portfolio for performance and cost, running each model on high-performance GPUs where you need it or inexpensive CPUs where you don’t.
CPU
GPU

Model pipelining

Build only what you need and reuse modular ML components across multiple jobs. Optimize performance by caching frequently used components and running each model on the right infrastructure for the job.
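
One rough sketch of that pipelining pattern with the Algorithmia Python client: an algorithm's apply function can call other hosted algorithms, so each stage stays a modular, reusable component (the two-stage split and algorithm paths below are hypothetical):

    import Algorithmia

    # When this runs as a hosted algorithm, the client can typically be created
    # without hard-coding an API key.
    client = Algorithmia.client()

    def apply(input):
        # Stage 1: a shared preprocessing component (hypothetical path).
        cleaned = client.algo("your_org/text_cleaner/0.3.0").pipe(input).result
        # Stage 2: a classifier that other pipelines can reuse as well (hypothetical path).
        return client.algo("your_org/intent_classifier/2.1.0").pipe(cleaned).result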

Advanced tech stack

Leverage the flexibility, compatibility, and performance of Git, Docker, Kubernetes, and other industry-leading infrastructure standards.
GitHub
Docker
Kubernetes

Deployment coaching

Move ML off your laptop and into production with Model Deployment Coaching from a deployment engineer.
  • Training on how to get a model into production
  • Complimentary AI Layer credits
  • Use case definition and templates
  • Model testing and benchmarking
  • A personalized production plan
Enterprise

White-glove engineering support

All AI Layer customers can request support via email or chat. Enterprise customers receive onramp training, access to a Learning Management System, a support SLA, a dedicated account manager, and priority support queueing.
Enterprise

Model portfolio management

Centralize all your ML assets in a permissioned, searchable repository that includes documentation and live previews for each version of each model. Test the models inline and integrate them using preconfigured code snippets for almost any language.

Connect to your data sources and training platforms.

Connect to data in AWS, Azure, Google Cloud Platform, and SaaS-based Dropbox. With connectors for MS SQL Server, MySQL, and PostgreSQL, you can put your on-prem data to work without the cost and hassle of cloud migration. And if you need to access data stores outside of our connector library, our Data Portal makes it easy to build direct connections.
Related Resources: Connecting Your Data
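
A rough illustration of reading and writing connected data from the Python client (the collection and file names below are hypothetical; hosted collections use the data:// scheme, and configured connectors expose similarly addressed URIs):

    import Algorithmia

    client = Algorithmia.client("YOUR_API_KEY")

    # Read a file from a hosted data collection (hypothetical path).
    features_csv = client.file("data://your_org/training_data/features.csv").getString()

    # Write results back for downstream jobs to pick up.
    client.file("data://your_org/training_data/scores.json").put('{"auc": 0.91}')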

Your data, your cloud

Connect to data in AWS, Azure, Google Cloud Platform, and Dropbox.
AWS S3
Azure Blob
GCS
Dropbox

Integrate with your current workflows

Enable continuous integration and continuous delivery pipelines programmatically, via APIs, without interacting with the Algorithmia UI.
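
As a minimal sketch of driving that CI/CD flow programmatically, assuming a git remote named algorithmia and a hypothetical algorithm path (your pipeline may also need an explicit publish step):

    import subprocess
    import Algorithmia

    # Push the algorithm repo; the platform builds the new version on push.
    subprocess.run(["git", "push", "algorithmia", "main"], check=True)

    # Smoke-test the API endpoint before promoting the release (path and input are hypothetical).
    client = Algorithmia.client("CI_API_KEY")
    response = client.algo("your_org/churn_classifier").pipe({"customer_id": 42})
    assert response.result is not None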
AWS SageMaker
Jupyter
Kubeflow
MLflow
TensorFlow Hub
Spark
H2O.ai
And more

Deploy models from most data science workbenches

SageMaker, Kubeflow, TensorFlow Hub, Jupyter, MLflow, Spark, H2O.ai, and more…
Enterprise

Work with custom data sources

If you need to access data stores outside of our connector library, our Data Portal makes it easy to build direct connections.
Enterprise

Leverage your enterprise data sources

Easily connect to any database with custom data connectors, so you can put your on-prem data to work without the hassle of cloud migration.

Scale effortlessly.

The AI Layer’s predictive scaling keeps compute resources running at peak efficiency, and intelligent caching and low latency let you serve custom multi-model pipelines on demand under any load. There’s nothing to configure or provision. Just deploy your models and we handle the rest.
Custom Prometheus/Grafana Charts (dashboard screenshot)

High-scale model serving

Deploy and get back to work. The AI Layer automates the processes and infrastructure necessary to serve any number of models and meet any demand with patent-pending serverless technology that runs across CPUs and GPUs.

Trusted cloud

Serve your models on the world’s first and largest public ML platform and marketplace, trusted by tens of thousands of developers running thousands of models. Enterprises can run the AI Layer behind their firewall or have Algorithmia host.
Scaling Graph

Scale elastically

Scale across CPUs and GPUs to meet demand, then release unused resources to reduce costs.
Enterprise

Advanced scheduling

Maximize hardware investments. Densely scheduled jobs keep your infrastructure running at peak capacity.
Enterprise

Run on any compute

Public cloud. Private cloud. Even fully air-gapped. Serve your models on the right compute platform for your business.
AWS
Private cloud
Azure
On-prem
GCP (Beta)
Air-gapped
Enterprise

Custom monitoring

Monitor and manage the health and performance of your ML system, from system-wide metrics down to individual models, users, or API calls.

Manage the ML life cycle.

Automate everything that happens after you deploy your model.

ML is a process, and the AI Layer automates the hassle of managing iterations. Model versioning, discovery, evaluation, and sharing are all built into the system, as is detailed logging. When you need an audit trail, you can access every API call, so you always know which user called which version of which algorithm and with what data.
Model Overview

Catalog your ML portfolio

Centralize all your ML assets in a single, searchable repository that includes documentation and live previews for each version of each model.
Model Evaluation Results (dashboard screenshot)

Evaluate performance and accuracy

Run multiple models at once on the same infrastructure and dataset, comparing results, errors, drift, and performance side by side, so you’ll always know you’re running the best model for any job.
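
A simple version of that side-by-side comparison, sketched with the Python client (the model paths, versions, and sample records below are hypothetical):

    import Algorithmia

    client = Algorithmia.client("YOUR_API_KEY")

    # Two published versions of the same model, evaluated on the same sample.
    candidates = ["your_org/fraud_model/1.4.0", "your_org/fraud_model/2.0.0"]
    sample = [{"amount": 120.0, "country": "DE"}, {"amount": 9800.0, "country": "US"}]

    for path in candidates:
        algo = client.algo(path)
        predictions = [algo.pipe(record).result for record in sample]
        print(path, predictions)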

Increase efficiency with pipelining

Build multi-model workflows of independent, modular components. Choose the best components for any job and maximize code reuse across your organization.

Security

Your data and operations are always secure. Every API call runs in its own container and memory space so there’s no risk of accidental data sharing. Data is always encrypted, and we provide full support for any encryption offered by other infrastructure providers.

Govern model access

Use your organization's auth system to manage users and access on the AI Layer.
Enterprise

Logging and auditability

The AI Layer provides a comprehensive audit trail, logging every API call. Know which user called which version of which algorithm, and capture additional data if necessary.
Enterprise

Custom Auth

Authentication and authorization on the AI Layer are fully customizable, with support for SSO and a variety of directories and identity platforms, including OpenID Connect, LDAP, and more.

The AI Layer allows you to pick and choose the best model and infrastructure for the job.


AI Layer Architecture

Deploy the AI Layer in one of three ways.

Algorithmia.com

The AI Layer as a service.

No setup—deploy in minutes.
Infinite scalability on demand.
Pay for only what you use.
Starts at $0.0001 per compute second

Managed

The AI Layer on a virtual private cloud.

Enterprise-grade management tools.
API access to manage all major functions of the AI Layer.
Best-in-class security, governance, and reporting.

Enterprise

The AI Layer behind your firewall—on public cloud, private cloud, or on-prem.

All the features of the managed AI Layer.
Plus complete control of your infrastructure.