What You Should Know Before Deploying ML in Production – InfoQ.com

Key Takeaways

What should you know before deploying machine learning projects to production? There are four aspects of Machine Learning Operations, or MLOps, that everyone should understand first. These can help data scientists and engineers overcome limitations in the machine learning lifecycle, and even turn those limitations into opportunities.

MLOps is important for several reasons. First of all, machine learning models rely on huge amounts of data, and it is very difficult for data scientists and engineers to keep track of it all. It is also challenging to keep track of the different parameters that can be tweaked in machine learning models. Sometimes small changes can lead to very big differences in the results that you get from your machine learning models. You also have to keep track of the features that the model works with; feature engineering is an important part of the machine learning lifecycle and can have a large impact on model accuracy.

Once in production, monitoring a machine learning model is not really like monitoring other kinds of software such as a web app, and debugging a machine learning model is complicated. Models use real-world data for generating their predictions, and real-world data may change over time.

As it changes, it is important to track your model performance and, when needed, update your model. This means that you have to keep track of new data changes and make sure that the model learns from them.

I'm going to discuss four key aspects that you should know before deploying machine learning in production: MLOps capabilities, open source integration, machine learning pipelines, and MLflow.

There are many different MLOps capabilities to consider before deploying to production. First is the capability of creating reproducible machine learning pipelines. Machine learning pipelines allow you to define repeatable and reusable steps for your data preparation, training, and scoring processes. These steps should include the creation of reusable software environments for training and deploying models, as well as the ability to register, package, and deploy models from anywhere. Using pipelines allows you to frequently update models or roll out new models alongside your other AI applications and services.

You also need to track the associated metadata required to use the model and capture governance data for the end-to-end machine learning lifecycle. In the latter case, lineage information can include, for example, who published the model, why changes were made at some point, or when different models were deployed or used in production.

It is also important to notify and alert on events in the machine learning lifecycle: for example, experiment completion, model registration, model deployment, and data drift detection. You also need to monitor machine learning applications for operational and ML-related issues. Here it is important for data scientists to be able to compare model inputs at training time versus inference time, to explore model-specific metrics, and to configure monitoring and alerting on machine learning infrastructure.
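
As a concrete illustration, here is a minimal sketch of one common drift check: comparing the distribution of a feature at training time against the values arriving at inference time with a two-sample Kolmogorov-Smirnov test. The arrays and the alert threshold are hypothetical; real monitoring systems track many features and use more sophisticated statistics.

```python
import numpy as np
from scipy.stats import ks_2samp

# Hypothetical feature values: what the model saw in training vs. what
# arrives in production. Here we simulate a shift in the mean.
train_values = np.random.normal(loc=0.0, scale=1.0, size=5000)
live_values = np.random.normal(loc=0.3, scale=1.0, size=5000)

statistic, p_value = ks_2samp(train_values, live_values)
if p_value < 0.01:  # the alerting threshold is a project-specific choice
    print(f"Possible data drift detected (KS statistic = {statistic:.3f})")
```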

The second aspect that you should know before deploying machine learning in production is open source integration. Here, there are three different open source technologies that are extremely important. First, there are open source training frameworks, which are great for accelerating your machine learning solutions. Next are open source frameworks for interpretable and fair models. Finally, there are open source tools for model deployment.

There are many different open source training frameworks. Three of the most popular are PyTorch, TensorFlow, and Ray. PyTorch is an end-to-end machine learning framework, and it includes TorchServe, an easy-to-use tool for deploying PyTorch models at scale. PyTorch also has mobile deployment support and cloud platform support. Finally, PyTorch has C++ frontend support: a pure C++ interface to PyTorch that follows the design and architecture of the Python frontend.
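
To give a flavor of the framework, here is a minimal, hypothetical PyTorch sketch: a tiny network, one training step on synthetic data, and a TorchScript export of the kind TorchServe can serve. The model shape, data, and file name are all illustrative.

```python
import torch
from torch import nn

# Hypothetical two-layer classifier standing in for a real model.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a synthetic mini-batch.
features = torch.randn(32, 4)
labels = torch.randint(0, 2, (32,))
optimizer.zero_grad()
loss = loss_fn(model(features), labels)
loss.backward()
optimizer.step()

# TorchServe serves serialized models; TorchScript is one common route.
torch.jit.script(model).save("model.pt")
```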

TensorFlow is another end-to-end machine learning framework that is very popular in the industry. For MLOps, it has a feature called TensorFlow Extended (TFX) that is an end-to-end platform for preparing data, training, validating, and deploying machine learning models in large production environments. A TFX pipeline is a sequence of components which are specifically designed for scalable and high performance machine learning tasks.
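
A minimal sketch of a TFX pipeline follows, assuming TFX 1.x and a local directory of CSV files; it only ingests data and generates statistics and a schema, whereas a production pipeline would add components such as Trainer, Evaluator, and Pusher. The paths and pipeline name are placeholders.

```python
from tfx import v1 as tfx

# Ingest CSV files from a local directory (path is a placeholder).
example_gen = tfx.components.CsvExampleGen(input_base="data/")

# Compute statistics and infer a schema for data validation.
statistics_gen = tfx.components.StatisticsGen(
    examples=example_gen.outputs["examples"])
schema_gen = tfx.components.SchemaGen(
    statistics=statistics_gen.outputs["statistics"])

pipeline = tfx.dsl.Pipeline(
    pipeline_name="demo_pipeline",
    pipeline_root="pipeline_root/",
    components=[example_gen, statistics_gen, schema_gen],
)

# Run everything locally; production setups use an orchestrator instead.
tfx.orchestration.LocalDagRunner().run(pipeline)
```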

Ray is a distributed computing framework for machine learning, which contains several useful training libraries: Tune, RLlib, Train, and Datasets. Tune is great for hyperparameter tuning. RLlib is used for training reinforcement-learning (RL) models. Train is for distributed deep learning. Datasets is for distributed data loading. Ray has two additional libraries, Serve and Workflows, which are useful for deploying machine learning models and distributed apps to production.
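
As a small example of one of these libraries, the sketch below uses Ray Tune (assuming the Ray 2.x Tuner API) to minimize a toy quadratic in place of a real training objective; the search space and objective are purely illustrative.

```python
from ray import tune

# Toy objective: a quadratic standing in for a real training loop.
def objective(config):
    return {"loss": (config["x"] - 3.0) ** 2}

tuner = tune.Tuner(
    objective,
    param_space={"x": tune.uniform(-10.0, 10.0)},
)
results = tuner.fit()
print(results.get_best_result(metric="loss", mode="min").config)
```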

For creating interpretable and fair models, two useful frameworks are InterpretML and Fairlearn. InterpretML is an open source package that incorporates several machine learning interpretability techniques. With this package, you can train interpretable glassbox models and also explain blackbox systems. Moreover, it helps you understand your model's global behavior, or understand the reason behind individual predictions.
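
A short sketch of the glassbox workflow follows, training an Explainable Boosting Machine on synthetic scikit-learn data and viewing both global and local explanations; the dataset is purely illustrative.

```python
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import make_classification

# Synthetic dataset purely for illustration.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)

ebm = ExplainableBoostingClassifier()
ebm.fit(X, y)

show(ebm.explain_global())             # overall model behavior
show(ebm.explain_local(X[:5], y[:5]))  # reasons behind individual predictions
```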

Fairlearn is a Python package that provides metrics for assessing which groups are negatively impacted by a model and for comparing multiple models in terms of fairness and accuracy. It also supports several algorithms for mitigating unfairness in a variety of AI and machine learning tasks, under a range of fairness definitions.
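
The following minimal sketch shows the idea with Fairlearn's MetricFrame, using hypothetical labels, predictions, and a made-up sensitive attribute to compare accuracy across groups.

```python
import numpy as np
from fairlearn.metrics import MetricFrame
from sklearn.metrics import accuracy_score

# Hypothetical labels, predictions, and a sensitive attribute.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
group = np.array(["A", "A", "A", "B", "B", "B", "B", "A"])

mf = MetricFrame(metrics=accuracy_score, y_true=y_true,
                 y_pred=y_pred, sensitive_features=group)
print(mf.overall)   # accuracy over everyone
print(mf.by_group)  # accuracy per group, exposing disparities
```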

Our third open source technology is used for model deployment. When working with different frameworks and tools, you have to deploy models according to each framework's requirements. In order to standardize this process, you can use the ONNX format.

ONNX stands for Open Neural Network Exchange. ONNX is an open source format for machine learning models that supports interoperability between different frameworks. This means that you can train a model in one of the many popular machine learning frameworks, such as PyTorch, TensorFlow, or Ray, convert it into ONNX format, and then consume it in a different framework; for example, in ML.NET.

The ONNX Runtime (ORT) represents machine learning models using a common set of operators, the building blocks of machine learning and deep learning models, which allows a model to run on different hardware and operating systems. ORT optimizes and accelerates machine learning inferencing, which can enable faster customer experiences and lower product costs. It supports models from deep learning frameworks such as PyTorch and TensorFlow, as well as classical machine learning libraries such as Scikit-learn.

There are many popular frameworks that support conversion to ONNX. For some of these, such as PyTorch, ONNX format export is built in. For others, like TensorFlow or Keras, there are separate installable packages that handle the conversion. The process is straightforward: First, you need a model trained using any framework that supports export and conversion to ONNX format. Then you load and run the model with ONNX Runtime. Finally, you can tune performance using various runtime configurations or hardware accelerators.
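
Here is a minimal end-to-end sketch of that process using PyTorch's built-in exporter and ONNX Runtime; the model, input shape, and file name are hypothetical stand-ins for a real trained model.

```python
import torch
from torch import nn
import onnxruntime as ort

# Hypothetical tiny model standing in for any trained PyTorch model.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

# Step 1: export to ONNX using PyTorch's built-in exporter.
dummy = torch.randn(1, 4)
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["input"], output_names=["output"])

# Step 2: load and run the model with ONNX Runtime.
session = ort.InferenceSession("model.onnx",
                               providers=["CPUExecutionProvider"])
outputs = session.run(None, {"input": dummy.numpy()})
print(outputs[0])
```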

The third aspect that you should know before deploying machine learning in production is how to build pipelines for your machine learning solution. The first task in the pipeline is data preparation, which includes importing, validating, cleaning, transforming, and normalizing your data.

Next, the pipeline contains training configuration, including parameters, file paths, logging, and reporting. Then there are the actual training and validation jobs that are performed in an efficient and repeatable way. Efficiency might come from specific data subsets, different hardware, compute resources, distributed processing, and also progress monitoring. Finally, there is the deployment step, which includes versioning, scaling, provisioning, and access control.

Choosing a pipeline technology will depend on your particular needs; usually these fall under one of three scenarios: model orchestration, data orchestration, or code and application orchestration. Each scenario is oriented around a persona, who is the primary user of the technology, and a canonical pipeline, which is the scenario's typical workflow.

In the model orchestration scenario, the primary persona is a data scientist. The canonical pipeline in this scenario is from data to model. In terms of open source technology options, Kubeflow Pipelines is a popular choice for this scenario.
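
A minimal data-to-model sketch with the Kubeflow Pipelines v2 SDK might look like the following; the component bodies are placeholders for real data preparation and training logic.

```python
from kfp import compiler, dsl

@dsl.component
def prepare_data() -> str:
    # Placeholder for real import/validation/cleaning logic.
    return "clean-data"

@dsl.component
def train_model(data: str) -> str:
    # Placeholder for real training logic.
    return f"model-trained-on-{data}"

@dsl.pipeline(name="data-to-model")
def pipeline():
    data_task = prepare_data()
    train_model(data=data_task.output)

# Compile to a YAML spec that a Kubeflow Pipelines cluster can execute.
compiler.Compiler().compile(pipeline, "pipeline.yaml")
```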

For a data orchestration scenario, the primary persona is a data engineer, and the canonical pipeline is data to data. A common open source choice for this scenario is Apache Airflow.
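
For comparison, a minimal data-to-data DAG in Airflow 2.x might look like this; the task bodies and DAG id are placeholders.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    pass  # placeholder: pull data from a source system

def transform():
    pass  # placeholder: clean and reshape the data

with DAG("data_to_data", start_date=datetime(2024, 1, 1),
         schedule_interval=None, catchup=False) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform",
                                    python_callable=transform)
    extract_task >> transform_task  # transform runs after extract succeeds
```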

Finally, the third scenario is code and application orchestration. Here, the primary persona is an app developer. The canonical pipeline here is from code plus model to a service. One typical open source solution for this scenario is Jenkins.

As an example, consider a pipeline created in Azure Machine Learning. For each step, the Azure Machine Learning service calculates requirements for the hardware compute resources, OS resources such as Docker images, software resources such as Conda environments, and data inputs.

Then the service determines the dependencies between steps, resulting in a very dynamic execution graph. When each step in the execution graph runs, the service configures the necessary hardware and software environment. The step also sends logging and monitoring information to its containing experiment object. When the step completes, its outputs are prepared as inputs to the next step. Finally, the resources that are no longer needed are finalized and detached.

The final tool that you should consider before deploying machine learning in production is MLflow. MLflow is an open source platform for managing the end-to-end machine learning lifecycle. It contains four primary components that are extremely important in this lifecycle.

The first is MLflow Tracking, which tracks experiments to record and compare parameters and results. MLflow runs can be recorded to a local file, to a SQLAlchemy-compatible database, or remotely to a tracking server. You can log data for a run using Python, R, Java, or a REST API. MLflow also allows you to group runs under experiments, which is useful for comparing runs that are intended to tackle the same task.
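
A minimal tracking sketch looks like the following; the experiment name, parameters, and metric value are hypothetical.

```python
import mlflow

mlflow.set_experiment("churn-model")  # hypothetical experiment name
with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("auc", 0.91)  # stand-in for a real evaluation result
```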

Next is MLflow Projects, which packages ML code in a reusable and reproducible form, so it can be shared with other data scientists or transferred to a production environment. It specifies a format for packaging data science code, based primarily on conventions. In addition, this component includes an API and command-line tools for running projects, making it possible to chain together multiple projects into workflows.
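
For example, a project can be launched programmatically as sketched below, assuming the current directory contains an MLproject file with a "main" entry point that accepts an "alpha" parameter (both hypothetical).

```python
import mlflow

# Assumes an MLproject file in the current directory with a "main"
# entry point that accepts an "alpha" parameter.
submitted = mlflow.projects.run(uri=".", entry_point="main",
                                parameters={"alpha": "0.5"})
print(submitted.run_id)
```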

Next is MLflow Models, which manages and deploys models from a variety of machine learning libraries to a variety of model serving and inference platforms. A model is a standard format for packaging machine learning models that can be used in a variety of downstream tools; for example, real-time serving through a REST API or batch inference on Apache Spark. Each model is a directory containing arbitrary files, together with a descriptor file in the root of the directory that can define multiple "flavors" in which the model can be used.
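
The sketch below logs a scikit-learn model and loads it back through the generic pyfunc flavor, the same path a serving tool would take; the model and data are illustrative.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Illustrative model trained on synthetic data.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

with mlflow.start_run() as run:
    mlflow.sklearn.log_model(model, artifact_path="model")

# Load it back through the generic pyfunc flavor, as a serving tool would.
loaded = mlflow.pyfunc.load_model(f"runs:/{run.info.run_id}/model")
print(loaded.predict(X[:3]))
```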

The final component is MLflow Registry, a centralized model store, set of APIs, and UI for managing the full lifecycle of an MLflow model in a collaborative way. It provides model lineage, model versioning, stage transitions, and annotations. The Registry is especially valuable if you need a centralized model store with APIs for managing the full lifecycle of your machine learning models.
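
A minimal registration sketch follows, assuming a run has already logged a model under the "model" artifact path; the model name and run id are placeholders, and note that newer MLflow versions favor model aliases over stage transitions.

```python
import mlflow
from mlflow.tracking import MlflowClient

run_id = "<existing-run-id>"  # placeholder for a run that logged a model

# Register the logged model under a (hypothetical) registry name.
result = mlflow.register_model(f"runs:/{run_id}/model", "churn-model")

# Move the new version through lifecycle stages.
client = MlflowClient()
client.transition_model_version_stage(
    name="churn-model", version=result.version, stage="Staging")
```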

These four aspects (MLOps capabilities, open source integration, machine learning pipelines, and MLflow) can help you create a streamlined and repeatable process for deploying machine learning in production. This gives your data scientists the ability to quickly and easily experiment with different models and frameworks. In addition, you can improve the operational processes for your machine learning systems in production, giving you the agility to update your models quickly when real-world data shifts over time, turning a limitation into an opportunity.
