What You Should Know Before Deploying ML in Production – InfoQ.com

Key Takeaways

What should you know before deploying machine learning projects to production? There are four aspects of Machine Learning Operations, or MLOps, that everyone should be aware of first. These can help data scientists and engineers overcome limitations in the machine learning lifecycle and even turn those limitations into opportunities.

MLOps is important for several reasons. First of all, machine learning models rely on huge amounts of data, and it is very difficult for data scientists and engineers to keep track of it all. It is also challenging to keep track of the different parameters that can be tweaked in machine learning models. Sometimes small changes can lead to very big differences in the results that you get from your machine learning models. You also have to keep track of the features that the model works with; feature engineering is an important part of the machine learning lifecycle and can have a large impact on model accuracy.

Once in production, monitoring a machine learning model is not really like monitoring other kinds of software such as a web app, and debugging a machine learning model is complicated. Models use real-world data for generating their predictions, and real-world data may change over time.

As it changes, it is important to track your model performance and, when needed, update your model. This means that you have to keep track of new data changes and make sure that the model learns from them.

I'm going to discuss four key aspects that you should know before deploying machine learning in production: MLOps capabilities, open source integration, machine learning pipelines, and MLflow.

There are many different MLOps capabilities to consider before deploying to production. First is the capability of creating reproducible machine learning pipelines. Machine learning pipelines allow you to define repeatable and reusable steps for your data preparation, training, and scoring processes. These steps should include the creation of reusable software environments for training and deploying models, as well as the ability to register, package, and deploy models from anywhere. Using pipelines allows you to frequently update models or roll out new models alongside your other AI applications and services.
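The idea of a reproducible pipeline can be sketched without any particular platform: each step is a pure function, so rerunning the pipeline on the same input yields the same model. The step names and the toy "model" below are invented for illustration.

```python
# Minimal illustration of a reproducible pipeline: each step is a
# pure function, so rerunning the pipeline on the same raw data
# always produces the same result. The "model" here is just a
# threshold (the mean of the prepared data) to keep the sketch small.

def prepare(data):
    # Normalize values into the range [0, 1].
    hi = max(data)
    return [x / hi for x in data]

def train(data):
    # "Train" a trivial model: the mean of the prepared data.
    return sum(data) / len(data)

def score(model, x):
    # Score a new point against the trained threshold.
    return x >= model

PIPELINE = [prepare, train]

def run_pipeline(raw):
    out = raw
    for step in PIPELINE:
        out = step(out)
    return out

model = run_pipeline([2, 4, 6, 8])
print(score(model, 0.9))  # True: 0.9 >= 0.625, the mean of the prepared data
```

Because each step is isolated behind a function boundary, steps can be swapped or reused across pipelines, which is the property real pipeline frameworks provide at scale.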

You also need to track the associated metadata required to use the model and capture governance data for the end-to-end machine learning lifecycle. In the latter case, lineage information can include, for example, who published the model, why changes were made at some point, or when different models were deployed or used in production.

It is also important to notify and alert on events in the machine learning lifecycle: for example, experiment completion, model registration, model deployment, and data drift detection. You also need to monitor machine learning applications for operational and ML-related issues. Here it is important for data scientists to be able to compare model inputs at training time vs. inference time, to explore model-specific metrics, and to configure monitoring and alerting on machine learning infrastructure.
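The training-time vs. inference-time comparison behind data drift detection can be sketched as a simple statistical check: compare a feature's distribution at training time with what the model sees live, and alert when the shift exceeds a threshold. The feature, data, and threshold below are invented for the example; production systems use more robust statistical tests.

```python
# Illustrative data-drift check: alert when the mean of a feature at
# inference time shifts too far from its training-time mean.

def mean(xs):
    return sum(xs) / len(xs)

def drift_detected(train_values, live_values, threshold=0.25):
    # Relative shift of the live mean vs. the training-time mean.
    base = mean(train_values)
    shift = abs(mean(live_values) - base) / abs(base)
    return shift > threshold

train_ages = [30, 35, 40, 45]  # feature distribution seen during training
live_ages = [55, 60, 58, 62]   # what the model is seeing in production

print(drift_detected(train_ages, live_ages))  # True: the mean shifted by ~57%
```

When a check like this fires, the response described above kicks in: retrain or update the model so it learns from the new data.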

The second aspect that you should know before deploying machine learning in production is open source integration. Here, there are three different open source technologies that are extremely important. First, there are open source training frameworks, which are great for accelerating your machine learning solutions. Next are open source frameworks for interpretable and fair models. Finally, there are open source tools for model deployment.

There are many different open source training frameworks. Three of the most popular are PyTorch, TensorFlow, and Ray. PyTorch is an end-to-end machine learning framework, and it includes TorchServe, an easy-to-use tool for deploying PyTorch models at scale. PyTorch also has mobile deployment support and cloud platform support. Finally, PyTorch has C++ frontend support: a pure C++ interface to PyTorch that follows the design and architecture of the Python frontend.

TensorFlow is another end-to-end machine learning framework that is very popular in the industry. For MLOps, it has a feature called TensorFlow Extended (TFX) that is an end-to-end platform for preparing data, training, validating, and deploying machine learning models in large production environments. A TFX pipeline is a sequence of components which are specifically designed for scalable and high performance machine learning tasks.

Ray is a framework for scaling Python and machine learning workloads, and it contains several useful training libraries: Tune, RLlib, Train, and Dataset. Tune is great for hyperparameter tuning. RLlib is used for training reinforcement learning (RL) models. Train is for distributed deep learning. Dataset is for distributed data loading. Ray has two additional libraries, Serve and Workflows, which are useful for deploying machine learning models and distributed apps to production.
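The core idea behind hyperparameter tuning, which Tune automates and distributes at scale, can be sketched without Ray as a plain grid search. The objective function below is an invented stand-in for a real training run, and the hyperparameter names are illustrative.

```python
# Framework-free sketch of hyperparameter search: try every combination
# in a grid and keep the one with the lowest "validation loss".
import itertools

def objective(lr, batch_size):
    # Stand-in for training a model and returning its validation loss;
    # here the loss is simply minimized at lr=0.1, batch_size=32.
    return (lr - 0.1) ** 2 + (batch_size - 32) ** 2 / 1000

grid = {"lr": [0.01, 0.1, 1.0], "batch_size": [16, 32, 64]}

best = min(
    (dict(zip(grid, combo)) for combo in itertools.product(*grid.values())),
    key=lambda params: objective(**params),
)
print(best)  # {'lr': 0.1, 'batch_size': 32}
```

Tune layers scheduling, early stopping, and distributed execution on top of this basic search loop.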

For creating interpretable and fair models, two useful frameworks are InterpretML and Fairlearn. InterpretML is an open source package that incorporates several machine learning interpretability techniques. With this package, you can train interpretable glassbox models and also explain blackbox systems. Moreover, it helps you understand your model's global behavior, or understand the reason behind individual predictions.

Fairlearn is a Python package that provides metrics for assessing which groups are negatively impacted by a model and for comparing multiple models in terms of fairness and accuracy metrics. It also supports several algorithms for mitigating unfairness in a variety of AI and machine learning tasks, under various fairness definitions.
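To make the idea of a group fairness metric concrete, here is a hand-rolled computation of one metric in the family Fairlearn provides, the demographic parity difference: the gap in positive-prediction rate between groups. This is not Fairlearn's API, and the predictions and group labels are invented for the example.

```python
# Compute per-group selection rates and their gap (demographic parity
# difference) for a set of binary predictions. A large gap suggests one
# group receives positive predictions far more often than another.

def selection_rate(preds):
    return sum(preds) / len(preds)

predictions = [1, 0, 1, 1, 0, 0, 1, 0]              # model's binary outputs
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]   # sensitive attribute

by_group = {}
for pred, g in zip(predictions, groups):
    by_group.setdefault(g, []).append(pred)

rates = {g: selection_rate(p) for g, p in by_group.items()}
parity_gap = max(rates.values()) - min(rates.values())
print(rates, parity_gap)  # {'a': 0.75, 'b': 0.25} 0.5
```

Fairlearn computes metrics like this one across many fairness definitions and pairs them with mitigation algorithms.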

Our third open source technology is used for model deployment. When working with different frameworks and tools, you have to deploy models according to each framework's requirements. In order to standardize this process, you can use the ONNX format.

ONNX stands for Open Neural Network Exchange. ONNX is an open source format for machine learning models which supports interoperability between different frameworks. This means that you can train a model in one of the many popular machine learning frameworks, such as PyTorch, TensorFlow, or Ray, then convert it into ONNX format and use it in a different framework; for example, in ML.NET.

The ONNX Runtime (ORT) represents machine learning models using a common set of operators, the building blocks of machine learning and deep learning models, which allows a model to run on different hardware and operating systems. ORT optimizes and accelerates machine learning inferencing, which can enable faster customer experiences and lower product costs. It supports models from deep learning frameworks such as PyTorch and TensorFlow, but also from classical machine learning libraries such as Scikit-learn.

There are many popular frameworks that support conversion to ONNX. For some of these, such as PyTorch, ONNX export is built in. For others, like TensorFlow or Keras, there are separately installable packages that can perform the conversion. The process is straightforward: first, you need a model trained in any framework that supports export and conversion to ONNX format. Then you load and run the model with ONNX Runtime. Finally, you can tune performance using various runtime configurations or hardware accelerators.

The third aspect that you should know before deploying machine learning in production is how to build pipelines for your machine learning solution. The first task in the pipeline is data preparation, which includes importing, validating, cleaning, transforming, and normalizing your data.

Next, the pipeline contains training configuration, including parameters, file paths, logging, and reporting. Then there are the actual training and validation jobs that are performed in an efficient and repeatable way. Efficiency might come from specific data subsets, different hardware, compute resources, distributed processing, and also progress monitoring. Finally, there is the deployment step, which includes versioning, scaling, provisioning, and access control.

Choosing a pipeline technology will depend on your particular needs; usually these fall under one of three scenarios: model orchestration, data orchestration, or code and application orchestration. Each scenario is oriented around a persona, the primary user of the technology, and a canonical pipeline, the scenario's typical workflow.

In the model orchestration scenario, the primary persona is a data scientist. The canonical pipeline in this scenario is from data to model. In terms of open source technology options, Kubeflow Pipelines is a popular choice for this scenario.

For a data orchestration scenario, the primary persona is a data engineer, and the canonical pipeline is data to data. A common open source choice for this scenario is Apache Airflow.

Finally, the third scenario is code and application orchestration. Here, the primary persona is an app developer. The canonical pipeline here is from code plus model to a service. One typical open source solution for this scenario is Jenkins.

The figure below shows an example of a pipeline created on Azure Machine Learning. For each step, the Azure Machine Learning service calculates requirements for the hardware compute resources, OS resources such as Docker images, software resources such as Conda, and data inputs.

Then the service determines the dependencies between steps, resulting in a very dynamic execution graph. When each step in the execution graph runs, the service configures the necessary hardware and software environment. The step also sends logging and monitoring information to its containing experiment object. When the step completes, its outputs are prepared as inputs to the next step. Finally, the resources that are no longer needed are finalized and detached.

The final tool that you should consider before deploying machine learning in production is MLflow. MLflow is an open source platform for managing the end-to-end machine learning lifecycle. It contains four primary components that are extremely important in this lifecycle.

The first is MLflow Tracking, which records experiments so you can compare parameters and results. MLflow runs can be recorded to local files, to a SQLAlchemy-compatible database, or remotely to a tracking server. You can log data for a run using Python, R, Java, or a REST API. MLflow also allows you to group runs under experiments, which is useful for comparing runs that are intended to tackle a particular task.

Next is MLflow Projects, which packages ML code in a reusable and reproducible form so it can be shared with other data scientists or transferred to a production environment. It specifies a format for packaging data science code, based primarily on conventions. In addition, this component includes an API and command-line tools for running projects, making it possible to chain multiple projects together into workflows.
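Concretely, the convention is a small MLproject file at the project root that names the environment and entry points. The project name, entry-point parameters, and file names below are invented for illustration.

```yaml
name: demo-project
conda_env: conda.yaml        # reusable software environment for runs
entry_points:
  main:
    parameters:
      learning_rate: {type: float, default: 0.01}
      data_path: path
    command: "python train.py --lr {learning_rate} --data {data_path}"
```

With a file like this in place, anyone can reproduce the run with the `mlflow run` command-line tool, which resolves the environment and substitutes the parameters into the command.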

Next is MLflow Models, which manages and deploys models from a variety of machine learning libraries to a variety of model serving and inference platforms. A model is a standard format for packaging machine learning models that can be used in a variety of downstream tools; for example, real time serving through a REST API or batch inference on Apache Spark. Each model is a directory containing arbitrary files, together with a model file in the root of the directory that can define multiple flavors that the model can be viewed in.

The final component is MLflow Registry, a centralized model store, set of APIs, and UI for managing the full lifecycle of an MLflow model in a collaborative way. It provides model lineage, model versioning, stage transitions, and annotations, making it extremely useful if you're looking for a central place to manage the full lifecycle of your machine learning models.

These four aspects (MLOps capabilities, open source integration, machine learning pipelines, and MLflow) can help you create a streamlined and repeatable process for deploying machine learning in production. This gives your data scientists the ability to quickly and easily experiment with different models and frameworks. In addition, you can improve your operational processes for your machine learning systems in production, giving you the agility to update your models quickly when real-world data shifts over time, turning a limitation into an opportunity.

Chinese hackers exploited years-old software flaws to break into telecom giants – MIT Technology Review

Rob Joyce, a senior National Security Agency official, explained that the advisory was meant to give step-by-step instructions on finding and expelling the hackers. "To kick [the Chinese hackers] out, we must understand the tradecraft and detect them beyond just initial access," he tweeted.

Joyce echoed the advisory, which directed telecom firms to enact basic cybersecurity practices like keeping key systems up to date, enabling multifactor authentication, and reducing the exposure of internal networks to the internet.

According to the advisory, the Chinese espionage typically began with the hackers using open-source scanning tools like RouterSploit and RouterScan to survey the target networks and learn the makes, models, versions, and known vulnerabilities of the routers and networking devices.

With that knowledge, the hackers were able to use old but unfixed vulnerabilities to access the network and, from there, break into the servers providing authentication and identification for targeted organizations. They stole usernames and passwords, reconfigured routers, and successfully exfiltrated the targeted networks' traffic and copied it to their own machines. With these tactics, they were able to spy on virtually everything going on inside the organizations.

The hackers then turned around and deleted log files on every machine they touched in an attempt to destroy evidence of the attack. US officials didn't explain how they ultimately found out about the hacks despite the attackers' attempts to cover their tracks.

The Americans also omitted details on exactly which hacking groups they are accusing, as well as the evidence they have that indicates the Chinese government is responsible.

The advisory is yet another alarm the United States has raised about China. FBI deputy director Paul Abbate said in a recent speech that China conducts more cyber intrusions than all other nations in the world combined. When asked about this report, a spokesperson from the Chinese embassy in Washington DC denied that China engages in any hacking campaigns against other countries.

This story has been updated with comment from the Chinese embassy in Washington.

Solana Ventures Launches $100 Million Fund Focused on Web3 Projects in South Korea Bitcoin News – Bitcoin News

Solana Ventures has revealed the launch of a $100 million fund dedicated to Web3 startups in South Korea. According to Solana Labs general manager Johnny Lee, the capital will be dedicated to non-fungible tokens (NFTs), decentralized finance (defi), and game finance (gamefi) development.

Proponents behind the smart contract protocol Solana plan to expand into South Korea by offering a Web3 fund worth $100 million to startups and developers creating Web3 projects.

Solana Labs general manager Johnny Lee told TechCrunch reporter Jacquelyn Melinek that the fund will focus on Web3 applications that revolve around NFTs, defi, blockchain gaming concepts, and gamefi.

Austin Federa, the head of communications at Solana Labs, explained to Melinek that the fund stems from the Solana community treasury and Solana Ventures' pool of capital.

Solana Ventures, the investment arm of Solana Labs, explained that gaming and non-fungible tokens are popular in South Korea. Lee detailed that the lion's share of NFT and gaming activities on the Solana network derive from the East Asian country.

"A big portion of Korea's gaming industry is moving into web3," Lee detailed on Wednesday. "We want to be flexible; there's a wide range of project sizes, team sizes, so some of [our investments] will be venture-sized checks," the Solana Labs general manager remarked.

Solana's native token SOL ranks ninth among cryptocurrencies in terms of market capitalization. SOL's $13.22 billion market capitalization represents 1.03% of the crypto economy's $1.290 trillion market valuation.

SOL, however, is down 39.2% over the last month, with 19.6% of that fall occurring during the past two weeks. In terms of total value locked (TVL) in defi, Solana is ranked fifth with $3.76 billion. Solana's TVL in defi has lost 33.96% in the past month, according to defillama.com statistics.

Additionally, Solana suffered another network outage as the network halted block production on June 1. In December 2021, Solana Ventures, in a partnership with Griffin Gaming and Forte, launched a $150 million fund for Web3 products.

Amid the announcement concerning Solana Ventures' latest fund focused on South Korea and Web3 development, Lee said he expects Solana to showcase high-quality and fun games during the last two quarters of 2022.

Jamie Redman is the News Lead at Bitcoin.com News and a financial tech journalist living in Florida. Redman has been an active member of the cryptocurrency community since 2011. He has a passion for Bitcoin, open-source code, and decentralized applications. Since September 2015, Redman has written more than 5,000 articles for Bitcoin.com News about the disruptive protocols emerging today.
