Introducing the Cloud Development Kit for Terraform (Preview) – idk.dev

Infrastructure as Code (IaC) is a fundamental component of modern DevOps practices because it enables you to deploy any version of your application infrastructure at will, and facilitates the full lifecycle management of all the resources required to run and monitor your application. Organizations that have adopted DevOps practices often deploy hundreds or even thousands of changes to production a day, allowing them to deliver software faster, cheaper, and with lower risk.

When you explore the IaC options available today, you quickly discover customers have many choices, and two of the most popular for deploying infrastructure to AWS are CloudFormation, a service native to AWS, and Terraform, an open-source offering from HashiCorp. HashiCorp is an AWS Partner Network (APN) Advanced Technology Partner and member of the AWS DevOps Competency, and Terraform is a widely used tool that allows you to create, update, and version your infrastructure. According to GitHub Octoverse, HashiCorp Configuration Language (HCL) is one of the fastest-growing languages over the past several years.

CloudFormation YAML and Terraform HCL are popular IaC languages and the right fit for many customers' use cases; however, we often hear other customers say they want to define and provision infrastructure with the same familiar programming languages used to code their applications, rather than needing to learn a new domain-specific language. The AWS Developer Tools team responded with the AWS CDK for CloudFormation in 2019, and now, AWS and HashiCorp are proud to announce that we're bringing the CDK to Terraform.

Today, we'd like to tell you more about the developer preview of the Cloud Development Kit for Terraform, or cdktf, which lets you define application infrastructure with familiar programming languages, while leveraging the hundreds of providers and thousands of module definitions provided by Terraform and the Terraform community. The CDK for Terraform preview is initially available in TypeScript and Python, with other languages planned in the future.

The AWS Cloud Development Kit (CDK) and HashiCorp Terraform teams collaborated to create this new project by leveraging two key technologies of the AWS CDK: the CDK construct programming model, and the JavaScript interoperability interface, or jsii. The CDK construct programming model is a set of language-native frameworks for defining infrastructure resources and adaptors to generate configuration files for an underlying provisioning engine. The jsii allows code in any supported language to naturally interact with JavaScript classes, enabling the delivery of libraries in multiple programming languages, all from a single codebase. Using these components, the AWS CDK generates CloudFormation configuration from code written in TypeScript, JavaScript, Python, Java, or C#. Similarly, the CDK for Terraform generates Terraform configuration to enable provisioning with the Terraform platform.

To better illustrate the developer experience using the CDK for Terraform, consider the following example. This example creates a new Terraform application called hello-terraform that contains a single stack named HelloTerraform. Within the HelloTerraform stack, the AWS provider is used to define CDK constructs that provision an EC2 instance. When this code is run, it produces a Terraform JSON configuration file that you can use to run terraform plan and terraform apply, or you can use the cdktf-cli to run cdktf deploy.
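The example code itself is not reproduced here; as a rough illustration, the following is a minimal sketch of what such a stack could look like in Python. The imports.aws bindings are normally generated locally by cdktf from the AWS provider, so the import path, class names, region, and AMI ID below are assumptions for illustration, not code taken from the original post.

```python
#!/usr/bin/env python
from constructs import Construct
from cdktf import App, TerraformStack

# Provider bindings are generated into a local package by cdktf;
# this import path and these class names are assumptions.
from imports.aws import AwsProvider, Instance


class HelloTerraform(TerraformStack):
    def __init__(self, scope: Construct, ns: str):
        super().__init__(scope, ns)

        # Configure the AWS provider for this stack.
        AwsProvider(self, "aws", region="us-east-1")

        # Define a single EC2 instance; the AMI ID is a placeholder.
        Instance(self, "hello",
                 ami="ami-0c55b159cbfafe1f0",
                 instance_type="t2.micro")


app = App()
HelloTerraform(app, "hello-terraform")
app.synth()  # writes the Terraform JSON configuration to the output directory
```

Running cdktf synth on a project like this writes the equivalent Terraform JSON into the project's output directory, where the standard Terraform workflow can pick it up.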

When synthesized, this code produces a Terraform JSON configuration file that Terraform can plan and apply directly.

For an in-depth tutorial of using CDK for Terraform, read the HashiCorp blog post.

HashiCorp Terraform follows an Infrastructure as Code approach and is extensible to support many providers of cloud infrastructure and software services.

Historically, Terraform has supported HashiCorp Configuration Language (HCL) and JSON. While HCL is meant to be read and written by people, JSON can be machine-generated and consumed for programmatic interaction with the platform. Also, Kubernetes Custom Resource Definitions (CRDs) can be used to provision resources via the Terraform platform. Now, with the introduction of the CDK for Terraform, programming languages such as Python and TypeScript can be used to generate Terraform JSON configuration that is provisioned using Terraform.

Here are some additional things you need to know about CDK for Terraform:

Works with existing providers and modules. The cdktf-cli includes a helpful tool that lets you import anything hosted in the Terraform Registry into your project, allowing you to leverage any of the Terraform resource providers or common infrastructure configuration modules.

Keep your current developer workflow. With the cdktf-cli, the core Terraform workflow remains the same, including the ability to plan changes before applying. Additionally, cdktf-cli supports commands like `cdktf diff` and `cdktf deploy` that will feel familiar to AWS CDK users.

Familiar programming language to declarative state. cdktf code is written using familiar programming languages but produces your desired state as standard JSON. This means you can enjoy the expressiveness and power of programming languages without compromising on the robustness of the declarative desired state approach.

Language support. cdktf lets you define IaC using TypeScript and Python, with support for more languages coming in the future.

Open source. cdktf is an open source project and we can't wait for contributions from the community. If you would like to see a feature in the CDK for Terraform, please review existing GitHub issues and upvote. If a feature does not exist in a GitHub issue, feel free to open a new issue. In addition to opening issues, you can contribute to the project by opening a pull request.

Project is in alpha. cdktf is in the early stages of development, and while we think it is ready for you to give it a try and let us know what you think, use it with care and at your own discretion.

We are very excited about this project, and helping make life more productive and fun for developers. To see the CDK for Terraform in action, we encourage you to read the HashiCorp blog post, and follow the step-by-step tutorial.

For more information about other projects that leverage the CDK construct programming model, check out the AWS CDK for defining CloudFormation configuration and cdk8s for defining Kubernetes configuration.

Happy Infrastructure-as-coding!

Chris is a Senior Product Manager at AWS on the Developer Tools team that brings you the AWS SDKs, the AWS CLI, and is a humble custodian of the AWS Cloud Development Kit (CDK). He is especially interested in emerging technology and seeing science fiction become science fact. When not thinking about DevOps and Infrastructure as Code, Chris can be found hiking, biking, boarding, camping, or paddling with his wife and two kids around the Pacific Northwest.

Anubhav Mishra is a Technical Advisor to the CTO at HashiCorp. He is passionate about developer advocacy and helping developers and operators do better. Previously, he worked at Hootsuite, where he created Atlantis, an open source project that helps teams collaborate on infrastructure using Terraform. Anubhav loves working with distributed systems and exploring new technologies. He also loves open source software and is continuously finding ways to contribute to projects that excite him. That has led him to contribute to projects like Virtual Kubelet (a CNCF project) and dapr. He often speaks at conferences. In his free time, he DJs, makes music, and plays football. He's a huge Manchester United supporter.

The content and opinions in this post are those of the third-party author and AWS is not responsible for the content or accuracy of this post.


The Difference Between AI ML and Deep Learning And Why They Matter – AiThority

Technology is developing today at a pace that's never been seen before. New advancements and breakthroughs happen far more readily than at any time in the past. One of the most talked-about areas of cutting-edge tech is that of artificial intelligence (AI).

AI is driving the digital transformation of organizations in all manner of niches. So wide-ranging are the applications of AI that you've probably already interacted with an example of the tech today. Despite AI's growing ubiquity, though, it's still not an area that's readily understood.

One of the main reasons that it's tricky to get your head around AI is that the field has its own lexicon of phrases. Video conferencing software may get described as AI-driven or as using machine or deep learning. That could tempt you to choose the solution. But do you truly understand what it all means?

If the answer's no, you'll want to read on. You'll also be by no means alone in giving that response. The terminology of AI is far from straightforward. Get to grips with AI, machine learning, and deep learning, though, and you're well on your way.

When many people hear about AI, their first thoughts may still be of science fiction films of years gone by. There was even a blockbuster named, simply, AI. The truth is, though, that artificial intelligence has been a part of real life for years now.

As a phrase, AI refers to any technology that works to mimic human intelligence. Some of the hallmarks of human intelligence that AI aims to replicate include:

A common misconception is that a solution or piece of software can either be or use AI. Such tools are better described as displaying or exhibiting AI. They're artificial (they're machines, after all) and are displaying intelligence.

Aside from migration to the Cloud, adoption of processes that exhibit AI is probably the most widespread tech trend of recent years. Everything from a web meeting platform to an analytics suite may adopt some elements of AI.


Two of the most widespread AI processes are machine learning and deep learning. These are often the innovations that so-called AI-driven solutions leverage. That raises the obvious question, then: what is machine learning?

As mentioned, the first thing to understand about machine learning is that it is an example of AI. It's one particular process by which an artificial system can display intelligence. Put simply, and as the name suggests, it's when a machine can learn.

By "machine" in this context, we mean an algorithm. By "learning," we mean taking a volume of data (ideally a large one) and using it for specific pre-defined tasks. Typically, machine learning algorithms analyze sets of data and identify patterns. They then use those patterns to generate conclusions or take defined actions.

Machine learning algorithms get smarter as they go along. The more data the algorithms analyze, the better their predictions, conclusions, or actions. A straightforward example is an algorithm used by video or music streaming services.

Those algorithms collect data on the choices that users make. That means things like which artists people listen to or the genre of programs they watch. They then use the data to predict and recommend new bands or shows that users may like. The more data the algorithms process, the better they can forecast what a user will enjoy.
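To make that pattern concrete, here is a minimal, hypothetical sketch using scikit-learn: a tiny watch-history matrix, a nearest-neighbor lookup for similar users, and a recommendation drawn from what those similar users watched. Real streaming recommenders are far more sophisticated, so treat this purely as an illustration of the idea.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Toy user-by-show matrix: rows are users, columns are shows,
# values are how often each user watched each show.
watch_counts = np.array([
    [5, 0, 3, 0],   # user 0
    [4, 1, 2, 0],   # user 1
    [0, 4, 0, 5],   # user 2
    [1, 5, 0, 4],   # user 3
])

# Fit a nearest-neighbor model on viewing behavior.
model = NearestNeighbors(n_neighbors=2, metric="cosine").fit(watch_counts)

# For user 0, find the most similar user and surface shows that user watched
# which user 0 has not seen yet.
distances, indices = model.kneighbors(watch_counts[0:1])
neighbor = indices[0][1]  # index 0 is the user themselves, so skip it
recommendations = np.where((watch_counts[neighbor] > 0) & (watch_counts[0] == 0))[0]
print(f"Recommend show indices {recommendations} to user 0")
```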

The applications of machine learning go far beyond streaming and entertainment. But we'll talk about that later. First, we need to discuss deep learning and how it's different.

We must begin our definition of deep learning in a similar way to that of machine learning. In this case, it's vital to understand that deep learning is machine learning AND an example of AI. In many ways, it's the next evolution of machine learning.

Machine learning algorithms deal with structured and labeled data.

They analyze it, create more data, and use that to generate conclusions or predictions. When outcomes aren't as desired, the algorithms can be retrained via human intervention. Compared to the human brain, machine learning algorithms are simplistic.

Deep learning is the process that's attempting to close the gap between artificial and human intelligence further.

Rather than a hard-coded algorithm, deep learning utilizes many-layered and interconnected examples. That's to better replicate our brains, which combine billions of neurons.

Systems of deep learning algorithms are known as artificial neural networks. They're simplified copies of the human brain. Being built as they are allows deep learning networks to do far more than machine learning algorithms.

Such networks don't need structured data to operate. They're able to make sense of unstructured and unlabeled data sets. What's more, with enough training they can make sense of far more complex information. And they can do so the first time of asking.

One of the areas in which deep learning is crucial is that of self-driving cars. It's the ability of artificial neural networks to assess and process complex information that makes such things possible. They help cars to understand the environment around them.

Autonomous vehicles can recognize road signs, pedestrians, other road users, and more. They can also spot patterns in how those things are behaving. Doing so is crucial in the case of pedestrians and other vehicles. It allows the cars to react accordingly and is all down to deep learning.

Fascinating as AI and its elements undoubtedly are, why should you care? It's a fair question with a definitive answer. You must care about AI, machine learning, and deep learning because they're impacting marketing in a big way.

The MarTech solutions of the future and the present will almost all exhibit elements of AI.

If you're in the SaaS niche or involved in other marketing, you'll soon come across AI-enabled solutions. That's assuming you haven't already.

Machine learning is already getting leveraged in a broad array of MarTech solutions. From webinar services to chatbots, the element of AI is making all kinds of software or tools smarter.

We've already talked about how machine learning gets utilized for recommendations. That extends to products, as well as artists or TV shows. It's the basis of the kind of "you might also like" sections that you often see on e-commerce websites.

That's far from the only marketing application of machine learning. With a machine learning algorithm, you can make better sense of your customer data. And you can do so in a fraction of the time.

Say, for instance, that you want to run a targeted campaign. One focused on a specific sector of your target audience. Machine learning allows you to segment the audience with greater accuracy.

An algorithm can use the data from your current CRM or other sources to produce sample personas. It can then ID those leads in your email list or database who match the characteristics of the personas. This can be like a silver bullet for customer acquisition.
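As a sketch of how that persona-building step might work in practice, the snippet below clusters a hypothetical CRM export with scikit-learn's KMeans; the contact attributes, their values, and the number of personas are assumptions made up for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical CRM export: one row per contact with a few numeric attributes
# (e.g., company size, annual spend, number of support tickets), pre-scaled.
rng = np.random.default_rng(0)
contacts = rng.normal(size=(500, 3))

# Group contacts into a handful of "personas".
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(contacts)

# Each contact now has a persona label; a campaign can target one cluster only.
persona_of_contact = kmeans.labels_
target_persona = 2
targeted = np.where(persona_of_contact == target_persona)[0]
print(f"{len(targeted)} contacts match persona {target_persona}")
```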

Chatbots, too, can use machine learning to aid an organization's marketing. Many companies now implement a chatbot on their website. They're those little chat windows that pop up to see if you need help when you load a webpage.

Thanks to machine learning, visitors to a site can hold a full and useful conversation with the chatbot. The algorithms behind the tool get trained to recognize queries. And to provide the right responses. From a marketing point of view, that may mean pointing a site visitor to the correct product or service.

Chatbots and other channels of written communication are also hotbeds for deep learning in MarTech. That's thanks to the possibilities afforded by natural language processing (NLP). NLP is a further aspect of AI and is made possible by the artificial neural networks of deep learning.

Human language is incredibly complex. Just think back to the maddening grammar rules and exceptions of high school English. For computers or machines, the nuances of language have long been impossible to decipher. That, though, is no longer the case.

Via deep learning and NLP, algorithms can now grasp meaning and context in language. That has profound implications for marketing as well as customer service, where the tech is more often applied. Take, for instance, the two vital aspects of marketing that are SEO and content creation.

Keyword research is a crucial element of SEO. It's about recognizing the words and phrases your target audience searches for. NLP can supercharge this process. You can use algorithms to generate more accurate keywords, all through using existing written communications from your customers.

In content marketing, it's vital that all content is useful and interesting to its audience. NLP can help in this regard, too. With NLP, you can better recognize what's important to your customers. You might, for instance, use an algorithm to ID common topics on a company forum. That shows you your customers' interests and surfaces useful topic ideas for content.
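Here is a minimal sketch of that forum-topic idea, assuming the posts are available as plain strings; it uses scikit-learn's LatentDirichletAllocation to surface candidate topics. Production NLP pipelines typically use far larger corpora and more capable language models, so treat this only as an illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical forum posts; in practice these would be exported from the forum.
posts = [
    "How do I export my invoices to a spreadsheet?",
    "The mobile app keeps logging me out",
    "Is there a template for monthly invoices?",
    "Login fails on the mobile app after the update",
]

# Turn raw text into word counts, dropping common English stop words.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(posts)

# Fit a simple topic model with two topics.
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Print the top words per discovered topic as candidate content ideas.
# (get_feature_names_out requires scikit-learn >= 1.0.)
words = vectorizer.get_feature_names_out()
for topic_idx, topic in enumerate(lda.components_):
    top = [words[i] for i in topic.argsort()[-3:][::-1]]
    print(f"Topic {topic_idx}: {', '.join(top)}")
```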

That's purely the tip of the iceberg in terms of deep learning's potential applications. The area remains a comparatively young one. It's sure to be explored and utilized further in years to come.

Artificial intelligence is making its presence felt across industries and disciplines. The broad array of processes under the umbrella of AI is revolutionizing fields. That includes, but is by no means limited to, MarTech.

If you're going to keep up with the modern trends of marketing, then, you must understand all things AI. Hopefully, now that you've read this guide, you've got a good grounding. You should at least know your machine learning from your deep learning. And understand how they're both examples of AI.




New stellar stream, born outside the Milky Way, discovered with machine learning – Penn: Office of University Communications

Researchers have discovered a new cluster of stars in the Milky Way disk, the first evidence of this type of merger with another dwarf galaxy. Named after Nyx, the Greek goddess of night, the discovery of this new stellar stream was made possible by machine learning algorithms and simulations of data from the Gaia space observatory. The finding, published in Nature Astronomy, is the result of a collaboration between researchers at Penn, the California Institute of Technology, Princeton University, Tel Aviv University, and the University of Oregon.

The Gaia satellite is collecting data to create high-resolution 3D maps of more than one billion stars. From its position at the L2 Lagrange point, Gaia can observe the entire sky, and these extremely precise measurements of star positions have allowed researchers to learn more about the structures of galaxies, such as the Milky Way, and how they have evolved over time.

In the five years that Gaia has been collecting data, astronomer and study co-author Robyn Sanderson of Penn says that the data collected so far has shown that galaxies are much more dynamic and complex than previously thought. With her interest in galaxy dynamics, Sanderson is developing new ways to model the Milky Way's dark matter distribution by studying the orbits of stars. For her, the massive amount of data generated by Gaia is both a unique opportunity to learn more about the Milky Way as well as a scientific challenge that requires new techniques, which is where machine learning comes in.

"One of the ways in which people have modeled galaxies has been with hand-built models," says Sanderson, referring to the traditional mathematical models used in the field. "But that leaves out the cosmological context in which our galaxy is forming: the fact that it's built from mergers between smaller galaxies, or that the gas that ends up forming stars comes from outside the galaxy." Now, using machine learning tools, researchers like Sanderson can instead recreate the initial conditions of a galaxy on a computer to see how structures emerge from fundamental physical laws without having to specify the parameters of a mathematical model.

The first step in being able to use machine learning to ask questions about galaxy evolution is to create mock Gaia surveys from simulations. These simulations include details on everything that scientists know about how galaxies form, including the presence of dark matter, gas, and stars. They are also among the largest computer models of galaxies ever attempted. The researchers used three different simulations of galaxies to create nine mock surveys (three from each simulation), with each mock survey containing 2-6 billion stars generated using 5 million particles. The simulations took months to complete, requiring 10 million CPU hours to run on some of the world's fastest supercomputers.

The researchers then trained a machine learning algorithm on these simulated datasets to learn how to recognize stars that came from other galaxies based on differences in their dynamical signatures. To confirm that their approach was working, they verified that the algorithm was able to spot other groups of stars that had already been confirmed as coming from outside the Milky Way, including the Gaia Sausage and the Helmi stream, two dwarf galaxies that merged with the Milky Way several billion years ago.
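The paper's actual methods and features are more involved, but the general supervised setup described above can be sketched as follows: label simulated stars as accreted or formed in situ, train a classifier on kinematic features, and check its accuracy on held-out simulated stars before applying it to real survey data. All of the data and the labeling rule below are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical simulated catalog: each row is a star with kinematic features
# (stand-ins for velocity components); the label marks whether the star was
# accreted from another galaxy (1) or formed in situ (0).
rng = np.random.default_rng(0)
n = 10_000
features = rng.normal(size=(n, 3))              # placeholder kinematic features
labels = (features[:, 1] < -0.5).astype(int)    # toy rule standing in for dynamical signatures

X_train, X_test, y_train, y_test = train_test_split(features, labels, random_state=0)

# Train a classifier to recognize accreted stars from their dynamics.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"Held-out accuracy on simulated stars: {clf.score(X_test, y_test):.2f}")

# The trained model could then be applied to real (unlabeled) survey data.
```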

In addition to spotting these known structures, the algorithm also identified a cluster of 250 stars rotating with the Milky Way's disk toward the galaxy's center. The stellar stream, named Nyx by the paper's lead author Lina Necib, would have been difficult to spot using traditional hand-crafted models, especially since only 1% of the stars in the Gaia catalog are thought to originate from other galaxies. "This particular structure is very interesting because it would have been very difficult to see without machine learning," says Necib.

But machine learning approaches also require careful interpretation in order to confirm that any new discoveries aren't simply bugs in the code. This is why the simulated datasets are so crucial, since algorithms can't be trained on the same datasets that they are evaluating. The researchers are also planning to confirm Nyx's origins by collecting new data on its stream's chemical composition to see if this cluster of stars differs from ones that originated in the Milky Way.

For Sanderson and her team members who are studying the distribution of dark matter, machine learning also provides new ways to test theories about the nature of the dark matter particle and where it's distributed. It's a tool that will become especially important with the upcoming third Gaia data release, which will provide even more detailed information that will allow her group to more accurately model the distribution of dark matter in the Milky Way. And, as a member of the Sloan Digital Sky Survey consortium, Sanderson is also using the Gaia simulations to help plan future star surveys that will create 3D maps of the entire universe.

"The reason that people in my subfield are turning to these techniques now is because we didn't have enough data before to do anything like this. Now, we're overwhelmed with data, and we're trying to make sense of something that's far more complex than our old models can handle," says Sanderson. "My hope is to be able to refine our understanding of the mass of the Milky Way, the way that dark matter is laid out, and compare that to our predictions for different models of dark matter."

Despite the challenges of analyzing these massive datasets, Sanderson is excited to continue using machine learning to make new discoveries and gain new insights about galaxy evolution. "It's a great time to be working in this field. It's fantastic; I love it," she says.

Robyn Sanderson is an assistant professor in the Department of Physics and Astronomy in the School of Arts & Sciences at the University of Pennsylvania.

Gaia is a space observatory of the European Space Agency whose mission is to make the largest, most precise three-dimensional map of the Milky Way Galaxy by measuring the positions, distances, and motions of stars with unprecedented precision.

Supercomputers used for this research included Blue Waters at the National Center for Supercomputing Applications, NASA's High-End Computing facilities, and Stampede2 at the Texas Advanced Computing Center.


Machine learning PODA model projects the impact of COVID-19 on US motor gasoline demand – Green Car Congress

A team from Oak Ridge National Laboratory (ORNL), Aramco Services Company, MIT, the Michigan Department of Transportation and Argonne National Laboratory has developed a machine-learning-based model (Pandemic Oil Demand Analysis, PODA) to project the US medium-term gasoline demand in the context of the COVID-19 pandemic and to study the impact of government intervention. Their open-access paper appears in the journal Nature Energy.

The PODA model is a machine-learning-based model to project the US gasoline demand using COVID-19 pandemic data, government policies and demographic information. The Mobility Dynamic Index Forecast Module identifies the changes in travel mobility caused by the evolution of the COVID-19 pandemic and government orders. The Motor Gasoline Demand Estimation Module quantifies motor gasoline demands due to the changes in travel mobility. (Source: Ou et al.)

They found that under the reference infection scenario, US gasoline demand grows slowly after a quick rebound in May, and is unlikely to recover to a non-pandemic level prior to October 2020.

Under both the reference and a pessimistic scenario, continual lockdown (no reopening) could worsen the motor gasoline demand temporarily, but it helps the demand recover to a normal level more quickly due to its impact on infection rate.

Under the optimistic infection scenario, the projected trend of motor gasoline demand will recover to about 95% of the non-pandemic gasoline level (almost fully recover) by late September 2020.

However, under the pessimistic infection scenario, a second wave of infections in mid-June to August could lower the gasoline demand once more, but not to a level worse than it was in April 2020.

The researchers conclude that their results imply that government intervention does impact the infection rate, which thereby impacts mobility and fuel demand.

Projections of the evolution of COVID-19 pandemic trends show that lockdowns help to reduce COVID-19 transmissions by as much as 90% compared with the baseline without any social distancing in Austin, Texas. However, this unprecedented phenomenon could last for a few years: Kissler et al. suggested that, even after the pandemic peaked, COVID-19 surveillance should be continued as a resurgence in contagion could be possible as late as 2024. Therefore, beyond the immediate economic responses, the longer-term impact on the US economy may persist well beyond 2020. An effective forecast or estimate of the pandemic impacts could help people to well prepare and navigate around unknown risks. More specifically, reliably projecting the oil demand, a critical leading indicator of the state of the US economy, is beneficial to related business activities and investment decisions.

There are studies that discuss the impacts of unexpected natural hazards and/or disasters on energy demand and/or consumption and studies that evaluate the impacts of previous pandemics on tourism and economics. However, few studies have quantified and forecast the oil demands under multiple pandemic scenarios, and this research is desperately needed.

To date, studies focused on the energy impacts of the COVID-19 pandemic are limited to the short-term energy outlook released by the US Energy Information Administration (EIA); this outlook uses a simplified evolution of the COVID-19 pandemic to forecast the US gross domestic product, energy supplies, demands and prices until the fourth quarter of 2021. In this work, we develop a model that combines personal mobility with motor gasoline demand and uses a neural network to correlate personal mobility with the evolution of the COVID-19 pandemic, government policies and demographic information.

Ou et al.

The model contains two major modules: a Mobility Dynamic Index Forecast Module and a Motor Gasoline Demand Estimation Module. The Mobility Dynamic Index Forecast Module identifies the changes in travel mobility caused by the evolution of the COVID-19 pandemic and government orders, and it projects the changes in travel mobility indices relative to the pre-COVID-19 period in the United States.

The change in travel mobility, which affects the frequency of human contact or the level of social distancing, can reciprocally impact the evolution of the pandemic to some extent.

The Motor Gasoline Demand Estimation Module estimates vehicle miles traveled on pandemic days while it considers the dynamic indices of travel mobility, and it quantifies motor gasoline demands by coupling the gasoline demands and vehicle miles traveled.

The neural network model, which is the core of the PODA model, has 42 inputs, 2 layers and 25 hidden nodes for each layer, with rectified linear units as the activation function. In the PODA model, the potential induced travel demand due to the lower oil prices under the COVID-19 pandemic is not explicitly considered.
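As a rough sketch of the architecture just described (42 inputs, two hidden layers of 25 ReLU nodes each), the following uses scikit-learn's MLPRegressor on placeholder data. The paper's actual implementation, training data, and tooling are not specified here, so this illustrates only the stated shape of the network.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training data: 42 input features per sample (pandemic statistics,
# policy indicators, demographics, etc.) and a mobility-index target.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 42))
y = rng.normal(size=1000)

# Two hidden layers of 25 nodes each with ReLU activations, matching the
# architecture described above; everything else here is a placeholder.
model = MLPRegressor(hidden_layer_sizes=(25, 25), activation="relu",
                     max_iter=500, random_state=0)
model.fit(X, y)
print(model.predict(X[:5]))
```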

Resources

Ou, S., He, X., Ji, W. et al. (2020) Machine learning model to project the impact of COVID-19 on US motor gasoline demand. Nat Energy doi: 10.1038/s41560-020-0662-1


Machine Learning Market to Reach USD 117.19 Billion by 2027; Increasing Popularity of Self-Driving Cars to Propel Demand from Automotive Industry,…

Pune, July 17, 2020 (GLOBE NEWSWIRE) -- The global machine learning market size is anticipated to rise remarkably on account of advancements in deep learning. This, coupled with the amalgamation of analytics-driven solutions with ML abilities, is expected to work in the market's favor in the coming years. As per a recent report by Fortune Business Insights, titled, "Machine Learning Market Size, Share & Covid-19 Impact Analysis, By Component (Solution, and Services), By Enterprise Size (SMEs, and Large Enterprises), By Deployment (Cloud and On-premise), By Industry (Healthcare, Retail, IT and Telecommunication, BFSI, Automotive and Transportation, Advertising and Media, Manufacturing, and Others), and Regional Forecast, 2020-2027," the value of this market was USD 8.43 billion in 2019 and is likely to exhibit a CAGR of 39.2% to reach USD 117.19 billion by the end of 2027.

Get Sample PDF Brochure: https://www.fortunebusinessinsights.com/enquiry/request-sample-pdf/machine-learning-market-102226

Coronavirus has not only brought about health issues and created social distance among people, but it has also hampered the industrial and commercial sectors drastically. The whole world is under home quarantine, and we are unsure when we can freely roam the streets again. The governments of various nations are also making considerable efforts to bring the COVID-19 situation under control, and hopefully, we will overcome this obstacle soon.

Fortune Business Insights is offering special reports on various markets impacted by the COVID-19 pandemic. These reports provide a thorough analysis of the market and will be helpful for the players and investors to accordingly study and chalk out the growth strategies for better revenue generation.

Click here to get the short-term and long-term impacts of COVID-19 on this Market. Please visit: https://www.fortunebusinessinsights.com/machine-learning-market-102226

What Are the Objectives of the Report?

The report is based on a 360-degree overview of the market that discusses major factors driving, repelling, challenging, and creating opportunities for the market. It also talks about the current trends prevalent in the market, recent industry developments, and other interesting insights that will help investors accordingly chalk out growth strategies for the future. The report also highlights the names of major segments and significant players operating in the market. For more information on the report, log on to the company website.

Drivers & Restraints: Huge Investment in Artificial Intelligence to Work in the Market's Favor

The e-commerce sector has showcased significant growth in the past few years with the advent of retail analytics. Companies such as Alibaba, eBay, Amazon, and others are utilizing advanced data analytics solutions for boosting their sales. Thus, the advent of analytical solutions in the e-commerce sector, offering an enhanced consumer experience and rising sales, is one of the major factors promoting machine learning market growth. In addition to this, the use of machine intelligence solutions for encrypting and protecting data is giving the market a further boost. Furthermore, massive investments in artificial intelligence (AI) and efforts to introduce innovations in this field are expected to add impetus to the market in the coming years.

On the flip side, national security threats such as deepfakes and other fraudulent cases, coupled with the misuse of robots, may hamper overall market growth. Nevertheless, the introduction and increasing popularity of self-driving cars in the automotive industry is projected to create new growth opportunities for the market in the coming years.

Speak To Analyst https://www.fortunebusinessinsights.com/enquiry/speak-to-analyst/machine-learning-market-102226

Segment:

IT and Telecommunication Segment Bagged Major Share; Soon to be Overpowered by Healthcare Sector

Based on segmentation by industry, the IT and telecommunication segment earned a 22.0% machine learning market share and emerged dominant. But the current COVID-19 pandemic has increased the popularity of wearable medical devices that keep track of personal health and diet. This is expected to help the healthcare sector emerge dominant in the coming years.

Regional Analysis: Asia Pacific to Exhibit Fastest Growth Rate Owing to Rising Adoption by Developing Economies

Region-wise, North America emerged dominant in the market, with a revenue of USD 3.07 billion in 2019. This is attributable to the presence of significant players such as IBM Corporation, Oracle Corporation, Amazon.com, and others, and their investments in research and development of better software solutions for this technology. On the other hand, the market in Asia Pacific is expected to exhibit a rapid CAGR in the forecast period on account of the increasing adoption of artificial intelligence, machine learning, and other latest advancements in rising economies such as India and China.

Competitive Landscape:

Players Focusing on Development of Responsible Machine Learning to Strengthen their position

The global market generates significant revenues from companies such as Microsoft Corporation, IBM Corporation, SAS Institute Inc., Amazon.com, and others. The principal objective of these players is to develop responsible machine learning that will help prevent unauthorized use of such solutions for fraudulent or data theft crimes. Other players are engaging in collaborative efforts to strengthen their position in the market.

Major Industry Developments of this Market Include:

March 2019: Microsoft added its latest and most advanced ML capability to the 365 platform. This new feature will help strengthen internet-facing virtual machines by increasing security when merged with the machine learning integration of Azure's security centre.

List of the Leading Companies Profiled in the Machine Learning Market Research Report Include:

Quick Buy:

Machine Learning Market Research Report: https://www.fortunebusinessinsights.com/checkout-page/102226

Detailed Table of Content


Get your Customized Research Report: https://www.fortunebusinessinsights.com/enquiry/customization/machine-learning-market-102226

Have a Look at Related Research Insights:

Commerce Cloud Market Size, Share & Industry Analysis, By Component (Platform, and Services), By Enterprise Size (SMEs, and Large Enterprises), By Application (Grocery and Pharmaceuticals, Fashion and Apparel, Travel and Hospitality, Electronics, Furniture and Bookstore, and Others), By End-use (B2B, and B2C), and Regional Forecast, 2020-2027

Big Data Technology Market Size, Share & Industry Analysis, By Offering (Solution, Services), By Deployment (On-Premise, Cloud, Hybrid), By Application (Customer Analytics, Operational Analytics, Fraud Detection and Compliance, Enterprise Data Warehouse Optimization, Others), By End Use Industry (BFSI, Retail, Manufacturing, IT and Telecom, Government, Healthcare, Utility, Others) and Regional Forecast, 2019-2026

Artificial Intelligence (AI) Market Size, Share and Industry Analysis By Component (Hardware, Software, Services), By Technology (Computer Vision, Machine Learning, Natural Language Processing, Others), By Industry Vertical (BFSI, Healthcare, Manufacturing, Retail, IT & Telecom, Government, Others) and Regional Forecast, 2019-2026

Artificial Intelligence (AI) in Manufacturing Market Size, Share & COVID-19 Impact Analysis, By Offering (Hardware, Software, and Services), By Technology (Computer Vision, Machine Learning, Natural Language Processing), By Application (Process Control, Production Planning, Predictive Maintenance & Machinery Inspection), By Industry (Automotive, Medical Devices, Semiconductor & Electronics), and Regional Forecast, 2020-2027

Artificial Intelligence (AI) in Retail Market Size, Share & Industry Analysis, By Offering (Solutions, Services), By Function (Operations-Focused, Customer-Facing), By Technology (Computer Vision, Machine Learning, Natural Language Processing, and Others), and Regional Forecast, 2019-2026

Emotion Detection and Recognition Market Size, Share and Global Trend By Component (Software tools, Services), By Technology (Pattern Recognition Network, Machine Learning, Natural Language Processing), By Application (Marketing & Advertising, Media & Entertainment), By End-User (Government, Healthcare, Retail) and Geography Forecast till 2026

About Us:

Fortune Business Insights offers expert corporate analysis and accurate data, helping organizations of all sizes make timely decisions. We tailor innovative solutions for our clients, assisting them in addressing challenges distinct to their businesses. Our goal is to empower our clients with holistic market intelligence, giving a granular overview of the market they are operating in.

Our reports contain a unique mix of tangible insights and qualitative analysis to help companies achieve sustainable growth. Our team of experienced analysts and consultants use industry-leading research tools and techniques to compile comprehensive market studies, interspersed with relevant data.

At Fortune Business Insights, we aim at highlighting the most lucrative growth opportunities for our clients. We therefore offer recommendations, making it easier for them to navigate through technological and market-related changes. Our consulting services are designed to help organizations identify hidden opportunities and understand prevailing competitive challenges.

Contact Us:

Fortune Business Insights Pvt. Ltd.
308, Supreme Headquarters,
Survey No. 36, Baner,
Pune-Bangalore Highway,
Pune - 411045, Maharashtra, India.
Phone: US: +1-424-253-0390 | UK: +44-2071-939123 | APAC: +91-744-740-1245
Email: sales@fortunebusinessinsights.com
Fortune Business Insights | LinkedIn | Twitter | Blogs

Read Press Release https://www.fortunebusinessinsights.com/press-release/global-machine-learning-market-10095


How Machine Learning Will Impact the Future of Software Development and Testing – The Union Journal

Machine learning (ML) and artificial intelligence (AI) are regularly thought of as the gateways to a futuristic world in which robots interact with us like people and computers can become smarter than humans in every way. But of course, machine learning is already being used in millions of applications around the world, and it's already beginning to shape how we live and work, often in ways that go unnoticed. And while these technologies have been compared to destructive bots or blamed for stoking artificial panic, they are helping in big ways, from software to biotech.

Some of the flashier applications of machine learning are in emerging technologies like self-driving vehicles; thanks to ML, automated driving software can not only self-improve through millions of simulations, it can also adjust on the fly when confronted with new situations while driving. But ML is arguably even more important in fields like software testing, which is used widely and underpins millions of other technologies.

So how exactly does machine learning affect the world of software development and testing, and what does the future of these interactions look like?

A Briefer on Machine Learning and Artificial Intelligence

First, let's discuss the distinction between ML and AI, since these technologies are related but frequently confused with each other. Machine learning describes a system of algorithms designed to help a computer improve automatically through experience. In other words, through machine learning, a function (like facial recognition, driving, or speech-to-text) can get better and better through continuous testing and refinement; to the outside observer, the system appears to be learning.

AI refers to intelligence demonstrated by a machine, and it frequently uses ML as its foundation. It's possible to have an ML system that does not demonstrate AI, but it's difficult to have AI without ML.

The Importance of Software Testing

Now, let's take a look at software testing, an important component of the software development process, and arguably the most crucial one. Software testing is designed to ensure the product is working as intended, and in many cases, it's a process that plays out many times over the course of development, before the product is actually finished.

Through software testing, you can proactively identify bugs and other defects before they become a real problem, and correct them. You can also evaluate a product's capabilities, using tests to assess its speed and performance under a variety of circumstances. Ultimately, this leads to a better, more reliable product, and lower maintenance costs over the product's lifetime.

Attempting to deliver a software product without thorough testing would be akin to constructing a large building without a real foundation. In fact, it is estimated that the cost of fixes after software delivery can be 4-5x the total cost of the project itself when proper testing has not been fully carried out. When it comes to software development, failing to test is failing to plan.

How Machine Learning Is Reshaping Software Testing

Here, we can combine the two. How is machine learning improving the world of software development and testing for the better?

The simple answer is that ML is already being used by software testers to automate and improve the testing process. It's normally used in combination with the agile methodology, which puts a focus on continuous delivery and incremental, iterative development rather than building an entire product at once. It's one of the reasons I have argued that the future of agile and scrum methodologies involves a lot of machine learning and artificial intelligence.

Machine learning can enhance software testing in numerous ways; one common pattern is sketched below.
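For example, a widely discussed pattern is predictive test selection: train a model on historical test results so that, for each new code change, the tests most likely to fail can be run first. The sketch below uses scikit-learn on made-up features and labels and is a generic illustration, not any particular vendor's implementation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical historical data: for each (code change, test) pair we record simple
# features such as how many changed files the test touches, how recently the test
# failed, and test runtime; the label is whether the test failed for that change.
rng = np.random.default_rng(0)
n = 5_000
X = np.column_stack([
    rng.integers(0, 20, n),     # changed files covered by the test
    rng.integers(0, 50, n),     # builds since the test last failed
    rng.uniform(1, 300, n),     # test runtime in seconds
])
y = (X[:, 0] > 10) & (X[:, 1] < 5)   # toy stand-in for "test actually failed"

# Train a classifier that predicts how likely each test is to fail for a new change.
model = GradientBoostingClassifier().fit(X, y)

# For a new code change, score every test and run the riskiest ones first.
new_change = np.array([[12, 2, 45.0], [1, 40, 200.0]])
print(model.predict_proba(new_change)[:, 1])  # estimated failure probability per test
```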

While cognitive computing holds the promise of further automating a mundane but extremely important process, difficulties remain. We are nowhere near the level of process-automation maturity needed for full-blown automation. Even in today's best software testing environments, machine learning assists in batch processing bundled code-sets, enabling testing and handling issues with large data sets without the need to decouple, except in situations where errors occur. And even when errors do occur, the ML-driven system will notify the user, who can mark the problem for future machine or human fixes and let it continue its automated testing procedures.

Already, ML-based software testing is improving consistency, reducing errors, saving time and, all the while, lowering costs. As it becomes more advanced, it's going to reshape the field of software testing in new and even more innovative ways. But the important phrase there is "going to." While we are not yet there, we expect the next decade will continue to improve how software developers iterate toward a finished product in record time. It's just one reason the future of software development will not be nearly as custom as it once was.

Nate Nead is the CEO of SEO.co, a full-service SEO business, and DEV.co, a custom web and software development company. For over a decade, Nate has provided strategic guidance on technology and marketing solutions for some of the most popular online brands. He and his team advise Fortune 500 and SMB clients on software, development and internet marketing. Nate and his team are based in Seattle, Washington and West Palm Beach, Florida.



Machine Learning In The Enterprise: Where Will The Next Trillion Dollars Of Value Accrue? – Forbes

Every company will become an ML company.

In the world of Harry Potter, the sorting hat serves as an algorithm that takes data from a student's behavioral history, preferences and personality and turns that into a decision on which Hogwarts house they should join. If the real world had sorting hats, it would take the form of machine learning (ML) applications that make autonomous decisions based on complex datasets. While software has been eating the world, ML is starting to eat software, and it is supercharging trillion-dollar global industries such as healthcare, security and agriculture.

If ML is expected to create significant value, the question becomes: where will this value accrue? I will explore ways that value will be created and captured by three types of companies: traditional companies applying ML, companies building industry-agnostic ML tools and companies building vertically-integrated ML applications.

Machine learning is not just for the tech giants

ML innovation coming out of Facebook, Amazon, Apple, Netflix and Google (FAANG) is well known, from news feeds to recommendation engines, but most people are not as aware of the increasing demand for ML from traditional industries. Global spending on AI systems is projected to reach $98 billion in 2023, over 2.5x the amount spent in 2019, with financial services, retail, and automotive leading the way. Blackrock, an investment management firm with over $7 trillion in AUM, released several ML-powered ETFs in 2018. ML has rapidly gained mindshare in the healthcare industry, and budget for ML-driven solutions spanning medical imaging, diagnostics and drug discovery is expected to reach $10 billion in the next three years.

Across these enterprise customers, three broad customer segments have emerged: software engineers, data scientists and business analysts, sometimes known as citizen data scientists. Although business analysts are less technical by training, they comprise a large and growing segment of users who are applying ML to help companies make sense of their multiplying data repositories.

Machine learning tools are embedded across industries

To accommodate these customer segments, companies looking to craft pickaxes for the gold rush have proliferated. "The challenge is not to make ML transparent but rather to make the painful parts like logging, data management, deployment and reproducibility easy, then to make model training efficient and debuggable," said Stuart Bowers, the former VP of Engineering at Tesla and Snap.

Incumbent vendors, most notably the public clouds, have adopted an end-to-end platform approach as part of their strategy to sell more infrastructure services. AWS's ML platform, SageMaker, was originally intended for expert developers and data scientists, and it recently launched SageMaker Studio to expand the audience to less technical users. For tech giants like AWS, selling ML tools is a means to drive additional infrastructure spend from its customers, meaning they can afford to offer these tools at a low cost.

Unicorns have also built value, often in partnership with the cloud providers. Databricks, an ML platform known for its strong data engineering capabilities built on top of Apache Spark, was founded in 2013 and is now valued at $6.2 billion. The partnership between Databricks and Microsoft enables Microsoft to drive more data and compute to Azure while massively scaling its own go-to-market efforts.

However, enterprise practitioners are starting to demand best-of-breed solutions rather than tools designed to nudge them to buy more infrastructure. To address this, the next generation of startups will pursue a more targeted approach. In contrast to the incumbents' broad-brush platform plays, startups can pick specific problems and develop specialized tools to solve them more effectively. Within the ML tools space, three areas pose significant challenges to users today.

Dataset management

While ML results can be elegant, practitioners spend most of their time on the data cleaning, wrangling and transformation parts of the workflow. Because data is increasingly scattered in different formats across multiple machines and clouds, it is difficult to engineer the data into a consumable format that teams can easily access and use to collaborate.

To solve this, Mike Del Balso, the co-founder and CEO of Tecton, is democratizing the best practices he championed at Uber through his new startup. "Broken data is the most common cause of problems in production ML systems. Modelers spend most of their time selecting and transforming features at training time and then building the pipelines to deliver those features to production models," he noted. Tecton simplifies complexity in the data layer by building a platform to manage these features: intelligent, real-time signals curated from a business's raw data that are critical to operationalizing ML.

Further upstream, Liquidata is building the open source GitHub equivalent for databases. In my conversation with Tim Sehn, Liquidata's co-founder and CEO and the former VP of Engineering at Snap, he emphasized that "we need to collaborate on open data, just like with open source software, at Internet-scale. That is why we created DoltHub, a place on the internet to store, host, and collaborate on open data for free."

Experiment tracking & version control

Another common problem is the lack of reproducibility across results. The absence of version control for ML models makes it difficult to recreate an experiment.

As Lukas Biewald, co-founder and CEO of Weights and Biases, shared in our interview, "today, the biggest pain is a lack of basic software and best practices to manage a completely new style of coding. You can't paint well with a crappy paintbrush, you can't write code well in a crappy IDE (integrated development environment) and you can't build and deploy great deep learning models with the tools we have now." His company launched an experiment tracking solution in 2018, enabling customers like OpenAI to scale insights from a single researcher to the entire team.
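To give a sense of what experiment tracking looks like in practice, here is a minimal sketch using the wandb Python client; the project name, hyperparameters, and metric values are placeholders, and a real run would log metrics from an actual training loop.

```python
import random
import wandb

# Start a tracked run; the project name and config values are placeholders.
run = wandb.init(project="demo-experiment-tracking",
                 config={"lr": 0.01, "epochs": 5})

# Stand-in training loop: log a metric each epoch so runs can be compared
# later in the Weights & Biases dashboard.
for epoch in range(run.config["epochs"]):
    train_loss = 1.0 / (epoch + 1) + random.random() * 0.05  # placeholder metric
    wandb.log({"epoch": epoch, "train_loss": train_loss})

run.finish()
```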

Model Scalability

Building the infrastructure to scale model deployment and monitor results in production is another critical component in this maturing market.

Anyscale, the startup behind the open source framework Ray, has abstracted away the infrastructure underlying distributed applications and scalable ML. In my conversation with Robert Nishihara, Anyscale's co-founder and CEO, he shared that "just as Microsoft's operating system created an ecosystem for developer tools and applications, we are creating the infrastructure to power a rich ecosystem of applications and libraries, ranging from model training to deployment, that make it easy for developers to scale ML applications."
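As a small illustration of the kind of scaling Ray enables, the sketch below fans out several placeholder "training" tasks across available CPUs and gathers the results; in a real workload each task would train or evaluate an actual model, and on a cluster Ray would schedule them across many machines.

```python
import ray

ray.init()  # start Ray locally; on a cluster this connects to existing nodes


@ray.remote
def train_model(seed: int) -> float:
    """Placeholder for an expensive training job; returns a fake score."""
    import random
    random.seed(seed)
    return random.random()


# Launch several training jobs in parallel, then gather the results.
futures = [train_model.remote(seed) for seed in range(8)]
scores = ray.get(futures)
print(max(scores))
```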

Scalability is also rapidly advancing in the field of natural language processing, or NLP. Hugging Face established an open source library to build, train, and share NLP models. "There has been a paradigm shift in the last three years, whereby transfer learning for NLP started to dramatically change the accessibility and accuracy of integrating NLP into business applications," said Clément Delangue, the company's co-founder and CEO. "We are making it possible for companies to apply NLP models from the latest research into production within a week rather than in months."
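The snippet below is a minimal sketch of how the company's transformers library exposes a pre-trained model through its pipeline API; the default model is selected by the library (and may change between versions), and the example inputs are invented.

```python
from transformers import pipeline

# Download a ready-made sentiment model from the Hugging Face hub and apply it.
classifier = pipeline("sentiment-analysis")

results = classifier([
    "The onboarding flow was painless and support answered within minutes.",
    "The latest update broke our integration and nobody has responded.",
])
print(results)  # e.g. [{'label': 'POSITIVE', 'score': ...}, {'label': 'NEGATIVE', ...}]
```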

Other promising startups include Streamlit, which allows developers to create an ML app with just a few lines of Python and deploy it instantly. OctoML applies an additional intelligence layer to ML, making systems easier to optimize and deploy. Fiddler Labs has built an Explainable AI Platform to continuously interpret and monitor results in production.

To build long-term durable companies in the face of stiff competition from incumbents, startups are asking themselves two questions: To which set of customers am I indispensable? What is the best way to reach these customers?

Many startups pitch the idea of capturing 1% of a large market, but often these big markets are already well-served, if not crowded. Companies focused on winning a core customer segment end up exhibiting strong early traction that translates into long-term expansion potential. To reach these customers, most incumbents like Databricks and Datarobot have embraced a top-down, enterprise sales motion. Similar to what we've seen in the developer tools space, I expect ML startups will eventually evolve from pure enterprise sales to drive bottoms-up adoption and gain an advantage over today's enterprise-focused incumbents.

Vertically-integrated machine learning applications are upending the status quo

Some of the most exciting companies in ML are pioneering business models to disrupt entire industries. Auto has been the most obvious example, as $10 billion of funding poured into the industry in 2019 alone. The next generation of verticals where ML will also have a revolutionary impact include healthcare, industrials, security and agriculture.

"ML is most effective when it's ML plus X," said Richard Socher, the Chief Scientist at Salesforce. "The best ML companies have a clear vertical focus. They don't even call themselves an ML company." He points to healthcare as a uniquely promising area: Athelas has applied ML to immune monitoring, helping patients optimize drug intake by collecting data on their white blood cell count. Curai leverages ML to augment the efficiency and quality of doctors' recommendations, allowing them to spend more time treating patients. Zebra and AIdoc empower radiologists by training datasets to identify medical conditions faster.

In the industrials and logistics space, Covariant is a startup that combines reinforcement learning and neural networks that enable robots to manage objects in large warehouse facilities. Agility and Dexterity are similarly building robots that adapt to unpredictable situations in increasingly sophisticated ways. Interos applies ML to evaluate global supply chain networks, helping enterprises make critical decisions around vendor management, business continuity and risk.

Within security and defense, Verkada has reimagined enterprise physical security by intelligently analyzing and learning from real-time footage. Anduril has built an ML backbone that integrates data from sensor towers to augment intelligence in the interest of national security. Shield AI's software allows unmanned systems to interpret signals and act intelligently on the battlefield.

Agriculture is another vertical that has reaped enormous benefits from ML. John Deere acquired Blue River Technology, a startup that developed intelligent crop spraying equipment. "We are changing the world of agriculture by bringing computer vision techniques to identify individual plants and take action on a plant-by-plant basis," said Lee Redden, Chief Scientist of the combined company's Intelligent Solutions Group. Other notable enterprise AgTech companies include Indigo, which applies ML to precision farming, harnessing data to produce food more profitably and sustainably.

Where do we go from here?

ML has quietly become part of our daily lives, powering our cars, the operations in our hospitals and the food we eat. Large incumbents have pioneered the state of the art so far, but the real promise lies in the next wave of ML applications and tools that will translate the hype around machine intelligence from a Harry Potter-like fantasy into tangible, societal value.

There are many reasons to be optimistic about the value ML can create in the coming years. Traditional companies will train millions of citizen data scientists to reshape broken industries into more productive ones. ML tools will lower the barriers to building intelligent applications, pushing millions of new ideas into production every day. Vertical ML business models will democratize access to healthy food, reliable physical security and affordable healthcare.

That's where we'll find the true value of machine learning.

Originally posted here:
Machine Learning In The Enterprise: Where Will The Next Trillion Dollars Of Value Accrue? - Forbes

Machine Learning on AWS: Getting Started with SageMaker and More – Dice Insights

Ready to get started with machine learning (ML) on AWS? ML requires a lot of processing capability, more than you're likely to have at home. That's where a cloud platform such as AWS can help. But how do you get started? Here are some tips to add ML to your career.

First, learn as much as you can about ML independent of AWS. To maximize your career opportunities, you want your experience and knowledge to be broad and not focus exclusively on AWS.

ML is not for the faint of heart. It requires serious study. However, opportunities for those with machine-learning skills abound, with routine six-figure salaries for engineers and developers who focus on deep learning, machine learning, and artificial intelligence (A.I.). According to Burning Glass, which collects and analyzes millions of job postings from across the country, machine-learning engineers with even a few years of experience can unlock pretty healthy compensation, and that's before you throw in benefits such as stock options.

Job interviews for ML-related positions are often tough and require quite a bit of preparation, as well. Even everyday developers and analysts (i.e., those who don't primarily focus on ML in their work) may very well end up using more ML tools and principles in coming years. If you're a student specializing in computer science or a related field, that's as good a reason as any to build out your ML and A.I. knowledge.

Read books, take online classes, and invest as much time as you can into learning it. TensorFlow, the open-source library for deep-learning software that was created by Google, has a nice page of learning resources.

Next, look at the coding frameworks available. The aforementioned TensorFlow is considered one of the top options, as is PyTorch, which was created by Facebook. Although AWS has great tools for building ML with little coding, you're still going to want to know how to use ML coding frameworks.
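To get a feel for what framework code looks like before you touch any AWS tooling, here is a minimal, illustrative PyTorch sketch; the layer sizes, learning rate and random data are placeholders chosen for brevity, not recommendations.

    # Minimal, illustrative PyTorch sketch: a tiny feed-forward regressor
    # trained on random data. All sizes and hyperparameters are placeholders.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(4, 16),   # 4 input features -> 16 hidden units
        nn.ReLU(),
        nn.Linear(16, 1),   # single numeric output
    )

    x = torch.randn(64, 4)   # fake features
    y = torch.randn(64, 1)   # fake targets
    loss_fn = nn.MSELoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for step in range(200):  # a few gradient-descent steps
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()

The point is not this particular model; it is that being comfortable reading and writing code like this makes every ML service, AWS or otherwise, easier to learn.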

AWS presently has 17 services related to ML, and they're likely to add more in the years to come. This is too much to learn all at once, so we recommend a couple of things: First, make sure you're completely familiar with basic computing via AWS, including how to provision EC2 servers and, most importantly, how much it's going to cost you per hour to allocate those servers. You can't afford surprises, especially when dealing with the kind of processing resources ML needs.
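As a concrete illustration of keeping those hourly costs under control, here is a hedged boto3 sketch that launches one small EC2 instance and terminates it when you are done; the AMI ID, region and instance type are placeholders you would replace with your own values.

    # Illustrative boto3 sketch: launch one small EC2 instance, then terminate
    # it so hourly charges stop. The AMI ID and region are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
        InstanceType="t3.micro",           # small, low-cost instance type
        MinCount=1,
        MaxCount=1,
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print("Launched", instance_id)

    # ... do your work ...

    ec2.terminate_instances(InstanceIds=[instance_id])

Getting into the habit of scripting the teardown step is the simplest way to avoid a surprise bill.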

Second, of the 17 services, the one you want to start with is SageMaker. This is AWS's flagship ML product, and it includes a complete IDE called SageMaker Studio.

SageMaker Studio offers a Quick Start: get to the Studio from the main SageMaker page, scroll down, and you'll see the Quick Start form.

Fill in the name and choose the permissions. (You'll likely need to create a role; you can learn about that here.) Then you'll be asked for your VPC ID and subnet, so make sure you have a basic understanding of those, as well. Click Next, and you'll see your SageMaker Studio dashboard. After a few minutes, you'll see your new Studio show up in a list with the word "Ready" next to it.
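If you prefer to script this rather than click through the console, the Quick Start roughly corresponds to creating a Studio domain through the SageMaker API. The following boto3 sketch is only an approximate, hedged equivalent; the domain name, role ARN, VPC ID and subnet ID are placeholders for the values the form asks you for.

    # Illustrative boto3 sketch: a rough programmatic equivalent of the Studio
    # Quick Start. The role ARN, VPC ID and subnet ID are placeholders.
    import boto3

    sm = boto3.client("sagemaker", region_name="us-east-1")

    response = sm.create_domain(
        DomainName="my-studio-domain",
        AuthMode="IAM",
        DefaultUserSettings={
            "ExecutionRole": "arn:aws:iam::123456789012:role/MySageMakerRole",  # placeholder role
        },
        VpcId="vpc-0123456789abcdef0",           # placeholder VPC ID
        SubnetIds=["subnet-0123456789abcdef0"],  # placeholder subnet
    )

    # Poll the domain status; "InService" corresponds to the "Ready" state
    # you see in the console list.
    domain_id = response["DomainArn"].split("/")[-1]
    print(sm.describe_domain(DomainId=domain_id)["Status"])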

Click the Open Studio link to go into the Studio. The Studio will open in a new window; the first time, it will take a couple of minutes to load.

In the lower right, you'll see a pane with a demonstration video and a video tutorials link with more information to help you get started. There's also a link to a tour guide, which provides a complete walkthrough for setting up multiple experiments and trials.

With ML, experiments are the processes that you run many times over as the system learns. Trials are the individual outcomes of those experiments. You provide different data with the experiments and observe the trials. Typically, each time you only modify the data slightly; this is known as an incremental change. Over time, your system continues to gather more data and learn from the outcomes.
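SageMaker also exposes experiments and trials programmatically, which becomes useful once you want to track many incremental changes. Here is a small, hedged boto3 sketch; the experiment and trial names are made up for illustration.

    # Illustrative boto3 sketch: create an experiment and register two trials
    # under it. In practice each trial would be tied to a training run with
    # slightly different data or parameters.
    import boto3

    sm = boto3.client("sagemaker", region_name="us-east-1")

    sm.create_experiment(
        ExperimentName="churn-prediction",
        Description="Compare incremental changes to the training data",
    )

    for suffix in ("baseline", "extra-features"):
        sm.create_trial(
            TrialName=f"churn-prediction-{suffix}",
            ExperimentName="churn-prediction",
        )

    # List the trials that belong to the experiment.
    for t in sm.list_trials(ExperimentName="churn-prediction")["TrialSummaries"]:
        print(t["TrialName"])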

If you're into pattern and facial recognition and aren't paranoid, you might try out the AWS DeepLens, which is a hardware camera built to integrate with AWS ML. (You probably want to put tape over its lens when you're not using it.)

One place where you can stay on top of it all is the official AWS ML blog. Many of the articles are quite advanced, but if you at least skim through them, you'll pick up tidbits of knowledge here and there, even if you're just starting out on your machine-learning journey.

Machine learning is a huge topic, and there's a lot to learn. Start slowly, study as much as you can, and just keep practicing with the different tools available. Over time, you'll become competent, and if you keep at it, you'll eventually become an expert. Have patience and perseverance!

See more here:
Machine Learning on AWS: Getting Started with SageMaker and More - Dice Insights

Commentary: Combine Optimization, Machine Learning And Simulation To Move Freight – Benzinga

The views expressed here are solely those of the author and do not necessarily represent the views of FreightWaves or its affiliates.

Author's Disclosure: I am not an investor in Optimal Dynamics, either personally or through REFASHIOND Ventures. I have no financial relationship with Optimal Dynamics.

On July 7, FreightWaves ran "Commentary: Optimal Dynamics, the decision layer of logistics?", which kicked off a series that will focus on "AI in Supply Chain."

I believe that the incorporation of decision-making technologies in the supply chain is potentially the most transformative development in global industrial supply chains that we will see for the next two or three decades.

The purpose of this series is to seek evidence to support or refute that premise.

As I stated in the July 7 commentary, Optimal Dynamics is setting out to solve dynamic resource allocation problems, a set of problems that deal with the allocation of scarce resources in an optimal manner over space and time when conditions are uncertain and changing randomly in complex networks.

A CargoLux freighter takes off from an airport. (Photo: Jim Allen/FreightWaves)

Dynamic resource allocation problems are a class of problems that Warren Powell, co-founder of Optimal Dynamics, has studied over the course of his 39-year professorship at Princeton University, where he is a member of the Department of Operations Research and Financial Engineering. As Founder and Manager of Princeton University's Computational Stochastic Optimization and Learning Labs (CASTLE Labs), Powell has been at the forefront of researching and developing models and algorithms for stochastic optimization with practical applications in transportation and logistics. He will become a professor emeritus at Princeton University effective September 1, 2020.

He co-founded Optimal Dynamics in 2016, with his son Daniel Powell, who is Optimal Dynamics' CEO.

If you are a regular reader of FreightWaves, you have encountered discussions of network optimization in supply chain logistics before in this column. For example: Commentary: Toshiba's simulated bifurcation machines may optimize your supply chain (February 17, 2020); Commentary: Applying machine learning to improve the supply chain (July 30, 2019); Commentary: How can machine learning be applied to improve transportation? (July 23, 2019); and Logistics network optimization why this time is different (April 23, 2019).

A cargo ship set to unload at dockside. (Photo: Jim Allen/FreightWaves)

Optimal Dynamics' platform, CORE.ai, makes the company's proprietary high-dimensional artificial intelligence, High-Dimensional AI, available for general use through the CORE.ai web portal. It can also be implemented by trucking fleets and by other software vendors that wish to embed it within their products; for example, a transportation management system could implement CORE.ai through Optimal Dynamics' Open API protocols.

Eduardo Silva, Optimal Dynamics' Vice President of Engineering, says the company's RESTful API is built on top of a secure, reliable and scalable microservice infrastructure running in the cloud. Customer data is fully encrypted both at rest and in-transit, and Optimal Dynamics has adopted and adheres to best-practice fault-tolerance techniques and uses well-tested tools and strategies to ensure the reliability of the CORE.ai platform while maintaining the highest level of performance as scale increases.

CORE.ai's High-Dimensional AI uses approximate dynamic programming, a version of reinforcement learning adapted for high-dimensional problems in operations research, based on the insights gained over the decades of research conducted at CASTLE Labs.

Reinforcement learning is a form of machine learning in which a software system learns to accomplish a defined goal by trial and error within a changing environment. Algorithms accomplish this through repeated feedback loops that iteratively improve a set of available actions. In approximate dynamic programming, these available actions are encoded in mathematical functions known as policies. In this context, a policy tells the computer model how to act optimally under uncertainty.
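To make the idea of a policy concrete, here is a deliberately simplified, hypothetical sketch of the approximate dynamic programming pattern; it is not Optimal Dynamics' actual model, and every number in it is made up. A single truck repeatedly chooses which lane to haul next; the policy picks the move that maximizes expected immediate profit plus a learned estimate of the value of the location it ends up in, and that estimate is refined from simulated, uncertain outcomes.

    # Toy approximate-dynamic-programming sketch (illustrative only, not
    # Optimal Dynamics' model): simulate, optimize, and learn a value function.
    import random

    locations = ["A", "B", "C"]
    value = {loc: 0.0 for loc in locations}   # learned downstream value of each location
    alpha, gamma = 0.1, 0.9                   # learning rate and discount factor

    BASE_PROFIT = {("A", "B"): 500, ("B", "C"): 400, ("C", "A"): 650,
                   ("A", "C"): 300, ("B", "A"): 450, ("C", "B"): 350}

    def expected_profit(origin, destination):
        return BASE_PROFIT[(origin, destination)]

    def simulate_profit(origin, destination):
        # Stochastic simulator: realized profit varies around the expectation.
        return BASE_PROFIT[(origin, destination)] + random.gauss(0, 50)

    def policy(origin):
        # Act under uncertainty: best expected profit plus downstream value.
        options = [d for d in locations if d != origin]
        return max(options, key=lambda d: expected_profit(origin, d) + gamma * value[d])

    state = "A"
    for _ in range(10_000):                   # simulate many dispatch decisions
        action = policy(state)
        reward = simulate_profit(state, action)
        # Nudge the value estimate of the current location toward the outcome.
        value[state] += alpha * (reward + gamma * value[action] - value[state])
        state = action

    print(value)                              # learned location values

Real freight networks involve thousands of drivers, loads and constraints, which is exactly what makes the high-dimensional versions of this loop hard; the toy above only illustrates the shape of the idea.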

A trainyard is full of railcars and intermodal containers on flatcars. (Photo: Jim Allen/FreightWaves)

Early forms of reinforcement learning, and dynamic programming, were first developed in the 1950s.

Warren Powell explains the difference between reinforcement learning and approximate dynamic programming this way: "In the 1990s and early 2000s, approximate dynamic programming and reinforcement learning were like British English and American English: two flavors of the same algorithmic strategy. Then, as people discovered that this entire algorithmic strategy (whether it is ADP or RL) did not solve all problems, people started branching out."

Don't worry if this is all starting to sound confusing. He says, "These buzz-phrases are so confusing, especially when even the research community is unable to define the terms. Argh!"

What matters is that some of these algorithmic strategies are ready for prime time. As Optimal Dynamics indicates, some of these algorithmic strategies are ready to solve important problems in big, global, legacy industries that are fundamental to our way of life.

The academic research from which CORE.ai descends has been applied in R&D collaborations between CASTLE Labs and large industrial and corporate partners representing every major supply chain logistics subsector.

For example, in "Schneider National Uses Data To Survive A Bumpy Economy," which appeared in the September 12, 2011 issue of Forbes, the author describes how a prior version of the technology from CASTLE Labs was applied to create a "fleet-wide tactical planning simulator" that would use software algorithms to mimic the decision-making of human dispatchers on an inhumanly large scale.

Daniel Powell told me that Schneider National credits the technology developed in collaboration with CASTLE Labs with helping it realize $39 million in annual savings at the time.

An intermodal container is unloaded from a ship for transport by truck. (Photo: Jim Allen/FreightWaves)

The Forbes article also describes how other models that were being developed in parallel at CASTLE Labs were implemented by other logistics companies going as far back as the 1980s, and how that transformed the industry even then. For example, an interactive optimization product called SuperSPIN was used by every major national and regional less-than-truckload (LTL) carrier.

According to CASTLE Lab's website, "SuperSPIN was a model that arrived during a period of tremendous change in the LTL trucking industry. SuperSPIN allowed companies to understand the trade-offs between the number of end of lines and the value of network density. It also played a role in determining which carriers survived, and was used in the planning of some of the largest LTL carriers that survived the shakeout."

Manhattan Associates, the publicly traded software company, continues to support SuperSPIN.

Combing through the literature on CASTLE Labs' website, one finds mention of collaborations with, and research funding from, other companies like YRC, Ryder Truck Lines, Roadway Package System (now part of FedEx), Embraer, UPS, NetJets, the Air Mobility Command, Air Products and Chemicals, Burlington Motor Carriers, Triple Crown Services, Sea-Land (now part of Maersk), North American Van Lines, the Burlington Northern Santa Fe Railroad (now BNSF) and Norfolk Southern. With Norfolk Southern, CASTLE Labs used approximate dynamic programming to optimize the use of locomotives.

Warren Powell's Ph.D. dissertation was on bulk service queues for LTL trucking. It was only after he started a new project as a faculty member with a carrier (Ryder Truck Line) that he learned about the load planning problem, which is an optimization problem.

His Ph.D. dissertation was funded by IU International, a diversified services company with interests in trucking, distribution, environmental services, food services and agribusiness, which was acquired in 1988 through a hostile takeover.

Many years ago, a Fortune 500 third-party logistics company built a proprietary network optimization system on predecessors to CORE.ai.

Reflecting on his work in freight transportation, Warren Powell says, "My work was roughly split between less-than-truckload, which used one modeling technology, and truckload, rail, Embraer and other operational applications, which used other modeling technologies. They all focused on operational models that required making decisions now that approximated the impact of these decisions on an uncertain future."

I asked Warren Powell why, after the decades he has spent studying dynamic resource allocation problems, the time is now ripe for Optimal Dynamics to take the work that has been done at CASTLE Labs, bring it into the real world and apply it to an entire industry like trucking, rather than to discrete, one-off problems within discrete, one-off companies.

A tractor pulls a flatbed trailer carrying an intermodal container. (Photo: Jim Allen/FreightWaves)

He said, "The trucking industry has been trying to develop advanced analytics since Schneider National initiated the effort in the late 1970s, but there was always something in the way lack of data (where is my driver?), poor computing facilities, and a basic lack of the types of analytics required to handle problems in the trucking industry."

He added, "30 years of research has developed the analytics we need to allow computers to solve these complex problems. This required combining the power of deterministic optimization tools (which emerged in the 1980s and 1990s), with machine learning and stochastic simulation, all at the same time."

Powell continued, "We can now run these powerful algorithms on the cloud, which offers virtually unlimited computing power. Finally, smartphones and the internet allow us to be in direct touch with drivers, avoiding the need for clumsy telephone calls (1980s) or even the use of expensive satellite systems."

Today, the path to market for startups like Optimal Dynamics has been somewhat smoothed by the broad awareness among business executives that the technology landscape has changed dramatically.

In the 1980s, Warren Powell's work was often called the "bleeding edge." Now, everyone understands the vast power of computers and the cloud, as well as the widespread adoption of smartphones that provide pervasive connectivity and facilitate direct communication with drivers.

In the past few years, people have also started to realize that computers can be smart through "AI," although there remains tremendous confusion about what this really means, since AI is actually an entire family of algorithmic technologies.

According to Warren Powell, the breakthroughs that enable computers to master chess and the Chinese game of Go simply are not robust enough to optimize a trucking company, because of the number of variables a trucking company must account for and the uncertainty one must contend with in the real world.

He commented, "It took me a lifetime to realize how to combine the power of optimization [to solve high-dimensional decision problems, but without uncertainty], with machine learning and simulation to crack the high-dimensional problems that arise in freight transportation."

I have personally witnessed how Warren lights up when he is thinking about how his work applies to problems in logistics and transportation. It happened when I first met him in 2016.

It happened again when I introduced him to executives at the freight forwarding unit of a large European container shipping company in March 2018.

Warren and I met with them at their headquarters, and wound up spending more than four hours talking about freight forwarding and how the various techniques developed at CASTLE Labs could be applied to solving some of the problems they wanted to solve in order to improve their operations. I left Warren with them after hours of conversation (I had a long drive home and wanted to beat traffic). To my amusement, they were so engrossed in conversation that they barely acknowledged that I was leaving.

I came away convinced that a lack of sufficient data would not be as big of an issue as I had previously assumed, and also that the problems the executives described could definitely be solved.

On November 23, 2016 I published Industry Study: Freight Trucking (#Startups). That blog post includes Optimal Dynamics in a very early and rudimentary market map of startups building software for the trucking industry. I came to know Optimal Dynamics and the people behind it after spending a day at CASTLE Labs in August 2016.

Daniel Powell has presented demos of early versions of CORE.ai at The New York Supply Chain Meetup in March 2018: Artificial Intelligence & Supply Chains, and again during The Worldwide Supply Chain Federation's inaugural global summit, #SCIT19, in June 2019 (Video).

Juliana Nascimento, Optimal Dynamics' Head of Optimization and Artificial Intelligence, was a panelist at #SCIT19 on the topic of innovation in land-based supply chain logistics (Video). Among other things, Juliana spent eight years at Kimberly-Clark in Brazil, where she ran Operational Planning & Foreign Trade, and before that Production Planning & Control and Strategic Planning, after earning her Ph.D. under the supervision of Warren Powell at CASTLE Labs.

A delivery van at work. (Photo: Jim Allen/FreightWaves)

As far as supply chain logistics is concerned, a platform like CORE.ai can be applied in rail, drayage, container shipping, air freight, and warehousing and distribution. Predecessors to CORE.ai have been applied in long-distance and middle-distance trucking, rail and air, real-time dispatching, routing and scheduling, and spare parts management, among others.

Estimates of the global market for artificial intelligence in supply chain logistics applications range from about $6.5 billion by 2023, with a compound annual growth rate of about 43%, according to Infoholic Research, to $10 billion by 2025, with a compound annual growth rate of about 46%, according to BizWit Research & Consulting LLP.

As I stated in my July 7 commentary, the goals of this series are:

In the next article in this series, we will talk about high-dimensional decision problems, such as the problems encountered in freight logistics, and why they pose such a challenge for AI systems like IBM Watson and Google DeepMind's AlphaGo. If you are a team working on innovations that you believe have the potential to significantly refashion global supply chains, we'd love to tell your story in FreightWaves. I am easy to reach on LinkedIn and Twitter. Alternatively, you can reach out to any member of the editorial team at FreightWaves at media@freightwaves.com.

Continued here:
Commentary: Combine Optimization, Machine Learning And Simulation To Move Freight - Benzinga

Twitter CTO on machine learning challenges: I'm not proud that we miss a lot of misinformation – VentureBeat


Twitter considers itself a hub of global conversation, but any regular user knows how frequently the discourse veers into angry rants or misinformation. While the company's investments in machine learning are intended to address these issues, executives understand the company has a long way to go.

According to Twitter CTO Parag Agrawal, it's likely the company will never be able to declare victory, because tools like conversational AI in the hands of adversaries keep the problems evolving rapidly. But Agrawal said he's determined to turn the tide and help Twitter fulfill its potential for good.

"It's become increasingly clear what our role is in the world," Agrawal said. "It is to serve the public conversation. And these last few months, whether they be around the implications on public health due to COVID-19, or to have a conversation around racial injustices in this country, have emphasized the role of public conversation as a concept."

Agrawal made his remarks during VentureBeat's Transform 2020 conference in a conversation with VentureBeat CEO Matt Marshall. During the interview, Agrawal noted that Twitter has been investing more in trying to highlight positive and productive conversations. That led to the introduction of following topics as a way to get people out of silos and to discover a broader range of views.

That said, much of his work still focuses on adversaries who are trying to manipulate the public conversation and how they might use these new techniques. He broke down these adversaries into four categories:

"Typically, an attempt at manipulating the conversation uses some combination of all of these four to achieve some sort of objective," he said.

The most harmful are those bots that manage to disguise themselves successfully as humans using the most advanced conversational AI. "These mislead people into believing that they're real people and allow people to be influenced by them," he said.

This multi-layered strategy makes fighting manipulation extraordinarily complex. Worse, those techniques advance and change constantly. And the impact of bad content is swift.

"If a piece of content is going to matter in a good or a bad way, it's going to have its impact within minutes and hours, and not days," he said. "So, it's not OK for me to wait a day for my model to catch up and learn what to do with it. And I need to learn in real time."

Twitter has won some praise recently for taking steps toward labeling misleading or violent tweets posted by President Trump, while other platforms such as Facebook have been more reluctant to take action. Beyond those headline-making decisions, however, Agrawal said the task of monitoring the platform has grown even more difficult in recent months, as issues like the pandemic and then Black Lives Matter sparked global conversations.

"We've had to work with an increased amount of passion on the service, whatever the topic of conversation, because of the heightened importance of these topics," he said. "And I've had to prioritize our work to best help people and improve the health of the conversation during this time."

Agrawal does believe the company is making progress. "We quickly worked on a policy around misinformation around COVID-19 as we saw that threat emerge," he said. "Our policy was meant specifically to mitigate harms. Our strategy in this space is not to tackle all misinformation in the world. There's too much of it and we don't have clinical approaches to navigate it ... Our efforts are not focused on determining what's true or false. They're focused on providing labels and annotations, so people can find easy access to reliable information, as well as the greater conversation around the topic, so that they can make up their mind."

The company will continue to expand its use of machine learning to flag bad content, he said. Currently, about 50% of the content enforced against for violating the terms of service is caught by those machine learning systems.

Still, there remains a sense of disappointment that more has not been done. Agrawal acknowledges that, noting that the process of turning policy into standards that can be enforced by machine learning remains a practical challenge.

"We build systems," he said. "That's why we ground solutions in policy, and then build using product and technology and our processes. It's designed to avoid biases. At the same time, it puts us in a situation where things move slower than most of us would like. It takes us a while to develop a process to scale, to have automation to enforce the policy. I'm not proud that we missed a large amount of misinformation even where we have a policy, because we haven't been able to build these automated systems."

Read more:
Twitter CTO on machine learning challenges: I'm not proud that we miss a lot of misinformation - VentureBeat