Machine learning PODA model projects the impact of COVID-19 on US motor gasoline demand – Green Car Congress

A team from Oak Ridge National Laboratory (ORNL), Aramco Services Company, MIT, the Michigan Department of Transportation and Argonne National Laboratory has developed a machine-learning-based model (Pandemic Oil Demand Analysis, PODA) to project the US medium-term gasoline demand in the context of the COVID-19 pandemic and to study the impact of government intervention. Their open-access paper appears in the journal Nature Energy.

The PODA model is a machine-learning-based model that projects US gasoline demand using COVID-19 pandemic data, government policies and demographic information. The Mobility Dynamic Index Forecast Module identifies the changes in travel mobility caused by the evolution of the COVID-19 pandemic and government orders. The Motor Gasoline Demand Estimation Module quantifies motor gasoline demand due to the changes in travel mobility. Source: Ou et al.

They found that under the reference infection scenario, US gasoline demand grows slowly after a quick rebound in May, and is unlikely to recover to a non-pandemic level prior to October 2020.

Under both the reference and a pessimistic scenario, a continual lockdown (no reopening) could depress motor gasoline demand temporarily, but it would help demand recover to a normal level more quickly because of its impact on the infection rate.

Under the optimistic infection scenario, the projected trend of motor gasoline demand will recover to about 95% of the non-pandemic gasoline level (almost fully recover) by late September 2020.

However, under the pessimistic infection scenario, a second wave of infections from mid-June to August could lower gasoline demand once more, although not to a level worse than that of April 2020.

The researchers conclude that their results imply that government intervention does impact the infection rate, which thereby impacts mobility and fuel demand.

Projections of the evolution of COVID-19 pandemic trends show that lockdowns help to reduce COVID-19 transmission by as much as 90% compared with a baseline without any social distancing in Austin, Texas. However, this unprecedented phenomenon could last for a few years: Kissler et al. suggested that, even after the pandemic peaks, COVID-19 surveillance should continue because a resurgence in contagion could be possible as late as 2024. Therefore, beyond the immediate economic responses, the longer-term impact on the US economy may persist well beyond 2020. An effective forecast or estimate of the pandemic's impacts could help people prepare for and navigate unknown risks. More specifically, reliably projecting oil demand, a critical leading indicator of the state of the US economy, is beneficial to related business activities and investment decisions.

There are studies that discuss the impacts of unexpected natural hazards and/or disasters on energy demand and/or consumption, and studies that evaluate the impacts of previous pandemics on tourism and economics. However, few studies have quantified and forecast oil demand under multiple pandemic scenarios, and such research is urgently needed.

To date, studies focused on the energy impacts of the COVID-19 pandemic are limited to the short-term energy outlook released by the US Energy Information Administration (EIA); this outlook uses a simplified evolution of the COVID-19 pandemic to forecast the US gross domestic product, energy supplies, demands and prices until the fourth quarter of 2021. In this work, we develop a model that combines personal mobility with motor gasoline demand and uses a neural network to correlate personal mobility with the evolution of the COVID-19 pandemic, government policies and demographic information.


The model contains two major modules: a Mobility Dynamic Index Forecast Module and a Motor Gasoline Demand Estimation Module. The Mobility Dynamic Index Forecast Module identifies the changes in travel mobility caused by the evolution of the COVID-19 pandemic and government orders, and it projects the changes in travel mobility indices relative to the pre-COVID-19 period in the United States.

The change in travel mobility, which affects the frequency of human contact or the level of social distancing, can reciprocally impact the evolution of the pandemic to some extent.

The Motor Gasoline Demand Estimation Module estimates vehicle miles traveled on pandemic days, taking into account the dynamic indices of travel mobility, and it quantifies motor gasoline demand by coupling gasoline demand with vehicle miles traveled.

The neural network model, which is the core of the PODA model, has 42 inputs, 2 layers and 25 hidden nodes for each layer, with rectified linear units as the activation function. In the PODA model, the potential induced travel demand due to the lower oil prices under the COVID-19 pandemic is not explicitly considered.
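
The architecture described here is small enough to sketch directly. Below is a minimal, hypothetical illustration of such a network, not the authors' released PODA code: 42 input features, two hidden layers of 25 ReLU units, and a single projected output.

```python
# Minimal sketch of the network architecture described above (42 inputs,
# 2 hidden layers of 25 nodes each, ReLU activations). This is an illustration,
# not the authors' released code; the output is a single projected quantity,
# e.g., the change in a mobility index.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(42, 25),   # 42 inputs: pandemic, policy and demographic features
    nn.ReLU(),
    nn.Linear(25, 25),   # second hidden layer of 25 nodes
    nn.ReLU(),
    nn.Linear(25, 1),    # projected output
)

x = torch.randn(8, 42)   # a batch of 8 hypothetical feature vectors
print(model(x).shape)    # torch.Size([8, 1])
```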

Resources

Ou, S., He, X., Ji, W. et al. (2020) Machine learning model to project the impact of COVID-19 on US motor gasoline demand. Nat Energy doi: 10.1038/s41560-020-0662-1


Machine Learning Market to Reach USD 117.19 Billion by 2027; Increasing Popularity of Self-Driving Cars to Propel Demand from Automotive Industry,…

Pune, July 17, 2020 (GLOBE NEWSWIRE) -- The global machine learning market size is anticipated to rise remarkably on account of advancements in deep learning. This, coupled with the amalgamation of analytics-driven solutions with ML capabilities, is expected to work in the market's favor in the coming years. As per a recent report by Fortune Business Insights, titled "Machine Learning Market Size, Share & Covid-19 Impact Analysis, By Component (Solution, and Services), By Enterprise Size (SMEs, and Large Enterprises), By Deployment (Cloud and On-premise), By Industry (Healthcare, Retail, IT and Telecommunication, BFSI, Automotive and Transportation, Advertising and Media, Manufacturing, and Others), and Regional Forecast, 2020-2027," the value of this market was USD 8.43 billion in 2019 and is likely to exhibit a CAGR of 39.2% to reach USD 117.19 billion by the end of 2027.
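
As a rough, illustrative sanity check of those figures (not part of the release), compounding the 2019 base at the quoted CAGR lands close to the 2027 forecast:

```python
# Illustrative check of the quoted numbers, not from the report itself:
# compound the 2019 market size at the stated CAGR through 2027.
base_2019 = 8.43        # USD billion, per the release
cagr = 0.392            # 39.2%, per the release
years = 2027 - 2019     # eight years of compounding

projected_2027 = base_2019 * (1 + cagr) ** years
print(f"Projected 2027 size: USD {projected_2027:.1f} billion")
# ~USD 119 billion, in the same ballpark as the quoted USD 117.19 billion;
# the small gap depends on the exact forecast window the report uses.
```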


Coronavirus has not only caused health problems and forced social distancing among people, it has also drastically hampered the industrial and commercial sectors. Much of the world is under home quarantine, and it is unclear when people will be able to move about freely again. The governments of various nations are making considerable efforts to bring the COVID-19 situation under control, and hopefully this obstacle will be overcome soon.

Fortune Business Insights is offering special reports on various markets impacted by the COVID-19 pandemic. These reports provide a thorough analysis of the market and are intended to help players and investors study the market and chalk out growth strategies for better revenue generation.


What Are the Objectives of the Report?

The report is based on a 360-degree overview of the market that discusses the major factors driving, restraining, challenging, and creating opportunities for the market. It also covers the current trends prevalent in the market, recent industry developments, and other insights that will help investors chalk out growth strategies for the future. The report also highlights the major segments and significant players operating in the market. For more information on the report, log on to the company website.

Drivers & Restraints-Huge Investment in Artificial Intelligence to Benefit the Market

The e-commerce sector has shown significant growth in the past few years with the advent of retail analytics. Companies such as Alibaba, eBay, and Amazon are utilizing advanced data analytics solutions to boost their sales. Thus, the adoption of analytical solutions in the e-commerce sector, which enhances the consumer experience and lifts sales, is one of the major factors promoting machine learning market growth. In addition to this, the use of machine intelligence solutions for encrypting and protecting data is giving the market a further boost. Furthermore, massive investments in artificial intelligence (AI) and efforts to introduce innovations in this field are expected to add impetus to the market in the coming years.

On the flip side, national security threats such as deepfakes and other fraudulent uses, coupled with the misuse of robots, may hamper overall market growth. Nevertheless, the introduction and increasing popularity of self-driving cars in the automotive industry are projected to create new growth opportunities for the market in the coming years.


Segment:

IT and Telecommunication Segment Bagged Major Share, Soon to Be Overpowered by Healthcare Sector

Based on segmentation by industry, the IT and telecommunication segment earned a 22.0% machine learning market share and emerged dominant. However, the current COVID-19 pandemic has increased the popularity of wearable medical devices for tracking personal health and diet. This is expected to help the healthcare segment emerge dominant in the coming years.

Regional Analysis-Asia Pacific to Exhibit Fastest Growth Rate Owing to Rising Adoption by Developing Economies

Region-wise, North America emerged dominant in the market, with revenue of USD 3.07 billion in 2019. This is attributable to the presence of significant players such as IBM Corporation, Oracle Corporation, and Amazon.com, and their investments in research and development of better software solutions for this technology. On the other hand, the market in Asia Pacific is expected to exhibit a rapid CAGR over the forecast period on account of the increasing adoption of artificial intelligence, machine learning, and other recent advancements in rising economies such as India and China.

Competitive Landscape-

Players Focusing on Development of Responsible Machine Learning to Strengthen Their Position

The global market generates significant revenues from companies such as Microsoft Corporation, IBM Corporation, SAS Institute Inc., Amazon.com, and others. The principal objective of these players is to develop responsible machine learning that will help prevent unauthorized use of such solutions for fraudulent or data theft crimes. Other players are engaging in collaborative efforts to strengthen their position in the market.

Major Industry Developments of this Market Include:

March 2019: Microsoft added its latest and most advanced ML capability to the 365 platform. The new feature helps strengthen internet-facing virtual machines by increasing security through integration with machine learning in Azure's Security Center.


Have a Look at Related Research Insights:

Commerce Cloud Market Size, Share & Industry Analysis, By Component (Platform, and Services), By Enterprise Size (SMEs, and Large Enterprises), By Application (Grocery and Pharmaceuticals, Fashion and Apparel, Travel and Hospitality, Electronics, Furniture and Bookstore, and Others), By End-use (B2B, and B2C), and Regional Forecast, 2020-2027

Big Data Technology Market Size, Share & Industry Analysis, By Offering (Solution, Services), By Deployment (On-Premise, Cloud, Hybrid), By Application (Customer Analytics, Operational Analytics, Fraud Detection and Compliance, Enterprise Data Warehouse Optimization, Others), By End Use Industry (BFSI, Retail, Manufacturing, IT and Telecom, Government, Healthcare, Utility, Others) and Regional Forecast, 2019-2026

Artificial Intelligence (AI) Market Size, Share and Industry Analysis By Component (Hardware, Software, Services), By Technology (Computer Vision, Machine Learning, Natural Language Processing, Others), By Industry Vertical (BFSI, Healthcare, Manufacturing, Retail, IT & Telecom, Government, Others) and Regional Forecast, 2019-2026

Artificial Intelligence (AI) in Manufacturing Market Size, Share & COVID-19 Impact Analysis, By Offering (Hardware, Software, and Services), By Technology (Computer Vision, Machine Learning, Natural Language Processing), By Application (Process Control, Production Planning, Predictive Maintenance & Machinery Inspection), By Industry (Automotive, Medical Devices, Semiconductor & Electronics), and Regional Forecast, 2020-2027

Artificial Intelligence (AI) in Retail Market Size, Share & Industry Analysis, By Offering (Solutions, Services), By Function (Operations-Focused, Customer-Facing), By Technology (Computer Vision, Machine Learning, Natural Language Processing, and Others), and Regional Forecast, 2019-2026

Emotion Detection and Recognition Market Size, Share and Global Trend By Component (Software tools, Services), By Technology (Pattern Recognition Network, Machine Learning, Natural Language Processing), By Application (Marketing & Advertising, Media & Entertainment), By End-User (Government, Healthcare, Retail) and Geography Forecast till 2026

About Us:

Fortune Business Insights offers expert corporate analysis and accurate data, helping organizations of all sizes make timely decisions. We tailor innovative solutions for our clients, assisting them in addressing challenges distinct to their businesses. Our goal is to empower our clients with holistic market intelligence, giving a granular overview of the market they are operating in.

Our reports contain a unique mix of tangible insights and qualitative analysis to help companies achieve sustainable growth. Our team of experienced analysts and consultants use industry-leading research tools and techniques to compile comprehensive market studies, interspersed with relevant data.

At Fortune Business Insights, we aim at highlighting the most lucrative growth opportunities for our clients. We therefore offer recommendations, making it easier for them to navigate through technological and market-related changes. Our consulting services are designed to help organizations identify hidden opportunities and understand prevailing competitive challenges.

Contact Us:

Fortune Business Insights Pvt. Ltd.
308, Supreme Headquarters, Survey No. 36, Baner, Pune-Bangalore Highway, Pune 411045, Maharashtra, India.
Phone: US: +1-424-253-0390 | UK: +44-2071-939123 | APAC: +91-744-740-1245
Email: sales@fortunebusinessinsights.com
Fortune Business Insights: LinkedIn | Twitter | Blogs

Read the Press Release: https://www.fortunebusinessinsights.com/press-release/global-machine-learning-market-10095


How Machine Learning Will Impact the Future of Software Development and Testing – The Union Journal

Machine learning (ML) and artificial intelligence (AI) are regularly thought of as the gateways to a futuristic world in which robots interact with us like people and computers can become smarter than humans in every way. But of course, machine learning is already being used in millions of applications around the world, and it is already beginning to shape how we live and work, often in ways that go unnoticed. And while these technologies have been compared to destructive bots or blamed for stoking artificial panic, they are helping in major ways, from software to biotech.

Some of the flashier applications of machine learning are in emerging technologies like self-driving vehicles; thanks to ML, automated driving software can not only improve itself through millions of simulations, it can also adjust on the fly when confronted with new situations on the road. But ML is arguably even more important in fields like software testing, which underpins millions of other technologies.

So how exactly does machine learning affect the world of software development and testing, and what does the future of this interaction look like?

A Briefer on Machine Learning and Artificial Intelligence

First, let's discuss the distinction between ML and AI, since these technologies are related but frequently confused with each other. Machine learning describes a system of algorithms designed to help a computer improve automatically through experience. In other words, through machine learning, a function (like facial recognition, driving, or speech-to-text) can get better and better through continuous testing and refinement; to the outside observer, the system appears to be learning.

AI is considered intelligence demonstrated by a machine, and it frequently uses ML as its foundation. It's possible to have an ML system that doesn't exhibit AI, but it's difficult to have AI without ML.

The Importance of Software Testing

Now, let's take a look at software testing, an important component of the software development process, and arguably the most crucial one. Software testing is designed to ensure the product works as intended, and in most cases it's a process that plays out many times over the course of development, before the product is actually completed.

Through software testing, you can proactively identify bugs and other defects before they become a real problem, and correct them. You can also evaluate a product's capabilities, using tests to assess its speed and performance under a range of different circumstances. Ultimately, this leads to a better, more reliable product, and lower maintenance costs over the product's lifetime.

Attempting to deliver a software product without thorough testing would be akin to erecting a large building without a real foundation. In fact, it is estimated that the cost of fixing problems after software delivery can be 4-5x the total cost of the project itself when proper testing has not been fully carried out. When it comes to software development, failing to test is failing to plan.

How Machine Learning Is Reshaping Software Testing

Here, we can bring the two together. How is machine learning reshaping the world of software development and testing for the better?

The simple answer is that ML is already being used by software testers to automate and improve the testing process. It's typically used in combination with the agile methodology, which emphasizes continuous delivery and incremental, iterative development rather than building an entire product at once. It's one of the reasons I have argued that the future of agile and scrum methodologies involves a lot of machine learning and artificial intelligence.

Machine learning can improve software testing in numerous ways.

While cognitive computing holds the promise of further automating a mundane but extremely important process, difficulties remain. We are nowhere near the level of process automation maturity needed for full-blown automation. Even in today's best software testing environments, machine learning assists in batch processing bundled code sets, enabling testing and handling of issues with big data without the need to decouple, except in circumstances when errors occur. And even when errors do occur, the structured ML will notify the user, who can flag the problem for future machine or human correction and let the system continue its automated testing procedures.

Already, ML-based software testing is improving consistency, reducing errors, saving time and, all the while, lowering costs. As it becomes more advanced, it is going to reshape the field of software testing in new and even more innovative ways. But the important phrase there is "going to." While we are not yet there, we anticipate the next decade will continue to improve how software developers iterate toward a finished product in record time. It's just one reason the future of software development will not be nearly as custom as it once was.

Nate Nead is the CEO of SEO.co, a full-service SEO company, and DEV.co, a custom web and software development firm. For over a decade Nate has provided strategic guidance on technology and marketing solutions for some of the most popular online brands. He and his team advise Fortune 500 and SMB clients on software, development and internet marketing. Nate and his team are based in Seattle, Washington and West Palm Beach, Florida.


Machine Learning In The Enterprise: Where Will The Next Trillion Dollars Of Value Accrue? – Forbes

Every company will become an ML company.

In the world of Harry Potter, the sorting hat serves as an algorithm that takes data from a student's behavioral history, preferences and personality and turns that into a decision on which Hogwarts house they should join. If the real world had sorting hats, they would take the form of machine learning (ML) applications that make autonomous decisions based on complex datasets. While software has been eating the world, ML is starting to eat software, and it is supercharging trillion-dollar global industries such as healthcare, security and agriculture.

If ML is expected to create significant value, the question becomes: where will this value accrue? I will explore ways that value will be created and captured by three types of companies: traditional companies applying ML, companies building industry-agnostic ML tools and companies building vertically-integrated ML applications.

Machine learning is not just for the tech giants

ML innovation coming out of Facebook, Amazon, Apple, Netflix and Google (FAANG) is well known, from news feeds to recommendation engines, but most people are not as aware of the increasing demand for ML from traditional industries. Global spending on AI systems is projected to reach $98 billion in 2023, over 2.5x the amount spent in 2019, with financial services, retail, and automotive leading the way. Blackrock, an investment management firm with over $7 trillion in AUM, released several ML-powered ETFs in 2018. ML has rapidly gained mindshare in the healthcare industry, and budget for ML-driven solutions spanning medical imaging, diagnostics and drug discovery is expected to reach $10 billion in the next three years.

Across these enterprise customers, three broad customer segments have emerged: software engineers, data scientists and business analysts, sometimes known as citizen data scientists. Although business analysts are less technical by training, they comprise a large and growing segment of users who are applying ML to help companies make sense of their multiplying data repositories.

Machine learning tools are embedded across industries

To accommodate these customer segments, companies looking to craft pickaxes for the gold rush have proliferated. "The challenge is not to make ML transparent but rather to make the painful parts like logging, data management, deployment and reproducibility easy, then to make model training efficient and debuggable," said Stuart Bowers, the former VP of Engineering at Tesla and Snap.

Incumbent vendors, most notably the public clouds, have adopted an end-to-end platform approach as part of their strategy to sell more infrastructure services. AWS's ML platform, SageMaker, was originally intended for expert developers and data scientists, and it recently launched SageMaker Studio to expand the audience to less technical users. For tech giants like AWS, selling ML tools is a means to drive additional infrastructure spend from their customers, meaning they can afford to offer these tools at a low cost.

Unicorns have also built value, often in partnership with the cloud providers. Databricks, an ML platform known for its strong data engineering capabilities built on top of Apache Spark, was founded in 2013 and is now valued at $6.2 billion. The partnership between Databricks and Microsoft enables Microsoft to drive more data and compute to Azure while massively scaling its own go-to-market efforts.

However, enterprise practitioners are starting to demand best-of-breed solutions rather than tools designed to nudge them to buy more infrastructure. To address this, the next generation of startups will pursue a more targeted approach. In contrast to the incumbents' broad-brush platform plays, startups can pick specific problems and develop specialized tools to solve them more effectively. Within the ML tools space, three areas pose significant challenges to users today.

Dataset management

While ML results can be elegant, practitioners spend most of their time on the data cleaning, wrangling and transformation parts of the workflow. Because data is increasingly scattered in different formats across multiple machines and clouds, it is difficult to engineer the data into a consumable format that teams can easily access and use to collaborate.

To solve this, Mike Del Balso, the co-founder and CEO of Tecton, is democratizing the best practices he championed at Uber through his new startup. "Broken data is the most common cause of problems in production ML systems. Modelers spend most of their time selecting and transforming features at training time and then building the pipelines to deliver those features to production models," he noted. Tecton simplifies complexity in the data layer by building a platform to manage these features: intelligent, real-time signals curated from a business's raw data that are critical to operationalizing ML.

Further upstream, Liquidata is building the open source GitHub equivalent for databases. In my conversation with Tim Sehn, Liquidata's co-founder and CEO and the former VP of Engineering at Snap, he emphasized that "we need to collaborate on open data, just like with open source software, at Internet-scale. That is why we created DoltHub, a place on the internet to store, host, and collaborate on open data for free."

Experiment tracking & version control

Another common problem is the lack of reproducibility across results. The absence of version control for ML models makes it difficult to recreate an experiment.

As Lukas Biewald, co-founder and CEO of Weights and Biases, shared in our interview, "today, the biggest pain is a lack of basic software and best practices to manage a completely new style of coding. You can't paint well with a crappy paintbrush, you can't write code well in a crappy IDE (integrated development environment) and you can't build and deploy great deep learning models with the tools we have now." His company launched an experiment tracking solution in 2018, enabling customers like OpenAI to scale insights from a single researcher to the entire team.
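
As a hedged illustration of what such experiment tracking looks like in practice, the snippet below logs metrics with the Weights & Biases Python client; the project name, config and metric values are placeholders rather than anything from the article.

```python
# Illustrative experiment tracking with the Weights & Biases client.
# Project name, config values and the loss curve are placeholders.
import random
import wandb

run = wandb.init(project="demo-experiment-tracking", config={"lr": 0.01, "epochs": 5})

for epoch in range(run.config.epochs):
    loss = 1.0 / (epoch + 1) + random.uniform(0, 0.05)  # stand-in for a real training loss
    wandb.log({"epoch": epoch, "loss": loss})            # each run is versioned and comparable

run.finish()
```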

Model Scalability

Building the infrastructure to scale model deployment and monitor results in production is another critical component in this maturing market.

Anyscale, the startup behind the open source framework Ray, has abstracted away the infrastructure underlying distributed applications and scalable ML. In my conversation with Robert Nishihara, Anyscale's co-founder and CEO, he shared that "just as Microsoft's operating system created an ecosystem for developer tools and applications, we are creating the infrastructure to power a rich ecosystem of applications and libraries, ranging from model training to deployment, that make it easy for developers to scale ML applications."
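
To make the scaling idea concrete, here is a minimal Ray sketch (the scoring function is a placeholder workload, not Anyscale's production code) that fans the same work out across whatever cores or cluster nodes are available.

```python
# Minimal Ray example: the same function runs in parallel on local cores or,
# unchanged, across a cluster. The scoring function is a placeholder workload.
import ray

ray.init()  # starts a local Ray instance (or connects to a cluster if configured)

@ray.remote
def score_model(config_id):
    # stand-in for an expensive step such as training or batch inference
    return config_id, config_id ** 2

futures = [score_model.remote(i) for i in range(8)]  # fan out eight tasks in parallel
print(ray.get(futures))                              # gather the results
ray.shutdown()
```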

Scalability is also rapidly advancing in the field of natural language processing, or NLP. Hugging Face established an open source library to build, train, and share NLP models. "There has been a paradigm shift in the last three years, whereby transfer learning for NLP started to dramatically change the accessibility and accuracy of integrating NLP into business applications," said Clément Delangue, the company's co-founder and CEO. "We are making it possible for companies to apply NLP models from the latest research into production within a week rather than in months."
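
A quick, hedged illustration of that accessibility: with Hugging Face's transformers library, a pretrained NLP model can be applied in a few lines. The input text here is invented, and the sentiment model is simply whatever default the library downloads on first use.

```python
# Applying a pretrained NLP model via Hugging Face's transformers library.
# The pipeline downloads a default sentiment model on first use; the text is a placeholder.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("Transfer learning has made NLP far more accessible."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```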

Other promising startups include Streamlit, which allows developers to create an ML app with just a few lines of Python and deploy it instantly. OctoML applies an additional intelligence layer to ML, making systems easier to optimize and deploy. Fiddler Labs has built an Explainable AI Platform to continuously interpret and monitor results in production.
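
As a sketch of the "few lines of Python" claim, here is a toy Streamlit app; the slider and the linear rule standing in for a trained model are illustrative assumptions, not an official example.

```python
# app.py: a toy Streamlit app, run with `streamlit run app.py`.
# The "model" is a placeholder linear rule standing in for a real ML model.
import streamlit as st

st.title("Demand projection demo")
drop = st.slider("Mobility drop (%)", 0, 100, 30)   # interactive input widget
projected = 100 - 0.8 * drop                        # stand-in for model inference
st.write(f"Projected demand: {projected:.1f}% of baseline")
```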

To build long-term durable companies in the face of stiff competition from incumbents, startups are asking themselves two questions: To which set of customers am I indispensable? What is the best way to reach these customers?

Many startups pitch the idea of capturing 1% of a large market, but often these big markets are already well served, if not crowded. Companies focused on winning a core customer segment end up exhibiting strong early traction that translates into long-term expansion potential. To reach these customers, most incumbents like Databricks and DataRobot have embraced a top-down, enterprise sales motion. Similar to what we've seen in the developer tools space, I expect ML startups will eventually evolve from pure enterprise sales to drive bottoms-up adoption and gain an advantage over today's enterprise-focused incumbents.

Vertically-integrated machine learning applications are upending the status quo

Some of the most exciting companies in ML are pioneering business models to disrupt entire industries. Auto has been the most obvious example, as $10 billion of funding poured into the industry in 2019 alone. The next generation of verticals where ML will also have a revolutionary impact include healthcare, industrials, security and agriculture.

"ML is most effective when it's ML plus X," said Richard Socher, the Chief Scientist at Salesforce. "The best ML companies have a clear vertical focus. They don't even call themselves an ML company." He points to healthcare as a uniquely promising area: Athelas has applied ML to immune monitoring, helping patients optimize drug intake by collecting data on their white blood cell count. Curai leverages ML to augment the efficiency and quality of doctors' recommendations, allowing them to spend more time treating patients. Zebra and AIdoc empower radiologists by training models on datasets to identify medical conditions faster.

In the industrials and logistics space, Covariant is a startup that combines reinforcement learning and neural networks that enable robots to manage objects in large warehouse facilities. Agility and Dexterity are similarly building robots that adapt to unpredictable situations in increasingly sophisticated ways. Interos applies ML to evaluate global supply chain networks, helping enterprises make critical decisions around vendor management, business continuity and risk.

Within security and defense, Verkada has reimagined enterprise physical security by intelligently analyzing and learning from real-time footage. Anduril has built an ML backbone that integrates data from sensor towers to augment intelligence in the interest of national security. Shield AI's software allows unmanned systems to interpret signals and act intelligently on the battlefield.

Agriculture is another vertical that has reaped enormous benefits from ML. John Deere acquired Blue River Technology, a startup that developed intelligent crop-spraying equipment. "We are changing the world of agriculture by bringing computer vision techniques to identify individual plants and take action on a plant-by-plant basis," said Lee Redden, Chief Scientist of the combined company's Intelligent Solutions Group. Other notable enterprise AgTech companies include Indigo, which applies ML to precision farming, harnessing data to produce food more profitably and sustainably.

Where do we go from here?

ML has quietly become part of our daily lives, powering our cars, the operations in our hospitals and the food we eat. Large incumbents have pioneered the state of the art so far, but the real promise lies in the next wave of ML applications and tools that will translate the hype around machine intelligence from a Harry Potter-like fantasy into tangible, societal value.

There are many reasons to be optimistic about the value ML can create in the coming years. Traditional companies will train millions of citizen data scientists to reshape broken industries into more productive ones. ML tools will lower the barriers to building intelligent applications, pushing millions of new ideas into production every day. Vertical ML business models will democratize access to healthy food, reliable physical security and affordable healthcare.

That's where we'll find the true value of machine learning.


Commentary: Combine Optimization, Machine Learning And Simulation To Move Freight – Benzinga

The views expressed here are solely those of the author and do not necessarily represent the views of FreightWaves or its affiliates.

Author's Disclosure: I am not an investor in Optimal Dynamics, either personally or through REFASHIOND Ventures. I have no financial relationship with Optimal Dynamics.

On July 7, FreightWaves ran "Commentary: Optimal Dynamics, the decision layer of logistics?", which kicked off a series that will focus on "AI in Supply Chain."

I believe that the incorporation of decision-making technologies in the supply chain is potentially the most transformative development in global industrial supply chains that we will see for the next two or three decades.

The purpose of this series is to seek evidence to support or refute that premise.

As I stated in the July 7 commentary, Optimal Dynamics is setting out to solve dynamic resource allocation problems, a set of problems that deal with the allocation of scarce resources in an optimal manner over space and time when conditions are uncertain and changing randomly in complex networks.

A CargoLux freighter takes off from an airport. (Photo: Jim Allen/FreightWaves)

Dynamic resource allocation problems are a class of problems that Warren Powell, co-founder of Optimal Dynamics, has studied over the course of his 39-year professorship at Princeton University, where he is a member of the Department of Operations Research and Financial Engineering. As Founder and Manager of Princeton University's Computational Stochastic Optimization and Learning Labs (CASTLE Labs), Powell has been at the forefront of researching and developing models and algorithms for stochastic optimization with practical applications in transportation and logistics. He will become a professor emeritus at Princeton University effective September 1, 2020.

He co-founded Optimal Dynamics in 2016, with his son Daniel Powell, who is Optimal Dynamics' CEO.

If you are a regular reader of FreightWaves, you have encountered discussions of network optimization in supply chain logistics before in this column. For example: Commentary: Toshiba's simulated bifurcation machines may optimize your supply chain (February 17, 2020); Commentary: Applying machine learning to improve the supply chain (July 30, 2019); Commentary: How can machine learning be applied to improve transportation? (July 23, 2019); and Logistics network optimization why this time is different (April 23, 2019).

A cargo ship set to unload at dockside. (Photo: Jim Allen/FreightWaves)

Optimal Dynamics' platform, CORE.ai, makes the company's proprietary high-dimensional artificial intelligence, High-Dimensional AI, available for general use through the CORE.ai web portal. It can also be implemented by trucking fleets and by other software vendors that wish to implement it within their products for example a transportation management system could implement CORE.ai through Optimal Dynamics' Open API protocols.

Eduardo Silva, Optimal Dynamics' Vice President of Engineering, says the company's RESTful API is built on top of a secure, reliable and scalable microservice infrastructure running in the cloud. Customer data is fully encrypted both at rest and in-transit, and Optimal Dynamics has adopted and adheres to best-practice fault-tolerance techniques and uses well-tested tools and strategies to ensure the reliability of the CORE.ai platform while maintaining the highest level of performance as scale increases.

CORE.ai's High-Dimensional AI uses approximate dynamic programming, a version of reinforcement learning adapted for high-dimensional problems in operations research, based on the insights gained over the decades of research conducted at CASTLE Labs.

Reinforcement learning is a form of machine learning in which the software system learns to accomplish a defined goal by trial and error within a changing environment. Algorithms accomplish this through repetitive feedback loops based on iterative improvements to a set of available actions. In approximate dynamic programming, these available actions are encoded in mathematical functions known as policies. In this context, a policy tells the computer model how to act optimally under uncertainty.
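
To make the policy idea concrete, here is a tiny, self-contained sketch of tabular Q-learning on an invented five-city dispatch problem. It is a generic illustration of the trial-and-error loop described above, not Optimal Dynamics' CORE.ai or approximate dynamic programming at industrial scale.

```python
# Toy reinforcement-learning sketch: learn a dispatch "policy" by trial and error.
# The network, rewards and costs are invented; this is not Optimal Dynamics' code.
import random

N_LOCATIONS = 5                                  # toy network of 5 cities
REWARD = {(0, 3): 10, (3, 0): 8, (2, 4): 6}      # hypothetical profitable loads
Q = {(s, a): 0.0 for s in range(N_LOCATIONS) for a in range(N_LOCATIONS)}
alpha, gamma, epsilon = 0.1, 0.9, 0.2            # learning rate, discount, exploration

def policy(state):
    """Epsilon-greedy policy: mostly exploit what has been learned, sometimes explore."""
    if random.random() < epsilon:
        return random.randrange(N_LOCATIONS)
    return max(range(N_LOCATIONS), key=lambda a: Q[(state, a)])

state = 0
for step in range(10_000):                       # the repetitive feedback loop
    action = policy(state)                       # where to send the truck next
    reward = REWARD.get((state, action), -1)     # -1 models an empty repositioning cost
    next_state = action
    best_next = max(Q[(next_state, a)] for a in range(N_LOCATIONS))
    # incremental update of the value estimates that sit behind the policy
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state

print("Learned best move from city 0:", max(range(N_LOCATIONS), key=lambda a: Q[(0, a)]))
```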

A trainyard is full of railcars and intermodal containers on flatcars. (Photo: Jim Allen/FreightWaves)

Early forms of reinforcement learning, and dynamic programming, were first developed in the 1950s.

Warren Powell explains the difference between reinforcement learning and approximate dynamic programming this way: "In the 1990s and early 2000s, approximate dynamic programming and reinforcement learning were like British English and American English: two flavors of the same algorithmic strategy. Then, as people discovered that this entire algorithm strategy (whether it is ADP or RL) did not solve all problems, people started branching out."

Don't worry if this is all starting to sound confusing. He says "These buzz-phrases are so confusing, especially when even the research community is unable to define the terms. Argh!"

What matters is that some of these algorithmic strategies are ready for prime time. As Optimal Dynamics indicates, some of these algorithmic strategies are ready to solve important problems in big, global, legacy industries that are fundamental to our way of life.

The academic research from which CORE.ai is descended has been applied in R&D collaborations between CASTLE Labs and large industrial and corporate partners representing every major supply chain logistics subsector.

For example, in "Schneider National Uses Data To Survive A Bumpy Economy," which appeared in the September 12, 2011 issue of Forbes, the author describes how a prior version of the technology from CASTLE Labs was applied to create a "fleet-wide tactical planning simulator" that would use software algorithms to mimic the decision-making of human dispatchers on an inhumanly large scale.

Daniel Powell told me that Schneider National credits the technology developed in collaboration with CASTLE Labs with helping it realize $39 million in annual savings at the time.

An intermodal container is unloaded from a ship for transport by truck. (Photo: Jim Allen/FreightWaves)

The Forbes article also describes how other models that were being developed in parallel at CASTLE Labs were implemented by other logistics companies going as far back as the 1980s, and how that transformed the industry even then. For example, an interactive optimization product called SuperSPIN was used by every major national and regional less-than-truckload (LTL) carrier.

According to CASTLE Lab's website, "SuperSPIN was a model that arrived during a period of tremendous change in the LTL trucking industry. SuperSPIN allowed companies to understand the trade-offs between the number of end of lines and the value of network density. It also played a role in determining which carriers survived, and was used in the planning of some of the largest LTL carriers that survived the shakeout."

Manhattan Associates, the publicly traded software company, continues to support SuperSPIN.

Combing through the literature on CASTLE Labs' website one finds mention of collaborations with, and research funding from other companies like YRC, Ryder Truck Lines, Roadway Package System (now part of FedEx), Embraer, UPS, Netjets, The Air Mobility Command, Air Products and Chemicals, Burlington Motor Carriers, Triple Crown Services, Sea-Land (now part of Maersk), North American Van Lines, The Burlington Northern Santa Fe Railroad (now BNSF) and Norfolk Southern. With Norfolk Southern, CASTLE Labs used approximate dynamic programming to optimize locomotives.

Warren Powell's Ph.D. dissertation was on bulk service queues for LTL trucking. It was only after he started a new project as a faculty member, with a carrier (Ryder Truck Line) that he learned about the load planning problem, which is an optimization problem.

His Ph.D. dissertation was funded by IU International, a diversified services company with interests in trucking, distribution, environmental services, food services and agribusiness, which was acquired in 1988 through a hostile takeover.

Many years ago, a Fortune 500 third-party logistics company built a proprietary network optimization system on predecessors to CORE.ai.

Reflecting on his work in freight transportation, Warren Powell says "My work was roughly split between less-than-truckload which used one modeling technology, and truckload, rail, Embraer and other operational applications which used other modeling technologies. They all focused on operational models that required making decisions now that approximated the impact of these decisions on an uncertain future."

I asked Warren Powell why, after the decades he has spent studying dynamic resource allocation problems, the time is now ripe for Optimal Dynamics to take the work that has been done at CASTLE Labs, bring it into the real world and apply it to an entire industry like trucking, rather than to discrete, one-off problems within discrete, one-off companies.

A tractor pulls a flatbed trailer carrying an intermodal container. (Photo: Jim Allen/FreightWaves)

He said, "The trucking industry has been trying to develop advanced analytics since Schneider National initiated the effort in the late 1970s, but there was always something in the way lack of data (where is my driver?), poor computing facilities, and a basic lack of the types of analytics required to handle problems in the trucking industry."

He added, "30 years of research has developed the analytics we need to allow computers to solve these complex problems. This required combining the power of deterministic optimization tools (which emerged in the 1980s and 1990s), with machine learning and stochastic simulation, all at the same time."

Powell continued, "We can now run these powerful algorithms on the cloud, which offers virtually unlimited computing power. Finally, smartphones and the internet allow us to be in direct touch with drivers, avoiding the need for clumsy telephone calls (1980s) or even the use of expensive satellite systems."

Today, the path to market for startups like Optimal Dynamics has been somewhat smoothed by the broad awareness among business executives that the technology landscape has changed dramatically.

In the 1980s, Warren Powell's work was often called the "bleeding edge." Now, everyone understands the vast power of computers and the cloud, as well as the widespread adoption of smartphones that provide pervasive connectivity and facilitate direct communication with drivers.

In the past few years, people have also started to realize that computers can be smart through "AI," although there remains tremendous confusion about what this really means, since AI is actually an entire family of algorithmic technologies.

According to Warren Powell, the breakthroughs that enable computers to solve chess and Chinese Go simply are not robust enough to optimize a trucking company because of the number of variables that a trucking company must account for, and the uncertainty one must contend with in the real world.

He commented, "It took me a lifetime to realize how to combine the power of optimization [to solve high-dimensional decision problems, but without uncertainty], with machine learning and simulation to crack the high-dimensional problems that arise in freight transportation."

I have personally been witness to how Warren lights up when he is thinking about how his work applies to problems in logistics and transportation. It happened when I first met him in 2016.

It happened again when I introduced him to executives at the freight forwarding unit of a large European container shipping company in March 2018.

Warren and I met with them at their headquarters, and wound up spending more than four hours talking about freight forwarding and how the various techniques developed at CASTLE Labs could be applied to solving some of the problems they wanted to solve in order to improve their operations. I left Warren with them after hours of conversation (I had a long drive home and wanted to beat traffic). To my amusement, they were so engrossed in conversation that they barely acknowledged that I was leaving.

I came away convinced that a lack of sufficient data would not be as big of an issue as I had previously assumed, and also that the problems the executives described could definitely be solved.

On November 23, 2016 I published Industry Study: Freight Trucking (#Startups). That blog post includes Optimal Dynamics in a very early and rudimentary market map of startups building software for the trucking industry. I came to know Optimal Dynamics and the people behind it after spending a day at CASTLE Labs in August 2016.

Daniel Powell has presented demos of early versions of CORE.ai at The New York Supply Chain Meetup in March 2018: Artificial Intelligence & Supply Chains, and again during The Worldwide Supply Chain Federation's inaugural global summit, #SCIT19, in June 2019 (Video).

Juliana Nascimento, Optimal Dynamics' Head of Optimization and Artificial Intelligence, was a panelist at #SCIT19 on the topic of innovation in land-based supply chain logistics (Video). Among other things, Juliana ran Operational Planning & Foreign Trade, and before that Production Planning & Control, and Strategic Planning for eight years at Kimberly-Clark in Brazil, after she earned her Ph.D. under the supervision of Warren Powell at CASTLE Labs.

A delivery van at work. (Photo: Jim Allen/FreightWaves)

As far as supply chain logistics is concerned, a platform like CORE.ai can be applied in rail, drayage, container shipping, air freight, and warehousing and distribution. Predecessors to CORE.ai have been applied in long-distance and middle-distance trucking, rail and air, real-time dispatching, routing and scheduling, and spare parts management, among others.

Estimates of the market for artificial intelligence in supply chain logistics applications peg the size of the global market at about $6.5 billion by 2023, with a compound annual growth rate of about 43%, according to Infoholic Research. Or, $10 billion by 2025 with a compound growth rate of about 46% according to BizWit Research & Consulting LLP.

As I stated in my July 7 commentary, the goals of this series are:

In the next article in this series, we will talk about high-dimensional decision problems, such as the problems encountered in freight logistics, and why they pose such a challenge for AI systems like IBM Watson and Google DeepMind's AlphaGo. If you are a team working on innovations that you believe have the potential to significantly refashion global supply chains, we'd love to tell your story in FreightWaves. I am easy to reach on LinkedIn and Twitter. Alternatively, you can reach out to any member of the editorial team at FreightWaves at media@freightwaves.com.


Machine Learning on AWS: Getting Started with SageMaker and More – Dice Insights

Ready to get started with machine learning (ML) on AWS? ML requires a lot of processing capability, more than you're likely to have at home. That's where a cloud platform such as AWS can help. But how do you get started? Here are some tips to add ML to your career.

First, learn as much as you can about ML independent of AWS. To maximize your career opportunities, you want your experience and knowledge to be broad and not focus exclusively on AWS.

ML is not for the faint of heart. It requires serious study. However, opportunities for those with machine-learning skills abound, with routine six-figure salaries for engineers and developers who focus on deep learning, machine learning, and artificial intelligence (A.I.). According to Burning Glass, which collects and analyzes millions of job postings from across the country, machine-learning engineers with even a few years of experience can unlock pretty healthy compensation, and that's before you throw in benefits such as stock options.

Job interviews for ML-related positions are often tough and require quite a bit of preparation, as well. Even everyday developers and analysts (i.e., those who don't primarily focus on ML in their work) may very well end up using more ML tools and principles in coming years. If you're a student specializing in computer science or a related field, that's as good a reason as any to build out your ML and A.I. knowledge.

Read books, take online classes, and invest as much time as you can into learning it. TensorFlow, the open-source library for deep-learning software that was created by Google, has a nice page of learning resources.

Next, look at the coding frameworks available. The aforementioned TensorFlow is considered one of the top, as is PyTorch, which was created by Facebook. Although AWS has great tools for building ML with little coding, you're still going to want to know how to use ML coding frameworks.

AWS presently has 17 services related to ML, and they're likely to add more in the years to come. This is too much to learn all at once, so we recommend a couple of things: First, make sure you're completely familiar with basic computing via AWS, including how to provision EC2 servers and, most importantly, how much it's going to cost you per hour to allocate those servers. You can't afford surprises, especially when dealing with the kind of processing resources you need.
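
As a minimal sketch of what provisioning looks like in code (not from the article), the boto3 snippet below launches a single instance and waits for it to be running. The AMI ID, key pair name and region are placeholders to replace with your own, and the instance starts billing by the hour as soon as it is running.

```python
# Illustrative EC2 provisioning with boto3. AMI ID, key pair and region are placeholders.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: use a current Deep Learning AMI ID
    InstanceType="t3.micro",          # choose a GPU type (e.g., p3.*) only when you need it
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",            # placeholder key pair name
)

instance = instances[0]
instance.wait_until_running()
instance.reload()
print(instance.id, instance.state["Name"], instance.public_dns_name)

# Stop or terminate the instance when you are done so you are not billed for idle hours.
# instance.stop()
```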

Second, of the 17 services, the one you want to start with is SageMaker. This is AWS's flagship ML product and it includes a complete IDE called SageMaker Studio.

SageMaker Studio offers a Quick Start; get to the Studio from the main SageMaker page, scroll down, and you'll see the Quick Start.

Fill in the name and choose the permissions. (You'll likely need to create a role; you can learn about that here.) Then you'll be asked for your VPC ID and subnet, so make sure you have a basic understanding of those, as well. Click Next, and you'll see your SageMaker Studio dashboard. After a few minutes, you'll see your new Studio show up in a list with the word Ready by it.

Click the Open Studio link to go into the Studio. The Studio will open in a new window; the first time it will take a couple minutes to load.

In the lower-right you'll see a pane with a demonstration video and a video tutorials link with more information to help you get started. There's also a link to a tour guide, which provides a complete walkthrough for setting up multiple experiments and trials.

With ML, experiments are the processes that you run many times over, as the system learns. Trials are the individual outcomes from the experiments. You provide different data with the experiments and observe the trials. Typically, each time you only modify the data slightly; this is known as an incremental change. Over time your system continues to gather more and more data and learn from the outcomes.
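
The experiment/trial pattern described above can be sketched in plain Python; SageMaker Experiments provides a managed version of the same idea, but a framework-agnostic toy keeps the concept clear. The training function, hyperparameter and scores here are invented stand-ins.

```python
# Framework-agnostic sketch of the experiment/trial pattern described above.
# The training function and its score are invented stand-ins for a real job.
import random

def train_and_evaluate(learning_rate):
    """Stand-in for a real training job; returns a validation score."""
    return 1.0 - abs(0.01 - learning_rate) + random.uniform(-0.02, 0.02)

experiment = {"name": "demo-experiment", "trials": []}

# Each trial makes a small, incremental change (here, only the learning rate).
for trial_id, lr in enumerate([0.001, 0.005, 0.01, 0.05]):
    score = train_and_evaluate(lr)
    experiment["trials"].append({"trial": trial_id, "learning_rate": lr, "score": score})

best = max(experiment["trials"], key=lambda t: t["score"])
print("Best trial:", best)
```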

If you're into pattern and facial recognition and aren't paranoid, you might try out the AWS DeepLens, which is a hardware camera built to integrate with AWS ML. (You probably want to put tape over its lens when you're not using it.)

One place where you can stay on top of it all is through the official AWS ML blog. Many of the articles are really advanced, but if you at least skim through them, you'll pick up tidbits of knowledge here and there, even if you're just starting out on your machine-learning journey.

Machine learning is a huge topic and there's a lot to learn. Start slowly, study as much as you can, and just keep practicing with the different tools available. Over time, you'll become competent, and if you keep at it, you'll eventually become an expert. Have patience and perseverance!


Twitter CTO on machine learning challenges: I'm not proud that we miss a lot of misinformation – VentureBeat

Watch all the Transform 2020 sessions on-demand right here.

Twitter considers itself a hub of global conversation, but any regular user knows how frequently the discourse veers into angry rants or misinformation. While the company's investments in machine learning are intended to address these issues, executives understand the company has a long way to go.

According to Twitter CTO Parag Agrawal, it's likely the company will never be able to declare victory, because tools like conversational AI in the hands of adversaries continue to make the problems evolve rapidly. But Agrawal said he's determined to turn the tide to help Twitter fulfill its potential for good.

"It's become increasingly clear what our role is in the world," Agrawal said. "It is to serve the public conversation. And these last few months, whether they be around the implications on public health due to COVID-19, or to have a conversation around racial injustices in this country, have emphasized the role of public conversation as a concept."

Agrawal made his remarks during VentureBeat's Transform 2020 conference in a conversation with VentureBeat CEO Matt Marshall. During the interview, Agrawal noted that Twitter has been investing more in trying to highlight positive and productive conversations. That led to the introduction of following topics as a way to get people out of silos and to discover a broader range of views.

That said, much of his work still focuses on adversaries who are trying to manipulate the public conversations and how they might use these new techniques. He broke down these adversaries into four categories:

"Typically, an attempt at manipulating the conversation uses some combination of all of these four to achieve some sort of objective," he said.

The most harmful are those bots that manage to disguise themselves successfully as humans using the most advanced conversational AI. "These mislead people into believing that they're real people and allow people to be influenced by them," he said.

This multi-layered strategy makes fighting manipulation extraordinarily complex. Worse, those techniques advance and change constantly. And the impact of bad content is swift.

"If a piece of content is going to matter in a good or a bad way, it's going to have its impact within minutes and hours, and not days," he said. "So, it's not OK for me to wait a day for my model to catch up and learn what to do with it. And I need to learn in real time."

Twitter has won some praise recently for taking steps toward labeling misleading or violent tweets posted by President Trump when other platforms such as Facebook have been more reluctant to take action. Beyond those headline-making decisions, however, Agrawal said the task of monitoring the platform has grown even more difficult in recent months as issues like the pandemic and then Black Lives Matter sparked global conversations.

"We've had to work with an increased amount of passion on the service, whatever the topic of conversation, because of the heightened importance of these topics," he said. "And I've had to prioritize our work to best help people and improve the health of the conversation during this time."

Agrawal does believe the company is making progress. "We quickly worked on a policy around misinformation around COVID-19 as we saw that threat emerge," he said. "Our policy was meant specifically to mitigate harms. Our strategy in this space is not to tackle all misinformation in the world. There's too much of it and we don't have clinical approaches to navigate … Our efforts are not focused on determining what's true or false. They're focused on providing labels and annotations, so people can find easy access to reliable information, as well as the greater conversation around the topic so that they can make up their mind."

The company will continue to expand its machine learning to flag bad content, he said. Currently, about 50% of enforcement actions involve content that was flagged by those machine learning systems for violating the terms of service.

Still, there remains a sense of disappointment that more has not been done. Agrawal acknowledges that, noting that the process of turning policy into standards that can be enforced by machine learning remains a practical challenge.

"We build systems," he said. "That's why we ground solutions in policy, and then build using product and technology and our processes. It's designed to avoid biases. At the same time, it puts us in a situation where things move slower than most of us would like. It takes us a while to develop a process to scale, to have automation to enforce the policy. I'm not proud that we missed a large amount of misinformation even where we have a policy because we haven't been able to build these automated systems."

Read more:
Twitter CTO on machine learning challenges: I'm not proud that we miss a lot of misinformation - VentureBeat

Global Machine Learning as a Service (MLaaS) Market Size, Growth, Trends and Forecast Analysis Report 2020 to 2027 – 3rd Watch News

The Machine Learning as a Service (MLaaS) Market report is a systematic study that delivers key statistics on the market's status, development trends, the competitive landscape, and the development status of key regions. The report profiles the major players and analyzes the strengths and limitations of these well-known companies through SWOT analysis. It also covers growing trends linked with major opportunities for the expansion of the Machine Learning as a Service (MLaaS) industry.

Get a free sample report (including full TOC, tables, and figures): https://www.globalmarketers.biz/report/business-services/2015-2027-global-machine-learning-as-a-service-(mlaas)-industry-market-research-report,-segment-by-player,-type,-application,-marketing-channel,-and-region/147849#request_sample

Key players profiled in the report include: Yottamine Analytics, Google, Fuzzy.ai, AT&T, Ersatz Labs, Inc., Hewlett Packard, IBM, BigML, Sift Science, Inc., Hypergiant, Microsoft, and Amazon Web Services.

The Geographical Analysis Covers the Following Regions

The report provides additional commentary on the recent outbreak of COVID-19 (coronavirus disease), the latest scenario, and the resulting economic slowdown across the overall industry. In addition, it covers the development of the Machine Learning as a Service (MLaaS) market in the major regions across the world.

Note: Get up to a 30% discount on this report.

Ask For Discount: https://www.globalmarketers.biz/discount_inquiry/discount/147849

Global Machine Learning as a Service (MLaaS) Market Segmentation: By Types

Cloud and Web-based Application Programming Interfaces (APIs), Software Tools, Others

Global Machine Learning as a Service (MLaaS) Market Segmentation: By Applications

Cloud and Web-based Application Programming Interfaces (APIs), Software Tools, Others

For any other requirements, or to customize the report, inquire here: https://www.globalmarketers.biz/report/business-services/2015-2027-global-machine-learning-as-a-service-(mlaas)-industry-market-research-report,-segment-by-player,-type,-application,-marketing-channel,-and-region/147849#inquiry_before_buying

This research report presents a 360-degree overview of the competitive landscape of the Machine Learning as a Service (MLaaS) market. Furthermore, it offers extensive statistics relating to current trends, technological advancements, tools, and methodologies.

Global Machine Learning as a Service (MLaaS) Market Research Report 2020

Chapter 1 About the Machine Learning as a Service (MLaaS) Industry

Chapter 2 World Market Competition Landscape

Chapter 3 World Machine Learning as a Service (MLaaS) Market share

Chapter 4 Supply Chain Analysis

Chapter 5 Company Profiles

Chapter 6 Globalization & Trade

Chapter 7 Distributors and Customers

Chapter 8 Import, Export, Consumption and Consumption Value by Major Countries

Chapter 9 World Machine Learning as a Service (MLaaS) Market Forecast through 2027

Chapter 10 Key success factors and Market Overview

The report concludes by shedding light on recent developments in the Machine Learning as a Service (MLaaS) market and their influence on its future growth.

Table of Contents & Report Details: https://www.globalmarketers.biz/report/business-services/2015-2027-global-machine-learning-as-a-service-(mlaas)-industry-market-research-report,-segment-by-player,-type,-application,-marketing-channel,-and-region/147849#table_of_contents

The rest is here:
Global Machine Learning as a Service (MLaaS) Market Size, Growth, Trends and Forecast Analysis Report 2020 to 2027 - 3rd Watch News

Alpha Health Predictive AI Research Featured In Spotlight Session at the International Conference on Machine Learning – Yahoo Finance

Deep Claim model demonstrates potential to save the U.S. billions in wasted healthcare spending

SOUTH SAN FRANCISCO, Calif., July 16, 2020 /PRNewswire/ -- Alpha Health Inc., the first Unified Automation company for healthcare, announced today that its paper describing the company's method of using a neural network to predict health care billing claim denials will be featured as a spotlight session during the Healthcare Systems, Population Health and the Role of Health-Tech Workshop at the International Conference on Machine Learning 2020 (ICML 2020). Byung-Hak Kim, Ph.D., lead author of the paper and AI Technical Lead at Alpha Health, will be featured in a pre-recorded spotlight session that airs during the workshop's live session on Friday, July 17. The paper was co-authored by other members of the Alpha Health technical team, including Co-Founder and Chief Technology Officer Varun Ganapathi, Ph.D.; Co-Founder and Vice President of Engineering Andy Atwal; and Lead Machine Learning Engineer Seshadri Sridharan.

The paper describes one of the company's machine learning models, believed to be the first published deep learning-based system that successfully predicts how a claim will be paid in advance of submission to a payer. Called Deep Claim, this machine learning model predicts whether, when, and how much a payer will pay for a given hospital expense or claim.

"Deep Claim is an innovative neural network-based framework. It focuses on a part of the healthcare system that has received very little attention thus far," said Varun Ganapathi, Ph.D., co-author of the paper and Co-Founder and Chief Technology Officer at Alpha Health. "While much attention has focused on the potential of artificial intelligence and machine learning in diagnostics and drug discovery, this paper demonstrates the opportunity to apply these same approaches at scale to the back office of healthcare which could save the U.S. billions annually in wasted healthcare spending."

"I am deeply honored to have my work and the work of the team at Alpha Health featured in the Spotlight Session alongside five other papers from prestigious academic research centers, including University of Cambridge, Johns Hopkins University and NASA Frontier Development Labs, among others," said Byung-Hak Kim, Ph.D., lead author of the paper and AI Technical Lead at Alpha Health. "The fact that our model was trained on real-world claims data and that development included real deployment scenarios will enable us to integrate our research directly into our solution more quickly than a conceptual or theoretical research approach would otherwise allow. This helps us ensure that our research will directly benefit our health system customers as quickly as possible."

For this paper, Byung-Hak Kim and the Alpha Health team used almost three million de-identified claims to test the Deep Claim system. The data included in these claims contains demographic information, diagnoses, treatments, and billed amounts as inputs. The Deep Claim system then uses those inputs to predict the first response date, denial probability, denial reason codes with probabilities, and questionable fields in the claim. The ability to predict denial reason codes and questionable fields is especially promising, as these key insights are required to proactively improve claims before they are submitted. The developers of Deep Claim demonstrated that the system performed about 22 percent better than the best baseline system.
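The paper itself is the authoritative source for Deep Claim's actual architecture. Purely to illustrate the kind of multi-task model the article describes (encoded claim features in; denial probability and denial reason codes out), here is a minimal PyTorch sketch. The layer sizes, feature dimensionality, and the two prediction heads are assumptions made for the example, not Alpha Health's published design.

```python
# Illustrative multi-task network for claim-denial prediction (not the
# published Deep Claim architecture). Claim features are assumed to be
# pre-encoded into a fixed-length numeric vector.
import torch
import torch.nn as nn

class ClaimDenialNet(nn.Module):
    def __init__(self, n_features: int, n_reason_codes: int, hidden: int = 128):
        super().__init__()
        # Shared encoder over demographic, diagnosis, treatment, and
        # billed-amount features.
        self.encoder = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Head 1: probability that the claim is denied.
        self.denial_head = nn.Linear(hidden, 1)
        # Head 2: multi-label probabilities over possible denial reason codes.
        self.reason_head = nn.Linear(hidden, n_reason_codes)

    def forward(self, x: torch.Tensor):
        h = self.encoder(x)
        denial_prob = torch.sigmoid(self.denial_head(h)).squeeze(-1)
        reason_probs = torch.sigmoid(self.reason_head(h))
        return denial_prob, reason_probs

# Example usage with random stand-in data.
model = ClaimDenialNet(n_features=64, n_reason_codes=20)
claims = torch.randn(8, 64)
denial_prob, reason_probs = model(claims)
print(denial_prob.shape, reason_probs.shape)  # torch.Size([8]) torch.Size([8, 20])
```

A production system along the lines described in the article would also need outputs for the first response date and questionable fields, proper encodings of diagnosis and treatment codes, and calibrated training objectives.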

The paper demonstrates that this deep learning system can accurately predict how an insurance company will respond to a claim. Automating this process could save individual hospitals millions of dollars each year. One of the machine learning scientists who reviewed the paper said it was "excellent work."

"Grappling with the claims system is a key question that has been understudied in (Machine Learning) for health," another reviewer wrote.

The U.S. spent about $3.6 trillion on healthcare in 2018, more than $11,000 per person, according to the Centers for Medicare and Medicaid Services. Recent studies have found that fully one-quarter of healthcare spending in the U.S. is wasteful. Administrative costs contribute the most significant share of that wasteful spending, and were estimated to cost about $266 billion annually. Estimates are that hospitals and healthcare systems spend about $120 in administrative costs for each claim just to recoup money owed them. This system of claim preparation and billing is at the core of the healthcare system in the U.S., and is a key driver of healthcare costs. Identifying ways to eliminate some of it by improving efficiency, correcting billing errors, and saving time could significantly reduce wasteful spending.


The rest is here:
Alpha Health Predictive AI Research Featured In Spotlight Session at the International Conference on Machine Learning - Yahoo Finance

Do Machine Learning and AI Go Hand-in-Hand in Digital Transformation? – Techiexpert.com – TechiExpert.com

The amount of data stored by banks is expanding rapidly, which gives banks an opportunity to conduct predictive analytics and improve their businesses. However, data scientists face significant challenges in handling such large volumes of data efficiently and in producing insights with genuine business value.

Digital processes and social media exchanges produce data trails. Systems, sensors, and mobile phones transmit data. Big data arrives from many sources with ever-increasing speed, volume, and variety. Every day 2.5 quintillion bytes of data are created, and 90% of the data in the world today was produced within the past two years.

In this big data era, the amount of data stored by any bank is expanding quickly, and the nature of that data has become increasingly complex. These trends give a bank an enormous opportunity to improve its business. Traditionally, banks have extracted information from a sample of their internal data and produced periodic reports to support future decision-making. Today, with the availability of huge amounts of structured and unstructured data from both internal and external sources, there is increased pressure and focus on building an enterprise-wide view of the customer efficiently. This, in turn, enables a bank to conduct large-scale customer-experience analysis and gain deeper insights into customers, channels, and the whole market.

With the development of new financial services, banks' databases evolve to meet business needs, and as a result they have become extremely complex. Because structured data is traditionally stored in tables, there is plenty of room for complexity to grow: for instance, a new table is added to a database for a new line of business, or a new database replaces the previous one during a business-system upgrade. Besides internal sources, there is also structured data from external sources such as economic, demographic, and geographic data. To guarantee the consistency and accuracy of the data, a standard data format is defined for the structured data.

The growth of unstructured data creates far greater complexity. While some unstructured data originates inside a bank, including web log files, call records, and video replays, more and more is obtained from external sources, for example social media data from Twitter, Facebook, and WeChat. Unstructured data is usually stored as files rather than database tables. Millions of files holding tens or hundreds of terabytes of data can be managed effectively on the BigInsights platform, an Apache Hadoop-based, hardware-agnostic software platform that provides new ways of using diverse and large-scale data collections along with built-in analytic capabilities.

Because unstructured data is not organized in a well-defined way, extra work is needed to move it into a regularized or schematized structure before modeling. The IBM SPSS Analytic Server (AS) provides big data analytics capabilities, including integrated support for unstructured predictive analytics in the Hadoop environment. It can be used to directly access and query the data stored in BigInsights, eliminating the need to move data and enabling optimal performance on large data volumes. Using the tools provided by AS, procedures for normalizing unstructured data can be designed and run on a regular schedule without writing complex code and scripts.
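The article attributes this schematization step to IBM SPSS Analytic Server and BigInsights; product aside, the general move from raw records to a regularized table can be sketched in a few lines of Python. The log format, regular expression, and field names below are invented for illustration and are not drawn from the article.

```python
# Minimal sketch: turning unstructured web-log lines into a structured table.
# The log format and column names are assumptions for illustration only.
import re
import pandas as pd

LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) - - \[(?P<timestamp>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+)'
)

raw_lines = [
    '203.0.113.7 - - [16/Jul/2020:10:01:12 +0000] "GET /accounts/balance HTTP/1.1" 200 512',
    '198.51.100.23 - - [16/Jul/2020:10:01:15 +0000] "POST /transfer HTTP/1.1" 302 128',
]

records = []
for line in raw_lines:
    match = LOG_PATTERN.match(line)
    if match:  # skip lines that do not fit the expected layout
        records.append(match.groupdict())

log_table = pd.DataFrame(records)  # one row per log line, one column per field
print(log_table)
```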

Even structured data needs additional preparation to improve data quality. On BigInsights this is done with Big SQL (Structured Query Language), a tool provided by BigInsights that combines a SQL interface with parallel processing for handling big data. It can be used to deal with incomplete, incorrect, or irrelevant data efficiently. In addition, statistical techniques are applied through Big SQL to reduce the effect of noise in the data: for instance, nonsensical values are identified and eliminated, and some features are normalized or ranked. In this way, highly suspect outliers are kept from distorting the analysis. This step helps separate signal from noise in big data analysis.
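Big SQL is the tool named in the article; as a product-neutral sketch of the same cleansing steps (removing nonsensical values, capping outliers, normalizing a feature), a small pandas example might look like the following, with column names and thresholds chosen purely for illustration.

```python
# Product-neutral sketch of the cleansing steps described above:
# drop nonsensical values, cap outliers, and normalize a feature.
# Column names and thresholds are assumptions for illustration.
import pandas as pd

df = pd.DataFrame({
    "customer_id": [1, 2, 3, 4, 5],
    "age": [34, -1, 52, 41, 230],          # -1 and 230 are nonsensical
    "monthly_balance": [1200.0, 98000.0, 1500.0, 900.0, 2100.0],
})

# 1. Remove records with impossible values.
df = df[(df["age"] > 0) & (df["age"] < 120)]

# 2. Cap extreme outliers at the 1st/99th percentiles (winsorizing).
low, high = df["monthly_balance"].quantile([0.01, 0.99])
df["monthly_balance"] = df["monthly_balance"].clip(low, high)

# 3. Min-max normalize so downstream models see comparable scales.
bal = df["monthly_balance"]
df["monthly_balance_norm"] = (bal - bal.min()) / (bal.max() - bal.min())

print(df)
```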

Once all the data has been prepared and cleansed, a data fusion process is carried out on BigInsights. Data from multiple sources are combined, and the integrated data is stored in a data warehouse in which the relationships between tables are well defined. Data conflicts arising from heterogeneous sources are resolved. A full join between tables with very large numbers of records can be done on BigInsights in minutes, which generally takes hours without parallel processing. Based on the data warehouse, many attributes can be associated with each customer, and a consolidated enterprise customer view is generated.
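On BigInsights these joins would be written in Big SQL against the warehouse tables; the same fusion idea can be sketched in pandas with invented table and column names, as below.

```python
# Sketch of fusing data from multiple sources into one customer view.
# Table and column names are invented for illustration.
import pandas as pd

accounts = pd.DataFrame({
    "customer_id": [101, 102, 103],
    "avg_balance": [2300.0, 15400.0, 880.0],
})
transactions = pd.DataFrame({
    "customer_id": [101, 101, 102, 103],
    "amount": [120.0, 75.5, 980.0, 15.0],
})
demographics = pd.DataFrame({
    "customer_id": [101, 102, 103],
    "region": ["TX", "CA", "NY"],
})

# Aggregate the transactional source, then join everything on customer_id
# to build a single, consolidated customer view.
spend = (transactions.groupby("customer_id")["amount"]
         .sum().rename("total_spend").reset_index())
customer_view = (
    accounts.merge(spend, on="customer_id", how="left")
            .merge(demographics, on="customer_id", how="left")
)
print(customer_view)
```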

1. Customer segmentation and preference analysis: This module produces fine-grained customer segments in which customers share similar preferences for particular sub-branches or market regions. Based on these results, banks can gain deeper insight into their customers' characteristics and preferences, improve customer satisfaction, and achieve precision marketing by customizing banking products, services, and marketing messages. This is one of the most significant advantages of big data analytics in the banking sector (a minimal clustering sketch of this kind of segmentation follows this list).

2. Potential customer identification: This module enables banks to identify potential high-income or loyal individuals who are likely to become profitable to the bank but who are not currently customers. With this approach, banks can obtain a more complete and accurate target list of high-value prospects, which improves marketing efficiency and brings substantial benefits to the bank.

3. Customer network analysis: By deriving customer and product affinity from an analysis of social media networks, customer network analysis can improve customer retention, cross-selling, and up-selling.

4. Market potential analysis: Using economic, demographic, and geographic data, this module creates a spatial distribution of both existing and potential customers. With a market-potential distribution map, banks get a clear overview of where target customers are located and can identify customer-dense or customer-sparse areas for investment or divestment, which supports the bank's customer marketing and analysis.

5. Channel allocation and operation optimization: Based on the bank's network and the spatial distribution of customer resources, this module optimizes the placement (i.e., location and type) and operation of service channels (e.g., retail branches or automated teller machines). Balancing revenue, customer satisfaction, and reach against costs can improve customer retention and attract new customers.
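The customer segmentation described in item 1 above is commonly built on an unsupervised clustering algorithm. The following scikit-learn sketch shows the general shape of such a module; the features, the number of segments, and the choice of k-means are assumptions for illustration, not the specific method behind the article's claims.

```python
# Minimal customer-segmentation sketch using k-means clustering.
# Features and the number of segments are assumptions for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Invented features: [average balance, monthly transactions, digital-channel share]
customers = rng.normal(loc=[5000, 20, 0.5], scale=[2000, 8, 0.2], size=(500, 3))

# Scale features so no single one dominates the distance metric.
scaled = StandardScaler().fit_transform(customers)

# Group customers into 4 segments; each label identifies a segment whose
# members share similar behaviour and can be targeted with tailored offers.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
segments = kmeans.fit_predict(scaled)

print(np.bincount(segments))  # number of customers per segment
```

In practice, the segment labels would be joined back onto the consolidated customer view described earlier so that each segment can be profiled and targeted.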

Business intelligence (BI) tools are capable of identifying potential risks associated with money-lending processes in banks. With the help of big data analytics, banks can analyze market trends and decide to lower or raise interest rates for different individuals across different regions.

Data-entry errors from manual forms can also be reduced to a minimum, as big data analysis highlights anomalies in customer data.

With fraud-detection algorithms, customers who have poor credit scores can be identified so that banks do not lend money to them. Another major application in banking is limiting the rate of fraudulent or suspicious transactions that could support antisocial activity or terrorism.

Big data analysis can help banks understand customer behavior based on inputs obtained from their investment patterns, shopping trends, motivation to invest, and personal or financial backgrounds. This data plays a crucial role in winning customer loyalty through personalized banking solutions, which leads to a mutually beneficial relationship between banks and customers. Customized banking solutions can greatly increase lead generation as well.

A majority of bank employees say that ensuring banking services meet all the regulatory compliance criteria set by the government is a major challenge; 68% of bank employees cite it as their biggest concern in banking services.

BI tools can help analyze and track all regulatory requirements by going through each individual customer application for accurate validation.

With performance analytics, employee performance can be evaluated against monthly, quarterly, or yearly targets. Based on figures obtained from employees' current sales, big data analysis can determine ways to help them scale better. In addition, banking services as a whole can be monitored to identify what works and what doesn't.

Banks' customer service centers receive a constant stream of inquiries and feedback, and even social media platforms serve as a sounding board for customer experiences today. Big data tools can help sift through high volumes of data and respond to each item adequately and quickly. Customers who feel that their banks value their feedback promptly will stay loyal to the brand.

Ultimately, banks that do not innovate and ride the big data wave will not just be left behind but will become obsolete. Adopting big data analytics and other high-tech tools to transform the existing banking sector will play a big role in determining the longevity of banks in the digital age.

The banking sector has traditionally been relatively slow to innovate: 92 of the top 100 leading banks worldwide still rely on IBM mainframes in their operations. No wonder fintech adoption is so high. Compared with customer-driven, agile startups, traditional financial institutions stand little chance.

When it comes to big data, however, things get worse: most legacy systems cannot cope with the ever-growing workload. Trying to collect, store, and analyze the required amounts of data on an outdated infrastructure can put the stability of your entire system at risk.

As a result, organizations face the challenge of expanding their processing capacity or completely re-architecting their systems to meet the demand.

Moreover, where there is data, there is risk (especially considering the legacy issue mentioned above). Clearly, banking providers need to ensure that the customer data they collect and process remains safe at all times.

However, only 38% of organizations worldwide are prepared to handle the threat, according to ISACA International. That is why cybersecurity remains one of the most pressing issues in banking.

Furthermore, data security regulations are becoming stricter. The introduction of GDPR has placed certain restrictions on organizations worldwide that want to collect and use customer data. This should also be taken into account.

With so many different types of data in banking, and given its sheer volume, it is no surprise that organizations struggle to cope with it. This becomes even more evident when trying to separate useful data from the useless.

While the share of potentially valuable data is growing, there is still plenty of irrelevant data to deal with. This means organizations must prepare themselves and strengthen their methods for analyzing even more data and, where possible, find new applications for the data that has so far been considered irrelevant.

Despite the challenges mentioned, the advantages of big data in banking easily justify the risks: the insights it gives you, the resources it frees up, the money it saves. Data is a universal fuel that can propel your business to the top.

See the original post here:
Do Machine Learning and AI Go Hand-in-Hand in Digital Transformation? - Techiexpert.com - TechiExpert.com