2020: The year of seeing clearly on AI and machine learning – ZDNet

Tom Foremski

Late last year, I complained to Richard Socher, chief scientist at Salesforce and head of its AI projects, about the term "artificial intelligence," arguing that we should use more accurate terms such as machine learning or smart machine systems. "AI" creates unreasonably high expectations when the vast majority of applications are essentially extremely specialized machine learning systems that do specific tasks -- such as image analysis -- very well but do nothing else.

Socher said that when he was a post-graduate it rankled him also, and he preferred other descriptions such as statistical machine learning. He agrees that the "AI" systems that we talk about today are very limited in scope and misidentified, but these days he thinks of AI as being "Aspirational Intelligence." He likes the potential for the technology even if it isn't true today.

I like Socher's designation of AI as Aspirational Intelligence, but I'd prefer not to further confuse the public, politicians and even philosophers about what AI is today: It is nothing more than software in a box -- a smart machine system that has no human qualities or understanding of what it does. It's a specialized machine that has nothing to do with the systems that these days are called Artificial General Intelligence (AGI).

Before ML systems co-opted it, the term AI was used to describe what AGI is used to describe today: computer systems that try to mimic humans, their rational and logical thinking, and their understanding of language and cultural meanings to eventually become some sort of digital superhuman, which is incredibly wise and always able to make the right decisions.

There has been a lot of progress in developing ML systems but very little progress on AGI. Yet the advances in ML are being attributed to advances in AGI. And that leads to confusion and misunderstanding of these technologies.

Machine learning systems, unlike AGI, do not try to mimic human thinking -- they use very different methods to train themselves on large amounts of specialist data and then apply their training to the task at hand. In many cases, ML systems make decisions without any explanation, and it's difficult to determine the value of their black-box decisions. But if those results are presented as artificial intelligence, then they get far higher respect from people than they likely deserve.

For example, when ML systems used in applications such as recommending prison sentences are described as artificial intelligence systems, they gain higher regard from the people using them. It implies that the system is smarter than any judge. But if the term machine learning were used instead, it would underline that these are fallible machines and allow people to treat the results with some skepticism in key applications.

Even if we do develop future advanced AGI systems we should continue to encourage skepticism and we should lower our expectations for their abilities to augment human decision making. It is difficult enough to find and apply human intelligence effectively -- how will artificial intelligence be any easier to identify and apply? Dumb and dumber do not add up to a genius. You cannot aggregate IQ.

As things stand today, the mislabeled AI systems are being discussed as if they were well on their way to jumping from highly specialized non-human tasks to becoming full AGI systems that can mimic human thinking and logic. This has resulted in warnings from billionaires and philosophers that those future AI systems will likely kill us all -- as if a sentient AI would conclude that genocide is rational and logical. It certainly might appear to be a winning strategy if the AI system were trained on human behavior across recorded history, but that would never happen.

There is no rational logic for genocide. Future AI systems would be designed to love humanity and be programmed to protect and avoid human injury. They would likely operate very much in the vein of Richard Brautigan's 1967 poem "All Watched Over by Machines of Loving Grace" -- the last stanza:

I like to think
(it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal
brothers and sisters,
and all watched over
by machines of loving grace.

Let us not fear AI systems and in 2020, let's be clear and call them machine learning systems -- because words matter.

Link:
2020: The year of seeing clearly on AI and machine learning - ZDNet

Raley's Drive To Be Different Gets an Assist From Machine Learning – Winsight Grocery Business

Raley's has brought artificial intelligence to pricing not necessarily to go toe-to-toe with competitors, but to differentiate from them, President and CEO Keith Knopf said.

Speaking in a presentation at the National Retail Federation show in New York, Knopf described how the West Sacramento, Calif.-based food retailer is using machine learning algorithms from partner Eversight to help manage its price perception amid larger, and often cheaper, competitors while optimizing revenue by driving unit share growth and margin dollars. That benefit is going toward what he described as a differentiated positioning behind health and wellness.

"This is not just about pricing for the sake of pricing. This is pricing within a business strategy to differentiate and afford the investment in price in a way that is both financially sustainable and also relevant to the customer," Knopf said.

Raley's has been working with Eversight for about four years, and has since invested in the Palo Alto, Calif.-based provider of AI-led pricing and promotion management. Knopf described using insights and recommendations derived from Eversight's data crunching to support its merchants, helping to strategically manage the Rubik's Cube of pricing and promoting 40,000 items, each with varying elasticity, in stores with differing customer bases, price zones and competitive characteristics.

Raley's, Knopf said, is high-priced relative to its competitors, a reflection of its size and its ambitions. "We're a $3 billion to $4 billion retailer competing against companies much larger than us, with much greater purchasing power, and so for us, [AI pricing] is about optimization within our brand framework. We aspire to be a differentiated operator with a differentiated customer experience and a differentiated product assortment, which is guided more toward health and wellness. We have a strong position in fresh that is evolving through innovation. But we also understand that we are a high-priced, high-cost retailer."

David Moran, Eversight's co-founder, was careful to put his company's influence in perspective. Algorithms don't replace merchants or set a strategy, he said, but can support them by bringing new computing power that exceeds the work a merchant could do alone and has allowed for experimentation with pricing strategies across categories. In an example he shared, a mix of price changes -- some going up, others down -- helped to drive overall unit growth and profits in the olive oil category.

"The merchants still own the art: They are still the connection between the brand positioning, the price value perception, and they also own the execution," Knopf said. "This technology gets us down that road much faster and with greater confidence."

Knopf said he believes that pricing science, in combination with customer relationship management, will eventually trigger big changes in the nature of promotional spending by vendors, with a shift toward so-called below-the-line programs, such as everyday pricing and personalized pricing, and fewer above-the-line mass promotions, which he believes are ultimately ineffective at driving long-term growth.

"Every time we promote above the line, and everybody sees what everybody else does, no more units are sold in totality in the marketplace; it's just a matter of who's going to sell this week at what price," Knopf said. "I believe that it's in the manufacturer's best interest, and the retailer's best interest, to make pricing personalized and relevant, and the dollars that are available today will shift from promotions into a more personalized, one-on-one, curated relationship that a vendor, the retailer and the customer will share."

More:
Raley's Drive To Be Different Gets an Assist From Machine Learning - Winsight Grocery Business

Christiana Care offers tips to ‘personalize the black box’ of machine learning – Healthcare IT News

For all the potential benefits of artificial intelligence and machine learning, one of the biggest and, increasingly most publicized challenges with the technology is the potential for algorithmic bias.

But an even more basic challenge for hospitals and health systems looking to deploy AI and ML can be the skepticism from frontline staff -- a hesitance to use predictive models that, even if they aren't inherently biased, are certainly hard to understand.

At Delaware-based Christiana Care Health System, the past few years have seen efforts to "simplify the model without sacrificing precision," says Dr. Terri Steinberg, its chief health information officer and VP of population health informatics.

"The simpler the model, the more human beings will accept it," said Steinberg, who will talk more about this notion in a March 12 presentation at HIMSS20.

When it comes to pop health programs, the data sets used to drive the analytics matter, she explains. Whether it's EHR data, social determinants of health, claims data or even wearables information, it's key to select the most relevant data sources, use machine learning to segment the population and then, crucially, present those findings to care managers in a way that's understandable and fits their workflow.

At HIMSS20, Steinberg, alongside Health Catalyst Chief Data Scientist Jason Jones, will show how Christiana Care has been working to streamline its machine learning processes, to ensure they're more approachable and thus more likely to be embraced by its care teams.

Dr. Terri Steinberg, Christiana Care Health System

They'll explain how to assign relative value to pop health data and discuss some of the challenges associated with integrating them; they'll show how ML can segment populations and spotlight strategies for using new data sources that will boost the value and utility of predictive models.

"We've been doing this since 2012," said Steinberg. And now we have significant time under our belts, so we wanted to come back to HIMSS and talk about what we were doing in terms of programming for care management and, more important, how we're segmenting our population with machine learning."

"There are a couple of patterns that we've seen repeated across engagements that are a little bit counter to how people typically go about building these models today, which is to sort of throw everything at them and hope for the best," said Jones, of Health Catalyst, Christiana Care's vendor partner.

At Christiana Care, he said, the goal instead has been to "help people understand as much as they would like about how the models are working, so that they will trust and actually use them.

"We've found repeatedly that we can build technically fantastic models that people just don't trust and won't use," he added. "In that case, we might as well not bother in the first place. So we're going to go through and show how it is that we can build models in such a way that they're technically excellent but also well-trusted by the people who are going to use them."

In years past, "when we built the model and put it in front of our care managers and said, 'Here you go, now customize your treatment plans based on the risk score,' what we discovered is that they basically ignored the score and did what they wanted," Steinberg explained.

But simplifying a given model to the "smallest number of participants and data elements that can be" enables the development of something "small enough for people to understand the list of components, so that they think that they know why the model has made a specific prediction," she said.

That has more value than many population health professionals realize.

"The goal is to simplify the model as much as you can, so human beings understand the components," said Steinberg.

"People like understanding why a particular individual falls into a risk category," she said. "And then they sometimes would even like to know what the feature is that has resulted in the risk. The take home message is that the more human beings understand what the machine is doing, the more likely they are to trust the machine. We want to personalize the black box."

Steinberg and Jones will talk more about making machine learning meaningful at a HIMSS20 session titled "Machine Learning and Data Selection for Population Health." It's scheduled for Thursday, March 12, from 10-11 a.m. in room W414A.

Here is the original post:
Christiana Care offers tips to 'personalize the black box' of machine learning - Healthcare IT News

Going Beyond Machine Learning To Machine Reasoning – Forbes

From Machine Learning to Machine Reasoning

The conversation around Artificial Intelligence usually revolves around technology-focused topics: machine learning, conversational interfaces, autonomous agents, and other aspects of data science, math, and implementation. However, the history and evolution of AI is more than just a technology story. The story of AI is also inextricably linked with waves of innovation and research breakthroughs that run headfirst into economic and technology roadblocks. There seems to be a continuous pattern of discovery, innovation, interest, investment, cautious optimism, boundless enthusiasm, realization of limitations, technological roadblocks, withdrawal of interest, and retreat of AI research back to academic settings. These waves of advance and retreat seem to be as consistent as the back and forth of sea waves on the shore.

This pattern of interest, investment, hype, then decline, and rinse-and-repeat is particularly vexing to technologists and investors because it doesn't follow the usual technology adoption lifecycle. Popularized by Geoffrey Moore in his book "Crossing the Chasm", technology adoption usually follows a well-defined path. Technology is developed and finds early interest by innovators, and then early adopters, and if the technology can make the leap across the "chasm", it gets adopted by the early majority market and then it's off to the races with demand by the late majority and finally technology laggards. If the technology can't cross the chasm, then it ends up in the dustbin of history. However, what makes AI distinct is that it doesn't fit the technology adoption lifecycle pattern.

But AI isn't a discrete technology. Rather it's a series of technologies, concepts, and approaches all aligning towards the quest for the intelligent machine. This quest inspires academicians and researchers to come up with theories of how the brain and intelligence work, and their concepts of how to mimic these aspects with technology. AI is a generator of technologies, which individually go through the technology lifecycle. Investors aren't investing in "AI," but rather they're investing in the output of AI research and technologies that can help achieve the goals of AI. As researchers discover new insights that help them surmount previous challenges, or as technology infrastructure finally catches up with concepts that were previously infeasible, then new technology implementations are spawned and the cycle of investment renews.

The Need for Understanding

It's clear that intelligence is like an onion (or a parfait) -- many layers. Once we understand one layer, we find that it only explains a limited amount of what intelligence is about. We discover there's another layer that's not quite understood, and back to our research institutions we go to figure out how it works. In Cognilytica's exploration of the intelligence of voice assistants, the benchmark aims to tease at one of those next layers: understanding. That is, knowing what something is -- recognizing an image among a category of trained concepts, converting audio waveforms into words, identifying patterns among a collection of data, or even playing games at advanced levels -- is different from actually understanding what those things are. This lack of understanding is why users get hilarious responses from voice assistant questions, and is also why we can't truly get autonomous machine capabilities in a wide range of situations. Without understanding, there's no common sense. Without common sense and understanding, machine learning is just a bunch of learned patterns that can't adapt to the constantly evolving changes of the real world.

One of the visual concepts that's helpful for understanding these layers of increasing value is the "DIKUW Pyramid":

DIKUW Pyramid

While the Wikipedia entry referenced above conveniently skips the Understanding step, we believe that understanding is the next logical threshold of AI capability. And like all previous layers of this AI onion, tackling this layer will require new research breakthroughs, dramatic increases in compute capabilities, and volumes of data. What? Don't we have almost limitless data and boundless computing power? Not quite. Read on.

The Quest for Common Sense: Machine Reasoning

Early in the development of artificial intelligence, researchers realized that for machines to successfully navigate the real world, they would have to gain an understanding of how the world works and how various things are related to each other. In 1984, the world's longest-lived AI project started. The Cyc project is focused on generating a comprehensive "ontology" and knowledge base of common sense, basic concepts and "rules of thumb" about how the world works. The Cyc ontology uses a knowledge graph to structure how different concepts are related to each other, and an inference engine that allows systems to reason about facts.
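To make the idea concrete, here is a minimal, hypothetical sketch of that ontology-plus-inference-engine pattern: a handful of facts in a toy knowledge graph and a single rule that derives new facts from them. It is not Cyc's actual representation or API, just an illustration of the concept.

```python
# Toy knowledge graph: facts are (subject, relation, object) triples.
# A tiny "inference engine" derives new facts from a transitivity rule.
# Illustrative sketch only -- not how Cyc actually encodes knowledge.

facts = {
    ("rain", "causes", "wet_ground"),
    ("wet_ground", "causes", "slippery_surface"),
    ("thirst", "motivates", "drinking"),
}

def infer_transitive(facts, relation="causes"):
    """If A causes B and B causes C, conclude A causes C (repeat to a fixed point)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (a, r1, b) in list(derived):
            for (b2, r2, c) in list(derived):
                if r1 == r2 == relation and b == b2 and (a, relation, c) not in derived:
                    derived.add((a, relation, c))
                    changed = True
    return derived

all_facts = infer_transitive(facts)
print(("rain", "causes", "slippery_surface") in all_facts)  # True -- an inferred, not hand-coded, fact
```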

The main idea behind Cyc and other understanding-building knowledge encodings is the realization that systems can't be truly intelligent if they don't understand what the underlying things they are recognizing or classifying are. This means we have to dig deeper than machine learning for intelligence. We need to peel this onion one level deeper, scoop out another tasty parfait layer. We need more than machine learning - we need machine reasoning.

Machine reasoning is the concept of giving machines the power to make connections between facts, observations, and all the magical things that we can train machines to do with machine learning. Machine learning has enabled a wide range of capabilities and functionality and opened up a world of possibility that was not possible without the ability to train machines to identify and recognize patterns in data. However, this power is crippled by the fact that these systems are not really able to functionally use that information for higher ends, or apply learning from one domain to another without human involvement. Even transfer learning is limited in application.

Indeed, we're rapidly facing the reality that we're going to soon hit the wall on the current edge of capabilities with machine learning-focused AI. To get to that next level we need to break through this wall and shift from machine learning-centric AI to machine reasoning-centric AI. However, that's going to require some breakthroughs in research that we haven't realized yet.

The fact that the Cyc project has the distinction of being the longest-lived AI project is a bit of a back-handed compliment. The Cyc project is long-lived because after all these decades the quest for common sense knowledge is proving elusive. Codifying common sense into a machine-processable form is a tremendous challenge. Not only do you need to encode the entities themselves in a way that a machine knows what you're talking about, but also all the inter-relationships between those entities. There are millions, if not billions, of "things" that a machine needs to know. Some of these things are tangible, like "rain", but others are intangible, such as "thirst". The work of encoding these relationships is being partially automated, but it still requires humans to verify the accuracy of the connections... because after all, if machines could do this we would have solved the machine recognition challenge. It's a bit of a chicken-and-egg problem this way. You can't solve machine recognition without having some way to codify the relationships between information. But you can't scalably codify all the relationships that machines would need to know without some form of automation.

Are we still limited by data and compute power?

Machine learning has proven to be very data-hungry and compute-intensive. Over the past decade, many iterative enhancements have lessened the compute load and helped to make data use more efficient. GPUs, TPUs, and emerging FPGAs are helping to provide the raw compute horsepower needed. Yet, despite these advancements, complicated machine learning models with lots of dimensions and parameters still require intense amounts of compute and data. Machine reasoning is easily an order of magnitude or more beyond machine learning in complexity. Accomplishing the task of reasoning out the complicated relationships between things and truly understanding these things might be beyond today's compute and data resources.

The current wave of interest and investment in AI doesn't show any signs of slowing or stopping any time soon, but it's inevitable it will slow at some point for one simple reason: we still don't understand intelligence and how it works. Despite the amazing work of researchers and technologists, we're still guessing in the dark about the mysterious nature of cognition, intelligence, and consciousness. At some point we will be faced with the limitations of our assumptions and implementations and we'll work to peel the onion one more layer and tackle the next set of challenges. Machine reasoning is quickly approaching as the next challenge we must surmount on the quest for artificial intelligence. If we can apply our research and investment talent to tackling this next layer, we can keep the momentum going with AI research and investment. If not, the pattern of AI will repeat itself, and the current wave will crest. It might not be now or even within the next few years, but the ebb and flow of AI is as inevitable as the waves upon the shore.

Read more here:
Going Beyond Machine Learning To Machine Reasoning - Forbes

Hospices Leverage Machine Learning to Improve Care – Hospice News

Hospice providers are using new machine learning tools to identify patients in need of their services earlier in the course of their illnesses and to ensure that patients receive appropriate levels of home visits in their final days.

While hospice utilization is rising, lengths of stay for many patients remain too short for them to receive the full benefit of hospice care. Hospice utilization among Medicare decedents exceeded 50% for the first time in 2018, according to the National Hospice & Palliative Care Organization (NHPCO). More than 27% of patients in 2017, however, were in hospice for seven days or less, with another 12.7% in hospice for less than 14 days, NHPCO reported.

While not a panacea, machine learning systems have the ability to help hospices engage patients earlier in their illness trajectory. Machine learning, a form of artificial intelligence, uses algorithms and statistical models to detect patterns in data and make predictions based on those patterns.

"With machine learning you actually begin with the outcome -- for example, the people who were readmitted to the hospital and those who were not. Then the software itself is able to learn what the rules are, all the differences between the people who did or did not get admitted," said Leonard D'Avolio, founder and CEO of the health care performance improvement firm Cyft and assistant professor at Harvard Medical School. "The advantage of the special software being able to learn these patterns, instead of the human telling it the patterns, is that software can consider so many more factors or variables than a human could, and it can do it in microseconds. It's basically checking all of the patterns that have come before and predicting the next step forward."
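As a rough illustration of the pattern D'Avolio describes -- start from the known outcome and let the software find the distinguishing patterns -- the sketch below trains a classifier on a tiny, made-up table of readmission outcomes. The column names and data are hypothetical, and scikit-learn stands in for whatever software a real program would use.

```python
# Hypothetical sketch: begin with the known outcome ("readmitted") and let the
# model learn which combinations of variables separate the two groups.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = pd.DataFrame({
    "age":              [72, 65, 80, 58, 69, 77],
    "prior_admissions": [3, 0, 5, 1, 2, 4],
    "num_medications":  [12, 4, 15, 6, 9, 11],
    "readmitted":       [1, 0, 1, 0, 0, 1],   # the outcome the model learns from
})

X_train, X_test, y_train, y_test = train_test_split(
    data.drop(columns="readmitted"), data["readmitted"],
    test_size=0.33, stratify=data["readmitted"], random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(model.predict_proba(X_test)[:, 1])  # predicted readmission risk for held-out patients
```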

Machine learning systems have the ability to analyze data from claims, electronic medical records or other sources of information to predict when a patient may be in need of hospice or palliative care, as well as which patients are at the highest risk of hospitalization, among other outcomes.

Minnesota-based St. Croix Hospice -- a portfolio company of the Chicago-based private equity firm Vistria Group -- uses a system from the recently launched technology firm Muse Healthcare, which applies a predictive model to hospice clinical data to determine which patients are likely to pass away within the next seven to 12 days.

Patients and families tend to need more intense levels of service as the patient nears their final moments. For this reason, regulators mandate that hospices collect data on the number of visits their patients receive during the last seven and the last three days of life as a part of quality reporting programs.

The U.S. Centers for Medicare & Medicaid Services (CMS) requires hospice providers to submit data for two measures pertaining to the number of hospice visits a patient receives when death is imminent. The three-day measure assesses the percentage of patients receiving at least one visit from a registered nurse, physician, nurse practitioner, or physician assistant in the last three days of life.

A second seven-day measure assesses the percentage of patients receiving at least two visits from a social worker, chaplain or spiritual counselor, licensed practical nurse, or hospice aide in the last seven days of life.
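For readers curious what reporting against such a measure involves, here is a hedged pandas sketch that computes a three-day-measure percentage from hypothetical visit records. The column names, discipline codes and logic are assumptions for illustration, not CMS's actual specification.

```python
import pandas as pd

# Hypothetical visit records; "discipline" and "days_before_death" are assumed
# column names, not CMS's actual data layout.
visits = pd.DataFrame({
    "patient_id":        [1, 1, 2, 3, 3],
    "discipline":        ["RN", "chaplain", "RN", "aide", "NP"],
    "days_before_death": [1, 5, 2, 6, 5],
})

qualifying = {"RN", "physician", "NP", "PA"}  # disciplines counted by the three-day measure
recent = visits[(visits["days_before_death"] <= 3) & visits["discipline"].isin(qualifying)]

met = recent["patient_id"].nunique()      # patients with at least one qualifying visit
total = visits["patient_id"].nunique()    # all patients who died on service
print(f"three-day measure: {met / total:.0%}")  # 2 of 3 patients here -> 67%
```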

CMS currently reports hospices' performance on the three-day measure publicly on Hospice Compare, but to date has not published results on the seven-day measure, citing a need for further testing.

Since adopting machine learning, St. Croix has achieved 100% compliance with the requirement for visits during the final three days of life, according to the company's chief medical officer, Andrew Mayo, M.D.

"I really view it as a sixth vital sign. It provides our clinical team with additional information that helps them make decisions about care. It doesn't replace the need for human contact and evaluation," Mayo told Hospice News. "Quite the contrary, it can trigger increased involvement at a time where patients, their families and caregivers may need increased hospice involvement and guidance."

Research indicates that automatic screening and notification systems make identification of patient needs more efficient, saving hospice and palliative care teams the time-consuming task of reviewing charts and allowing them to reach out to patients before receiving a physician referral or order.

Early identification of patient needs can allow hospices to more frequently apply the Medicare service intensity add-on (SIA), which increases payment to hospices for nursing visits close to the end of life.

CMS introduced the SIA in 2016 to allow hospices to bill an additional payment on an hourly basis for registered nurse and social worker visits during the last seven days of a patient's life, in addition to their standard per diem reimbursement.

"There is a top-line revenue opportunity there. Along with that, we're talking about additional visits in the last seven days. There's value in terms of the outcomes that are being tracked, and those outcomes affect quality scores for the providers," Bryan Mosher, data scientist for Muse Healthcare, told Hospice News. "There's a longer-term value there for them."

Link:
Hospices Leverage Machine Learning to Improve Care - Hospice News

What is the role of machine learning in industry? – Engineer Live

In 1950, Alan Turing developed the Turing test to answer the question "can machines think?" Since then, machine learning has gone from being just a concept to a process relied on by some of the world's biggest companies. Here Sophie Hand, UK country manager at industrial parts supplier EU Automation, discusses the applications of the different types of machine learning that exist today.

Machine learning is a subset of artificial intelligence (AI) where computers independently learn to do something they were not explicitly programmed to do. They do this by learning from experience -- leveraging algorithms to discover patterns and insights in data. This means machines don't need to be programmed to perform exact tasks on a repetitive basis.

Machine learning is rapidly being adopted across several industries -- according to Research and Markets, the market is predicted to grow to US$8.81 billion by 2022, at a compound annual growth rate of 44.1 per cent. One of the main reasons for its growing use is that businesses are collecting Big Data, from which they need to obtain valuable insights. Machine learning is an efficient way of making sense of this data -- for example, the data sensors collect on the condition of machines on the factory floor.

As the market develops and grows, new types of machine learning will emerge and allow new applications to be explored. However, many examples of current machine learning applications fall into two categories: supervised learning and unsupervised learning.

A popular type of machine learning is supervised learning, which is typically used in applications where historical data is used to train models that predict future events, such as fraudulent credit card transactions. This form of machine learning identifies inputs and outputs and trains algorithms using labelled examples. Supervised learning uses methods like classification, regression, prediction and gradient boosting for pattern recognition. It then uses these patterns to predict the values of the labels on unlabelled data.
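A minimal sketch of that supervised workflow, using synthetic transaction data and scikit-learn's gradient boosting classifier; the "fraud" labels here follow a made-up rule and are purely illustrative.

```python
# Supervised learning sketch: labelled historical examples train a classifier,
# which then predicts labels for unseen records. All data here is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
amounts = rng.exponential(scale=50, size=1000)        # transaction amounts
hours = rng.integers(0, 24, size=1000)                # hour of day
labels = ((amounts > 150) & (hours < 6)).astype(int)  # toy "fraud" rule for illustration

X = np.column_stack([amounts, hours])
X_train, X_test, y_train, y_test = train_test_split(X, labels, stratify=labels, random_state=0)

clf = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```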

This form of machine learning is currently being used in drug discovery and development with applications including target validation, identification of biomarkers and the analysis of digital pathology data in clinical trials. Using machine learning in this way promotes data-driven decision making and can speed up the drug discovery and development process while improving success rates.

Unlike supervised learning, unsupervised learning works with datasets that have no labelled outcomes. Instead, it explores the collected data to find structure and identify patterns. Unsupervised machine learning is now being used in factories for predictive maintenance purposes. Machines can learn the data patterns associated with faults in the system and use this information to identify problems before they arise.
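The sketch below illustrates that unsupervised, predictive-maintenance idea with simulated vibration readings and an off-the-shelf anomaly detector; it is an assumption-laden toy example, not any particular factory system.

```python
# Unsupervised sketch: no labelled failures, just sensor readings. An anomaly
# detector trained on normal operation flags readings that deviate from it.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
normal_vibration = rng.normal(loc=0.5, scale=0.05, size=(500, 1))  # healthy machine
faulty_vibration = rng.normal(loc=0.9, scale=0.10, size=(5, 1))    # developing fault

model = IsolationForest(random_state=7).fit(normal_vibration)
flags = model.predict(faulty_vibration)  # -1 marks readings that look anomalous
print(flags)
```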

Using machine learning in this way leads to a decrease in unplanned downtime, as manufacturers are able to order replacement parts from an automation equipment supplier before a breakdown occurs, saving time and money. According to a survey by Deloitte, using machine learning technologies in the manufacturing sector reduces unplanned machine downtime by 15 to 30 per cent and cuts maintenance costs by 30 per cent.

It's no longer just humans that can think for themselves -- machines, such as Google's Duplex, are now able to pass the Turing test. Manufacturers can make use of machine learning to improve maintenance processes and enable them to make real-time, intelligent decisions based on data.

See the original post:
What is the role of machine learning in industry? - Engineer Live

The 4 Hottest Trends in Data Science for 2020 – Machine Learning Times – machine learning & data science news – The Predictive Analytics Times

Originally published in Towards Data Science, January 8, 2020

2019 was a big year for all of Data Science.

Companies all over the world across a wide variety of industries have been going through what people are calling a digital transformation. That is, businesses are taking traditional business processes such as hiring, marketing, pricing, and strategy, and using digital technologies to make them 10 times better.

Data Science has become an integral part of those transformations. With Data Science, organizations no longer have to make their important decisions based on hunches, best guesses, or small surveys. Instead, they're analyzing large amounts of real data to base their decisions on real, data-driven facts. That's really what Data Science is all about -- creating value through data.

This trend of integrating data into the core business processes has grown significantly, with an increase in interest by over four times in the past 5 years according to Google Search Trends. Data is giving companies a sharp advantage over their competitors. With more data and better Data Scientists to use it, companies can acquire information about the market that their competitors might not even know existed. It's become a game of Data or perish.

Google search popularity of Data Science over the past 5 years. Generated by Google Trends.

In today's ever-evolving digital world, staying ahead of the competition requires constant innovation. Patents have gone out of style, while Agile methodology and catching new trends quickly are very much in.

Organizations can no longer rely on their rock-solid methods of old. If a new trend like Data Science, Artificial Intelligence, or Blockchain comes along, it needs to be anticipated beforehand and adapted to quickly.

The following are the 4 hottest Data Science trends for the year 2020. These are trends which have gathered increasing interest this year and will continue to grow in 2020.

(1) Automated Data Science

Even in today's digital age, Data Science still requires a lot of manual work. Storing data, cleaning data, visualizing and exploring data, and finally, modeling data to get some actual results. That manual work is just begging for automation, and thus we have seen the rise of automated Data Science and Machine Learning.

Nearly every step of the Data Science pipeline has been or is in the process of becoming automated.

Auto-Data Cleaning has been heavily researched over the past few years. Cleaning big data often takes up most of a Data Scientist's expensive time. Both startups and large companies such as IBM offer automation and tooling for data cleaning.

Another large part of Data Science known as feature engineering has undergone significant disruption. Featuretools offers a solution for automatic feature engineering. On top of that, modern Deep Learning techniques such as Convolutional and Recurrent Neural Networks learn their own features without the need for manual feature design.

Perhaps the most significant automation is occurring in the Machine Learning space. Both DataRobot and H2O have established themselves in the industry by offering end-to-end Machine Learning platforms, giving Data Scientists a very easy handle on data management and model building. AutoML, a method for automatic model design and training, has also boomed over 2019 as these automated models surpass the state of the art. Google, in particular, is investing heavily in Cloud AutoML.
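Those platforms expose far richer automation than can be shown here, but the spirit of automated model design can be illustrated with a generic search over candidate models and hyperparameters; the sketch below uses scikit-learn rather than any of the named products.

```python
# Generic illustration of automated model selection: search over candidate
# estimators and their hyperparameters instead of hand-tuning one model.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

pipeline = Pipeline([("scale", StandardScaler()),
                     ("model", LogisticRegression(max_iter=1000))])

# Each dict swaps in a different model and searches its own hyperparameters.
search_space = [
    {"model": [LogisticRegression(max_iter=1000)], "model__C": [0.1, 1.0, 10.0]},
    {"model": [RandomForestClassifier(random_state=0)], "model__n_estimators": [100, 300]},
]

search = GridSearchCV(pipeline, search_space, cv=5).fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```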

In general, companies are investing heavily in building and buying tools and services for automated Data Science. Anything to make the process cheaper and easier. At the same time, this automation also caters to smaller and less technical organizations who can leverage these tools and services to have access to Data Science without building out their own team.

(2) Data Privacy and Security

Privacy and security are always sensitive topics in technology. All companies want to move fast and innovate, but losing the trust of their customers over privacy or security issues can be fatal. So, they're forced to make it a priority, at least to a bare minimum of not leaking private data.

Data privacy and security has become an incredibly hot topic over the past year as the issues are magnified by enormous public hacks. Just recently, on November 22, 2019, an exposed server with no security was discovered on Google Cloud. The server contained the personal information of 1.2 billion unique people, including names, email addresses, phone numbers, and LinkedIn and Facebook profile information. Even the FBI came in to investigate. It's one of the largest data exposures of all time.

To continue reading this article click here.

Follow this link:
The 4 Hottest Trends in Data Science for 2020 - Machine Learning Times - machine learning & data science news - The Predictive Analytics Times

PROTXX Launches Collaboration with Alberta Healthcare and Machine Learning Innovation Ecosystems – BioSpace

MENLO PARK, Calif. and CALGARY, Alberta, Jan. 14, 2020 /PRNewswire/ -- Silicon Valley-based digital healthcare technology pioneer PROTXX, Inc. today announced a broad collaboration with Alberta-based organizations including Alberta Health Services (AHS), independent clinical healthcare providers, machine learning developers, and three of the province's universities. Pilot testing of the PROTXX precision medicine platform has been launched to quantify corresponding improvements in healthcare service quality, patient outcomes, and provider economics.

The PROTXX precision medicine platform integrates wearable sensor and machine learning innovations to replace bulky and expensive clinical equipment and time-consuming testing procedures for a variety of complex medical conditions. PROTXX sensors are designed to be worn unobtrusively by users of any age and in any clinical, athletic, industrial, or military environment, significantly expanding patient and provider access, enabling continuous and remote patient monitoring, and enhancing the value of telemedicine initiatives.

Multiple stakeholders within the Alberta healthcare ecosystem have identified an exciting range of innovative applications in clinical diagnoses, treatment, and rehabilitation of complex medical conditions in which multiple physiological systems are impaired, including injuries such as concussions, diseases such as Parkinson's disease, and age-related disorders such as stroke. The ease of use and level of quantitative assessment enabled by the PROTXX platform provide opportunities for better healthcare quality, outcomes, and value for large and diverse groups of Albertans.

PROTXX, Inc. subsidiary PROTXX Medical Ltd was recently incorporated in Alberta in order to support product development and pilot deployment initiatives with local customers and R&D partners, and to leverage the province's world-class expertise in both healthcare service delivery and machine learning to expand the company's employee base. Tanya Fir, Alberta Minister of Economic Development, commented: "I want to commend PROTXX for choosing Alberta. Our province has a dynamic healthcare innovation ecosystem and being recognized for our talent and expertise is a huge win that will spur future developments across multiple sectors of the economy. These pilots also support the strategic focus our province is taking for economic development through innovation, and I look forward to what's in store for the future."

PROTXX CEO and Founder, John Ralston, added: "Alberta's unique combination of healthcare innovation initiatives, world-class research institutions, and economic diversification strategy presents exciting opportunities for PROTXX to expand the development and commercial deployment of our innovations in wearable devices and machine learning, and to prove out the economic value of more quantified management of medical conditions that affect global patient populations numbering in the billions."

PROTXX was originally introduced into the Alberta health care ecosystem by Connection Silicon Valley and New West Networks, a Calgary and San Francisco based team supporting the efforts of Alberta's Economic Development Department to expand and diversify the province's tech sector. Christy Pierce, Director at New West Networks said: "AHS has been a great partner to work with on this investment opportunity. Their Innovation, Impact and Evidence team was instrumental in providing guidance on AHS' priorities and potential opportunities with the appropriate Strategic Clinical Networks (SCN) and clinics. AHS has also worked with us and Innovate Calgary to introduce PROTXX to collaborative research opportunities at the province's universities, providing exciting research opportunities and exposure to the commercial innovation process for our faculty and students."

About PROTXX, Inc. (http://protxx.com/) PROTXX has developed clinical grade wearable sensors that enable rapid non-invasive classification and quantification of neurological, sensory, and musculoskeletal impairments due to fatigue, injury, and disease. PROTXX has further utilized the company's large proprietary data sets to develop and train machine learning models that can automate analytical tasks such as classifying specific medical conditions based upon unique patterns detected in the sensor data. PROTXX customers and partners in Canada, the U.S., the U.K., and Japan are helping healthcare payers rein in costs, providers improve quality of care, and consumers gain greater access to higher quality care and improved outcomes. PROTXX innovations have been recognized with numerous industry, academic, and government awards from healthcare, medical engineering, wearable technology, data analytics, professional sports, defense, and state and local government organizations.

Media inquiries

PROTXX, Inc.: John Ralston, CEO - email: john.ralston@protxx.com - tel: 650.215.8418.

New West Networks: Christy Pierce, Director - email: christy@newwestnetworwks.ca - tel: 403.988.8156.

View original content to download multimedia:http://www.prnewswire.com/news-releases/protxx-launches-collaboration-with-alberta-healthcare-and-machine-learning-innovation-ecosystems-300985235.html

SOURCE PROTXX, Inc.

See the original post here:
PROTXX Launches Collaboration with Alberta Healthcare and Machine Learning Innovation Ecosystems - BioSpace

Zinier raises $90 million to embed field service work with AI and machine learning – VentureBeat

Workplace automation tools are expected to see an uptick in adoption in the next few years. One company leading the charge is San Francisco-based Zinier, which was founded in October 2015 by Andrew Wolf and former TripAdvisor market development manager Arka Dhar. A developer of intelligent field service automation, the startup provides a platform -- intelligent service automation and control, or ISAC -- aimed at fixing machinery before it breaks and maintaining mission-critical client services.

After raising $30 million across three funding rounds, the first of which closed in January 2016, Zinier is gearing up for a major expansion with fresh capital. Today the startup announced that it has raised $90 million in series C funding -- nearly quadruple its series B total -- led by new investor Iconiq Capital, with participation from Tiger Global Management and return investors Accel, Founders Fund, Nokia-backed NGP Capital, France-based Newfund Capital, and Qualcomm Ventures. The round brings Zinier's total raised to over $120 million, and CEO Dhar says it will support the company's customer acquisition strategy and accelerate expansion of its services across telecom, energy, utilities, and beyond.

Specifically, Zinier plans to expand in Asia Pacific, Europe, and Latin America, with a particular focus on Australia and New Zealand. On the R&D side, it will build out its AI technologies at the platform level and partner with global system integrators.

"Services that we rely on every day -- electricity, transportation, communication -- are getting by on centuries-old infrastructure that requires a major upgrade for the next generation of users," added Dhar. "A field service workforce powered by both people and automation is necessary to execute the massive amount of work required to not only maintain [this] critical human infrastructure, but to also prepare for growth. Our team is focused on enabling this transformation across industries through intelligent field service automation."

Zinier's eponymous suite delivers insights, general recommendations, and specific tasks by running operations metrics through proprietary AI and machine learning algorithms. As for ISAC, it triggers preventative actions based on equipment health while anticipating stock transfers, and it scans technicians' calendars to help define ongoing and potential problems. A separate component -- Zinier's AI configurator -- affords control over the recommendations by enabling users to define the corpora on which the recommender systems are trained and to set the algorithms used.

A complementary workflow builder automates routine tasks with custom workflows, and it lets users build, deploy, and update those workflows to their hearts' content without having to write custom code. They're also afforded access to the app builder, which supports the building of custom solutions for specific field service problems, as well as the grouping together of key functionalities into a single bundle that can be deployed across teams (subject to roles and permissions).

Coordinators can accept or reject suggestions on the fly -- the former sets off a series of automated actions. And managers can build custom dashboards from which teams can view insights generated by the analysis of historical trends, and optionally receive proactive alerts configured to address particular pain points, like when a task is at risk of falling behind.

Zinier supports scheduling and dispatching so technicians know their work orders up to months in advance, and it autonomously determines general capacity based on transit time, technician availability, and task prioritization. Elsewhere, Zinier uses AI to predict system failures by weighing real-time internet of things data against historical trends, and it tracks and manages inventory to ensure technicians consistently have the parts they need.

On the mobile side of the equation, Zinier's app aims to boost productivity and job satisfaction by ensuring accurate job completion, in part by surfacing contextual and back-office data for technicians in the field. Service workers can access not only relevant documents, but also detailed site records and readings from internet of things gadgets.

As my colleague Paul Sawers notes, field service organizations are embracing AI and machine learning at an accelerated pace and startups are rising to meet the demand. There's the well-funded ServiceMax, which was acquired by GE Digital a few years back for $915 million and in turn sold a majority stake to Silver Lake in December 2018. Salesforce also offers a product called Field Service Lightning, while Microsoft snapped up FieldOne Systems in 2015 and now offers a field service management platform called Dynamics 365 for Field Service. For its part, Oracle in 2014 acquired the reportedly lucrative TOA Technologies, and in 2018 SAP bought AI-powered field service management company Coresystems.

Zinier's ostensible advantage is that its AI-powered platform was developed in-house from the ground up and it isn't distracted by other business interests. "For Zinier, field service is our sole focus," Dhar told VentureBeat in an earlier interview. "The platform has allowed us to quickly build Field Service Elements, our end-to-end field service automation product. And as we continue to build our expertise in different industries, we can quickly configure specific use case solutions for customers."

See the rest here:
Zinier raises $90 million to embed field service work with AI and machine learning - VentureBeat

Machine learning market to reach $96.7 billion by 2025 – ICLG.com


The global machine learning market is projected to have a compound annual growth rate of 43.8% over the next five years, reaching $96.7 billion by 2025, according to a report by Grand View Research.

The report shows that larger enterprises accounted for the leading market share in 2018, when the market was valued at $6.9 billion.

The major large enterprises employing deep learning, machine learning and decision optimisation to deliver greater business value are identified as Amazon Web Services, Baidu, Google, Hewlett Packard Enterprise Development, Intel and Microsoft, among others.

Small to medium sized enterprises (SMEs) are not left behind, as they too are benefitting from deployment options which allow them to scale up easily and avoid substantial up-front investments.

In addition, the development of customised silicon chips with AI and machine learning competences is increasing companies' adoption of hardware, while at the same time, improved processing tools supplied by companies such as computing start-up SambaNova Systems are expected to grow the market.

The advertising and media sector, which often uses buyers' optimisation, data processing and connected AI analysis to predict customer behaviour and improve advertising campaigns, accounted for the largest market share in 2018, owing to these AI capabilities.

However, the report explains that the healthcare sector is expected to surpass this segment to account for the largest share by the end of the forecast period.

Elsewhere, the adoption of AI extends to the United States Army, which has future plans to utilise AI for predictive maintenance in combat vehicles, while the stock market is adopting AI and machine learning technology with an accuracy level of about 60% in market predictions.

Examples of technology-based partnerships include H2O.ai, an open source machine learning and AI platform, which announced a partnership with IBM Corporation in June 2018.

In the media sector, ICLG.com, a brand of legal publishing and media company Global Legal Group, launched an AI-powered search engine using LexSnap technology, in November 2019.

In addition, professional services firm Ernst & Young this week announced the release of the third edition of its blockchain technology on the public Ethereum platform.

See more here:
Machine learning market to reach $96.7 billion by 2025 - ICLG.com