Machine Learning: Real-life applications and it’s significance in Data Science – Techstory

Do you know how Google Maps predicts traffic? Are you amused by how Amazon Prime or Netflix subscribes to you just the movie you would watch? We all know it must be some approach of Artificial Intelligence. Machine Learning involves algorithms and statistical models to perform tasks. This same approach is used to find faces in Facebook and detect cancer too. A Machine Learning course can educate in the development and application of such models.

Artificial Intelligence mimics human intelligence. Machine Learning is one of the significant branches of it. There is an ongoing and increasing need for its development.

Tasks as simple as Spam detection in Gmail illustrates its significance in our day-to-day lives. That is why the roles of Data scientists are in demand to yield more productivity at present. An aspiring data scientist can learn to develop algorithms and apply such by availing Machine Learning certification.

Machine learning as a subset of Artificial Intelligence, is applied for varied purposes. There is a misconception that applying Machine Learning algorithms would need a prior mathematical knowledge. But, a Machine Learning Online course would suggest otherwise. On contrary to the popular approach of studying, here top-to-bottom approach is involved. An aspiring data scientist, a business person or anyone can learn how to apply statistical models for various purposes. Here, is a list of some well-known applications of Machine Learning.

Microsofts research lab uses Machine Learning to study cancer. This helps in Individualized oncological treatment and detailed progress reports generation. The data engineers apply pattern recognition, Natural Language Processing and Computer vision algorithms to work through large data. This aids oncologists to conduct precise and breakthrough tests.

Likewise, machine learning is applied in biomedical engineering. This has led to automation of diagnostic tools. Such tools are used in detecting neurological and psychiatric disorders of many sorts.

We all have had a conversation with Siri or Alexa. They use speech recognition to input our requests. Machine Learning is applied here to auto generate responses based on previous data. Hello Barbie is the Siri version for the kids to play with. It uses advanced analytics, machine learning and Natural language processing to respond. This is the first AI enabled toy which could lead to more such inventions.

Google uses Machine Learning statistical models to acquire inputs. The statistical models collect details such as distance from the start point to the endpoint, duration and bus schedules. Such historical data is rescheduled and reused. Machine Learning algorithms are developed with the objective of data prediction. They recognise the pattern between such inputs and predict approximate time delays.

Another well-known application of Google, Google translate involves Machine Learning. Deep learning aids in learning language rules through recorded conversations. Neural networks such as Long-short term memory networks aids in long-term information updates and learning. Recurrent Neural networks identify the sequences of learning. Even bi-lingual processing is made feasible nowadays.

Facebook uses image recognition and computer vision to detect images. Such images are fed as inputs. The statistical models developed using Machine Learning maps any information associated with these images. Facebook generates automated captions for images. These captions are meant to provide directions for visually impaired people. This innovation of Facebook has nudged Data engineers to come up with other such valuable real-time applications.

The aim here is to increase the possibility of the customer, watching a movie recommendation. It is achieved by studying the previous thumbnails. An algorithm is developed to study these thumbnails and derive recommendation results. Every image of available movies has separate thumbnails. A recommendation is generated by pattern recognition among the numerical data. The thumbnails are assigned individual numerical values.

Tesla uses computer vision, data prediction, and path planning for this purpose. The machine learning practices applied makes the innovation stand-out. The deep neural networks work with trained data and generate instructions. Many technological advancements such as changing lanes are instructed based on imitation learning.

Gmail, Yahoo mail and Outlook engage machine learning techniques such as neural networks. These networks detect patterns in historical data. They train on received data about spamming messages and phishing messages. It is noted that these spam filters provide 99.9 percent accuracy.

As people grow more health conscious, the development of fitness monitoring applications are on the rise. Being on top of the market, Fitbit ensures its productivity by the employment of machine learning methods. The trained machine learning models predicts user activities. This is achieved through data pre-processing, data processing and data partitioning. There is a need to improve the application in terms of additional purposes.

The above mentioned applications are like the tip of an iceberg. Machine learning being a subset of Artificial Intelligence finds its necessity in many other streams of daily activities.

comments

More:
Machine Learning: Real-life applications and it's significance in Data Science - Techstory

How to Pick a Winning March Madness Bracket – Machine Learning Times – machine learning & data science news – The Predictive Analytics Times

Introduction

In 2019, over 40 million Americans wagered money on March Madness brackets, according to the American Gaming Association. Most of this money was bet in bracket pools, which consist of a group of people each entering their predictions of the NCAA tournament games along with a buy-in. The bracket that comes closest to being right wins. If you also consider the bracket pools where only pride is at stake, the number of participants is much greater. Despite all this attention, most do not give themselves the best chance to win because they are focused on the wrong question.

The Right Question

Mistake #3 in Dr. John Elders Top 10 Data Science Mistakes is to ask the wrong question. A cornerstone of any successful analytics project starts with having the right project goal; that is, to aim at the right target. If youre like most people, when you fill out your bracket, you ask yourself, What do I think is most likely to happen? This is the wrong question to ask if you are competing in a pool because the objective is to win money, NOT to make the most correct bracket. The correct question to ask is: What bracket gives me the best chance to win $? (This requires studying the payout formula. I used ESPN standard scoring (320 possible points per round) with all pool money given to the winner. (10 points are awarded for each correct win in the round of 64, 20 in the round of 32, and so forth, doubling until 320 are awarded for a correct championship call.))

While these questions seem similar, the brackets they produce will be significantly different.

If you ignore your opponents and pick the teams with the best chance to win games you will reduce your chance of winning money. Even the strongest team is unlikely to win it all, and even if they do, plenty of your opponents likely picked them as well. The best way to optimize your chances of making money is to choose a champion team with a good chance to win who is unpopular with your opponents.

Knowing how other people in your pool are filling out their brackets is crucial, because it helps you identify teams that are less likely to be picked. One way to see how others are filling out their brackets is via ESPNs Who Picked Whom page (Figure 1). It summarizes how often each team is picked to advance in each round across all ESPN brackets and is a great first step towards identifying overlooked teams.

Figure 1. ESPNs Who Picked Whom Tournament Challenge page

For a team to be overlooked, their perceived chance to win must be lower than their actual chance to win. The Who Picked Whom page provides an estimate of perceived chance to win, but to find undervalued teams we also need estimates for actual chance to win. This can range from a complex prediction model to your own gut feeling. Two sources I trust are 538s March Madness predictions and Vegas future betting odds. 538s predictions are based on a combination of computer rankings and has predicted performance well in past tournaments. There is also reason to pay attention to Vegas odds, because if they were too far off, the sportsbooks would lose money.

However, both sources have their flaws. 538 is based on computer ratings, so while they avoid human bias, they miss out on expert intuition. Most Vegas sportsbooks likely use both computer ratings and expert intuition to create their betting odds, but they are strongly motivated to have equal betting on all sides, so they are significantly affected by human perception. For example, if everyone was betting on Duke to win the NCAA tournament, they would increase Dukes betting odds so that more people would bet on other teams to avoid large losses. When calculating win probabilities for this article, I chose to average 538 and Vegas predictions to obtain a balance I was comfortable with.

Lets look at last year. Figure 2 compares a teams perceived chance to win (based on ESPNs Who Picked Whom) to their actual chance to win (based on 538-Vegas averaged predictions) for the leading 2019 NCAA Tournament teams. (Probabilities for all 64 teams in the tournament appear in Table 6 in the Appendix.)

Figure 2. Actual versus perceived chance to win March Madness for 8 top teams

As shown in Figure 2, participants over-picked Duke and North Carolina as champions and under-picked Gonzaga and Virginia. Many factors contributed to these selections; for example, most predictive models, avid sports fans, and bettors agreed that Duke was the best team last year. If you were the picking the bracket most likely to occur, then selecting Duke as champion was the natural pick. But ignoring selections made by others in your pool wont help you win your pool.

While this graph is interesting, how can we turn it into concrete takeaways? Gonzaga and Virginia look like good picks, but what about the rest of the teams hidden in that bottom left corner? Does it ever make sense to pick teams like Texas Tech, who had a 2.6% chance to win it all, and only 0.9% of brackets picking them? How much does picking an overvalued favorite like Duke hurt your chances of winning your pool?

To answer these questions, I simulated many bracket pools and found that the teams in Gonzagas and Virginias spots are usually the best picksthe most undervalued of the top four to five favorites. However, as the size of your bracket pool increases, overlooked lower seeds like third-seeded Texas Tech or fourth-seeded Virginia Tech become more attractive. The logic for this is simple: the chance that one of these teams wins it all is small, but if they do, then you probably win your pool regardless of the number of participants, because its likely no one else picked them.

Simulations Methodology

To simulate bracket pools, I first had to simulate brackets. I used an average of the Vegas and 538 predictions to run many simulations of the actual events of March Madness. As discussed above, this method isnt perfect but its a good approximation. Next, I used the Who Picked Whom page to simulate many human-created brackets. For each human bracket, I calculated the chance it would win a pool of size by first finding its percentile ranking among all human brackets assuming one of the 538-Vegas simulated brackets were the real events. This percentile is basically the chance it is better than a random bracket. I raised the percentile to the power, and then repeated for all simulated 538-Vegas brackets, averaging the results to get a single win probability per bracket.

For example, lets say for one 538-Vegas simulation, my bracket is in the 90th percentile of all human brackets, and there are nine other people in my pool. The chance I win the pool would be. If we assumed a different simulation, then my bracket might only be in the 20th percentile, which would make my win probability . By averaging these probabilities for all 538-Vegas simulations we can calculate an estimate of a brackets win probability in a pool of size , assuming we trust our input sources.

Results

I used this methodology to simulate bracket pools with 10, 20, 50, 100, and 1000 participants. The detailed results of the simulations are shown in Tables 1-6 in the Appendix. Virginia and Gonzaga were the best champion picks when the pool had 50 or fewer participants. Yet, interestingly, Texas Tech and Purdue (3-seeds) and Virginia Tech (4-seed) were as good or better champion picks when the pool had 100 or more participants.

General takeaways from the simulations:

Additional Thoughts

We have assumed that your local pool makes their selections just like the rest of America, which probably isnt true. If you live close to a team thats in the tournament, then that team will likely be over-picked. For example, I live in Charlottesville (home of the University of Virginia), and Virginia has been picked as the champion in roughly 40% of brackets in my pools over the past couple of years. If you live close to a team with a high seed, one strategy is to start with ESPNs Who Picked Whom odds, and then boost the odds of the popular local team and correspondingly drop the odds for all other teams. Another strategy Ive used is to ask people in my pool who they are picking. It is mutually beneficial, since Id be less likely to pick whoever they are picking.

As a parting thought, I want to describe a scenario from the 2019 NCAA tournament some of you may be familiar with. Auburn, a five seed, was winning by two points in the waning moments of the game, when they inexplicably fouled the other team in the act of shooting a three-point shot with one second to go. The opposing player, a 78% free throw shooter, stepped to the line and missed two out of three shots, allowing Auburn to advance. This isnt an alternate reality; this is how Auburn won their first-round game against 12-seeded New Mexico State. They proceeded to beat powerhouses Kansas, North Carolina, and Kentucky on their way to the Final Four, where they faced the exact same situation against Virginia. Virginias Kyle Guy made all his three free throws, and Virginia went on to win the championship.

I add this to highlight an important qualifier of this analysisits impossible to accurately predict March Madness. Were the people who picked Auburn to go to the Final Four geniuses? Of course not. Had Terrell Brown of New Mexico State made his free throws, they would have looked silly. There is no perfect model that can predict the future, and those who do well in the pools are not basketball gurus, they are just lucky. Implementing the strategies talked about here wont guarantee a victory; they just reduce the amount of luck you need to win. And even with the best modelsyoull still need a lot of luck. It is March Madness, after all.

Appendix: Detailed Analyses by Bracket Sizes

At baseline (randomly), a bracket in a ten-person pool has a 10% chance to win. Table 1 shows how that chance changes based on the round selected for a given team to lose. For example, brackets that had Virginia losing in the Round of 64 won a ten-person pool 4.2% of the time, while brackets that picked them to win it all won 15.1% of the time. As a reminder, these simulations were done with only pre-tournament informationthey had no data indicating that Virginia was the eventual champion, of course.

Table 1 Probability that a bracket wins a ten-person bracket pool given that it had a given team (row) making it to a given round (column) and no further

In ten-person pools, the best performing brackets were those that picked Virginia or Gonzaga as the champion, winning 15% of the time. Notably, early round picks did not have a big influence on the chance of winning the pool, the exception being brackets that had a one or two seed losing in the first round. Brackets that had a three seed or lower as champion performed very poorly, but having lower seeds making the Final Four did not have a significant impact on chance of winning.

Table 2 shows the same information for bracket pools with 20 people. The baseline chance is now 5%, and again the best performing brackets are those that picked Virginia or Gonzaga to win. Similarly, picks in the first few rounds do not have much influence. Michigan State has now risen to the third best Champion pick, and interestingly Purdue is the third best runner-up pick.

Table 2 Probability that a bracket wins a 20-person bracket pool given that it had a given team (row) making it to a given round (column) and no further

When the bracket pool size increases to 50, as shown in Table 3, picking the overvalued favorites (Duke and North Carolina) as champions significantly lowers your baseline chances (2%). The slightly undervalued two and three seeds now raise your baseline chances when selected as champions, but Virginia and Gonzaga remain the best picks.

Table 3 Probability that a bracket wins a 50-person bracket pool given that it had a given team (row) making it to a given round (column) and no further

With the bracket pool size at 100 (Table 4), Virginia and Gonzaga are joined by undervalued three-seeds Texas Tech and Purdue. Picking any of these four raises your baseline chances from 1% to close to 2%. Picking Duke or North Carolina again hurts your chances.

Table 4 Probability that a bracket wins a 100-person bracket pool given that it had a given team (row) making it to a given round (column) and no further

When the bracket pool grows to 1000 people (Table 5), there is a complete changing of the guard. Virginia Tech is now the optimal champion pick, raising your baseline chance of winning your pool from 0.1% to 0.4%, followed by the three-seeds and sixth-seeded Iowa State are the best champion picks.

Table 5 Probability that a bracket wins a 1000-person bracket pool given that it had a given team (row) making it to a given round (column) and no further

For Reference, Table 6 shows the actual chance to win versus the chance of being picked to win for all teams seeded seventh or better. These chances are derived from the ESPN Who Picked Whom page and the 538-Vegas predictions. The data for the top eight teams in Table 6 is plotted in Figure 2. Notably, Duke and North Carolina are overvalued, while the rest are all at least slightly undervalued.

The teams in bold in Table 6 are examples of teams that are good champion picks in larger pools. They all have a high ratio of actual chance to win to chance of being picked to win, but a low overall actual chance to win.

Table 6 Actual odds to win Championship vs Chance Team is Picked to Win Championship.

Undervalued teams in green; over-valued in red.

About the Author

Robert Robison is an experienced engineer and data analyst who loves to challenge assumptions and think outside the box. He enjoys learning new skills and techniques to reveal value in data. Robert earned a BS in Aerospace Engineering from the University of Virginia, and is completing an MS in Analytics through Georgia Tech.

In his free time, Robert enjoys playing volleyball and basketball, watching basketball and football, reading, hiking, and doing anything with his wife, Lauren.

See the original post:
How to Pick a Winning March Madness Bracket - Machine Learning Times - machine learning & data science news - The Predictive Analytics Times

How businesses and governments should embrace AI and Machine Learning – TechCabal

Leadership team of credit-as-a-service startup Migo, one of a growing number of businesses using AI to create consumer-facing products.

The ability to make good decisions is literally the reason people trust you with responsibilities. Whether you work for a government or lead a team at a private company, your decision-making process will affect lives in very real ways.

Organisations often make poor decisions because they fail to learn from the past. Wherever a data-collection reluctance exists, there is a fair chance that mistakes will be repeated. Bad policy goals will often be a consequence of faulty evidentiary support, a failure to sufficiently look ahead by not sufficiently looking back.

But as Daniel Kahneman, author of Thinking Fast and Slow, says:

The idea that the future is unpredictable is undermined every day by the ease with which the past is explained. If governments and business leaders will live up to their responsibilities, enthusiastically embracing methodical decision-making tools should be a no-brainer.

Mass media representations project artificial intelligence in futuristic, geeky terms. But nothing could be further from the truth.

While it is indeed scientific, AI can be applied in practical everyday life today. Basic interactions with AI include algorithms that recommend articles to you, friend suggestions on social media and smart voice assistants like Alexa and Siri.

In the same way, government agencies can integrate AI into regular processes necessary for society to function properly.

Managing money is an easy example to begin with. AI systems can be used to streamline data points required during budget preparations and other fiscal processes. Based on data collected from previous fiscal cycles, government agencies could reasonably forecast needs and expectations for future years.

With its large trove of citizen data, governments could employ AI to effectively reduce inequalities in outcomes and opportunities. Big Data gives a birds-eye view of the population, providing adequate tools for equitably distributing essential infrastructure.

Perhaps a more futuristic example is in drafting legislation. Though a young discipline, legimatics includes the use of artificial intelligence in legal and legislative problem-solving.

Democracies like Nigeria consider public input a crucial aspect of desirable law-making. While AI cannot yet be relied on to draft legislation without human involvement, an AI-based approach can produce tools for specific parts of legislative drafting or decision support systems for the application of legislation.

In Africa, businesses are already ahead of most governments in AI adoption. Credit scoring based on customer data has become popular in the digital lending space.

However, there is more for businesses to explore with the predictive powers of AI. A particularly exciting prospect is the potential for new discoveries based on unstructured data.

Machine learning could broadly be split into two sections: supervised and unsupervised learning. With supervised learning, a data analyst sets goals based on the labels and known classifications of the dataset. The resulting insights are useful but do not produce the sort of new knowledge that comes from unsupervised learning processes.

In essence, AI can be a medium for market-creating innovations based on previously unknown insight buried in massive caches of data.

Digital lending became a market opportunity in Africa thanks to growing smartphone availability. However, customer data had to be available too for algorithms to do their magic.

This is why it is desirable for more data-sharing systems to be normalised on the continent to generate new consumer products. Fintech sandboxes that bring the public and private sectors together aiming to achieve open data standards should therefore be encouraged.

Artificial intelligence, like other technologies, is neutral. It can be used for social good but also can be diverted for malicious purposes. For both governments and businesses, there must be circumspection and a commitment to use AI responsibly.

China is a cautionary tale. The Communist state currently employs an all-watching system of cameras to enforce round-the-clock citizen surveillance.

By algorithmically rating citizens on a so-called social credit score, Chinas ultra-invasive AI effectively precludes individual freedom, compelling her 1.3 billion people to live strictly by the Politburos ideas of ideal citizenship.

On the other hand, businesses must be ethical in providing transparency to customers about how data is harvested to create products. At the core of all exchange must be trust, and a verifiable, measurable commitment to do no harm.

Doing otherwise condemns modern society to those dystopian days everybody dreads.

How can businesses and governments use Artificial Intelligence to find solutions to challenges facing the continent? Join entrepreneurs, innovators, investors and policymakers in Africas AI community at TechCabals emerging tech townhall. At the event, stakeholders including telcos and financial institutions will examine how businesses, individuals and countries across the continent can maximize the benefits of emerging technologies, specifically AI and Blockchain. Learn more about the event and get tickets here.

Read the original post:
How businesses and governments should embrace AI and Machine Learning - TechCabal

Cisco Enhances IoT Platform with 5G Readiness and Machine Learning – The Fast Mode

Cisco on Friday announced advancements to its IoT portfolio that enable service provider partners to offer optimized management of cellular IoT environments and new 5G use-cases.

Cisco IoT Control Center(formerly Jasper Control Center) is introducing new innovations to improve management and reduce deployment complexity. These include:

Using Machine Learning (ML) to improve management: With visibility into 3 billion events every day, Cisco IoT Control Center uses the industry's broadest visibility to enable machine learning models to quickly identify anomalies and address issues before they impact a customer. Service providers can also identify and alert customers of errant devices, allowing for greater endpoint security and control.

Smart billing to optimize rate plans:Service providers can improve customer satisfaction by enabling Smart billing to automatically optimize rate plans. Policies can also be created to proactively send customer notifications should usage changes or rate plans need to be updated to help save enterprises money.

Support for global supply chains: SIM portability is an enterprise requirement to support complex supply chains spanning multiple service providers and geographies. It is time-consuming and requires integrations between many different service providers and vendors, driving up costs for both. Cisco IoT Control Center now provides eSIM as a service, enabling a true turnkey SIM portability solution to deliver fast, reliable, cost-effective SIM handoffs between service providers.

Cisco IoT Control Center has taken steps towards 5G readiness to incubate and promote high value 5G business use cases that customers can easily adopt.

Vikas Butaney, VP Product Management IoT, CiscoCellular IoT deployments are accelerating across connected cars, utilities and transportation industries and with 5G and Wi-Fi 6 on the horizon IoT adoption will grow even faster. Cisco is investing in connectivity management, IoT networking, IoT security, and edge computing to accelerate the adoption of IoT use-cases.

See the rest here:
Cisco Enhances IoT Platform with 5G Readiness and Machine Learning - The Fast Mode

Machine learning could speed the arrival of ultra-fast-charging electric car – Chemie.de

Using machine learning, a Stanford-led research team has slashed battery testing times - a key barrier to longer-lasting, faster-charging batteries for electric vehicles.

Battery performance can make or break the electric vehicle experience, from driving range to charging time to the lifetime of the car. Now, artificial intelligence has made dreams like recharging an EV in the time it takes to stop at a gas station a more likely reality, and could help improve other aspects of battery technology.

For decades, advances in electric vehicle batteries have been limited by a major bottleneck: evaluation times. At every stage of the battery development process, new technologies must be tested for months or even years to determine how long they will last. But now, a team led by Stanford professors Stefano Ermon and William Chueh has developed a machine learning-based method that slashes these testing times by 98 percent. Although the group tested their method on battery charge speed, they said it can be applied to numerous other parts of the battery development pipeline and even to non-energy technologies.

"In battery testing, you have to try a massive number of things, because the performance you get will vary drastically," said Ermon, an assistant professor of computer science. "With AI, we're able to quickly identify the most promising approaches and cut out a lot of unnecessary experiments."

The study, published by Nature on Feb. 19, was part of a larger collaboration among scientists from Stanford, MIT and the Toyota Research Institute that bridges foundational academic research and real-world industry applications. The goal: finding the best method for charging an EV battery in 10 minutes that maximizes the battery's overall lifetime. The researchers wrote a program that, based on only a few charging cycles, predicted how batteries would respond to different charging approaches. The software also decided in real time what charging approaches to focus on or ignore. By reducing both the length and number of trials, the researchers cut the testing process from almost two years to 16 days.

"We figured out how to greatly accelerate the testing process for extreme fast charging," said Peter Attia, who co-led the study while he was a graduate student. "What's really exciting, though, is the method. We can apply this approach to many other problems that, right now, are holding back battery development for months or years."

Designing ultra-fast-charging batteries is a major challenge, mainly because it is difficult to make them last. The intensity of the faster charge puts greater strain on the battery, which often causes it to fail early. To prevent this damage to the battery pack, a component that accounts for a large chunk of an electric car's total cost, battery engineers must test an exhaustive series of charging methods to find the ones that work best.

The new research sought to optimize this process. At the outset, the team saw that fast-charging optimization amounted to many trial-and-error tests - something that is inefficient for humans, but the perfect problem for a machine.

"Machine learning is trial-and-error, but in a smarter way," said Aditya Grover, a graduate student in computer science who co-led the study. "Computers are far better than us at figuring out when to explore - try new and different approaches - and when to exploit, or zero in, on the most promising ones."

The team used this power to their advantage in two key ways. First, they used it to reduce the time per cycling experiment. In a previous study, the researchers found that instead of charging and recharging every battery until it failed - the usual way of testing a battery's lifetime -they could predict how long a battery would last after only its first 100 charging cycles. This is because the machine learning system, after being trained on a few batteries cycled to failure, could find patterns in the early data that presaged how long a battery would last.

Second, machine learning reduced the number of methods they had to test. Instead of testing every possible charging method equally, or relying on intuition, the computer learned from its experiences to quickly find the best protocols to test.

By testing fewer methods for fewer cycles, the study's authors quickly found an optimal ultra-fast-charging protocol for their battery. In addition to dramatically speeding up the testing process, the computer's solution was also better - and much more unusual - than what a battery scientist would likely have devised, said Ermon.

"It gave us this surprisingly simple charging protocol - something we didn't expect," Ermon said. Instead of charging at the highest current at the beginning of the charge, the algorithm's solution uses the highest current in the middle of the charge. "That's the difference between a human and a machine: The machine is not biased by human intuition, which is powerful but sometimes misleading."

The researchers said their approach could accelerate nearly every piece of the battery development pipeline: from designing the chemistry of a battery to determining its size and shape, to finding better systems for manufacturing and storage. This would have broad implications not only for electric vehicles but for other types of energy storage, a key requirement for making the switch to wind and solar power on a global scale.

"This is a new way of doing battery development," said Patrick Herring, co-author of the study and a scientist at the Toyota Research Institute. "Having data that you can share among a large number of people in academia and industry, and that is automatically analyzed, enables much faster innovation."

The study's machine learning and data collection system will be made available for future battery scientists to freely use, Herring added. By using this system to optimize other parts of the process with machine learning, battery development - and the arrival of newer, better technologies - could accelerate by an order of magnitude or more, he said.

The potential of the study's method extends even beyond the world of batteries, Ermon said. Other big data testing problems, from drug development to optimizing the performance of X-rays and lasers, could also be revolutionized by the use of machine learning optimization. And ultimately, he said, it could even help to optimize one of the most fundamental processes of all.

"The bigger hope is to help the process of scientific discovery itself," Ermon said. "We're asking: Can we design these methods to come up with hypotheses automatically? Can they help us extract knowledge that humans could not? As we get better and better algorithms, we hope the whole scientific discovery process may drastically speed up."

Continue reading here:
Machine learning could speed the arrival of ultra-fast-charging electric car - Chemie.de

Google Teaches AI To Play The Game Of Chip Design – The Next Platform

If it wasnt bad enough that Moores Law improvements in the density and cost of transistors is slowing. At the same time, the cost of designing chips and of the factories that are used to etch them is also on the rise. Any savings on any of these fronts will be most welcome to keep IT innovation leaping ahead.

One of the promising frontiers of research right now in chip design is using machine learning techniques to actually help with some of the tasks in the design process. We will be discussing this at our upcoming The Next AI Platform event in San Jose on March 10 with Elias Fallon, engineering director at Cadence Design Systems. (You can see the full agenda and register to attend at this link; we hope to see you there.) The use of machine learning in chip design was also one of the topics that Jeff Dean, a senior fellow in the Research Group at Google who has helped invent many of the hyperscalers key technologies, talked about in his keynote address at this weeks 2020 International Solid State Circuits Conference in San Francisco.

Google, as it turns out, has more than a passing interest in compute engines, being one of the large consumers of CPUs and GPUs in the world and also the designer of TPUs spanning from the edge to the datacenter for doing both machine learning inference and training. So this is not just an academic exercise for the search engine giant and public cloud contender particularly if it intends to keep advancing its TPU roadmap and if it decides, like rival Amazon Web Services, to start designing its own custom Arm server chips or decides to do custom Arm chips for its phones and other consumer devices.

With a certain amount of serendipity, some of the work that Google has been doing to run machine learning models across large numbers of different types of compute engines is feeding back into the work that it is doing to automate some of the placement and routing of IP blocks on an ASIC. (It is wonderful when an idea is fractal like that. . . .)

While the pod of TPUv3 systems that Google showed off back in May 2018 can mesh together 1,024 of the tensor processors (which had twice as many cores and about a 15 percent clock speed boost as far as we can tell) to deliver 106 petaflops of aggregate 16-bit half precision multiplication performance (with 32-bit accumulation) using Googles own and very clever bfloat16 data format. Those TPUv3 chips are all cross-coupled using a 3232 toroidal mesh so they can share data, and each TPUv3 core has its own bank of HBM2 memory. This TPUv3 pod is a huge aggregation of compute, which can do either machine learning training or inference, but it is not necessarily as large as Google needs to build. (We will be talking about Deans comments on the future of AI hardware and models in a separate story.)

Suffice it to say, Google is hedging with hybrid architectures that mix CPUs and GPUs and perhaps someday other accelerators for reinforcement learning workloads, and hence the research that Dean and his peers at Google have been involved in that are also being brought to bear on ASIC design.

One of the trends is that models are getting bigger, explains Dean. So the entire model doesnt necessarily fit on a single chip. If you have essentially large models, then model parallelism dividing the model up across multiple chips is important, and getting good performance by giving it a bunch of compute devices is non-trivial and it is not obvious how to do that effectively.

It is not as simple as taking the Message Passing Interface (MPI) that is used to dispatch work on massively parallel supercomputers and hacking it onto a machine learning framework like TensorFlow because of the heterogeneous nature of AI iron. But that might have been an interesting way to spread machine learning training workloads over a lot of compute elements, and some have done this. Google, like other hyperscalers, tends to build its own frameworks and protocols and datastores, informed by other technologies, of course.

Device placement meaning, putting the right neural network (or portion of the code that embodies it) on the right device at the right time for maximum throughput in the overall application is particularly important as neural network models get bigger than the memory space and the compute oomph of a single CPU, GPU, or TPU. And the problem is getting worse faster than the frameworks and hardware can keep up. Take a look:

The number of parameters just keeps growing and the number of devices being used in parallel also keeps growing. In fact, getting 128 GPUs or 128 TPUv3 processors (which is how you get the 512 cores in the chart above) to work in concert is quite an accomplishment, and is on par with the best that supercomputers could do back in the era before loosely coupled, massively parallel supercomputers using MPI took over and federated NUMA servers with actual shared memory were the norm in HPC more than two decades ago. As more and more devices are going to be lashed together in some fashion to handle these models, Google has been experimenting with using reinforcement learning (RL), a special subset of machine learning, to figure out where to best run neural network models at any given time as model ensembles are running on a collection of CPUs and GPUs. In this case, an initial policy is set for dispatching neural network models for processing, and the results are then fed back into the model for further adaptation, moving it toward more and more efficient running of those models.

In 2017, Google trained an RL model to do this work (you can see the paper here) and here is what the resulting placement looked like for the encoder and decoder, and the RL model to place the work on the two CPUs and four GPUs in the system under test ended up with 19.3 percent lower runtime for the training runs compared to the manually placed neural networks done by a human expert. Dean added that this RL-based placement of neural network work on the compute engines does kind of non-intuitive things to achieve that result, which is what seems to be the case with a lot of machine learning applications that, nonetheless, work as well or better than humans doing the same tasks. The issue is that it cant take a lot of RL compute oomph to place the work on the devices to run the neural networks that are being trained themselves. In 2018, Google did research to show how to scale computational graphs to over 80,000 operations (nodes), and last year, Google created what it calls a generalized device placement scheme for dataflow graphs with over 50,000 operations (nodes).

Then we start to think about using this instead of using it to place software computation on different computational devices, we started to think about it for could we use this to do placement and routing in ASIC chip design because the problems, if you squint at them, sort of look similar, says Dean. Reinforcement learning works really well for hard problems with clear rules like Chess or Go, and essentially we started asking ourselves: Can we get a reinforcement learning model to successfully play the game of ASIC chip layout?

There are a couple of challenges to doing this, according to Dean. For one thing, chess and Go both have a single objective, which is to win the game and not lose the game. (They are two sides of the same coin.) With the placement of IP blocks on an ASIC and the routing between them, there is not a simple win or lose and there are many objectives that you care about, such as area, timing, congestion, design rules, and so on. Even more daunting is the fact that the number of potential states that have to be managed by the neural network model for IP block placement is enormous, as this chart below shows:

Finally, the true reward function that drives the placement of IP blocks, which runs in EDA tools, takes many hours to run.

And so we have an architecture Im not going to get a lot of detail but essentially it tries to take a bunch of things that make up a chip design and then try to place them on the wafer, explains Dean, and he showed off some results of placing IP blocks on a low-powered machine learning accelerator chip (we presume this is the edge TPU that Google has created for its smartphones), with some areas intentionally blurred to keep us from learning the details of that chip. We have had a team of human experts places this IP block and they had a couple of proxy reward functions that are very cheap for us to evaluate; we evaluated them in two seconds instead of hours, which is really important because reinforcement learning is one where you iterate many times. So we have a machine learning-based placement system, and what you can see is that it sort of spreads out the logic a bit more rather than having it in quite such a rectangular area, and that has enabled it to get improvements in both congestion and wire length. And we have got comparable or superhuman results on all the different IP blocks that we have tried so far.

Note: I am not sure we want to call AI algorithms superhuman. At least if you dont want to have it banned.

Anyway, here is how that low-powered machine learning accelerator for the RL network versus people doing the IP block placement:

And here is a table that shows the difference between doing the placing and routing by hand and automating it with machine learning:

And finally, here is how the IP block on the TPU chip was handled by the RL network compared to the humans:

Look at how organic these AI-created IP blocks look compared to the Cartesian ones designed by humans. Fascinating.

Now having done this, Google then asked this question: Can we train a general agent that is quickly effective at placing a new design that it has never seen before? Which is precisely the point when you are making a new chip. So Google tested this generalized model against four different IP blocks from the TPU architecture and then also on the Ariane RISC-V processor architecture. This data pits people working with commercial tools and various levels tuning on the model:

And here is some more data on the placement and routing done on the Ariane RISC-V chips:

You can see that experience on other designs actually improves the results significantly, so essentially in twelve hours you can get the darkest blue bar, Dean says, referring to the first chart above, and then continues with the second chart above. And this graph showing the wireline costs where we see if you train from scratch, it actually takes the system a little while before it sort of makes some breakthrough insight and was able to significantly drop the wiring cost, where the pretrained policy has some general intuitions about chip design from seeing other designs and people that get to that level very quickly.

Just like we do ensembles of simulations to do better weather forecasting, Dean says that this kind of AI-juiced placement and routing of IP block sin chip design could be used to quickly generate many different layouts, with different tradeoffs. And in the event that some feature needs to be added, the AI-juiced chip design game could re-do a layout quickly, not taking months to do it.

And most importantly, this automated design assistance could radically drop the cost of creating new chips. These costs are going up exponentially, and data we have seen (thanks to IT industry luminary and Arista Networks chairman and chief technology officer Andy Bechtolsheim), an advanced chip design using 16 nanometer processes cost an average of $106.3 million, shifting to 10 nanometers pushed that up to $174.4 million, and the move to 7 nanometers costs $297.8 million, with projections for 5 nanometer chips to be on the order of $542.2 million. Nearly half of that cost has been and continues to be for software. So we know where to target some of those costs, and machine learning can help.

The question is will the chip design software makers embed AI and foster an explosion in chip designs that can be truly called Cambrian, and then make it up in volume like the rest of us have to do in our work? It will be interesting to see what happens here, and how research like that being done by Google will help.

See original here:
Google Teaches AI To Play The Game Of Chip Design - The Next Platform

Machine learning is making NOAA’s efforts to save ice seals and belugas faster – FedScoop

Written by Dave Nyczepir Feb 19, 2020 | FEDSCOOP

National Oceanic and Atmospheric Administration scientists are preparing to use machine learning (ML) to more easily monitor threatened ice seal populations in Alaska between April and May.

Ice flows are critical to seal life cycles but are melting due to climate change which has hit the Arctic and sub-Arctic regions hardest. So scientists are trying to track species population distributions.

But surveying millions of aerial photographs of sea ice a year for ice seals takes months. And the data is outdated by the time statisticians analyze it and share it with the NOAA assistant regional administrator for protected resources in Juneau, according to aMicrosoft blog post.

NOAAs Juneau office oversees conservation and recovery programs for marine mammals statewide and can instruct other agencies to limit permits for activities that might hurt species feeding or breeding. The faster NOAA processes scientific data, the faster it can implement environmental sustainability policies.

The amazing thing is how consistent these problems are from scientist to scientist, Dan Morris, principal scientist and program director of MicrosoftAI for Earth, told FedScoop.

To speed up monitoring from months to mere hours, NOAAs Marine Mammal Laboratory partnered with AI for Earth in the summer of 2018 to develop ML models recognizing seals in real-time aerial photos.

The models were trained during a one-week hackathon using 20 terabytes of historical survey data in the cloud.

In 2007, the first NOAA survey done by helicopter captured about 90,000 images that took months to analyze and find 200 seals. The challenge isthe seals are solitary, and aircraft cant fly so low as to spook them. But still, scientists need images to capture the difference between threatened bearded and ringed seals and unthreatened spotted and ribbon seals.

Alaskas rainy, cloudy climate has led scientists to adopt thermal and color cameras, but dirty ice and reflections continue to interfere. A 2016 survey of 1 million sets of images took three scientists six months to identify about 316,000 seal hotspots.

Microsofts ML, on the other hand, can distinguish seals from rocks and, coupled with improved cameras on a NOAA turboprop airplane, will be used in flyovers of the Beaufort Sea this spring.

NOAA released a finalized Artificial Intelligence Strategy on Tuesday aimed at reducing the cost of data processing and incorporating AI into scientific technologies and services addressing mission priorities.

Theyre a very mature organization in terms of thinking about incorporating AI into remote processing of their data, Morris said.

The camera systems on NOAA planes are also quite sophisticated because the agencys forward-thinking ecologists are assembling the best hardware, software and expertise for their biodiversity surveys, he added.

While the technical integration of AI for Earths models with the software systems on NOAAs planes has taken a year to perfect, another agency project was able to apply a similar algorithm more quickly.

The Cook Inlets endangered beluga whale population numbered 279 last year down from about 1,000three decades ago.

Belugas increasingly rely on echolocation to communicate with sediment from melting glaciers dirtying the water they live in. But the noise from an increasing number of cargo ships and military and commercial flights can disorient the whales. Calves can get lost if they cant hear their mothers clicks and whistles, and adults cant catch prey or identify predators.

NOAA is using ML tools to distinguish a whales whistle from man-made noises and identify areas where theres dangerous overlap, such as where belugas feed and breed. The agency can then limit construction or transportation during those periods, according to the blog post.

Previously, the projects 15 mics recorded sounds for six months along the seafloor, scientists collected the data, and then they spent the remainder of the year classifying noises to determine how the belugas spent their time.

AI for Earths algorithms matched scientists previously classified logs 99 percent of the time last fall and have been since introduced into the field.

The ML was implemented faster than the seal projects because the software runs offline at a lab in Seattle, so integration was easier, Morris said.

NOAA intends to employ ML in additional biodiversity surveys. And AI for Earth plans to announce more environmental sustainability projects in the acoustic space in the coming weeks, Morris added, thoughhe declined to name partners.

More here:
Machine learning is making NOAA's efforts to save ice seals and belugas faster - FedScoop

Syniverse and RealNetworks Collaboration Brings Kontxt-Based Machine Learning Analytics to Block Spam and Phishing Text Messages – Business Wire

TAMPA, Fla. & SEATTLE--(BUSINESS WIRE)--Syniverse, the worlds most connected company, and RealNetworks, a leader in digital media software and services, today announced they have incorporated sophisticated machine learning (ML) features into their integrated offering that gives carriers visibility and control over mobile messaging traffic. By integrating RealNetworks Kontxt application-to-person (A2P) message categorization capabilities into Syniverse Messaging Clarity, mobile network operators (MNOs), internet service providers (ISPs), and messaging aggregators can identify and block spam, phishing, and malicious messages by prioritizing legitimate A2P traffic, better monetizing their service.

Syniverse Messaging Clarity, the first end-to-end messaging visibility solution, utilizes the best-in-class grey route firewall, and clearing and settlement tools to maximize messaging revenue streams, better control spam traffic, and closely partner with enterprises. The solution analyzes the delivery of messages before categorizing them into specific groupings, including messages being sent from one person to another person (P2P), A2P messages, or outright spam. Through its existing clearing and settlement capabilities, Messaging Clarity can transform upcoming technologies like Rich Communication Services (RCS) and chatbots into revenue-generating products and services without the clutter and cost of spam or fraud.

The foundational Kontxt technology adds natural language processing and deep learning techniques to Messaging Clarity to continually update and improve its understanding of messages and clarification. This new feature adds to Messaging Claritys ability to identify, categorize, and ascribe a monetary value to the immense volume and complexity of messages that are delivered through text messaging, chatbots, and other channels.

The Syniverse and RealNetworks Kontxt message classification provides companies the ability to ensure that urgent messages, like one-time passwords, are sent at a premium rate compared with lower-priority notifications, such as promotional offers. The Syniverse Messaging Clarity solution also helps eliminate instances of extreme message spam phishing (smishing). This type of attack recently occurred with a global shipping company when spam texts were sent to consumers with the request to click a link to receive an update on a package delivery for a phantom order.

CLICK TO TWEET: Block #spam and categorize & prioritize #textmessages with @Syniverse & @RealNetworks #Kontxt. #MNO #ISPs #Messaging #MachineLearning #AI http://bit.ly/2HalZkv

Supporting Quotes

Syniverse offers companies the capability to use machine learning technologies to gain insight into what traffic is flowing through their networks, while simultaneously ensuring consumer privacy and keeping the actual contents of the messages hidden. The Syniverse Messaging Clarity solution can generate statistics examining the type of traffic sent and whether it deviates from the senders traffic pattern. From there, the technology analyzes if the message is a valid one or spam and blocks the spam.

The self-learning Kontxt algorithms within the Syniverse Messaging Clarity solution allow its threat-assessment techniques to evolve with changes in message traffic. Our analytics also verify that sent messages conform to network standards pertaining to spam and fraud. By deploying Messaging Clarity, MNOs and ISPs can help ensure their compliance with local regulations across the world, including the U.S. Telephone Consumer Protection Act, while also avoiding potential costs associated with violations. And, ultimately, the consumer -- who is the recipient of more appropriate text messages and less spam -- wins as well, as our Kontxt technology within the Messaging Clarity solution works to enhance customer trust and improve the overall customer experience.

Digital Assets

Supporting Resources

About Syniverse

As the worlds most connected company, Syniverse helps mobile operators and businesses manage and secure their mobile and network communications, driving better engagements and business outcomes. For more than 30 years, Syniverse has been the trusted spine of mobile communications by delivering the industry-leading innovations in software and services that now connect more than 7 billion devices globally and process over $35 billion in mobile transactions each year. Syniverse is headquartered in Tampa, Florida, with global offices in Asia Pacific, Africa, Europe, Latin America and the Middle East.

About RealNetworks

Building on a legacy of digital media expertise and innovation, RealNetworks has created a new generation of products that employ best-in-class artificial intelligence and machine learning to enhance and secure our daily lives. Kontxt (www.kontxt.com) is the foremost platform for categorizing A2P messages to help mobile carriers build customer loyalty and drive new revenue through text message classification and antispam. SAFR (www.safr.com) is the worlds premier facial recognition platform for live video. Leading in real world performance and accuracy as tested by NIST, SAFR enables new applications for security, convenience, and analytics. For information about our other products, visit http://www.realnetworks.com.

RealNetworks, Kontxt, SAFR and the company's respective logos are trademarks, registered trademarks, or service marks of RealNetworks, Inc. Other products and company names mentioned are the trademarks of their respective owners.

Results shown from NIST do not constitute an endorsement of any particular system, product, service, or company by NIST: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt-ongoing.

Go here to see the original:
Syniverse and RealNetworks Collaboration Brings Kontxt-Based Machine Learning Analytics to Block Spam and Phishing Text Messages - Business Wire

Pluto7, a Google Cloud Premier Partner, Achieved the Machine Learning Specialization and is Recognized by Google Cloud as a Machine Learning…

Pluto7 is a services and solutions company focused on accelerating business transformation. As a Google Cloud Premier Partner, we service the retail, manufacturing, healthcare, and hi-tech industries.

Pluto7 just achieved the Google Cloud Machine Learning Specialization for combining business consultancy and unique machine learning solutions built on Google Cloud.

Pluto7 brings unique capabilities for machine learning, artificial intelligence, and analytics, delivered by a company that gathers some of the finest minds in data science and draws on its surroundings in the heart of Silicon Valley, California.

Businesses are looking for practical solutions to real-world challenges. That does not mean simply providing the technology and leaving customers to stitch it all together. Instead, Pluto7's approach is to apply innovation to the desired outcome, alongside the experience needed to make it all happen. This is where their range of consultancy services comes into play: these are designed to create an interconnected tech stack and to champion data empowerment through ML/AI.

Pluto7's services and solutions allow businesses to speed up and scale out sophisticated machine learning models. They have successfully guided many businesses through digital transformation by leveraging the power of artificial intelligence, analytics, and IoT solutions.

What does it mean for a partner to be specialized?

When you see a Google Cloud partner with a Specialization, it indicates proficiency and experience with Google Cloud. Pluto7 is recognized by Google Cloud as a machine learning specialist with deep technical capabilities. Organizations that receive this distinction demonstrate their ability to lead a customer through the entire AI journey. Pluto7 designs, builds, migrates, tests, and operates industry-specific solutions for their customers.

Pluto7 has extensive experience deploying accelerated solutions and custom applications in machine learning and AI. Proven success stories from industry leaders such as ABinBev, DxTerity, L-Nutra, CDD, USC, and UNM are publicly available on their website. These customers have leveraged Pluto7 and Google Cloud technology to achieve tangible and transformative results.

On top of all this, Pluto7 has a business plan that aligns with the Specialization. Because of their design, build, and implementation methodologies, they are able to drive innovation, accelerate business transformation, and boost human creativity.

ML Services and Solutions

Pluto7 has created industry-specific use cases for marketing, sales, and supply chains and integrated these to deliver a game-changing customer experience. These capabilities are brought to life through their partnership with Google Cloud, one of the most innovative platforms for AI and ML. The following solution suites are designed to solve some of the most difficult problems through a combination of innovative technology and deep industry expertise.

Demand ML - Increase efficiency and lower costs

Pluto7 helps supply chain leaders manage complex and unpredictable fluctuations in demand. These solutions allow businesses to achieve demand forecast accuracy of more than 90% while delivering the right product at the right time -- all using AI to predict and recommend based on real-time data at scale.
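For readers curious what a demand forecast with a measurable accuracy looks like in code, here is a minimal sketch using a seasonal-naive baseline and MAPE on toy numbers; Pluto7's production models on Google Cloud are, of course, far more sophisticated.

```python
# Illustrative sketch only: a seasonal-naive demand forecast with a simple
# accuracy check (MAPE). Toy data; not Pluto7's proprietary models.

def seasonal_naive_forecast(history: list[float], season_length: int, horizon: int) -> list[float]:
    """Forecast each future period as the value observed one season earlier."""
    return [history[-season_length + (i % season_length)] for i in range(horizon)]

def mape(actual: list[float], forecast: list[float]) -> float:
    """Mean absolute percentage error; 100 - MAPE is a rough 'forecast accuracy'."""
    return 100 * sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)

# Weekly demand with a 4-week seasonal pattern (toy numbers).
history = [120, 135, 150, 110, 125, 138, 155, 112]
forecast = seasonal_naive_forecast(history, season_length=4, horizon=4)
actual_next = [123, 140, 151, 115]
print("forecast:", forecast)
print("accuracy: %.1f%%" % (100 - mape(actual_next, forecast)))
```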

Preventive Maintenance - Improve quality, production and reduce associated costs

Pluto7 improves the production efficiency of plants from 45-80% to reduce downtime and maintain quality. They leverage machine learning and predictive analytics to determine the remaining value of assets and accurately predict when a manufacturing plant, machine, component, or part is likely to fail and thus needs to be replaced.
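A hedged sketch of the underlying idea: train a classifier on sensor readings to estimate whether a component will fail within the next maintenance window. The features, data, and model below are generic scikit-learn placeholders, not Pluto7's actual solution.

```python
# Illustrative sketch only: predicting whether a machine component will fail
# within the next maintenance window from sensor readings. Generic scikit-learn
# example with toy data, not Pluto7's actual solution.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy sensor data: [vibration (mm/s), temperature (C), operating hours since service]
X = np.array([
    [2.1, 60, 100], [2.3, 62, 300], [4.8, 85, 900], [5.1, 90, 1100],
    [2.0, 58, 150], [4.5, 88, 950], [2.4, 61, 250], [5.3, 92, 1200],
])
y = np.array([0, 0, 1, 1, 0, 1, 0, 1])  # 1 = failed within the next window

model = LogisticRegression().fit(X, y)

new_reading = np.array([[4.9, 87, 1000]])
prob_failure = model.predict_proba(new_reading)[0, 1]
print(f"Estimated failure probability: {prob_failure:.2f}")  # schedule maintenance if high
```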

Marketing ML - Increase marketing ROI

Pluto7's marketing solutions improve click-through rates and predict traffic accurately. Pluto7 can help you analyze marketing data in real time to transform prospect and customer engagement with hyper-personalization. Businesses are able to leverage machine learning for better customer segmentation, campaign targeting, and content optimization.
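As an illustration of the segmentation building block, the following sketch clusters toy customer features with k-means; the feature set and cluster count are assumptions for the example, not Pluto7's methodology.

```python
# Illustrative sketch only: simple customer segmentation with k-means, the kind
# of building block behind ML-driven campaign targeting. Toy data and features.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Toy customer features: [annual spend ($), purchases per year, email click rate]
customers = np.array([
    [200, 2, 0.01], [250, 3, 0.02], [5000, 40, 0.30],
    [4800, 35, 0.25], [1200, 12, 0.10], [1100, 10, 0.12],
])

features = StandardScaler().fit_transform(customers)
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print(segments)  # e.g., low-value, mid-value, and high-value segments
```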

Contact Pluto7

If you would like to begin your AI journey, Pluto7 recommends starting with a discovery workshop. This workshop is co-driven by Pluto7 and Google Cloud to understand business pain points and set up a strategy to begin solving them. Visit the website at http://www.pluto7.com and contact us to get started today!

View source version on businesswire.com: https://www.businesswire.com/news/home/20200219005054/en/

Contacts

Sierra Shepard
Global Marketing Team
marketing@pluto7.com

Excerpt from:
Pluto7, a Google Cloud Premier Partner, Achieved the Machine Learning Specialization and is Recognized by Google Cloud as a Machine Learning...

Machine learning and clinical insights: building the best model – Healthcare IT News

At HIMSS20 next month, two machine learning experts will show how machine learning algorithms are evolving to handle complex physiological data and drive more detailed clinical insights.

During surgery and other critical care procedures, continuous monitoring of blood pressure to detect and avoid the onset of arterial hypotension is crucial. New machine learning technology developed by Edwards Lifesciences has proven to be an effective means of doing this.

In the prodromal stage of hemodynamic instability, subtle, complex changes across different physiologic variables form unique dynamic arterial waveform "signatures," which require machine learning and complex feature-extraction techniques to detect.

Feras Hatib, director of research and development for algorithms and signal processing at Edwards Lifesciences, explained that his team developed a technology that can continuously predict, in real time, upcoming hypotension in acute-care patients using arterial pressure waveforms.

"We used an arterial pressure signal to create hemodynamic features from that waveform, and we try to assess the state of the patient by analyzing those signals," said Hatib, who is scheduled to speak about his work at HIMSS20.
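To make the idea of hemodynamic features derived from a waveform concrete, here is a toy sketch that computes a few simple beat-level features from a synthetic arterial pressure trace. The feature set is illustrative only; Edwards' actual algorithm extracts far richer features than these.

```python
# Illustrative sketch only: deriving a few simple hemodynamic features from one
# beat of an arterial pressure waveform. A toy stand-in, not Edwards' pipeline.

import numpy as np

def beat_features(pressure: np.ndarray, fs: float) -> dict:
    """Compute basic features for a single-beat pressure trace sampled at fs Hz."""
    systolic = float(pressure.max())
    diastolic = float(pressure.min())
    mean_arterial = float(pressure.mean())
    # Maximum upstroke slope (dP/dt max), a rough contractility-related feature.
    dpdt_max = float(np.max(np.diff(pressure)) * fs)
    return {
        "systolic": systolic,
        "diastolic": diastolic,
        "map": mean_arterial,
        "pulse_pressure": systolic - diastolic,
        "dpdt_max": dpdt_max,
    }

# Synthetic one-beat waveform (100 Hz): diastolic baseline plus a systolic upstroke.
fs = 100.0
t = np.linspace(0, 0.8, int(0.8 * fs))
beat = 80 + 40 * np.exp(-((t - 0.15) ** 2) / 0.005)
print(beat_features(beat, fs))
```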

His team's success offers real-world evidence of how advanced analytics can inform clinical practice by training and validating machine learning algorithms on complex physiological data.

Machine learning approaches were applied to arterial waveforms to develop an algorithm that observes subtle signs to predict hypotension episodes.
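A generic sketch of that workflow appears below: a classifier trained on beat-level features to flag an upcoming hypotension episode. The data, features, and model are synthetic placeholders and not Edwards Lifesciences' proprietary algorithm.

```python
# Illustrative sketch only: training a generic classifier on beat-level waveform
# features to flag an upcoming hypotension episode. Synthetic data and labels;
# not Edwards Lifesciences' proprietary algorithm.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic feature matrix: [MAP, pulse pressure, dP/dt max] per beat window.
n = 400
X = np.column_stack([
    rng.normal(75, 10, n),    # mean arterial pressure (mmHg)
    rng.normal(45, 8, n),     # pulse pressure (mmHg)
    rng.normal(900, 150, n),  # dP/dt max (mmHg/s)
])
# Synthetic label: hypotension episode within the next 15 minutes (toy rule + noise).
y = ((X[:, 0] < 70) & (X[:, 2] < 900)) | (rng.random(n) < 0.05)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy: %.2f" % clf.score(X_test, y_test))
```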

In addition, real-world evidence and advanced data analytics were leveraged to quantify the association between the duration of hypotension exposure at various thresholds and morbidity and mortality outcomes in critically ill sepsis patients.
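The exposure quantity in such analyses can be computed straightforwardly; the sketch below totals the minutes a monitored mean arterial pressure spends below a given threshold, using toy data and an assumed one-minute sampling interval.

```python
# Illustrative sketch only: computing cumulative hypotension exposure (time below
# a MAP threshold) from a monitored time series, the kind of quantity such
# outcome analyses relate to morbidity and mortality.

def exposure_minutes(map_values: list[float], threshold: float, sample_minutes: float = 1.0) -> float:
    """Total minutes the mean arterial pressure spent below the threshold."""
    return sum(sample_minutes for v in map_values if v < threshold)

# MAP sampled once per minute (toy data).
map_series = [78, 74, 68, 63, 61, 64, 70, 75, 72, 66]
for threshold in (65, 60, 55):
    print(f"minutes below MAP {threshold} mmHg:", exposure_minutes(map_series, threshold))
```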

"This technology has been in Europe for at least three years, and it has been used on thousands of patients, and has been available in the US for about a year now," he noted.

Hatib noted that similar machine learning models could provide physicians and specialists with information to help prevent readmissions, inform other treatment options, or help prevent conditions such as delirium, all of which are current areas of active development.

"In addition to blood pressure, machine learning could find a great use in the ICU, in predicting sepsis, which is critical for patient survival," he noted. "Being able to process that data in the ICU or in the emergency department, that would be a critical area to use these machine learning analytics models."

Hatib pointed out that the way in which data is annotated, in his case, defining what is and is not hypotension, is essential to building the machine learning model.

"The way you label the data, and what data you include in the training is critical," he said. "Even if you have thousands of patients and include the wrong data, that isnt going to help its a little bit of an art to finding the right data to put into the model."

On the clinical side, it's important to tell the clinician what the issue is, in this case, what is causing the hypotension.

"You need to provide to them the reasons that could be causing the hypertension this is why we complimented the technology with a secondary screen telling the clinician what is physiologically is causing hypertension," he explained. "Helping them decide what do to about it was a critical factor."

Hatib said in the future machine learning will be everywhere, because scientists and universities across the globe are hard at work developing machine learning models to predict clinical conditions.

"The next big step I see is going toward using this ML techniques where the machine takes care of the patient and the clinician is only an observer," he said.

Feras Hatib, along with Sibyl Munson of Boston Strategic Partners, will share machine learning best practices at HIMSS20 in a session, "Building a Machine Learning Model to Drive Clinical Insights." It's scheduled for Wednesday, March 11, from 8:30-9:30 a.m. in room W304A.

The rest is here:
Machine learning and clinical insights: building the best model - Healthcare IT News