Using Machine Learning to Predict Which COVID-19 Patients Will Get Worse – Michigan Medicine

A patient with COVID-19 enters the hospital struggling to breathe. Their healthcare team decides to admit them. Will they be one of the fortunate ones who steadily improve and are soon discharged? Or will they end up needing mechanical ventilation?

That question may be easier to answer, thanks to a recent study from Michigan Medicine describing an algorithm to predict which patients are likely to quickly deteriorate while hospitalized.

"You can see large variability in how different patients with COVID-19 do, even among close relatives with similar environments and genetic risk," says Nicholas J. Douville, M.D., Ph.D., of the Department of Anesthesiology, one of the study's lead authors. "At the peak of the surge, it was very difficult for clinicians to know how to plan and allocate resources."

Combining data science and their collective experiences caring for COVID-19 patients in the intensive care unit, Douville, Milo Engoren, M.D., and their colleagues explored the potential of predictive machine learning. They looked at a set of patients with COVID-19 hospitalized during the first pandemic surge from March to May 2020 and modeled their clinical course.

The team generated an algorithm with inputs such as a patients age, whether they had underlying medical conditions and what medications they were on when entering the hospital, as well as variables that changed while hospitalized, including vital signs like blood pressure, heart rate and oxygenation ratio, among others.

Their question: which of these data points best predicted which patients would decompensate and require mechanical ventilation or die within 24 hours?

Of the 398 patients in their study, 93 required a ventilator or died within two weeks. The model was able to predict mechanical ventilation most accurately based upon key vital signs, including oxygen saturation ratio (SpO2/FiO2), respiratory rate, heart rate, blood pressure and blood glucose level.

The team assessed the data points of interest at 4-, 8-, 24- and 48-hour increments, in an attempt to identify how far in advance they could predict, and intervene, before a patient deteriorates.
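The modelling setup described above can be sketched in a few lines. This is an illustrative reconstruction only, not the study's code: the synthetic data, the exact feature set and the choice of a plain logistic regression are all assumptions made for demonstration.

```python
# Illustrative reconstruction only: synthetic vitals and a plain logistic
# regression stand in for the study's actual model and patient data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400

# Synthetic vital signs: SpO2/FiO2 ratio, respiratory rate, heart rate,
# mean arterial pressure, blood glucose.
X = np.column_stack([
    rng.normal(300, 80, n),
    rng.normal(20, 5, n),
    rng.normal(90, 15, n),
    rng.normal(85, 12, n),
    rng.normal(130, 40, n),
])

# Toy outcome: deterioration is more likely with poor oxygenation and a
# high respiratory rate.
logits = -0.02 * (X[:, 0] - 300) + 0.15 * (X[:, 1] - 20) - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Per-patient probability of deterioration within the prediction window:
# the kind of single interpretable risk value the article describes.
risk = model.predict_proba(X_te)[:, 1]
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")
```

The same model could be refit on features observed 4, 8, 24 or 48 hours before the event to measure how discrimination degrades with lead time.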

"The closer we were to the event, the higher our ability to predict, which we expected.But we were still able to predict the outcomes with good discrimination at 48 hours, giving providers time to make alterations to the patients care or to mobilize resources, says Douville.

For instance, the algorithm could quickly identify a patient on a general medical floor who would be a good candidate for transfer to the ICU, before their condition deteriorated to the point where ventilation would be more difficult.

In the long term, Douville and his colleagues hope the algorithm can be integrated into existing clinical decision support tools already used in the ICU. In the short term, the study brings to light patient characteristics that clinicians caring for patients with COVID-19 should keep in the back of their minds. The work also raises new questions about which COVID-19 therapies, such as anti-coagulants or anti-viral drugs, may or may not alter a patient's clinical trajectory.

Says Douville, "While many of our model features are well known to experienced clinicians, the utility of our model is that it performs a more complex calculation than the clinician could perform on the back of the envelope. It also distills the overall risk to an easily interpretable value, which can be used to flag patients so they are not missed."

Paper cited: Clinically Applicable Approach for Predicting Mechanical Ventilation in Patients with COVID-19, British Journal of Anaesthesia. DOI: 10.1016/j.bja.2020.11.03


Machine learning and predictive analytics work better together – TechTarget

Like many AI technologies, the difference between machine learning and predictive analytics lies in applications and use cases. Machine learning's ability to learn from previous data sets and stay nimble lends itself to diverse applications like neural networks or image detection, while predictive analytics' narrow focus is on forecasting specific target variables.

Instead of implementing one type of AI or choosing between the two strategies, companies that want to get the most out of their data should combine the processing power of predictive analytics and machine learning.

Artificial intelligence is the replication of human intelligence by machines. This includes numerous technologies such as robotic process automation (RPA), natural language processing (NLP) and machine learning. These diverse technologies each replicate human abilities but often operate differently in order to accomplish their specific tasks.

Machine learning is a form of AI that allows software applications to become progressively more accurate at prediction without being expressly programmed to do so. The algorithms applied to machine learning programs and software are created to be versatile and allow developers to make changes via hyperparameter tuning. The machine 'learns' by processing large amounts of data and detecting patterns within this set. Machine learning is the foundation for advanced technologies like deep learning, neural networks and autonomous vehicle operation.
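As a concrete illustration of a program 'learning' from data and of hyperparameter tuning, here is a minimal scikit-learn sketch; the bundled digits dataset and the two k values are arbitrary choices for demonstration.

```python
# Minimal illustration of learning from data and hyperparameter tuning.
# Dataset and parameter values are chosen purely for demonstration.
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)

# Same algorithm, two hyperparameter settings: "tuning" means comparing
# validated scores like these and keeping the better configuration.
for k in (1, 15):
    score = cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=5).mean()
    print(f"k={k}: mean cross-validated accuracy {score:.3f}")
```

No rule for recognizing digits is programmed anywhere; the accuracy comes entirely from patterns detected in the training data.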

Machine learning can increase the speed at which data is processed and analyzed and is a clear candidate through which AI and predictive analytics can coalesce. Using machine learning, algorithms can train on even larger data sets and perform deeper analysis on multiple variables with minor changes in deployment.

Machine learning and AI have become enterprise staples, and the debate over their value is obsolete in the eyes of Gartner analyst Whit Andrews. In years prior, operationalizing machine learning required a difficult transition for organizations, but the technology has now been implemented successfully in numerous industries, thanks to the popularity of open source and proprietary machine learning development software.

"Machine learning is easier to use now by far than it was five years ago," Andrews said. "And it's also likely to be more familiar to the organization's business leaders."

As a form of advanced analytics, predictive analytics uses new and historical data to forecast behaviors and trends.

Software applications of predictive analytics use variables that can be analyzed to predict likely future behavior, whether for individual consumers, machinery or sales trends. This form of analytics typically requires expertise in statistical methods and is therefore commonly the domain of data scientists, data analysts and statisticians -- and it requires significant oversight to function well.

For Gartner analyst Andrew White, the crucial piece of deploying predictive analytics is strong business leadership. To see successful implementation, enterprises need to use predictive analytics and data to continually improve business processes. Decisions and outcomes need to be based on the data analytics, which requires a hands-on data science team.

Because predictive models are often built from smaller training samples and have limited capacity for learning, White stressed the importance of quality training data. Predictive models and the data they use need to be equally well tuned; treating either the analytics or the data as the main player is a mistake in White's eyes.

"The reality is [data and analytical models] are equal," White said. "You need to have ownership or leadership around prioritizing and governing data as much as you have the same for analytics, because analytics is just the last mile."

Data-rich enterprises have established successful applications for both machine learning and predictive analytics.

Retailers are among the most prominent users of predictive analytics tools, applying them to spot website user trends, hyperpersonalize ads and target emails. Massive amounts of data collected from points of sale, retail apps, social media, in-store sensors and voluntary email lists provide insights on sales forecasting, customer experience management, inventory and supply chain.

Another popular application of predictive analytics is predictive maintenance. Manufacturers use predictive analytics to monitor their equipment and machinery and predict when they need to replace or repair valuable pieces.

Predictive analytics is also popularly deployed in risk management, fraud and security, and healthcare applications across enterprises.

Machine learning, on the other hand, has a wider variety of applications, from customer relationship management to self-driving cars. These algorithms appear in human resource information systems to identify candidates, in software sold by business intelligence and analytics vendors, and in customer relationship management systems.

In businesses, the most popular machine learning applications include chatbots, recommendation engines, market research and image recognition.

Enterprise trend applications are where predictive analytics and AI can converge. Maintaining best data practices as well as focusing on combining the powers of machine learning and predictive analytics is the only way for organizations to keep themselves at the cutting edge of predictive forecasting.

Machine learning algorithms can produce more accurate predictions, create cleaner data and empower predictive analytics to work faster and provide more insight with less oversight. In turn, a strong predictive analysis model and clean data fuel the machine learning application. While combining the two does not necessarily yield more applications, it does mean the applications that exist can be trusted more. Looked at closely, the two terms are hierarchical rather than competing, and when combined they complement one another to strengthen the enterprise.


LPA announce VisiRule FastChart to combine Machine Learning and rule-based expert systems – PR Web

LONDON (PRWEB) November 02, 2020

VisiRule FastChart is an exciting new addition to the VisiRule family of visual AI expert system tools.

VisiRule FastChart can automatically interpret decision trees and use them to auto-construct a VisiRule chart without any user involvement. This means that historical data can be used to create VisiRule charts.

For example, given a historical log of machine data and fault records, a decision tree can be induced which, when exported to VisiRule FastChart, leads to a visual model being built in VisiRule. This chart can then be used to predict future occurrences based on current data.
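The general idea, inducing a decision tree from a historical fault log and exporting its rules, can be sketched with scikit-learn. This is only an analogue: the machine readings are toy values, and VisiRule's own chart format is proprietary and not reproduced here.

```python
# Hedged analogue of the FastChart idea: induce a decision tree from a
# historical log of machine readings and fault outcomes, then export its
# decision rules as text. Toy data; not VisiRule's actual pipeline.
from sklearn.tree import DecisionTreeClassifier, export_text

# Historical log: [temperature, vibration], label 1 = fault occurred.
readings = [[70, 0.2], [72, 0.3], [95, 0.9], [98, 1.1],
            [68, 0.1], [99, 1.0], [75, 0.4], [96, 0.8]]
faults = [0, 0, 1, 1, 0, 1, 0, 1]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(readings, faults)

# The induced rule set, the kind of structure a visual chart is built from.
print(export_text(tree, feature_names=["temperature", "vibration"]))

# The rules can now predict future occurrences from current readings.
print(tree.predict([[97, 0.95]]))  # prints [1]
```

In the product, this rule structure would become an interactive flowchart rather than plain text.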

VisiRule incorporates Artificial Intelligence in the form of expert system rule-based inferencing. Complex behaviour and computation can be represented as a set of interconnected decision rules which in turn can be represented graphically in VisiRule.

Clive Spenser, Marketing Director, LPA, says, "VisiRule FastChart is an exciting new addition to the VisiRule family. It allows companies to utilise historical data to build current models to help predict and prescribe remedies for future situations." Clive adds, "The highly visual philosophy of VisiRule makes building and testing such models more practical and opens up the world of AI to a much wider audience."

VisiRule FastChart is available immediately as part of the VisiRule product range at a price of 2,500 USD.

VisiRule is an easy-to-use Low-Code No-Code tool for subject matter experts, like lawyers, tax advisors, engineers, to rapidly define and deliver intelligent advice and troubleshooting guides using decision tree flowcharts.

VisiRule allows experts to capture, evaluate, refine and deploy specialist expertise as smart AI solutions. Use cases include problem triage with recommended prescriptive actions plus document generation. https://www.visirule.co.uk/

LPA is a small dedicated AI company in London, England which has been providing logic-based software solutions since it was formed in 1981. LPA products have been used in a wide-range of commercial and research applications including legal document assembly, environmental engineering, information modeling, disease diagnosis, fault diagnosis, products sales and recommendations. https://www.lpa.co.uk/


Microsoft Introduces Lobe: A Free Machine Learning Application That Allows You To Create AI Models Without Coding – MarkTechPost

Microsoft has released Lobe, a free desktop application that lets Windows and Mac users create customized AI models without writing any code. Several customers are already using the app for tracking tourist activity around coral reefs, the company said.

Lobe is available on Windows and Mac as a desktop app. At present it supports only image classification, assigning a single label to each image. Microsoft says future releases will support other types of neural networks.

To create an AI in Lobe, a user first needs to import a collection of images. These images are used as a dataset to train the application. Lobe analyzes the input images and sifts through a built-in library of neural network architectures to find the most suitable model for processing the dataset. Then it trains the model on the provided data, creating an AI model optimized to scan images for the user's specific object or action.
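Lobe's internals are not public, but the workflow just described, import labelled images, search a library of candidate models, keep the best performer, can be approximated in a few lines. Everything here (the stand-in dataset and the candidate list) is illustrative, not Lobe's code.

```python
# Illustrative analogue of Lobe's automated workflow: given labelled
# images, try several candidate models and keep the best validated one.
# Stand-in dataset and candidates; not Lobe's actual architecture search.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)  # stand-in for the imported image set

candidates = {
    "logistic": LogisticRegression(max_iter=5000),
    "knn": KNeighborsClassifier(),
    "svm": SVC(),
}

# Score each candidate on the data, then keep the best performer,
# mirroring the "sift through a built-in library" step.
scores = {name: cross_val_score(m, X, y, cv=3).mean()
          for name, m in candidates.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

A real image tool would search neural network architectures rather than classical classifiers, but the select-then-train loop is the same shape.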

AutoML is a technology that can automate much of the machine learning development workflow, reducing development costs. Microsoft has made AutoML features available to enterprises in its Azure public cloud. The existing AI tools in Azure target advanced projects; Lobe, being free, easy to access and convenient to use, can now support simpler use cases that those tools did not adequately address.

The Nature Conservancy, a nonprofit environmental organization, used Lobe to create an AI model that analyzes pictures taken by tourists in the Caribbean to identify where and when visitors interact with coral reefs. A Seattle auto marketing firm, Sincro LLC, has developed an AI model that scans vehicle images in online ads to filter out pictures that are less appealing to customers.

GitHub: https://github.com/lobe

Website: https://lobe.ai/


Machine learning approach could detect drivers of atrial fibrillation – Cardiac Rhythm News

Mapping of the explanted human heart

Researchers have designed a new machine learning-based approach for detecting atrial fibrillation (AF) drivers, small patches of the heart muscle that are hypothesised to cause this most common type of cardiac arrhythmia. This approach may lead to more efficient targeted medical interventions to treat the condition, according to the authors of the paper published in the journal Circulation: Arrhythmia and Electrophysiology.

The mechanism behind AF is as yet unclear, although research suggests it may be caused and maintained by re-entrant AF drivers: localised sources of repetitive rotational activity that lead to irregular heart rhythm. These drivers can be burnt away via a surgical procedure, which can mitigate the condition or even restore normal heart function.

To locate these re-entrant AF drivers for subsequent destruction, doctors use multi-electrode mapping, a technique that allows them to record multiple electrograms inside the heart using a catheter and build a map of electrical activity within the atria. However, clinical applications of this technique often produce many false negatives, where an existing AF driver is not found, and false positives, where a driver is detected where none exists.

Recently, researchers have tapped machine learning algorithms for the task of interpreting ECGs to look for AF; however, these algorithms require data labelled with the true location of the driver, and the accuracy of multi-electrode mapping is insufficient for that. The authors of the new study, co-led by Dmitry Dylov from the Skoltech Center of Computational and Data-Intensive Science and Engineering (CDISE, Moscow, Russia) and Vadim Fedorov from the Ohio State University (Columbus, USA), used high-resolution near-infrared optical mapping (NIOM) to locate AF drivers and adopted it as the reference for training.

NIOM is based on well-penetrating infrared optical signals and therefore can record the electrical activity from within the heart muscle, whereas conventional clinical electrodes can only measure the signals on the surface. "Add to this trait the excellent optical resolution, and optical mapping becomes a no-brainer modality if you want to visualize and understand the electrical signal propagation through the heart tissue," said Dylov.

The team tested their approach on 11 explanted human hearts, all donated posthumously for research purposes. The researchers performed simultaneous optical and multi-electrode mapping of AF episodes induced in the hearts. The results showed that a machine learning model can indeed efficiently interpret electrograms from multi-electrode mapping to locate AF drivers, with an accuracy of up to 81%. They believe that larger training datasets, validated by NIOM, can improve machine learning-based algorithms enough for them to become complementary tools in clinical practice.
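As a rough illustration of this kind of pipeline, the sketch below trains a classifier on simple spectral features of synthetic "electrogram" traces to flag driver versus non-driver recordings. The signals, features and model are invented for demonstration and are not those of the study.

```python
# Illustrative stand-in for a driver-detection pipeline: synthetic traces,
# simple spectral features, off-the-shelf classifier. Not the study's code.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def make_trace(driver: bool) -> np.ndarray:
    """Toy electrogram: 'driver' traces get a stronger periodic component,
    standing in for repetitive rotational activity."""
    t = np.linspace(0, 1, 200)
    amp = 1.5 if driver else 0.5
    return amp * np.sin(2 * np.pi * 8 * t) + rng.normal(0, 1, t.size)

labels = rng.integers(0, 2, 200)
traces = np.array([make_trace(bool(l)) for l in labels])

# Features per trace: peak spectral magnitude and overall variance.
spectra = np.abs(np.fft.rfft(traces, axis=1))
X = np.column_stack([spectra.max(axis=1), traces.var(axis=1)])

acc = cross_val_score(RandomForestClassifier(random_state=0), X, labels, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```

The study's hard part, absent here, is obtaining trustworthy labels, which is exactly what NIOM supplied.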

"The dataset of recordings from 11 human hearts is both priceless and too small. We realised that clinical translation would require a much larger sample size for representative sampling, yet we had to make sure we extracted every piece of available information from the still-beating explanted human hearts. The dedication and scrutiny of two of our PhD students must be acknowledged here: Sasha Zolotarev spent several months on an academic mobility trip to Fedorov's lab understanding the specifics of the imaging workflow and presenting the pilot study at the HRS conference, the biggest arrhythmology meeting in the world, and Katya Ivanova took part in the frequency and visualization analysis from within the walls of Skoltech. These two young researchers have squeezed out everything one possibly could to train the machine learning model using optical measurements," Dylov notes.


Using Machine Learning To Predict Disease In Cattle Might Help Solve A Billion-Dollar Problem – Forbes

One of the challenges in scaling up meat production is animal disease. Take bovine respiratory disease (BRD), for example. This contagious infection is responsible for nearly half of all feedlot deaths for cattle every year in North America. The industry's costs for managing the disease come close to $1 billion annually.

Preventative measures could significantly decrease these costs, and a small team comprising a data scientist, a college student and two entrepreneurs spent the past weekend at the Forbes Under 30 Agtech+ Hackathon figuring out a concept for better managing the disease.

Their solution? Tag-Ag, a conceptual set of predictive models that could take data already routinely gathered by cattle ranchers and tracked using ear tags, both to identify cows at risk for BRD, focusing prevention efforts, and to trace outbreaks of BRD, enabling more targeted treatment and management decisions.

"By providing these insights, we can instill confidence in both big consumers, such as McDonald's or Wal-Mart, and small consumers like you and me, that their meat is sourced from a healthy and sustainable operation," said team member Natalie McCaffrey, an 18-year-old undergraduate at Washington & Lee University, at the Hackathon's final presentations on Sunday evening.

McCaffrey was joined by Jacob Shields, 30, a senior research scientist at Elanco Animal Health; Marya Dzmiturk, 28, cofounder of TK startup Avanii and an alumna of the 2020 Forbes Under 30 list in Manufacturing & Industry; and Shaina Steward, 29, founder of The Model Knowledge Group & Ekal Living.

They joined a larger group of hackathoners who brainstormed a variety of concepts related to animal health on Friday night before settling on three different ideas, at which point the group split into the smaller teams. The initial pitch for the Tag-Ag team was the use of AI & Big Data to help producers keep animals healthy.

As the Tag-Ag team began its research and development process on Saturday, one clear challenge was the scope of potential animal health issues, as well as a potentially intense labor process in collecting useful information. They settled on cattle because, McCaffrey says, big ranchers are already electronically collecting data on cattle, and because BRD by itself makes a huge impact on the industry.

Another advantage of using data already being collected, adds Shields, is that tools exist to build a model for the concept's predictive analytics based on what's out there. "For supervised machine learning algorithms, the more inputs the better," he says. "I don't believe we'll need additional studies to support this case, unless we knew of a handful of data points that weren't being collected that really would help with the predictability."
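A hedged sketch of what such a model might look like: a supervised classifier trained on synthetic ear-tag readings, with feature importances hinting at which routinely collected inputs carry predictive signal. Tag-Ag is a concept, not shipped code, so every variable, unit and threshold below is hypothetical.

```python
# Hypothetical sketch only: synthetic ear-tag data standing in for the
# kind of inputs the Tag-Ag concept describes. Not a real herd model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 500
temp = rng.normal(38.5, 0.6, n)     # body temperature (deg C)
active = rng.normal(10, 3, n)       # daily activity (arbitrary units)
weight = rng.normal(0.9, 0.3, n)    # daily weight gain (kg)

# Toy label: fever and low activity raise BRD risk; weight gain is noise.
risk = 1 / (1 + np.exp(-(2.0 * (temp - 39.0) - 0.4 * (active - 10))))
brd = (rng.random(n) < risk).astype(int)

X = np.column_stack([temp, active, weight])
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, brd)

# Feature importances suggest which collected inputs matter most.
for name, imp in zip(["temperature", "activity", "weight_gain"],
                     model.feature_importances_):
    print(f"{name}: {imp:.2f}")
```

In Shields' terms, the importances are one way to spot the "handful of data points" that actually help with predictability.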

For a business model, the Tag-Ag team suggests a subscription-based model, with a one-time implementation fee for any hardware needs. They believe that there's definitely room to raise capital, pointing to the size of the market loss they're addressing plus the $500 million in venture capital invested in AgTech companies in 2019 alone.

"Investors and institutions are recognizing opportunities in the AgTech space," McCaffrey says, and beyond that, she adds, "our space of AI and data has space for additional players."

Team members: Natalie McCaffrey, undergraduate, Washington & Lee University; Jacob Shields, senior research scientist, Elanco Animal Health; Marya Dzmiturk, cofounder, Avanii; Shaina Steward, founder, The Model Knowledge Group and Ekal Living.


Experian partners with Standard Chartered to drive Financial Inclusion with Machine Learning, powering the next generation of Decisioning – Yahoo…

Leveraging innovation in technology to provide access to credit during uncertain times to populations underserved by formal financial services.

This social impact was made possible by the Bank's digital first strategy and Experian's best-in-class decisioning platform. Experian's software enables the Bank to analyse a high volume of alternative data and execute machine learning models for better decision-making and risk management.

Since the first pilot implementation in India in December 2019, the Bank has seen an improvement in approvals, increasing overall acceptance rates using big data and artificial intelligence. This enhanced the Bank's ability to test and learn in its risk management, helping to expand access to crucial credit and financial services.

The Bank and Experian are committed to financial inclusion, with plans for rollouts across six more markets in Asia, Africa and the Middle East.

SINGAPORE, Oct. 15, 2020 /PRNewswire/ -- Experian, a leading global information services company, has announced a partnership with leading international banking group Standard Chartered to drive financial access across key markets in Asia, Africa and the Middle East by leveraging the latest technology innovation in credit decisioning. Without enough credit bureau data for financial institutions to determine their creditworthiness, especially in this time of unprecedented volatility, many underbanked communities face difficulties securing access to loans.

The collaboration involves Experian's leading global decisioning solution, PowerCurve Strategy Manager, integrated with machine learning capabilities that will enable deployment of advanced analytics to help organisations make the most of their data. In support of Standard Chartered's digital-first transformation strategy, this state-of-the-art machine learning capability provides the Bank with the ability to ingest and analyse a high volume of non-Bank or, with client consent, alternative data, enabling faster, more effective and accurate credit decisioning, and resulting in better risk management for the Bank and better outcomes for clients.


Launched in India in December 2019, Standard Chartered registered positive business outcomes such as increased acceptance rates and reduced overall delinquencies. The success in India meant that Standard Chartered is now able to improve risk management for more clients who previously would have been underbanked, empowering them with access to crucial credit and financial services in their time of need.

Beyond benefits to consumers, access to credit is vital for overall economic growth, with consumer spending helping businesses continue to operate during these difficult times.

"Social and economic growth in developing markets, especially in the coming period, will be driven by progress in financial inclusion. Experian strongly believes that a technology, advanced analytics and data-driven approach can address this opportunity and we remain deeply committed to the vision of progressing financial inclusion for the world's underserved and underbanked population. Our long-standing collaboration with Standard Chartered across our PowerCurve decisioning suite of solutions, leveraging machine learning and big data to advance to the next generation of credit decisioning, is focused on empowering these underbanked communities to access credit," said Mohan Jayaraman, Managing Director, Southeast Asia & Regional Innovation, Experian Asia Pacific.

Reaffirming a commitment towards financial inclusion, Experian and Standard Chartered are working on plans to deploy the solution to its retail franchise across Asia, Africa and the Middle East, in addition to India.

"We're committed to supporting sustainable social and economic development through our business, operations and communities. This partnership helps the Bank manage risk more effectively with a more robust data-driven credit decisioning which in turn enables more clients to gain access to financial services at a time when they need it the most," said Vishu Ramachandran, Group Head, Retail Banking, Standard Chartered.

"Partnerships are central to our digital banking strategy and how we better serve our clients. Experian was a natural choice as a partner given their strong track record in innovation and in driving financial inclusion," said Aalishaan Zaidi, Global Head, Client Experience, Channels & Digital Banking, Standard Chartered.

For more information, please visit Experian's Decisioning & Credit Risk Management solution.


About Experian

Experian is the world's leading global information services company. During life's big moments, from buying a home or a car, to sending a child to college, to growing a business by connecting with new customers, we empower consumers and our clients to manage their data with confidence. We help individuals to take financial control and access financial services, businesses to make smarter decisions and thrive, lenders to lend more responsibly, and organisations to prevent identity fraud and crime.

We have 17,800 people operating across 45 countries and every day we're investing in new technologies, talented people and innovation to help all our clients maximise every opportunity. We are listed on the London Stock Exchange (EXPN) and are a constituent of the FTSE 100 Index.

Learn more at http://www.experian.com.sg or visit our global content hub at our global news blog for the latest news and insights from the Group.

About Standard Chartered

We are a leading international banking group, with a presence in 60 of the world's most dynamic markets, and serving clients in a further 85. Our purpose is to drive commerce and prosperity through our unique diversity, and our heritage and values are expressed in our brand promise, Here for good.

Standard Chartered PLC is listed on the London and Hong Kong Stock Exchanges.

For more stories and expert opinions please visit Insights at sc.com. Follow Standard Chartered on Twitter, LinkedIn and Facebook.


SOURCE Experian


Vanderbilt trans-institutional team shows how next-gen wearable sensor algorithms powered by machine learning could be key to preventing injuries that…

A trans-institutional team of Vanderbilt engineering, data science and clinical researchers has developed a novel approach for monitoring bone stress in recreational and professional athletes, with the goal of anticipating and preventing injury. Using machine learning and biomechanical modeling techniques, the researchers built multisensor algorithms that combine data from lightweight, low-profile wearable sensors in shoes to estimate forces on the tibia, or shin bone, a common site of runners' stress fractures.

The research builds on the researchers' 2019 study, which found that commercially available wearables do not accurately monitor stress fracture risks. Karl Zelik, assistant professor of mechanical engineering, biomedical engineering and physical medicine and rehabilitation, sought to develop a better technique to solve this problem. "Today's wearables measure ground reaction forces, how hard the foot impacts or pushes against the ground, to assess injury risks like stress fractures to the leg," Zelik said. "While it may seem intuitive to runners and clinicians that the force under your foot causes loading on your leg bones, most of your bone loading is actually from muscle contractions. It's this repetitive loading on the bone that causes wear and tear and increases injury risk to bones, including the tibia."

The article, "Combining wearable sensor signals, machine learning and biomechanics to estimate tibial bone force and damage during running," was published online in the journal Human Movement Science on Oct. 22.

The algorithms have produced bone force estimates up to four times more accurate than those of available wearables, and the study found that traditional wearable metrics based on how hard the foot hits the ground may be no more accurate for monitoring tibial bone load than counting steps with a pedometer.

Bones naturally heal themselves, but if the rate of microdamage from repeated bone loading outpaces the rate of tissue healing, there is an increased risk of a stress fracture that can put a runner out of commission for two to three months. "Small changes in bone load equate to exponential differences in bone microdamage," said Emily Matijevich, a graduate student and the director of the Center for Rehabilitation Engineering and Assistive Technology Motion Analysis Lab. "We have found that 10 percent errors in force estimates cause 100 percent errors in damage estimates. Largely over- or under-estimating the bone damage that results from running has severe consequences for athletes trying to understand their injury risk over time. This highlights why it is so important for us to develop more accurate techniques to monitor bone load and design next-generation wearables. The ultimate goal of this tech is to better understand overuse injury risk factors and then prompt runners to take rest days or modify training before an injury occurs."

The machine learning algorithm leverages Least Absolute Shrinkage and Selection Operator (LASSO) regression, using a small group of sensors to generate highly accurate bone load estimates, with average errors of less than three percent, while simultaneously identifying the most valuable sensor inputs, said Peter Volgyesi, a research scientist at the Vanderbilt Institute for Software Integrated Systems. "I enjoyed being part of the team. This is a highly practical application of machine learning, markedly demonstrating the power of interdisciplinary collaboration with real-life broader impact."
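The dual role LASSO plays here, fitting a model while zeroing out uninformative inputs, can be sketched in a few lines. The data below are synthetic; the actual sensor channels and bone-load targets from the study are not reproduced.

```python
# Minimal LASSO sketch: L1-penalized regression drives the coefficients of
# uninformative inputs to exactly zero, effectively selecting sensors.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_samples, n_sensors = 200, 10
X = rng.normal(size=(n_samples, n_sensors))
# Suppose only sensors 0 and 3 actually carry bone-load information.
y = 2.0 * X[:, 0] + 1.5 * X[:, 3] + 0.1 * rng.normal(size=n_samples)

model = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(np.abs(model.coef_) > 1e-3)
print("informative sensors:", selected)  # -> [0 3]
```

The L1 penalty is what distinguishes this from ordinary regression: it both regularizes the fit and reports which inputs earned nonzero weight.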

This research represents a major leap forward in health monitoring capabilities. This innovation is one of the first examples of a wearable technology that is both practical to wear in daily life and can accurately monitor forces on and microdamage to musculoskeletal tissues. The team has begun applying similar techniques to monitor low back loading and injury risks, designed for people in occupations that require repetitive lifting and bending. These wearables could track the efficacy of post-injury rehab or inform return-to-play or return-to-work decisions.

"We are excited about the potential for this kind of wearable technology to improve assessment, treatment and prevention of other injuries like Achilles tendonitis, heel stress fractures or low back strains," said Matijevich, the paper's corresponding author. The group has filed multiple patents on their invention and is in discussions with wearable tech companies to commercialize these innovations.

This research was funded by National Institutes of Health grant R01EB028105 and the Vanderbilt University Discovery Grant program.


A beginner's guide to the math that powers machine learning – The Next Web

How much math knowledge do you need for machine learning and deep learning? Some people say not much. Others say a lot. Both are correct, depending on what you want to achieve.

There are plenty of programming libraries, code snippets, and pretrained models that can help you integrate machine learning into your applications without a deep knowledge of the underlying math.

But there's no escaping the mathematical foundations of machine learning. At some point in your exploration and mastery of artificial intelligence, you'll need to come to terms with the lengthy and complicated equations that adorn AI whitepapers and machine learning textbooks.

In this post, I will introduce some of my favorite machine learning math resources. And while I don't expect you to have fun with machine learning math, I will also try my best to give you some guidelines on how to make the journey a bit more pleasant.

Khan Academy's online courses are an excellent resource to acquire math skills for machine learning

Many machine learning books tell you that a working knowledge of linear algebra is enough. I would argue that you need a lot more than that. Extensive experience with linear algebra is a must-have: machine learning algorithms squeeze every last bit out of vector spaces and matrix mathematics.

You also need to know a good bit of statistics and probability, as well as differential and integral calculus, especially if you want to become more involved in deep learning.

There are plenty of good textbooks, online courses, and blogs that explore these topics. But my personal favorite is Khan Academy's math courses. Sal Khan has done a great job of putting together a comprehensive collection of videos that explain different math topics. And it's free, which makes it even better.

Although each of the videos (which are also available on YouTube) explains a separate topic, going through the courses end-to-end provides a much richer experience.

I recommend the linear algebra course in particular. Here, you'll find everything you need about vector spaces, linear transformations, matrix transformations, and coordinate systems. The course has not been tailored for machine learning, and many of the examples are about 2D and 3D graphic systems, which are much easier to visualize than the multidimensional spaces of machine learning problems. But they discuss the same concepts you'll encounter in machine learning books and whitepapers. The course also contains hidden gems like least squares calculations and eigenvectors, which are important topics in machine learning.
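Those two "hidden gems" are also the easiest to try for yourself. The sketch below, with made-up numbers, shows a least-squares fit and an eigendecomposition in NumPy:

```python
# Least squares and eigenvectors in NumPy. Purely illustrative data.
import numpy as np

# Least squares: fit y = a*x + b to noisy points.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.1, 4.9, 7.2])
A = np.column_stack([x, np.ones_like(x)])       # design matrix [x, 1]
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print(f"slope={a:.2f}, intercept={b:.2f}")      # -> slope=2.04, intercept=0.99

# Eigenvectors: the directions a matrix merely stretches.
M = np.array([[2.0, 0.0],
              [0.0, 3.0]])
eigenvalues, eigenvectors = np.linalg.eig(M)
print(eigenvalues)  # a diagonal matrix's eigenvalues are its diagonal entries
```

Least squares is the backbone of regression, and eigendecomposition underlies techniques like principal component analysis, so both reappear constantly in machine learning material.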

The calculus courses are a bit more fragmented, but they might be a good fit for readers who already have a strong foundation and just want to brush up their skills. Khan includes precalculus, differential calculus, and integral calculus courses that cover the foundations. The multivariable calculus course discusses some of the topics that are central to deep learning, such as gradient descent and partial derivatives.
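Gradient descent itself takes only a few lines once you can write down the partial derivatives. A minimal sketch, minimizing the toy function f(x, y) = x² + y²:

```python
# Gradient descent on f(x, y) = x**2 + y**2, whose partial derivatives are
# df/dx = 2x and df/dy = 2y. Step against the gradient until convergence.
def grad(x, y):
    return 2 * x, 2 * y

x, y = 3.0, -4.0      # arbitrary starting point
learning_rate = 0.1
for _ in range(100):
    dx, dy = grad(x, y)
    x -= learning_rate * dx
    y -= learning_rate * dy

print(round(x, 6), round(y, 6))  # converges toward the minimum at (0, 0)
```

Deep learning frameworks do exactly this, except the function is a loss over millions of parameters and the partial derivatives are computed automatically by backpropagation.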

There are also several statistics courses on Khan Academy's platform, and there is some overlap between them. They all discuss some of the key concepts you need in data science and machine learning, such as random variables, distributions, confidence intervals, and the difference between continuous and categorical data. I recommend the college statistics course, which includes some extra material that is relevant to machine learning, such as Bayes' theorem.

To be clear, Khan Academy's courses are not a replacement for the math textbook and classroom. They are not very rich in exercises. But they are very rich in examples, and for someone who just needs to blow the dust off their algebra knowledge, they're great. Sal talks very slowly, probably to make the videos usable for a wider audience of non-native English speakers. I run the videos at 1.5x speed and have no problem understanding them, so don't let the video lengths daunt you.

Vanilla algebra and calculus are not enough to get comfortable with the mathematics of machine learning. Machine learning concepts such as loss functions, learning rate, activation functions, and dimensionality reduction are not covered in classic math books. There are more specialized resources for that.

My favorite is Mathematics for Machine Learning. Written by three AI researchers, the book provides you with a strong foundation to explore the workings of different components of machine learning algorithms.

The book is split into two parts. The first part is mathematical foundations, which is basically a revision of key linear algebra and calculus concepts. The authors cover a lot of material in little more than 200 pages, so most of it is skimmed over with one or two examples. If you have a strong foundation, this part will be a pleasant read. If you find it hard to grasp, you can combine the chapters with select videos from Khan's YouTube channel. It'll become much easier.

The second part of the book focuses on machine learning mathematics. You'll get into topics such as regression, dimensionality reduction, support vector machines, and more. There's no discussion of artificial neural networks and deep learning concepts, but its focus on the basics makes this book a very good introduction to the mathematics of machine learning.

As the authors write on their website: "The book is not intended to cover advanced machine learning techniques because there are already plenty of books doing this. Instead, we aim to provide the necessary mathematical skills to read those other books."

For a more advanced take on deep learning, I recommend Hands-on Mathematics for Deep Learning. This book also contains an intro on linear algebra, calculus, and probability and statistics. Again, this section is for people who just want to jog their memory. It's not a basic introductory book.

The real value of this book comes in the second section, where you go into the mathematics of multilayer perceptrons, convolutional neural networks (CNNs), and recurrent neural networks (RNNs). The book also goes into the logic of other crucial concepts such as regularization (the L1 and L2 norms), dropout layers, and more.
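The regularization idea mentioned above is simpler than it sounds: the L1 and L2 norms just add a penalty term to the training loss that punishes large weights. A framework-free sketch with hypothetical numbers:

```python
# L1 and L2 regularization as extra terms in the loss. All numbers are
# made up for illustration; 'data_loss' stands in for a model's base loss.
weights = [0.5, -1.2, 3.0]
data_loss = 0.8          # assumed loss from the model's predictions
lam = 0.01               # regularization strength (a hyperparameter)

l1_penalty = lam * sum(abs(w) for w in weights)
l2_penalty = lam * sum(w * w for w in weights)

total_l1 = data_loss + l1_penalty   # L1 encourages sparse weights
total_l2 = data_loss + l2_penalty   # L2 encourages uniformly small weights
print(round(total_l1, 4), round(total_l2, 4))  # -> 0.847 0.9069
```

Minimizing the combined loss trades prediction accuracy against weight size, which is precisely what keeps a network from overfitting.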

These are concepts that youll encounter in most books on machine learning and deep learning. But knowing the mathematical foundations will help you better understand the role hyperparameters play in improving the performance of your machine learning models.

A bonus section dives into advanced deep learning concepts, such as the attention mechanism that has made Transformers so efficient and popular, generative models such as autoencoders and generative adversarial networks, and the mathematics of transfer learning.

Admittedly, mathematics is not the most fun way to start a machine learning education, especially if you're self-learning. Fortunately, as I said at the beginning of this article, you don't need to begin your machine learning education by poring over double integrals, partial derivatives, and mathematical equations that span a page's width.

You can start with some of the more practical resources on data science and machine learning. A good introductory book is Principles of Data Science, which gives you a good overview of data science and machine learning fundamentals along with hands-on coding examples in Python and light mathematics. Hands-on Machine Learning and Python Machine Learning are two other books that are a little more advanced and also give deeper coverage of the mathematical concepts. Udemy's Machine Learning A-Z is an online course that combines coding with visualization in a very intuitive way.

I would recommend starting with one or two of the above-mentioned books and courses. They will give you a working knowledge of the basics of machine learning and deep learning and prepare your mind for the mathematical foundations. Once you have a solid grasp of different machine learning algorithms, learning the mathematical foundations becomes much more pleasant.

As you master the mathematics of machine learning, you will find it easier to discover new ways to optimize your models and tweak them for better performance. You'll also be able to read cutting-edge papers that explain the latest findings and techniques in deep learning, and you'll be able to integrate them into your applications. In my experience, the mathematics of machine learning is an ongoing educational experience. Always look for new ways to hone your skills.

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech and what we need to look out for. You can read the original article here.

Published October 2, 2020 10:00 UTC


Requirements for the Use of Machine Learning in Cardiology Research – The Cardiology Advisor

Suggestions were formulated to reduce bias and error related to the use of machine learning (ML) approaches in cardiology research, and published in the Journal of the American College of Cardiology: Cardiovascular Imaging.

The use of ML approaches in cardiovascular research has recently increased, as the technology offers ways to automatically discover relevant patterns in datasets. This review, authored by members of the American College of Cardiology Healthcare Innovation Council, points to the fact that many studies using ML approaches may have uncertain real-world data sources, inconsistent outcomes, possible measurement inaccuracies, or a lack of validation and reproducibility.

The authors provide a framework, in the form of a checklist, to guide cardiovascular research.

When considering employing an ML approach in their research, investigators should initially determine whether it would be applicable to the specific study aim. An important caveat of ML is that it requires large sample sizes. Therefore, if collecting and labeling at least hundreds of samples per class is not feasible, overfitting is likely to be a relevant concern. When sufficient samples are available, ML approaches are best suited for unstructured data, exploratory study objectives, or feature selection purposes.

Next, data should be standardized, if necessary. During this process, features are normalized, duplicates are removed, outliers are removed or corrected, and missing data are removed or imputed. As a general rule, the ratio of observations to measurements should be at least 5. In cases in which this ratio is too small, dimension reduction may be considered.
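The standardization steps listed above can be sketched with pandas. The column names and values below are hypothetical, not from any clinical dataset:

```python
# Data standardization sketch: deduplicate, impute, temper outliers, and
# z-score normalize. Hypothetical columns and values.
import pandas as pd

df = pd.DataFrame({
    "heart_rate": [72, 72, 68, None, 210, 75],
    "age":        [54, 54, 61, 47, 59, 63],
})

df = df.drop_duplicates()                # remove duplicate rows
df = df.fillna(df.mean())                # impute missing values with the mean
low, high = df["heart_rate"].quantile([0.05, 0.95])
df["heart_rate"] = df["heart_rate"].clip(low, high)   # temper extreme outliers
df = (df - df.mean()) / df.std()         # z-score normalization

print(len(df), round(df["age"].mean(), 6))  # 5 rows, mean 0 after normalizing
```

Each step here corresponds to one item on the checklist; in a real study the choice of imputation and outlier rules should be reported alongside the results.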

Many ML approaches are available to researchers, and the choice of which model to implement is critical. Some models are preferable for high-dimensional data (regression or instance-based learning) or imaging data (convolutional neural networks). The authors recommend selecting the simplest algorithm that is appropriate for one's dataset.

Several methods are available to assess and evaluate models. Model assessment should always be performed through random division of the data into training, testing, and validation sets. Cross-validation and bootstrapping methods are best suited for big data, and jack-knifing methods for smaller datasets. Model evaluation should include appropriate plots (e.g., Bland-Altman). In addition, inter-observer variability should be reported, and misclassification risk made clear.
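The assessment workflow described above, a random held-out split plus cross-validation, can be sketched with scikit-learn. The data here are synthetic; no clinical variables are reproduced:

```python
# Random train/test split plus 5-fold cross-validation on the training
# portion. Synthetic data standing in for clinical measurements.
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = LogisticRegression()
cv_scores = cross_val_score(model, X_train, y_train, cv=5)  # cross-validation
model.fit(X_train, y_train)
test_acc = model.score(X_test, y_test)                      # held-out check
print(f"cv accuracy: {cv_scores.mean():.2f}, test accuracy: {test_acc:.2f}")
```

If the cross-validation score and the held-out test score diverge sharply, that is itself a warning sign of overfitting or data leakage, which is exactly what the checklist aims to surface.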

To maintain a level of reproducibility across studies, the authors encourage researchers to release the code and data used, when possible. All chosen variables and parameters, as well as the specific versions of software and libraries, should be clearly indicated.

The authors acknowledge that these methods are complex, and while they have the potential to advance the field of cardiology, especially personalized medicine, many concerns remain when translating findings into clinical practice. This checklist should assist researchers in reducing bias and error when designing and carrying out future studies.

Reference

Sengupta PP, Shrestha S, Berthon B, et al. Proposed Requirements for Cardiovascular Imaging-Related Machine Learning Evaluation (PRIME): A Checklist. JACC Cardiovasc Imaging. 2020;13(9):2017-2035.


Microsoft releases the InnerEye Deep Learning Toolkit to improve patient care – Neowin

Microsoft's Project InnerEye has been building and deploying machine learning models for years now. The team has been working with doctors, clinicians, and oncologists, assisting them in tasks like radiotherapy, surgical planning, and quantitative radiology. This has reduced the burden on the people involved in the domain.

The firm says that the goal of Project InnerEye is to "democratize AI for medical image analysis" by allowing researchers and medical practitioners to build their own medical imaging models. With this in mind, the team released the InnerEye Deep Learning Toolkit as open-source software today. Built on top of PyTorch and integrated heavily with Microsoft Azure, the toolkit is meant to ease the process of training and deploying models.

Specifically, the InnerEye Deep Learning Toolkit will allow users to build their own image classification, segmentation, or sequential models. They will have the option to construct their own neural networks or import them from elsewhere. One of the motivations behind this project was to provide an abstraction layer for users so that they can deploy machine learning models without worrying too much about the details. As expected, the usual advantages of Azure Machine Learning Services will be bundled with the toolkit as well:

The Project InnerEye team at Microsoft Research hopes that this toolkit will integrate machine learning technologies into treatment pathways, leading to long-term practical solutions. If you are interested in checking out the toolkit or want to contribute to it, you may check out the repository on GitHub, where the full set of features offered under the toolkit is also documented.


YouTube Will Now Harness Machine Learning To Auto-Apply Video Age Restrictions – Tubefilter

Beginning today, YouTube will roll out three updates with respect to age-restricted content, part of an ongoing reliance on machine learning technology for content moderation that dates back to 2017, and a response to a new legal directive in the European Union (EU), the company said.

Age-restricted content is only available to logged-in YouTube users over 18, and includes videos that don't violate platform policies but are inappropriate for underage viewers. Videos can get age-restricted, for instance, when they include vulgar language, violence or disturbing imagery, nudity, or the portrayal of harmful or dangerous activities. (YouTube has just instituted minor changes as to where it draws these lines, the company said, which will be rolled out in the coming months.)

Previously, age restrictions could be applied by creators themselves or by manual reviewers on YouTube's Trust & Safety team as part of the broader video review process. While both of these avenues will still exist, YouTube will also begin using machine learning to auto-apply age restrictions, a change that is bound to result in far more restrictions across the board.

A YouTube spokesperson described the move as the latest development in a multi-year responsibility effort harnessing machine learning, and a testament to YouTube's ongoing commitment to child safety. In 2018, the platform began using machine learning to detect violent extremism and content that endangered child safety, and in 2019 it expanded the technology to detect hate speech and harassment.

Even with more videos being age-restricted, YouTube anticipates the impact on creator revenues will be minimal or nonexistent, given that videos that could fall into the age-restricted category tend to also violate YouTube's ad-friendly guidelines and thus typically carry no or limited ads. YouTube also notes that creators will still be able to appeal decisions if they feel their videos have been incorrectly restricted.

In addition to the integration of machine learning, YouTube is also putting a stop to a previous workaround for age-restricted videos, which could be viewed by anyone when embedded on third-party websites. Going forward, embedded age-restricted videos will redirect users to YouTube, where they must sign in to watch, the company said.

And finally, YouTube is instituting new age verification procedures in the EU as mandated by new regulation dubbed the Audiovisual Media Services Directive (AVMSD), which can require viewers to provide additional proof of age when attempting to watch mature content.

Now, if YouTube's systems cannot verify whether a viewer is actually above 18 in the EU, they can be asked to provide a valid ID or a credit card number (for which the minimum account-holding age is typically 18) as proof. A prompt for additional proof of age could be triggered by different signals: if, for instance, an account predominantly favors kid-friendly content and then attempts to watch a mature video.

Given the countless forms of identification that exist across the EU, YouTube says that it is still working on a full rundown of acceptable formats. A spokesperson said that all ID and credit card numbers would be deleted after a user's age is confirmed.


Finance Sector Benefits from Machine Learning Development and AI – Legal Reader

Banking and finance rely on experts but the new expert on the scene is your AI/ML combo, able to do far more, do it fast and do it accurately.

Making the right decisions and grabbing opportunities in the fast-moving world of finance can make a difference to your bottom line. This is where artificial intelligence and machine learning make a tangible difference. Engage machine learning development services in your finance segment and life will not be the same. A Markets and Markets study shows that artificial intelligence in the financial segment will grow to over $7,300 million by 2022.

Data

The simple reason you need a machine learning development company to help you make better decisions with AI/ML is data. Data flows in torrents from diverse sources and contains precious nuggets of information. It can be the basis for understanding customer behaviors, and it can help you gain predictive capabilities. Data analysis with ML can also help identify patterns that could be indicative of attempted fraud, so you save your reputation and money by tackling it in time.

The key

Normalize huge sets of data and derive information in real time according to specifiable parameters. Machine learning algorithms can help you train the system to carry out fast analysis and deliver results based on algorithm models created for the purpose by a machine learning development company. As it ages, the system actually becomes smarter, because it learns as it goes along.

To achieve the same result manually using standard IT solutions, you would have to employ a team of IT specialists, but even then it is doubtful whether you could get outputs in time to take decisive action.

Fraud prevention

This is one case where prevention is better than cure. A typical bank may have hundreds of thousands of customers carrying out any number of different transactions. All such data is under the watchful eye of the ML-imbued system, and it is quick to detect anomalies. In fact, ML has been known to cause misunderstandings: in one case, a customer unfamiliar with credit card operations repeatedly fumbled and raised a false alarm. Still, it is better to be safe than sorry than to fight fires after the event.

Stock trading

Day trading went algorithmic quite a few years back and helped brokers profit by getting the system to make automatic profitable trades. Apart from day trading, there are derivatives, forex, commodities and binary options, where specific ML models can help you, as a trader or a broker, anticipate price movements. This is one area where price is influenced not just by demand and supply but also by political factors, climate, company results and unforeseen calamities. ML keeps track of all of these and integrates them into a predictive capability to keep you ahead of the game.

Investment decisions

Likewise, investments in other areas like bonds, mutual funds and real estate need to be based on smart analysis of the present and future while factoring in external influences. No one, for example, foresaw the COVID-19 devastation that froze economies and dried up sources of funds, with an impact on investments, especially in real estate. However, if you have a machine learning based system, it keeps track of developments and alerts you in advance so that you can be prepared.

Then there are more mundane tasks in the finance sector where ML helps. Portfolio managers always walk a tightrope and rely on experts who can make poor decisions and affect clients' capital. Tap into the power of ML to stay on top and grow the wealth of wealthy clients. Their recommendations will get you more clients, making the investment in ML solutions more than worthwhile. It could be the best investment you make.

Automation

Banks, private lenders, institutions and insurance companies routinely carry out repetitive and mundane tasks like attending to inquiries, processing forms and handling transactions. This involves extreme manpower usage, leading to high costs. Your employees work under a deluge of such tasks and cannot do anything productive. Switch to ML technologies to automate such repetitive tasks. You will have two benefits: lower costs with employees freed for productive work, and visibility into the patterns developing in your transaction data.

The second one alone is worth the investment. In the normal course of things you would have to devote considerable energy to identifying developing patterns, whereas the ML solution presents trends based on which you can modify services, design offers or address customer pain points and ensure loyalty.

Risk mitigation

Smart operators are always gaming the system, for instance by finding ways to improve their credit score and obtain credit despite being ineligible. Such operators would pass the normal scanning techniques of banks. However, if you have ML for assessing loan applications, the system delves deeper to find all relevant information, collate it and analyze it to give you a true picture. Non-performing assets cause immense losses to banks, and this is one area where machine learning solutions put in place by expert machine learning development services prove immensely valuable.



The tensions between explainable AI and good public policy – Brookings Institution

Democratic governments and agencies around the world are increasingly relying on artificial intelligence. Police departments in the United States, United Kingdom, and elsewhere have begun to use facial recognition technology to identify potential suspects. Judges and courts have started to rely on machine learning to guide sentencing decisions. In the U.K., one in three British local authorities is said to be using algorithms or machine learning (ML) tools to make decisions about issues such as welfare benefit claims. These government uses of AI are widespread enough to make one wonder: Is this the age of government by algorithm?

Many critics have expressed concerns about the rapidly expanding use of automated decision-making in sensitive areas of policy such as criminal justice and welfare. The most often voiced concern is the issue of bias: When machine learning systems are trained on biased data sets, they will inevitably embed in their models the data's underlying social inequalities. The data science and AI communities are now highly sensitive to data bias issues, and as a result have started to focus far more intensely on the ethics of AI. Similarly, individual governments and international organizations have published statements of principle intended to govern AI use.

A common principle of AI ethics is explainability. The risk of producing AI that reinforces societal biases has prompted calls for greater transparency about algorithmic or machine learning decision processes, and for ways to understand and audit how an AI agent arrives at its decisions or classifications. As the use of AI systems proliferates, being able to explain how a given model or system works will be vital, especially for those used by governments or public sector agencies.

Yet explainability alone will not be a panacea. Although transparency about decision-making processes is essential to democracy, it is a mistake to think this represents an easy solution to the dilemmas algorithmic decision-making will present to our societies.

There are two reasons why. First, with machine learning in general and neural networks or deep learning in particular, there is often a trade-off between performance and explainability. The larger and more complex a model, the harder it will be to understand, even though its performance is generally better. Unfortunately, for complex situations with many interacting influences, which is true of many key areas of policy, machine learning will often be more useful the more of a black box it is. As a result, holding such systems accountable will almost always be a matter of post hoc monitoring and evaluation. If it turns out that a given machine learning algorithm's decisions are significantly biased, for example, then something about the system or (more likely) the data it is trained on needs to change. Yet even post hoc auditing is easier said than done. In practice, there is surprisingly little systematic monitoring of policy outcomes at all, even though there is no shortage of guidance about how to do it.

The second reason is due to an even more significant challenge. The aim of many policies is often not made explicit, typically because the policy emerged as a compromise between people pursuing different goals. These necessary compromises in public policy present a challenge when algorithms are tasked with implementing policy decisions. A compromise in public policy is not always a bad thing; it allows decision makers to resolve conflicts as well as avoid hard questions about the exact outcomes desired. Yet this is a major problem for algorithms, as they need clear goals to function. An emphasis on greater model explainability will never be able to resolve this challenge.

Consider the recent use of an algorithm to produce U.K. high school grades in the absence of examinations during the pandemic, which provides a remarkable example of just how badly algorithms can function in the absence of well-defined goals. British teachers had submitted their assessments of individual pupils' likely grades and ranked their pupils within each subject and class. The algorithm significantly downgraded many thousands of these assessed results, particularly in state schools in low-income areas. Star pupils with conditional university places consequently failed to attain the level they needed, causing much heartbreak, not to mention pandemonium in the centralized system for allocating students to universities.

After a few days of uproar, the U.K. government abandoned the results, instead awarding everyone the grades their teachers had predicted. When the algorithm was finally published, it turned out to have placed most weight on matching the distribution of grades the same school had received in previous years, penalizing the best pupils at typically poorly performing schools. However, small classes were omitted as having too few observations, which meant affluent private schools with small class sizes escaped the downgrading.

Of course, the policy intention was never to increase educational inequality, but to prevent grade inflation. This aim had not been stated publicly beforehand, or statisticians might have warned of the unintended consequences. The objectives of no grade inflation, school by school, and of individual fairness were fundamentally in conflict. Injustice to some pupils, those who had worked hardest to overcome unfavorable circumstances, was inevitable.

For government agencies and offices that increasingly rely on AI, the core problem is that machine learning algorithms need to be given a precisely specified objective. Yet in the messy world of human decision-making and politics, it is often possible and even desirable to avoid spelling out conflicting aims. By balancing competing interests, compromise is essential to the healthy functioning of democracies.

This is true even in the case of what might at first glance seem a more straightforward example, such as keeping criminals who are likely to reoffend behind bars rather than granting them bail or parole. An algorithm using past data to find patterns will, given the historically higher likelihood that people from low-income or minority communities will have been arrested or imprisoned, predict that similar people are more likely to offend in the future. Perhaps judges can stay alert for this data bias and override the algorithm when sentencing particular individuals.

But there is still an ambiguity about what would count as a good outcome. Take bail decisions. About a third of the U.S. prison population is awaiting trial. Judges make decisions every day about who will await trial in jail and who will be bailed, but an algorithm can make a far more accurate prediction than a human about who will commit an offense if they are bailed. According to one model, if bail decisions were made by algorithm, the prison population in the United States would be 40% smaller, with the same recidivism rate as when the decisions are made by humans. Such a system would reduce prison populations, an apparent improvement on current levels of mass incarceration. But given that people of color make up the great majority of the U.S. prison population, the algorithm may also recommend that a higher proportion of people from minority groups be denied bail, which seems to perpetuate unfairness.

Some scholars have argued that exposing such trade-offs is a good thing. Algorithms or ML systems can then be set more specific aims: for instance, to predict recidivism subject to a rule requiring that equal proportions of different groups get bail, and still do better than humans. What's more, this would enforce transparency about the ultimate objectives.
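One way to make an "equal proportions" rule concrete is a per-group decision threshold. The sketch below is illustrative only, with made-up risk scores and group names; it shows how a demographic-parity constraint can be layered on top of any risk predictor.

```python
import numpy as np

def parity_thresholds(scores_by_group, bail_rate):
    """For each group, pick the risk-score cutoff that bails the same
    fraction of that group (a demographic-parity constraint).
    Lower predicted risk = more likely to be bailed."""
    return {group: np.quantile(scores, bail_rate)
            for group, scores in scores_by_group.items()}

# Hypothetical risk scores for two groups with different distributions.
rng = np.random.default_rng(0)
scores = {"A": rng.uniform(0.0, 1.0, 1000),
          "B": rng.uniform(0.1, 1.0, 1000)}

t = parity_thresholds(scores, bail_rate=0.5)
# A defendant in group g is bailed if their score falls below t[g];
# by construction, 50% of each group is bailed.
```

The trade-off the text describes is visible here: equalizing bail rates means the two groups face different score cutoffs, which is exactly the kind of objective choice that must be made explicit before automation.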

But this is not a technical problem about how to write computer code. Perhaps greater transparency about objectives could eventually be healthy for our democracies, but it would certainly be uncomfortable. Compromises work by politely ignoring inconvenient contradictions. Should government assistance for businesses hit by the pandemic go to those with most employees or to those most likely to repay? There is no need to answer this question about ultimate aims in order to set specific criteria for an emergency loan scheme. But to automate the decision requires specifying an objectivesave jobs, maximize repayments, or perhaps weight each equally. Similarly, people might disagree about whether the aim of the justice system is retribution or rehabilitation and yet agree on sentencing guidelines.

Dilemmas about objectives do not crop up in many areas of automated decisions or predictions, where the interests of those affected and those running the algorithm are aligned. Both the bank and its customers want to prevent fraud; both the doctor and her patient want an accurate diagnosis or radiology result. However, in most areas of public policy there are multiple overlapping and sometimes competing interests.

There is often a trust deficit too, particularly in criminal justice and policing, or in welfare policies which bring the power of the state into people's family lives. Even many law-abiding citizens in some communities do not trust the police and judiciary to have their best interests at heart. It is naïve to believe that algorithmically enforced transparency about objectives will resolve political conflicts in situations like these. The first step, before deploying machines to make decisions, is not to insist on algorithmic explainability and transparency, but to restore the trustworthiness of institutions themselves. Algorithmic decision-making can sometimes assist good government but can never make up for its absence.

Diane Coyle is professor of public policy and co-director of the Bennett Institute at the University of Cambridge.

The tensions between explainable AI and good public policy - Brookings Institution

What is Imblearn Technique – Everything To Know For Class Imbalance Issues In Machine Learning – Analytics India Magazine

In machine learning, while building a classification model we sometimes face situations where the classes are not equally represented: for example, 500 records of class 0 but only 200 records of class 1. This is called a class imbalance. Machine learning models are designed to attain maximum accuracy, but in these situations the model becomes biased towards the majority class, which ultimately shows up in precision and recall. So how do we build a model on this kind of data set so that it correctly classifies each class and does not become biased?

To get rid of these class imbalance issues, a few techniques, available through the imblearn package, are mainly used in these situations. Imblearn techniques help to either upsample the minority class or downsample the majority class to reach an equal proportion. Through this article, we will discuss imblearn techniques and how we can use them for upsampling and downsampling. For this experiment, we are using the Pima Indian Diabetes data since it is a class-imbalanced data set. The data is available on Kaggle for download.

What will we learn from this article?

Class imbalance issues arise when we do not have equal ratios of different classes. Consider an example: we have to build a machine learning model that will predict whether a loan applicant will default or not. The data set has 500 rows of data points for the default class, but for non-default we are given only 200 rows of data points. When we build the model, it is obvious that it will be biased towards the default class, because that is the majority class. The model will learn how to classify the default class much better than the non-default class. This would not be a good predictive model. So, to resolve this problem we make use of some techniques called imblearn techniques. They help us either reduce the majority class (default) to the same ratio as the minority class (non-default), or vice versa.

Imblearn techniques are the methods by which we can generate a data set that has an equal ratio of classes. A predictive model built on this type of data set is better able to generalize. We mainly have two options to treat an imbalanced data set: upsampling and downsampling. Upsampling is where we generate synthetic data for the minority class to match the ratio of the majority class, whereas in downsampling we reduce the majority-class data points to match the minority class.

Now let us practically understand how upsampling and downsampling are done. We will first install the imblearn package, then import all the required libraries and the Pima data set. Use the below code for the same.

As we checked, there are a total of 500 rows that fall under the 0 class and 268 rows in the 1 class. This results in an imbalanced data set where the majority of the data points lie in the 0 class. Now we have two options: upsampling or downsampling. We will do both and check the results. We will first divide the data into features and target, X and y respectively. Then we will divide the data set into training and testing sets. Use the below code for the same.

X = df.values[:, 0:8]  # all 8 feature columns; slicing 0:7 would drop the last feature

y = df.values[:, 8]  # class label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=7)

Now we will check the count of both classes in the training data and use upsampling to generate new data points for the minority class. Use the below code to do the same.

print("Count of 1 class in training set before upsampling:", sum(y_train == 1))

print("Count of 0 class in training set before upsampling:", sum(y_train == 0))

We are using the SMOTE technique from imblearn to do the upsampling. It generates data points based on the K-nearest-neighbors algorithm. We have defined k_neighbors = 3, which can be tweaked since it is a hyperparameter. We will first generate the new data points and then compare the class counts after upsampling. Refer to the below code for the same.

smote = SMOTE(sampling_strategy=1, k_neighbors=3, random_state=1)

# fit_sample was renamed to fit_resample in recent imbalanced-learn releases
X_train_new, y_train_new = smote.fit_resample(X_train, y_train.ravel())

print("Count of 1 class in training set after upsampling:", sum(y_train_new == 1))

print("Count of 0 class in training set after upsampling:", sum(y_train_new == 0))

Now the classes are balanced. Next we will build a model using a random forest, first on the original data and then on the new data. Use the below code for the same.

Now we will downsample the majority class: we will randomly delete records of the majority class from the original data to match the minority class. Use the below code for the same.

# The 'Outcome' column name assumes the standard Kaggle version of the data set.
Diabetic_indices = df[df['Outcome'] == 1].index
Non_diabetic_indices = df[df['Outcome'] == 0].index

# Keep only as many randomly chosen majority-class rows as there are minority rows.
random_indices = np.random.choice(Non_diabetic_indices, len(Diabetic_indices), replace=False)

down_sample_indices = np.concatenate([Diabetic_indices, random_indices])

Now we will again divide the data set and build the model. Use the below code for the same.

Conclusion

In this article, we discussed how we can pre-process an imbalanced-class data set before building predictive models. We explored imblearn techniques and used the SMOTE method to generate synthetic data. We first did upsampling and then performed downsampling. There are more methods in imblearn, such as Tomek links and cluster centroids, that can also be used for the same problem. You can check the official documentation here.

Also check the article Complete Tutorial on Tkinter To Deploy Machine Learning Model, which will help you deploy machine learning models.



Learn in-demand technical skills in Python, machine learning, and more with this academy – The Next Web

Credit: Clément Hélardot/Unsplash

TLDR: With access to the Zenva Academy, users can take over 250 tech courses packed with real-world programming training to become knowledgeable and hirable professional coders.

The tech industry is expected to grow by as many as 13 million new jobs in the U.S. alone over the next five years, with another 20 million likely to spring up in the EU.

And you can rest assured that coding will be at the heart of almost every single one of those new positions.

It's no surprise that programming courses are being taught to our youngest students these days. From web development to gaming to data science, all the tech innovations we'll see over the next five years and beyond will come from innovators who understand how to make those static lines of code get together and dance.

If you feel behind the programming curve, or just want a stockpile of tech training to have you ready for anything, the Zenva Academy ($139.99 for a one-year subscription) may be just the bootcamp you need to grab one of those new jobs.

This access unlocks everything in the Zenva Academy's vast archives, a collection of more than 250 courses that dive into every aspect of learning to build games, websites, apps and more.

With courses taught by knowledgeable industry professionals, even newbies coming in with zero experience receive world-class training on in-demand programming skills on their way to becoming professionals themselves. Classes are based entirely around your own schedule with no deadlines or due dates so you can work at your own pace on bolstering your abilities.

Whether a student is interested in crafting mobile apps, mastering data science, or exploring machine learning and AI, these courses don't just tell you how to interact with these disciplines, they actually show you. Zenva coursework is based around creating real projects in tandem with the learning.

As you build a VR or AR app, or craft your first artificial neural networks using Python and TensorFlow, or create an awesome game, you'll be building work for a professional portfolio that can help you land one of these prime coding positions. And with their ties to elite developer programs for outlets like Intel, Microsoft, and CompTIA, students can get on the fast track toward getting hired.

Regularly $169 for a year of Zenva Academy access, you can get it for only $139.99 for a limited time.

Prices are subject to change.



Google’s Vision for the Future of Bank Marketing, AI, Data and Brand – The Financial Brand


No financial marketer questions the tectonic shift digital media has wrought on marketing and advertising. Yet even the most ardent digital marketing proponent might be startled by the prediction that 100% of advertising will be online and automated by 2025. Startled, and perhaps more than a bit skeptical. Although the pandemic has changed the situation to varying degrees, many financial marketers continue to find value in TV, radio, print and outdoor, and in the human input into what appears there.

The 100% figure is a little less startling, however, when you consider that about 55% of U.S. advertising was already online as of 2019, according to Nicolas Darveau-Garneau, Chief Evangelist at Google. The marketing executive, who is regularly in touch with the search giant's biggest advertisers, also notes that the 100% consists of two components: first, about 65% of the ads in 2025 will be online ads; second, the other 35% will also be digital, but not online.

Whether you're buying a billboard or you're buying television, it will be a lot more like buying YouTube, he says. Machine learning algorithms are going to automate most advertising in the next five years. The time that bank and credit union marketers spend today optimizing media, selecting keywords and placing the right targeting on banner ads will increasingly be done by machines, says Darveau-Garneau.

Machine learning is doubling in power every four to six months, he points out. Even as that rate begins to slow, there will still be a multiple-thousand-fold improvement in machine learning power within the next ten years.

That kind of dramatic change prompts two big questions from CMOs the Google exec speaks with:

In answering the first question, Darveau-Garneau, who spoke during a WPromote virtual presentation, explores three key points:

As he says, all three need to happen for institutions to be able to compete. The effort to accomplish that, which is difficult, also takes care of the question about job security: there will be plenty of marketing jobs, just different ones, as Darveau-Garneau expands upon below.


Before he joined Google about nine years ago, Darveau-Garneau was steeped in performance marketing, essentially the modern digital marketing milieu of data and metrics with everything measurable. Yet he believes that to compete more effectively in the rapidly automating marketing world, many CMOs need to shift their thinking about performance marketing. They should create a marketing strategy that, simply put, makes you as much money as possible, trying to squeeze every ounce of profit you can, Darveau-Garneau states. That's the most important KPI, he emphasizes.

While that advice may seem self-evident, the Google marketer says few advertisers he works with are trying to make as much money as possible. Instead, many are trying to achieve the highest ROI.

And that is not the same thing.

Maximizing cash flow is very different from maximizing ROI, Darveau-Garneau states. The best advice I can give you in your performance marketing strategy is to build a dashboard that motivates your marketing teams to maximize profitability, as opposed to efficiency.

Don't fall in love with your ROAS
Google tools today cannot automatically maximize a financial institution's profitability, but Darveau-Garneau says they can produce maximum revenue at a given return on ad spend (ROAS). So a bank CMO can incorporate maximum profitability as a criterion for finding the right ROAS. But the Google exec advises being careful in selecting the right ROAS, whether that is five-to-one, seven-to-one or another number.

Don't fall in love with your ROAS, Darveau-Garneau states. Test various numbers up and down to see which one makes you the most money. Once you know what the right ROAS is for your business, he adds, make sure you get enough budget to cover full demand.
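The distinction between maximizing ROAS and maximizing profit can be made concrete with a small numeric sketch. The spend and revenue figures below are entirely hypothetical, chosen only to show diminishing marginal returns:

```python
# Hypothetical spend/revenue pairs for one campaign: as spend grows,
# marginal returns shrink, so ROAS falls even while total profit rises.
scenarios = [
    {"spend": 10_000, "revenue": 70_000},   # ROAS 7.0:1, profit $60k
    {"spend": 25_000, "revenue": 125_000},  # ROAS 5.0:1, profit $100k
    {"spend": 50_000, "revenue": 175_000},  # ROAS 3.5:1, profit $125k
]

for s in scenarios:
    roas = s["revenue"] / s["spend"]
    profit = s["revenue"] - s["spend"]
    print(f"spend ${s['spend']:,}: ROAS {roas:.1f}:1, profit ${profit:,}")

# The highest-ROAS plan (7:1) is the least profitable one here, which is
# exactly why a dashboard optimized for efficiency can leave money behind.
```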


Why customer lifetime value makes so much sense
Managing a company based on customer lifetime value is the future of business, Darveau-Garneau firmly believes. He ran four marketing startups before joining Google. In hindsight, he says, he should have narrowed his customer database at these firms by building marketing based on customer lifetime value (CLV). This requires determining who a company's top customers are and then acquiring more people like them.

While financial inclusion is a major theme in banking today, bank and credit union marketers can benefit from a CLV focus in terms of outreach and messaging for loans, savings, investments and many other products.

The best advice I can give you, says Darveau-Garneau, is don't try to forecast customer lifetime value perfectly. Just do it approximately, in quintiles or deciles. One example of how to use CLV as part of efforts to personalize marketing is to do A/B testing of landing-page conversions to see which one converts better for your high-CLV customers compared with average customers.
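The rough-quintile approach can be sketched in a few lines of pandas. The customer data and column names below are hypothetical, standing in for whatever approximate CLV estimate a marketing team already has:

```python
import numpy as np
import pandas as pd

# Hypothetical customer data: an approximate CLV estimate per customer.
rng = np.random.default_rng(42)
customers = pd.DataFrame({
    "customer_id": range(1000),
    "clv": rng.gamma(shape=2.0, scale=500.0, size=1000),
})

# "Don't forecast CLV perfectly, just do it approximately": bucket
# customers into quintiles (1 = lowest value, 5 = highest).
customers["clv_quintile"] = pd.qcut(customers["clv"], q=5, labels=[1, 2, 3, 4, 5])

# The high-CLV segment to target in the A/B test described above.
high_clv = customers[customers["clv_quintile"] == 5]
```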

Don't worry, marketing jobs aren't going away
While automation will increasingly handle things like selecting brand placements, Darveau-Garneau maintains that marketing work will shift to things such as building CLV models, segmenting customers in clever ways, optimizing creative, and having the right data structure and data sets to feed into the machine learning algorithms.

I actually think there will be more people doing marketing five years from now than there are now, he states, because it's going to be easier in some ways, but much more complex in other ways.


You could describe Nicolas Darveau-Garneau as a reformed performance marketer. For much of his career, he never did any brand marketing. As he describes it, performance marketers always have this tension about brand marketing because they like to measure things accurately to be sure they're not wasting money. He has changed his tune, to the point where build a strong brand is now the second of his three key strategies for marketers to be ready for the future.

Darveau-Garneau points to the fintech Credit Karma as a great example of a company combining performance marketing with great brand marketing. There is an extraordinary amount of value created by building a strong brand, he insists. This includes having consumers go directly to your site, or searching specifically for your brand on Google, or generating higher conversions.

These advantages are harder to measure than the clicks, leads or sales that result from pay-for-performance advertising, but they can be measured over time. Darveau-Garneau counsels patience to those skeptical of brand marketings benefits. It takes three months to a year, he says, to see the impact of a consideration or awareness campaign.

To financial marketers who still need convincing, he recommends starting small and trying out a branding campaign in one state (or possibly one part of a state) and tracking how business does there over six months. This doesn't require a big investment in a hardcore attribution model. If successful, it can then be expanded.

Brand marketing is becoming a lot more like performance marketing, the Google exec states. Brand marketing should be optimized in real time and held accountable, he says, but give it some time to work.

Also, to the point raised earlier: as machine learning makes performance marketing easier, it diminishes the competitive advantage. That makes building a strong brand that much more important, Darveau-Garneau emphasizes. Ideally, financial institution marketers who can combine the skill sets of both disciplines will be in a good position, he believes.

Bank and credit union marketers can be doing great performance marketing and great brand marketing, but if you're sending these clicks to a site that doesn't perform very well, it's going to be hard to compete, stresses Darveau-Garneau. A simple example is having a fast mobile site. He cites data from Chinese ecommerce giant Alibaba, whose already good conversion rate jumped 76% when it built a much faster mobile site.

Friction is the enemy of great digital experience, which in turn robs marketing of much of its power. The Google executive counsels CMOs to remove anything that creates significant friction: remove one field from a form, for example, or add Google Pay or Apple Pay to your app. Get the ball rolling so your marketing and digital banking teams start looking for things to remove to streamline the customer experience.

Don't get hung up on mega-projects that are huge investments and take forever, like breaking down data silos and merging them all into one vast data lake, Darveau-Garneau advises. Such projects should be undertaken over the long term, but think about small projects in the short term.

I've seen a lot of marketers trying to get things perfect from the beginning, as opposed to peeling the onion and just getting better every day, Darveau-Garneau observes.


With the surge in ecommerce unleashed by the pandemic's arrival, financial marketers may be wondering whether omnichannel marketing even makes sense anymore, versus concentrating solely on online digital channels.

While acknowledging the difficulty of forecasting what will happen to in-person commerce (and in-person banking), Darveau-Garneau firmly believes that whatever new normal arises, people will once again venture into retail facilities, so having an omni-channel strategy makes a lot of sense.

Financial marketers should be sure to include in-branch and other channel data beyond website and mobile data in what they share with the machine learning application they use. In the case of Google, Darveau-Garneau advises not to think of the company as driving just your online business. We can help you drive your store business as well. The company now has tools to integrate data, revenue and margin, for example, from physical locations into its smart bidding algorithms.

Importantly, Darveau-Garneau says Google has found that for many clients, including those in banking, consumers who buy both online and in-store are often much better customers than those who don't.


Machine Learning as a Service Market Qualitative Insights the COVID-19 by 2023 – Aerospace Journal

Market Overview

Machine learning has become a disruptive trend in the technology industry, with computers learning to accomplish tasks without being explicitly programmed. The manufacturing industry is relatively new to the concept of machine learning, yet machine learning is well aligned to deal with the complexities of that industry. Manufacturers can improve their product quality, ensure supply chain efficiency, reduce time to market, fulfil reliability standards, and thus enhance their customer base through the application of machine learning. Machine learning algorithms offer predictive insights at every stage of production, which can ensure efficiency and accuracy. Problems that earlier took months to address are now being resolved quickly. Predictive failure of equipment is the biggest use case of machine learning in manufacturing. The predictions can be used to create predictive maintenance schedules for service technicians. Certain algorithms can even predict the type of failure that may occur, so that the technician can bring the correct replacement parts and tools for the job.

Market Analysis

According to Infoholic Research, the Machine Learning as a Service (MLaaS) Market will witness a CAGR of 49% during the forecast period 2017-2023. The market is propelled by certain growth drivers such as the increased application of advanced analytics in manufacturing, high volume of structured and unstructured data, the integration of machine learning with big data and other technologies, the rising importance of predictive and preventive maintenance, and so on. The market growth is curbed to a certain extent by restraining factors such as implementation challenges, the dearth of skilled data scientists, and data inaccessibility and security concerns, to name a few.


Segmentation by Components

The market has been analyzed and segmented by the following components: Software Tools, Cloud and Web-based Application Programming Interfaces (APIs), and Others.

Segmentation by End-users

The market has been analyzed and segmented by the following end-users: process industries and discrete industries. The application of machine learning is much higher in discrete industries than in process industries.

Segmentation by Deployment Mode

The market has been analyzed and segmented by the following deployment modes: public and private.

Regional Analysis

The market has been analyzed across the following regions: the Americas, Europe, APAC, and MEA. The Americas holds the largest market share, followed by Europe and APAC. The Americas is experiencing a high adoption rate of machine learning in manufacturing processes, and demand for enterprise mobility and cloud-based solutions is high there. The manufacturing sector is a major contributor to the GDP of European countries and is witnessing AI-driven transformation. China's dominant manufacturing industry is extensively applying machine learning techniques, and China, India, Japan, and South Korea are investing significantly in AI and machine learning. MEA is also following a high growth trajectory.

Vendor Analysis

Some of the key players in the market are Microsoft, Amazon Web Services, Google, Inc., and IBM Corporation. The report also includes watchlist companies such as BigML Inc., Sight Machine, Eigen Innovations Inc., Seldon Technologies Ltd., and Citrine Informatics Inc.

Benefits

The study covers and analyzes the Global MLaaS Market in the manufacturing context. Bringing out the complete key insights of the industry, the report aims to give players an opportunity to understand the latest trends, current market scenario, government initiatives, and technologies related to the market. In addition, it helps venture capitalists understand the companies better and make informed decisions.



Neural's AI predictions for 2021 – The Next Web

It's that time of year again! We're continuing our long-running tradition of publishing a list of predictions from AI experts who know what's happening on the ground, in the research labs, and at the boardroom tables.

Without further ado, let's dive in and see what the pros think will happen in the wake of 2020.

Dr. Arash Rahnama, Head of Applied AI Research at Modzy:

Just as advances in AI systems are racing forward, so too are opportunities and abilities for adversaries to trick AI models into making wrong predictions. Deep neural networks are vulnerable to subtle adversarial perturbations applied to their inputs (adversarial AI) which are imperceptible to the human eye. These attacks pose a great risk to the successful deployment of AI models in mission-critical environments. At the rate we're going, there will be a major AI security incident in 2021 unless organizations begin to adopt proactive adversarial defenses in their AI security posture.

2021 will be the year of explainability. As organizations integrate AI, explainability will become a major part of ML pipelines to establish trust with users. Understanding how machine learning reasons against real-world data helps build trust between people and models. Without understanding outputs and decision processes, there will never be true confidence in AI-enabled decision-making. Explainability will be critical in moving into the next phase of AI adoption.

The combination of explainability and new training approaches initially designed to deal with adversarial attacks will lead to a revolution in the field. Explainability can help us understand what data influenced a model's prediction and where bias crept in, information which can then be used to train robust models that are more trusted, reliable and hardened against attacks. This tactical knowledge of how a model operates will help create better model quality and security as a whole. AI scientists will redefine model performance to encompass not only prediction accuracy but issues such as lack of bias, robustness and strong generalizability to unpredicted environmental changes.

Dr. Kim Duffy, Life Science Product Manager at Vicon.

Forming predictions for artificial intelligence (AI) and machine learning (ML) is particularly difficult to do while only looking one year into the future. For example, in clinical gait analysis, which looks at a patient's lower-limb movement to identify underlying problems that result in difficulties walking and running, methodologies like AI and ML are very much in their infancy. This is something Vicon highlights in our recent life sciences report, A deeper understanding of human movement. To utilize these methodologies and see true benefits and advancements for clinical gait analysis will take several years. Effective AI and ML require a massive amount of data to train on, so that trends and patterns can be identified with the appropriate algorithms.

For 2021, however, we may see more clinicians, biomechanists, and researchers adopting these approaches during data analysis. Over the last few years, we have seen more literature presenting AI and ML work in gait. I believe this will continue into 2021, with more collaborations occurring between clinical and research groups to develop machine learning algorithms that facilitate automatic interpretations of gait data. Ultimately, these algorithms may help propose interventions in the clinical space sooner.

It is unlikely we will see the true benefits and effects of machine learning in 2021. Instead, we'll see more adoption and consideration of this approach when processing gait data. For example, the presidents of Gait and Posture's affiliate societies provided a perspective on the clinical impact of instrumented motion analysis in the journal's latest issue, where they emphasized the need to use methods like ML on big data in order to create better evidence of the efficiency of instrumented gait analysis. This would also provide better understanding and less subjectivity in clinical decision-making based on instrumented gait analysis. We're also seeing more credible endorsements of AI/ML, such as from the Gait and Clinical Movement Analysis Society, which will encourage further adoption by the clinical community moving forward.

Joe Petro, CTO of Nuance Communications:

In 2021, we will continue to see AI come down from the hype cycle, and the promise, claims, and aspirations of AI solutions will increasingly need to be backed up by demonstrable progress and measurable outcomes. As a result, we will see organizations shift to focus more on specific problem solving and creating solutions that deliver real outcomes that translate into tangible ROI, not gimmicks or building technology for technology's sake. Those companies that have a deep understanding of the complexities and challenges their customers are looking to solve will maintain the advantage in the field, and this will affect not only how technology companies invest their R&D dollars, but also how technologists approach their career paths and educational pursuits.

With AI permeating nearly every aspect of technology, there will be an increased focus on ethics and on deeply understanding the implications of AI in producing unintentional consequential bias. Consumers will become more aware of their digital footprint and how their personal data is being leveraged across systems, industries, and the brands they interact with, which means companies partnering with AI vendors will increase the rigor and scrutiny around how their customers' data is being used, and whether or not it is being monetized by third parties.

Dr. Max Versace, CEO and Co-Founder, Neurala:

We'll see AI deployed in the form of inexpensive and lightweight hardware. It's no secret that 2020 was a tumultuous year, and the economic outlook is such that capital-intensive, complex solutions will be sidestepped for lighter-weight, perhaps software-only, less expensive solutions. This will allow manufacturers to realize ROI in the short term without massive up-front investments. It will also give them the flexibility needed to respond to fluctuations in the supply chain and customer demands, something that we've seen play out on a larger scale throughout the pandemic.

Humans will turn their attention to why AI makes the decisions it makes. The explainability of AI has often been discussed in the context of bias and other ethical challenges. But as AI comes of age, grows more precise and reliable, and finds more applications in real-world scenarios, we'll see people start to ask "why?" The reason is trust: humans are reluctant to hand power to automated systems they do not fully understand. In manufacturing settings, for instance, AI will need to be not only accurate, but also able to explain why a product was classified as normal or defective, so that human operators can develop confidence and trust in the system and let it do its job.
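To make the "why" concrete, here is a minimal, purely illustrative sketch of the idea: a linear defect classifier whose score decomposes into per-feature contributions, so an operator can see which measurement drove the decision. The feature names, weights, and threshold are all hypothetical, not any vendor's actual system.

```python
# Illustrative only: a linear "defective" score that can be broken down
# into per-feature contributions, giving a human-readable explanation.

FEATURES = ["surface_roughness", "diameter_deviation", "temperature"]

# Hypothetical learned weights and bias (assumed for the example).
WEIGHTS = {"surface_roughness": 2.0, "diameter_deviation": 3.5, "temperature": 0.5}
BIAS = -4.0

def classify_with_explanation(measurements):
    """Return ('defective' or 'normal', per-feature contributions)."""
    contributions = {f: WEIGHTS[f] * measurements[f] for f in FEATURES}
    score = BIAS + sum(contributions.values())
    label = "defective" if score > 0 else "normal"
    return label, contributions

label, why = classify_with_explanation(
    {"surface_roughness": 1.2, "diameter_deviation": 0.9, "temperature": 0.1}
)
print(label)                      # defective
print(max(why, key=why.get))      # diameter_deviation drove the call
```

The point is not the model (real systems are far more complex) but the interface: alongside the label, the operator sees which inputs pushed the decision, which is exactly the trust-building step Versace describes.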

Another year, another set of predictions. You can see how our experts did last year by clicking here. You can see how our experts did this year by building a time machine and traveling to the future. Happy Holidays!

Published December 28, 2020 07:00 UTC

Read more:
Neural's AI predictions for 2021 - The Next Web

Robotic Interviews, Machine Learning And the Future Of Workforce Recruitment – Entrepreneur

These would affect all aspects of HR functions such as the way HR professionals on-board and hire people, and the way they train them


October 12, 2020 | 4 min read

Opinions expressed by Entrepreneur contributors are their own.

You're reading Entrepreneur India, an international franchise of Entrepreneur Media.

Artificial intelligence (AI) is changing every aspect of our lives, and at a rapid pace. That includes our professional lives, too. Experts expect that in the days ahead, AI will become a greater part of our careers as companies everywhere adopt the technology and rely more on machines that affect our daily professional activities. Soon enough, we will see machine learning and deep learning in HR too. They will affect all aspects of HR (human resources), such as the way HR professionals on-board and hire people, and the way they train them.

Impact on onboarding and recruitment

These days, companies are using robotics in HR to make sure they have found the right people for particular job profiles. This means that even before you step into your new office, your company already knows, thanks to such technology, that you are the best person for the job. They use AI to pre-screen candidates before inviting the best of them for interviews. This especially applies to large companies that offer thousands of new jobs each year and attract millions of applicants.
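The pre-screening step described above can be sketched in a few lines. This is a toy illustration, not any real applicant-tracking system: the job profile, skill weights, and candidate data are all invented for the example.

```python
# Illustrative only: rank applicants by how well their listed skills
# match a (hypothetical) job profile, so recruiters interview the top few.

REQUIRED = {"python", "sql"}       # assumed must-have skills
PREFERRED = {"spark", "airflow"}   # assumed nice-to-have skills

def screen(candidates, top_n=2):
    """Score candidates (+2 per required skill, +1 per preferred) and
    return the names of the top_n highest scorers."""
    def score(skills):
        s = set(skills)
        return 2 * len(s & REQUIRED) + len(s & PREFERRED)
    ranked = sorted(candidates, key=lambda c: score(c["skills"]), reverse=True)
    return [c["name"] for c in ranked[:top_n]]

applicants = [
    {"name": "Ana",  "skills": ["python", "sql", "spark"]},   # score 5
    {"name": "Ben",  "skills": ["java"]},                     # score 0
    {"name": "Cara", "skills": ["python", "airflow"]},        # score 3
]
print(screen(applicants))  # ['Ana', 'Cara']
```

Production systems parse free-text résumés and use learned models rather than hand-set weights, but the shape is the same: score every applicant automatically, then surface only the strongest matches to a human.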

Impact on training on the job

Companies are also using machine learning and deep learning in HR to help provide on-the-job training to employees. Landing a job and settling into it does not mean you know it all; you need job-related training so you can keep getting better. This is where experts expect AI to play a major role in the coming years. It will also help one generation of professionals in an organization transfer its skills to its successors, helping companies avoid skill gaps.

Workforce augmentation

Robotics in HR will play a major role in augmenting the people working in organizations whose management implements such technology. A major reason people are apprehensive about AI in the workplace is the fear that it will replace them and do everything they currently do, leading to job losses. In the present scenario, however, AI is about augmenting the workforce: it helps you perform your job with greater efficiency and, contrary to popular opinion, will not replace you.

Workplace surveillance

Companies can also use machine learning and deep learning in HR for workforce surveillance. This makes several employees uncomfortable, as they feel such technology encroaches on their workplace privacy. A recent Gartner survey found that more than half of companies with annual turnover above $750 million use digital tools to gather data on their employees' activities and monitor their overall performance. As part of this, they analyze employees' emails to find out how engaged and content they are with their work.
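The email-analysis idea reduces, in its simplest form, to scoring text against word lists. The sketch below is a deliberately crude, purely illustrative lexicon approach; the word lists are invented, and real sentiment or engagement tools use trained language models rather than hand-picked vocabularies.

```python
# Illustrative only: estimate the tone of an email by counting hits
# against small (hypothetical) positive and negative word lists.

POSITIVE = {"excited", "great", "thanks", "happy", "progress"}
NEGATIVE = {"frustrated", "blocked", "overworked", "quit", "tired"}

def engagement_score(text):
    """Return (# positive word hits) - (# negative word hits)."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(engagement_score("Great progress this week, thanks team!"))  # 3
print(engagement_score("I feel blocked and frustrated."))          # -2
```

Even this toy version makes the privacy concern in the paragraph above tangible: the input is the literal text of an employee's mail, which is precisely what many workers object to having mined.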

Usage of workplace robots

Apart from robotics in HR, companies these days are also using physical robots that can move around on their own. This is especially true for warehousing and manufacturing companies, and experts expect it will soon become a common feature in many other workplaces too. Companies specializing in mobility are creating delivery robots that can move around the workplace and deliver items straight to your desk. Tech companies are also developing security robots, which experts believe will become commonplace because they can protect commercial properties from trespassers. Companies are also developing software to help you park your car at the office.

See the rest here:
Robotic Interviews, Machine Learning And the Future Of Workforce Recruitment - Entrepreneur