Exploring the Present and Future Impact of Robotics and Machine Learning on the Healthcare Industry – Robotics and Automation News

Robotics has already revolutionized the manufacturing industry, but it has begun to impact the healthcare industry as well. AI is already showing that it can do a lot of what humans can, only faster and cheaper.

The potential benefits of machine learning and artificial intelligence are only starting to be seen, though we can make an educated guess about the benefits they can deliver.

Let's look at the current and future impact robotics and machine learning are making on the healthcare industry.

The Automation of Hospital Support Tasks

Robots may or may not be wandering the halls dispensing hot towels and medication, but they are filling a number of critical hospital support roles. For example, they're used to quickly and thoroughly disinfect operating rooms and scrub floors.

They don't get tired, their performance doesn't deteriorate at the end of a shift, and they can't get sick themselves. This technology is expected to reduce the rate of hospital-acquired infections, which currently stands at 4 percent.

Robots are at the vanguard of minimally invasive surgery. This is why surgical robot sales are expected to hit $6.4 billion in 2020.

We're only starting to see what care robots can do. One system can draw blood and provide assistance with patient care, like changing undergarments. This frees up nurses from unpleasant and physically demanding tasks.

They may one day take the next step and perform lab tests and deliver care without requiring a human present beyond the patient.

We're already seeing robots push around racks of medication, towels, and laboratory specimens. This frees up hospital staff to work on more creative and demanding tasks.

The Impact of AI and Robotics in Manufacturing

Robots have been improving productivity and quality in the manufacturing sector for years. For example, these machines can manufacture complete medical device components without any further processing.

CNC Swiss machining is the next generation in manufacturing technology and could be taken to a whole new plateau thanks to AI. It can already do everything from threading to milling to slotting before delivering a high quality, completed part.

CNC machines can now be driven by advanced AI to create precise parts across a wide variety of geometries, each produced as quickly as possible.

Advances in digital design mean you can design the product, test the design in various simulations, and send the data file to a contract manufacturer who sends the completed item back. Prototyping is faster, and low rate production and custom parts are now economical.

The Arrival of Caregiving Robots

Robots will soon be able to interact with patients, call for help when required, dispense medication and remind patients of appointments.

Everything that can be done to help people age in place saves money while improving quality of life. This same technology could be used in long-term care facilities and nursing homes to improve the quality of care at a lower cost.

Robotic medical dispenser systems can already dispense medication to patients whenever required as long as it is within predetermined limits. And pharmaceutical companies could get valuable insight into how their medications are used.

The Promise of AI

Artificial intelligence and robotics promise to improve quality of care, access to medical services, and support for patients and caregivers alike at a lower cost to everyone.


3 questions to ask before investing in machine learning for pop health – Healthcare IT News

The goal of population health is to use data to identify those who will benefit from intervention sooner, typically in an effort to prevent unnecessary hospital admissions. Machine learning introduces the potential of moving population health away from one-size-fits-all risk scores and toward matching individuals to specific interventions.

The combination of the two has enormous potential. However, many of the factors that will determine success or failure have nothing to do with technology and should be considered before investing in machine learning or population health.

Population health software, with or without machine learning, only produces suggestions. Getting a team to take action, particularly if that action is different, is one of the hardest things to do in healthcare. You will not succeed without executive support. Executives will not support you without significant incentive to do so.

Here's an easy surrogate for whether there is enough of that incentive: whether those executives' jobs are in jeopardy if too many people go to the hospital. If not, the likelihood that an investment will lead to measurable improvement is minimal.

If you've been ordered to "do" population health, your best bet is to install a low-cost risk score or have your team write a query to identify the oldest, sickest people with the most readmissions. Either will return more or less the same results, and your team of care managers is used to ignoring said results without rocking the boat. If there is sufficient incentive, read on.
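The "write a query" option really is that simple. Here is a minimal sketch of such a ranking over a hypothetical patient registry; the field names are illustrative only, not any particular EHR schema:

```python
# Hypothetical patient registry; field names are illustrative only.
patients = [
    {"id": 1, "age": 82, "conditions": 5, "readmissions": 3},
    {"id": 2, "age": 45, "conditions": 1, "readmissions": 0},
    {"id": 3, "age": 91, "conditions": 7, "readmissions": 4},
    {"id": 4, "age": 77, "conditions": 4, "readmissions": 2},
    {"id": 5, "age": 63, "conditions": 2, "readmissions": 1},
]

# Rank by the naive "most readmissions, sickest, oldest" criteria
# and take the top of the list as the outreach cohort.
ranked = sorted(patients,
                key=lambda p: (p["readmissions"], p["conditions"], p["age"]),
                reverse=True)
outreach = [p["id"] for p in ranked[:2]]
print(outreach)  # -> [3, 1]
```

A plain sort over readmissions, condition count, and age surfaces roughly the same cohort as an off-the-shelf risk score, which is exactly the point: the hard part is not the ranking.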

Henry Ford is credited with saying, "If I asked people what they wanted, they would have said faster horses." It's human nature to try to apply a new technology in an old way.

Economists have named this the IT Productivity Paradox and have studied the cost of applying new technical capabilities in old ways. There are signs that healthcare organizations are unknowingly walking this plank.

For decades, risk scores were designed to identify the costliest patients with little consideration of the types of costs, the diseases they suffer, whether or not those costs are preventable, etc.

As a result, according to a systematic review of 30 risk stratification algorithms appearing in the Journal of the American Medical Association, "most current readmission risk prediction models that were designed for either comparative or clinical purposes perform poorly." A recently published study in Science also showed that prioritizing based on cost discriminates against people of color. Applying more data and better math to solve the problem in the old way is an expensive way to propagate existing shortcomings.

The opportunity now made possible is the ability to match individuals to interventions. Patients with serious mental illness who are most likely to have an inpatient psychiatric admission are very different from those with serious illnesses who might benefit from home-based palliative care. Clinicians wouldn't treat them the same; neither should our approach to prioritization.

However, you will need to design for this and clinical teams should be prepared for the repercussions. Patients identified with rising risk (as opposed to peak utilization) will not seem as sick.

Clinical teams trained to triage may feel like they're not doing their jobs if the patients aren't as obviously acute. It's important to discuss these repercussions and prepare in advance of the introduction of new technology.

Using technology to send more of the right people into a program that doesnt have an impact only adds to the cost of an already failing program. Surprisingly, very few programs have ever measured the impact of their interventions.

Those that have often rely on measuring patients before and after they enter into care management programs, which is misleading and biased on many levels.

If you are not confident that the existing program makes a difference, invest in measuring and improving the existing program's performance before investing additional resources. A good read on the pros and cons of different approaches to measuring impact is here.

Starting with a program of measurement can create a culture of measurement, improvement, and accountability - a great foundation for a pop health effort. Involving the clinical team in the definition of measures that matter will go a long way.

Another important consideration is whether your intervention is costly to deliver. The more costly it is to steer resources toward the wrong people, the more likely your program is to benefit from smarter prioritization.

For both reasons above, if your program is entirely telephonic and targets older people with chronic complex diseases, you may want to invest in program design and measurement before investing in stratification technology.

You're in great shape, and your odds of success are exponentially higher. You're also better informed, as you and the team shift focus to decisions such as whether to build versus partner, what unique data you collect that can be used to your advantage, and how you'll measure algorithm and program performance.

Leonard D'Avolio, PhD, is an assistant professor at Harvard Medical School and Brigham and Women's Hospital, and the CEO and founder of Cyft. He shares his work on LinkedIn and Twitter.


Amazon Wants to Teach You Machine Learning Through Music? – Dice Insights

Machine learning has rapidly become one of those buzzwords embraced by companies around the world. Even if they don't fully understand what it means, executives think that machine learning will magically transform their operations and generate massive profits. That's good news for technologists, provided they actually learn the technology's fundamentals, of course.

Amazon wants to help with the learning aspect of things. At this year's AWS re:Invent, the company is previewing the DeepComposer, a 32-key keyboard that's designed to train you in machine learning fundamentals via the power of music.

No, seriously. "AWS DeepComposer is the world's first musical keyboard powered by machine learning to enable developers of all skill levels to learn Generative AI while creating original music outputs," reads Amazon's ultra-helpful FAQ on the matter. "DeepComposer consists of a USB keyboard that connects to the developer's computer, and the DeepComposer service, accessed through the AWS Management Console." There are tutorials and training data included in the package.

Generative AI, the FAQ continues, "allows computers to learn the underlying pattern of a given problem and use this knowledge to generate new content from input (such as image, music, and text)." In other words, you're going to play a really simple song like "Chopsticks," and this machine-learning platform will use that seed to build a four-hour Wagner-style opera. Just kidding! Or are we?
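DeepComposer's actual models are far more sophisticated generative networks, but the seed-to-new-content idea can be illustrated with something much simpler: a toy Markov chain that learns note-to-note transitions from a short melody and then extends it. This is purely illustrative, not how DeepComposer works:

```python
import random
from collections import defaultdict

def train_markov(melody):
    """Count note-to-note transitions in a seed melody."""
    transitions = defaultdict(list)
    for prev, nxt in zip(melody, melody[1:]):
        transitions[prev].append(nxt)
    return transitions

def generate(transitions, start, length, rng):
    """Extend a melody by repeatedly sampling learned transitions."""
    notes = [start]
    while len(notes) < length:
        choices = transitions.get(notes[-1])
        if not choices:
            break  # dead end: the seed never left this note
        notes.append(rng.choice(choices))
    return notes

# A short seed melody stands in for what you'd play on the keyboard.
seed_melody = ["C", "D", "E", "C", "D", "G", "E", "C"]
model = train_markov(seed_melody)
print(generate(model, "C", 8, random.Random(42)))
```

The generated line only ever uses patterns present in the seed, which is the essence (if not the sophistication) of learning an "underlying pattern" and producing new content from it.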

Jokes aside, the idea that a machine-learning platform can generate lots of data based on relatively little input is a powerful one. Of course, Amazon isn't totally altruistic in this endeavor; by serving as a training channel for up-and-coming technologists, the company obviously hopes that more people will turn to it for all of their machine learning and A.I. needs in future years. Those interested can sign up for the preview on a dedicated site.

This isn't the first time that Amazon has plunged into machine-learning training, either. Late last year, it introduced AWS DeepRacer, a model racecar designed to teach developers the principles of reinforcement learning. And in 2017, it rolled out the AWS DeepLens camera, meant to introduce the technology world to Amazon's take on computer vision and deep learning.


For those who master the fundamentals of machine learning, the jobs can prove quite lucrative. In September, the IEEE-USA Salary & Benefits Survey suggested that engineers with machine-learning knowledge make an annual average of $185,000. Earlier this year, meanwhile, Indeed pegged the average machine learning engineer salary at $146,085, and its job growth between 2015 and 2018 at 344 percent.

If you're not interested in Amazon's version of a machine-learning education, there are other channels. For example, OpenAI, the sorta-nonprofit foundation (yes, it's as odd as it sounds), hosts what it calls Gym, a toolkit for developing and comparing reinforcement learning algorithms; it also has a set of models and tools, along with a very extensive tutorial in deep reinforcement learning.

Google likewise has a crash course, complete with 25 lessons and 40+ exercises, that's a good introduction to machine learning concepts. Then there's Hacker Noon and its interesting breakdown of machine learning and artificial intelligence.

Once you have a firmer grasp on the core concepts, you can turn to Bloomberg's Foundations of Machine Learning, a free online course that teaches advanced concepts such as optimization and kernel methods. A lot of math is involved.

Whatever learning route you take, it's clear that machine learning skills have incredible value right now. Familiarizing yourself with this technology, whether via traditional lessons or a musical keyboard, can only help your career in tech.


Amazons new AI keyboard is confusing everyone – The Verge

Amazon Web Services debuted a keyboard called DeepComposer this week, claiming it's "the world's first musical keyboard powered by generative AI." It has 32 keys, costs $99, and connects to a software interface that uses machine learning and cloud computing to generate music based on what you play.

It's been unclear who this is for, and many have latched on to the fact that the music it creates just sounds bad. It looks like a consumer product, and Amazon used an over-the-top presentation to hype it, which included what AWS claimed was "the first hybrid AI human pop acoustic collaboration." (It's not.)

But actually, the keyboard is intended to be a beginning tool for developers to get into machine learning and music. The device is AWS's newest offering for developers to familiarize themselves with aspects of machine learning, following AWS DeepRacer (an RC car) and AWS DeepLens (a camera).

DeepComposer is not meant to make music for entertainment purposes or push the state of generative AI. It will never be marketed to aspiring musicians.

Even though Amazon says DeepComposer is for developers, many devs don't understand what to do with it. According to Amazon, since this is for developers, no musical knowledge is needed. But it still uses traditional music theory terms and is centered on a thing that very much requires a modicum of musical knowledge: a keyboard.

The $99 physical keyboard isn't even necessary, since the DeepComposer software has a virtual keyboard. And AWS didn't develop or design the keyboard. It's a MIDI controller by Taiwanese company Midiplus that's been around for years. Amazon sells it for $46.15. They're the same. Amazon's costlier version is not powered by AI; it sends MIDI to software that's hooked up to the cloud. Any MIDI keyboard would technically work.

At the end of the day, this is not for the everyday person. AWS does not claim it is. DeepComposer is not supposed to write the next radio hit. But it's also hard to see how it's meaningful for its target audience.

It is compelling to see a company as large as Amazon getting into the world of democratizing music creation, even if this first iteration somewhat missed the mark. Who knows, maybe it's all a long play to get more devs familiar with AWS's machine learning platform, SageMaker.

But given how simple DeepComposer is, developers could learn about the same basics of machine learning and music through any number of other interfaces, like Google Magenta, without being pitched an overpriced controller stamped with a logo.


Machine Learning Answers: If Caterpillar Stock Drops 10% A Week, What's The Chance It'll Recoup Its Losses In A Month? – Forbes

We found that if Caterpillar's (NYSE: CAT) stock drops 10% in a week (5 trading days), there is a solid 25% chance that it will rise by 10% over the subsequent month (20 trading days).

Caterpillar stock has seen significant volatility this year. While the company is being impacted by growing headwinds to the global economy, the uncertainty surrounding the trade war between the U.S. and China, relatively mixed quarterly earnings reports, as well as slowing sales, its relatively high capital returns, and strong balance sheet have supported the stock to an extent.

Considering the recent price swings, we started with a simple question that investors could be asking about Caterpillar stock: given a certain drop or rise, say a 10% drop in a week, what should we expect for the next week? Is it very likely that the stock will recover the next week? What about the next month or a quarter? You can test a variety of scenarios on the Trefis Machine Learning Engine to calculate, if Caterpillar stock dropped, what's the chance it'll rise.

For example, after a 5% drop over a week (5 trading days), the Trefis machine learning engine says the chances of an additional 5% drop over the next month are about 23%. Quite significant, and helpful to know for someone trying to recover from a loss. Knowing what to expect for almost any scenario is powerful. It can help you avoid rash moves. Given the recent volatility in the market and the mix of macroeconomic events (including the trade war with China and interest rate easing by the U.S. Fed), we think investors can prepare better.

Below, we also discuss a few scenarios and answer common investor questions:

Question 1: Does a rise in CAT stock become more likely after a drop?

Answer:

Consider two situations:

Case 1: CAT stock drops by 5% or more in a week

Case 2: CAT stock rises by 5% or more in a week

Is the chance of say a 5% rise in CAT stock over the subsequent month after Case 1 or Case 2 occurs much higher for one versus the other?

The answer is absolutely!

Turns out, the chances of a 5% rise over the next month (20 trading days) are meaningfully higher for Case 1, where CAT stock has just suffered a big loss, versus Case 2.

Specifically, chances of a 5% rise in CAT stock over the next month:

= 40% after Case 1, where CAT stock drops by 5% in a week

versus,

= 32% after Case 2, where CAT stock rises by 5% in a week

Question 2: What about the other way around, does a drop in CAT stock become more likely after a rise?

Answer:

Consider, once again, two cases:

Case 1: CAT stock drops by 5% in a week

Case 2: CAT stock rises by 5% in a week

Turns out the chances of a 5% drop after either Case 1 or Case 2 has occurred are actually quite similar: both pretty close to 23%.

Question 3: Does patience pay?

Answer:

According to the data and the Trefis machine learning engine's calculations, absolutely!

Given a drop of 5% in CAT stock over a week (5 trading days), while there is only about a 21% chance that CAT stock will gain 5% over the subsequent week, there is a more than 50% chance this will happen in 6 months, and a 62% chance it'll gain 5% over a year (about 250 trading days).
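Statistics like these are just empirical conditional probabilities computed over historical prices. Here is a minimal sketch of the calculation, run on a synthetic random-walk series since the Trefis engine and its data are proprietary:

```python
import random

def cond_prob_rise_after_drop(prices, drop=-0.05, rise=0.05,
                              window=5, horizon=20):
    """Empirical P(gaining >= `rise` at some point in the next `horizon`
    days | the stock fell <= `drop` over the previous `window` days)."""
    hits = total = 0
    for t in range(window, len(prices) - horizon):
        past = prices[t] / prices[t - window] - 1.0
        if past <= drop:
            total += 1
            future = max(prices[t + 1:t + horizon + 1]) / prices[t] - 1.0
            if future >= rise:
                hits += 1
    return hits / total if total else float("nan")

# Synthetic random-walk prices, purely for demonstration.
rng = random.Random(0)
prices = [100.0]
for _ in range(2000):
    prices.append(prices[-1] * (1.0 + rng.gauss(0.0, 0.02)))

print(cond_prob_rise_after_drop(prices))
```

Swapping in real CAT closing prices and varying `window` and `horizon` reproduces the kind of drop-then-recover statistics the article quotes.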

A table in the original article (credit: Trefis) shows this trend: the chance of recovery rises as the waiting period lengthens.

Question 4: What about the possibility of a drop after a rise if you wait for a while?

Answer:

After seeing a rise of 5% over 5 days, the chances of a 5% drop in CAT stock are about 24% over the subsequent quarter of waiting (60 trading days). However, this chance drops slightly to about 23% when the waiting period is a year (250 trading days).

A table in the original article shows this trend as well.



Onica Showcases Advanced Internet of Things, Artificial Intelligence, and Machine Learning Capabilities at AWS re:Invent 2019 – PR Web

"With IoTanium Cloud and the IoTanium Developer Kit, we're excited to help developers increase efficiencies and drive actual business value from their applications."

LAS VEGAS (PRWEB) December 02, 2019

Onica, a Premier Consulting Partner in the Amazon Web Services (AWS) Partner Network (APN) and AWS Managed Service Provider (MSP), announced today the launch of IoTanium Cloud, a collection of AWS CloudFormation resources that ease and accelerate the deployment of, and time to insight from, Internet of Things (IoT) devices. The cloud native services provider boasts over 500 AWS certifications and 9 AWS Competencies, offering a full spectrum of cloud capabilities.

Onica has differentiated itself in the APN with its ability to accelerate the innovative capabilities of the cloud, such as Artificial Intelligence (AI), Machine Learning (ML), Big Data, and IoT. At AWS re:Invent 2018, Onica launched a proprietary IoTanium Developer Kit, which provided physical IoT hardware to help customers quickly prototype connected devices. IoTanium Cloud, the next evolution of Onica's IoTanium suite, is a cloud configuration tool designed to codify best practices in building a serverless IoT backend on AWS: it lets developers easily create serverless ingestion pipelines and device monitoring metrics, and accelerates data flow from IoT devices. Developers can gain preview access to IoTanium Cloud before it launches in AWS Marketplace starting on Monday, December 2nd at Onica.com/IoTaniumCloud.

Onica's IoT, AI, and ML capabilities will be on full display at AWS re:Invent 2019, as Onica has developed a hands-on interactive lab for attendees to work with the IoTanium Developer Kit and IoTanium Cloud. The Onica.create(AI.Magic) experience will guide attendees through the hardware assembly of an IoT-enabled magic wand. Participants will then leverage Amazon SageMaker to train wand gestures. After testing their models, the complete IoT experience can be realized as they use the magic wand to navigate a wizard through a virtual maze using only the gestures trained into their magic wand. All participants who successfully complete the lab within the 30-minute time limit will take home their own IoTanium DevKit. The Onica.create(AI.Magic) lab can be found at AWS re:Invent 2019 in the Quad at ARIA, Booth 508. To learn more, please visit onica.com/reinvent.

The IoTanium Developer Kit along with IoTanium Cloud is designed to provide customers an opportunity to prototype IoT-enabled applications quickly and painlessly, letting them leverage the full suite of AWS analytics services for device data. IoTanium allows customers to spend more time focused on the business value of IoT applications, rather than the undifferentiated heavy lifting of connectivity, charging, provisioning, device management, and security.

"With more than 25 billion connected devices in use by 2021, it's critical that developers have access to the tools and services that will give their IoT applications a competitive edge," said Tolga Tarhan, Chief Technology Officer at Onica. "With IoTanium Cloud and the IoTanium Developer Kit, we're excited to help developers increase efficiencies and drive actual business value from their applications."

Earlier this year, Onica announced it had agreed to be acquired by Rackspace. The acquisition brings Onica's innovative professional services and strategic advisory expertise, including application and infrastructure modernization, serverless development, containers, and IoT, to the Rackspace portfolio, complementing its existing managed cloud services capabilities.

AWS re:Invent 2019 takes place from December 2-6 in Las Vegas. The Onica.create(AI.Magic) lab can be found at AWS re:Invent 2019 in the Quad at ARIA, Booth 508. To learn more about the Onica.create(AI.Magic) lab, please visit onica.com/reinvent.

About Onica

Onica is a cloud consulting and managed services company, helping businesses enable, operate, and innovate on the cloud. From migration strategy to operational excellence, cloud-native development, and immersive transformation, Onica is a full spectrum integrator, helping hundreds of companies realize the value, efficiency, and productivity of the cloud. Learn more at http://www.onica.com.


The 10 Hottest AI And Machine Learning Startups Of 2019 – CRN: The Biggest Tech News For Partners And The IT Channel

AI Startup Funding In 2019 Set To Outpace Previous Year

Investors just can't get enough of artificial intelligence and machine-learning startups, if the latest data on venture capital funding is any indication.

Total funding for AI and machine-learning startups for the first three quarters of 2019 was $12.1 billion, surpassing last year's total of $10.2 billion, according to the PwC and CB Insights' MoneyTree report.

With global spending on AI systems set to grow 28.4 percent annually to $97.9 billion, according to research firm IDC, these startups see an opportunity to build new hardware and software innovations that can fundamentally change the way we work and live.

What follows is a look at the 10 hottest AI and machine-learning startups of 2019, whose products range from new AI hardware and open-source platforms to AI-powered sales applications.


Synthesis-planning program relies on human insight and machine learning – Chemical & Engineering News

Computer-aided synthesis planning (CASP) programs aim to replicate what synthetic chemists do when tackling a synthesis: start with a target molecule and then work backwards to trace a synthetic route, including an efficient and achievable series of reactions and reagents. Work in this field stretches back 50 years, but successful examples have emerged only in the last several years. These either rely on chemistry rules written by human chemists, or on machine-learning algorithms that have assimilated synthesis knowledge from databases of reactions.

Now researchers report that one CASP program that combines human knowledge and machine learning performs better than those using only artificial intelligence, particularly for synthetic routes involving rarely used reactions (Angew. Chem. Int. Ed. 2019, DOI: 10.1002/anie.201912083).

The program is an update to Chematica, which was developed by Bartosz A. Grzybowski of the Ulsan National Institute of Science and Technology and the Polish Academy of Sciences, and is marketed by MilliporeSigma as Synthia. Grzybowski says the program now includes almost 100,000 rules that he and colleagues have encoded over 15 years. Last year, they demonstrated that Chematica's synthetic plans are as good as or better than human chemists' in laboratory syntheses. "To this point, Grzybowski has been perhaps the staunchest proponent of the expert approach to synthesis planning software," says Connor W. Coley of the Massachusetts Institute of Technology, who has developed a machine learning-based CASP program.

Now Grzybowski and colleagues have incorporated machine learning into Chematica. They trained machine-learning algorithms called neural networks on about 1.4 million product molecules that match one or more of Chematica's expert-coded reactions. Grzybowski says this hybrid approach teaches the algorithms which of those expert rules chemists actually use. That can help Chematica avoid a synthetic step that is possible but impractical, or favor a reaction that may be rarely seen in the literature but is necessary for certain transformations.

Grzybowski says human insight is important to include in a CASP program because chemical synthesis poses a more difficult challenge for machine-learning algorithms than playing chess or Go, games that these programs consistently beat humans at. For one, successful synthetic-route planning often involves considering two or three steps simultaneously. And unlike making a move in those games, calculating the effects of a given synthetic transformation (for example, the effects on electron density or stereochemistry) takes significant computing time.

The researchers compared the abilities of their hybrid algorithm with those of a purely neural network-based approach published last year (Nature 2018, DOI: 10.1038/nature25978). The two methods were about equally effective at proposing synthetic steps that matched published reactions when their training data included thousands of examples of those reactions. But when there were fewer than 100 examples, the neural network approach rarely identified a verified transformation, while the hybrid version of Chematica found it more than 75% of the time. Several of the hybrid program's proposed reactions to synthesize the glaucoma drug bimatoprost were not represented in its training data, demonstrating its ability to use unusual reactions.

Chemists agree that this human-machine partnership shows promise, especially for less common reactions. "This is important because there has been a preference of modern retrosynthetic algorithms to favor well-precedented reactions," says Timothy A. Cernak, whose lab at the University of Michigan is sponsored by MilliporeSigma and uses Synthia. But Coley cautions that a fair comparison of a hybrid approach and a neural network alone is difficult because there's greater potential for human experts to bias the data that the system is trained and tested on.

The researchers have not verified the generated synthetic routes in lab experiments, but Grzybowski says his group will publish new, lab-tested natural product syntheses from this program soon. He also says there are plans to incorporate the hybrid system into Synthia.


Here’s why machine learning is critical to success for banks of the future – Tech Wire Asia

HSBC Bank and others use machine learning to win big. Source: Shutterstock

Machine learning is a popular buzzword today, and has been heralded as one of the greatest innovations conceived by man.

A branch of artificial intelligence (AI), machine learning is increasingly embedded in daily life, such as automatic email reply predictions, virtual assistants, and chatbots.

The technology is also expected to revolutionize the world of finance. While it is slower than other industries in embracing the technology, the impact of ML is already visibly significant.

Most recently, HSBC said that the bank was using the technology to combat financial crime.

"We have 608 million transactions every month. Hence, with AI and machine learning we are able to identify a good transaction done by an innocent person versus a transaction conducted by criminals," said HSBC Hong Kong Financial Crime Threat Mitigation Regional Head Paul Jevtovic.

Like HSBC, several other banks are beginning to deploy ML at scale. Here are the top three use cases in the banking and financial services space:

Prior to the advent of ML, decisions were made on a rule-based system, where the same criteria are applied across a broad customer segment, subjecting them to a one-size-fits-all solution.

With ML, bankers can approach customers in a more personalized way. ML algorithms can analyze volumes of consumer data in banks, tracing each customer's digital footprint with a unified, omni-faceted view.

This footprint includes their financial status across multiple accounts, financial investments, and banking transactions.

With the relevant data and armed with the right analytical tools, ML can provide valuable insights that allow banks to create tailor-made solutions based on a specific customer's behavior, preferences, and requirements.

Credit risk assessment

With the wide tracking of a customer's digital footprint that ML offers, banks can assess a potential borrower's ability to repay more quickly and accurately than with traditional methods.

Leveraging ML can help reduce bias, and can quickly distinguish applicants who are more creditworthy from those with a higher default risk, even without an elaborate credit history. ML can also help banks forecast potential issues and rectify them before they arise.

With the assurance that risks are being mitigated, banks can focus on issues that can add value to their customers, increase productivity, and provide greater support to their employees.
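The scoring step described above can be sketched as a toy logistic-regression model trained on synthetic data. Every feature, number, and function name here is an illustrative assumption, not a real credit model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical standardized features per applicant: [debt-to-income, late payments]
n = 200
X = rng.normal(size=(n, 2))

# Synthetic ground truth: both features raise the probability of default
true_w = np.array([2.0, 1.5])
p = 1 / (1 + np.exp(-(X @ true_w)))
y = (rng.random(n) < p).astype(float)   # 1 = defaulted

def fit_logistic(X, y, lr=0.5, steps=500):
    """Logistic regression fit by batch gradient descent on log-loss."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        pred = 1 / (1 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (pred - y)) / len(y)
        b -= lr * (pred - y).mean()
    return w, b

w, b = fit_logistic(X, y)

def default_prob(features, w, b):
    """Estimated probability that an applicant defaults."""
    return 1 / (1 + np.exp(-(features @ w + b)))
```

A riskier profile (high debt-to-income, many late payments) should score a higher default probability than a conservative one, which is exactly the ranking a lending decision needs.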

Fraud detection

ML can also be greatly leveraged for fraud detection. Fraud is a pain point for many financial institutions, and one that could potentially put a bank out of business.

With ML, anomalies in customers' behavior can be detected quickly. By flagging and blocking suspicious transactions, banks can catch fraud in real time, protecting both their customers and themselves.

ML is undoubtedly one of the greatest technological feats of the 21st century. With its precision in predicting behaviors and anticipating risks, we can be sure that the role of ML will only become more prominent in the future of banking.

Regardless of size, financial institutions and businesses that rely on financial services must be aware of how ML is used in banking. If they wish to stay relevant, they must start exploring the technology now.

Original post:

Here's why machine learning is critical to success for banks of the future - Tech Wire Asia

Verification In The Era Of Autonomous Driving, Artificial Intelligence And Machine Learning – SemiEngineering

The importance of data is changing traditional value creation in electronics and forcing recalculations of return on investment.

The last couple of weeks have been busy with me participating on three panels that dealt with AI and machine learning in the contexts of automotive and aero/defense, in San Jose, Berlin and Detroit. The common theme? Data is indeed the new oil, and it messes with traditional value creation in electronics. Also, requirements for system design and verification are changing and there are completely new, blank-sheet opportunities that can help with verification and confirmation of what AI and ML actually do.

In the context of AI/ML, I have often used the movie quote "I think I am 90% excited and 10% scared. Oh wait, perhaps I am 10% excited and 90% scared." The recent panel discussions that I was part of did not help much.

First, I was part of a panel called "Collaborating to Realize an Autonomous Future" at Arm TechCon in San Jose. The panel was organized by Arm's Andrew Moore, and my co-panelists were Robert Day (Arm), Phil Magney (VSI Labs) and Hao Liu (AutoX Inc.). Given the technical audience, questions centered on how to break down hardware development for autonomous systems, how the autonomous software stack could be divided between developers, whether compute will end up centralized or decentralized, and what the security and safety implications of large-scale collaboration would be, basically boiling things down to a changing industry structure with new dynamics of control in the design chain.

For the second panel I was in Berlin, which was gearing up to celebrate the 30-year anniversary of the fall of the Berlin Wall. The panel title could roughly be translated as "If training data is the oil for digitalization enabled by artificial intelligence, how can the available oil best be used?" The panel was organized by Wolfgang Ecker (Infineon) and my co-panelists were Erich Biermann (Bosch), Raik Brinkmann (OneSpin), Matthias Kästner (Microchip), Stefan Mengel (BMBF) and Herbert Taucher (Siemens). Discussion points here centered on ownership of data, whether users would be willing to share data with tool vendors, and whether this data could be trusted to be complete enough in the first place.

The third panel took place in Detroit, from which I just returned, at the Association of the United States Army (AUSA) Autonomy and AI Symposium. Moderated by Major Amber Walker, my co-panelists were Margaret Amori (NVIDIA), BG Ross Coffman (United States Army Futures Command) and Ryan Close (United States Army Futures Command, C5ISR Center). Questions here centered on lessons learned from civilian autonomous vehicles and on how civilian and Army customization needs differ. We discussed advances in hardware and how ready developers are for new sensors and compute; resilience, trust, and the new vulnerabilities to cyber-attack that AI would introduce; and design for customization, including how the best of both worlds, custom and adaptable, can be achieved.

Discussions and opinions were diverse, to say the least. Two big take-aways stick with me.

First, data really is the new oil! It needs protection: security and resilience are crucial in an Army context, in which data falling into the enemy's hands could have catastrophic consequences, and privacy is crucial in civilian applications as well. Data also changes the value chain in electronics. As I have written before in the context of IoT, the value really has to come from the overall system perspective and cannot be assigned to individual components alone. In a system value chain of sensors, network, storage and data, one may decide to give away the tracker if the data it creates allows value creation through advertisement. The calculation of return on investment is becoming much more complicated.

Second, verification of what these neural networks actually do (and do not do) is becoming critical. I had mused in the past about a potential "Revenge of the Digital Twins," but these recent panel discussions emphasized to me that the confirmability of what a CNN/DNN in an AI system actually does is seen as critical by many. In both the automotive and Army contexts, the safety of the car and the human lives involved is at risk if we cannot really confirm that the AI cannot be tricked. Examples that demonstrate how easy it is to trick self-driving cars by defacing street signs make me worry quite a bit here.
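The kind of trickery mentioned above can be illustrated on a deliberately tiny linear classifier. The weights and input below are made up for the sketch and have nothing to do with a real perception stack; the point is only that a bounded, gradient-guided perturbation can flip a decision:

```python
import numpy as np

# Toy linear classifier: score > 0 means class "stop sign", score < 0 means not.
# Weights and input are illustrative, not taken from any real model.
w = np.array([1.0, -2.0, 0.5, 3.0])
x = np.array([0.2, -0.1, 0.4, 0.3])   # correctly classified: score = 1.5 > 0

def score(features):
    return float(w @ features)

# FGSM-style attack: nudge each feature a small step eps against the
# gradient of the score (for a linear model, that gradient w.r.t. x is just w).
eps = 0.5
x_adv = x - eps * np.sign(w)

# Each feature moved by at most eps, yet the classifier's decision flips.
```

Verifying a real CNN/DNN against such attacks is far harder than this four-weight example, which is exactly why the author sees it as a market opportunity.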

That said, though, from challenges come opportunities. Verification for CNNs/DNNs and their associated data sets will likely be an interesting new market in itself; I am definitely watching this space.

Read more:

Verification In The Era Of Autonomous Driving, Artificial Intelligence And Machine Learning - SemiEngineering