When Are We Going to Start Designing AI With Purpose? Machine Learning Times – The Predictive Analytics Times

Originally published in UX Collective, Jan 19, 2021.

For an industry that prides itself on moving fast, the tech community has been remarkably slow to adapt to the differences of designing with AI. Machine learning is an intrinsically fuzzy science, yet when it inevitably returns unpredictable results, we tend to react like it's a puzzle to be solved, believing that with enough algorithmic brilliance we can eventually fit all the pieces into place and render something approaching objective truth. But objectivity and truth are often far afield from the true promise of AI, as we'll soon discuss.

I think a lot of the confusion stems from language; in particular, the way we talk about machine-like efficiency. Machines are expected to make precise measurements about whatever they're pointed at; to produce data.

But machine learning doesn't produce data. Machine learning produces predictions about how observations in the present overlap with patterns from the past. In this way, it's literally an inversion of the classic if-this-then-that logic that's driven conventional software development for so long. My colleague Rick Barraza has a great way of describing the distinction.
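That inversion can be sketched in a few lines of code. Everything below is hypothetical and illustrative, not from the article: the hand-written rule encodes a definite answer up front, while the "learned" version can only echo the most similar past observation (a one-nearest-neighbour lookup standing in for a real model).

```python
# Illustrative sketch: classic if-this-then-that logic vs. a prediction
# derived from past observations. All names and data are invented.

def rule_based(temperature_c):
    # Conventional software: a hand-written rule returns a definite answer.
    return "alert" if temperature_c > 30 else "ok"

def learned_prediction(temperature_c, history):
    # Machine learning inverts this: given past (reading, outcome) pairs,
    # return the outcome of the most similar past case.
    nearest = min(history, key=lambda pair: abs(pair[0] - temperature_c))
    return nearest[1]

history = [(18.0, "ok"), (25.0, "ok"), (33.0, "alert"), (38.0, "alert")]
print(rule_based(31.0))                   # the rule gives certainty
print(learned_prediction(31.0, history))  # the model gives a best guess from the past
```

The second function never "knows" the answer; it only generalizes from what it has seen, which is exactly why its output is a prediction rather than data.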



Machine Learning To Bring A Transformation In Software Testing – CIO Applications

The test automation effort will continue to accelerate. Surprisingly, a lot of businesses still have manual checks in their delivery pipeline, but you can't deliver quickly if you have humans on the critical path, slowing things down.

FREMONT, CA: Over the last decade, there has been an unwavering drive to deliver applications faster. Automated testing has emerged as one of the most relevant technologies for scaling DevOps; businesses are spending a lot of time and effort to develop end-to-end software delivery pipelines, and containers and their ecosystem are delivering on their early promise.

Testing is one of the top DevOps controls that companies can use to ensure that their consumers have a delightful brand experience. Others include access management, logging, traceability and disaster recovery.

Quality and access control are preventive controls, while the others are reactive. In the future, there will be a growing emphasis on consistency because it prevents consumers from having a bad experience. So delivering value quickly, or better still, delivering the right value quickly at the right quality level, is the main theme that everyone will see this year and beyond.

Here are the five key trends in 2021:

Test automation


Automation of manual tests is a long process that takes dedicated engineering time. While many companies have at least some kind of test automation, much needs to be done. That's why automated testing will remain one of the top trends in the future.

DevOps-driven data

Over the past six to eight years, the industry has concentrated on linking various resources through the development of robust distribution pipelines. Each of these tools produces a significant amount of data, but the data is used minimally, if at all.

The next stage is to add the smarts to the tooling. Expect to see an increased focus on data-driven decision-making by practitioners.


NTUC LearningHub Survey Reveals Accelerated Business Needs In Cloud Computing And Machine Learning Outpacing Singapore Talent Supply; Skills Gap A…

SINGAPORE - Media OutReach - 5 February 2021 - Despite the majority of Singapore employers (89%) reporting that the COVID-19 pandemic has accelerated the adoption of cloud computing and Machine Learning (ML) in their companies, obstacles abound. Singapore business leaders say that the largest hindrance to adopting cloud computing and ML technologies is the shortage of relevant in-house IT support (64%), amongst other reasons such as 'employees do not have the relevant skill sets' (58%) and 'the lack of financial resources' (46%).


These are some of the key findings from the recently launched NTUC LearningHub (NTUC LHUB) Industry Insights report on cloud computing and ML in Singapore. The report is based on in-depth interviews with industry experts, such as Amazon Web Services (AWS) and NTUC LHUB, and a survey with 300 hiring managers across industries in Singapore.

While organisations are keen to adopt cloud computing and ML to improve the company's business performance (64%), obtain business insights from Big Data (59%) and perform mundane or tedious tasks (53%), a third of Singapore employers (32%) say their companies have insufficient talent to implement cloud computing and ML technologies.

To overcome this shortage, companies say they have been upskilling employees that have relevant skill sets/roles (55%), and reskilling employees that have completely different skill sets/roles (44%). In a further show of how organisations are willing to take steps to overcome this skills gap, three in five (61%) strongly agree or agree that they will be open to hiring individuals with relevant micro-credentials, even if these candidates have no relevant experience or education degrees.

Looking to the future, four in five employers (81%) agree or strongly agree that ML will be the most in-demand Artificial Intelligence (AI) skill in 2021. Meanwhile, seven out of 10 surveyed (70%) indicated they will be willing to offer a premium for talent with AI and ML skills.

"The report reinforces the growing demand for a cloud-skilled workforce in Singapore, and the critical need to upskill and reskill local talent," said Tan Lee Chew, Managing Director, ASEAN, Worldwide Public Sector, AWS. "The collaboration across government, businesses, education and training institutions will be instrumental in helping Singapore employers address these skills gaps. AWS will continue to collaborate with training providers like NTUC LearningHub to make skills training accessible to help Singaporeans, from students to adult learners, to remain relevant today and prepare for the future."

NTUC LHUB's Head of ICT, Isa Nasser, also adds, "While much of the talent demand encompasses technical positions such as data scientists and data engineers, businesses are also looking for staff to pick up practical ML and data science skill sets that can be applied to their existing work. That is why in today's digital age, most professionals would benefit greatly from picking up some data science skills to enable them to deploy ML applications and use cases in their organization. We highly urge workers to get started on equipping themselves with ML skills, including understanding the core concepts of data science, as well as familiarising themselves with the use of cloud or ML platforms such as Amazon SageMaker."

To download the Industry Insights: Cloud Computing and ML report, visit

https://www.ntuclearninghub.com/machine-learning-cloud.

NTUC LearningHub is the leading Continuing Education and Training provider in Singapore, which aims to transform the lifelong employability of working people. Since our corporatisation in 2004, we have been working with employers and individual learners to provide learning solutions in areas such as Cloud, Infocomm Technology, Healthcare, Employability & Literacy, Business Excellence, Workplace Safety & Health, Security, Human Resources and Foreign Worker Training.

To date, NTUC LearningHub has helped over 25,000 organisations and achieved over 2.5 million training places across more than 500 courses with a pool of over 460 certified trainers. As a Total Learning Solutions provider to organisations, we also forge partnerships and offer a wide range of relevant end-to-end training solutions and work constantly to improve our training quality and delivery. In 2020, we accelerated our foray into online learning with our Virtual Live Classes and, through working with best-in-class partners such as IBM, DuPont Sustainable Solutions and GO1, asynchronous online courses.

For more information, visit www.ntuclearninghub.com.


Machine Learning and Artificial Intelligence in Healthcare Market 2021 inclining trends with NVIDIA Corporation, Intel Corporation, GENERAL ELECTRIC…

Travel Guard has specific cruise insurance policies, which makes it simpler than trying to find an add-on. If you're getting a quote online, they'll ask you to specify if you're taking a plane, a cruise, or both. They cover any emergency travel assistance, trip interruption, delay, or cancellation.

Cruise travel insurance secures non-refundable investments related to your trip. It reimburses you if you have to cancel your international cruise unexpectedly prior to your departure, and it provides medical coverage for unexpected injuries and illnesses while you are on holiday. A late cancellation can mean a huge financial loss, but a cruise travel insurance policyholder is covered for cancellation or postponement of trips.

The aim of the report is to equip relevant players with essential cues about real-time market developments, drawing significant references from historical data, to present an effective market forecast and prediction that favors a sustainable stance and steady revenue flow despite challenges such as the sudden pandemic, interrupted production and disrupted sales channels in the Cruise Travel Insurance market.

Request a sample copy of report @ https://www.reportconsultant.com/request_sample.php?id=77601

Key players profiled in the report include:

Allianz, AIG, Munich RE, Generali, Tokio Marine, Sompo Japan, CSA Travel Protection, AXA, Pingan Baoxian, Mapfre Asistencia, USI Affinity, Seven Corners, Hanse Merkur, MH Ross, STARR

Market Segmentation by type:

Market Segmentation by application:

This report is well documented to present crucial analytical review affecting the Cruise Travel Insurance market amidst COVID-19 outrage. The report is so designed to lend versatile understanding about various market influencers encompassing a thorough barrier analysis as well as an opportunity mapping that together decide the upcoming growth trajectory of the market. In the light of the lingering COVID-19 pandemic, this mindfully drafted research offering is in complete sync with the current ongoing market developments as well as challenges that together render tangible influence upon the holistic growth trajectory of the Cruise Travel Insurance market.

Besides presenting a discerning overview of the historical and current market specific developments, inclined to aid a future-ready business decision, this well-compiled research report on the Cruise Travel Insurance market also presents vital details on various industry best practices comprising SWOT and PESTEL analysis to adequately locate and maneuver profit scope. Therefore, to enable and influence a flawless market-specific business decision, aligning with the best industry practices, this specific research report on the market also lends a systematic rundown on vital growth triggering elements comprising market opportunities, persistent market obstacles and challenges, also featuring a comprehensive outlook of various drivers and threats that eventually influence the growth trajectory in the Cruise Travel Insurance market.

Get reports for up to 40% discount @ https://www.reportconsultant.com/ask_for_discount.php?id=77601

Global Cruise Travel Insurance Geographical Segmentation Includes:

North America (U.S., Canada, Mexico)

Europe (U.K., France, Germany, Spain, Italy, Central & Eastern Europe, CIS)

Asia Pacific (China, Japan, South Korea, ASEAN, India, Rest of Asia Pacific)

Latin America (Brazil, Rest of L.A.)

Middle East and Africa (Turkey, GCC, Rest of Middle East)

Some Major TOC Points:

Chapter 1. Report Overview

Chapter 2. Global Growth Trends

Chapter 3. Market Share by Key Players

Chapter 4. Breakdown Data by Type and Application

Chapter 5. Market by End Users/Application

Chapter 6. COVID-19 Outbreak: Cruise Travel Insurance Industry Impact

Chapter 7. Opportunity Analysis in Covid-19 Crisis

Chapter 9. Market Driving Force

And More

In this latest research publication a thorough overview of the current market scenario has been portrayed, in a bid to aid market participants, stakeholders, research analysts, industry veterans and the like to borrow insightful cues from this ready-to-use market research report, thus influencing a definitive business discretion. The report in its subsequent sections also portrays a detailed overview of competition spectrum, profiling leading players and their mindful business decisions, influencing growth in the Cruise Travel Insurance market.

About Us:

Report Consultant is a worldwide pacesetter in analytics, research and advisory that can assist you to renovate your business and modify your approach. With us, you will learn to take decisions intrepidly by taking calculated risks leading to lucrative business in the ever-changing market. We make sense of drawbacks, opportunities, circumstances, estimations and information using our experienced skills and verified methodologies.

Our research reports will give you the most realistic and incomparable experience of revolutionary market solutions. We have effectively steered business all over the world through our market research reports with our predictive nature and are exceptionally positioned to lead digital transformations. Thus, we craft greater value for clients by presenting progressive opportunities in the global futuristic market.

Contact us:

Rebecca Parker

(Report Consultant)

sales@reportconsultant.com

http://www.reportconsultant.com


The POWER Interview: The Importance of AI and Machine Learning – POWER magazine

Artificial intelligence (AI) and machine learning (ML) are becoming synonymous with the operation of power generation facilities. The increased digitization of power plants, from equipment to software, involves both thermal generation and renewable energy installations.

Both AI and ML will be key elements for the design of future energy systems, supporting the growth of smart grids and improving the efficiency of power generation, along with the interaction among electricity customers and utilities.

The technology group Wärtsilä is a global leader in using data to improve operations in the power generation sector. The company helps generators make better asset management decisions, which supports predictive maintenance. The company uses AI, along with advanced diagnostics and its deep equipment expertise, to greatly enhance the safety, reliability, and efficiency of power equipment and systems.

Luke Witmer, general manager, Data Science, Energy Storage & Optimization at Wärtsilä, talked with POWER about the importance of AI and ML to the future of power generation and electricity markets.

POWER: How can artificial intelligence (AI) be used in power trading, and with regard to forecasts and other issues?

Witmer: Artificial intelligence is a very wide field. Even a simple if/else statement is technically AI (a computer making a decision). Forecasts for price and power are generated by AI (some algorithm with some historic data set), and represent the expected trajectory or probability distribution of that value.

Power trading is also a wide field. There are many different markets that span different time periods and different electricity (power) services that power plants provide. It's more than just buying low and selling high, though that is a large piece of it. Forecasts are generally not very good at predicting exactly when electricity price spikes will happen. There is always a tradeoff between saving some power capacity for the biggest price spikes versus allocating more of your power for marginal prices. In the end, as a power trader, it is important to remember that the historical data is not a picture of the future, but rather a statistical distribution that can be leveraged to inform the most probable outcome of the unknown future. AI is more capable at leveraging statistics than people will ever be.
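Witmer's point about treating history as a statistical distribution rather than a crystal ball can be sketched as follows. The prices, threshold, and reserve rule are all invented for illustration; this is not Wärtsilä's trading logic.

```python
# Hypothetical sketch: estimate the probability of a price spike from
# historical prices, instead of predicting exactly when one will occur.
past_prices = [32, 35, 31, 40, 38, 36, 120, 33, 37, 250, 34, 39]  # $/MWh, made up

spike_threshold = 100.0  # an arbitrary definition of a "spike"
spike_prob = sum(p > spike_threshold for p in past_prices) / len(past_prices)
print(f"Empirical spike probability per interval: {spike_prob:.3f}")

# A trader might then reserve capacity in proportion to that probability,
# rather than betting everything on catching any single spike.
reserved_fraction = min(0.5, spike_prob * 2)
print(f"Capacity reserved for spikes: {reserved_fraction:.2f}")
```

The tradeoff Witmer describes lives in that last line: reserving more capacity for rare spikes means allocating less to steady marginal prices, and the historical distribution only informs, never guarantees, which choice pays off.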

POWER: Machine learning and AI in power generation rely on digitalization. As the use of data becomes more important, what steps need to be taken to support AI and machine learning while still accounting for cybersecurity?

Witmer: A lot of steps. Sorry for the lame duck answer here. Regular whitehat penetration testing by ethical hackers is probably the best first step. The second step should be to diligently and quickly address each critical issue that is discovered through that process. This can be done by partnering with technology providers who have the right solution (cyber security practices, certifications, and technology) to enable the data flow that is required.

POWER: How can the power generation industry benefit from machine learning?

Witmer: The benefit is higher utilization of the existing infrastructure. There is a lot of under-utilized infrastructure in the power generation industry. This can be accomplished with greater intelligence on the edges of the network (out at each substation and at each independent generation facility) coupled with greater intelligence at the points of central dispatch.

POWER: Can machines used in power generation learn from their experiences; would an example be that a machine could perform more efficiently over time based on past experience?

Witmer: Yes and no. It depends what you mean by machines. A machine itself is simply pieces of metal. An analogy would be that your air conditioner at home can't learn anything, but your smart thermostat can. Your air conditioner just needs to operate as efficiently as possible when it's told to operate, constrained by physics. Power generation equipment is the same. The controls, however, whether at some point of aggregation, or transmission intersection, or at a central dispatch center, can certainly apply machine learning to operate differently as time goes on, adapting in real time to changing trends and conditions in the electricity grids and markets of the world.
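The thermostat analogy can be made concrete with a minimal sketch: the "equipment" is fixed, but a controller can keep revising its own estimate of conditions. The update rule here is a plain exponentially weighted average with invented numbers, standing in for whatever adaptive logic a real control system would use.

```python
# Minimal sketch of an adaptive controller: an exponentially weighted
# average tracks a drifting demand signal. Numbers are illustrative.
def update_estimate(estimate, observation, alpha=0.3):
    # Blend the new observation into the running estimate; larger alpha
    # means the controller adapts faster to recent conditions.
    return (1 - alpha) * estimate + alpha * observation

estimate = 50.0
for observed_load in [50, 52, 55, 60, 64, 70]:  # demand trending upward
    estimate = update_estimate(estimate, observed_load)
print(round(estimate, 2))  # the controller's view has shifted toward recent demand
```

The hardware never changes, but the control layer's behavior does, which is the distinction Witmer draws between the metal and the intelligence wrapped around it.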

POWER: What are some of the uses of artificial intelligence in the power industry?

Witmer: As mentioned in the response to question 1, I think it appropriate to point you at some definitions and descriptions of AI. I find Wikipedia to be the best organized and moderated by experts.

In the end, it's a question of intelligent control. There are many uses of AI in the power industry. To start listing some of them is insufficient, but, to give some idea, I would say that we use AI in the form of rules that automatically ramp power plants up/down by speeding up or slowing down their speed governors, in the form of neural networks that perform load forecasting based on historic data and present state data (time of day, metering values, etc.), in the form of economic dispatch systems that leverage these forecasts, and in the form of reinforcement learning for statistically based automated bid generation in open markets. Our electricity grids combined with their associated controls and markets are arguably the most complex machines that humans have built.

POWER: How can AI benefit centralized generation, and can it provide cost savings for power customers?

Witmer: Centralized power systems continue to thrive from significant economies of scale. Centralized power systems enable equal access to clean power at the lowest cost, reducing economic inequality. I view large renewable power plants that are owned by independent power producers as centralized power generation, dispatched by centralized grid operators. Regardless of whether the path forward is more or less centralized, AI brings value to all parties. Not only does it maximize revenue for any specific asset (thus the asset owner), it also reduces overall electricity prices for all consumers.

POWER: How important is AI to smart grids? How important is AI to the integration of e-mobility (electric vehicles, etc.) to the grid?

Witmer: AI is very important to smart grids. AI is extremely important to the integration of smart charging of electric vehicles, and leveraging of those mobile batteries for grid services when they are plugged into the grid (vehicles to grid, or V2G). However, the more important piece is for the right market forces to be created (economics), so that people can realize the value (actually get paid) for allowing their vehicles to participate in these kinds of services.

The mobile batteries of EVs will be under-utilized if we do not integrate the controls for charging and discharging this equipment in a way that gives consumers the ability to opt in or out of any service while also letting centralized dispatch leverage this equipment. It's less a question of AI, and more a question of economics and human behavioral science. Once the economics are leveraged and the right tools are in place, then AI will be able to forecast the availability and subsequent utility that the grid will be able to extract from the variable infrastructure of plugged-in EVs.

POWER: How important is AI to the design and construction of virtual power plants?

Witmer: Interesting question. On one level, this is a question that raises an existential threat to aspects of my own job (but that's a good thing, because if a computer can do it, I don't want to do it!). It's a bit of a chicken-and-egg scenario. Today, any power plant (virtual or actual) is designed through a process that involves a lot of modeling, or simulations of what-if scenarios. That model must be as accurate as possible, including the controls behavior of not only the new plant in question, but also the rest of the grid and/or markets nearby.

As more AI is used in the actual context of this new potential power plant, the model must also contain a reflection of that same AI. No model is perfect, but as more AI gets used in the actual dispatch of power plants, more AI will be needed in the design and creation process for new power plants or aggregations of power generation equipment.

POWER: What do you see as the future of AI and machine learning for power generation / utilities?

Witmer: The short-term future is simply an extension of what we see today. As more renewables come onto the grids, we will see more negative price events and more price volatility. AI will be able to thrive in that environment. I suspect that as time goes on, the existing market structures will cease to be the most efficient for society. In fact, AI is likely going to be able to take advantage of some of those legacy features (think Enron).

Hopefully the independent system operators of the world can adapt quickly enough to the changing conditions, but I remain skeptical of that in all scenarios. With growing renewables that have free fuel, the model of vertically integrated utilities with an integrated resource planning (IRP) process will likely yield the most economically efficient structure. I think that we will see growing inefficiencies in regions that have too many manufactured rules and structure imposed by legacy markets, designed around marginal costs of operating fossil fuel-burning plants.

Darrell Proctor is associate editor for POWER (@POWERmagazine).


Mission Healthcare of San Diego Adopts Muse Healthcare’s Machine Learning Tool – Southernminn.com

ST. PAUL, Minn., Jan. 19, 2021 /PRNewswire/ -- San Diego-based Mission Healthcare, one of the largest home health, hospice, and palliative care providers in California, will adopt Muse Healthcare's machine learning and predictive modeling tool to help deliver a more personalized level of care to their patients.

The Muse technology evaluates and models every clinical assessment, medication, vital sign, and other relevant data to perform a risk stratification of these patients. The tool then highlights the patients with the most critical needs and visually alerts the agency to perform additional care. Muse Healthcare identifies patients as "Critical," which means they have a greater than 90% likelihood of passing in the next 7-10 days. Users are also able to make accurate changes to care plans based on the condition and location of the patient. When agencies use Muse's powerful machine learning tool, they have an advantage and data-proven outcomes to demonstrate they are providing more care and better care to patients in transition.
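As described, the flagging step reduces to thresholding a model's risk score. A hedged sketch of that idea, with hypothetical patient IDs and scores rather than Muse Healthcare's actual model or data:

```python
# Illustrative risk stratification: flag patients whose predicted risk
# exceeds a threshold. Scores here are invented model outputs.
patients = [
    {"id": "A", "transition_risk": 0.95},
    {"id": "B", "transition_risk": 0.40},
    {"id": "C", "transition_risk": 0.92},
]

CRITICAL_THRESHOLD = 0.90  # "greater than 90% likelihood" per the article

critical = [p["id"] for p in patients if p["transition_risk"] > CRITICAL_THRESHOLD]
print(critical)  # patients surfaced for additional care
```

The hard part, of course, is the model that produces the scores from clinical assessments, medications, and vital signs; the stratification itself is deliberately simple so clinicians can act on it at a glance.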

According to Mission Healthcare's Vice President of Clinical and Quality, Gerry Smith, RN, MSN, Muse will serve as an invaluable tool that will assist their clinicians to enhance care for their patients. "Mission Hospice strives to ensure every patient receives the care and comfort they need while on service, and especially in their final days. We are so excited that the Muse technology will provide our clinical team with additional insights to positively optimize care for patients at the end of life. This predictive modeling technology will enable us to intervene earlier; make better decisions for more personalized care; empower staff; and ultimately improve patient outcomes."

Mission Healthcare's CEO, Paul VerHoeve, also believes that the Muse technology will empower their staff to provide better care for patients. "Predictive analytics are a new wave in hospice innovation and Muse's technology will be a valuable asset to augment our clinical efforts at Mission Healthcare. By implementing a revolutionary machine learning tool like Muse, we can ensure our patients are receiving enhanced hands-on care in those critical last 7-10 days of life. Our mission is to take care of people; with Muse we will continue to improve the patient experience and provide better care in the final days and hours of a patient's life."

As the only machine learning tool in the hospice industry, the Muse transitions tool takes advantage of the implemented documentation within the EMR. This allows the agency to quickly implement the tool without disruption. "With guidance from our customers in the hundreds of locations that are now using the tool, we have focused on deploying time saving enhancements to simplify a clinician's role within hospice agencies. These tools allow the user to view a clinical snapshot, complete review of the scheduled frequency, and quickly identify the patients that need immediate attention. Without Muse HC, a full medical review must be conducted to identify these patients," said Tom Maxwell, co-Founder of Muse Healthcare. "We are saving clinicians time in their day, simplifying the identification challenges of hospice, and making it easier to provide better care to our patients. Hospice agencies only get one chance to get this right," said Maxwell.

CEO of Muse Healthcare, Bryan Mosher, is also excited about Mission's adoption of the Muse tool. "We welcome the Mission Healthcare team to the Muse Healthcare family of customers, and are happy to have them adopt our product so quickly. We are sure with the use of our tools,clinicians at Mission Healthcare will provide better care for their hospice patients," said Mosher.

About Mission Healthcare

As one of the largest regional home health, hospice, and palliative care providers in California, San Diego-based Mission Healthcare was founded in 2009 with the creation of its first service line, Mission Home Health. In 2011, Mission added its hospice service line. Today, Mission employs over 600 people and serves both home health and hospice patients through Southern California. In 2018, Mission was selected as a Top Workplace by the San Diego Union-Tribune. For more information visit https://homewithmission.com/.

About Muse Healthcare

Muse Healthcare was founded in 2019 by three leading hospice industry professionals -- Jennifer Maxwell, Tom Maxwell, and Bryan Mosher. Their mission is to equip clinicians with world-class analytics to ensure every hospice patient transitions with unparalleled quality and dignity. Muse's predictive model considers hundreds of thousands of data points from numerous visits to identify which hospice patients are most likely to transition within 7-12 days. The science that powers Muse is considered a true deep learning neural network, the only one of its kind in the hospice space. When hospice care providers can more accurately predict when their patients will transition, they can ensure their patients and the patients' families receive the care that matters most in the final days and hours of a patient's life. For more information visit http://www.musehc.com.


Deep Learning Outperforms Standard Machine Learning in Biomedical Research Applications, Research Shows – Georgia State University News

ATLANTA - Compared to standard machine learning models, deep learning models are largely superior at discerning patterns and discriminative features in brain imaging, despite being more complex in their architecture, according to a new study in Nature Communications led by Georgia State University.

Advanced biomedical technologies such as structural and functional magnetic resonance imaging (MRI and fMRI) or genomic sequencing have produced an enormous volume of data about the human body. By extracting patterns from this information, scientists can glean new insights into health and disease. This is a challenging task, however, given the complexity of the data and the fact that the relationships among types of data are poorly understood.

Deep learning, built on advanced neural networks, can characterize these relationships by combining and analyzing data from many sources. At the Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State researchers are using deep learning to learn more about how mental illness and other disorders affect the brain.

Although deep learning models have been used to solve problems and answer questions in a number of different fields, some experts remain skeptical. Recent critical commentaries have unfavorably compared deep learning with standard machine learning approaches for analyzing brain imaging data.

However, as demonstrated in the study, these conclusions are often based on pre-processed input that deprives deep learning of its main advantage: the ability to learn from the data with little to no preprocessing. Anees Abrol, research scientist at TReNDS and the lead author on the paper, compared representative models from classical machine learning and deep learning, and found that if trained properly, the deep-learning methods have the potential to offer substantially better results, generating superior representations for characterizing the human brain.

"We compared these models side by side, observing statistical protocols so everything is apples to apples. And we show that deep learning models perform better, as expected," said co-author Sergey Plis, director of machine learning at TReNDS and associate professor of computer science.

Plis said there are some cases where standard machine learning can outperform deep learning. For example, diagnostic algorithms that plug in single-number measurements such as a patient's body temperature or whether the patient smokes cigarettes would work better using classical machine learning approaches.

"If your application involves analyzing images or if it involves a large array of data that can't really be distilled into a simple measurement without losing information, deep learning can help," Plis said. "These models are made for really complex problems that require bringing in a lot of experience and intuition."
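Plis's distinction between the two regimes can be made concrete with a toy sketch. All feature names, weights, and thresholds below are invented for illustration and are not taken from the study:

```python
# Toy contrast between the two regimes described above. All feature names,
# weights, and thresholds here are invented for illustration.

def classical_tabular_model(temperature_c: float, smokes: bool) -> float:
    """A hand-weighted linear score over single-number measurements.
    Classical models (logistic regression, trees) work well in this
    regime because each input is already a meaningful feature."""
    score = 0.0
    score += 0.8 * max(0.0, temperature_c - 37.0)  # fever adds risk
    score += 0.5 if smokes else 0.0                # smoking adds risk
    return score

# An MRI volume, by contrast, is a huge array whose individual voxels are
# not meaningful features on their own; the useful representation has to
# be learned from the raw data, which is where deep learning earns its keep.
mri_volume = [[[0.0] * 64 for _ in range(64)] for _ in range(64)]
print(classical_tabular_model(38.5, smokes=True))  # feverish smoker scores higher
print(64 ** 3)                                     # 262144 voxels per volume
```

The point is not the particular weights but the shape of the input: two interpretable numbers versus hundreds of thousands of raw voxels.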

The downside of deep learning models is they are data hungry at the outset and must be trained on lots of information. But once these models are trained, said co-author Vince Calhoun, director of TReNDS and Distinguished University Professor of Psychology, they are just as effective at analyzing reams of complex data as they are at answering simple questions.

"Interestingly, in our study we looked at sample sizes from 100 to 10,000 and in all cases the deep learning approaches were doing better," he said.

Another advantage is that scientists can reverse analyze deep-learning models to understand how they are reaching conclusions about the data. As the published study shows, the trained deep learning models learn to identify meaningful brain biomarkers.

"These models are learning on their own, so we can uncover the defining characteristics that they're looking into that allow them to be accurate," Abrol said. "We can check the data points a model is analyzing and then compare it to the literature to see what the model has found outside of where we told it to look."

The researchers envision that deep learning models are capable of extracting explanations and representations not already known to the field and act as an aid in growing our knowledge of how the human brain functions. They conclude that although more research is needed to find and address weaknesses of deep-learning models, from a mathematical point of view, it's clear these models outperform standard machine learning models in many settings.

"Deep learning's promise perhaps still outweighs its current usefulness to neuroimaging, but we are seeing a lot of real potential for these techniques," Plis said.

See the original post here:
Deep Learning Outperforms Standard Machine Learning in Biomedical Research Applications, Research Shows - Georgia State University News

Project MEDAL to apply machine learning to aero innovation – The Engineer

Metallic alloys for aerospace components are expected to be made faster and more cheaply with the application of machine learning in Project MEDAL.

This is the aim of Project MEDAL: Machine Learning for Additive Manufacturing Experimental Design, which is being led by Intellegens, a Cambridge University spin-out specialising in artificial intelligence, the Sheffield University AMRC North West, and Boeing. It aims to accelerate the product development lifecycle of aerospace components by using a machine learning model to optimise additive manufacturing (AM) for new metal alloys.


Project MEDAL's research will concentrate on metal laser powder bed fusion and will focus on so-called parameter variables required to manufacture high density, high strength parts.

The project is part of the National Aerospace Technology Exploitation Programme (NATEP), a £10m initiative for UK SMEs to develop innovative aerospace technologies funded by the Department for Business, Energy and Industrial Strategy and delivered in partnership with the Aerospace Technology Institute (ATI) and Innovate UK.

In a statement, Ben Pellegrini, CEO of Intellegens, said: "The intersection of machine learning, design of experiments and additive manufacturing holds enormous potential to rapidly develop and deploy custom parts not only in aerospace, as proven by the involvement of Boeing, but in medical, transport and consumer product applications."

"There are many barriers to the adoption of metallic AM but by providing users, and maybe more importantly new users, with the tools they need to process a required material should not be one of them," added James Hughes, research director for Sheffield University AMRC North West. "With the AMRC's knowledge in AM, and Intellegens' AI tools, all the required experience and expertise is in place in order to deliver a rapid, data-driven software toolset for developing parameters for metallic AM processes to make them cheaper and faster."

Aerospace components must withstand certain loads and temperature resistances, and some materials are limited in what they can offer. There is also a simultaneous push for lower weight and higher temperature resistance for better fuel efficiency, bringing new or previously impractical-to-machine metals into the aerospace sector.

One of the main drawbacks of AM is the limited material selection currently available; the design of new materials, particularly in the aerospace industry, requires expensive and extensive testing and certification cycles, which can take longer than a year to complete and cost as much as £1m. Project MEDAL aims to accelerate this process.

"The machine learning solution in this project can significantly reduce the need for many experimental cycles by around 80 per cent," Pellegrini said. "The software platform will be able to suggest the most important experiments needed to optimise AM processing parameters, in order to manufacture parts that meet specific target properties. The platform will make the development process for AM metal alloys more time- and cost-efficient. This will in turn accelerate the production of more lightweight and integrated aerospace components, leading to more efficient aircraft and improved environmental impact."
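The Intellegens platform itself is proprietary, but the general idea of ML-guided design of experiments — running a few builds and letting a model pick the most informative next setting instead of sweeping a full grid — can be sketched with a toy sequential loop. The parameter name, the stand-in "true" process, and the space-filling selection rule below are all invented for illustration:

```python
# Toy sequential design-of-experiments loop: instead of running a full grid
# of additive-manufacturing builds, run a few and let a simple rule pick the
# most informative next setting. The parameter, the stand-in "true" process,
# and the space-filling selection rule are all invented for illustration.

def true_density(laser_power: float) -> float:
    """Stand-in for an expensive AM build plus density measurement."""
    return 1.0 - (laser_power - 0.6) ** 2  # density peaks near power = 0.6

candidates = [i / 20 for i in range(21)]  # 21 possible laser-power settings
observed = {}                             # experiments actually run

for p in (0.0, 1.0):                      # seed with the two extremes
    observed[p] = true_density(p)

for _ in range(4):                        # only 4 more builds, not all 21
    # Pick the candidate farthest from every setting tried so far
    # (a crude stand-in for "highest model uncertainty").
    next_p = max(candidates, key=lambda c: min(abs(c - q) for q in observed))
    observed[next_p] = true_density(next_p)

best = max(observed, key=observed.get)
print(f"best of {len(observed)} builds (out of 21 possible): power={best}")
```

Real systems replace the space-filling rule with a trained surrogate model, but the economics are the same: a handful of builds stands in for the full sweep.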

Original post:
Project MEDAL to apply machine learning to aero innovation - The Engineer

AI in Credit Decision-Making Is Promising, but Beware of Hidden Biases, Fed Warns – JD Supra

As financial services firms increasingly turn to artificial intelligence (AI), banking regulators warn that despite their astonishing capabilities, these tools must be relied upon with caution.

Last week, the Board of Governors of the Federal Reserve (the Fed) held a virtual AI Academic Symposium to explore the application of AI in the financial services industry. Governor Lael Brainard explained that particularly as financial services become more digitized and shift to web-based platforms, a steadily growing number of financial institutions have relied on machine learning to detect fraud, evaluate credit, and aid in operational risk management, among many other functions.[i]

In the AI world, machine learning refers to a model that processes complex data sets and automatically recognizes patterns and relationships, which are in turn used to make predictions and draw conclusions.[ii] Alternative data is information that is not traditionally used in a particular decision-making process but that populates machine learning algorithms in AI-based systems and thus fuels their outputs.[iii]

Machine learning and alternative data have special utility in the consumer lending context, where these AI applications allow financial firms to determine the creditworthiness of prospective borrowers who lack credit history.[iv] Using alternative data such as the consumer's education, job function, property ownership, address stability, rent payment history, and even internet browser history and behavioral information (among many other data), financial institutions aim to expand the availability of affordable credit to so-called "credit invisibles" or "unscorables."[v]

Yet, as Brainard cautioned last week, machine-learning AI models can be so complex that even their developers lack visibility into how the models actually classify and process what could amount to thousands of nonlinear data elements.[vi] This obscuring of AI models internal logic, known as the black box problem, raises questions about the reliability and ethics of AI decision-making.[vii]

When using AI machine learning to evaluate access to credit, the opaque and complex data interactions relied upon by AI could result in discrimination by race, or even lead to digital redlining, if not intentionally designed to address this risk.[viii] This can happen, for example, when intricate data interactions containing historical information such as educational background and internet browsing habits become proxies for race, gender, and other protected characteristics, leading to biased algorithms that discriminate.[ix]
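One simple screen for this kind of proxy effect is to measure the association between each candidate feature and the protected attribute before training. A minimal sketch with invented data (the feature names, values, and the 0.8 flagging threshold are all hypothetical; real fairness audits use far more rigorous methods):

```python
# Illustrative proxy screen with invented data: even if a model never sees
# the protected attribute directly, a feature can act as a stand-in for it.
# Feature names, values, and the flagging threshold are hypothetical.

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

protected = [0, 0, 0, 0, 1, 1, 1, 1]                           # group membership
browser_hours = [1.0, 1.2, 0.9, 1.1, 3.0, 2.8, 3.2, 2.9]       # tracks the group
rent_payment_score = [0.7, 0.3, 0.9, 0.5, 0.6, 0.4, 0.8, 0.5]  # does not

for name, feature in [("browser_hours", browser_hours),
                      ("rent_payment_score", rent_payment_score)]:
    r = pearson(protected, feature)
    flag = "possible proxy" if abs(r) > 0.8 else "ok"
    print(f"{name}: r = {r:+.2f} ({flag})")
```

In this toy data the browsing feature separates the two groups almost perfectly, so a model trained on it could discriminate without ever seeing the protected attribute itself.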

Consumer protection laws, among other aspects of the existing regulatory framework, cover AI-related credit decision-making activities to some extent. Still, in light of the rising complexity of AI systems and their potentially inequitable consequences, AI-focused legal reforms may be needed. At this time, to help ensure that financial services are prepared to manage these risks, the Fed has called on stakeholdersfrom financial services firms to consumer advocates and civil rights organizations as well as other businesses and the general publicto provide input on responsible AI use.[x]

[i] Lael Brainard, Governor, Bd. of Governors of the Fed. Reserve Sys., AI Academic Symposium: Supporting Responsible Use of AI and Equitable Outcomes in Financial Services (Jan. 12, 2021), available at https://www.federalreserve.gov/newsevents/speech/brainard20210112a.htm.

[ii] Pratin Vallabhaneni and Margaux Curie, Leveraging AI and Alternative Data in Credit Underwriting: Fair Lending Considerations for Fintechs, 23 No. 4 Fintech L. Rep. NL 1 (2020).

[iii] Id.

[iv] Id.; Brainard, supra n. 1.

[v] Vallabhaneni and Curie, supra n. 2; Kathleen Ryan, The Big Brain in the Black Box, Am. Bar Assoc. (May 2020), https://bankingjournal.aba.com/2020/05/the-big-brain-in-the-black-box/.

[vi] Brainard, supra n.1; Ryan, supra n.5.

[vii] Brainard, supra n.1; Ryan, supra n.5.

[viii] Brainard, supra n.1.

[ix] Id. (citing Carol A. Evans and Westra Miller, From Catalogs to Clicks: The Fair Lending Implications of Targeted, Internet Marketing, Consumer Compliance Outlook (2019)).

[x] Id.

See original here:
AI in Credit Decision-Making Is Promising, but Beware of Hidden Biases, Fed Warns - JD Supra

Machine Learning Shown to Identify Patient Response to Sarilumab in Rheumatoid Arthritis – AJMC.com Managed Markets Network

Machine learning was shown to identify patients with rheumatoid arthritis (RA) who present an increased chance of achieving clinical response with sarilumab, with those selected also showing an inferior response to adalimumab, according to an abstract presented at ACR Convergence, the annual meeting of the American College of Rheumatology (ACR).

In prior phase 3 trials comparing the interleukin 6 receptor (IL-6R) inhibitor sarilumab with placebo and the tumor necrosis factor α (TNF-α) inhibitor adalimumab, sarilumab appeared to provide superior efficacy for patients with moderate to severe RA. Although promising, the researchers of the abstract highlight that treatment of RA requires a more individualized approach to maximize efficacy and minimize risk of adverse events.

"The characteristics of patients who are most likely to benefit from sarilumab treatment remain poorly understood," noted the researchers.

Seeking to better identify the patients with RA who may best benefit from sarilumab treatment, the researchers applied machine learning to select from a predefined set of patient characteristics, which they hypothesized may help delineate the patients who could benefit most from either anti-IL-6R or anti-TNF-α treatment.

Following their extraction of data from the sarilumab clinical development program, the researchers utilized a decision tree classification approach to build predictive models on ACR response criteria at week 24 in patients from the phase 3 MOBILITY trial, focusing on the 200-mg dose of sarilumab. They incorporated the Generalized, Unbiased, Interaction Detection and Estimation (GUIDE) algorithm, including 17 categorical and 25 continuous baseline variables as candidate predictors. "These included protein biomarkers, disease activity scoring, and demographic data," added the researchers.

Endpoints used were ACR20, ACR50, and ACR70 at week 24, with the resulting rule validated through application on independent data sets from other trials in the sarilumab clinical development program.

Assessing the end points used, it was found that the most successful GUIDE model was trained against the ACR20 response. From the 42 candidate predictor variables, the combined presence of anticitrullinated protein antibodies (ACPA) and C-reactive protein >12.3 mg/L was identified as a predictor of better treatment outcomes with sarilumab, with those patients identified as "rule-positive."
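The published rule reduces to a two-condition check. A minimal sketch applying it (the 12.3 mg/L threshold comes from the abstract; the example patients are invented, and this is illustrative only, not a clinical tool):

```python
# The GUIDE-derived rule reduces to a two-condition check. The 12.3 mg/L
# threshold comes from the abstract; the example patients are invented.

CRP_THRESHOLD_MG_L = 12.3

def rule_positive(acpa_positive: bool, crp_mg_l: float) -> bool:
    """True if the patient matches the rule predicting better
    sarilumab outcomes: ACPA present and CRP above threshold."""
    return acpa_positive and crp_mg_l > CRP_THRESHOLD_MG_L

example_patients = [
    {"id": "A", "acpa": True, "crp_mg_l": 25.0},   # rule-positive
    {"id": "B", "acpa": True, "crp_mg_l": 8.0},    # CRP too low
    {"id": "C", "acpa": False, "crp_mg_l": 30.0},  # ACPA-negative
]
for patient in example_patients:
    print(patient["id"], rule_positive(patient["acpa"], patient["crp_mg_l"]))
```

The simplicity of the rule is the point: a two-variable decision boundary is easy to verify prospectively and to apply at the bedside, unlike the full 42-variable model it was distilled from.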

These rule-positive patients, which ranged from 34% to 51% in the sarilumab groups across the 4 trials, were shown to have more severe disease and poorer prognostic factors at baseline. They also exhibited better outcomes than rule-negative patients for most end points assessed, except for patients with inadequate response to TNF inhibitors.

Notably, rule-positive patients had a better response to sarilumab but an inferior response to adalimumab, except for the HAQ Disability Index minimal clinically important difference end point.

"If verified in prospective studies, this rule could facilitate treatment decision-making for patients with RA," concluded the researchers.

Reference

Rehberg M, Giegerich C, Praestgaard A, et al. Identification of a rule to predict response to sarilumab in patients with rheumatoid arthritis using machine learning and clinical trial data. Presented at: ACR Convergence 2020; November 5-9, 2020. Accessed January 15, 2021. Abstract 2006. https://acrabstracts.org/abstract/identification-of-a-rule-to-predict-response-to-sarilumab-in-patients-with-rheumatoid-arthritis-using-machine-learning-and-clinical-trial-data/

Read more from the original source:
Machine Learning Shown to Identify Patient Response to Sarilumab in Rheumatoid Arthritis - AJMC.com Managed Markets Network