Artificial intelligence in pulmonary medicine: computer vision, predictive model and COVID-19 – DocWire News


Eur Respir Rev. 2020 Oct 1;29(157):200181. doi: 10.1183/16000617.0181-2020. Print 2020 Sep 30.

ABSTRACT

Artificial intelligence (AI) is transforming healthcare delivery. The digital revolution in medicine and healthcare information is prompting a staggering growth of data intertwined with elements from many digital sources such as genomics, medical imaging and electronic health records. Such massive growth has sparked the development of an increasing number of AI-based applications that can be deployed in clinical practice. Pulmonary specialists who are familiar with the principles of AI and its applications will be empowered and prepared to seize future practice and research opportunities. The goal of this review is to provide pulmonary specialists and other readers with information pertinent to the use of AI in pulmonary medicine. First, we describe the concept of AI and some of the requisites of machine learning and deep learning. Next, we review some of the literature relevant to the use of computer vision in medical imaging, predictive modelling with machine learning, and the use of AI for battling the novel severe acute respiratory syndrome-coronavirus-2 pandemic. We close our review with a discussion of limitations and challenges pertaining to the further incorporation of AI into clinical pulmonary practice.

PMID:33004526 | DOI:10.1183/16000617.0181-2020


9 Soft Skills Every Employee Will Need In The Age Of Artificial Intelligence (AI) – Forbes

Technical skills and data literacy are obviously important in this age of AI, big data, and automation. But that doesn't mean we should ignore the human side of work: skills in areas that robots can't handle so well. I believe these softer skills will become even more critical for success as the nature of work evolves, and as machines take on more of the easily automated aspects of work. In other words, the work of humans is going to become altogether more, well, human.


With this in mind, what skills should employees be looking to cultivate going forward? Here are nine soft skills that I think are going to become even more precious to employers in the future.

1. Creativity

Robots and machines can do many things, but they struggle to compete with humans when it comes to our ability to create, imagine, invent, and dream. With all the new technology coming our way, the workplaces of the future will require new ways of thinking, making creative thinking and human creativity important assets.

2. Analytical (critical) thinking

As well as creative thinking, the ability to think analytically will be all the more precious, particularly as we navigate the changing nature of the workplace and the changing division of labor between humans and machines. That's because people with critical thinking skills can come up with innovative ideas, solve complex problems, and weigh up the pros and cons of various solutions, all using logic and reasoning rather than relying on gut instinct or emotion.

3. Emotional intelligence

Also known as EQ (as in, emotional IQ), emotional intelligence describes a person's ability to be aware of, control, and express their own emotions and to be aware of the emotions of others. So when we talk about someone who shows empathy and works well with others, we're describing someone with a high EQ. Given that machines can't easily replicate humans' ability to connect with other humans, it makes sense that those with high EQs will be in even greater demand in the workplace.

4. Interpersonal communication skills

Related to EQ, the ability to successfully exchange information between people will be a vital skill, meaning employees must hone their ability to communicate effectively with other people using the right tone of voice and body language in order to deliver their message clearly.

5. Active learning with a growth mindset

Someone with a growth mindset understands that their abilities can be developed and that building skills leads to higher achievement. They're willing to take on new challenges, learn from their mistakes, and actively seek to expand their knowledge. Such people will be much in demand in the workplace of the future because, thanks to AI and other rapidly advancing technologies, skills will become outdated even faster than they do today.

6. Judgement and decision making

We already know that computers are capable of processing information better than the human brain, but ultimately, it's humans who are responsible for making the business-critical decisions in an organization. It's humans who have to take into account the implications of their decisions for the business and the people who work in it. Decision-making skills will, therefore, remain important. But there's no doubt that the nature of human decision making will evolve: specifically, technology will take care of more menial and mundane decisions, leaving humans to focus on higher-level, more complex decisions.

7. Leadership skills

The workplaces of the future will look quite different from today's hierarchical organizations. Project-based teams, remote teams, and fluid organizational structures will probably become more commonplace. But that won't diminish the importance of good leadership. Even within project teams, individuals will still need to take on leadership roles to tackle issues and develop solutions, so common leadership traits like being inspiring and helping others become the best versions of themselves will remain critical.

8. Diversity and cultural intelligence

Workplaces are becoming more diverse and open, so employees will need to be able to respect, understand, and adapt to others who might have different ways of perceiving the world. This will obviously improve how people interact within the company, but I think it will also make the business's services and products more inclusive, too.

9. Embracing change

Even for me, the pace of change right now is startling, particularly when it comes to AI. This means people will have to be agile and cultivate the ability to embrace and even celebrate change. Employees will need to be flexible and adapt to shifting workplaces, expectations, and required skillsets. And, crucially, they'll need to see change not as a burden but as an opportunity to grow.

Bottom line: we needn't be intimidated by AI. The human brain is incredible. It's far more complex and more powerful than any AI in existence. So rather than fearing AI and automation and the changes they will bring to workplaces, we should all be looking to harness our unique human capabilities and cultivate these softer skills, skills that will become all the more important for the future of work.

AI is going to impact businesses of all shapes and sizes across all industries. Discover how to prepare your organization for an AI-driven world in my new book, The Intelligence Revolution: Transforming Your Business With AI.


Artificial Intelligence in Operation Monitoring Discovers Patterns Within Drilling Reports – Journal of Petroleum Technology



In well-drilling activities, successful execution of a sequence of operations defined in a well project is critical. To provide proper monitoring, operations executed during drilling procedures are reported in daily drilling reports (DDRs). The complete paper provides an approach using machine-learning and sequence-mining algorithms for predicting and classifying the next operation based on textual descriptions. The general goal is to exploit the rich source of information represented by the DDRs to derive methodologies and tools capable of performing automatic data-analysis procedures and assisting human operators in time-consuming tasks.

Classification Tasks. fastText. This is a library, discussed in the literature, designed to learn word embeddings and perform text classification. The technique implements a simple linear model with a rank constraint, and the text representation is a hidden state that is used to feed classifiers. A softmax function computes the probability distribution over the predefined classes.
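
As a rough illustration of this kind of classifier, the sketch below trains a supervised fastText model that maps a DDR free-text description to an operation label. The file names, example labels, and hyperparameters are hypothetical, not taken from the paper (apart from the 30 training epochs mentioned later).

```python
# Illustrative only: a fastText classifier from DDR description to operation.
# Training file format, one sample per line: "__label__<operation> <description>"
import fasttext

model = fasttext.train_supervised(
    input="ddr_train.txt",  # e.g. "__label__DRILLING drilled 12 1/4 in hole ..."
    lr=0.5,                 # illustrative hyperparameters, not the paper's
    epoch=30,
    wordNgrams=2,           # bigrams recover a little word-order information
)

# Predict the most likely operation for a new free-text description.
labels, probs = model.predict("circulated hole clean and performed flow check")
print(labels[0], float(probs[0]))
```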

Conditional Random Fields (CRFs). CRFs are a category of undirected graphical models that allow features from each timestep of the sequence to be combined, with the ability to transition between labels for each step in the input sequence. They were proposed to overcome the bias problems that affected techniques such as hidden Markov models and maximum-entropy Markov models.
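
A minimal sequence-labeling sketch in the same spirit, using the sklearn-crfsuite package (an assumption; the paper does not name an implementation), with toy hand-crafted features per DDR entry:

```python
# Toy linear-chain CRF over the entries of one wellbore's DDR. The feature
# function and the data are invented for illustration.
import sklearn_crfsuite

def entry_features(description):
    """Very simple per-entry features derived from the free text."""
    tokens = description.lower().split()
    return {
        "first_word": tokens[0] if tokens else "",
        "has_number": any(ch.isdigit() for ch in description),
        "n_tokens": len(tokens),
    }

# One training sequence per wellbore: a feature dict and a label per entry.
X_train = [[entry_features("drilled ahead to 2100 m"),
            entry_features("circulated bottoms up")]]
y_train = [["DRILLING", "CIRCULATING"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100)
crf.fit(X_train, y_train)
# Prediction uses both per-entry features and learned label transitions.
print(crf.predict(X_train))
```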

Recurrent Models. Despite achieving good results in several scenarios and learning word embeddings as a byproduct of its training, the fastText classifier does not properly consider word-ordering information that can be useful for several classification tasks. Such a shortcoming can be addressed by a recurrent neural network (RNN), which accounts for the fact that a fragment of text is formed by an ordered sequence of words. The authors consider the gated recurrent unit (GRU) variant, which is easier to train than traditional RNNs and achieves results comparable with those of the long short-term memory unit while having fewer parameters to learn. The methodology of these classifiers is detailed mathematically in the complete paper.
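
A bare-bones GRU classifier of this shape, sketched in PyTorch (the paper does not specify a framework; the sizes are illustrative, with 39 output classes matching the operations classification data set described below):

```python
# Illustrative GRU text classifier: token ids -> embeddings -> GRU -> logits.
import torch
import torch.nn as nn

class GRUClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, n_classes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, n_classes)

    def forward(self, token_ids):       # token_ids: (batch, seq_len)
        emb = self.embed(token_ids)     # unlike fastText, word order is kept
        _, h = self.gru(emb)            # h: (num_layers, batch, hidden_dim)
        return self.out(h[-1])          # logits over operation classes

model = GRUClassifier(vocab_size=20_000, embed_dim=100, hidden_dim=128,
                      n_classes=39)
dummy_batch = torch.randint(0, 20_000, (4, 25))  # 4 descriptions, 25 tokens
print(model(dummy_batch).shape)                  # torch.Size([4, 39])
```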

Sequence Prediction. Sequential pattern mining can be defined broadly as the task of discovering interesting subsequences in a set of sequences, where the level of interest can be measured in terms of various criteria such as occurrence frequency, length, and profit, according to the application. In this paper, the authors focus on the specific task of sequence prediction.

In the scenario considered, the alphabet is given by an ontology of operations in drilling activities. The sequence is defined according to data stored in DDRs. The proposed methodology considers various sequence prediction algorithms, which are detailed in the complete paper.

The sequential pattern mining framework (SPMF) was used for the algorithm implementations. SPMF is an open-source data-mining library specialized in frequent pattern mining.
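
SPMF itself is a Java library, and the paper's specific predictors are not reproduced here. Purely to convey the flavor of the next-operation task, here is a toy first-order Markov predictor over per-wellbore operation sequences (invented data, not one of the paper's algorithms):

```python
# Toy next-operation predictor: count observed transitions between operations
# and predict the most frequent successor. The sequences below are invented.
from collections import Counter, defaultdict

sequences = [
    ["RIG_UP", "DRILLING", "CIRCULATING", "TRIPPING", "DRILLING"],
    ["RIG_UP", "DRILLING", "TRIPPING", "DRILLING", "CASING"],
]

transitions = defaultdict(Counter)
for seq in sequences:
    for prev_op, next_op in zip(seq, seq[1:]):
        transitions[prev_op][next_op] += 1

def predict_next(op):
    """Most frequent successor of `op` in the training sequences, if any."""
    return transitions[op].most_common(1)[0][0] if transitions[op] else None

print(predict_next("RIG_UP"))  # -> "DRILLING"
```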

Data Sets. The data sets used for the experiments reported in this paper were extracted from different collections of DDRs. Each DDR entry is a record containing a rich set of information about the event being reported, which could be an operation or an occurrence. Two different types of data sets were generated: the operations data sets and the costs data set. The former is used by both the classification and sequence prediction tasks, whereas the latter is used only for classification.

Operations Data Sets. The operations data sets were extracted from DDRs of 119 different wellbores, which comprise more than 90,000 entries. The DDR fields of most interest for the experiments applied on this collection are the description and the operation name. The former is a special field used by the reporter to fill in important details about the event in a free-text format. The latter is selected by the reporter from a predefined list of operation names.

For the sequence-mining tasks, only the operation name is used. The data set is viewed as a set of sequences of operations, one for each wellbore. For the classification tasks, both fields are used for supervised learning, with the description as input object and the operation name as label.

The DDRs were preprocessed by an industry specialist with the objective of, first, removing the inconsistencies and, second, normalizing operation names to unify operations that shared semantics. Given the large number of documents, the strategy used for the former objective was to remove entries with the wrong operation name (instead of fixing each one, which would be a much harder task). As for the second objective, after an analysis of the list of operation names and samples of descriptions, each group of overlapping operations was transformed into a single operation.

This process yielded one data set containing more than 38,000 samples and 39 operation types for the classification task and another containing more than 51,000 samples and 41 operation types for the sequence-prediction task.

Costs Data Sets. The costs data set is a collection of DDRs with an extra field (the target field) meant to be used for calculating the cost of each operation performed in a wellbore project. That field usually is multivalued because more than one activity of interest might be described in the free-text field of a DDR entry. Each value in that list is a pair containing two types of information: a label for the activity described in the entry and a number pointing to a diameter value.

As opposed to the operations data set, the target field was filled on land by a small group of employees trained specifically for this task. Nevertheless, the costs data set still had to be preprocessed before use in the experiments.

Classification Results. Before the models are evaluated, the best values for each hyperparameter are determined through a grid search on the validation set. The proposed models are trained for 30 epochs.
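
As a hedged sketch of what such a grid search might look like for the fastText baseline (the grid, file names, and the precision-based selection criterion are all assumptions):

```python
# Illustrative hyperparameter grid search for the fastText baseline, scored
# by precision@1 on a held-out validation file. Everything here is assumed.
import itertools
import fasttext

grid = {"lr": [0.1, 0.5, 1.0], "wordNgrams": [1, 2, 3]}

best_score, best_params = 0.0, None
for lr, ngrams in itertools.product(grid["lr"], grid["wordNgrams"]):
    model = fasttext.train_supervised(input="ddr_train.txt",
                                      epoch=30, lr=lr, wordNgrams=ngrams)
    _, precision, _ = model.test("ddr_valid.txt")  # (n, precision@1, recall@1)
    if precision > best_score:
        best_score, best_params = precision, {"lr": lr, "wordNgrams": ngrams}

print(best_params, best_score)
```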

The experimental results regarding accuracy and macro-F1 for the costs and operations data sets are presented in the complete paper. In both cases, the fastText classifier, despite being quite simple, yields strong results, providing a solid baseline for the proposed models. One should recall, however, that the word vectors learned by this first classifier are also used as the embeddings of the proposed models.

The other neural networks also consider the complete word ordering in the samples, allowing them to achieve results better than the baseline. These metrics are further improved by replacing the traditional softmax output layer with a CRF. This allows the model to label each entry in the segment not only on the basis of its extracted characteristics but also with respect to the ordering of operations, improving the baseline accuracy by 10.94% and 3.85% on the costs and operations data sets, respectively. The proposed model learns not only the most relevant characteristics of each sample but also the patterns in the sequence of operations performed in a well-drilling project.

Sequence-Mining Results. The data set was divided into 10 segments, and the methods were evaluated according to a cross-validation protocol. The cross-validation protocol varies the training and testing data through various executions in order to reduce any bias in the results. For the classification tasks, approaches based on word embeddings and CRFs are exploited. Evaluations were made considering sequences from size 5 to 10 in the data set, using the sequence-prediction methods to predict the next drilling operation.
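
A minimal sketch of such a protocol, assuming scikit-learn's KFold (the paper does not name its tooling) and one operation sequence per wellbore:

```python
# Sketch of 10-segment cross-validation over per-wellbore operation sequences.
# The placeholder data and the use of scikit-learn are assumptions.
from sklearn.model_selection import KFold

wellbore_sequences = [f"wellbore_{i}_ops" for i in range(119)]  # placeholders

kf = KFold(n_splits=10, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kf.split(wellbore_sequences)):
    train = [wellbore_sequences[i] for i in train_idx]
    test = [wellbore_sequences[i] for i in test_idx]
    # train the sequence predictors on `train`, then measure next-operation
    # accuracy on `test`
    print(f"fold {fold}: {len(train)} train / {len(test)} test sequences")
```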

Table 1 presents the accuracy obtained when considering the sequences of operations as presented in the data set. Table 2 shows the accuracy obtained when removing consecutive drilling operations from the data. The data set contains multiple repetitions of operations, contiguous to one another. This makes the data more predictable to the sequence prediction model and explains the higher accuracy obtained in the experiments shown in Table 1.

DDR Processing Framework. To make the models discussed available for use in a real-world scenario, a framework is proposed that allows the end user to upload DDRs and analyze them by different applications, one for each specific purpose. One great advantage of using this framework is that the user feeds data once and then has access to several tools for analyzing them.

Currently, a working version of an application for performing the classification tasks already has been implemented. It encapsulates the classification models generated with the experiments and allows the processing of a large number of DDRs, either for operation or cost classification.


Cleaner air on motorways thanks to matrix signs with artificial intelligence – Innovation Origins

When matrix signs start blinking above a motorway, the average commuter already knows what that means: traffic jams. The adjusted speed limit does help to improve traffic flow, but this dynamic traffic management system has one more, indirect effect: fewer emissions thanks to the improved flow. That is why a trial is starting in Germany in which environmental data will be incorporated into traffic flow management.

The air quality in the vicinity of motorways could also be considerably improved in the Netherlands. This is why the government wants to tackle the problem through the National Air Quality Cooperation Program (NSL), for example by promoting electric vehicles or offering alternative means of transport.

According to German scientists, incorporating environmental data into traffic flow management can reduce noise and pollution. They are now going to research this in the U-SARAH live project, coordinated by the Karlsruhe Institute of Technology (KIT). The Ministry of Infrastructure and Water Management of the Netherlands is funding the project to the tune of almost €1.1 million.

"The aim of this study is to optimize and implement an environmental control system in an existing traffic route control system so as to reduce the environmental impact on the sections in question," explains Professor Peter Vortisch, head of the KIT Institute for Transport.

"A microscopic traffic flow model developed over the course of a preliminary study with our partner Hessen Mobil enables the effects of the newly developed environmental control system to be simulated. This makes it possible to optimize the control system in such a way that both traffic flow and environmental effects are taken into account."

"We want to test and evaluate the new control system under real conditions in a practical test," says Matthias Glatz, a project manager at Hessen Mobil. EDI GmbH, a spin-off of KIT, uses the extensive traffic data to model road users' reactions to dynamic speed limits using artificial intelligence (AI). "On the basis of this data, we plan to develop an AI-based acceptance model and a prediction model as modules for guiding the SBA," says Dr. Thomas Freudenmann, one of the founders and managing director of EDI GmbH. The existing control system will be expanded with these modules.

The simulation model developed in U-SARAH live can be used in the future both for quality management and for the optimization of route control systems. The results of the project will benefit not only the population, public authorities, and scientific institutes but also all manufacturers of traffic control systems. "Thanks to the AI-based approach, the traffic situation can be estimated a few minutes in advance so that traffic can be controlled even better. The simulation-based development facilitates the easy integration of emissions data into traffic control systems, without incurring high acquisition costs for measuring technology," explains Sebastian Buck of the KIT Institute for Transport.

By reducing emissions and optimizing the flow of traffic, the economic damage caused by congestion and excess emissions can be reduced. An analysis platform developed within the project will help to examine the large data files from different angles across all steps. The platform will be made available to the public via the Ministry of Transport's data cloud.


How do we govern artificial intelligence and act ethically? – Open Access Government

The world has evolved rapidly in the last few years and artificial intelligence (AI) has often been leading the change. The technology has been adopted by almost every industry with companies wanting to explore how AI can automate processes, increase efficiency, and improve business operations.

AI has certainly proved how it can benefit us all, but a common misconception is that it is always objective and avoids bias, opinion, and ideology. Based on this understanding, recent years have seen a rise in companies utilising AI-based recruiting platforms in a bid to make the hiring process more efficient and devoid of human bias.

Yet, a Financial Times article quoted an employment barrister who doubted the progressive nature of AI tools, saying there is overwhelming evidence that the machines are very often getting it wrong. A high-profile example came in 2018, when Amazon had to abandon its AI recruiting tool after the company realised it was favouring men for technical jobs.

However, AI has continued to advance at a rapid pace, and its adoption by businesses has been further accelerated by COVID-19's arrival. With debates over whether AI can be relied upon to behave impartially still ongoing, how can the technology be governed so that organisations continue to act ethically?

During a press conference in Brussels earlier this year, the European Commission said it was preparing to draft regulation for AI that will help prevent its misuse, but the governing body has set itself quite the challenge. The technology is developing constantly so after only a few weeks any regulation that is introduced may not go far enough. After a few months, it could become completely irrelevant.

Within the risk community, however, there is no doubt that policies are needed: a study found that 80% of risk professionals are not confident in the AI governance in place. At the same time, there are also concerns from technology leaders who believe tighter regulations will stifle AI innovation and obstruct the potentially enormous advantages it can have on the world.

A certain level of creative and scientific freedom is required for companies to create innovative new technologies, and although AI can be used for good, the increasing speed with which it is being developed and adopted across industries is a major consideration for governance. The ethical concerns need to be addressed.

Given the current and ongoing complexities that the global pandemic brings, as well as the looming Brexit deadline, we will likely have to wait for the EU's regulation to be finalised and put in place. In the meantime, businesses should begin to get their own houses in order, if they haven't already, with their use of AI and their governance, risk and compliance (GRC) processes, to ensure they are not caught out when legislation does arrive.

By setting up a forward-looking risk management program around implementing and managing the use of AI, organisations can improve their ability in handling both existing and emerging risks by analysing past trends, predicting future scenarios, and proactively preparing for further risk. A governance framework should also be implemented around AI both within and outside the organisation to better overcome any unforeseen exposure to risk from evolving AI technologies and an ever-changing business landscape.

Unlike in the financial services sector, where internal controls and regulators require businesses to regularly validate and manage their own models, AI model controls are already being put in place even without a regulatory mandate, reflecting the abundant usage of AI within enterprises. It won't be long before regulators begin to demand proof that the right controls are in place, so organisations need to monitor where AI is being used for business decisions and ensure the technology operates with accuracy and is free of inherent biases and incomplete underlying datasets.

When an organisation is operating with such governance and a forward-looking risk management program towards its use of AI, it will certainly be better positioned once new regulation is eventually enforced.

Too often, businesses are operating with multiple information siloes created by different business units and teams in various geographic locations. This can lead to information blind spots, and a recent Gartner study found that poor data quality is responsible for an average loss of $15 million per year.

Now more than ever, businesses need to be conscious of avoiding unnecessary fines as the figures can be crippling. Hence, it is important that these restrictive siloes are removed in favour of a centralised information hub that everyone across the business can access. This way, senior management and risk professionals are always aware of their risks, including any introduced by AI, and can be confident that they have a clear vision of the bigger picture to be able to efficiently respond to threats.

Another reason for moving towards centralisation and complete visibility throughout the business is that AI often fails to act impartially because AI systems learn to make decisions based on training data that humans provide. If this data is incomplete, contains conscious or unconscious bias, or reflects historical and social inequalities, the AI technology will do the same.

While an organisation may not always be responsible for creating AI bias in the first place, by having good oversight and complete, centralised information to hand at any time, it becomes a lot easier to see where there are blind spots that could damage a company's reputation.

Ultimately, it is down to organisations themselves to manage their GRC processes, maintain clear oversight of the entire risk landscape, and strongly protect their reputation. One outcome of the pandemic is an increased focus on ethics and integrity, so it is critical that organisations hold these values at the core of their business model to prevent scrutiny from regulators, stakeholders and consumers. Until adequate regulation is introduced by the EU, companies essentially need to take AI governance into their own hands to mitigate risk and to always act with integrity.



Admiral Seguros Is The First Spanish Insurer To Use Artificial Intelligence To Assess Vehicle Damage – PRNewswire

To do this, Admiral Seguros is using an AI solution, developed by the technology company Tractable, which accurately evaluates vehicle damage with photos sent through a web application. The app, via the AI, completes the complex manual tasks that an advisor would normally perform and produces a damage assessment in seconds, often without the need for further review.

Upon receiving the assessment, Admiral Seguros will use it to make immediate payment offers to policyholders when appropriate, allowing them to resolve claims in minutes, even on the first call.

Jose Maria Perez de Vargas, Head of Customer Management at Admiral Seguros, said: "Admiral Seguros continues to advance in digitalisation as a means to provide a better service to our policyholders, providing them with an easy, secure and transparent means of evaluating damages without the need for travel, achieving compensation in a few hours. It's a simple, innovative and efficient claims management process that our clients will surely appreciate."

Adrien Cohen, co-founder and president of Tractable, said: "By using our AI to offer immediate payments, Admiral Seguros will resolve many claims almost instantly, to the delight of its customers. This is central to our mission of using Artificial Intelligence to accelerate recovery, converting the process from weeks to minutes."

Tractable's AI uses deep learning for computer vision, in addition to machine learning techniques. The AI is trained with many millions of photographs of vehicle damage, and the algorithms learn from experience by analyzing a wide variety of different examples. Tractable's technology can be applied globally to any vehicle.

The AI enables insurers to assess car damage, recommends repair operations, and guides the claims management process to ensure claims are processed and settled as quickly as possible.

According to Admiral Seguros, the application of this technology in the insurance sector will be a major step in digitization and will markedly improve the customer experience of Admiral's insurance brands in Spain, Qualitas Auto and Balumba.

About Tractable:

Tractable develops artificial intelligence for accident and disaster recovery. Its AI solutions have been deployed by leading insurers across Europe, North America and Asia to accelerate accident recovery for hundreds of thousands of households. Tractable is backed by $55m in venture capital and has offices in London, New York City and Tokyo.

About Admiral Seguros

In Spain, Admiral Group plc has been based in Seville since 2006, thanks to the creation of Admiral Seguros. More than 700 people work there, serving the entire national territory and building and marketing the company's two commercial brands: Qualitas Auto and Balumba.

Recognized as the third best company to work for in Spain, the sixth in Europe and the eighteenth in the world by the consultancy Great Place to Work, Admiral Seguros is committed to a corporate culture focused on people.

SOURCE Tractable

https://tractable.ai


The North America artificial intelligence in healthcare diagnosis market is projected to reach from US$ 1,716.42 million in 2019 to US$ 32,009.61…

New York, Sept. 30, 2020 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "North America Artificial Intelligence in Healthcare Diagnosis Market Forecast to 2027 - COVID-19 Impact and Regional Analysis by Diagnostic Tool ; Application ; End User ; Service ; and Country" - https://www.reportlinker.com/p05974389/?utm_source=GNW

The healthcare industry has always been a leader in innovation. The constant mutation of diseases and viruses makes it difficult to stay ahead of the curve.

However, with the help of artificial intelligence and machine learning algorithms, the industry continues to advance, creating new treatments and helping people live longer and healthier lives. A study published in The Lancet Digital Health compared the performance of deep learning, a form of artificial intelligence (AI), in detecting diseases from medical imaging with that of healthcare professionals, using a sample of studies carried out between 2012 and 2019.

The study found that, in the past few years, AI has become more precise at identifying diagnoses in these images and has become a more feasible source of diagnostic information. With further advancements, deep learning may become even more efficient at identifying diagnoses in the coming years.

Moreover, it can help doctors with diagnoses and notify them when patients are weakening, so that medical intervention can occur sooner, before the patient needs hospitalization. It can save costs for both hospitals and patients. Additionally, the precision of machine learning can detect diseases such as cancer quickly, thus saving lives.

In 2019, the medical imaging tool segment accounted for a larger share of the North America artificial intelligence in healthcare diagnosis market. Its growth is attributed to the increasing adoption of AI technology for the diagnosis of chronic conditions, which is likely to drive the growth of the diagnostic tool segment. Also in 2019, the radiology segment held a considerable share of the market by application, and it is predicted to dominate the market by 2027 owing to rising demand for AI-based applications in radiology.

A few major primary and secondary sources for the artificial intelligence in healthcare diagnosis market included the US Food and Drug Administration and the World Health Organization.

Read the full report: https://www.reportlinker.com/p05974389/?utm_source=GNW

About Reportlinker

ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.



Expanding Access to Mental Healthcare with Artificial Intelligence – HealthITAnalytics.com

September 29, 2020 - Across the country today, it is widely acknowledged that access to mental healthcare is just as important as clinical care when it comes to overall wellness.

Mental health conditions are incredibly common in the US, impacting tens of millions of people each year, according to the National Institute of Mental Health (NIMH). However, estimates suggest that only half of people with these conditions receive treatment, mainly due to barriers like clinician shortages, fragmented care, and societal stigma.

For the many individuals suffering from anxiety and depression, these existing barriers coupled with the current healthcare crisis can significantly interfere with the ability to carry out life activities.

"The prevalence of mental health disorders, particularly depression and anxiety, is high. If anything, the prevalence of these conditions has only increased as a result of COVID-19. The need is greater than ever now," Jun Ma, PhD, Beth and George Vitoux Professor of Medicine at the University of Illinois Chicago (UIC) department of medicine, told HealthITAnalytics.

To broaden mental healthcare access for people with moderate depression or anxiety, UIC researchers are testing an artificial intelligence-powered virtual agent called Lumen. The team will train the tool to provide patients with problem-solving therapy, a structured approach designed to help people focus on learning cognitive and behavioral skills.


The two-phase, five-year project is funded by a $2 million grant from NIMH.

"The goal is to meet the many challenges of people who don't have ready access to proven psychotherapy, which has been a longstanding issue," said Ma.

"Over the years, my research team has done clinical trials testing the effectiveness and dissemination of different behavioral and psychosocial interventions. The results of that work, combined with the gaps that exist in practice and patient access, have really catalyzed the idea for this project."

Using the same technology as Amazon's Alexa, researchers will develop an app that will act as a virtual mental health agent, talking through steps and strategies with patients following a validated treatment protocol.

"If we prove this way of delivering problem-solving treatment is safe and effective, once we put it into production anyone with access to Alexa would be able to access the program. We're very early in the development phase, so it will probably be another few years before it's widely available," said Ma.


"We're making good strides. We're starting to conduct a user study on a small scale. And the immediate next step after this initial user development and user testing phase will be a small-scale randomized controlled trial (RCT), in which we'll enroll patients with depressive symptoms and/or anxiety."

Individuals will complete eight one-on-one counseling sessions over 12 weeks. In each session, participants will identify a problem they view as affecting their life and as a source of emotional distress, and the counselor will help them define goals and possible solutions. Solutions are then compared, and counselors and patients work to make an action plan to implement the chosen solution.

Researchers will program Lumen using the Alexa Skills Kit to act as the virtual counselor working with participants, taking them through problem-solving steps and encouraging them to engage in meaningful and enjoyable activities to improve their emotional well-being.
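
For readers unfamiliar with the Alexa Skills Kit, the sketch below shows the general scaffolding of a skill built with the ASK SDK for Python; the dialogue text is invented, and none of Lumen's actual counseling logic is public.

```python
# Bare-bones Alexa skill scaffolding with the ASK SDK for Python. Only the
# structure is meaningful here; the counseling dialogue is a placeholder.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_request_type

class LaunchHandler(AbstractRequestHandler):
    """Handles the user opening the skill ("Alexa, open ...")."""
    def can_handle(self, handler_input):
        return is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        speech = ("Welcome back. Last time you chose a problem to work on. "
                  "Shall we review your action plan?")
        return (handler_input.response_builder
                .speak(speech)
                .ask("Would you like to review your action plan?")
                .response)

sb = SkillBuilder()
sb.add_request_handler(LaunchHandler())
handler = sb.lambda_handler()  # entry point when deployed as an AWS Lambda
```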

During the first phase, 80 study participants who report elevated depressive and anxiety symptoms will test the Lumen tool, with the potential for wider use going forward.

The researchers hope that the project will increase access to mental healthcare for those who need it most.


"One of the main advantages of using AI as a platform to provide therapy is the ability to scale and reduce significant barriers to access, as well as the sustainability of proven psychotherapy such as problem-solving treatment," said Ma.

"The technology can also be quite adaptable to individuals depending on when they need it and how they want to access it, and can potentially reduce barriers due to stigma."

Despite the serious potential for these tools to broaden the availability of mental healthcare, Ma also noted that the use of AI in this area comes with several concerns, just as the technology does in any other part of healthcare delivery.

"Like any novel treatment in early development, it's unknown at this point what the effectiveness and the sustained impact of AI in psychotherapy will be. It's certainly very worth exploring, as we are doing now," she said.

"Patient privacy is a very important area that warrants not only additional research, but also additional legislation and regulation. Additionally, AI and the underlying algorithms are trained using existing data and information, and there could be unintended consequences due to implicit or explicit bias. It's very important to have transparency in how the models are trained, as well as to ensure the data used to train such models is representative of the population."

Ma's statements align with those of other industry experts, who consistently highlight the necessity of safety, data privacy, and health equity when building and using these tools.

In a recent viewpoint published in JAMA, authors noted that chatbots and other AI-powered virtual agents are still relatively new, and much of the data available comes from research rather than widespread clinical implementation. For these reasons, healthcare leaders must continually evaluate the capacity of these tools to improve care delivery, the authors stated.

In the development stage of the Lumen tool, Ma's team at UIC plans to do just that.

"If the small-scale RCT proves promising, then we'll go on to a larger-scale RCT in which we'll recruit 200 patients, again with depressive symptoms and/or anxiety, to further test the potential impact and effectiveness of Lumen," Ma said.

Ultimately, the success of these tools in healthcare will depend on the industry's ability to weigh possible risks and rewards.

"Given the potential concerns, it's worth emphasizing the importance of balancing excitement for such novel treatments with caution. It's a fine line between ensuring protection of patient privacy and confidentiality and not restricting innovation in this area," Ma concluded.


Industry Voices: AI doesn't have to replace doctors to produce better health outcomes – FierceHealthcare

Americans encounter some form of artificial intelligence and machine learning technologies in nearly every aspect of daily life: We accept Netflix's recommendations on what movie we should stream next, enjoy Spotify's curated playlists and take a detour when Waze tells us we can shave eight minutes off of our commute.

And it turns out that we're fairly comfortable with this new normal: A survey released last year by Innovative Technology Solutions found that, on a scale of 1 to 10, Americans give their GPS systems an 8.1 trust and satisfaction score, followed closely by a 7.5 for TV and movie streaming services.

But when it comes to higher stakes, we're not so trusting. When asked whether they would trust an AI doctor diagnosing or treating a medical issue, respondents scored it just a 5.4.


Overall skepticism about medical AI and ML is nothing new. In 2012, we were told that IBM's AI-powered Watson was being trained to recommend treatments for cancer patients. There were claims that the advanced technology could make medicine personalized and tailored to millions of people living with cancer. But in 2018, reports surfaced indicating that the research and technology had fallen short of expectations, leaving users to question the accuracy of Watson's predictive analytics.


Patients have been reluctant to trust medical AI and ML out of fear that the technology would not offer a unique or personalized recommendation based on individual needs. A piece in Harvard Business Review in 2019 referenced a survey in which 200 business students were asked to take a free health assessment: 40% of students signed up for the assessment when told their doctor would perform the diagnosis, while only 26% signed up when told a computer would perform it.

These concerns are not without basis. Many of the AI and ML approaches being used in healthcare today, chosen for simplicity and ease of implementation, strive for performance at the population level by fitting to the characteristics most common among patients. They aim to do well in the general case, failing to serve large groups of patients and individuals with unique health needs. However, this is a limitation of how AI and ML are being applied, not a limitation of the technology.

If anything, what makes AI and ML exceptional, if done right, is the ability to process huge sets of data comprising a diversity of patients, providers, diseases and outcomes, and to model the fine-grained trends that could have a lasting impact on a patient's diagnosis or treatment options. This ability to use data in the large for representative populations and to obtain inferences in the small for individual-level decision support is the promise of AI and ML. The whole process might sound impersonal or cookie-cutter, but the reality is that these advancements in precision medicine and delivery will make care decisions more data-driven and thus more exact.

Consider a patient choosing a specialist. It's anything but data-driven: they'll search for a provider in-network, or maybe one that is conveniently located, without understanding the potential health outcomes of their choice. The issue is that patients lack the data and information they need to make informed choices.


That's where machine intelligence comes into play: an AI/ML model that can accurately predict the right treatment, at the right time, by the right provider for a patient, which could drastically help reduce the rate of hospitalizations and emergency room visits.

As an example, research published last month in AJMC looked at claims data from 2 million Medicare beneficiaries between 2017 and 2019 to evaluate the utility of ML in the management of severe respiratory infections in community and post-acute settings. The researchers found that machine intelligence for precision navigation could be used to mitigate infection rates in the post-acute care setting.

Specifically, at-risk individuals who received care at skilled nursing facilities (SNFs) that the technology predicted would be the best choice for them had a relative reduction of 37% for emergent care and 36% for inpatient hospitalizations due to respiratory infections compared to those who received care at non-recommended SNFs.

This advanced technology has the ability to comb through and analyze an individual's treatment needs and medical history so that the most accurate recommendations can be made based on that individual's personalized needs and the doctors or facilities available to them. In turn, matching a patient to the optimal provider can drastically improve health outcomes while also lowering the cost of care.

We now have the technology to use machine intelligence to optimize some of the most important decisions in healthcare. The data show results we can trust.

Zeeshan Syed is the CEO and Zahoor Elahi is the COO of Health at Scale.


The Future of Military Applications of Artificial Intelligence: A Role for Confidence-Building Measures? – Foreign Policy Research Institute


As militaries around the world seek to gain a strategic edge over their adversaries by integrating artificial intelligence (AI) innovations into their arsenals, how can members of the international community effectively reduce the unforeseen risks of this technological competition? We argue that pursuing confidence-building measures (CBMs), a class of information-sharing and transparency-enhancing arrangements that states began using in the Cold War to enhance strategic stability, could offer one model of managing AI-related risk today. Analyzing the conditions that led to early CBMs suggests, however, that such measures are unlikely to succeed today without being adapted to current conditions. This article uses historical analogies to illustrate how, in the absence of combat experiences involving novel military technology, it is difficult for states to be certain how these innovations change the implicit rules of warfare. Pursuing international dialogue, in ways that borrow from the Cold War CBM toolkit, may help speed the learning process about the implications of military applications of AI in ways that reduce the risk that states' uncertainty about changes in military technology undermines international security and stability.

