Machine Learning Approach Takes MSK Researchers Beyond Known Method to Predict Immunotherapy Response – On Cancer – Memorial Sloan Kettering

How can oncologists better predict who will benefit from a widely used class of immunotherapy drugs called checkpoint inhibitors?

In the precision medicine era of cancer care, it's a question that has only increased in relevance. To answer it, Luc Morris, a physician-scientist and research laboratory head at Memorial Sloan Kettering Cancer Center, together with several colleagues, is looking beyond a known method to predict immunotherapy response.

Tumor mutational burden, or TMB, refers to the number of mutations a tumor has. High TMB means there are a lot of mutations; low TMB means there are not many. In the past five years, it's been well established that tumors with high TMB tend to respond better to checkpoint inhibitor therapy than tumors with low TMB. Because checkpoint inhibitors work in only a fraction of people with cancer, the ability to predict response, as TMB allows, is crucial. While TMB can be used to guide treatment decisions for certain patients with cancer (the checkpoint inhibitor pembrolizumab, or Keytruda, is FDA-approved for all tumors with high TMB, for example), it remains a crude predictor by itself, according to Dr. Morris.
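TMB is conventionally reported as mutations per megabase of sequenced DNA, and the FDA approval of pembrolizumab for TMB-high tumors uses a cutoff of 10 mutations per megabase. A minimal sketch of that calculation (function names are illustrative, not taken from the study):

```python
def tmb_per_megabase(num_mutations, megabases_sequenced):
    """Tumor mutational burden, normalized to mutations per megabase."""
    return num_mutations / megabases_sequenced

def is_tmb_high(num_mutations, megabases_sequenced, threshold=10.0):
    """TMB-high per the 10 mutations/Mb cutoff used in the FDA approval."""
    return tmb_per_megabase(num_mutations, megabases_sequenced) >= threshold
```

The normalization matters because targeted panels sequence far fewer megabases than whole-exome sequencing, so raw mutation counts are not comparable across assays.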

"We know that TMB provides some value in predicting immunotherapy response, but we also know that it is not a perfect predictor. It has limited value in isolation," says Dr. Morris, a senior author on the study, which was published November 1, 2021, in Nature Biotechnology.

"Oncologists will consider many factors when deciding on the best treatment for a patient with cancer; TMB is only one," he says. "For example, a melanoma tumor with low TMB may still have a very good chance of responding, just as a breast tumor with high TMB might have a lower chance of responding. We recognize that we need more predictive tools besides just TMB."

The study's co-first authors were Diego Chowell and Steve Yoo, research fellows in the lab of Timothy Chan at MSK, and Cristina Valero, a research fellow in the Morris Lab at MSK. Nils Weinhold, an MSK cancer researcher and computational biologist, led the study as a co-senior author together with Dr. Chan and Dr. Morris. (Dr. Chan, whose lab first reported the importance of TMB in cancer immunotherapy in 2014, moved to the Cleveland Clinic in 2020.)


TMB's limited value in isolation was one of the reasons Dr. Morris and fellow investigators wanted to go beyond the biomarker in their latest analysis, he says. Another reason Dr. Morris undertook this research was to learn more about a blood marker called the neutrophil-to-lymphocyte ratio (NLR). Recent MSK research showed that NLR, especially when combined with TMB and other patient information, could improve the ability to predict tumor immunotherapy response.
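NLR itself is computed directly from a routine blood count: the absolute neutrophil count divided by the absolute lymphocyte count. A trivial sketch (the function name is hypothetical):

```python
def neutrophil_lymphocyte_ratio(neutrophils, lymphocytes):
    """NLR from a routine blood count: absolute neutrophil count divided
    by absolute lymphocyte count (both in the same units, e.g. cells/uL)."""
    if lymphocytes <= 0:
        raise ValueError("lymphocyte count must be positive")
    return neutrophils / lymphocytes
```

Because it comes from an ordinary complete blood count, NLR is far cheaper to obtain than tumor sequencing, which is part of its appeal as a complementary marker.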

"That opened the door for us to say: Why don't we just gather all of the variables that either have been shown to have predictive value, or that we think might possibly have predictive value, and put them into a machine learning algorithm and see how well we can predict outcomes with a larger pool of information," Dr. Morris says.

The team used a model that integrated 16 genomic, molecular, demographic, and clinical features, including TMB and NLR. By taking a machine learning approach, the investigators would be able to determine which combination of variables had the highest predictive power.
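As a rough illustration of this kind of multi-feature approach (not the authors' code or data), here is a sketch using a random forest-style classifier from scikit-learn on synthetic stand-in data, where only the first of 16 columns carries signal:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_patients, n_features = 500, 16   # 16 features, as in the study

# Synthetic stand-in data; the first column (think "TMB") carries signal.
X = rng.normal(size=(n_patients, n_features))
y = (X[:, 0] + rng.normal(scale=1.0, size=n_patients) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, y)

probs = model.predict_proba(X)[:, 1]   # per-patient probability of response
ranking = np.argsort(model.feature_importances_)[::-1]  # strongest feature first
```

The `feature_importances_` ranking is how an ensemble model like this can report which of the combined variables contributes most predictive power.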

"Using this large set of clinical and genomic data from patients treated at MSK, we trained a machine learning model that incorporated a number of different pieces of data," Dr. Morris explains.


The investigators analyzed the variables in a group of 1,479 patients who were treated with immunotherapy: PD-1/PD-L1 inhibitor immunotherapy, CTLA-4 inhibitor immunotherapy, or a combination of both. Most patients (1,070) did not respond. The group included patients with 16 different types of cancer, of which non-small cell lung cancer and melanoma were the most prevalent. Investigators analyzed patients' tumors using MSK-IMPACT™, a powerful tool that provides detailed information about a tumor's mutations.

"MSK-IMPACT is an incredible resource for us, both as oncologists treating patients and as scientists trying to understand cancer," says Dr. Morris. "For this study, we had a wealth of genomic data for these patients who were treated at MSK, to integrate with clinical data and blood test data."

The results reaffirmed TMB's relevance as a predictor of immunotherapy response; when the variables were studied individually, TMB was associated with the greatest effect of the 16 individual factors.

The next strongest predictors of response to immunotherapy were prior receipt of chemotherapy, albumin levels in the blood, and NLR.

Although each of these four measures could predict immunotherapy response, MSK researchers found that the 16-feature model predicted response more accurately than any of the individual factors studied alone. What's more, the 16-feature model was also better able to forecast survival differences between patients who responded to immune checkpoint blockade and those who did not, further supporting the 16-feature approach over one involving fewer features. Cumulatively, the findings indicate that "clinicians can do better than TMB alone by including other available pieces of information about the patient or the tumor genetics," Dr. Morris says.

Importantly, the model also takes into account TMB's varying degrees of predictive value across cancer types, Dr. Morris adds.

"Although the predictive value of TMB varies quite a bit across different cancer types, the [16-feature] model had good predictive ability across all cancer types," he says. This is important because TMB is less predictive for some malignancies than for others, and for some types of cancer, it has no value at all. For example, the predictive value of elevated TMB is well established in melanoma and non-small cell lung cancer. In breast and prostate cancers, though, TMB has not been found to accurately predict immunotherapy response.

Broad use is part of Dr. Morris and his colleagues' aim: "This is a very good predictive biomarker based on genetic data from tumor sequencing, but our next research goal will be to try to determine how much value we can glean from a simpler model that maybe could be more widely implemented around the world."



Machine learning can provide strong predictive accuracy for identifying adolescents that have experienced suicidal thoughts and behavior – EurekAlert

Fig 7. The top 10 most important questions for males vs. females.

Credit: Weller et al., 2021, PLOS ONE, CC-BY 4.0 (https://creativecommons.org/licenses/by/4.0/)

Researchers have developed a new, machine learning-based algorithm that shows high accuracy in identifying adolescents who are experiencing suicidal thoughts and behavior. Orion Weller of Johns Hopkins University in Baltimore, Maryland, and colleagues present these findings in the open-access journal PLOS ONE on November 3rd, 2021.

Decades of research have identified specific risk factors associated with suicidal thoughts and behavior among adolescents, helping to inform suicide prevention efforts. However, few studies have explored these risk factors in combination with each other, especially in large groups of adolescents. Now, the field of machine learning has opened up new opportunities for such research, which could ultimately improve prevention efforts.

To explore that opportunity, Weller and colleagues applied machine-learning analysis to data from a survey of high school students in Utah that is routinely conducted to monitor issues such as drug abuse and mental health. The data included responses to more than 300 questions each for more than 179,000 high school students who took the survey between 2011 and 2017, as well as demographic data from the U.S. census.

The researchers found that they could use the survey data to predict with 91 percent accuracy which individual adolescents' answers indicated suicidal thoughts or behavior. In doing so, they were able to identify which survey questions had the most predictive power; these included questions about digital media harassment or threats, at-school bullying, serious arguments at home, gender, alcohol use, feelings of safety at school, age, and attitudes about marijuana.
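As a sketch of how ranking survey questions by predictive power might look (illustrative only; the study's actual pipeline and data are not reproduced here), a simple logistic model's coefficient magnitudes can serve as a crude importance measure:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_students, n_questions = 2000, 20   # small stand-in for the real survey

# Likert-style answers 0-4; questions 3 and 7 carry the synthetic signal
# (think "bullying" and "serious arguments at home").
X = rng.integers(0, 5, size=(n_students, n_questions)).astype(float)
logits = 0.9 * X[:, 3] + 0.7 * X[:, 7] - 3.5
y = (rng.random(n_students) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X, y)
accuracy = clf.score(X, y)
# Coefficient magnitude as a crude proxy for question importance.
top_questions = np.argsort(np.abs(clf.coef_[0]))[::-1][:5]
```

More sophisticated interpretability tools (the kind the authors mention) would be needed for nonlinear models, but the idea of ranking questions by their contribution to the prediction is the same.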

The new algorithm's accuracy is higher than that of previously developed predictive approaches, suggesting that machine learning could indeed improve understanding of adolescent suicidal thoughts and behavior, and could thereby help inform and refine preventive programs and policies.

Future research could expand the new findings by using data from other states, as well as data on actual suicide rates.

The authors add: "Our paper examines machine learning approaches applied to a large dataset of adolescent questionnaires, in order to predict suicidal thoughts and behaviors from their answers. We find strong predictive accuracy in identifying those at risk and analyze our model with recent advances in ML interpretability. We found that factors that strongly influence the model include bullying and harassment, as expected, but also aspects of their family life, such as being in a family with yelling and/or serious arguments. We hope that this study can provide insight to inform early prevention efforts."


Predicting suicidal thoughts and behavior among adolescents using the risk and protective factor framework: A large-scale machine learning approach

3-Nov-2021

The authors have declared that no competing interests exist.

Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.


How RPA and machine learning work together in the enterprise – TechTarget

More enterprises have adopted RPA to automate rote, repetitive tasks, but sometimes they need more capabilities. Enter machine learning, and the result is "intelligent automation," which, unlike RPA, can learn and adapt.

The choice between the two should depend on the use case, but in today's AI-crazed world, there's a misconception that intelligent automation must be better when, in fact, robotic process automation (RPA) may be a more elegant solution.

"We view AI/ML as knowing what to do, RPA is knowing how to do it," said Muthu Alagappan, chief medical officer at intelligent automation platform provider Notable Health. "For example, OCR can be used to extract information from insurance cards, photo IDs and clinical documents. From there, RPA [enters] the extracted data into existing systems of record."

RPA simply executes its programming, so if requirements change, it needs to be reprogrammed. Machine learning is more dynamic.

"Machine learning relies on large data sets to inform computer systems how to make decisions," said Tommy McEvoy, senior lead technologist in the AI practice at management and IT consulting firm Booz Allen Hamilton. "An exciting advancement in the automation space is the integration of these capabilities, where RPA becomes the engine that accelerates ML, NLP and AI capabilities with the ability to produce an output at scale."

By having RPA rapidly clean and feed data into a machine learning algorithm, an organization can achieve a fully automated solution. For example, Booz Allen developed fully automated service solutions that can capture a customer's refund requests over the phone, transcribe that information, classify the customer's intent and then translate all of that into an appropriate trigger for the automation.
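A toy sketch of that division of labor, with all names hypothetical: a deterministic "RPA" cleanup step feeds a stand-in intent classifier (keyword rules substitute here for a real ML model):

```python
def rpa_clean(raw_record):
    """Deterministic, rule-based cleanup: the kind of rote work RPA does."""
    return {
        "customer": raw_record.get("customer", "").strip().title(),
        "text": raw_record.get("text", "").strip().lower(),
    }

def classify_intent(text):
    """Stand-in for an ML intent model (keyword rules for illustration)."""
    if "refund" in text:
        return "refund_request"
    if "cancel" in text:
        return "cancellation"
    return "other"

def process(raw_records):
    """RPA cleans and feeds each record; the classifier decides the intent."""
    cleaned = [rpa_clean(r) for r in raw_records]
    return [(r["customer"], classify_intent(r["text"])) for r in cleaned]
```

In a production system the classifier would be a trained model and the RPA step would pull records from real systems, but the pipeline shape (mechanical cleaning feeding learned classification) is the point.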

"A true automation platform includes RPA and machine learning, as well as decision management frameworks and event architectures to trigger actions," said Bill Lobig, VP of product management at IBM. "RPA has driven a significant rise in document extraction technologies, systems integration and process mining. I think all of these things together are what you need for intelligent automation, but certainly RPA and machine learning are a big part of it."

Genpact, a global IT technology services company, uses computer vision to make RPA more effective and more applicable to a wider range of use cases. The company also pairs machine learning with computer vision to discover and mine existing business processes, as well as their deviations and variations. The company also uses machine learning to look at RPA engine log files to determine the root cause of issues that need to be resolved in RPA.

"We use the computer vision capability a lot because there's a lot of unstructured data sitting in PDFs and other things," said Sanjay Srivastava, chief digital officer at Genpact. "We use ML for three things: designing the [process automation] configuration rules, execution and eliminating systemic upstream issues that drive downstream problems."

Srivastava also underscored the need to build a data foundation, which some organizations overlook.

"I find people jumping into RPA without having thought through that. Data science professionals know they can't get a thought out of the garage unless they have a database structure set up, so I would not lose focus on that in the context of RPA," said Srivastava. "The true test of RPA is not around automating the stuff that you know, it's about the stuff you didn't know was happening. Process discovery and process mining are central to figuring out the footprint, which is a massive opportunity for data scientists."

RPA lacks intelligence. Intelligent automation adds machine learning and other AI techniques, such as computer vision and NLP, based on the use case. Might AI wholly replace RPA? That's unlikely because not everything that needs to be automated requires machine intelligence.


"If you're pushing RPA into thousands of bots, it's a nightmare managing all of those bots and you don't know where things are breaking. It's time for a better way: intelligent automation," said Anand Rao, global artificial intelligence lead at multinational professional services firm PwC. "Where data scientists go wrong is in building a deep learning transformer model to show off intellectual superiority as opposed to building the thing a company really needs. Be cognizant of the right tool for the right kind of task."

Today's organizations are heavily focused on cost control and improving efficiencies, both of which have driven RPA adoption. Yet, RPA is sometimes implemented without questioning whether the original process still makes sense. If the business process was badly designed to begin with, automation will just accelerate its execution. Alternatively, a business process may seem ripe for automation, but it may be that neither RPA nor intelligent automation is the best solution.

For example, an insurance company wanted to automate its paper-based claims filing process because it was slow, expensive and riddled with manual errors. Claimants mailed in paper forms, so the information contained in them needed to be retyped into a claims processing application. If the claims information could have been extracted digitally, the same process would have been done faster and with fewer errors. However, an even better solution would have been solving the data quality problem with a mobile app that allows claimants to enter and verify their information. In the end, the process wasn't automated, it was completely redesigned.

The lesson learned is to consider the business problem first, then consider technology options.


High-performance, low-cost machine learning infrastructure is accelerating innovation in the cloud – MIT Technology Review

Artificial intelligence and machine learning (AI and ML) are key technologies that help organizations develop new ways to increase sales, reduce costs, streamline business processes, and understand their customers better. AWS helps customers accelerate their AI/ML adoption by delivering powerful compute, high-speed networking, and scalable high-performance storage options on demand for any machine learning project. This lowers the barrier to entry for organizations looking to adopt the cloud to scale their ML applications.

Developers and data scientists are pushing the boundaries of technology and increasingly adopting deep learning, a type of machine learning based on neural network algorithms. These deep learning models are larger and more sophisticated, resulting in rising costs for the underlying infrastructure needed to train and deploy them.

To enable customers to accelerate their AI/ML transformation, AWS is building high-performance and low-cost machine learning chips. AWS Inferentia is the first machine learning chip built from the ground up by AWS for the lowest-cost machine learning inference in the cloud. In fact, Amazon EC2 Inf1 instances powered by Inferentia deliver 2.3x higher performance and up to 70% lower cost for machine learning inference than current-generation GPU-based EC2 instances. AWS Trainium is the second machine learning chip by AWS, purpose-built for training deep learning models, and will be available in late 2021.

Customers across industries have deployed their ML applications in production on Inferentia and seen significant performance improvements and cost savings. For example, Airbnb's customer support platform enables intelligent, scalable, and exceptional service experiences for its community of millions of hosts and guests across the globe. It used Inferentia-based EC2 Inf1 instances to deploy natural language processing (NLP) models that supported its chatbots. This led to a 2x improvement in performance out of the box over GPU-based instances.

With these innovations in silicon, AWS is enabling customers to train and execute their deep learning models in production easily with high performance and throughput at significantly lower costs.

Machine learning is an iterative process that requires teams to build, train, and deploy applications quickly, as well as train, retrain, and experiment frequently to increase the prediction accuracy of the models. When deploying trained models into their business applications, organizations need to also scale their applications to serve new users across the globe. They need to be able to serve multiple requests coming in at the same time with near real-time latency to ensure a superior user experience.

Emerging use cases such as object detection, natural language processing (NLP), image classification, conversational AI, and time series data rely on deep learning technology. Deep learning models are exponentially increasing in size and complexity, going from having millions of parameters to billions in a matter of a couple of years.

Training and deploying these complex and sophisticated models translates to significant infrastructure costs. Costs can quickly snowball to become prohibitively large as organizations scale their applications to deliver near real-time experiences to their users and customers.

This is where cloud-based machine learning infrastructure services can help. The cloud provides on-demand access to compute, high-performance networking, and large data storage, seamlessly combined with ML operations and higher level AI services, to enable organizations to get started immediately and scale their AI/ML initiatives.

AWS Inferentia and AWS Trainium aim to democratize machine learning and make it accessible to developers irrespective of experience and organization size. Inferentia's design is optimized for high performance, throughput, and low latency, which makes it ideal for deploying ML inference at scale.

Each AWS Inferentia chip contains four NeuronCores that implement a high-performance systolic array matrix multiply engine, which massively speeds up typical deep learning operations, such as convolution and transformers. NeuronCores are also equipped with a large on-chip cache, which helps cut down on external memory accesses, reducing latency and increasing throughput.
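A systolic array streams operands through a grid of multiply-accumulate cells, accumulating partial products in place. The hardware itself can't be shown here, but the tile-at-a-time, accumulate-in-place loop ordering below mimics that pattern in plain Python (illustrative only):

```python
def matmul_tiled(a, b, tile=2):
    """Matrix multiply, accumulated one tile of the shared dimension at a
    time, echoing how a systolic array streams operands through its
    multiply-accumulate grid."""
    n, k = len(a), len(a[0])
    m = len(b[0])
    c = [[0.0] * m for _ in range(n)]
    for kk in range(0, k, tile):          # stream one tile of the shared dim
        for i in range(n):
            for j in range(m):
                for t in range(kk, min(kk + tile, k)):
                    c[i][j] += a[i][t] * b[t][j]   # multiply-accumulate cell
    return c
```

The result is identical to an ordinary matrix multiply; what changes in hardware is that operands flow past fixed compute cells, so data movement (not arithmetic) is minimized, which is also why the on-chip cache matters.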

AWS Neuron, the software development kit for Inferentia, natively supports leading ML frameworks like TensorFlow and PyTorch. Developers can continue using the same frameworks and lifecycle development tools they know and love. For many of their trained models, they can compile and deploy them on Inferentia by changing just a single line of code, with no additional application code changes.

The result is a high-performance inference deployment that can easily scale while keeping costs under control.

Sprinklr, a software-as-a-service company, has an AI-driven unified customer experience management platform that enables companies to gather and translate real-time customer feedback across multiple channels into actionable insights. This results in proactive issue resolution, enhanced product development, improved content marketing, and better customer service. Sprinklr used Inferentia to deploy its NLP and some of its computer vision models and saw significant performance improvements.

Several Amazon services also deploy their machine learning models on Inferentia.

Amazon Prime Video uses computer vision ML models to analyze video quality of live events to ensure an optimal viewer experience for Prime Video members. It deployed its image classification ML models on EC2 Inf1 instances and saw a 4x improvement in performance and up to a 40% savings in cost as compared to GPU-based instances.

Another example is Amazon Alexa's AI- and ML-based intelligence, powered by Amazon Web Services, which is available on more than 100 million devices today. Alexa's promise to customers is that it is always becoming smarter, more conversational, more proactive, and even more delightful. Delivering on that promise requires continuous improvements in response times and machine learning infrastructure costs. By deploying Alexa's text-to-speech ML models on Inf1 instances, Amazon was able to lower inference latency by 25% and cost-per-inference by 30%, enhancing the service experience for the tens of millions of customers who use Alexa each month.

As companies race to future-proof their business by enabling the best digital products and services, no organization can fall behind on deploying sophisticated machine learning models to help innovate their customer experiences. Over the past few years, there has been an enormous increase in the applicability of machine learning for a variety of use cases, from personalization and churn prediction to fraud detection and supply chain forecasting.

Luckily, machine learning infrastructure in the cloud is unleashing new capabilities that were previously not possible, making it far more accessible to non-expert practitioners. That's why AWS customers are already using Inferentia-powered Amazon EC2 Inf1 instances to provide the intelligence behind their recommendation engines and chatbots and to get actionable insights from customer feedback.

With AWS's cloud-based machine learning infrastructure options suitable for various skill levels, it's clear that any organization can accelerate innovation and embrace the entire machine learning lifecycle at scale. As machine learning continues to become more pervasive, organizations are now able to fundamentally transform the customer experience, and the way they do business, with cost-effective, high-performance cloud-based machine learning infrastructure.

Learn more about how AWS's machine learning platform can help your company innovate here.

This content was produced by AWS. It was not written by MIT Technology Review's editorial staff.


Oracle’s new cloud AI services aim to bring out-of-the-box machine learning to the SMB masses – TechRepublic

The collection of new ready-to-use AI services for Oracle Cloud infrastructure are available now for applications like speech, vision, analytics and more.


Oracle has announced the availability of six new AI services for its Oracle Cloud Infrastructure platform. Designed for out-of-the-box operation, Oracle said that the new services will "make it easier for developers to apply AI services to their applications without requiring data science expertise," which is sure to be welcomed by small businesses looking to get into the AI game.

Many companies take on AI projects without knowing how to actualize their concepts, said Oracle Cloud Platform CTO Greg Pavlik. "It's essential for organizations to bridge the gap between the promise of AI and implementing AI that helps them achieve real results," Pavlik said. For many companies, these new services from Oracle are the sort of thing that could help reach their AI goals.


Implementation issues like a scarcity of data science expertise, difficulty in properly training AI models, breaking down data silos or simply getting products to work in a live environment are all major reasons why more organizations haven't leapt into larger AI products, Oracle said. "As a result [of those issues], companies spend valuable time and resources when they need AI that's consistent, responsive and capable of working in their business applications and operational environments to deliver actionable results," Oracle said.

The six new AI services are fully managed parts of Oracle Cloud Infrastructure and can be pre-trained on business-oriented data or trained in-house on an organization's own data. The services cover use cases including speech, vision, and analytics.


Oracle is hosting a webinar on November 3, 2021, at 1 p.m. Eastern time that will cover the new products, and interested parties can learn more at Oracle's AI and machine learning page.


TrainerRoad Announces Release of Adaptive Training Platform, Making Machine Learning-Powered Training Available to Cyclists – Outside Business Journal


RENO, Nevada (November 2, 2021): TrainerRoad, cycling's most complete and effective training system and the market leader in making athletes faster, announced today the official release of its Adaptive Training system, making TrainerRoad the world's leading machine learning-driven training platform for cyclists.

TrainerRoad's Adaptive Training system uses machine learning, science-based coaching principles, and an unprecedented data set to train athletes as individuals rather than offering cookie-cutter programs that don't account for variability in training. With Adaptive Training, TrainerRoad is able to recommend the workout each athlete needs at the right time to reach their goals.

"The full integration of Adaptive Training is the next step in the ongoing development of TrainerRoad's data-driven training ecosystem," TrainerRoad Communications Director Jonathan Lee said. "Thanks to a successful beta testing period, we've optimized the Adaptive Training experience and created a tool which puts the power of a seasoned coach in the hands of each TrainerRoad athlete. With every input (and we now have tens of millions on the TrainerRoad platform), Adaptive Training evolves and becomes better at making intelligent recommendations for individual athletes."

Starting today, all TrainerRoad athletes have access to this powerful tool. If an athlete is targeting a specific training goal or event, TrainerRoad's Plan Builder will quickly create a custom plan, and Adaptive Training will offer intelligent adjustments throughout training to maximize athlete success. For those who prefer the freedom of picking workouts as they go, TrainNow uses Adaptive Training's insights to automatically recommend workouts based on their current abilities. The more an athlete uses TrainerRoad, the better and more finely tuned their training becomes.
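TrainerRoad has not published its algorithm; purely to illustrate the general idea of adapting workout difficulty to recent performance, here is a hypothetical toy rule (all names and numbers are invented for this sketch):

```python
def recommend_next(target_difficulty, recent_outcomes, step=0.2):
    """Nudge the next workout's difficulty based on recent results.

    recent_outcomes: list of 'completed' / 'struggled' / 'failed',
    oldest first. Completed workouts raise the target, failures lower it,
    struggles leave it unchanged.
    """
    adjustment = {"completed": +step, "struggled": 0.0, "failed": -step}
    for outcome in recent_outcomes:
        target_difficulty += adjustment[outcome]
    return round(target_difficulty, 2)
```

A real adaptive system would learn these adjustments from millions of athlete-workout records rather than hard-coding them, which is where the machine learning comes in.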

"Adaptive Training is our most capable training system to date, but that doesn't mean innovation stops here," Lee said. "TrainerRoad is built on a foundation of always striving to improve and get better. We continue to push ourselves forward and develop the best tools possible to improve athlete performance."

New athletes looking to increase their fitness and get faster with TrainerRoad can sign up today risk-free and receive a full refund within the first 30 days if they're not 100% satisfied. To sign up or for more information on TrainerRoad, visit http://www.TrainerRoad.com.

Click here for Adaptive Training Media Kit

TrainerRoad is the leading training system for cyclists and triathletes who want to get faster. Athletes in over 150 countries use TrainerRoad's training calendar, apps, workouts, training plans, and analysis tools to elevate their performance. Additionally, TrainerRoad's forum, blog, and podcasts are trusted educational resources for athletes around the world. Learn more at http://www.TrainerRoad.com.


Turn your tech skills into machine learning expertise with this book and class bundle – TechRepublic

Now that you've got mid-level tech skills, it's time to direct them into one of the tech industry's most in-demand fields.


If your tech skills have reached the intermediate level and you're ready to turbocharge your career, then check out the Pay What You Want: The Comprehensive Machine Learning Bundle and learn the latest commercial machine learning methods.

The bundle consists of six books and four courses, and you get to choose what you want to pay. Here's the way it works.

Four of the books focus on Python. "Machine Learning for the Web" explores how to use Python to make better predictions, while "Python Machine Learning" will teach you how to generate the most useful data insights by using Python to build extremely powerful machine learning algorithms.

In "Advanced Machine Learning with Python" you can learn how to master Python's latest machine learning techniques and use them to solve problems in data science. Then you can find out how to quickly build potent machine learning models and implement predictive applications on a large scale in "Large Scale Machine Learning with Python."

In the two remaining books, you can learn how to use test-driven development to control machine learning algorithms in "Test Driven Machine Learning" and how to use Apache Spark to create a range of machine learning projects in "Apache Spark Machine Learning Blueprints."

The "Step-by-Step Machine Learning with Python" course covers the most effective tools and techniques for machine learning. And you'll find out how to perform real-world machine learning tasks in the "Python Machine Learning Solutions" course. The "Machine Learning with OpenCV and Python" class can teach you how to use Python to analyze and understand data.

The final course is "Machine Learning with TensorFlow." It explains how to use Google's TensorFlow library to solve machine learning issues.

These books and courses are offered by Packt Publishing, which has created more than 3,000 books and videos full of actionable information for IT professionals, from optimizing skills in existing tools to emerging technology. Thousands of these bundles have already been sold.

Don't miss this chance to get a great machine learning bargain. Get the Pay What You Want: The Comprehensive Machine Learning Bundle today (normally $843.92).

Prices subject to change.

Psychologists use machine learning algorithm to pinpoint top predictors of cheating in a relationship – PsyPost

According to a study published in the Journal of Sex Research, relationship characteristics like relationship satisfaction, relationship length, and romantic love are among the top predictors of cheating within a relationship. The researchers used a machine learning algorithm to pinpoint the top predictors of infidelity among over 95 different variables.

While a host of studies have investigated predictors of infidelity, the research has largely revealed mixed and often contradictory findings. Study authors Laura M. Vowels and her colleagues aimed to improve on these inconsistencies by using machine learning models. This approach would allow them to compare the relative predictability of various relationship factors within the same analyses.

"The research topic was actually suggested by my co-author, Dr. Kristen Mark, who was interested in understanding predictors of infidelity better. She has previously published several articles on infidelity and is interested in the topic," explained Vowels, a principal researcher for Blueheart.io and postdoctoral researcher at the University of Lausanne.

Vowels and her team pooled data from two different studies. The first data set came from a study of 891 adults, the majority of whom were married or cohabitating with a partner (63%). Around 54% of the sample identified as straight, 21% identified as bisexual, 11% identified as gay, and 7% identified as lesbian. A second data set was collected from both members of 202 mixed-sex couples who had been together for an average of 9 years, the majority of whom were straight (93%).

Data from the two studies included many of the same variables, such as demographic measures like age, race, sexual orientation, and education, in addition to assessments of participants' sexual behavior, sexual satisfaction, relationship satisfaction, and attachment styles. Both studies also included a measure of in-person infidelity (having interacted sexually with someone other than one's current partner) and online infidelity (having interacted sexually with someone other than one's current partner on the internet).

Using machine learning techniques, the researchers analyzed the data sets together first for all respondents and then separately for men and women. They then identified the top ten predictors for in-person cheating and for online cheating. Across both samples and among both men and women, higher relationship satisfaction predicted a lower likelihood of in-person cheating. By contrast, higher desire for solo sexual activity, higher desire for sex with one's partner, and being in a longer relationship predicted a higher likelihood of in-person cheating. In the second data set only, greater sexual satisfaction and romantic love predicted a lower likelihood of in-person infidelity.
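The paper's title describes the method as explainable machine learning. As a rough illustration of the general idea only, with entirely invented data and variable names rather than the study's actual 95-plus measures, ranking candidate predictors by importance with a random forest might look like this:

```python
# Hypothetical sketch: ranking predictors of a binary outcome by
# feature importance. The data, variable names, and effect sizes
# below are invented for illustration; the study's actual models
# and measures differ.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 1000

names = ["relationship_satisfaction", "relationship_length_years",
         "romantic_love", "sexual_desire"]
X = np.column_stack([
    rng.uniform(1, 7, n),   # relationship_satisfaction (1-7 scale)
    rng.uniform(0, 30, n),  # relationship_length_years
    rng.uniform(1, 7, n),   # romantic_love (1-7 scale)
    rng.uniform(1, 7, n),   # sexual_desire (1-7 scale)
])

# Synthetic outcome: driven mostly by satisfaction, somewhat by length.
logits = -1.2 * X[:, 0] + 0.15 * X[:, 1] + rng.normal(0, 1, n)
y = (logits > np.median(logits)).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranked = sorted(zip(names, model.feature_importances_),
                key=lambda pair: -pair[1])
for name, importance in ranked:
    print(f"{name}: {importance:.3f}")
```

The appeal of this kind of approach for the researchers' question is that all candidate predictors are compared on a common importance scale within a single model, rather than across separate, inconsistent analyses.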

When it came to online cheating, greater sexual desire and being in a longer relationship predicted a higher likelihood of cheating. Never having had anal sex with one's current partner decreased the likelihood of cheating online, a finding the authors say likely reflects more conservative attitudes toward sexuality. In the second data set only, higher relationship and sexual satisfaction also predicted a lower likelihood of cheating.

"Overall, I would say that there isn't one specific thing that would predict infidelity. However, relationship-related variables were more predictive of infidelity compared to individual variables like personality. Therefore, preventing infidelity might be more successful by maintaining a good and healthy relationship rather than thinking about specific characteristics of the person," Vowels told PsyPost.

Consistent with previous studies, relationship characteristics like romantic love and sexual satisfaction surfaced as top predictors of infidelity across both samples. The researchers say this suggests that the strongest predictors for cheating are often found within the relationship, noting that "addressing relationship issues may buffer against the likelihood of one partner going out of the relationship to seek fulfillment."

"These results suggest that intervening in relationships when difficulties first arise may be the best way to prevent future infidelity. Furthermore, because sexual desire was one of the most robust predictors of infidelity, discussing sexual needs and desires and finding ways to meet those needs in relationships may also decrease the risk of infidelity," the authors report.

The researchers emphasize that their analysis involved predicting past experiences of infidelity from an array of present-day assessments. They say that this design may have affected their findings, since couples who had previously dealt with cheating within the relationship may have worked through it by the time they completed the survey.

"The study was exploratory in nature and didn't include all the potential predictors," Vowels explained. "It also predicted infidelity in the past rather than current or future infidelity, so there are certain elements like relationship satisfaction that might have changed since the infidelity occurred. I think in the future it would be useful to look into other variables and also look at recent infidelity because that would make the measure of infidelity more reliable."

The study, "Is Infidelity Predictable? Using Explainable Machine Learning to Identify the Most Important Predictors of Infidelity," was authored by Laura M. Vowels, Matthew J. Vowels, and Kristen P. Mark.

A look at some of the AI and ML expert speakers at the iMerit ML DataOps Summit – TechCrunch

Calling all data devotees, machine-learning mavens and arbiters of AI. Clear your calendar to make room for the iMerit ML DataOps Summit on December 2, 2021. Join and engage with AI and ML leaders from multiple tech industries, including autonomous mobility, healthcare AI, technology and geospatial, to name just a few.

Attend for free: There's nothing wrong with your vision; the iMerit ML DataOps Summit is 100% free, but you must register here to attend.

The summit is in partnership with iMerit, a leading AI data solutions company providing high-quality data across computer vision, natural language processing and content that powers machine learning and artificial intelligence applications. So, what can you expect at this free event?

Great topics require great speakers, and we'll have those in abundance. Let's highlight just three of the many AI and ML experts who will take the virtual stage.

Radha Basu: The founder and CEO of iMerit leads an inclusive, global workforce of more than 5,300 people, 80% of whom come from underserved communities and 54% of whom are women. Basu has raised $23.5 million from investors, led the company to impressive revenue heights and has earned a long list of business achievements, awards and accolades.

Hussein Mehanna: Currently the head of Artificial Intelligence and Machine Learning at Cruise, Mehanna has spent more than 15 years successfully building and leading AI teams at Fortune 500 companies. He led the Cloud AI Platform organization at Google and co-founded the Applied Machine Learning group at Facebook, where his team added billions of revenue dollars.

DJ Patil: The former U.S. Chief Data Scientist in the White House Office of Science and Technology Policy, Patil has deep experience in data science and technology. He has held high-level leadership positions at RelateIQ, Greylock Partners, Color Labs, LinkedIn and eBay.

The iMerit ML DataOps Summit takes place on December 2, 2021. If your business involves data-, AI- and ML-driven technologies, this event is made for you. Learn, network and stay current with this fast-paced sector and do it for free. All you need to do is register. Start clicking.

Machine learning can revolutionize healthcare, but it also carries legal risks – Healthcare IT News

As machine learning and artificial intelligence have become ubiquitous in healthcare, questions have arisen about their potential impacts.

And as Matt Fisher, general counsel for the virtual care platform Carium, pointed out, those potential impacts can, in turn, leave organizations open to possible liabilities.

"It's still an emerging area," Fisher explained in an interview with Healthcare IT News. "There are a bunch of different questions about where the risks and liabilities might arise."

Fisher, who is moderating a panel on the subject at the HIMSS Machine Learning & AI for Healthcare event this December, described two main areas of legal concern: cybersecurity and bias. (HIMSS is the parent organization of Healthcare IT News.)

When it comes to cybersecurity, he said, the potential issues are not so much with the consequence of using the model as with the process of training it. "If big companies are contracting with a healthcare system, we're going to be working to develop new systems to analyze data and produce new outcomes," he said.

And all that data could represent a juicy target for bad actors. "If a health system is transferring protected health information over to a big tech company, not only do you have the privacy issue, there's also the security issue," he said. "They need to make sure their systems are designed to protect against attack."

Some hospitals that are victimized by ransomware have faced the double whammy of lawsuits from affected patients who say health systems should have taken more action to protect their information.

And a breach is a matter of when, not if, said Fisher. Synthetic or de-identified data, he added, are options that can help alleviate the risk, provided such data sets are sufficient for training.
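De-identification in practice is a regulated, multi-step process; purely as a toy illustration of the underlying idea (the field names are hypothetical, and this is not a compliance tool), removing direct identifiers from a record before it is used for model training might look like this:

```python
# Toy sketch only: strip direct identifiers from a patient record
# before sharing it for model training. Field names are hypothetical;
# real de-identification (e.g., under HIPAA) involves far more than
# dropping a fixed list of keys.
DIRECT_IDENTIFIERS = {"name", "address", "phone", "email", "ssn", "mrn"}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

patient = {
    "name": "Jane Doe",
    "mrn": "12345",
    "age": 54,
    "diagnosis_code": "E11.9",
}
print(deidentify(patient))  # identifiers gone, clinical fields kept
```

The point of the sketch is the trade-off Fisher describes: the less identifying information leaves the health system, the smaller the breach exposure, so long as what remains is still rich enough to train a useful model.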

"Anyone working with sensitive information needs to be aware of and thinking about that," he said.

Meanwhile, if a device relies on a biased algorithm and produces a less than ideal outcome for a patient, that could possibly lead to claims against the manufacturer or a health organization. Research has shown, for instance, that biased models may worsen the disproportionate impact the COVID-19 pandemic has already had on people of color.

"You've started to see electronic health record-related claims come up in malpractice cases," Fisher pointed out. If a patient experiences a negative result from a device at home, they could bring the claim against a manufacturer, he said.

And a clinician relying on a device in a medical setting who doesn't account for varied outcomes for different groups of people might be at risk of a malpractice lawsuit. "When you have these types of issues widely reported and talked about, it presents more of a favorable landscape to try and find people who have been harmed," said Fisher.

In the next few years, he said, "We'll start to see those claims arise."

Addressing and preventing such legal risks depends on the situation, said Fisher. When an organization is going to subscribe to or implement a tool, he said, it should screen the vendor: Ask questions about how an algorithm was developed and how the system was trained, including whether it was tested on representative populations.

"If it's going to be directly interacting with patient care, consider building [the device's functionality] into informed consent if appropriate," he said.

Fisher said he hopes panel attendees leave the discussion inspired to engage in discourse about the legal risks at their own organizations. "I hope it spurs people to think about it and to start a dialogue," he said.

Ultimately, he said, while an organization can take steps to reduce liability, it's not possible to fully shield yourself from the threat of legal action. "You can never prevent a case from being brought," he said, but "you can try to set yourself up for the best footing."

At the HIMSS Machine Learning & AI for Healthcare event, Fisher will continue the discussion with Baker and McKenzie LLP's Bradford Newman and Dianne Bourque of Mintz Levin Cohn Ferris Glovsky and Popeo PC. Their virtual panel, "Sharing Data and Ethical Challenges: AI and Legal Risks," is scheduled for 2:30 p.m. ET on Tuesday, December 14.

Kat Jercich is senior editor of Healthcare IT News. Twitter: @kjercich. Email: kjercich@himss.org. Healthcare IT News is a HIMSS Media publication.
