The 7 Best Websites to Help Kids Learn About AI and Machine … – MUO – MakeUseOf

If you have kids or teach kids, you likely want them to learn the latest technologies to help them succeed in school and their future jobs. With rapid tech advancements, artificial intelligence and machine learning are essential skills you can teach young learners today.

Thankfully, you can easily access free and paid online resources to support your kids' and teens' learning journey. Here, we explore some of the best e-learning websites for students to gain experience in AI and ML technology.

Do you want to empower your child's creativity and AI skills? You might want to schedule a demo session with Kubrio. The alternative education website offers remote learning experiences on the latest technologies like ChatGPT.

Students eight to 18 years old learn about diverse subjects at their own pace. At the same time, they get to team up with learners who share their interests.

Kubrio's AI Prompt Engineering Lab teaches your kids to use the best online AI tools for content creation. They'll learn to develop captivating stories, interactive games, professional-quality movies, engaging podcasts, catchy songs, aesthetic designs, and software.

Kubrio also gamifies AI learning in the form of "Quests." Students select their Quest, complete their creative challenge, build a portfolio, and earn points and badges. This program is currently in beta, but you can sign them up for the private beta for the following Quests:

Explore the Create&Learn website if you want to introduce your kids to the latest technological advancements at an early age. The e-learning site is packed with classes that help kids discover the fascinating world of robots, artificial intelligence, and machine learning.

Depending on their grade level, your child can join AI classes such as Hello Tech!, AI Explorers, Python for AI, and AI Creators. The classes are live online, interactive, and hands-on. Students from grades two up to 12 learn how AI works and can be applied to the latest technology, such as self-driving cars, face recognition, and games.

Create&Learn's award-winning curriculum was designed by experts from well-known institutions like MIT and Stanford. If you aren't sure your kids will enjoy the sessions, you can try a free introductory class (this option is available for select classes only).

One of the best ways for students to learn ML and AI is through hands-on beginner projects. Machine Learning for Kids gives students hands-on training with machine learning, a subfield of AI that enables computers to learn from data and experience.

Your kids will train a computer to recognize text, pictures, numbers, or sounds. For instance, you can train the model to distinguish between images of a happy person and a sad person using free photos from the internet. We tried this, and then tested the model with a new photo, and it was able to successfully recognize the uploaded image as a happy person.
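The train-then-test loop the site walks students through can be sketched in miniature. This is not Machine Learning for Kids' own code; it is a toy nearest-centroid classifier over two invented numeric features (think mouth curvature and eyebrow angle) standing in for real photos:

```python
# Toy "happy vs. sad" classifier: nearest centroid over hypothetical
# numeric features standing in for real image data.

def centroid(points):
    """Average the feature vectors of one class."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def train(examples):
    """examples: dict mapping label -> list of feature vectors."""
    return {label: centroid(pts) for label, pts in examples.items()}

def predict(model, x):
    """Pick the label whose class centroid is closest to x."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda label: dist(model[label], x))

# Training data: [mouth_curvature, eyebrow_angle], both invented.
examples = {
    "happy": [[0.9, 0.2], [0.8, 0.1], [0.7, 0.3]],
    "sad":   [[0.1, 0.8], [0.2, 0.9], [0.3, 0.7]],
}
model = train(examples)
print(predict(model, [0.85, 0.15]))  # a new "photo" near the happy cluster
```

The site's real models work on pixels rather than two hand-picked numbers, but the idea is the same: average what each class looks like, then assign new inputs to the closest class.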

Afterward, your child will try their hand at the Scratch, Python, or App Inventor coding platform to create projects and build games with their trained machine learning model.

The online platform is free, simple, and user-friendly. You'll get access to worksheets, lesson plans, and tutorials, so you can learn with your kids. Your child will also be guided through the main steps of completing a simple machine learning project.

If you and your kids are curious about how artificial intelligence and machine learning work, go through Experiments with Google. The free website explains machine learning and AI through simple, interactive projects for learners of different ages.

Experiments with Google is a highly engaging platform that will give students hours of fun and learning. Your child will learn to build a DIY sorter using machine learning, create and chat with a fictional character, conduct their own orchestra, use a camera to bring their doodles to life, and more.

Many of the experiments don't require coding. Choose the projects appropriate for your child's level. If you're working with younger kids, try Scroobly; Quick, Draw!; and LipSync with YouTube. Meanwhile, teens can learn how experts build a neural network, or explore other, more complex AI projects.

Do you want to teach your child how to create amazing things with AI? If yes, then AI World School is an ideal edtech platform for you. The e-learning website offers online and self-learning AI and coding courses for kids and teens seven years old and above.

AI World School courses are designed by a team of educators and technologists. The courses cover AI Novus (an introduction to AI for ages seven to ten), Virtual Driverless Car, Playful AI Explorations Using Scratch, and more.

The website also provides affordable resources for parents and educators who want to empower their students to be future-ready. Just visit the Project Hub to order AI projects for $1 to $3; you can filter by age group, skill level, and software.

Kids and teens can also try the free games when they click Play AI for Free. Converse with an AI model named Zhorai, teach it about animals, and let it guess where these animals live. Students can also ask an AI bot about the weather in any city, or challenge it to a competitive game of tic-tac-toe.

AIClub is a team of AI and software experts with real-world experience. It was founded by Dr. Nisha Talagala, a computer science Ph.D. graduate from UC Berkeley. After failing to find a fun and easy program to help her 11-year-old daughter learn AI, she built her own.

AIClub's progressive curriculum is designed for elementary, middle school, and high school students. Your child will learn to create unique projects using AI and coding. Start them young, and they can showcase their own AI portfolio to the world.

You can also opt to enroll your child in the one-on-one class with expert mentors. This personalized online class enables students to research topics they care about on a flexible schedule. They'll also receive feedback and advice from their mentor to improve their research.

What's more, students enrolled in one-on-one classes can enter their research in competitions or present their findings at a conference. According to the AIClub Competition Winners page, several students in the program have already been awarded in national and international competitions.

Have you ever wondered how machines can learn from data and perform tasks that humans can do? Check out Teachable Machine, a website by Google Developers that lets you create your own machine learning models in minutes.

Teachable Machine is a fun way for kids and teens to start learning the concepts and applications of machine learning. You don't need any coding skills or prior knowledge, just your webcam, microphone, or images.

Students can play with images, sounds, poses, text, and more. They'll understand how tweaking the settings and data changes the performance and accuracy of the models.
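That lesson can be shown in a few lines of code. The sketch below is not Teachable Machine's implementation; it is a toy 1-nearest-neighbour classifier over a single invented feature, showing how adding training examples changes held-out accuracy:

```python
# Toy demonstration: more (and cleaner) training data raises accuracy.
# Each sample is (feature_value, label); all numbers are invented.

def nearest_label(train, x):
    """1-nearest-neighbour: return the label of the closest example."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

def accuracy(train, test):
    """Fraction of held-out samples labelled correctly."""
    hits = sum(nearest_label(train, x) == label for x, label in test)
    return hits / len(test)

test_set = [(0.1, "cat"), (0.8, "dog"), (0.97, "dog")]

small = [(0.9, "cat"), (1.0, "dog")]                      # one noisy "cat"
bigger = small + [(0.0, "cat"), (0.05, "cat"), (0.75, "dog")]

print(accuracy(small, test_set))   # the noisy example misleads the model
print(accuracy(bigger, test_set))  # extra examples outvote the noise
```

With only the noisy two-example training set, the model mislabels a test sample; adding a few more representative examples brings accuracy to 100% on this toy test set, which is the effect students see in Teachable Machine when they add webcam samples.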

Teachable Machine is a learning tool and a creative platform that unleashes the imagination. Your child can use their models to create games, art, music, or anything else they can dream of. If they need inspiration, point them to the gallery of projects created by other users.

Artificial intelligence and machine learning are rapidly transforming the world. If you want your kids and teens to learn about these fascinating fields and develop their critical thinking skills and creativity, these websites can help them.

Whether you want to explore Experiments with Google, AI World School, or other sites in this article, you'll find plenty of resources and fun challenges to spark your child's curiosity and imagination. There are also ways to use existing AI tools in school so that they can become more familiar with them.


The 11 Best AI Tools for Data Science to Consider in 2024 – Solutions Review

Solutions Review's listing of the best AI tools for data science is an annual sneak peek of the top tools included in our Buyer's Guide for Data Science and Machine Learning Platforms. Information was gathered via online materials and reports, conversations with vendor representatives, and examinations of product demonstrations and free trials.

The editors at Solutions Review have developed this resource to assist buyers in search of the best AI tools for data science to fit the needs of their organization. Choosing the right vendor and solution can be a complicated process, one that requires in-depth research and often comes down to more than just the solution and its technical capabilities. To make your search a little easier, we've profiled the best AI tools for data science all in one place. We've also included platform and product line names and introductory software tutorials straight from the source so you can see each solution in action.

Note: The best AI tools for data science are listed in alphabetical order.

Platform: DataRobot Enterprise AI Platform

Related products: Paxata Data Preparation, Automated Machine Learning, Automated Time Series, MLOps

Description: DataRobot offers an enterprise AI platform that automates the end-to-end process for building, deploying, and maintaining AI. The product is powered by open-source algorithms and can be leveraged on-prem, in the cloud, or as a fully managed AI service. DataRobot includes several independent but fully integrated tools (Paxata Data Preparation, Automated Machine Learning, Automated Time Series, MLOps, and AI applications), and each can be deployed in multiple ways to match business needs and IT requirements.

Platform: H2O Driverless AI

Related products: H2O 3, H2O AutoML for ML, H2O Sparkling Water for Spark Integration, H2O Wave

Description: H2O.ai offers a number of AI and data science products, headlined by its commercial platform H2O Driverless AI and its fully open-source, distributed in-memory machine learning platform H2O, which offers linear scalability. H2O supports widely used statistical and machine learning algorithms, including gradient boosted machines, generalized linear models, deep learning, and more. H2O.ai has also developed AutoML functionality that automatically runs through all the algorithms to produce a leaderboard of the best models.

Platform: IBM Watson Studio

Related products: IBM Cloud Pak for Data, IBM SPSS Modeler, IBM Decision Optimization, IBM Watson Machine Learning

Description: IBM Watson Studio enables users to build, run, and manage AI models at scale across any cloud. The product is part of IBM Cloud Pak for Data, the company's main data and AI platform. The solution lets you automate AI lifecycle management, govern and secure open-source notebooks, prepare and build models visually, deploy and run models through one-click integration, and manage and monitor models with explainable AI. IBM Watson Studio offers a flexible architecture that allows users to utilize open-source frameworks like PyTorch, TensorFlow, and scikit-learn.

https://www.youtube.com/watch?v=rSHDsCTl_c0

Platform: KNIME Analytics Platform

Related products: KNIME Server

Description: KNIME Analytics Platform is an open-source platform for data science. It enables the creation of visual workflows via a drag-and-drop graphical interface that requires no coding. Users can choose from more than 2,000 nodes to build workflows, model each step of analysis, control the flow of data, and ensure work is current. KNIME can blend data from any source and shape data to derive statistics, clean data, and extract and select features. The product leverages AI and machine learning and can visualize data with classic and advanced charts.

Platform: Looker

Related products: Powered by Looker

Description: Looker offers a BI and data analytics platform built on LookML, the company's proprietary modeling language. The product's application for web analytics touts filtering and drilling capabilities, enabling users to dig into row-level details at will. Embedded analytics in Powered by Looker utilizes modern databases and an agile modeling layer that allows users to define data and control access. Organizations can use Looker's full RESTful API or the schedule feature to deliver reports by email or webhook.

Platform: Azure Machine Learning

Related products: Azure Data Factory, Azure Data Catalog, Azure HDInsight, Azure Databricks, Azure DevOps, Power BI

Description: The Azure Machine Learning service lets developers and data scientists build, train, and deploy machine learning models. The product supports all skill levels via code-first tooling, a drag-and-drop designer, and automated machine learning. It also features expansive MLOps capabilities that integrate with existing DevOps processes. The service touts responsible machine learning, so users can understand models with interpretability and fairness, as well as protect data with differential privacy and confidential computing. Azure Machine Learning supports open-source frameworks and languages like MLflow, Kubeflow, ONNX, PyTorch, TensorFlow, Python, and R.

Platform: Qlik Analytics Platform

Related products: QlikView, Qlik Sense

Description: Qlik offers a broad spectrum of BI and analytics tools, headlined by the company's flagship offering, Qlik Sense. The solution enables organizations to combine all their data sources into a single view. The Qlik Analytics Platform allows users to develop, extend, and embed visual analytics in existing applications and portals. Embedded functionality is done within a common governance and security framework. Users can build and embed Qlik as simple mashups or integrate it within applications, information services, or IoT platforms.

Platform: RapidMiner Studio

Related products: RapidMiner AI Hub, RapidMiner Go, RapidMiner Notebooks, RapidMiner AI Cloud

Description: RapidMiner offers a data science platform that enables people of all skill levels across the enterprise to build and operate AI solutions. The product covers the full lifecycle of the AI production process, from data exploration and data preparation to model building, model deployment, and model operations. RapidMiner provides the depth that data scientists need but simplifies AI for everyone else via a visual user interface that streamlines the process of building and understanding complex models.

Platform: SAP Analytics Cloud

Related products: SAP BusinessObjects BI, SAP Crystal Solutions

Description: SAP offers a broad range of BI and analytics tools in both enterprise and business-user-driven editions. The company's flagship BI portfolio is delivered via on-prem (BusinessObjects Enterprise) and cloud (BusinessObjects Cloud) deployments atop the SAP HANA Cloud. SAP also offers a suite of traditional BI capabilities for dashboards and reporting. The vendor's data discovery tools are housed in the BusinessObjects solution, while additional functionality, including self-service visualization, is available through the SAP Lumira tool set.

Platform: Sisense

Description: Sisense makes it easy for organizations to reveal business insight from complex data in any size or format. The product allows users to combine data and uncover insights in a single interface without scripting, coding or assistance from IT. Sisense is sold as a single-stack solution with a back end for preparing and modeling data. It also features expansive analytical capabilities, and a front-end for dashboarding and visualization. Sisense is most appropriate for organizations that want to analyze large amounts of data from multiple sources.

Platform: Tableau Desktop

Related products: Tableau Prep, Tableau Server, Tableau Online, Tableau Data Management

Description: Tableau offers an expansive visual BI and analytics platform and is widely regarded as the major player in the marketplace. The company's analytics software portfolio is available through three main channels: Tableau Desktop, Tableau Server, and Tableau Online. Tableau connects to hundreds of data sources and is available on-prem or in the cloud. The vendor also offers embedded analytics capabilities, and users can visualize and share data with Tableau Public.


Using Machine Learning to Predict the 2023 Kentucky Derby … – DataDrivenInvestor

Can the forecasted weather be used to predict the winning race time?

My hypothesis is that the weather has a major impact on the Kentucky Derby's winning race time. In this analysis, I will use the Kentucky Derby's forecasted weather to predict the winning race time using machine learning (ML). In previous articles, I discussed the importance of using explainable ML in a business setting to provide business insights and help with buy-in and change management. In this analysis, because I'm striving purely for accuracy, I will disregard this advice and go directly to the more complex, but accurate, black-box Gradient Boosted Machine (GBM), because we want to win some money!

The data I will use comes from the National Weather Service:

# Read in Data #
data <- read.csv("...KD Data.csv")

# Declare Year Variables #
year <- data[,1]

# Declare numeric x variables #
numeric <- data[,c(2,3,4)]

# Scale numeric x variables #
scaled_x <- scale(numeric)
# check that we get mean of 0 and sd of 1 #
colMeans(scaled_x)
apply(scaled_x, 2, sd)

# One-Hot Encoding #
data$Weather <- as.factor(data$Weather)
xfactors <- model.matrix(data$Year ~ data$Weather)[, -1]

# Bring prepped data all back together #
scaled_df <- as.data.frame(cbind(year, y, scaled_x, xfactors))

# Isolate pre-2023 data #
old_data <- scaled_df[-1,]
new_data <- scaled_df[1,]

# Gradient Boosted Machine #
# Find Max Interaction Depth #
floor(sqrt(NCOL(old_data)))

# find index for n trees with minimum CV error #
best.iter <- gbm.perf(tree_mod, method = "OOB", plot.it = TRUE, oobag.curve = TRUE, overlay = TRUE)
print(best.iter)
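The R snippets above stop short of the gbm() call that actually fits tree_mod; that line did not survive extraction. As an illustration of the same workflow, here is a hand-rolled gradient-boosting sketch in Python on a single made-up feature (the temperatures and times below are invented, not the article's data): the model starts from the mean and repeatedly fits depth-1 "stumps" to the residuals, which is what a GBM does under squared loss.

```python
# Minimal gradient boosting for squared loss, mirroring the article's
# GBM workflow. All data below is invented for illustration.

def fit_stump(xs, residuals):
    """Best single threshold split minimizing squared error on residuals."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda x: lmean if x <= t else rmean

def fit_gbm(xs, ys, n_trees=200, lr=0.1):
    """Start from the mean, then repeatedly fit stumps to the residuals."""
    base = sum(ys) / len(ys)
    stumps, preds = [], [base] * len(ys)
    for _ in range(n_trees):
        residuals = [y - p for y, p in zip(ys, preds)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        preds = [p + lr * stump(x) for p, x in zip(preds, xs)]
    return lambda x: base + lr * sum(s(x) for s in stumps)

# Hypothetical history: race-day temperature (F) vs. winning time (s).
temps = [55, 60, 65, 70, 75, 80]
times = [124.0, 123.2, 122.5, 122.0, 122.8, 123.5]
model = fit_gbm(temps, times)
print(round(model(72), 2))  # prediction for a hypothetical 72F forecast
```

The learning-rate shrinkage (lr) plays the same role as gbm's shrinkage parameter, and choosing n_trees is the job that gbm.perf does above with out-of-bag estimates.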

In this article, I chose a more accurate, but complex, black-box model to predict the Kentucky Derby's winning race time. This is because I don't care about generating insights or winning buy-in or change management; rather, I want to use the model that is the most accurate so I can make a data-driven gamble. In most business cases, you will give up accuracy for explainability; however, there are some instances (like this one) in which accuracy is the primary requirement of a model.

This prediction is based on the forecasted weather for Saturday, May 6, taken on Thursday, May 4, so obviously it should be taken with a grain of salt. As everyone knows, even with huge amounts of technology, predicting the weather is very difficult. Using forecasted weather to predict the winning race time adds even more uncertainty. That being said, I will take either the over or the under that matches my predicted winning time of 122.12 seconds.


Use of machine learning to assess the prognostic utility of radiomic … – Nature.com



IEEE Computer Society Emerging Technology Fund Recipient … – Benzinga

Presentation at The Eleventh International Conference on Learning Representations (ICLR) debuts new findings for end-to-end neural network Trojan removal techniques

LOS ALAMITOS, Calif., May 5, 2023 /PRNewswire/ -- Today, at the virtual Backdoor Attacks and Defenses in Machine Learning (BANDS) workshop during The Eleventh International Conference on Learning Representations (ICLR), participants in the IEEE Trojan Removal Competition presented their findings and success rates at effectively and efficiently mitigating the effects of neural Trojans while maintaining high performance. Evaluated on clean accuracy, poisoned accuracy, and attack success rate, the competition's winning team from the Harbin Institute of Technology in Shenzhen, with its submission HZZQ Defense, formulated a highly effective solution, achieving a 98.14% poisoned accuracy rate and only a 0.12% attack success rate. The group will be awarded the first-place prize of $5,000 USD.
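The three evaluation metrics can be made concrete with a short sketch. This is not the competition's actual harness; the model, trigger, and labels below are toy stand-ins chosen for illustration:

```python
# Sketch of the three evaluation metrics: clean accuracy, poisoned
# accuracy, and attack success rate. Toy setup: inputs are integers,
# the backdoor "trigger" is a negative sign, and the attacker's target
# label is "even".

def clean_accuracy(model, inputs, labels):
    """Share of clean inputs the model labels correctly."""
    return sum(model(x) == y for x, y in zip(inputs, labels)) / len(labels)

def poisoned_accuracy(model, triggered, true_labels):
    """Share of trigger-stamped inputs still given their true label."""
    hits = sum(model(x) == y for x, y in zip(triggered, true_labels))
    return hits / len(true_labels)

def attack_success_rate(model, triggered, target):
    """Share of trigger-stamped inputs pushed to the attacker's target
    label (callers exclude samples whose true label is already the target)."""
    return sum(model(x) == target for x in triggered) / len(triggered)

backdoored = lambda x: "even" if x < 0 or x % 2 == 0 else "odd"  # trigger flips label
cleaned = lambda x: "even" if abs(x) % 2 == 0 else "odd"         # trigger ignored

print(attack_success_rate(backdoored, [-1, -3, -5], "even"))  # backdoor fires
print(attack_success_rate(cleaned, [-1, -3, -5], "even"))     # backdoor removed
```

A successful removal, like the winning entry's, keeps clean and poisoned accuracy high while driving the attack success rate toward zero.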

"The IEEE Trojan Removal Competition is a fundamental solution to improve the trustworthy implementation of neural networks from implanted backdoors," said Prof. Meikang Qiu, chair of the IEEE Smart Computing Special Technical Committee (SCSTC) and full professor at the Beacom College of Computer and Cyber Sciences at Dakota State University, Madison, S.D., U.S.A. He was also named a Distinguished Contributor of the IEEE Computer Society in 2021. "This competition's emphasis on Trojan Removal is vital because it encourages research and development efforts toward enhancing an underexplored but paramount issue."

In 2022, IEEE CS established its Emerging Technology Fund and, for the first time, awarded $25,000 USD to IEEE SCSTC for the "Annual Competition on Emerging Issues of Data Security and Privacy (EDISP)," which yielded the IEEE Trojan Removal Competition (TRC '22). The proposal offered a novel take on a cybersecurity topic: unlike most existing competitions, which focus only on backdoor model detection, this competition encouraged participants to explore solutions that can enhance the security of the neural networks themselves. By developing general, effective, and efficient white-box Trojan removal techniques, participants have contributed to building trust in deep learning and artificial intelligence, especially for pre-trained models in the wild, which is crucial to protecting artificial intelligence from potential attacks.

With 1,706 valid submissions from 44 teams worldwide, six groups successfully developed techniques that achieved better results than the state-of-the-art baseline metrics published in top machine-learning venues. The benchmarks summarizing the models and attacks used during the competition are being released to enable additional research and evaluation.

"We're hoping that this benchmark provides diverse and easy access to model settings for people coming up with new AI security techniques," shared Yi Zeng, competition chair of the IEEE TRC'22 and research assistant at the Bradley Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg, Va., U.S.A. "This competition has yielded new data sets consisting of poisoned pre-trained models of different architectures, trained on diverse data distributions with really high attack success rates, and now developers can explore new defense methods and get rid of remaining vulnerabilities."

During the competition, collective participant results yielded two key findings:

These findings point to the fact that for the time being, a generalized approach to mitigating attacks on neural networks is not advisable. Zeng emphasized the urgent need for a comprehensive AI security solution: "As we continue to witness the widespread impact of pre-trained foundation models on our daily lives, ensuring the security of these systems becomes increasingly critical. We hope that the insights gleaned from this competition, coupled with the release of the benchmark, will galvanize the community to develop more robust and adaptable security measures for AI systems."

"As the world becomes more dependent on AI and machine learning, it is important to deal with the security and privacy issues that these technologies bring up," said Qiu. "The IEEE TRC '22 competition for EDISP has made a big difference in this area. I'd like to offer a special thanks to my colleagues on the steering committee, Professors Ruoxi Jia from Virginia Tech, Neil Gong from Duke, Tianwei Zhang from Nanyang Technological University, Shu-Tao Xia from Tsinghua University, and Bo Li from the University of Illinois Urbana-Champaign, for their help and support."

Ideas and insights coming out of the event, along with the public benchmark data, will help make the future of machine learning and artificial intelligence safer and more dependable. The team plans to run the competition for a second year, and those findings will further strengthen the security parameters of neural networks.

"This is precisely the kind of work we want the Emerging Technology Fund to fuel," said Nita Patel, 2023 IEEE Computer Society President. "It goes a long way toward bolstering iterative developments that will strengthen the security of machine learning and AI platforms as the technologies advance."

For more information about the Emerging Technology Grants Program overall, visit https://www.computer.org/communities/emerging-technology-fund.

About IEEE Trojan Removal Competition
The IEEE TRC'22 aims to encourage the development of innovative end-to-end neural network backdoor removal techniques to counter backdoor attacks. For more information, visit https://www.trojan-removal.com/.

About IEEE Computer Society
The IEEE Computer Society is the world's home for computer science, engineering, and technology. A global leader in providing access to computer science research, analysis, and information, the IEEE Computer Society offers a comprehensive array of unmatched products, services, and opportunities for individuals at all stages of their professional careers. Known as the premier organization that empowers the people who drive technology, the IEEE Computer Society offers international conferences, peer-reviewed publications, a unique digital library, and training programs. Visit computer.org for more information.

SOURCE IEEE Computer Society


Machine Learning to Estimate Breast Cancer Recurrence | CLEP – Dove Medical Press

Introduction

Cancer recurrence is considered to be an important cancer outcome metric to measure the burden of the disease and success of (neo)adjuvant therapies. Despite this, high-quality breast cancer recurrence rates currently remain unknown in most countries, including Belgium. To date, cancer recurrence is not systematically registered in most population-based cancer registries, due to the difficulty and labor-intensity of registering follow-up for recurrences.

Recurrence definitions used for registration purposes differ among countries, due to the lack of consensus regarding a standardized clinical definition. Defining recurrence clinically is a challenge, since various methods exist to detect recurrences after (neo)adjuvant treatment of a patient, such as physical examination, pathological examination, imaging, or tumor markers. Unlike the guidelines and definitions that currently exist in the clinical trial setting,1,2 no guidelines are set to correctly and consistently register a recurrence in a patient with stage I–III breast cancer at diagnosis.

Real-world recurrence data could give an estimation of cancer burden and efficacy of cancer treatment modalities outside a conventional clinical trial setting, which could eventually lead to improvements in quality of care.3,4 Administrative data from health insurance companies on medical treatments and procedures, also known as billing claims, and hospital discharge data could represent an alternative source for the assessment of disease evolution after breast cancer treatment.

Recently, machine learning algorithms based on classification and regression trees (CART) have been developed to detect cancer recurrence at the population level using claims data.5 However, research teams in only a limited number of countries (USA,6,7 Canada,8,9 Denmark,10,11 and Sweden12) have successfully constructed algorithms to detect breast cancer recurrences, and only for a small number of centers. Our aim was to develop, test, and validate an algorithm using administrative data features, allowing the estimation of breast cancer recurrence rates for all Belgian patients with breast cancer.

To construct and validate an algorithm to detect distant recurrences, female patients with breast cancer diagnosed between January 1, 2009 and December 31, 2014 were included from nine different centers located in all three Belgian regions. We did not include patients with stage IV breast cancer at diagnosis, patients with a history of cancer (any second primary cancer, multiple tumors, and contralateral tumors), or patients who could not be coupled to administrative data sources. All breast cancers, regardless of molecular subtype, were included. Among the nine centers were centers from the Flemish region (University Hospitals Leuven, General Hospital Groeninge, Jessa Hospital, Imelda Hospital, and AZ Delta), Brussels-Capital region (Cliniques universitaires Saint-Luc and Institut Jules Bordet) and Walloon region (CHR Mons-Hainaut and CHU UCL Namur). For all nine centers, 300 patients were included per center, by randomly selecting from the study population 50 patients per incidence year. The study population of six centers was divided by randomization (60–40% split-sample validation) into a training set to develop the algorithm, and an independent test set to perform an internal validation.13 The algorithm was additionally validated with an external validation set of the three remaining centers, to check reproducibility of the algorithm in a dataset with patients from other centers.
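The split-sample scheme described above can be sketched as follows. This is an illustrative toy example with hypothetical patient counts and column names, not the authors' SAS workflow.

```python
# Illustrative 60-40 split-sample validation, stratified on the
# recurrence label so both sets keep a similar event rate.
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical toy cohort: 100 patients, ~8% recurrence rate,
# roughly mirroring the training data described in the article.
patients = pd.DataFrame({
    "patient_id": range(100),
    "recurrence": [1 if i < 8 else 0 for i in range(100)],
})

train, test = train_test_split(
    patients, test_size=0.4, stratify=patients["recurrence"], random_state=0
)
print(len(train), len(test))  # 60 40
```

Stratifying on the outcome keeps the proportion of recurrences comparable between the training and internal validation sets.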

For the selection of the nine centers, we aimed for a reasonable variety of center characteristics based on teaching vs non-teaching hospital, the spread across the three regions in Belgium, and center size.

For each patient in the study population, recurrence status (yes, no, unknown) and recurrence date (day, month, year) were extracted from electronic medical files and reviewed by trained data managers from each of the nine hospitals. Recurrence was defined as the occurrence of a distant recurrence or metastasis between 120 days after the primary diagnosis and within 10 years of follow-up after diagnosis or end of study (December 31, 2018). Data managers were instructed to consider death due to breast cancer in our definition of a recurrence. Loco-regional recurrence was not considered as an outcome in our study. Both patients with a progression (without a disease-free interval) and patients with a recurrence (with a disease-free interval) were counted as events in our definition of recurrence. Patients with an unknown recurrence status, for example due to lack of follow-up, were excluded from the analysis. Patients with a recurrence within 120 days were considered de novo stage IV and therefore excluded, because interference of first-line treatment complicates recurrence detection: starting from diagnosis might cause more false positive recurrence cases, as the treatment of the initial breast cancer overlaps with the immediate first-line treatment of metastatic disease. The recurrence diagnosis date was the time-point (day, month, and year) confirmed by pathological examination, imaging (CT, PET-CT, bone scintigraphy, or MRI scan), or defined by physicians in the multidisciplinary team (MDT) meeting.

In the course of an extensive data-linking process with pseudonymization of the patient data, the recurrence data from the hospitals (i.e., the gold standard) were linked to several population-based data sources. These included cancer registration data from the Belgian Cancer Registry (BCR), and administrative data sources, including claims or reimbursement data (InterMutualistic Agency, IMA),14 hospital discharge data (Technische Cel, TCT),15 information on vital status (Crossroads Bank for Social Security, CBSS)16 and cause of death (Agentschap Zorg en Gezondheid, Observatoire de la Santé et du Social de Bruxelles-Capitale, and Agence pour une Vie de Qualité, AVIQ).17 Information on data sources and data used is presented in Appendix 1.

To build a robust algorithm to detect distant recurrences, pre-processing and extraction of features were performed. Expert-driven features to potentially detect recurrences in administrative data were created based on recommendations from breast oncologists (P.N. and H.W.). First, a comprehensive list of reimbursement codes for diagnostic and therapeutic procedures and medications was selected, and code groups were created based on their relevance for the diagnosis and/or treatment of distant metastasis in breast cancer patients (See Appendix 2).

Potential features were further refined based on the exploration of data from patients with a recurrence, including time-frames starting from time points after diagnosis (0 days, 90 days, 160 days, 270 days, and 365 days after diagnosis). We assessed different time-frames to obtain the most accurate feature to detect recurrences, and because starting from the date of diagnosis might result in noise from the treatment of the initial breast cancer. We additionally created features based on the count of codes, by assessing the maximum number of codes per year or per pre-defined time-frame (starting from 0, 90, 160, 270, and 365 days after diagnosis) (Table 1). The best performing time-frame was selected for each feature by maximizing Youden's J index (J = sensitivity + specificity - 1).18
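Youden's J makes the time-frame comparison concrete. The sketch below, with made-up confusion counts rather than values from the study, picks the candidate start point with the highest J:

```python
# Youden's J = sensitivity + specificity - 1; higher is better.
def youden_j(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity + specificity - 1

# Hypothetical confusion counts (tp, fn, tn, fp) for the same feature
# evaluated over two different start points after diagnosis.
candidates = {
    "0 days": (60, 18, 800, 97),
    "270 days": (70, 8, 880, 17),
}
best = max(candidates, key=lambda k: youden_j(*candidates[k]))
print(best)  # "270 days" wins in this toy example
```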

Table 1 List of Potential Markers for Recurrence (Available Within Administrative Data) Based on Recommendations from Breast Oncologists

After a feature list was obtained (as described in previous section), this list was narrowed down based on the ensemble method of bootstrapping.19 In total 1000 bootstrap samples were used to generate 1000 classification and regression trees (CART) using the same training set, and to select best-performing features based on the frequency of the features.19,20

Cost-complexity pruning was applied for each bootstrap sample, to obtain the best performing model and avoid over-fitting of the model to the dataset.20 CART selects nodes or features using an impurity criterion such as entropy; the larger the reduction in entropy (information gain) a feature provides, the more informative and useful it is.20 A 10-fold cross-validation was also performed to ensure robustness of the model across different training sets. Collinearity of the selected features was accounted for by the one standard error (1-SE) rule, to eliminate redundant features. The 1-SE rule selects the least complex tree that is within 1 standard error of the best performing tree.21
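A minimal sketch of this bootstrap-based feature selection, using scikit-learn rather than the authors' SAS implementation, on synthetic data with fewer bootstrap samples and an arbitrary pruning strength:

```python
# Fit a pruned CART on each bootstrap sample and keep the features
# that appear in at least half of the trees.
import numpy as np
from collections import Counter
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=6, n_informative=2,
                           random_state=0)
rng = np.random.default_rng(0)
counts = Counter()
n_boot = 100  # the study used 1000 bootstrap samples

for _ in range(n_boot):
    idx = rng.integers(0, len(X), len(X))           # bootstrap resample
    tree = DecisionTreeClassifier(ccp_alpha=0.01,   # cost-complexity pruning
                                  random_state=0).fit(X[idx], y[idx])
    # Internal nodes store the feature index; leaves store -2.
    counts.update(set(tree.tree_.feature[tree.tree_.feature >= 0]))

selected = sorted(f for f, c in counts.items() if c >= 0.5 * n_boot)
print(selected)
```

The frequency threshold (here, half of the trees) is the analogue of keeping only features that appeared in most of the 1000 bootstrap CARTs.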

Based on the selected features from the bootstrapping, a principal CART model was built to classify patients as having a recurrence or not by using the complete training set.

Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and classification accuracy were calculated to evaluate and compare the performance of the principal CART model. All models were created and trained in SAS 9.4 (SAS Institute, Cary, NC, USA) within the SAS Enterprise Guide software (version 7.15 of the SAS System for Windows).
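These evaluation metrics all follow from a 2x2 confusion matrix. A small sketch; the counts below are illustrative, chosen only so the rates roughly match the training-set figures reported in the article, since the actual counts were not published:

```python
# Standard binary classification metrics from confusion-matrix counts.
def evaluate(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),        # recall on recurrences
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),                # positive predictive value
        "npv": tn / (tn + fn),                # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Illustrative counts: 975 patients, 78 true recurrences.
metrics = evaluate(tp=62, fp=16, fn=16, tn=881)
print({k: round(v, 3) for k, v in metrics.items()})
```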

Data for a total of 2507 patients could be retrieved from nine Belgian centers and were included in the final dataset to train, test, and externally validate the algorithm (Figure 1 and Table 2). The mean follow-up period was 7.4 years. For the split-sample validation, the patients from six centers were split into the training set (N = 975, of which 78 distant recurrences, 8.0%) and the internal validation set (N = 713, of which 56 distant recurrences, 7.9%). The external validation set consisted of three independent centers with 819 patients, of which 82 had distant recurrences (10.0%). The training, internal validation, and external validation sets did not differ in the distribution of baseline tumor and patient characteristics (Table 2).

Table 2 Baseline Patient and Tumor Characteristics

Figure 1 Patient inclusion flow diagram.

Based on bootstrap aggregation, 1000 CART models were built using the following features: (1) presence of a follow-up MDT meeting, starting from 270 days after diagnosis (feature present in 975 out of 1000 CART models), (2) maximum number of CT codes present (with a moving average over time) of 5 or more times a year (851 CART models), and (3) death due to breast cancer (412 CART models) (see Supplementary Figure 1). Afterwards, the final CART model was constructed with these three features and fitted using all data of the training set (Figure 2).

Figure 2 Final CART model to detect recurrences based on the three selected features after bootstrapping. Nodes represent selected features by the algorithm to classify patients.

Abbreviations: MDT, multidisciplinary team meeting; CT, computed tomography scan.

The sensitivity of the principal CART model to detect recurrences for the training set was 79.5% (95% confidence interval [CI] 68.8–87.8%), specificity was 98.2% (95% CI 97.1–99.0%), with an overall accuracy of 96.7% (95% CI 95.4–97.7%) (Table 3), and an AUC (area under the curve) of 94.2%. After 10-fold cross-validation within the training set, we found a sensitivity of 71.8% (95% CI 66.4–86.7%), specificity of 98.2% (95% CI 96.3–98.5%) and overall accuracy of 96.1% (95% CI 94.7–97.2%). The internal validation (i.e., based on the test set) resulted in a sensitivity of 83.9% (95% CI 71.7–92.4%), a specificity of 96.7% (95% CI 95.0–98.9%), and accuracy of 95.7% (95% CI 93.9–97.0%). After external validation was performed on three additional centers, the sensitivity was 84.1% (95% CI 74.4–91.3%), with a specificity of 98.2% (95% CI 97.0–99.1%) and accuracy of 96.8% (95% CI 95.4–97.9%).

Table 3 Performance of Training Set, Cross Validation, Internal Validation Set and External Validation Set

In this study, we were able to successfully develop a machine learning algorithm to detect distant recurrence in patients with breast cancer, achieving an accuracy of 96.8% after external validation in multiple centers across Belgium. The final list of selected features consisted of the presence of a follow-up MDT meeting, a maximum number of CT scan codes of 5 or more per year, and death due to breast cancer. Recurrence data are lacking in many population-based cancer registries due to the cost and labor-intensity of registration.3 The true incidence of cancer recurrence should be known across age groups and regions in Belgium, to measure the burden of illness and eventually improve quality of care. Current recurrence numbers are often extrapolated from clinical trials, which typically exclude older and frail patients. Older patients are more likely to receive under-treatment and to experience recurrences,22,23 and recurrence numbers could therefore be underestimated.

The administrative data sources used in our algorithm cover virtually all residents of Belgium,14 which was useful to achieve population-based recurrence data. We were also able to accomplish a multi-centric study by developing the training model and performing an external validation based on data from multiple centers. It is also highly important in such studies to have a relatively large population and a reliable gold standard to develop and train a machine learning model, to avoid a prolonged and complicated feature-selection process caused by conflicting recurrence and treatment data.

The definition of a distant recurrence in medical files was the occurrence of a distant recurrence or metastases after a period of 120 days. This time-frame until detection of recurrence varied among previous studies.24–27 The most common exclusions were made either from 120 days (Chubak et al 2012) or 180 days after diagnosis (Amar et al 2020). Disease progression can be difficult to measure accurately and can be overestimated because the timing of therapeutic procedures might be delayed. A limitation of our study was that we could not make a distinction between disease progression and disease recurrence. Defining recurrence in the clinic is a challenge, which makes it even more difficult to define recurrence with a proxy based on administrative data.28 Therefore, setting a clear definition of the window of treatment and the time-frame for detection of recurrence is considered important for future studies.

We chose to restrict our definition to distant recurrences to achieve a straightforward feature selection. We included death due to breast cancer as an outcome in our definition of recurrences. Cause-specific death and accurate source of cause of death is of utmost importance when studying recurrences, since recurrence and death are closely related to each other.29

The machine learning algorithm used in this study was a decision tree, i.e., the Classification And Regression Tree (CART), combined with an ensemble method. Ensemble learning combines multiple decision trees sequentially (boosting) or in parallel (bootstrap aggregation). The key advantages of using bootstrap aggregation are better predictive accuracy, less variance, and less bias than a single decision tree. Accordingly, recent studies more often make use of ensemble methods.7,9,12
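The bagging idea can be illustrated generically in scikit-learn. This is a sketch on synthetic data, not the study's SAS model:

```python
# Compare a single CART with a bagged ensemble of CARTs by
# cross-validated accuracy on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)

single = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=5).mean()
bagged = cross_val_score(
    BaggingClassifier(DecisionTreeClassifier(random_state=0),
                      n_estimators=100, random_state=0),
    X, y, cv=5,
).mean()
print(f"single tree: {single:.3f}  bagged: {bagged:.3f}")
```

Averaging over bootstrap-fitted trees typically reduces the variance of a single deep tree, which is the advantage the paragraph above describes.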

Within the recurrence detection features that were selected by the bootstrapping method for the cohort of six different Belgian centers, no treatment features were selected, which could indicate that there are more inter-center similarities in diagnostic regimens and more differences in treatment regimens. During pre-processing of the features, we performed additional checks of features to improve the accuracy of the model. For instance, we generated a treatment feature that only included metastasis-specific chemotherapy agent codes; however, this feature was not included in the final model. Next, we tried a model without diagnostic features, but this did not improve accuracy. Previous studies mostly make use of metastatic diagnosis codes (secondary malignant neoplasm or SMN codes from ICD-9 or ICD-10) in their algorithms, which would be useful if highly reliable. We also applied the algorithm to subgroups based on age (younger or older than 70 years) and incidence year, to check whether its accuracy was better in specific subgroups. As expected, we found higher performance in younger patients (Supplementary Table 1).

Our algorithm performance was comparable to previous studies using decision trees.9,12,24,30–32 We found greater accuracy compared with the pooled accuracy of previous algorithms.5

Although algorithms with the highest overall accuracy are often sought after, some studies also provide multiple algorithms to choose from based on preference, e.g., high-sensitivity or high-specificity algorithms.6,10,24,26,30 Finally, we also investigated the false negative cases from University Hospitals Leuven to explain why these cases were misclassified. We found that in most false negative cases, patients were missed due to the lack of attestation of the claims or management of the patients' procedures. These were most likely patients for whom there was a decision to withhold treatment because of comorbid disease, older age, or the prognosis of the recurrence, or patients whose treatments were reimbursed by the sponsor of a clinical trial.

Previously, algorithms based on administrative claims data to detect breast cancer recurrences at the population level have been established.5,7–10,12 For example, research groups from the USA, Canada, and Sweden have built algorithms to detect recurrences in a delimited region within a population. Recent results from these groups have proven that machine learning algorithms based on administrative data can be used to detect recurrences in the absence of systematic registration. These studies, however, only encompassed a few centers and were thus not validated in a larger cohort of a population. Moreover, most of these algorithms included complete metastasis-specific International Classification of Diseases (ICD) codes to detect recurrences. Since metastasis-specific codes are not complete in our database, we were not able to use these codes in our algorithm. Notably, the Danish registry has actively collected recurrence information in the Danish Breast Cancer Group (DBCG) clinical database, which was used to construct and validate population-based recurrence algorithms to complete their recurrence database.10,11 Additionally, they were able to look into long-term recurrences beyond 10 years after incidence date.4,33

The objective of this study was to develop an algorithm that could be used on a nation-wide level to estimate population-wide distant recurrences. Compared with other studies, we used a large sample size and reported both internal and external validation, which was rarely reported in earlier studies.5 Another strength of our study was that, unlike many other studies from the USA using Medicare claims,34–38 we were able to include all eligible patients with a breast cancer diagnosis, and not just patients older than 65 years.

Although we used different diagnosis and treatment code sources, it should be noted that treatment regimens often change over time and adaptation of the features should be performed for later use. Adapting the algorithm based on changes in diagnosis or treatment regimens might be necessary to obtain accurate recurrence rates of more incidence years in the future. Ideally, we would also prefer to have long-term follow-up and claims data for patients to detect long-term recurrences. However, due to regulations and the large bulk of data that is generated, a longer follow-up of the codes was not possible within the current study. Longer follow-up of recurrences and administrative data would likely improve the accuracy and lead to a more robust algorithm.

In conclusion, our machine learning algorithm to detect metastatic breast cancer recurrences performed with high accuracy after external validation. Claims data are available for medical procedures and medications, hospital discharge data, vital status and cause of death data on the whole population level, which allows the development of models for Belgium. This substantiates the feasibility to develop and validate recurrence algorithms at the population level and might encourage other population-based registries to develop recurrence models or actively register recurrences in the future as these become progressively important. These rates are valuable to gain more insights about recurrences outside the clinical trial setting and might unveil the importance of active registration of recurrences.

AUC, Area under the curve; ATC, Anatomical Therapeutic Chemical classification; AVIQ, Agence pour une Vie de Qualit; BCR, Belgian Cancer Registry; CA15-3, Cancer antigen 15-3; CART, Classification and regression tree; CBSS, Crossroads Bank for Social Security; CT, Computed tomography; FN, False negatives; FP, False positives; ICD, International Classification of Diseases and Related Health Problems; IMA, InterMutualistic Agency; MDT, Multidisciplinary team meeting; MRI, Magnetic Resonance Imaging; MZG, Minimale Ziekenhuis Gegevens; NPV, Negative predictive value; PPV, Positive predictive value; PET-CT, Positron emission tomography computed tomography; SE, Standard error; SMN, Secondary malignant neoplasm; TN, True negatives; TP, True positives.

The data that support the findings of this study are available upon reasonable request. The data can be given within the secured environment of the Belgian Cancer Registry, according to its regulations, and only upon approval by the Information Security Committee.

This retrospective chart review study involving human participants was in accordance with the ethical standards of the institutional and national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. This study was approved by the Ethics Committee of University Hospitals Leuven (S60928). Informed consent for use of data of all participants was obtained.

All authors made a significant contribution to the work reported, whether that is in the conception, study design, execution, acquisition of data, analysis and interpretation, or in all these areas; took part in drafting, revising or critically reviewing the article; gave final approval of the version to be published; have agreed on the journal to which the article has been submitted; and agree to be accountable for all aspects of the work.

This work was supported by VZW THINK-PINK (Belgium).

The authors report no conflicts of interest in this work.

1. Gourgou-Bourgade S, Cameron D, Poortmans P, et al. Guidelines for time-to-event end point definitions in breast cancer trials: results of the DATECAN initiative (Definition for the Assessment of Time-to-event Endpoints in CANcer trials). Ann Oncol. 2015;26(5):873–879. doi:10.1093/annonc/mdv106

2. Eisenhauer EA, Therasse P, Bogaerts J, et al. New response evaluation criteria in solid tumours: revised RECIST guideline (version 1.1). Eur J Cancer. 2009;45(2):228–247. doi:10.1016/j.ejca.2008.10.026

3. Warren JL, Yabroff KR. Challenges and opportunities in measuring cancer recurrence in the United States. J Natl Cancer Inst. 2015;107:djv134. doi:10.1093/jnci/djv134

4. Negoita S, Ramirez-Pena E. Prevention of late recurrence: an increasingly important target for breast cancer research and control. J Natl Cancer Inst. 2021. doi:10.1093/jnci/djab203

5. Izci H, Tambuyzer T, Tuand K, et al. A systematic review of estimating breast cancer recurrence at the population level with administrative data. J Natl Cancer Inst. 2020;112:979–988. doi:10.1093/jnci/djaa050

6. Ritzwoller DP, Hassett MJ, Uno H, et al. Development, validation, and dissemination of a breast cancer recurrence detection and timing informatics algorithm. J Natl Cancer Inst. 2018;110:273–281. doi:10.1093/jnci/djx200

7. Amar T, Beatty JD, Fedorenko C, et al. Incorporating breast cancer recurrence events into population-based cancer registries using medical claims: cohort study. JMIR Cancer. 2020;6(2):1–10.

8. Cairncross ZF, Nelson G, Shack L, Metcalfe A. Validation in Alberta of an administrative data algorithm to identify cancer recurrence. Curr Oncol. 2020;27(3):e343–e346. doi:10.3747/co.27.5861

9. Lambert P, Pitz M, Singh H, Decker K. Evaluation of algorithms using administrative health and structured electronic medical record data to determine breast and colorectal cancer recurrence in a Canadian province. BMC Cancer. 2021;21(1):1–10. doi:10.1186/s12885-021-08526-9

10. Pedersen RN, Öztürk B, Mellemkjær L, et al. Validation of an algorithm to ascertain late breast cancer recurrence using Danish medical registries. Clin Epidemiol. 2020;12:1083–1093. doi:10.2147/CLEP.S269962

11. Rasmussen LA, Jensen H, Virgilsen LF, et al. A validated algorithm for register-based identification of patients with recurrence of breast cancer based on Danish Breast Cancer Group (DBCG) data. Cancer Epidemiol. 2019;59:129–134. doi:10.1016/j.canep.2019.01.016

12. Valachis A, Carlqvist P, Szilcz M, et al. Use of classifiers to optimise the identification and characterisation of metastatic breast cancer in a nationwide administrative registry. Acta Oncol. 2021;60(12):1604–1610. doi:10.1080/0284186X.2021.1979645

13. Steyerberg EW, Vergouwe Y. Towards better clinical prediction models: seven steps for development and an ABCD for validation. Eur Heart J. 2014;35:1925–1931. doi:10.1093/eurheartj/ehu207

14. Het Intermutualistisch Agentschap [The Intermutualistic Agency] (IMA) - L'Agence InterMutualiste (AIM). https://ima-aim.be/.

15. Technische Cel voor het beheer van de MZG-MFG data [Technical cell for management of MZG-MFG data] - La Cellule Technique pour la gestion des données RHM-RFM. https://tct.fgov.be/.

16. CBSS - Crossroads Bank for Social Security. Available from: https://www.ksz-bcss.fgov.be/nl/documents-list. Accessed April 28, 2023.

17. Agence pour une Vie de Qualité [Walloon Agency for quality of life] (AViQ). https://www.aviq.be/.

18. Smits N. A note on Youden's J and its cost ratio. BMC Med Res Methodol. 2010;10(1):1–4. doi:10.1186/1471-2288-10-89

19. Sutton CD. Classification and regression trees, bagging, and boosting. Handb Stat. 2005;24:303–329.

20. Breiman L, Friedman JH, Olshen RA, Stone CJ. Classification and Regression Trees. Wadsworth; 1984.

21. Chen Y, Yang Y. The one standard error rule for model selection: does it work? Stats. 2021;4(4):868–892. doi:10.3390/stats4040051

22. Enger SM, Soe ST, Buist DSM, et al. Breast cancer treatment of older women in integrated health care settings. J Clin Oncol. 2006;24(27):4377–4383. doi:10.1200/JCO.2006.06.3065

23. Han Y, Sui Z, Jia Y, et al. Metastasis patterns and prognosis in breast cancer patients aged ≥80 years: a SEER database analysis. J Cancer. 2021;12(21):6445. doi:10.7150/jca.63813

24. Xu Y, Kong S, Cheung WY, et al. Development and validation of case-finding algorithms for recurrence of breast cancer using routinely collected administrative data. BMC Cancer. 2019;19(1):1–10. doi:10.1186/s12885-019-5432-8

25. Chubak J, Onega T, Zhu W, et al. An electronic health record-based algorithm to ascertain the date of second breast cancer events. Med Care. 2017;55:e81–e87. doi:10.1097/MLR.0000000000000352

26. Kroenke CH, Chubak J, Johnson L, et al. Enhancing breast cancer recurrence algorithms through selective use of medical record data. J Natl Cancer Inst. 2016;108. doi:10.1093/jnci/djv336

27. Cronin-Fenton D, Kjærsgaard A, Nørgaard M, et al. Breast cancer recurrence, bone metastases, and visceral metastases in women with stage II and III breast cancer in Denmark. Breast Cancer Res Treat. 2018;167(2):517–528. doi:10.1007/s10549-017-4510-3

28. In H, Bilimoria KY, Stewart AK, et al. Cancer recurrence: an important but missing variable in national cancer registries. Ann Surg Oncol. 2014;21(5):1520–1529. doi:10.1245/s10434-014-3516-x

29. Nout RA, Fiets WE, Struikmans H, et al. The in- or exclusion of non-breast cancer related death and contralateral breast cancer significantly affects estimated outcome probability in early breast cancer. Breast Cancer Res Treat. 2008;109(3):567–572. doi:10.1007/s10549-007-9681-x

30. Chubak J, Yu O, Pocobelli G, et al. Administrative data algorithms to identify second breast cancer events following early-stage invasive breast cancer. J Natl Cancer Inst. 2012;104(12):931–940. doi:10.1093/jnci/djs233

31. Nordstrom B, Whyte J, Stolar M, Mercaldi CJ, Kallich JD. Identification of metastatic cancer in claims data. Pharmacoepidemiol Drug Saf. 2012;21(2):21–28. doi:10.1002/pds.3247

32. Nordstrom BL, Simeone JC, Malley KG, et al. Validation of claims algorithms for progression to metastatic cancer in patients with breast, non-small cell lung, and colorectal cancer. Pharmacoepidemiol Drug Saf. 2015;24(1, SI):5–11.

33. Pedersen RN, Öztürk Esen B, Mellemkjaer L, et al. The incidence of breast cancer recurrence 10–32 years after primary diagnosis. J Natl Cancer Inst. 2021. doi:10.1093/jnci/djab202

34. Lamont EB, Herndon JE II, Weeks JC, et al. Measuring disease-free survival and cancer relapse using Medicare claims from CALGB breast cancer trial participants (companion to 9344). J Natl Cancer Inst. 2006;98(18):1335–1338. doi:10.1093/jnci/djj363

35. Chawla N, Yabroff KR, Mariotto A, et al. Limited validity of diagnosis codes in Medicare claims for identifying cancer metastases and inferring stage. Ann Epidemiol. 2014;24(9):666–672.e2. doi:10.1016/j.annepidem.2014.06.099

36. Hassett MJ, Ritzwoller DP, Taback N, et al. Validating billing/encounter codes as indicators of lung, colorectal, breast, and prostate cancer recurrence using 2 large contemporary cohorts. Med Care. 2014;52(10):e65–e73. doi:10.1097/MLR.0b013e318277eb6f

37. Sathiakumar N, Delzell E, Yun H, et al. Accuracy of Medicare claim-based algorithm to detect breast, prostate, or lung cancer bone metastases. Med Care. 2017;55:e144–e149. doi:10.1097/MLR.0000000000000539

38. McClish D, Penberthy L, Pugh A. Using Medicare claims to identify second primary cancers and recurrences in order to supplement a cancer registry. J Clin Epidemiol. 2003;56(8):760–767. doi:10.1016/S0895-4356(03)00091-X

View original post here:
Machine Learning to Estimate Breast Cancer Recurrence | CLEP - Dove Medical Press

How Capital One is democratizing machine learning to curb fraud – Banking Dive

Credit providers have grappled with fraudsters since long before mobile banking. In a modern landscape, financial services businesses dedicate ample resources to thwart fraud attempts.

As fraudulent actors get smarter, machine learning can help companies stay one step ahead. But first, organizations need access to those tools.

Capital One is democratizing access to ML tools, encouraging workers to contribute to a common shared ecosystem to provide practitioners with easy access to ML and spur innovation. In the process, Capital One found opportunities for cross-unit collaboration and improved how the company detects fraud.

"The future is here," said Zach Hanif, vice president and head of enterprise machine learning models and platforms at Capital One. "But, historically, it hasn't always been distributed evenly."

ML tools keep humans focused on the tasks that require their attention, prioritizing resources through technology. Artificial intelligence capabilities are finding a role in financial services in particular.

Four in five companies in the sector have up to five AI use cases at work in their organization, according to an NVIDIA report published in February. Nearly one-quarter are using AI to help detect fraud.

Hanif's team worked alongside the card fraud division to build homegrown and open-source ML algorithms and technologies. With ML tools, the company can quickly determine whether a transaction is benign or if it needs further investigation because of potential fraud.
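The score-then-route step described here can be sketched in a few lines. This is a hypothetical illustration of transaction triage, not Capital One's actual system; the threshold, field names, and scores are invented:

```python
# Hypothetical sketch: routing a card transaction based on a fraud risk
# score produced upstream by a trained ML model. The threshold value and
# transaction fields are illustrative placeholders.

def triage(transaction, risk_score, review_threshold=0.8):
    """Decide whether a transaction needs human investigation."""
    if risk_score >= review_threshold:
        return "investigate"  # hand off to a fraud analyst
    return "benign"           # approve without manual review

# Example queue of (transaction, model score) pairs
queue = [
    ({"amount": 12.50, "merchant": "coffee"}, 0.03),
    ({"amount": 2400.00, "merchant": "electronics"}, 0.91),
]
decisions = [triage(txn, score) for txn, score in queue]
```

The point of a shared component like this is that any team on the common stack can reuse the routing logic and swap in its own model scores.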

"We were able to get these teams on the same stack and focused on collaboration, which made sure that we were able to bring down some silos," Hanif said. "We were able to prioritize the development of reusable components so when one team would build a component of their pipeline, other teams were able to immediately begin leveraging it and save themselves the time of that initial development."

Machine learning gives the company a way to quickly determine whether something needs to be investigated, Hanif said.

Picking a technology and spreading it throughout the organization isn't a turnkey task.

There are several barriers to easing access to ML throughout any organization, according to Arun Chandrasekaran, distinguished VP analyst at Gartner.

The top barriers are security and privacy concerns and the black-box nature of AI systems, as well as the absence of internal AI know-how, AI governance tools and self-service AI and data platforms, Chandrasekaran told CIO Dive in an email.

Despite the advancement of AI tools in the enterprise, activities associated with data and analytics (preparation, transformation, pattern identification, model development, and sharing insights with others) are still done manually at many organizations.

"Demands for more data-driven and analytics-enabled decision making, and the friction and technical hurdles of this workflow, limit widespread user adoption and achieving better business outcomes," Chandrasekaran said.

But changing how companies operate is a human problem as much as it is a technical one. Cultural factors can determine whether or not a company succeeds at democratizing the use of a technology tool such as ML.

"To be able to drive change across a large organization, you're trying to make a cultural alteration," Hanif said.

Leaders need to encourage employees to imagine what they can do with specific tools, he said. With that mindset, fear of change falls away and employees begin to think about how a new technology can be contextualized within the existing problem space.

"Standardizing a platform allows everyone to have a common operating environment and runbook," Hanif said. "That way, they can start and engage in that process in a standard, well-understood way. That makes so many different things inside of the organization go smoother, go faster, and reduce the overall risk."


A machine learning method for the identification and … – Nature.com

GuiltyTargets-COVID-19 web tool

We start by providing a high-level overview of the capabilities of the GuiltyTargets-COVID-19 web tool. The web application initially allows the user to browse through a ranked list of potential targets generated using six bulk RNA-Seq and three single cell RNA-Seq datasets applied to a lung-specific protein–protein interaction (PPI) network reconstruction. Our website is also equipped with several filtering options to allow the user to quickly obtain the most relevant results. The candidate targets were ranked using a machine learning algorithm, GuiltyTargets [19], which aims to quantify the degree of similarity of a candidate target to other known (candidate) drug targets. Further details about GuiltyTargets are outlined in the Methods section of this paper.

The user can retrieve a consensus ranking of any combination of datasets desired (Fig. 1). For each protein listed, its level of differential gene expression (upregulated, downregulated, or not differentially expressed) is displayed using a color coding system, in addition to its association with COVID-19 as described in the literature. This latter feature is accomplished using an automated web search of scientific articles from PubMed that mention the protein in combination with COVID-19.
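The automated PubMed co-mention search described above can be approximated with NCBI's public E-utilities esearch endpoint. The query form below is an assumption about how such a lookup could be phrased, not the tool's actual code:

```python
# Sketch of a PubMed co-mention lookup via NCBI E-utilities (esearch).
# The query structure is illustrative; GuiltyTargets-COVID-19 may phrase
# its search differently.
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_comention_url(protein, disease="COVID-19", retmax=20):
    """Build an esearch URL for articles mentioning both terms."""
    params = {
        "db": "pubmed",
        "term": f"{protein} AND {disease}",
        "retmax": retmax,
        "retmode": "json",
    }
    return f"{EUTILS}?{urlencode(params)}"
```

Fetching the resulting URL returns a JSON list of matching PubMed IDs, which a web tool could then link directly to article pages.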

Though we provide nine different RNA-Seq datasets to explore, our tool also allows users to upload their own gene expression data. Uploaded data is sent through the GuiltyTargets algorithm and, after a short period of time, a ranking of candidate proteins is made available to the user to download and explore.

To further elucidate their linkage to known disease mechanisms, GuiltyTargets-COVID-19 enables one to explore the neighborhood of any given candidate target within the lung tissue specific PPI network reconstruction (Fig. 2). The network is labeled with information about known disease associations in humans in addition to virus-host interactions.

Importantly, in order to present the user with a list of possible drug candidates for a given protein, we parsed the ChEMBL database to generate a mapping of known ligands for each of the prioritized proteins and included this information in our web application. Direct links to the ligands' description pages were added to GuiltyTargets-COVID-19 so that researchers can quickly explore each compound's profile.

To point out potential target related safety issues, GuiltyTargets-COVID-19 includes a list of adverse effects for each target-linked compound, all of which were derived from the NSIDES database [20]. By making this information readily available, the user can quickly decide which compounds for a given target are most viable.

Altogether, GuiltyTargets-COVID-19 implements a comprehensive workflow involving computational target prioritization supplemented with annotations from several key databases.

Screenshot of the GuiltyTargets-COVID-19 web application available at https://guiltytargets-covid.eu/.

In the following sections, we demonstrate the utility of GuiltyTargets-COVID-19 based on the analysis of 6 bulk RNA-Seq and 3 single cell RNA-Seq datasets. A detailed overview of the data and workflow can be found in the Differential gene expression section of the Methods. In brief, GuiltyTargets-COVID-19 maps differentially expressed genes in each of these datasets to a lung tissue specific, genome-wide PPI network, which was constructed using data from BioGRID [21], IntAct [22] and STRING [23] (see PPI Network Construction in Methods). Users can choose a combination of these datasets and the tool will present a ranking of each protein for each selected dataset based on its similarity to known drug targets. Additionally, a consensus ranking is also calculated if multiple datasets were selected.

For our analysis, we initially performed a ranking for each individual dataset. This ranking was performed using the GuiltyTargets positive-unlabeled machine learning algorithm [19], which combines a PPI network, a differential gene expression (DGE) dataset, and a list of included nodes that are labeled as putative targets. Based on these results, GuiltyTargets then quantifies the probability that a candidate protein could be labeled as a target as well. In order to create a usable model, GuiltyTargets-COVID-19 was trained using a set of 218 proteins targeted by small compounds extracted from ChEMBL. This set of proteins was previously found to be involved in cellular response mechanisms specific to COVID-19 that have been shown to be transcriptionally dysregulated in several bulk RNA-Seq datasets [15]. The set of 218 proteins may thus be regarded as an extendable set of candidate targets. We chose this approach as there are currently very few approved drugs for COVID-19 (7 as of December 2022 in the European Union), hence making a machine learning model based ranking with respect to only known targets of approved drugs rather questionable.
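The core idea of positive-unlabeled ranking, scoring unlabeled proteins by their resemblance to a small set of known targets, can be illustrated with a toy sketch. This is not the GuiltyTargets algorithm itself (which learns network embeddings); the feature vectors and similarity measure here are stand-ins chosen only for illustration:

```python
# Toy positive-unlabeled-style ranking: each unlabeled protein is scored by
# its maximum cosine similarity to any known (positive) target. Real
# GuiltyTargets uses learned graph representations instead of raw features.
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_candidates(features, positives):
    """Rank unlabeled proteins by similarity to the known-target set."""
    scores = {}
    for name, vec in features.items():
        if name in positives:
            continue  # known targets are not re-ranked
        scores[name] = max(cosine(vec, features[p]) for p in positives)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

For example, with `features = {"A": [1, 0], "B": [0.9, 0.1], "C": [0, 1]}` and `positives = {"A"}`, protein B ranks above C because its vector is nearly parallel to the known target's.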

In order to maximize transparency, GuiltyTargets-COVID-19 also reports the ranking performance of the GuiltyTargets machine learning algorithm, calculated as the cross-validated area under the receiver operator characteristic curve (AUC). As shown in Fig. 6, the cross-validated AUCs for each of the nine datasets used in this work were between 85% and 90%, which aligns with the results reported in [19]. Additional details regarding the algorithm's performance can be found in the Methods section.
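For readers unfamiliar with the metric, the AUC reported here equals the probability that a randomly chosen known target is ranked above a randomly chosen non-target. A minimal, illustrative implementation (not the paper's cross-validation code):

```python
# AUC as the rank-pair probability: the fraction of (positive, negative)
# pairs where the positive receives the higher score; ties count as half.

def auc(labels, scores):
    """Compute AUC from binary labels (1/0) and model scores."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    pairs, wins = 0, 0.0
    for p in pos:
        for n in neg:
            pairs += 1
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / pairs
```

A perfect ranker scores 1.0; random scoring hovers around 0.5, so the 0.85 to 0.90 range reported above indicates strong separation of targets from non-targets.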

First degree neighbors of the (a) AKT3 and (b) PIK3CA proteins. Nodes are colored according to their associations: light orange means no virus or human association was found, dark orange indicates only human association, purple signifies viral association, and dark blue nodes are proteins with associations to both viral mechanisms and human processes. The neighboring proteins and their associations for AKT3 and PIK3CA are outlined in Supplementary Data S1 and S2, respectively.

For our use case, we focused on proteins with a predicted target likelihood higher than 85% in each of the nine datasets. This resulted in 51–67 candidate targets for each of the bulk RNA-Seq datasets and 45–65 candidate targets for each of the scRNA-Seq datasets. By enabling the filter option "novel" in our web tool, we can select for those prioritized targets that are not among the original set of 218 proteins labeled as known targets and used for training the model.

Among these prioritized targets, there was a considerable difference between the analyzed bulk RNA-Seq data, with only a single protein target appearing among the top candidates for all 6 datasets: AKT3 (Fig. 3). AKT3 is of great interest in COVID-19 research as the PI3K/AKT signaling pathway plays a central role in cell survival. Moreover, researchers have observed an association between this pathway and coagulopathies in SARS-CoV-2 infected patients [24]. It has been suggested that the PI3K/AKT signaling pathway can be over-activated in COVID-19 patients either by direct or indirect mechanisms, thus suggesting this pathway may serve as a potential therapeutic target [25].

To better understand the relationship of AKT3 with known COVID-19 disease mechanisms, the user can also download a CSV file comprised of the direct (first-degree) neighbors of AKT3 in the lung tissue specific PPI network used for our analysis. Each first-degree neighbor is additionally annotated to indicate whether the corresponding protein is associated with either the disease or with the virus itself. Figure 2a provides a visualization of the AKT3 neighbor network generated using Cytoscape 3.9.1 [26].
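The neighbor export described here amounts to a lookup in an adjacency structure plus a CSV dump. A sketch under assumed data shapes (the network, annotation labels, and proteins below are illustrative, not the tool's actual data):

```python
# Sketch: extract first-degree neighbors of a protein from a PPI network
# stored as an adjacency dict, and emit the annotated CSV described in
# the text. Network contents and annotation labels are made up.
import csv
import io

ppi = {
    "AKT3": {"PIK3CA", "MTOR"},
    "PIK3CA": {"AKT3"},
    "MTOR": {"AKT3"},
}
annotations = {"PIK3CA": "human", "MTOR": "viral"}

def neighbors_csv(network, annot, protein):
    """Return a CSV of a protein's first-degree neighbors and annotations."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["neighbor", "association"])
    for nb in sorted(network.get(protein, ())):
        writer.writerow([nb, annot.get(nb, "none")])
    return buf.getvalue()
```

Calling `neighbors_csv(ppi, annotations, "AKT3")` yields one row per direct interaction partner, each tagged with its disease or virus association.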

Interestingly, a larger number of shared prioritized protein targets can be found among the scRNA-Seq data. Based on the 17 cell types identified in the three datasets, four common target candidates were identified: AKT2, AKT3, MAPK11, and MLKL. The presence of AKT3, as well as its isoform AKT2, in our list of prioritized targets supports the predicted association of the PI3K/AKT signaling pathway with COVID-19 as observed in our analysis of the bulk RNA datasets. Interestingly, our analysis of the single-cell datasets revealed two additional proteins of interest, MAPK11 and MLKL. MAPK11 is targeted by the compound losmapimod, which was tested against COVID-19 in a (terminated) phase III clinical trial (NCT04511819). The trial ended in August 2021 due to "the rapidly evolving environment for the treatment of Covid-19 and ongoing challenges to identify and enroll qualified patients to participate" (https://clinicaltrials.gov/ct2/show/NCT04511819). MLKL is a pseudokinase that plays a key role in TNF-induced necroptosis, a programmed cell death process. Recent evidence suggests that it can become dysregulated by the inflammatory response due to SARS-CoV-2 infection [27]. According to the DGIdb database [28] (which is cross-referenced by GuiltyTargets-COVID-19), the protein is also druggable and thus may serve as a therapeutic target.

Overall, these results demonstrate that GuiltyTargets-COVID-19 has the capability of identifying candidate targets with a clear disease association as well as assessing their potential druggability.

Venn diagram of the number of prioritized targets from the bulk RNA-Seq datasets.

After analyzing the top ranked protein targets shared by each group of RNA-Seq data, we next sought to characterize those candidates found in unique cell types (Table 1). Interestingly, we found that PIK3CA was only ranked among the top therapeutic candidates in goblet cells. Goblet cells are modified epithelial cells that secrete mucus on the surface of mucous membranes of organs, particularly those of the lower digestive tract and airways. Dactolisib is a compound targeting PIK3CA that has been tested in a phase II clinical trial for its ability to reduce COVID-19 disease severity (NCT04409327). The trial was terminated due to an insufficient accrual rate (https://clinicaltrials.gov/ct2/show/NCT04409327). Figure 2b depicts the PIK3CA protein and its first-degree neighbors as defined by the PPI network used in the GuiltyTargets-COVID-19 algorithm.

Another interesting drug identified during our analysis is varespladib, a compound that is currently being tested in a phase II clinical trial (NCT04969991) and which targets PLA2G2A, a potential protein target that primarily affects NKT cells (Table 1). To better support the user in finding more information about the disease context of such candidate targets, GuiltyTargets-COVID-19 also includes links to PubMed articles in which the protein and its roles in COVID-19 are discussed. Identification of relevant articles is discussed in the Methods section.

Altogether, these results demonstrate that the tool presented here can be used for cell type specific target prioritization as well as aiding in characterizing the proteins in the context of COVID-19.

GuiltyTargets-COVID-19 also includes a feature for identifying small compound ligands from the ChEMBL database with reported activity (pChEMBL > 5) against candidate targets. In our use case, we were able to identify 186 ligands for AKT3, the top prioritized target across bulk RNA-Seq datasets. Furthermore, 126 ligands were mapped to the four candidate targets that were found among all single cell RNA-Seq datasets. A complete report of the number of ligands mapped to protein targets unique for a given cell type can be found in Table 2. We observed a high imbalance of mapped ligands for different cell types with secretory cells being targeted by the vast majority of compounds.
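The activity cutoff described here (pChEMBL > 5) is a simple filter over ligand records. A sketch with invented records (these are not real ChEMBL entries):

```python
# Illustrative filter mirroring the pChEMBL > 5 activity cutoff described
# in the text. The ligand IDs and values below are placeholders, not real
# ChEMBL data.

def active_ligands(records, target, cutoff=5.0):
    """Keep ligands with reported activity above the cutoff for a target."""
    return [r["ligand"] for r in records
            if r["target"] == target and r["pchembl"] > cutoff]

records = [
    {"ligand": "CHEMBL-X1", "target": "AKT3", "pchembl": 7.2},
    {"ligand": "CHEMBL-X2", "target": "AKT3", "pchembl": 4.1},  # too weak
    {"ligand": "CHEMBL-X3", "target": "MLKL", "pchembl": 6.0},
]
```

Since pChEMBL is a negative log scale of potency, a cutoff of 5 corresponds to activity at roughly 10 micromolar or better.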

In total, these results demonstrate the ability of GuiltyTargets-COVID-19 to efficiently identify active ligands against candidate targets, thus supporting researchers in rapidly identifying potential new drugs for therapeutic intervention or repurposing.

An important factor that must be taken into consideration with new target candidates are the adverse events which are associated with the drugs targeting these proteins. To better assess the suggested therapeutics, we mapped significant adverse effects from the NSIDES database (http://tatonettilab.org/offsides) to the extracted ChEMBL compounds. Hence, each protein can be visualized in tandem with the ligands that target it, as well as any side effects found to be associated with the linked compounds. To showcase this feature, Fig. 4 depicts the AKT3 protein as well as its associated ligands and their side effects as shown in the GuiltyTargets-COVID-19 web application.

Screenshot of part of the adverse effect network for the AKT3 protein.


Can Artificial Intelligence and Machine Learning Find Life in Space? – BBN Times

Artificial intelligence (AI) and machine learning (ML) are increasingly being used in the field of astrobiology to help in the search for life in space.

The latest advances in artificial intelligence and machine learning could accelerate the search for extraterrestrial lifeby showing the most promising places to look.

With the vastness of the universe, the search for life beyond Earth is a complex and challenging task. AI and ML have the potential to enhance our ability to detect signs of life and to identify the most promising targets for exploration.

The use of AI and ML in space applications have picked up pace as researchers and scientists worldwide deploy machine learning algorithms that analyze vast amounts of data and identify signals and potential targets in space.

The universe is a game of billions: billions of years old, spanning billions of light years, and harboring billions of stars, galaxies, planets and unidentified objects. Amidst this, we are but a tiny speck of life living on the only identified habitable planet in space. Scientists, astronomers and laypeople alike from all over the world have discussed the idea of extraterrestrial life existing in some corner of the universe. The likelihood of the existence of life beyond Earth is considered high, leading to various efforts to discover traces of life through signals, observations, detections and more. And with AI and ML in space applications, detecting life in space has moved beyond a dream and entered its practical stages.

The term SETI, or Search for Extraterrestrial Intelligence, refers to the effort to find intelligent extraterrestrial life by searching the cosmos for signs of advanced civilizations. The theory underlying SETI is that there might be intelligent extraterrestrial civilizations out there and they might be sending out signals that we could pick up on. These signals could manifest as deliberate messages, unintended emissions from advanced technology or even proof of enormous engineering undertakings like Dyson spheres. SETI's role includes, but is not limited to:

To analyze the massive volumes of data gathered from radio telescopes and other sensors used in the hunt for extraterrestrial intelligence, SETI researchers employ machine learning techniques. ML can be used to help analyze data from other instruments, such as optical telescopes, that may be used in the search for extraterrestrial intelligence. For example, machine learning algorithms can be trained to recognize patterns in the light curves of stars that may indicate the presence of advanced technology.

The identification of signals that might be an indication of extraterrestrial intelligence is one of the ways SETI makes use of machine learning. Both natural signals, such as those produced by pulsars, and artificial signals, such as those from satellites and mobile phones, can be collected by radio telescopes. The properties of these various signals can be used to train machine learning algorithms to identify them and separate them from potential signals from extraterrestrial intelligence.
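The separation step described above can be illustrated with a toy nearest-centroid classifier: known signal classes (pulsar, terrestrial interference) are summarized from labeled examples, and anything far from every known class is flagged as a candidate. The features and the novelty threshold here are invented for illustration:

```python
# Toy nearest-centroid signal classifier. Each class centroid is the mean
# of its labeled examples' feature vectors; a signal distant from every
# centroid is flagged as a "candidate" worth a closer look.
import math

def centroid(rows):
    """Mean feature vector of a list of equally sized vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def classify(signal, centroids, novelty_dist=2.0):
    """Assign the nearest known class, or 'candidate' if nothing is close."""
    best, best_d = None, float("inf")
    for label, c in centroids.items():
        d = math.dist(signal, c)
        if d < best_d:
            best, best_d = label, d
    return best if best_d <= novelty_dist else "candidate"
```

In practice SETI pipelines use far richer features (drift rate, bandwidth, on/off-source behavior) and deep models, but the anomaly-flagging logic is the same in spirit.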

A further application of ML in SETI is to assist in locating and categorizing possible targets for further observations. With so much information to sort through, it can be challenging for human researchers to decide which signals are most intriguing and deserving of additional study. Based on criteria like signal strength, frequency and duration, machine learning algorithms can be used to automatically select possible targets.
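The automatic target selection described here reduces to scoring each detection on a few criteria and sorting. The weighting scheme below is arbitrary, chosen only to make the idea concrete:

```python
# Hedged sketch of candidate prioritization: score detections on signal
# strength, band quietness, and duration, then keep the top few for
# follow-up observation. Weights and fields are illustrative assumptions.

def score(detection, weights=(0.5, 0.2, 0.3)):
    """Weighted score favoring strong, long-lived signals in quiet bands."""
    w_snr, w_quiet, w_dur = weights
    return (w_snr * detection["snr"]
            + w_quiet * detection["band_quietness"]
            + w_dur * detection["duration_s"] / 60.0)

def prioritize(detections, top=3):
    """Return the highest-scoring detections for further observation."""
    return sorted(detections, key=score, reverse=True)[:top]
```

A human researcher then only reviews the short list instead of every raw detection, which is the time-saving the article describes.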

While artificial intelligence and machine learning in space applications have shown significant promise in the study of astrobiology, finding extraterrestrial life is a complex and ongoing endeavor that requires many different approaches and technologies. Ultimately, only collaborative efforts combining scientific ingenuity and technological innovation will allow us to find life beyond our planet.


How AI, automation, and machine learning are upgrading clinical trials – Clinical Trials Arena

Artificial intelligence (AI) is set to be the most disruptive emerging technology in drug development in 2023, unlocking advanced analytics, enabling automation, and increasing speed across the clinical trial value chain.

Today's clinical trials landscape is being shaped by macro trends that include the Covid-19 pandemic, geopolitical uncertainty, and climate pressures. Meanwhile, advancements in adaptive design, personalisation and novel treatments mean that clinical trials are more complex than ever. Sponsors seek greater agility and faster time to commercialisation while maintaining quality and safety in an evolving global market. Across every stage of clinical research, AI offers optimisation opportunities.

A new whitepaper from digital technology solutions provider Taimei examines the transformative impact of AI on the clinical trials of today and explores how it will shape the future.

"The big delay areas are always patient recruitment, site start-up, querying, data review, and data cleaning," explains Scott Clark, chief commercial officer at Taimei.

Patient recruitment is typically the most time-consuming stage of a clinical trial. Sponsors must find and identify a set of subjects, gather information, and use inclusion/exclusion criteria to filter and select participants. And high-quality patient recruitment is vital to a trial's success.

Once patients are recruited, they must be managed effectively. Patient retention has a direct impact on the quality of the trial's results, so their management is crucial. In today's clinical trials, these patients can be distributed over more than a hundred sites and across multiple geographies, presenting huge data management challenges for sponsors.

AI can be leveraged across patient recruitment and management to boost efficiency, quality, and retention. Algorithms can gather subject information and screen and filter potential participants. They can analyse data sources such as medical records and even social media content to detect subgroups and geographies that may be relevant to the trial. AI can also alert medical staff and patients to clinical trial opportunities.
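The screening step described here, applying inclusion/exclusion criteria over candidate records, can be sketched simply. The criteria and patient records below are hypothetical, not a real trial protocol:

```python
# Minimal sketch of criteria-based participant screening. The age limits
# and exclusion list are illustrative placeholders for a real protocol's
# inclusion/exclusion criteria.

def eligible(patient, min_age=18, max_age=75,
             excluded_conditions=("pregnancy",)):
    """Apply simple inclusion (age) and exclusion (condition) criteria."""
    if not (min_age <= patient["age"] <= max_age):
        return False
    return not any(c in patient["conditions"] for c in excluded_conditions)

candidates = [
    {"id": "P1", "age": 44, "conditions": []},
    {"id": "P2", "age": 80, "conditions": []},            # fails inclusion
    {"id": "P3", "age": 30, "conditions": ["pregnancy"]},  # fails exclusion
]
cohort = [p["id"] for p in candidates if eligible(p)]
```

A production system would parse criteria out of the protocol text and match them against structured medical records, but the filtering logic at the end is the same.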

The result? Faster, more efficient patient recruitment, with the ability to reach more diverse populations and more relevant participants, as well as increase quality and retention. "[Using AI], you can develop the correct cohort," explains Clark. "It's about accuracy, efficiency, and safety."

Study build can be a laborious and repetitive process. Typically, data managers must read the study protocol and generate as many as 50-60 case report forms (CRFs). Each trial has different CRF requirements. CRF design and database building can take weeks and has a direct impact on the quality and accuracy of the clinical trial.

Enter AI. Automated text reading can parse, categorise, and stratify corpora of words to automatically generate eCRFs and the data capture matrix. "In study building, AI is able to read the protocols and pull the best CRF forms for the best outcomes," adds Clark.

It can then use the data points from the CRFs to build the study base, creating the whole database in a matter of minutes rather than weeks. The database is structured for export to the biostatistician's programming. AI can then facilitate the analysis of data and develop all of the required tables, listings and figures (TLFs). It can even come to a conclusion on the outcomes, pending review.

Optical character recognition (OCR) can address structured and unstructured native documents. Using built-in edit checks, AI can reduce the timeframe for study build from ten weeks to just one, freeing up data managers' time. "We are able to do up to 168% more edit checks than are done currently in the human manual process," says Clark. AI can also automate remote monitoring to identify outliers and suggest the best route of action, to be taken with approval from the project manager.
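Automated edit checks of the kind described are typically rule tables applied to each incoming CRF record, flagging out-of-range or missing values for the data manager to query. The field names and ranges below are illustrative placeholders:

```python
# Sketch of automated CRF edit checks: each field has a validity rule, and
# any failing (or missing) value is flagged for a data-management query.
# Field names and plausible ranges are invented for illustration.

RULES = {
    "systolic_bp": lambda v: v is not None and 60 <= v <= 250,
    "weight_kg": lambda v: v is not None and 2 <= v <= 400,
}

def run_edit_checks(record):
    """Return the field names that fail a rule and need a query."""
    return [field for field, ok in RULES.items()
            if not ok(record.get(field))]
```

Running checks like this at data entry, rather than in a cleaning pass at the end of the trial, is what produces the time savings the article describes.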

AI data management is flexible, agile, and robust. Using electronic data capture (EDC) removes the need to manage paper-based documentation. This is essential for modern clinical trials, which can present huge amounts of unstructured data thanks to the rise of advances such as decentralisation, wearables, telemedicine, and self-reporting.

"Once the trial is launched, you can use AI to do automatic querying and medical coding," says Clark. When there's a piece of data that doesn't make sense or is not coded, AI can flag it and provide suggestions automatically. "The data manager just reviews what it's corrected," adds Clark. "That's a big time-saver." By leveraging AI throughout data input, sponsors also cut out the lengthy process of data cleaning at the end of a trial.

Implementing AI means establishing the proof of concept, building a customised knowledge base, and training the model to solve the problem on a large scale. Algorithms must be trained on large amounts of data to remove bias and ensure accuracy. Today, APIs enable best-in-class advances to be integrated into clinical trial applications.

By taking repetitive tasks away from human personnel, AI accelerates the time to market for life-saving drugs and frees up man-hours for more specialist tasks. By analysing past and present trial data, AI can be used to inform future research, with machine learning able to suggest better study design. In the long term, AI has the potential to shift the focus away from trial implementation and towards drug discovery, enabling improved treatments for patients who need them.

To find out more, download the whitepaper below.


10 Best Ways to Earn Money Through Machine Learning in 2023 – Analytics Insight

The 10 best ways to earn money through machine learning in 2023 are listed in this article.

The 10 best ways to earn money through machine learning in 2023 take advantage of the technology's early adoption phase; those who master them may then leverage this experience into other applications.

Land Gigs with FlexJobs: FlexJobs is one of the top freelance websites for finding high-quality employment from actual businesses. Whether you are a machine learning novice or a specialist, you may begin communicating with clients to monetize your skills by working on freelancing projects.

Become a Freelancer or List your Company to Hire a Team on Toptal: Toptal is similar to FlexJobs in that it is reserved for top freelancers and top firms wanting to recruit freelance machine learning programmers. This is evident in the hourly pricing given on the site as well as the caliber of the programmers.

Develop a Simple AI App: Creating an app is another excellent approach to generating money using machine learning. You may design a subscription app in which users can pay to access certain premium features. Subscription applications are expected to earn at least 50% more money than other apps with various sorts of in-app sales.

Become an ML Educational Content Creator: You can make money with machine learning online right now if you start teaching people about machine learning and its benefits. To publish and sell your course, use online platforms that provide teaching platforms, such as Udemy and Coursera.

Create and Publish an Online ML Book: You may create a book to provide extraordinary insights on the power of 3D printing, robots, AI, synthetic biology, networks, and sensors. Online book publication is now feasible because of systems such as Kindle Direct Publication, which provides a free publishing service.

Sell Artificial Intelligence Devices: Another profitable enterprise to consider is selling GPS gadgets to automobile owners. GPS navigation services can aid with traffic forecasting. As a result, it can assist car users in saving money if they choose a different route to work. Based on everyday experiences, you may estimate the places likely to be congested with access to the current traffic condition.

Generate Vast Artificial Intelligence Data for Cash: Because machine learning can aid in the generation of massive amounts of data, you can benefit from providing AI solutions to various businesses. AI systems function similarly to humans and have a wide range of auditory and visual experiences. An AI system may learn new things and be motivated by dynamic data and movies.

Create a Product or a Service: AI chatbots are goldmines and a great method to generate money with machine learning. Creating chatbot frameworks for mobile phones in the back end and machine learning engines in the front end is an excellent way to make money quickly. Building services like sentiment analysis or Google Vision, where the firm or user pays after making numerous queries per month, is another excellent approach to earning money using ML.

Participate in ML Challenges: You may earn money using machine learning by participating in and winning ML contests, in addition to teaching it. If you are a guru or have amassed a wealth of knowledge on this subject, you may compete against other real-world machine-learning specialists in tournaments.

Create and License a Machine Learning Tech: If you can develop an AI technology and license it, you can generate money by selling your rights to someone else. As the licensor, you must sign a contract allowing another party, the licensee, to use, re-use, alter, or re-sell it for cash, compensation, or consideration.

Excerpt from:
10 Best Ways to Earn Money Through Machine Learning in 2023 - Analytics Insight

Announcing LityxIQ 6.0 – Powering Predictive Business Decisions … – PR Newswire

Lityx makes its leading alternative AI and MLOps platform easier to deliver value for organizations focused on digital transformation

WILMINGTON, Del., April 25, 2023 /PRNewswire/ -- Lityx, LLC today announced the release of LityxIQ 6.0, the first AutoML platform to combine machine learning with mathematical optimization in a single, cloud-hosted, no-code platform. A fully integrated enterprise decision engine, LityxIQ 6.0 extends a proven track record of success delivering rapid predictive and prescriptive insights, and simplifies model development, management, deployment, and monitoring to genuinely democratize advanced analytics for organizations of any size.

"Lityx combines a guided Customer Success strategy with our best-in-class LityxIQ platform to get analytics capabilities in the hands of anyone who uses data insights to make critical business decisions," said Paul Maiste, Ph.D., Lityx CEO and president. "LityxIQ is built by data scientists for analysts and statisticians, alike, to accelerate advanced analytics success to days or weeks versus months or years. Plus, LityxIQ provides immediate value to business leaders by making insights easy to understand for arriving at the best decisions faster, at a price to meet any organization's budget."

Lityx's next-gen machine learning powers predictive business decisions, making digital transformation easier and more affordable.

LityxIQ 6.0 users get enhanced MLOps functionality that streamlines machine learning development and production, ensuring that models remain robust, reliable and scalable. Additionally, through available solution accelerators, LityxIQ 6.0 makes the path from data to insights even faster.

"The platform has included essential tools for managing the end-to-end data lifecycle since our launch, and LityxIQ 6.0 makes decision intelligence even easier through additional data automation and a refreshed interface for a world-class user experience," said Dr. Maiste.

Industries achieving success through LityxIQ include global manufacturers, healthcare, financial services, media and advertising agencies, and more.

Notable enhancements in LityxIQ 6.0 include automated model monitoring, enhanced model performance analysis and comparisons, and additional model exploration tools such as customer engagement profitability optimization and threshold and cost optimization.

About Lityx: Wilmington, Del.-based Lityx, LLC is a software and services company focused on building and deploying advanced analytics and decision intelligence solutions. Founded in 2006, Lityx develops LityxIQ, a cloud-based software-as-a-service, to help business and technical users easily leverage the power of advanced analytics and mathematical optimization to achieve deeper insights and increased ROI rapidly. Lityx delivers LityxIQ 6.0 directly or through a global network of services partners. For more information, visit http://www.lityx.com.

SOURCE Lityx LLC

Go here to see the original:
Announcing LityxIQ 6.0 - Powering Predictive Business Decisions ... - PR Newswire

Application of Machine Learning in Cybersecurity – Read IT Quik

Cybersecurity is among the most crucial aspects of every business, helping to ensure the security and safety of its data. Artificial intelligence and machine learning are in high demand and are changing the cybersecurity industry as a whole. Cybersecurity may benefit greatly from machine learning, which can be used to improve available antivirus software, identify cyber dangers, and battle online crime. With the increasing sophistication of cyber threats, companies are constantly looking for innovative ways to protect their systems and data. Machine learning is one emerging technology that is making waves in cybersecurity. Cybersecurity professionals can now detect and mitigate cyber threats more effectively by leveraging artificial intelligence and machine learning algorithms. This article will delve into key areas where machine learning is transforming the security landscape.

One of the biggest challenges in cybersecurity is accurately distinguishing legitimate connection requests from suspicious activities within a company's systems. With thousands of requests pouring in constantly, human analysis can fall short. This is where machine learning can play a crucial role. AI-powered cyber threat identification systems can monitor incoming and outgoing calls and requests to the system to detect suspicious activity. For instance, many companies offer cybersecurity software that utilizes AI to analyze and flag potentially harmful activities, helping security professionals stay ahead of cyber threats.

Traditional antivirus software relies on known virus and malware signatures to detect threats, requiring frequent updates to keep up with new strains. However, machine learning can revolutionize this approach. ML-integrated antivirus software can identify viruses and malware based on their abnormal behavior rather than relying solely on signatures. This enables the software to detect not only known threats but also newly created ones. For example, companies like Cylance have developed smart antivirus software that uses ML to learn how to detect viruses and malware from scratch, reducing the dependence on signature-based detection.
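To make the contrast concrete, here is a minimal, illustrative sketch of the two detection styles. The hashes, behavior names, weights, and threshold below are all invented for demonstration; they are not drawn from Cylance or any real product, where the scoring would be learned from large volumes of labeled telemetry rather than hand-set.

```python
import hashlib

# Signature-based detection: flag a file only if its hash is already known.
KNOWN_MALWARE_HASHES = {
    # Illustrative placeholder (the MD5 of b"hello"), not a real signature.
    "5d41402abc4b2a76b9719d911017c592",
}

def signature_scan(file_bytes: bytes) -> bool:
    """Return True only for files whose hash matches a known signature."""
    return hashlib.md5(file_bytes).hexdigest() in KNOWN_MALWARE_HASHES

# Behavior-based detection: score observed runtime behaviors instead,
# so never-before-seen malware can still be flagged. Weights are invented.
BEHAVIOR_WEIGHTS = {
    "writes_to_system_dir": 0.4,
    "disables_security_tool": 0.5,
    "encrypts_many_files": 0.6,
    "opens_network_listener": 0.2,
}

def behavior_scan(observed: set, threshold: float = 0.7) -> bool:
    """Flag a process whose combined behavior score crosses the threshold."""
    score = sum(BEHAVIOR_WEIGHTS.get(b, 0.0) for b in observed)
    return score >= threshold
```

The key difference the article describes falls out directly: `signature_scan` can only catch what is already in the hash set, while `behavior_scan` fires on suspicious combinations of actions regardless of whether the binary has been seen before.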

Cyber threats can often infiltrate a company's network by stealing user credentials and logging in with legitimate credentials. Such intrusions can be challenging to detect with traditional methods. However, machine learning algorithms can analyze user behavior patterns to identify anomalies. By training the algorithm to recognize each user's standard login and logout patterns, any deviation from these patterns can trigger an alert for further investigation. For instance, Darktrace offers cybersecurity software that uses ML to analyze network traffic information and identify abnormal user behavior patterns.
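As a toy illustration of the idea, the sketch below flags a login whose hour-of-day deviates sharply from a user's historical baseline. Real products such as Darktrace's model many more signals (location, device, traffic volume) with far richer methods; this simple z-score rule, with its invented threshold, is only a stand-in for the concept of learning a per-user baseline and alerting on deviations.

```python
from statistics import mean, stdev

def login_anomaly(history_hours: list, new_hour: int,
                  z_threshold: float = 3.0) -> bool:
    """Flag a login whose hour-of-day deviates strongly from the baseline.

    `history_hours` holds the user's past login hours (0-23). Hour
    wrap-around at midnight is ignored for simplicity.
    """
    mu = mean(history_hours)
    sigma = stdev(history_hours)
    if sigma == 0:
        # A perfectly regular user: anything different is anomalous.
        return new_hour != mu
    return abs(new_hour - mu) / sigma > z_threshold
```

For a user who usually logs in around 9 a.m., a 3 a.m. login would trip the alert while a 10 a.m. login would not; in a real deployment the alert would route to an analyst for investigation rather than block access outright.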

Machine learning offers several advantages in the field of cyber security. First and foremost, it enhances accuracy by analyzing vast amounts of data in real time, helping to identify potential threats promptly. ML-powered systems can also adapt and evolve as new threats emerge, making them more resilient against rapidly growing cyber-attacks. Moreover, ML can provide valuable insights and recommendations to cybersecurity professionals, helping them make informed decisions and take proactive measures to prevent cyber threats.

As cyber threats continue to evolve, companies must embrace innovative technologies like machine learning to strengthen their cybersecurity defenses. Machine learning is transforming the cybersecurity landscape with its ability to analyze large volumes of data, adapt to new threats, and detect anomalies in user behavior. By leveraging the power of AI and ML, companies can stay ahead of cyber threats and safeguard their systems and data. Embrace the future of cybersecurity with machine learning and ensure the protection of your company's digital assets.

Continue reading here:
Application of Machine Learning in Cybersecurity - Read IT Quik

U.S. Patent filed by BYND Cannasoft to Expand its Artificial … – GlobeNewswire

BYND Cannasoft Subsidiary Zigi Carmel Initiatives & Investments LTD. filed U.S. Provisional Patent Application 63461609 on April 25, 2023 covering the mechanical structure, operation, and controlling aspects of a treatment device monitored by sensors and capable of stimulating the male sexual organs based on user preferences

ASHKELON, Israel and VANCOUVER, British Columbia, April 27, 2023 (GLOBE NEWSWIRE) -- BYND Cannasoft Enterprises Inc. (Nasdaq: BCAN) (CSE: BYND) ("BYND Cannasoft" or the "Company") announced today that its Zigi Carmel Initiatives & Investments LTD. subsidiary filed U.S. Provisional Patent Application 63461609 on April 25, 2023, covering the mechanical structure, operation, and controlling aspects of a male treatment device for external use capable of gathering information and creating custom programs according to the collected data from the sensors and uploading the data to the cloud. This U.S. Provisional Patent Application marks BYND Cannasoft's third potential candidate that could introduce new advanced haptic experiences to the fast-growing sexual wellness and sextech market.

The male treatment device utilizes artificial intelligence and machine learning algorithms to control its operational parameters based on the user's physiological parameters. The user, or a partner, can control the device with a smartphone app. Data collected by the device's sensors can be uploaded to the cloud where it will be stored to remember user preferences to create a custom experience for the user.

BYND Cannasoft announced on March 8, 2023 that its Zigi Carmel Initiatives & Investments LTD. subsidiary filed U.S. Provisional Patent Application number 63450503 covering the mechanical structure, operation, and controlling aspects of its smart female treatment device. On April 25, 2023, the company announced it received a positive opinion from the Patent Cooperation Treaty (PCT) for its A.I.-based Female Treatment Device. The Patent Cooperation Treaty (PCT) assists applicants in seeking patent protection internationally for their inventions and currently has 157 contracting states. BYND Cannasoft intends to file a similar application with the PCT for its male treatment device.

An April 2023 industry report by Market Research Future projects the Sexual Wellness Market size could grow to $115.92 billion by 2030 from $84.89 billion in 2022. The report cites the growing prevalence of Sexually Transmitted Diseases (STDs), HIV infection, increasing government initiatives, and NGOs promoting contraceptives as the key market drivers dominating the market growth. According to Forbes, the Sextech Market is expected to grow to $52.7 billion by 2026 from its current $30 billion as online sales continue to grow. BYND Cannasoft plans to develop this A.I.-based smart treatment device for men, its A.I.-based smart treatment device for women, and its EZ-G device.

Yftah Ben Yaackov, CEO and Director of BYND Cannasoft, said, "As the multi-billion-dollar sexual wellness and sextech market continues to grow, the industry is undergoing tremendous changes in consumer preferences as devices are increasingly connected online and enabled with interactive content. In this market, A.I., machine learning, and haptic technology have the potential to personalize the operational parameters of sexual wellness devices based on the physiological parameters of the user." Mr. Ben Yaackov continued, "As a corporate lawyer, I recognize the value of licensing our potential A.I. and machine learning patent portfolio to customers in the sexual wellness market and producing innovative new products. The Board of BYND Cannasoft is committed to protecting the company's I.P. covering this potentially lucrative market and bringing this innovative technology to market."

About BYND Cannasoft Enterprises Inc.

BYND Cannasoft Enterprises is an Israeli-based integrated software and cannabis company. BYND Cannasoft owns and markets "Benefit CRM," a proprietary customer relationship management (CRM) software product enabling small and medium-sized businesses to optimize their day-to-day business activities such as sales management, personnel management, marketing, call center activities, and asset management. Building on our 20 years of experience in CRM software, BYND Cannasoft is developing an innovative new CRM platform to serve the needs of the medical cannabis industry by making it a more organized, accessible, and price-transparent market. The Cannabis CRM System will include a Job Management module (BENEFIT) and a module system (CANNASOFT) for managing farms and greenhouses with varied crops. BYND Cannasoft owns the patent-pending intellectual property for the EZ-G device. This therapeutic device uses proprietary software to regulate the flow of low concentrations of CBD oil, hemp seed oil, and other natural oils into the soft tissues of the female reproductive system to potentially treat a wide variety of women's health issues. The EZ-G device incorporates technological advancements as a sex toy offering a more realistic experience, and the prototype utilizes sensors to determine what enhances the user's pleasure. The user can control the device through a Bluetooth app installed on a smartphone or other portable device. The data will be transmitted to and received from the secure cloud using artificial intelligence (AI). The data is combined with other anonymized user preferences to improve the device's operation by increasing sexual satisfaction.

For further information, please refer to information available on the Company's website: http://www.cannasoft-crm.com, the CSE's website: http://www.thecse.com/en/listings/life-sciences/bynd-cannasoft-enterprises-inc and on SEDAR: http://www.sedar.com.

Gabi KabazoChief Financial OfficerTel: (604) 833-6820email: ir@cannasoft-crm.com

For Media and Investor Relations, please contact:

David L. Kugelman(866) 692-6847 Toll Free - U.S. & Canada(404) 281-8556 Mobile and WhatsAppdk@atlcp.comSkype: kugsusa

Cautionary Note Regarding Forward-Looking Statements

This press release contains forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995 involving risks and uncertainties, which may cause results to differ materially from the statements made. We intend such forward-looking statements to be covered by the safe harbor provisions for forward-looking statements contained in Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. When used in this document, the words "may," "would," "could," "will," "intend," "plan," "anticipate," "believe," "estimate," "expect," "potential," "continue," "strategy," "future," "project," "target," and similar expressions are intended to identify forward-looking statements, though not all forward looking statements use these words or expressions. All statements contained in this press release other than statements of historical fact, including, without limitation, statements regarding our male treatment device, our Cannabis CRM platform, our expanded EZ-G patent application, our market growth, and our objectives for future operations, are forward looking statements. Additional regulatory standards may be required, including FDA approval or any other approval for the purpose of manufacturing, marketing, and selling the devices under therapeutic indications. There is no certainty that the aforementioned approvals will be received, and all the information in this release is forward-looking. Such statements reflect the company's current views with respect to future events and are subject to such risks and uncertainties. 
Many factors could cause actual results to differ materially from the statements made, including unanticipated regulatory requests and delays, final patents approval, and those factors discussed in filings made by the company with the Canadian securities regulatory authorities, including (without limitation) in the company's management's discussion and analysis for the year ended December 31, 2022 and annual information form dated March 31, 2023, which are available under the company's profile at www.sedar.com, and in filings made with the U.S. Securities and Exchange Commission. Should one or more of these factors occur, or should assumptions underlying the forward-looking statements prove incorrect, actual results may vary materially from those described herein as intended, planned, anticipated, or expected. We do not intend and do not assume any obligation to update these forward-looking statements, except as required by law. Any such forward-looking statements represent management's estimates as of the date of this press release. While we may elect to update such forward-looking statements at some point in the future, we disclaim any obligation to do so, even if subsequent events cause our views to change. Shareholders are cautioned not to put undue reliance on such forward-looking statements.

Go here to read the rest:
U.S. Patent filed by BYND Cannasoft to Expand its Artificial ... - GlobeNewswire

Current Applications of Artificial Intelligence in Oncology – Targeted Oncology


The evolution of artificial intelligence (AI) is reshaping the field of oncology by providing new devices to detect cancer, individualize treatments, manage patients, and more.

Given the large number of patients diagnosed with cancer and the amount of data produced during cancer treatment, interest in applying AI to improve oncologic care is expanding and holds potential.

"An aspect of care delivery where AI is exciting and holds so much promise is democratizing knowledge and access to knowledge. Generating more data, bringing together the patient data with our knowledge and research, and developing these advanced clinical decision support systems that use AI are going to be ways in which we can make sure clinicians can provide the best care for each individual patient," Tufia C. Haddad, MD, told Targeted Oncology™.

While cancer treatment options have only improved over past decades, there is an unmet medical need to make these cancer treatments more affordable and personalized for each patient with cancer.1

As we continue to learn about and better understand the use of AI in oncology, experts can improve outcomes, develop approaches to solve problems in the space, and advance the development of treatments that are made available to patients.

AI is a branch of computer science that deals with the simulation of intelligent behavior in computers. These computers follow algorithms, established by humans or learned by the computer itself, to support decisions and complete certain tasks. Under the AI umbrella lie important subfields.

Machine learning is the process by which a computer improves its own performance by continually incorporating newly generated data into an existing iterative model. According to the FDA, one of the potential benefits of machine learning is its ability to create new insights from the vast amount of data generated during the delivery of health care every day.2
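The iterative improvement described here can be sketched as an online learner that adjusts itself with each newly arriving example. The classic perceptron below is chosen purely for brevity and is not a description of any system mentioned in this article; the learning rate and data are illustrative.

```python
def perceptron_update(weights, bias, features, label, lr=0.1):
    """One online learning step: nudge the model only when it errs.

    `label` is +1 or -1. Fed a stream of newly generated examples,
    repeated calls implement the iterative self-improvement described
    above: the model's parameters keep adapting as data arrives.
    """
    activation = sum(w * x for w, x in zip(weights, features)) + bias
    prediction = 1 if activation >= 0 else -1
    if prediction != label:
        # Move the decision boundary toward the misclassified example.
        weights = [w + lr * label * x for w, x in zip(weights, features)]
        bias += lr * label
    return weights, bias
```

A deployed clinical model would of course be far more complex and carefully validated, but the loop is the same in spirit: predict, compare against newly observed outcomes, and update.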

"Sometimes, we can use machine learning techniques in a way where we are training the computer to, for example, discern benign pathology from malignant pathology, and so we train the computer with annotated datasets, where we are showing the different images of benign vs malignancy. Ultimately, the computer will bring forward an algorithm that we then take to separate datasets that are no longer labeled as benign or malignant. Then we continue to train that algorithm and fine-tune the algorithm," said Haddad, a medical oncologist and associate professor of oncology at the Rochester, Minnesota, campus of the Mayo Clinic.
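The annotate-train-apply workflow Haddad describes can be sketched with a toy nearest-centroid classifier: learn one prototype per class from labeled examples, then assign unlabeled cases to the closest prototype. The two-feature "image-derived" data below is entirely invented for illustration; real pathology models learn from thousands of annotated images with deep networks.

```python
from statistics import mean

def train_centroids(samples, labels):
    """Compute one centroid per class from labeled feature vectors."""
    by_label = {}
    for vec, lab in zip(samples, labels):
        by_label.setdefault(lab, []).append(vec)
    return {
        lab: tuple(mean(dim) for dim in zip(*vecs))
        for lab, vecs in by_label.items()
    }

def classify(centroids, vec):
    """Assign an unlabeled vector to the nearest class centroid."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: dist2(centroids[lab], vec))

# Invented two-feature toy data standing in for image-derived features.
train_x = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
train_y = ["benign", "benign", "malignant", "malignant"]
model = train_centroids(train_x, train_y)
```

Once trained, `classify(model, new_features)` plays the role of applying the fine-tuned algorithm to datasets that are no longer labeled.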

Deep learning is a subfield of machine learning in which mathematical algorithms are built from multi-layered computational units that resemble human cognition. These include neural networks of different architecture types, including recurrent neural networks, convolutional neural networks, and long short-term memory networks.

Danielle S. Bitterman, MD

"Many of the applications integrated into commercial systems are proprietary, so it is hard to know what specific AI methods underlie their system. For some applications, even simple rules-based systems still hold value. However, the recent surge in AI advances is primarily driven by more advanced machine learning methods, especially neural network-based deep learning, in which the AI teaches itself to learn patterns from complex data," Danielle S. Bitterman, MD, told Targeted Oncology™. "For many applications, deep learning methods have better performance, but come at a trade-off of being black boxes, meaning it is difficult for humans to understand how they arrive at their decision. This creates new challenges for safety, trust, and reliability."

Utilizing AI is important because the capacity of the human brain to process information is limited, creating an urgent need for alternative strategies to process big data. With machine learning and AI, clinicians can take advantage of the increased availability of data and the growing storage and computing power.

As of October 5, 2022, the FDA had approved 521 medical devices which utilize AI and/or machine learning, with the majority of devices in the radiology space.2

"Primarily, where it is being more robustly developed and, in some cases, now, at the point of receiving FDA approval and starting to be applied and utilized in the hospitals and clinics, is in the cancer diagnostic space. This includes algorithms to help improve the efficiency and accuracy of, for example, interpreting mammograms. Radiology services, and to some extent, pathology, are where some of these machine learning and deep learning algorithms and AI models are being used," said Haddad.

In radiology, there are many applications of AI, including deep learning algorithms to analyze imaging data that is obtained during routine cancer care. According to Haddad, some of this can include evaluating disease classification, detection, segmentation, characterization, and monitoring a patient with cancer.

According to radiation oncologist Matthew A. Manning, MD, AI is already a backbone of some clinical decision support tools.

"The use of AI in oncology is rapidly increasing, and it has the potential to revolutionize cancer diagnosis, treatment, and research. It helps with driving automation. In radiation oncology, there are different medical record platforms necessary for the practice that are often separate from the hospital medical record. Creating these interfaces that allow reductions in the redundancy of work for both clinicians and administrative staff is important. Tools using AI and business intelligence are accelerating our efforts in radiation oncology," Manning, former chief of Oncology at Cone Health, told Targeted Oncology™ in an interview.

By combining AI with human expertise, mammography screening has been improved for patients with breast cancer. Additionally, deep learning models have been trained to classify and detect disease subtypes based on images and genetic data.

To find lung nodules or brain metastases on MRI readouts, AI uses bounding boxes to locate and classify lesions or objects of interest. Detection using AI supports physicians when they read medical images.
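Detections like these are conventionally scored with intersection-over-union (IoU), which measures how well a predicted bounding box overlaps a reference box such as an expert annotation. A minimal sketch of the standard metric (the box format here is an assumed `(x1, y1, x2, y2)` convention):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) bounding boxes.

    Returns 1.0 for identical boxes, 0.0 for disjoint ones; detection
    systems typically count a prediction as correct above some IoU cutoff.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (zero-sized if the boxes are disjoint).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```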

Segmentation involves recognizing these lesions and assessing their volume and size by classifying individual pixels as belonging to an organ or a lesion. An example is brain gliomas, which require quantitative metrics for management, risk stratification, and prognostication.
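Predicted segmentations of this kind are commonly compared against expert annotations with the Dice coefficient. A minimal sketch over flattened binary masks, illustrative rather than tied to any system named here:

```python
def dice(mask_a, mask_b):
    """Dice similarity between two binary segmentation masks.

    Masks are flattened sequences of 0/1 pixel labels; 1.0 means a
    perfect match between the predicted and annotated lesion outlines.
    """
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Two empty masks agree perfectly by convention.
    return 2 * inter / total if total else 1.0
```

The same formula extends to 3D volumes (for the quantitative glioma metrics mentioned above) simply by flattening the voxel grid.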

Deep learning methods have been applied to medical images to determine a large number of features that are undetectable by humans.3 An example of using AI to characterize tumors comes from the study of radiomics, which combines disease features with clinicogenomic information. These methods can inform models that successfully predict treatment response and/or adverse effects from cancer treatments.

Radiomics can be applied to a variety of cancer types, including liver, brain, and lung tumors. According to research in Future Science OA,1 deep learning using radiomic features from brain MRI can also help differentiate brain gliomas from brain metastases with performance similar to trained neuroradiologists.

Utilizing AI can dramatically change the ways patients with cancer are monitored. It can detect a multitude of discriminative features in imaging that are unreadable by humans. One process that is normally performed by radiologists and that plays a major role in determining patient outcomes is measuring how tumors respond to cancer treatment.4 However, the process is known to be labor-intensive, subjective, and prone to inconsistency.

To alleviate this problem, researchers developed a deep learning-based method that is able to automatically annotate tumors in patients with cancer. In a small study, researchers from Johns Hopkins Kimmel Comprehensive Cancer Center and its Bloomberg~Kimmel Institute for Cancer Immunotherapy successfully trained a machine learning algorithm to predict which patients with melanoma would respond to treatment and which would not. This open-source program, DeepTCR, was valuable as a predictive tool and helped researchers understand the biological mechanisms and responses to immunotherapy.

This program can also help clinicians monitor patients by stratifying patient outcomes, identifying predictive features, and helping them manage patients with the best treatments.

Proper screening for early diagnosis and treatment is a big factor when combating cancer. In the current space, AI makes obtaining results easier and more convenient.

"One of the important things to think about with AI, or the capabilities of AI in oncology, is the ability to see what the human eye and the human mind cannot see or interpret today. It is gathering all these different data points and developing or recognizing patterns in the data to help with interpretation. This can augment some of the accuracy for cancer diagnostics," added Haddad.

AI may also provide faster, more accurate results, especially in breast cancer screening. While the incorporation of AI into screening methods is a relatively new and emerging field, it is promising for the early detection of breast cancer, thus resulting in a better prognosis. For patients with breast cancer, mammography is the most popular screening method.

Another example of AI in the current treatment landscape is the colonoscopy for patients with colon cancer. Colon cancer screening utilizes a camera to give the gastroenterologist the ability to see inside the colon and bowel. By taking those images and applying machine learning and deep learning neural network techniques, there is an ability to develop algorithms that not only help to better detect polyps or precancerous lesions, but also discern early-stage from advanced cancers.

In addition, deep learning models can also help clinicians predict the future development of cancer and some AI applications are already being implemented in clinical practice. With further development, as well as refinement of the already created devices, AI will be further applied.

"In terms of improving cancer screening, AI has been applied in radiology to analyze and identify tumors on scans. In the current state, AI is making its way into computer-assisted detection on diagnostic films. Looking at a chest CT, trying to find a small nodule, we see that AI is very powerful at finding spots that maybe the human eye may miss. In terms of radiation oncology, we anticipate AI will be very useful ultimately in the setting of clinical decision support," said Manning.

For oncologists, the emergence of the COVID-19 pandemic and the time spent on clinical documentation have only heightened feelings of burnout. However, Haddad notes that a potential solution to help mitigate burnout is the development and integration of precision technologies, including AI, as they can help reduce workload and increase productivity.

"There are challenges with workforce shortages as a consequence of the COVID-19 pandemic, with a lot of burnout at unprecedented rates. Thinking about how artificial intelligence can help make [clinicians'] jobs easier and make them more efficient. There are smart hospitals, smart clinic rooms, where just from the ingestion of voice, conversations between the physician and patient can be translated into clinical documentation to help reduce the time that clinicians need to spend doing the tedious work that we know contributes to burnout, including doing the clinical documentation, prior authorizations, order sets, etc.," said Haddad.

Numerous studies have been published regarding the potential of machine learning and AI for the prognostication of cancer. Results from these trials have suggested that the performance and productivity of oncologists can be improved with the use of AI.5

An example is with the prediction of recurrences and overall survival. Deep learning can enhance precision medicine and improve clinical decisions, and with this, oncologists may feel emotional satisfaction, reduced depersonalization, and increased professional efficacy. This leaves clinicians with the potential of increased job satisfaction and a reduced feeling of burnout.

Research also has highlighted that the intense workload contributes to occupational stress. This in turn has a negative effect on the quality of care that is offered to patients.

Additionally, it has been reported that administrative tasks, such as collecting clinical, billing, or insurance information, contribute to the workload faced by clinicians, and this leads to a significantly limited time for direct face-to-face interaction between patients and their physicians. Thus, AI has helped significantly reduce this administrative burden.

Overall, if clinicians can do less of the tedious clerical work and spend more time doing the things they were trained to do, like having time with the patient, their overall outlook on their job will be more positive.

"AI will help to see that joy restored and to have a better experience for our patients. I believe that AI is going to transform most aspects of medicine over the coming years. Cancer care is extremely complex and generates huge amounts of varied digital data which can be tapped into by computational methods. Lower-level tasks, such as scheduling and triaging patient messages, will become increasingly automated. I think we will increasingly see clinical decision-support applications providing diagnostic and treatment recommendations to physicians. AI may also be able to generate novel insights that change our overall approach to managing cancers," said Haddad.

While there have been increasing amounts of updates and developments for AI in the oncology space, according to Bitterman, a large gap remains between AI research and what is already being used.

To bridge this gap, Bitterman notes that there must be further understanding by both clinicians and patients regarding how to properly interact with AI applications, and best optimize interactions for safety, reliability, and trust.

"Digital data is still very siloed within institutions, and so regulatory changes are going to be needed before we can realize the full value of AI. We also need better standards and methods to assess bias and generalizability of AI systems to make sure that advances in AI don't leave minority populations behind and worsen health inequities."

Additionally, there is a concern that patients' voices are being left out of the AI conversation. According to Bitterman, AI applications are developed using patients' data and, as a result, will likely transform their care journey. To further improve the use of AI for patients with cancer, it is key to gather opinions from patients.

With further research, it should be possible to overcome the current challenges being faced with AI to continue to improve its use, make AI more popular, and improve the overall quality-of-life for patients with cancer.

"We need to engage patients at every step of the AI development/implementation lifecycle, and make sure that we are developing applications that are patient-centered and prioritize trust, safety, and patients' lived experiences," concluded Bitterman.

See the original post here:
Current Applications of Artificial Intelligence in Oncology - Targeted Oncology

Cracking the Code of Sound Recognition: Machine Learning Model Reveals How Our Brains Understand … – Neuroscience News

Summary: Researchers developed a machine learning model that mimics how the brains of social animals distinguish between sound categories, like mating, food or danger, and react accordingly.

The algorithm helps explain how our brains recognize the meaning of communication sounds, such as spoken words or animal calls, providing crucial insight into the intricacies of neuronal processing.

Insights from the research pave the way for treating disorders that affect speech recognition and improving hearing aids.

Source: University of Pittsburgh

In a paper published today in Communications Biology, auditory neuroscientists at the University of Pittsburgh describe a machine-learning model that helps explain how the brain recognizes the meaning of communication sounds, such as animal calls or spoken words.

The algorithm described in the study models how social animals, including marmoset monkeys and guinea pigs, use sound-processing networks in their brains to distinguish between sound categories, such as calls for mating, food or danger, and act on them.

The study is an important step toward understanding the intricacies and complexities of neuronal processing that underlies sound recognition. The insights from this work pave the way for understanding, and eventually treating, disorders that affect speech recognition, and improving hearing aids.

"More or less everyone we know will lose some of their hearing at some point in their lives, either as a result of aging or exposure to noise. Understanding the biology of sound recognition and finding ways to improve it is important," said senior author and Pitt assistant professor of neurobiology Srivatsun Sadagopan, Ph.D.

"But the process of vocal communication is fascinating in and of itself. The way our brains interact with one another and can take ideas and convey them through sound is nothing short of magical."

Humans and animals encounter an astounding diversity of sounds every day, from the cacophony of the jungle to the hum inside a busy restaurant.

No matter the noise in the world that surrounds us, humans and other animals are able to communicate and understand one another, regardless of variations such as the pitch of a speaker's voice or their accent.

When we hear the word "hello," for example, we recognize its meaning regardless of whether it was said with an American or British accent, whether the speaker is a woman or a man, or whether we're in a quiet room or at a busy intersection.

The team started with the intuition that the way the human brain recognizes and captures the meaning of communication sounds may be similar to how it recognizes faces compared with other objects. Faces are highly diverse but have some common characteristics.

Instead of matching every face that we encounter to some perfect template face, our brain picks up on useful features, such as the eyes, nose and mouth, and their relative positions, and creates a mental map of these small characteristics that define a face.

In a series of studies, the team showed that communication sounds may also be made up of such small characteristics.

The researchers first built a machine learning model of sound processing to recognize the different sounds made by social animals. To test if brain responses corresponded with the model, they recorded brain activity from guinea pigs listening to their kins communication sounds.

Neurons in regions of the brain that are responsible for processing sounds lit up with a flurry of electrical activity when they heard a noise that had features present in specific types of these sounds, similar to the machine learning model.

They then wanted to check the performance of the model against the real-life behavior of the animals.

Guinea pigs were put in an enclosure and exposed to different categories of sounds, squeaks and grunts that serve as distinct sound signals. Researchers then trained the guinea pigs to walk over to different corners of the enclosure and receive fruit rewards depending on which category of sound was played.

Then, they made the tasks harder: To mimic the way humans recognize the meaning of words spoken by people with different accents, the researchers ran guinea pig calls through sound-altering software, speeding them up or slowing them down, raising or lowering their pitch, or adding noise and echoes.

Not only were the animals able to perform the task as consistently as if the calls they heard were unaltered, they continued to perform well despite artificial echoes or noise. Better yet, the machine learning model described their behavior (and the underlying activation of sound-processing neurons in the brain) perfectly.

As a next step, the researchers are translating the models accuracy from animals into human speech.

"From an engineering viewpoint, there are much better speech recognition models out there. What's unique about our model is that we have a close correspondence with behavior and brain activity, giving us more insight into the biology.

"In the future, these insights can be used to help people with neurodevelopmental conditions or to help engineer better hearing aids," said lead author Satyabrata Parida, Ph.D., a postdoctoral fellow at Pitt's department of neurobiology.

"A lot of people struggle with conditions that make it hard for them to recognize speech," said Manaswini Kar, a student in the Sadagopan lab.

"Understanding how a neurotypical brain recognizes words and makes sense of the auditory world around it will make it possible to understand and help those who struggle."

Author: Anastasia Gorelova. Source: University of Pittsburgh. Contact: Anastasia Gorelova, University of Pittsburgh. Image: The image is credited to Neuroscience News.

Original Research: Open access. "Adaptive mechanisms facilitate robust performance in noise and in reverberation in an auditory categorization model" by Srivatsun Sadagopan et al. Communications Biology

Abstract

Adaptive mechanisms facilitate robust performance in noise and in reverberation in an auditory categorization model

For robust vocalization perception, the auditory system must generalize over variability in vocalization production as well as variability arising from the listening environment (e.g., noise and reverberation).

We previously demonstrated using guinea pig and marmoset vocalizations that a hierarchical model generalized over production variability by detecting sparse intermediate-complexity features that are maximally informative about vocalization category from a dense spectrotemporal input representation.

Here, we explore three biologically feasible model extensions to generalize over environmental variability: (1) training in degraded conditions, (2) adaptation to sound statistics in the spectrotemporal stage and (3) sensitivity adjustment at the feature detection stage. All mechanisms improved vocalization categorization performance, but improvement trends varied across degradation type and vocalization type.

One or more adaptive mechanisms were required for model performance to approach the behavioral performance of guinea pigs on a vocalization categorization task.

These results highlight the contributions of adaptive mechanisms at multiple auditory processing stages to achieve robust auditory categorization.

Visit link:
Cracking the Code of Sound Recognition: Machine Learning Model Reveals How Our Brains Understand ... - Neuroscience News

How to Improve Your Machine Learning Model With TensorFlow’s … – MUO – MakeUseOf

Data augmentation is the process of applying various transformations to the training data. It helps increase the diversity of the dataset and prevent overfitting. Overfitting mostly occurs when you have limited data to train your model.

Here, you will learn how to use TensorFlow's data augmentation module to diversify your dataset. This will prevent overfitting by generating new data points that are slightly different from the original data.

You will use the cats and dogs dataset from Kaggle. This dataset contains approximately 3,000 images of cats and dogs. These images are split into training, testing, and validation sets.

The label 1.0 represents a dog while the label 0.0 represents a cat.

The full source code, both with and without the data augmentation techniques, is available in a GitHub repository.

To follow along, you should have a basic understanding of Python as well as basic knowledge of machine learning. If you need a refresher, consider working through some tutorials on machine learning.

Open Google Colab. Change the runtime type to GPU. Then, execute the following magic command on the first code cell to install TensorFlow into your environment.

Import TensorFlow and its relevant modules and classes.

The tensorflow.keras.preprocessing.image module will enable you to perform data augmentation on your dataset.

Create an instance of the ImageDataGenerator class for the train data. You will use this object for preprocessing the training data. It will generate batches of augmented image data in real time during model training.

In the task of classifying whether an image is a cat or a dog, you can use the flipping, random width, random height, random brightness, and zooming data augmentation techniques. These techniques will generate new data which contains variations of the original data representing real-world scenarios.
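Under the hood, these transformations are simple array operations. As a rough illustration of the idea (not the ImageDataGenerator internals), here is what horizontal flipping and a brightness shift do to a toy pixel grid:

```python
# A toy 2x2 grayscale "image" with pixel values in the 0-255 range.
image = [[10, 20],
         [30, 40]]

# Horizontal flip: mirror each row left-to-right.
flipped = [row[::-1] for row in image]

# Brightness shift: scale every pixel, clamped to the valid 0-255 range.
brighter = [[min(255, p * 1.5) for p in row] for row in image]

print(flipped)   # [[20, 10], [40, 30]]
print(brighter)  # [[15.0, 30.0], [45.0, 60.0]]
```

Each augmented copy is a new, slightly different training example, which is exactly how augmentation stretches a small dataset.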

Create another instance of the ImageDataGenerator class for the test data. You will need the rescale parameter. It will normalize the pixel values of the test images to match the format used during training.

Create a final instance of the ImageDataGenerator class for the validation data. Rescale the validation data the same way as the test data.

You do not need to apply the other augmentation techniques to the test and validation data. This is because the model uses the test and validation data for evaluation purposes only. They should reflect the original data distribution.

Create a DirectoryIterator object from the training directory. It will generate batches of augmented images. Then specify the directory that stores the training data. Resize the images to a fixed size of 64x64 pixels. Specify the number of images that each batch will use. Lastly, specify the type of label to be binary (i.e., cat or dog).
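Keras derives the class labels from the subdirectory names, assigning integer indices in alphanumeric order, which is consistent with cats mapping to 0.0 and dogs to 1.0 above. A pure-Python sketch of that mapping (the subdirectory names are assumed):

```python
# flow_from_directory sorts class subdirectory names alphanumerically
# and assigns each one an integer index.
class_dirs = ["dogs", "cats"]  # order on disk does not matter

class_indices = {name: i for i, name in enumerate(sorted(class_dirs))}

print(class_indices)  # {'cats': 0, 'dogs': 1}
```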

Create another DirectoryIterator object from the testing directory. Set the parameters to the same values as those of the training data.

Create a final DirectoryIterator object from the validation directory. The parameters remain the same as those of the training and testing data.

The directory iterators do not augment the validation and test datasets.

Define the architecture of your neural network. Use a Convolutional Neural Network (CNN). CNNs are designed to recognize patterns and features in images.

model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))

Compile the model using the binary cross-entropy loss function, which binary classification problems commonly use. For the optimizer, use the Adam optimizer, an adaptive learning rate optimization algorithm. Finally, evaluate the model in terms of accuracy.

Print a summary of the model's architecture to the console.

The following screenshot shows the visualization of the model architecture.

This gives you an overview of how your model design looks.

Train the model using the fit() method. Set the number of steps per epoch to be the number of training samples divided by the batch_size. Also, set the validation data and the number of validation steps.
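The steps-per-epoch arithmetic is just the number of batches needed to cover the training set once, rounded up. Assuming, for illustration, 2,000 training images and a batch size of 32 (hypothetical numbers, not taken from the dataset):

```python
import math

num_train_samples = 2000  # hypothetical training set size
batch_size = 32

# One "step" processes one batch; round up so no samples are skipped.
steps_per_epoch = math.ceil(num_train_samples / batch_size)

print(steps_per_epoch)  # 63
```

The same calculation gives the number of validation steps, using the validation set size instead.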

The ImageDataGenerator class applies data augmentation to the training data in real time. This makes the training process of the model slower.

Evaluate the performance of your model on the test data using the evaluate() method. Also, print the test loss and accuracy to the console.

The following screenshot shows the model's performance.

The model performs reasonably well on data it has never seen.

When you run the code that does not implement the data augmentation techniques, the model's training accuracy reaches 1.0, which means it overfits. It then performs poorly on data it has never seen before, because it learns the peculiarities of the training dataset.
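Overfitting is easy to picture with a deliberately silly "model" that simply memorizes its training data: it scores perfectly on examples it has seen and fails on anything new. (A toy sketch with invented data, not a real classifier.)

```python
# Training set: input -> label, memorized verbatim.
train = {"img1": "cat", "img2": "dog", "img3": "cat"}

def memorizer(x):
    # Perfect recall on seen inputs, useless on unseen ones.
    return train.get(x, "no idea")

train_accuracy = sum(memorizer(x) == y for x, y in train.items()) / len(train)

print(train_accuracy)        # 1.0
print(memorizer("img_new"))  # no idea
```

Augmentation fights this failure mode by making it impossible for the model to see the exact same example twice.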

TensorFlow is a diverse and powerful library. It is capable of training complex deep learning models and can run on a range of devices from smartphones to clusters of servers. It has helped power edge computing devices that utilize machine learning.

See the rest here:
How to Improve Your Machine Learning Model With TensorFlow's ... - MUO - MakeUseOf

Artificial Intelligence and Machine Learning in Cancer Detection – Targeted Oncology

Toufic Kachaamy, MD

City of Hope Phoenix

Since the first artificial intelligence (AI)-enabled medical device received FDA approval in 1995 for cervical slide interpretation, 521 FDA approvals have been granted to AI-powered devices as of May 2023.1 Many of these devices target early cancer detection, an area of significant need since most cancers are diagnosed at a later stage. For most patients, an earlier diagnosis means a higher chance of positive outcomes such as cure, less need for systemic therapy, and a better chance of maintaining a good quality of life after cancer treatment.

An extensive review of these devices is beyond the scope of one article, but this article will summarize the major areas where AI and machine learning (ML) are currently being used and studied for early cancer detection.

The first area is large database analysis for identifying patients at risk for cancer or with early signs of cancer. These models analyze the electronic medical record, a structured digital database, and use pattern recognition and natural language processing to identify patients with specific characteristics. These include individuals with signs and symptoms suggestive of cancer; those at risk of cancer based on known risk factors; or those with specific health measures associated with cancer. For example, pancreatic cancer has a relatively low incidence but is still the fourth leading cause of cancer death. Because of the low incidence, screening the general population is neither practical nor cost-effective. ML can be used to analyze specific health findings such as new-onset hyperglycemia2 and certain health data from questionnaires3 to classify members of the population as high risk for pancreatic cancer. This allows the screened population to be "enriched with pancreatic cancer," thus making screening higher yield and more cost-effective at an earlier stage.

Another area leveraging AI and ML is image analysis. Human vision is sharpest centrally, representing less than 3 degrees of the visual field. Peripheral vision has significantly less spatial resolution and is more suited for detecting rapid movements and "big picture" analysis. In addition, "inattentional blindness," or missing significant findings when focused on a specific task, is a human vulnerability, as demonstrated in the study showing that even experts missed a gorilla in a CT scan when searching for lung nodules.3 Machines are not susceptible to fatigue, distraction, blind spots, or inattentional blindness. In a study that compared a deep learning algorithm to radiologists from the National Lung Screening Trial, the algorithm performed better than the radiologists in detecting lung cancer on chest X-rays.4

AI algorithm analysis of histologic specimens can serve as an initial screening tool and as a real-time interactive assistant during histological analysis.5 AI is capable of diagnosing cancer with high accuracy.6 It can accurately determine grades, such as the Gleason score for prostate cancer, and identify lymph node metastases.7 AI is also being explored for predicting gene mutations from histologic analysis, which has the potential to decrease cost and shorten time to analysis. Both are limitations in today's practice that restrict universal gene analysis in patients with cancer,8 even as such analyses gain a role in precision cancer treatment.9

An exciting and up-and-coming area for AI and deep learning is the combination of the approaches above, such as pairing large data analysis with pathology assessment and/or image analysis. For example, using medical record analysis and chest X-ray (CXR) findings, deep learning was used to identify patients at high risk for lung cancer who would benefit the most from lung cancer screening. This has great potential, especially since only 5% of patients eligible for lung cancer screening are currently being screened.10

Finally, there is the holy grail of cancer detection: blood-based multicancer detection tests, many of which are already available or in development, and which often use AI algorithms for their development, analysis, and validation.11

It is hard to imagine an area of medicine that AI and ML will not impact. AI is unlikely, at least for the foreseeable future, to replace physicians. Instead, it will be used to enhance physician performance and improve accuracy and efficiency. However, it is essential to note that machine-human interaction is very complicated, and we are only scratching the surface of this era. It is premature to assume that real-world outcomes will match outcomes seen in trials. Any outcome that involves human analysis and final decision-making is affected by human performance, and training and studying human behavior are needed for human-machine interaction to produce optimal outcomes. For example, randomized controlled studies have shown increased polyp detection during colonoscopy using computer-aided detection, or AI-based image analysis.12 However, real-life data did not show similar findings,13 likely due to differences in how AI affects different endoscopists.

Artificial intelligence and machine learning are dramatically altering how medicine is practiced, and cancer detection is no exception. Even in the medical world, where change is typically slower than in other disciplines, AI's pace of innovation is arriving quickly and, in certain instances, faster than many can grasp and adapt to.

Read more from the original source:
Artificial Intelligence and Machine Learning in Cancer Detection - Targeted Oncology

How to get going with machine learning – Robotics and Automation News

Everyone around us is talking about machine learning and artificial intelligence. But is the hype around machine learning justified? Let's dive into the details of machine learning and how to start with it from scratch.

Machine learning is a technological method through which we teach our computers and electronic gadgets how to provide accurate answers. When data is fed into the system, it acts in a defined way to find precise answers to the questions asked.

For example, questions such as: "What does an avocado taste like?", "What should I consider when buying a used car?", "How do I drive safely on the road?", and so on.

But with machine learning, the computer is trained to give precise answers even without explicit input from developers. In other words, machine learning is a sophisticated approach in which computers are trained to provide correct answers to complicated questions.

Furthermore, they are trained to learn more, distinguish confusing questions, and provide satisfactory answers.

Machine learning and AI are the future. Therefore, people who learn these skills and become proficient will be first in line to reap the rewards. There are companies that offer machine learning services to augment your business.

In other words, engaging with these services can give a business a real advantage and support its exponential growth.

Initially, the developers do a massive amount of training and modeling, along with other crucial work for machine learning development. Additionally, vast amounts of data are used to provide precise results and effectively reduce decision-making time.

Here are the simple steps that can get you started with machine learning.

Make up your mind and choose a tool with which you want to master machine learning development.

Always look for the best language in terms of practicality and its acceptance across multiple platforms.

As we know, machine learning involves a rigorous process of modeling and training. Therefore, we must practice the points given below.

To take the most advantage, create a clear and polished portfolio to demonstrate your learned skills to the world. Keep in mind the points mentioned below, too.

When we apply a specific algorithm to a data set, the output we get is called a model. It is also known as a hypothesis.

In technical terms, a feature is a measurable property that describes a characteristic of the data in machine learning. Features are used as inputs to a model and are crucial for recognizing and classifying patterns.

For example, to recognize a fruit, a model might use features such as smell, taste, size, and color. Features are vital in distinguishing the target of a query using several characteristics.

The output value or variable that the machine learning model aims to predict is called the target.

For example, in the fruit data set above, each label identifies a specific fruit such as orange, banana, apple, or pineapple.
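To make the feature/target distinction concrete, here is a toy, hand-written rule standing in for what a trained model would learn (the rules and samples are invented for illustration):

```python
# Each sample is a dict of features; the target is the fruit name.
samples = [
    {"color": "orange", "size": "medium"},
    {"color": "yellow", "size": "long"},
]

def predict(features):
    # Hand-written rules in place of learned parameters.
    if features["color"] == "orange":
        return "orange"
    if features["color"] == "yellow" and features["size"] == "long":
        return "banana"
    return "unknown"

print([predict(s) for s in samples])  # ['orange', 'banana']
```

A real model discovers rules like these from data instead of having them written by hand.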

In machine learning, training is the process of learning the weights and biases of a model from labeled examples. In supervised learning, many iterations are performed to build a machine learning algorithm that minimizes loss and produces the correct output.

When a model is complete, we can give it a variety of inputs and expect the correct results as output. Always check that the system performs accurately on unseen data; only then can we call the operation successful.

After preparing our model, we can input a set of data for which it will generate a predicted output or label. However, verifying its performance on new, untested data is essential before concluding that the machine is performing well.
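The train-then-predict cycle described above can be sketched in a few lines of plain Python: fit a single weight w by gradient descent so that predictions w * x match the targets, then use the fitted model on an input it never saw during training. (A minimal illustration with invented data, not production code.)

```python
# Training data follows y = 2x; the model must discover the weight 2.
data = [(1, 2), (2, 4), (3, 6)]

w = 0.0    # model parameter, starts uninformed
lr = 0.05  # learning rate

for _ in range(100):
    # Gradient of mean squared error with respect to w.
    grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 3))       # 2.0
print(round(w * 10, 1))  # 20.0 -- prediction on the unseen input x=10
```

The final check, predicting on an input outside the training set, is exactly the "verify on untested data" step the paragraph above describes.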

As machine learning continues to grow in significance for enterprise operations and AI becomes more practical in corporate settings, the machine learning platform wars will only intensify.

Continued research into deep learning and AI is increasingly focused on developing more general applications. Current AI models require extensive training to produce an algorithm that is highly optimized to perform one task.

But some researchers are exploring ways to make models more flexible, seeking techniques that allow a machine to apply context learned from one task to future, different tasks.

Read the original here:
How to get going with machine learning - Robotics and Automation News

How the GPT Machine Learning Model Advances Generative AI – Acceleration Economy

In episode 105 of the AI/Hyperautomation Minute, Toni Witt provides clarity on generative AI, its underlying technology, the GPT (generative pre-trained transformer) machine learning model, and how it's evolving.

This episode is sponsored by Acceleration Economy's Generative AI Digital Summit, taking place on May 25. Registration for the event, which features practitioner and platform insights on how solutions such as ChatGPT will impact the future of work, customer experience, data strategy, cybersecurity, and more, is free. To reserve your spot, sign up today.

00:26 While there are many conversations about generative AI, those outside of the tech field may still misunderstand the underlying technology and how it's evolving.

01:03 Toni clarifies that ChatGPT is a web-based tool that gives access to GPT-3, the underlying machine learning model. GPT-3 is a word predictor. It's a form of deep learning with capabilities that are essentially a subset of what machine learning and AI can do.
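The "word predictor" idea can be illustrated in a few lines of Python: count which word tends to follow which in a toy corpus, then predict the most frequent successor. GPT-3 operates at a vastly larger scale with learned representations rather than raw counts, but the basic contract is the same: given context, produce a likely next token. (The toy corpus below is invented for illustration.)

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Count, for each word, what tends to come next.
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    # Return the most frequently observed successor.
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # cat
```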

01:37 Machine learning started with prediction and classification. "Most AI applications that give returns to companies are these classification or predictor models," Toni explains. The Netflix recommender algorithm is an example of this, as it uses data from movies and shows that you've liked in the past to recommend what to watch next.

02:12 GPT-3 is a transformer model. "There's a pretty big debate going on whether these transformer models are going to be the ones that reach what you might call AGI, or artificial general intelligence, that basically matches the intelligence level of a human," Toni says.

02:57 Sam Altman, CEO of OpenAI, pointed out a trend toward base-level models. The GPT series is already an indication that models will help train other models. "Think of it like a tech stack," says Toni.

Looking for real-world insights into artificial intelligence and hyperautomation? Subscribe to the AI and Hyperautomation channel:

See the rest here:
How the GPT Machine Learning Model Advances Generative AI - Acceleration Economy