
Machine-learning model for predicting oliguria in critically ill patients | Scientific Reports – Nature.com

Subjects

This retrospective cohort study used the electronic health record data of consecutive patients admitted to the ICU at Chiba University Hospital, Japan, from November 2010 to March 2019. The annual number of patients admitted to the 22-bed surgical/medical ICU ranged from 1,541 to 1,832. We excluded patients on maintenance dialysis and those without a documented body weight. This study was approved by the Ethical Review Board of Chiba University Graduate School of Medicine (approval number: 3380) in accordance with the Declaration of Helsinki. The Ethical Review Board of Chiba University Graduate School of Medicine waived the requirement for written informed consent in accordance with the Ethical Guidelines for Medical and Health Research Involving Human Subjects in Japan.

We defined oliguria as urine output of less than 0.5 mL/kg/h, according to the Kidney Disease: Improving Global Outcomes stage I criteria. Acute kidney injury (AKI) was diagnosed based on an increase in serum creatinine level of at least 0.3 mg/dL from the baseline, or oliguria [38].

Patient records from the ICU data system contained 1,031 input variables, including (A) physiological measurements acquired every minute (heart rate, blood pressure, respiratory rate, peripheral oxygen saturation, and body temperature), (B) blood tests (complete blood count, biochemistry, coagulation, and blood gas analysis), (C) name and dosage of medications, (D) type and amount of blood transfusion, (E) patient observation records, and (F) patient care records. The minute-by-minute time-series tables were aggregated into hourly time-series tables. In aggregating the tables, the median value was used for physiological measurements, and the blood test values were obtained from the most recent test. For patient excretion values, urine and stool volumes were calculated as one-hour sums. The following six calculated variables were added to the dataset: hourly intake, hourly output, hourly total balance, hourly urine volume (mL/kg), oliguria (urine volume of less than 0.5 mL/kg/h), and oliguria for six consecutive hours. A total of 222 background information variables, including age, sex, and admission diagnosis, were also added to the dataset. Consequently, the dataset contained 1,127 variables. We treated the missing values as a separate group or excluded them from the analysis. To remove potential collinearity, we performed a multicollinearity test and analyzed the data without the collinear variables.
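To make the aggregation concrete, here is a minimal pandas sketch of the rules described above (medians for physiological signals, one-hour sums for excretion, and the most recent laboratory value carried forward); the column names and toy values are illustrative assumptions, not the study's actual schema.

```python
# Hypothetical minute-level ICU table rolled up into an hourly table.
import numpy as np
import pandas as pd

minutes = pd.date_range("2019-01-01 00:00", periods=180, freq="min")
df = pd.DataFrame({
    "heart_rate": np.random.default_rng(0).normal(80, 5, minutes.size),
    "urine_ml": 0.6,  # toy per-minute urine volume
}, index=minutes)

hourly = pd.DataFrame({
    "heart_rate": df["heart_rate"].resample("h").median(),  # median per hour
    "urine_ml": df["urine_ml"].resample("h").sum(),         # one-hour sum
})

# Labs arrive sporadically; carry the most recent result forward to each hour.
labs = pd.Series([1.1], index=[pd.Timestamp("2019-01-01 00:30")], name="creatinine")
hourly["creatinine"] = labs.reindex(hourly.index.union(labs.index)).ffill().reindex(hourly.index)
print(hourly)
```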

The dataset was randomly divided: 80% for training and 20% for testing. We developed a sequential machine-learning model to predict oliguria at any given time during the ICU stay using hourly variables and baseline information (Fig. 1). For values that were not continuously obtained, we used the most recent ones for model development. The input variables were updated to encompass a 1-h window of the preceding values for the physiological measurements, blood tests, and medications. The primary and secondary outcome variables were oliguria at 6 and 72 h, respectively, after an arbitrary time point from ICU admission to discharge. Accordingly, we used variables recorded until 6 or 72 h before ICU discharge, corresponding to each outcome variable. The outcome variable was not incorporated as a predictor in the final model. After constructing the algorithm with the training data, the model predictions were validated using the test data. We validated the model performance with fivefold cross-validation. To ensure that the estimated model probabilities aligned with the actual probabilities of oliguria occurrence, we plotted the calibration curve of the model. The curve indicated that our model was well calibrated (Supplementary File 1: Fig. S4).

We selected four representative machine learning classifiers: LightGBM, category boosting (CatBoost), random forest, and extreme gradient boosting (XGBoost). Before developing the prediction model, we compared the computational performances and model accuracies of the four classifiers (Supplementary File 1: Table S2). To develop the machine learning algorithm, we used a cloud computer (Google Colaboratory, memory 25 GB) to evaluate the accuracy of the model. The AUC values based on the receiver operating characteristic curves, sensitivity, specificity, and F1 score were calculated. Among the machine learning classifiers, LightGBM showed the best computation speed and AUC and the second-best F1 score, with a marginal difference from XGBoost (XGBoost 0.899, LightGBM 0.896). Based on these results, we decided to use LightGBM for the analysis in this study. After developing a prediction model with all the variables, we reduced the number of variables for prediction by selecting clinically relevant variables (Supplementary File 1: Table S2). Subsequently, we compared the performances of the LightGBM model using the selected variables and all the variables. As a sensitivity analysis, we re-analyzed the data using a different computer environment, Amazon Web Services SageMaker. The computer settings included the following: image: Data Science 3.0, kernel: Python 3, and instance type: ml.t3.medium (memory 64 GB).
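As a rough, hedged illustration of that comparison (not the authors' code), the loop below trains LightGBM with fivefold cross-validation on a synthetic stand-in dataset and reports AUC and F1 on a held-out 20% split:

```python
# Sketch only: make_classification stands in for the study's ICU variables.
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import StratifiedKFold, train_test_split

X, y = make_classification(n_samples=5000, n_features=50, weights=[0.85], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

aucs = []  # fivefold cross-validation on the 80% training split
for tr, va in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X_tr, y_tr):
    clf = LGBMClassifier(n_estimators=300, learning_rate=0.05)
    clf.fit(X_tr[tr], y_tr[tr])
    aucs.append(roc_auc_score(y_tr[va], clf.predict_proba(X_tr[va])[:, 1]))
print(f"CV AUC: {np.mean(aucs):.3f}")

# Final fit, then held-out evaluation on the 20% test split
clf = LGBMClassifier(n_estimators=300, learning_rate=0.05).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]
print(f"test AUC: {roc_auc_score(y_te, proba):.3f}, F1: {f1_score(y_te, proba > 0.5):.3f}")
```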

To evaluate the important variables contributing to the prediction model, we used the SHAP (Shapley additive explanations) value. The SHAP value indicates the impact of each feature on the model output, offering high interpretability for machine learning models. We expressed the SHAP value as an absolute number with a positive or negative association between the variable and the outcome. SHAP individual force plots showed several features at scale, with a color bar indicating each feature's contribution to the onset of oliguria in individual instances, enhancing interpretability regarding the connection between traits and the occurrence of oliguria. For the subgroup analyses, we compared the accuracies of the models in predicting oliguria based on sex, age (≤65 or ≥66 years), and furosemide administration. To quantify the differences in the AUC plots of the two groups, the absolute values of the differences in the AUCs of each group from 6 to 72 h were summed and averaged to obtain the mean absolute error (MAE).
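A minimal sketch of that SHAP workflow, reusing the hypothetical clf and X_te from the previous example (the shap package's return types vary by version, so the per-class handling below is defensive):

```python
import numpy as np
import shap

explainer = shap.TreeExplainer(clf)  # tree-specific explainer, fast for LightGBM
sv = explainer.shap_values(X_te)     # per-feature contribution for each instance
if isinstance(sv, list):             # some versions return [class 0, class 1]
    sv = sv[1]                       # keep the positive (oliguria) class

# Global view: features ranked by mean |SHAP| across the test set
shap.summary_plot(sv, X_te)

# Individual force plot for a single instance, as described above
ev = explainer.expected_value
if isinstance(ev, (list, np.ndarray)):
    ev = np.ravel(ev)[-1]
shap.force_plot(ev, sv[0], X_te[0], matplotlib=True)
```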

Data were expressed as medians with interquartile ranges for continuous values and as absolute numbers and percentages for categorical values. A P value < 0.05 was considered statistically significant. The main Python packages used to create the machine learning algorithms were Python 3.10.11, pandas 1.5.3, numpy 1.22.4, matplotlib 3.7.1, scikit-learn 1.2.2, xgboost 1.7.2, lightgbm 2.2.3, catboost 1.1.1, and shap 0.41.0.


Machine learning and computer vision can boost urban renewal – Hello Future (Orange)

Monday 8th of January 2024


In the 2010s, the city of New York set an example for urban authorities when it used big data to optimise public services. Since then, progress in machine learning has led to further advances in the field of data analysis. A new computer vision project has notably demonstrated how Google Street View images can now be used to monitor urban decay.

In a ground-breaking project in the 2010s, the city of New York reorganized a wide range of public services to take into account the analysis of big data collected by local authorities. These included measures to prune the city's trees, and to investigate buildings with high levels of fire risk, properties managed by slumlords, and restaurants illegally dumping cooking oil into public sewers. Since then, progress in the field of machine learning has continued to extend the potential for data-driven public initiatives, and scientists are also investigating the use of new data sources on which they could be based, among them two researchers from the universities of Stanford (California) and Notre Dame (Indiana), who recently presented a new approach for the monitoring of urban decay in the journal Scientific Reports.


The algorithm developed by their project identifies eight visual features of urban decay in street-view images: potholes, barred or broken windows, dilapidated facades, tents, weeds, graffiti, garbage, and utility markings. Until now, the researchers note, "the measurement of urban change has largely centred on quantifying urban growth, primarily by examining land use, land cover dynamics and changes in urban infrastructure."

The idea of their project was not so much to show all that can be done with street-view images, but rather to test the use of a single algorithm trained on data from several cities, and if necessary to retrain it without modifying its underlying structure. At the same time, it should be noted that the data being used was not collected by public authorities, but from a new source: "Big data and machine learning are increasingly being used for public policies," points out Yong Suk Lee, an assistant professor at Notre Dame specializing in technology and urban economics. "Our proposed method is complementary to these approaches. Our paper highlights the potential to add street-view images to the increasing toolkit of urban data analytics."

As the researchers explain, the automated analysis of images can facilitate the evaluation of the scope of deterioration: "The measurement of urban decay is further complicated by the fact that on-the-ground measurements of urban environments are often expensive to collect, and can at times be more difficult, and even dangerous, to collect in deteriorating parts of the city."

The research project focused on images from three urban areas: the Tenderloin and Mission districts in San Francisco, Colonia Doctores and the historic centre of Mexico City, and the western part of South Bend, Indiana, an average-sized American town.

A single algorithm (YOLO) was trained twice, on two different corpora. The first of these was composed of manually collected pictures from the streets of San Francisco and images of graffiti captured in Athens (Greece) from the STORM corpus. This dataset also included Google Street View shots of San Francisco, Los Angeles and Oakland with homeless people's tents and tarps, and images of Mexico City. All of these were sourced from a multiyear period to measure ongoing change. Subsequently, the Mexican pictures were withdrawn to create a second training dataset.

"We initially worked with US data but decided to compare if adding data from Mexico City made a difference," explains Yong Suk Lee. "Not surprisingly, the larger consolidated data set was better. Also, we tried different model sizes (number of parameters) to see the trade-offs between speed and performance." For example, the algorithm was better able to detect potholes and broken windows in San Francisco when the training data included images from Mexico City.
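The article does not publish the training code, but the two runs could be sketched along these lines; the ultralytics package, the pretrained weights file, and the dataset YAML names are assumptions for illustration, not details from the study.

```python
# Illustration: retrain the same detector on two corpora without changing
# its underlying structure, then compare validation accuracy.
from ultralytics import YOLO

for corpus in ("us_cities_only.yaml", "us_plus_mexico_city.yaml"):  # hypothetical datasets
    model = YOLO("yolov8s.pt")  # one of several model sizes (speed/accuracy trade-off)
    model.train(data=corpus, epochs=50, imgsz=640)
    metrics = model.val()       # held-out precision/recall per class (potholes, tents, ...)
    print(corpus, metrics.box.map)  # mean average precision for comparison
```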

However, due to a lack of similar images in its training corpus, the algorithm significantly underperformed when tested on more suburban spaces in South Bend, although it was largely successful in following local changes signalled by dilapidated facades and weeds. The results showed that towns of this type require a specially adapted training corpus. "The features identifying decay could differ in other places. That is what we wanted to convey as well, by comparing different cities," points out the Notre Dame researcher. "We wanted to highlight the flexibility of the approach rather than propose a method with a fixed set of features." With its inherent flexibility and a vast amount of readily available source data in Google Street View, this new approach will likely feature in many more future research projects.


Plagiarism Detection Tools Offer a False Sense of Accuracy – The Markup

When Katherine Pickering Antonova became a history professor in 2008, she got access to the plagiarism detection software tools Turnitin and SafeAssign. At first blush, she thought the technology would be great. She had just finished a graduate program where she had manually graded papers as a teaching assistant, meticulously checking students' suspect phrases to see if any showed up elsewhere.

But her first use of the plagiarism checkers gave her a jolt. The software suggested the majority of her students had copied portions of their essays.

Soon she realized the lie in how the tools were described to her. "It's not tracking plagiarism at all," Pickering Antonova said. "It's just flagging matching text." Those two concepts have different standards; plagiarism is a subjective assessment of misconduct, but scholars may have matching words in their academic articles for a variety of legitimate reasons.

Plagiarism checkers are built into The City University of New York's learning management system, where faculty members post assignments and students submit them. As at many colleges throughout the country, scanning for plagiarism in submitted assignments is the default. But fed up with false flags and the countless hours required to check potentially plagiarized passages against the source material Turnitin and SafeAssign highlight, Pickering Antonova gave up on the tools entirely a couple of years ago.

"The bots are literally worse than useless," she said. "They do harm, and they don't find anything I couldn't find by myself."

Some experts agree that Claudine Gay, Harvard's ousted president and a widely respected political scientist, recently became the latest victim of this technology. She was forced to step down from the presidency after an accuser flagged nearly 50 examples from her writing that they called plagiarism. But many of the examples looked a lot like what Pickering Antonova considered a waste of her time when she was grading student work.

"The Voting Rights Act of 1965 is often cited as one of the most significant pieces of civil rights legislation passed in our nation's history," Gay wrote in one paper. Her accuser says she plagiarized David Canon's description of the landmark law, but as the Washington Free Beacon reported in publishing the allegations, Canon himself disagrees, arguing Gay had done nothing wrong.

The controversy over Gay's alleged plagiarism has roiled the academic community, and while much of the attention has been on the political maneuvering behind her ouster and the definition of plagiarism, some scholars have commented on the detection software that was likely behind it. The fact is, however, that students, not academics, bear the brunt of the tools' shoddy analyses. Turnitin is the industry leader in marshaling text analysis tools to assess academic integrity, boasting partnerships with more than 20,000 institutions globally and a repository of over 1.8 billion student paper submissions (and still counting).

The companies that are marketing plagiarism detection tools tend to acknowledge their limitations. While they may be referred to as plagiarism checkers, the products are described as highlighting text similarities or duplicate content. They scan billions of webpages and scholarly articles looking for those matches and surface them for a reviewer. Some, like Grammarly's, are marketed to writers and offer to help people add proper citations where they may have forgotten them. It isn't meant to police plagiarism, but rather help writers avoid it. Turnitin specifically says its Similarity Report does not check for plagiarism.

Still, the tools are frequently used to justify giving students zeroes on their assignments, and the students most likely to get such dismissive grading are those at less-selective institutions, where faculty are overstretched and underpaid.

For her part, Pickering Antonova came to feel guilty about putting students through the stress of seeing their Turnitin results.

"They see their paper is showing up 60 percent plagiarized, and they have a heart attack," she said.

Plagiarism does not carry a legal definition. Institutions create their own plagiarism policies, and academic fields have norms about how to credit and cite sources in scholarly text. Plagiarism checkers are not designed with such nuance. It is up to users to follow up their algorithmic output with good, human judgment.

Jo Guldi, a professor of quantitative methods at Emory University, recently published The Dangerous Art of Text Mining: A Methodology for Digital History and jumped into the Gay plagiarism controversy with a now-deleted post on X before Christmas. She pointed out that computers can search for five-word overlaps in text but argued that such repetition does not equal plagiarism: "the technology of text mining can be used to destroy the career of any scholar at any time," she wrote.

By phone, Guldi said that while she didn't cover plagiarism detection in her book, the parallel is clear. Her book traces bad conclusions reached because people fail to critically analyze the data. She, too, has used Turnitin in her classes and recognized the findings cannot be taken at face value.

"You look at them and you see you have to apply judgment," she said. "It's always a judgment call."

Many scholars, including those Gay is supposed to have plagiarized, have come to Gay's defense over the course of the last month, arguing the text similarities highlighted do not rise to the level of plagiarism.


Yet her accuser has identified nearly 50 examples of overlap, pairing her writing with that of other scholars and insisting there is a pattern of academic misconduct. The sheer number of examples, and the promise of more to come, helped seal Gay's fate. And some scholars worry anyone with enemies could be next.

Ian Bogost, a professor at Washington University in St. Louis, mulled in The Atlantic what a full-bore plagiarism war could look like, running his own dissertation through iThenticate, a checker run by the same company as Turnitin that is marketed to researchers, publishers, and scholars.

Bill Ackman, a billionaire Harvard megadonor, signaled his commitment to participating in such a war after Business Insider launched its own grenade, publishing an analysis last week that accused his wife, Neri Oxman, of plagiarizing parts of her dissertation. Oxman got her Ph.D. at MIT in 2010 before joining the faculty and then leaving to become an entrepreneur. Suspecting someone from MIT encouraged Business Insider to take a closer look at her dissertation, Ackman posted on X that he was going to begin "a review of the work of all current @MIT faculty members, President Kornbluth, other officers of the Corporation, and its board members" for plagiarism.

He later added, "Why would we stop at MIT? Don't we have to do a deep dive into academic integrity at Harvard as well? What about Yale, Princeton, Stanford, Penn, Dartmouth? You get the point."

It's unclear which tool Gay's accuser used to identify their examples, but experts agree the accusations seem to come from a text comparison algorithm. A Markup analysis of five of Gay's papers in the Grammarly and EasyBib plagiarism checkers did not turn up any of the plagiarism accusations that have surfaced in recent months. Grammarly's tool did flag instances of text overlap between Gay's writing and that of other scholars, sometimes because they were citing her paper, but sometimes because the two authors were simply describing similar things. Gay's 2017 political science paper "A Room for One's Own?" is the subject of more than half a dozen accusations of plagiarism that Grammarly didn't flag, but the tool did, for example, suggest her line "The estimated coefficients and standard errors from the" may have been plagiarized from an article about diabetes in Bali.

Analyzing the same paper, Turnitin ignored several of the lines included in complaints against her, but it did flag four from two academic papers. It also found other similarities, suggesting, for example, that the phrase "receive a 10-year stream of tax credits" warranted review.


David Smith, an associate professor of computer science at Northeastern University, has studied natural language processing and computational linguistics. He said plagiarism detection tools tend to start with what is called a null model. The algorithm is given very few assumptions and simply told to identify matching words across texts. To find examples in Gay's writing, he said, it basically took people looking through the really low-precision output of these models.
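For intuition, here is a toy version of such a null model in Python: it flags every five-word sequence two texts share, with none of the filtering a higher-precision system would add. The sample sentences are invented.

```python
import re

def shingles(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """All n-word sequences in a text, lowercased, punctuation stripped."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def flag_overlaps(doc_a: str, doc_b: str) -> list[str]:
    """Every shared 5-gram: matching text, which is not the same as plagiarism."""
    return [" ".join(s) for s in sorted(shingles(doc_a) & shingles(doc_b))]

a = "The Voting Rights Act of 1965 is often cited as landmark legislation."
b = "Scholars agree the Voting Rights Act of 1965 reshaped American elections."
print(flag_overlaps(a, b))  # two overlapping 5-grams from "the voting rights act of 1965"
```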


"Somebody could have trained a better model that had higher precision," Smith said. "That doesn't seem to be how it went in this case."

The result was a long list of plagiarism accusations most scholars found baffling.

Turnitin introduced its similarity check in 2000. Since then, plagiarism analyses have become the norm for editors of some academic journals as well as many college and university faculty members. Yet the tool is not universal. Many users, like Pickering Antonova, have decided the software isn't worth the time and doesn't align with their teaching goals. This has created two distinct classes of people: those who are subjected to plagiarism checkers and those who are not. For professional academics, Gay's case highlights the concern that anyone with a high profile who makes the wrong enemy could quickly become part of the former group.

For students, it's often just a matter of their school's norms. Plagiarism checkers can seem like a straightforward assessment of the originality of student work, reporting a percentage of the paper that may have been plagiarized. For faculty members who don't have the time to look at the dozens of false flags, it can be easy to rely on the total percentage and grade accordingly.

This behavior worries Smith, the computer scientist. "Getting a quantification makes it easier to just judge a lot of student papers at scale," he said. "That's not what's going on in the Claudine Gay case but is troubling about what's going on with students' subjection to these methods."

Tech companies have produced a steady stream of new tools for educators concerned with students' cheating, including AI detectors that followed the widespread adoption of ChatGPT. With each new tool comes a promise of scientific accuracy and cutting-edge analysis of unbiased data.

But as Claudine Gay's case demonstrates, and the threat of the plagiarism wars promises, plagiarism detection is far from precise.


Data Science Salon Seattle Spotlights Generative AI and Machine Learning in Retail and E-commerce – GlobeNewswire

SEATTLE, Jan. 11, 2024 (GLOBE NEWSWIRE) -- Data Science Salon (DSS), recognized as the most diverse data science and machine learning community in the U.S., is delighted to announce its upcoming Seattle event. Scheduled for January 24th, 2024, at the modern Block 41 venue, DSSSEA is designed to spark transformative and innovative conversations about the application of AI and Machine Learning in the retail and e-commerce sectors.

DSS Seattle is dedicated to unraveling the complexities and potential of generative AI and machine learning within retail and e-commerce. Industry professionals will gather to explore a range of pivotal topics in the field.

This one-day, 200-person conference features expert talks from leading data scientists at prominent companies such as Nordstrom, eBay, Amazon, Pinterest, and Google, along with ample opportunities for networking and collaborative discussion. All sessions will be recorded and made available on-demand within two hours post-event, ensuring that the insights and learnings are accessible to a wider audience beyond the day of the conference. Pre-recorded virtual sessions will also be available prior to the event to get our attendees ready for all DSSSEA has to offer.

"I am thrilled to be speaking about experimentation at the Data Science Salon in Seattle. I hope to learn about the latest trends and techniques in data science experimentation, and to share my own experiences and insights with fellow attendees. I am excited to connect with like-minded professionals and to further develop my skills in this fast-paced and rapidly evolving field," says Benjamin Skrainka, Data Science Manager at eBay and virtual speaker for DSSSEA.

We invite data science practitioners, retail strategists, and e-commerce specialists to join us at DSSSEA for a day of identifying new ways to use AI and ML in your field. Registration is now open.

For more information and to reserve your seat for the in-person or on-demand event, please visit https://www.datascience.salon/seattle/.

About Data Science Salon
Data Science Salon elevates the conversation in data science and machine learning by connecting industry experts and practitioners in a collaborative, community-focused environment. With a commitment to diversity and the advancement of the field, DSS is shaping the future of data-driven decision-making.

For media and sponsorship inquiries: Anna Anisin, phone: +1 305-215-4527, email: anna.a@formulatedby.com


Weekly AiThority Roundup: Biggest Machine Learning, Robotic And Automation Updates – AiThority

This is your AI Weekly Roundup. We are covering the top updates from around the world. The updates will feature state-of-the-art capabilities in artificial intelligence (AI), machine learning, robotic process automation, fintech, and human-system interactions. We cover the role of AI and its application in various industries and daily lives.

As the technology landscape evolves, Dell emerges in 2023 with a host of transformative developments, marking its continued impact on the world of computing and innovation. Dell, a stalwart in the tech industry, starts the year 2023 with a flurry of groundbreaking news stories, offering a glimpse into the company's strategic moves and technological advancements that are set to shape the future of computing.

Skylo, the global leader in non-terrestrial networks, announced that it will interconnect its NTN satellite network with FocusPoint's PULSE platform, enabling FocusPoint's IoT monitoring and emergency escalation service.

Ansys announced that Ansys AVxcelerate Sensors will be accessible within NVIDIA DRIVE Sim, a scenario-based AV simulator powered by NVIDIA Omniverse, a platform for developing Universal Scene Description (OpenUSD) applications for industrial digitalization.

Intel Corp and DigitalBridge Group, a global investment firm, announced the formation of Articul8 AI, Inc. (Articul8), an independent company offering enterprise customers a full-stack, vertically optimized, and secure generative artificial intelligence (GenAI) software platform.

Cerence Inc., AI for a world in motion, announced it is collaborating with Microsoft to deliver an evolved in-vehicle user experience that combines Cerence's extensive automotive technology portfolio and professional services with the innovative technology and intelligence of Microsoft Azure AI Services.


Machine Learning: The Future of Predicting Health Outcomes in Aging Canadians – Medriva

Healthcare as we know it is being transformed by artificial intelligence (AI) and machine learning. A research team from the University of Alberta is pioneering this transformation by using machine learning programs to predict the future mental and physical health of aging Canadians. The project, which utilizes data from the Canadian Longitudinal Study on Aging (CLSA), focuses on over 30,000 Canadians between the ages of 45 and 85.

The research team has developed a unique biological age index using machine learning models, which allows them to assess the health of individuals more accurately than ever before. This index is not just about chronological age. Instead, it provides a holistic view of an individual's health by considering various health-related, lifestyle, socio-economic, and other data. The biological age index gives a more accurate reflection of an individual's overall health status, providing critical insights for personalized care plans.

In addition to the biological age index, the team has also developed a program that can accurately predict the onset of depression within three years. Depression is a common but serious condition that can significantly impact the quality of life, especially for the aging population. Early detection and intervention are critical, and this machine learning model could potentially revolutionize mental health care by allowing for early, proactive interventions.

These machine learning models are not yet ready for real-world implementation. However, they signify a significant shift towards individualized care tailored to each patient's unique health profile. The ultimate aim is to contribute to healthy aging, benefiting not just Albertans but all Canadians. These models could potentially transform patient care by providing clinicians, patients, and people with lived experience with valuable insights into potential health outcomes.

This groundbreaking research is funded by various organizations, including the Canada Research Chairs program, Alberta Innovates, Mental Health Foundation, Mitacs Accelerate program, and others. The researchers plan to refine these models further, involving clinicians, patients, and individuals with lived experience in the process. The goal is to demonstrate the potential benefits of these models and pave the way for their eventual implementation in healthcare settings.

AI and machine learning have immense potential in the healthcare sector. The ability to process and interpret multi-modal data can lead to more personalized patient care. They can also save time for researchers analyzing clinical trial results. However, as with any transformative technology, there are challenges. For AI and machine learning to work effectively, the quality of data fed into these models needs to be high. There is also a need for technologies that help patients manage their health. In addition, the ethical and regulatory aspects of AI use in healthcare need careful consideration.

As the University of Alberta continues to lead in the intersection of machine learning, health, energy, and indigenous initiatives in health and humanities, the future of healthcare looks promising. The ability of machine learning to predict future health conditions in aging Canadians is just the beginning. As these models are refined and tested further, they could significantly contribute to the development of a healthier future for all.


New study: Countless AI experts don't know what to think on AI risk – Vox.com

In 2016, researchers at AI Impacts, a project that aims to improve understanding of advanced AI development, released a survey of machine learning researchers. They were asked when they expected the development of AI systems that are comparable to humans along many dimensions, as well as whether to expect good or bad results from such an achievement.

The headline finding: The median respondent gave a 5 percent chance of human-level AI leading to outcomes that were "extremely bad, e.g. human extinction." That means half of researchers gave a higher estimate than 5 percent, some of them considering it overwhelmingly likely that powerful AI would lead to human extinction, and half gave a lower one, some believing the chance was negligible.

If true, that would be unprecedented. In what other field do moderate, middle-of-the-road researchers claim that the development of a more powerful technology, one they are directly working on, has a 5 percent chance of ending human life on Earth forever?


In 2016, before ChatGPT and AlphaFold, the result seemed much likelier to be a fluke than anything else. But in the eight years since then, as AI systems have gone from nearly useless to inconveniently good at writing college-level essays, and as companies have poured billions of dollars into efforts to build a true superintelligent AI system, what once seemed like a far-fetched possibility now seems to be on the horizon.

So when AI Impacts released their follow-up survey this week, the headline result (that between 37.8% and 51.4% of respondents gave at least a 10% chance of advanced AI leading to outcomes as bad as human extinction) didn't strike me as a fluke or a surveying error. It's probably an accurate reflection of where the field is at.

Their results challenge many of the prevailing narratives about AI extinction risk. The researchers surveyed don't subdivide neatly into doomsaying pessimists and insistent optimists. Many respondents, the survey found, who assign high probabilities to bad outcomes also assign high probabilities to good outcomes. And human extinction does seem to be a possibility that the majority of researchers take seriously: 57.8 percent of respondents said they thought extremely bad outcomes such as human extinction were at least 5 percent likely.

This visually striking figure from the paper shows how respondents think about what to expect if high-level machine intelligence is developed: Most consider both extremely good outcomes and extremely bad outcomes probable.

As for what to do about it, the experts seem to disagree even more than they do about whether there's a problem in the first place.

The 2016 AI Impacts survey was immediately controversial. In 2016, barely anyone was talking about the risk of catastrophe from powerful AI. Could it really be that mainstream researchers rated it plausible? Had the researchers conducting the survey, who were themselves concerned about human extinction resulting from artificial intelligence, biased their results somehow?

The survey authors had systematically reached out to all researchers who published at the 2015 NIPS and ICML conferences (two of the premier venues for peer-reviewed research in machine learning), and managed to get responses from roughly a fifth of them. They asked a wide range of questions about progress in machine learning and got a wide range of answers: Really, aside from the eye-popping human extinction answers, the most notable result was how much ML experts disagreed with one another. (Which is hardly unusual in the sciences.)

But one could reasonably be skeptical. Maybe there were experts who simply hadn't thought very hard about their human extinction answer. And maybe the people who were most optimistic about AI hadn't bothered to answer the survey.

When AI Impacts reran the survey in 2022, again contacting thousands of researchers who published at top machine learning conferences, their results were about the same. The median probability of an "extremely bad, e.g., human extinction" outcome was 5 percent.

That median obscures some fierce disagreement. In fact, 48 percent of respondents gave at least a 10 percent chance of an extremely bad outcome, while 25 percent gave a 0 percent chance. Responding to criticism of the 2016 survey, the team asked for more detail: How likely did respondents think it was that AI would lead to "human extinction or similarly permanent and severe disempowerment of the human species"? Depending on how they asked the question, this got results between 5 percent and 10 percent.

In 2023, in order to reduce and measure the impact of framing effects (different answers based on how the question is phrased), many of the key questions on the survey were asked of different respondents with different framings. But again, the answers to the question about human extinction were broadly consistent in the 5-10 percent range no matter how the question was asked.

The fact that the 2022 and 2023 surveys found results so similar to the 2016 result makes it hard to believe that the 2016 result was a fluke. And while in 2016 critics could correctly complain that most ML researchers had not seriously considered the issue of existential risk, by 2023 the question of whether powerful AI systems will kill us all had gone mainstream. It's hard to imagine that many peer-reviewed machine learning researchers were answering a question they'd never considered before.

I think the most reasonable reading of this survey is that ML researchers, like the rest of us, are radically unsure about whether to expect the development of powerful AI systems to be an amazing thing for the world or a catastrophic one.

Nor do they agree on what to do about it. Responses varied enormously on questions about whether slowing down AI would make good outcomes for humanity more likely. While a large majority of respondents wanted more resources and attention to go into AI safety research, many of the same respondents didn't think that working on AI alignment was unusually valuable compared to working on other open problems in machine learning.

In a situation with lots of uncertainty, like the consequences of a technology such as superintelligent AI, which doesn't yet exist, there's a natural tendency to want to look to experts for answers. That's reasonable. But in a case like AI, it's important to keep in mind that even the most well-regarded machine learning researchers disagree with one another and are radically uncertain about where all of us are headed.

A version of this story originally appeared in the Future Perfect newsletter.



Unlocking the Potential of Acceleration Data in Disease Diagnosis – Medriva


Advancements in technology have paved the way for innovative approaches to disease diagnosis, particularly in the realm of gait-related diseases such as peripheral artery disease (PAD). Traditional methods for diagnosing cardiovascular diseases, such as PAD, have proven to be inadequate in identifying individuals at risk, often resulting in late-stage diagnoses. This has necessitated the development of more accurate, cost-effective, and convenient diagnostic tools.

A recent study introduces a promising framework for processing acceleration data collected from reflective markers and wearable accelerometers. This data is key to diagnosing diseases affecting gait, including PAD. The framework shows impressive accuracy in distinguishing PAD patients from non-PAD controls using raw marker data. Although accuracy is slightly reduced when using data from a wearable accelerometer, the results remain promising.

Machine learning models have been proposed to overcome the limitations of current diagnostic methods. However, these models often require significant time, resources, and expertise. The new framework addresses these challenges by utilizing existing data and wearable accelerometers to gather detailed gait parameters outside laboratory settings.

One of the key advantages of this approach is the potential for data availability and consistency. With wearable accelerometers, data can be collected in a variety of real-world settings, providing a more accurate picture of an individual's gait. This could lead to earlier detection and treatment of PAD, and potentially other gait-related diseases.

Further advancements in technology have led to the development of self-powered gait analysis systems (SGAS) based on a triboelectric nanogenerator (TENG). These systems comprise a sensing module, a charging module, a data acquisition and processing module, and an Internet of Things (IoT) platform. They use specialized sensing units positioned at the forefoot and heel to generate synchronized signals for real-time step count and step speed monitoring. The data is then wirelessly transmitted to an IoT platform for analysis, storage, and visualization, offering a comprehensive solution for motion monitoring and gait analysis.
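As a hedged illustration of the step-counting idea only (not the SGAS hardware pipeline or the study's method), simple peak detection on an acceleration-magnitude trace is often enough for a first pass:

```python
# Toy walking signal: ~1.8 steps per second on top of gravity, plus noise.
import numpy as np
from scipy.signal import find_peaks

fs = 100                              # assumed sample rate, Hz
t = np.arange(0, 10, 1 / fs)          # ten seconds of data
rng = np.random.default_rng(1)
acc = 9.81 + 2.0 * np.sin(2 * np.pi * 1.8 * t) + 0.3 * rng.standard_normal(t.size)

# Each step shows up as a peak; minimum height and spacing reject noise.
peaks, _ = find_peaks(acc, height=10.5, distance=int(0.4 * fs))
print(f"steps: {len(peaks)}, cadence: {len(peaks) / t[-1]:.2f} steps/s")
```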

Aside from gait analysis, recent studies have also explored the use of eye movement patterns to diagnose neurodegenerative disorders such as Alzheimer's disease, mild cognitive impairment, and Parkinson's disease. An algorithm has been developed to automatically identify these patterns, with significantly different saccade and pursuit characteristics observed in the patient groups compared to controls. This showcases the potential of non-invasive eye tracking devices to record eye motion and gaze location across different tasks, further contributing to early and accurate disease detection.

With the advent of smartwatch-smartphone technology, home-based monitoring of patients with gait-related diseases has become a realistic possibility. This technology can be used to process acceleration data, helping to diagnose diseases affecting gait. This approach offers a low-cost, convenient tool for diagnosing PAD and other gait-related diseases, marking a significant step forward in the field of disease diagnosis and management.

In conclusion, the use of acceleration data, machine learning, and wearable technology offers a promising pathway for the early detection and diagnosis of PAD and potentially other gait-related diseases. As we continue to push the boundaries of technology and harness the power of data, we can look forward to a new era of healthcare that is more proactive, personalized, and effective.


Machine Learning: The Key to Quantum Device Variability – Medriva


A breakthrough study led by the University of Oxford has managed to bridge the reality gap in quantum devices, a term referring to the inherent variability between the predicted and observed behavior of these devices. This was achieved through the innovative use of machine learning techniques. The study's findings provide a promising new approach to inferring internal disorder characteristics indirectly. The pioneering research could have significant implications for the scaling and combination of individual quantum devices. It could also guide the engineering of optimum materials for quantum devices.

The researchers at the University of Oxford used a physics-informed machine learning approach for their study. This method allowed the team to infer nanoscale imperfections in the materials that quantum devices are made from. These imperfections can cause functional variability in quantum devices and lead to a difference between predicted and actual behavior: the so-called reality gap. The research group was able to validate the algorithm's predictions about gate voltage values required for laterally defined quantum dot devices. This technique, therefore, holds significant potential for developing more complex quantum systems.

The study's findings could help engineers design better quantum devices. By being able to quantify the variability between quantum devices, engineers can make more accurate predictions of device performance. This could aid in the design and engineering of optimal materials for quantum devices. Applications range from climate modeling to drug discovery, making this a crucial development in the field.

The development in quantum device engineering comes at a time when the quantum computing market is experiencing exponential growth. According to a report by GlobalData's Thematic Intelligence, the quantum computing market was valued between $500 million and $1 billion in 2022, and it is projected to rise to $10 billion between 2026 and 2030. This represents a compound annual growth rate of between 30% and 50%. With increasing investment and market growth, the Oxford study's findings could have far-reaching implications for the future of quantum computing.

In conclusion, the study led by the University of Oxford marks a significant leap forward in quantum computing. By utilizing machine learning to bridge the reality gap in quantum devices, the researchers have provided a new method to infer nanoscale imperfections in materials and quantify the variability between quantum devices. This not only allows for more accurate predictions of device performance but also informs the engineering of optimum materials for quantum devices. With quantum computing predicted to grow significantly in the coming years, these findings could have a profound impact on the industry.

See the rest here:
Machine Learning: The Key to Quantum Device Variability - Medriva

How Machine Learning is Transforming the Financial Industry – Medium

The financial industry has always relied heavily on data to model risks, identify opportunities, and optimize decisions. Today, machine learning is taking financial data science to new levels: analyzing massive datasets, uncovering subtle patterns, and making powerful predictions about future outcomes. These AI-powered models are being woven into countless processes in banking, insurance, trading firms, and more.

In this article, we'll explore some of the most impactful applications of machine learning across the financial sector and why this technology represents a breakthrough in capabilities compared to traditional statistical methods. We'll also consider some promising directions this transformation might take in the years to come.

Banks lose billions each year to payment fraud despite their best efforts to stop it. The volume and variety of transactions make spotting criminals in the act like finding a needle in a haystack. Fortunately, machine learning algorithms have an uncanny knack for finding needles.

By analyzing past payment data like timestamps, locations, devices, and more, unsupervised learning models can define a normal pattern of legitimate behavior for each customer. When a new payment strays too far from that norm, the algorithms flag it for review. This enables banks to catch many more fraudulent payments while minimizing false alarms that frustrate legitimate customers.
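A minimal sketch of this idea, using scikit-learn's IsolationForest as the anomaly detector: fit a per-customer model on historical payment features, then flag new payments that stray from that customer's norm. The feature set, example values, and contamination rate are illustrative assumptions, not any bank's production system.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def fit_customer_model(history):
    """history: (n_payments, n_features) array for one customer, e.g. columns
    [amount, hour_of_day, distance_from_home_km, new_device_flag]."""
    return IsolationForest(contamination=0.01, random_state=0).fit(history)

def flag_payment(model, payment):
    """Return True if the payment should be routed for manual review."""
    return model.predict(payment.reshape(1, -1))[0] == -1  # -1 means anomaly

# Hypothetical history of small, daytime, near-home payments for one customer.
history = np.array([[25.0, 12, 2.0, 0], [40.0, 18, 5.0, 0], [12.5, 9, 1.0, 0],
                    [60.0, 20, 3.0, 0], [33.0, 13, 2.5, 0]])
model = fit_customer_model(history)

# A large 3 a.m. payment from a new device far from home strays from the norm.
print(flag_payment(model, np.array([900.0, 3, 4200.0, 1])))
```

Retraining the per-customer model on a rolling window is one simple way to get the adaptive behavior described next.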

What's most impressive is that these models continually monitor customers and adapt to their evolving behaviors over time, so banks can keep account security tight without compromising convenience for most payments. Unsupervised learning stops fraud in real time behind the scenes, without customers ever knowing.

Evaluating loan applications requires careful analysis of employment details, financial statements, credit reports, property values, and more to estimate risks and repayment capacity. This complex process is time-consuming, subjective, and inconsistent when done manually.

The rest is here:
How Machine Learning is Transforming the Financial Industry - Medium

DOD’s cutting-edge research in AI and ML to improve patient care – DefenseScoop

The Defense Department's responsibility to its active and veteran service members extends to their health and well-being. One organization driving innovation for patient care is the DOD's Uniformed Services University. And within the university is a center known as the Surgical Critical Care Initiative (SC2i), a consortium of federal and non-federal research institutions.

In a recent panel discussion with DefenseScoop, Dr. Seth Schobel, scientific director for SC2i, shared how cutting-edge research in artificial intelligence and machine learning improves patient care. Schobel elaborated on one specific tool, the WounDx Clinical Decision Support Tool, which predicts the best time for surgeons to close extremity wounds.

"[These wounds] are actually one of the most common combat casualty injuries experienced by our warfighters. We believe the use of these tools will allow military physicians to close most wounds faster, and it has the potential to save costs and avoid wound infections and other complications. We believe by using this tool we'll increase the success rate of military surgeons on closing these wounds at first attempt [improving rates] from 72% to 88% of the time," he explained.

Uniformed Services University's Chief Technology and Senior Information Security Officer, Sean Baker, joined Schobel on the panel to elaborate on how IT and medical research teams, by working together, can drive better health outcomes in patient care.

"Overall, our job is to provide cutting-edge tools into the hands of clinical experts, recognizing that risk management does not mean risk avoidance. Clinical care is not going to advance without taking some measure of digital risks," he explained.

Baker added, "We need to continue to empower our users across the healthcare space, across government, to use these emerging capabilities in a risk-informed way to take this into the next level of education, of research, of care delivery."

Schobel and Baker both underlined AI and ML's disruptive potential to improve patient care in the near future.

"We need to be ready for this [disruptor] by understanding how these tools are built and how they apply in different clinical settings. This will dramatically improve a data-driven and evidence-based healthcare system," Schobel explained. "By embracing these considerations, the public health sector, as well as the military, can harness the power of AI and ML to enhance patient care and improve health outcomes, and really be at the forefront of that transformation for the future of healthcare."

Google's Francisco Rubio-Bertrand, who manages federal healthcare client business, reacted to the panel interview, saying: "We believe that Google, by leveraging its vast resources and expertise, can be a driving force in advancing research and healthcare. Through access to our powerful cloud computing platforms and extensive datasets, we can significantly accelerate the development of AI/ML models specifically designed to address pressing needs in the healthcare sector."

Watch the full discussion to learn more about driving better patient care and health outcomes with artificial intelligence and machine learning.

This video panel discussion was produced by Scoop News Group for DefenseScoop, and underwritten by Google for Government.

See the rest here:
DOD's cutting-edge research in AI and ML to improve patient care - DefenseScoop

The Shaping of Material Science by AI and ML: A Journey Towards a Smarter, Greener Industrial Future – Medriva

The field of material science is experiencing a remarkable transformation thanks to the integration of Artificial Intelligence (AI) and Machine Learning (ML) technologies. These advancements are revolutionizing the process of material discovery and development, promising enhanced efficiency, innovation, and commitment to sustainability and environmental responsibility. The impact of this integration is far-reaching, touching industries from consumer packaged goods to automotive, oil and gas, and energy. For businesses to stay competitive in this rapidly evolving, environmentally conscious landscape, embracing these technologies is crucial; doing so represents a transformative journey towards a smarter, greener industrial future.

As highlighted by Forbes, the challenges in material development are being addressed through the use of ML, MLOps, and large language models (LLMs). These technologies enhance efficiency, innovation, and sustainability in material science, offering new prospects to various industries. Key factors for success in leveraging ML and LLMs in material science include foundational education in ML and LLMs, cross-collaboration between material scientists and data experts, a gradual approach through small-scale pilot projects, effective data management, and careful attention to AI ethics and data privacy.

According to a Springer article, advancements in high throughput data generation and physics-informed AI and ML algorithms are rapidly challenging the way materials data is collected, analyzed, and communicated. A novel architecture for managing materials data is being proposed to address the fact that current ecosystems are not well equipped to take advantage of potent computational and algorithmic tools.

The Materials Virtual Lab at UC San Diego has significantly increased the speed and efficiency of materials design by applying first principle calculations and machine learning techniques. These computational methods have transformed the process by streamlining calculations, increasing prediction velocities, and accelerating the discovery of new materials, reducing the time and cost required for data collection and analysis.

As per Arturo Robertazzi, machine learning is gradually integrating itself into the fabric of materials science, lowering barriers to future breakthroughs. Google DeepMind recently announced the discovery of 2.2 million new crystals using Graph Networks for Materials Exploration (GNoME), marking a significant advancement in structure selection and generation algorithms.

In a remarkable collaboration between Microsoft and Pacific Northwest National Laboratory (PNNL), AI and high-performance computing were used to discover a new material, N2116, which could reduce reliance on lithium in batteries by up to 70%. The fusion of AI and high-performance computing stands as a beacon of hope for finding sustainable solutions and reshaping industries.

Overall, the integration of AI and ML in material science marks a significant step in our journey towards a smarter, more sustainable future. These technologies are not just reshaping material science but also redefining our approach to environmental responsibility and sustainable development.

See the original post here:
The Shaping of Material Science by AI and ML: A Journey Towards a Smarter, Greener Industrial Future - Medriva

AI 101: Generative AI pioneering the future of digital creativity and automation – Proactive Investors USA

Artificial Intelligence (AI) has made significant strides in recent years, leading to the development of Generative AI, a subset of AI focused on creating new content.

This technology harnesses machine learning algorithms to generate text, images, audio, and other forms of media; it's not just about creating things that already exist, but also about inventing entirely new creations.

Generative AI operates by analysing vast amounts of data and learning patterns within it.

This enables the AI to produce new outputs that are similar in style, tone, or function to its input data.

For example, if it's fed a large number of paintings, it can generate new artworks; if given pieces of music, it can compose new melodies.

Two main types of models are commonly used in generative AI: generative adversarial networks (GANs) and variational autoencoders (VAEs).

GANs involve two parts: a generator that creates images and a discriminator that evaluates them.

The discriminator's feedback helps the generator improve its outputs.

VAEs, on the other hand, focus on encoding data into a compressed format and then reconstructing it, allowing the generation of new, similar data.
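A minimal GAN training loop in PyTorch makes the generator/discriminator interplay concrete. The network sizes and the stand-in "real" data below are toy assumptions for illustration; a real image GAN would use convolutional networks and an image dataset.

```python
import torch
import torch.nn as nn

# Generator maps 16-d noise to a 2-d "sample"; discriminator scores samples in [0, 1].
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

real = torch.randn(128, 2) * 0.5 + 2.0   # toy stand-in for the real data distribution

for step in range(1000):
    # 1) Train the discriminator: score real samples as 1, generated ones as 0.
    fake = G(torch.randn(128, 16)).detach()
    loss_d = bce(D(real), torch.ones(128, 1)) + bce(D(fake), torch.zeros(128, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator: its feedback is the discriminator's verdict,
    #    so it improves by trying to make D output 1 on generated samples.
    fake = G(torch.randn(128, 16))
    loss_g = bce(D(fake), torch.ones(128, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```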

ChatGPT is a prime example of the intersection between generative AI and large language models, showcasing the capabilities of modern AI in understanding and generating human language.

As a generative AI platform, ChatGPT is designed to generate text-based content in response to user prompts. It can produce a wide range of outputs, including answers to questions, essays, creative stories, code and even poetry.

Its ability to create content that wasn't pre-written but is generated in real-time in response to specific prompts is a defining characteristic of generative AI.

ChatGPT is built on OpenAI's Generative Pre-trained Transformer (GPT) architecture, a type of large language model (LLM).

LLMs are a specialised class of AI model that use natural language processing (NLP) to understand and generate humanlike text-based content in response to prompts.

Unlike generative AI models, which have broad applications across various creative fields, LLMs are specifically designed for handling language-related tasks.

Generative AI's potential is vast and varied. In the creative industries, it is revolutionising how music, art, and literature are created.

AI-generated art and music are already making waves, providing artists with new tools to express their creativity.

In business, Generative AI can be a game-changer for marketing and advertising, generating personalised content for targeted audiences.

For instance, AI can create varied versions of an advertisement tailored to different demographics, improving engagement rates.

Healthcare is another sector where generative AI is making an impact. It can assist in drug discovery by predicting molecular structures and their interactions, potentially speeding up the development of new medications.

Furthermore, in technology and engineering, generative AI assists in designing new products and solving complex problems. It can simulate multiple design scenarios, helping engineers optimise their creations.

The ability of AI to generate realistic content raises concerns about misinformation and the creation of deepfakes, which could be used for malicious purposes.

Ensuring the responsible use of this technology is paramount.

There is also the issue of intellectual property rights. When AI creates new content, who owns it? The programmer, the user, or the AI itself? These are questions that legal systems around the world are currently grappling with.

Moreover, there's the potential impact on jobs. While generative AI can automate repetitive tasks, potentially increasing efficiency and reducing costs, it also raises concerns about job displacement in certain sectors.

Looking to the future, it's clear that generative AI will continue to evolve and influence various facets of life and industry.

Its ability to analyse and synthesise information at unprecedented scales holds the promise of breakthroughs in numerous fields.

In conclusion, generative AI is not just a technological marvel; it's a catalyst for innovation across sectors.

Its potential for creative expression, problem-solving and personalisation is immense.

However, as we harness its power, it's crucial to address the ethical and societal implications to ensure its benefits are realised responsibly and equitably.

As we step into an era where the lines between human and machine creativity become increasingly blurred, generative AI stands at the forefront, redefining the boundaries of possibility.

Original post:
AI 101: Generative AI pioneering the future of digital creativity and automation - Proactive Investors USA

Vbrick Unveils Powerful AI Enhancements, Driving the Future of Video in the Enterprise – AiThority

Vbrick, the leading end-to-end enterprise video solutions provider, unveiled several new artificial intelligence (AI) capabilities within its video platform, now in general availability. Adding to its existing suite, Vbrick's new AI transforms content management at scale, automates tasks, improves accessibility, and simplifies processes across the enterprise.

In the fast-evolving landscape of digital communication, video has become an indispensable tool for businesses. However, with the exponential rise in video content, from expertly produced training videos and company townhalls to user-created how-to videos and meeting recordings, effectively navigating through vast libraries and ensuring easy access to the right content poses a significant challenge.

Vbrick's AI-powered enterprise video platform (EVP) transforms how organizations manage, share, and derive value from their video assets, enhancing accessibility, efficiency, and productivity for content contributors and viewers alike. Building on Vbrick's existing AI-powered transcription, translation, and user tagging features, the new AI capabilities include:

Video Assistant: Powered by generative AI, Video Assistant extracts key insights from video content using transcripts. Users can increase productivity by posing specific questions to the assistant and receiving real-time responses about the video content.

Summarization: Utilizing generative AI, Summarization allows video owners to automatically create video descriptions based on the video transcript. This not only saves time but also enhances search functionality, simplifies content discovery, and improves video metadata.

Content Intelligence: Leveraging AI and natural language processing, Content Intelligence reviews videos to surface actionable insights instantly. This feature allows moderation of video content for high-value or sensitive material, delivery of personalized video recommendations, and tracking and analysis of video content trends.

Smart Search: Revolutionizing search with intelligent algorithms that identify concepts, not just keywords, Smart Search leverages vectorized metadata and machine learning to deliver more precise results, quickly surfacing the most relevant content, interpreting the context and intent behind searches, and accommodating diverse search behaviors.
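Concept-based search of this kind can be sketched with off-the-shelf text embeddings: encode the video metadata as vectors once, encode the query at search time, and rank by similarity. The embedding model and example data below are assumptions for illustration, not Vbrick's actual implementation.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # maps text to dense vectors

videos = ["Quarterly town hall on hybrid work policy",
          "How to reset your VPN credentials",
          "Onboarding: benefits enrollment walkthrough"]
video_vecs = model.encode(videos, normalize_embeddings=True)  # index once

def search(query, k=2):
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = video_vecs @ q          # cosine similarity (vectors are normalized)
    return [videos[i] for i in np.argsort(-scores)[:k]]

# No keyword overlap with the titles, yet the VPN video ranks first by concept.
print(search("remote access login problems"))
```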

"The totality of an organization's video content is a treasure trove of unused value," said Paul Sparta, Vbrick Chairman and CEO. "Vbrick's EVP first federates video content; then our video AI distills the value from the video and makes it consumable and available to the appropriate business process, providing enterprises with the capability to address the rapidly accelerating growth of video in the modern day."

Vbrick caters specifically to enterprise organizations, some of which have amassed video libraries exceeding 500 terabytes, stored securely in Vbrick's intelligent cloud platform. With additional native video creation, eCDN distribution, live streaming, integration, and analytics capabilities, Vbrick's platform serves as the centralized, secure hub for all video activity within the enterprise.

"With video content aggregated in the Vbrick platform, organizations can truly begin to unlock the value of video by streamlining content discovery, automating tasks, and promoting global accessibility, all while providing an engaging experience for the entire enterprise," said Sparta.

Read more:
Vbrick Unveils Powerful AI Enhancements, Driving the Future of Video in the Enterprise - AiThority

The 3 Best Machine Learning Stocks to Buy in January 2024 – InvestorPlace

Machine learning is transforming sectors including healthcare and transportation, offering lucrative opportunities in the best machine learning stocks. However, investors should approach the space cautiously, as not all stocks in this sector ensure returns. Discernment is key, as many firms claiming advanced machine-learning capabilities lack solid business models or definitive applications.

Moreover, this sector branches into specialized niches, including data analysis and artificial intelligence (AI), with machine learning as a key driver. Some businesses have made remarkable strides in this space, demonstrating commendable growth and innovation; their work in machine learning is effectively reshaping the way we interact with technology. Statista projects that the machine-learning market will reach $204.30 billion by 2024.

Furthermore, machine learning stocks are gaining momentum, reflecting a growing fascination with AI. This expanding field holds substantial growth prospects, offering investors opportunities to back the innovators shaping our tech future. For those seeking the next breakthrough, machine learning stocks could be the key to forging the billionaires of tomorrow.

Amazon (NASDAQ:AMZN) has impressively evolved from a garage startup to the world's second-largest company by revenue. A significant part of its 2023 success was achieving the fastest delivery speeds ever, particularly boosting its appeal in the consumables and everyday essentials market.

Impressively, Amazon shows robust growth in its financial performance, notably in the third quarter, with EPS of 94 cents smashing the 60-cent forecast. The company's revenue soared 12.6% year over year (YOY) to $143.1 billion, beating expectations by $1.54 billion and showcasing its market strength and efficiency.

Furthermore, Amazon is boosting its Prime Video game, bringing in a pro from Walt Disney for its advertising push. Additionally, Amazon has been focused on developing a platform that appeals to businesses for machine learning purposes, creating a workflow pipeline to onboard companies of various sizes. This effort leverages AWS cloud technology to build AI models.

Nvidia (NASDAQ:NVDA) is pushing the frontiers of quantum computing with its cuQuantum project, revolutionizing qubit simulation.

Simultaneously, it's spicing up the AI realm with the Omniverse Cloud, enabling developers to master Isaac AMRs for sophisticated, AI-enhanced robotics. This fusion of high tech and utility delivers innovation with a snazzy edge.

In the third quarter, Nvidia's financials were impressive. Its non-GAAP earnings per share soared to $4.02, surpassing estimates by 63 cents. Revenue rocketed to $18.12 billion, up an astonishing 205.6% YOY. Also, data center revenue hit a new high of $14.51 billion, cementing Nvidia's strong standing in the tech sector.

Furthermore, unveiling the GeForce RTX 4090D GPU in China gave Nvidia's stock an additional boost. Analyst Vivek Arya, holding a confident $700 price target, forecasts the company will generate an impressive $100 billion in incremental free cash flow over 2024 and 2025. Nvidia is not just playing in the tech arena; it's setting new benchmarks, making it a standout choice for investors.

Advanced Micro Devices (NASDAQ:AMD), with a market capitalization of $244 billion, solidifies its prominent status in the semiconductor sector. Endorsed for 2024 by investment firm UBS alongside Micron Technology (NASDAQ:MU), AMD enjoys recognized market strength and growth prospects, signaling a promising future.

Financially, in the third quarter AMD's non-GAAP earnings per share reached 70 cents, exceeding estimates by 2 cents. Revenue rose to $5.8 billion, a 4.1% increase from last year, beating expectations by $110 million. Notably, client segment revenue, driven by robust Ryzen mobile processor sales, soared to $1.5 billion, up 42% YOY.

Moreover, AMD isn't just riding the wave; it's making its own with the MI300 chips, positioned as rivals to Nvidia's H100. This strategic move has attracted tech giants like Meta Platforms (NASDAQ:META) and Microsoft (NASDAQ:MSFT), who are lining up for AMD's innovative chips. In the high-stakes semiconductor game, AMD is not just playing; it's setting the pace.

On the date of publication, Muslim Farooque did not have (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Muslim Farooque is a keen investor and an optimist at heart. A lifelong gamer and tech enthusiast, he has a particular affinity for analyzing technology stocks. Muslim holds a Bachelor of Science degree in applied accounting from Oxford Brookes University.

More here:
The 3 Best Machine Learning Stocks to Buy in January 2024 - InvestorPlace

Unleashing the Power of AI: Discover the Mind-Blowing Potential of Machine Learning – Medium

1. Introduction: Exploring the World of AI and Machine Learning

Artificial Intelligence (AI) and Machine Learning (ML) have become buzzwords that permeate almost every aspect of our lives. From personalized recommendations on streaming platforms to voice assistants that make our homes smarter, AI and ML are revolutionizing how we interact with technology. In this article, we delve into the mind-blowing potential of AI and explore the endless possibilities that machine learning brings. Whether you're new to the world of AI or an enthusiast looking to gain a deeper understanding, join us on this journey to discover how AI is reshaping industries, the benefits it offers, the challenges it presents, and how you can tap into its power for a better future.

1.1 What is AI and Machine Learning? Artificial Intelligence (AI) and Machine Learning (ML) are not just fancy buzzwords; they're revolutionizing the way we live and work. In simple terms, AI refers to the ability of machines to mimic human intelligence and perform tasks that typically require human cognition. ML, on the other hand, is a subset of AI that focuses on enabling machines to learn from data and improve their performance over time.

1.2 The Evolution and Importance of AI AI has come a long way since its inception. From fictional characters like HAL 9000 to real-life applications like voice assistants and autonomous vehicles, AI has become an integral part of our daily lives. Its importance lies in its potential to solve complex problems, automate repetitive tasks, and make data-driven decisions faster than humans ever could.

And hey, if you want to stay up to date with the latest AI trends and news, don't forget to follow me on Twitter! I promise to keep you entertained and informed with my witty take on all things AI.

2. Understanding the Basics: What is Machine Learning?

2.1 Definition and Concept of Machine Learning Machine Learning is like having a personal tutor for computers. It's all about developing algorithms that allow machines to learn from data and make predictions or take actions without explicit programming. In essence, machine learning enables computers to recognize patterns, identify trends, and adapt to new information, just like we do as humans (minus the occasional coffee addiction).

2.2 Types of Machine Learning Algorithms Machine Learning algorithms come in various flavors, each with its own superpowers. We have supervised learning, where machines learn from labeled data to make predictions, and unsupervised learning, where they decipher patterns in unlabeled data to find hidden insights. And let's not forget about reinforcement learning, where machines learn through trial and error, like a determined puppy learning to fetch (and occasionally breaking a vase or two).

2.3 Supervised vs. Unsupervised Learning Supervised learning is like having a teacher guide you through your homework, while unsupervised learning is the joy of exploring new territories on your own. In supervised learning, the machine is given labeled examples to learn from, whereas in unsupervised learning it discovers patterns and relationships in the data by itself. It's like the difference between solving a math problem with a step-by-step guide versus figuring out a puzzle without instructions.
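The toy snippet below makes the contrast concrete: the supervised model learns from labels (the "teacher"), while the unsupervised model has to find the structure on its own. The data is a made-up two-cluster example.

```python
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

X = [[1, 1], [1, 2], [8, 8], [9, 8], [2, 1], [8, 9]]
y = [0, 0, 1, 1, 0, 1]                      # the labels act as the teacher

clf = LogisticRegression().fit(X, y)        # supervised: learns from labeled examples
print(clf.predict([[9, 9]]))                # -> [1]

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)  # unsupervised: no labels
print(km.labels_)                           # the two groups, found from structure alone
```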

3. Applications of AI in Various Industries: Real-Life Examples

3.1 AI in Healthcare In the healthcare industry, AI is saving lives and transforming patient care. From diagnosing diseases using medical imaging to developing personalized treatment plans, AI is helping doctors make more accurate decisions and improving patient outcomes. It's like having a brilliant medical assistant who never gets tired or forgets to wash their hands.

3.2 AI in Finance AI is also making waves in the finance industry. With its ability to analyze vast amounts of financial data in real time, AI-powered algorithms can detect fraud, predict market trends, and optimize investment strategies. It's like having a financial advisor who's always one step ahead and never pressures you into buying that expensive latte.

3.3 AI in Retail In the world of retail, AI is revolutionizing the customer experience. From personalized recommendations based on browsing history to cashier-less stores, AI is making shopping more convenient and tailored to individual preferences. It's like having a personal shopper who knows your style better than you do (but without the judgmental stares).

3.4 AI in Manufacturing Manufacturing is getting a major makeover thanks to AI. From predictive maintenance to optimizing supply chains, AI is streamlining processes, reducing costs, and improving overall efficiency. It's like having a production manager who can predict machine failures before they happen and always knows where to find that missing screw.

4. The Benefits and Challenges of Implementing AI Solutions

4.1 Advantages of AI in Business Processes Implementing AI solutions can bring a myriad of benefits to businesses. It can automate repetitive tasks, increase productivity, improve decision-making, and enhance customer experiences. It's like having a team of super-efficient employees who never complain about Monday mornings or steal your snacks from the office fridge.

4.2 Challenges and Limitations of AI Implementation As amazing as AI is, it's not without its challenges. Data quality and availability, algorithm biases, and ethical considerations are just a few hurdles that need to be overcome. It's like trying to teach a mischievous monkey to use proper table manners: it takes time and patience.

4.3 Overcoming Ethical and Privacy Concerns AI raises important ethical and privacy concerns that need to be addressed. We must ensure that AI systems are fair, transparent, and respect individual privacy rights. It's like teaching AI to follow the Golden Rule: treat others' data as you would like your data to be treated.

Remember, you don't want to miss out on the AI revolution. So, hit that follow button on Twitter and join me in exploring the mind-blowing potential of AI. Let's geek out together!

5. Future Trends: How AI is Evolving and What to Expect

When it comes to the future of AI, the possibilities are as endless as a buffet with no time limit. Here are some exciting trends that will make your jaw drop and your brain do somersaults:

5.1 Advancements in Deep Learning Deep learning is like the Olympics of AI, where machines compete to become the Michael Phelps of algorithms. We're talking about models that can learn from vast amounts of data and make mind-blowing predictions. From image recognition to natural language processing, deep learning is leveling up faster than Mario on a quest to rescue Princess Peach.

5.2 AI-powered Automation and Robotics AI isn't just about machines taking over the world like a sci-fi movie plot. It's also about making our lives easier and more efficient. With AI-powered automation and robotics, we can delegate repetitive tasks to smart machines, giving us humans more time to binge-watch our favorite shows on Netflix. It's like having a personal assistant that never needs bathroom breaks.

5.3 Impact of AI on the Job Market Now, before you start panicking about robots stealing your job, let's take a deep breath. Yes, AI will change the job market, but it's not all doom and gloom. While some jobs may become obsolete, new opportunities will emerge. It's like a game of musical chairs, where everyone gets a shot at finding a new seat. So, sharpen your skills, stay curious, and embrace the AI wave with open arms (but not too open, we still need hugs).

6. Ethical Considerations: Addressing Concerns and Ensuring Responsible AI Use

AI is like a shiny new toy that can bring immense joy, but we shouldn't forget about the potential pitfalls. Here are some ethical considerations to keep AI on the right path:

6.1 Privacy and Data Security As AI gets smarter, the amount of data it needs to consume grows like a teenager's appetite during a growth spurt. This raises concerns about privacy and data security. We need to ensure that the information we feed AI is protected and used responsibly. Nobody wants their secrets leaking out faster than a dropped ice cream cone on a summer day.

6.2 Bias and Fairness in AI Algorithms AI is only as unbiased as the humans who create it. If we're not careful, AI algorithms can amplify existing biases and perpetuate discrimination. We need to make sure our algorithms treat everyone fairly, regardless of race, gender, or whether they like pineapple on pizza (we won't judge, promise).

6.3 Transparency and Accountability AI can sometimes feel like a black box, leaving us wondering how it came up with certain decisions. To build trust, we need transparency and accountability. We need to know how AI works and have mechanisms in place to challenge its decisions when they don't make sense. It's like having a magician explain their tricks, but without the disappointment of discovering that rabbits don't really disappear.

7. Getting Started: Practical Steps for Harnessing the Power of AI

Ready to dive into the AI pool? Here are some practical steps to make your journey smoother than a baby's bottom (figuratively, of course):

7.1 Identifying Opportunities for AI Integration Look around your business or personal life and identify tasks that could benefit from a touch of AI magic. Whether it's automating repetitive processes or analyzing mountains of data, there's an AI solution for almost everything. Think of it as finding the perfect tool to fix that leaky faucet or shave that stubborn unibrow.

7.2 Data Collection and Preparation AI runs on data, like a car needs fuel (or a coffee addict needs caffeine). Collect the right data, clean it up, and make it all shiny and presentable for AI to work its magic. It's like organizing your wardrobe before a big night out: you want to make sure you look your best and find the perfect outfit in a flash.

7.3 Selecting and Implementing AI With so many AI tools and technologies out there, it's easy to get overwhelmed. Take your time, do your research, and find the AI solution that aligns with your needs and goals. Implementing AI is like adopting a pet: it requires commitment, care, and a willingness to clean up the occasional mess (both literal and metaphorical).

Remember, AI is not a one-size-fits-all solution, but with a little know-how and a lot of enthusiasm, you'll be riding the AI wave like a pro in no time. Now, go forth and unleash the power of AI, but don't forget to follow me on Twitter for more AI-related awesomeness. I promise it won't disappoint (or at least, let's hope not).

In conclusion, the power of AI and machine learning is truly awe-inspiring. As technology continues to advance, we can expect to witness even more mind-blowing applications and advancements in this field. However, it is crucial to approach AI with responsibility and ethical considerations, ensuring that it is used for the betterment of society. By embracing the potential of AI and staying informed about its evolving trends, we can harness its power to create a future that is truly transformative. So, let's embark on this exciting journey together and unlock the boundless possibilities that AI and machine learning have to offer.

FAQ

1. What is the difference between AI and Machine Learning? AI refers to the broader concept of machines exhibiting human-like intelligence, while Machine Learning is a subset of AI that focuses on algorithms enabling machines to learn and make predictions based on data.

2. How is AI being used in different industries? AI is being utilized in various industries such as healthcare, finance, retail, and manufacturing. In healthcare, AI is helping with diagnosis and treatment planning, while in finance, AI is being used for fraud detection and algorithmic trading. Retail businesses are leveraging AI for personalized recommendations, and manufacturing industries are implementing AI for predictive maintenance and process optimization.

3. What are the ethical concerns surrounding AI? Ethical concerns in AI include issues related to privacy and data security, biases in algorithms, and the potential impact on the job market. It is crucial to address these concerns and ensure that AI is developed and implemented responsibly, with transparency, fairness, and accountability in mind.

4. How can businesses harness the power of AI? To harness the power of AI, businesses can start by identifying opportunities for AI integration within their processes and operations. Collecting and preparing relevant data, selecting appropriate AI algorithms, and partnering with experts in the field can help businesses effectively implement and leverage AI solutions for improved efficiency, decision-making, and customer experiences.

The rest is here:
Unleashing the Power of AI: Discover the Mind-Blowing Potential of Machine Learning - Medium

Minimizing the Reality Gap in Quantum Devices with Machine Learning – AZoQuantum

A major obstacle facing quantum devices has been solved by a University of Oxford study that leveraged machine learning capabilities. The results show how to bridge the reality gap, or the discrepancy between expected and observed behavior from quantum devices, for the first time. Physical Review X has published the findings.

Numerous applications, such as drug development, artificial intelligence, financial forecasting, and climate modeling, might be significantly improved by quantum computing. However, this will necessitate efficient methods for combining and scaling separate quantum bits (also known as qubits). Inherent variability, which occurs when even seemingly similar units display distinct behaviors, is a significant obstacle to this.

Nanoscale flaws in the materials used to create quantum devices are assumed to be the source of functional variability. Because this internal disorder cannot be measured directly, it cannot be represented in simulations, which accounts for the discrepancy between expected and observed results.

The study team addressed this by indirectly inferring these disorder characteristics through the use of a physics-informed machine learning technique, based on how the device's internal disorder affected the flow of electrons.

"As an analogy, when we play crazy golf the ball may enter a tunnel and exit with a speed or direction that doesn't match our predictions. But with a few more shots, a crazy golf simulator, and some machine learning, we might get better at predicting the ball's movements and narrow the reality gap."

Natalia Ares, Study Lead Researcher and Associate Professor, Department of Engineering Science, University of Oxford

One quantum dot device was used as a test subject, and the researchers recorded the output current across it at various voltage settings. A simulation was run using the data to determine the difference between the measured current and the theoretical current in the absence of internal disorder.

By monitoring the current at numerous distinct voltage settings, the researchers forced the simulation to discover an internal disorder arrangement that could account for the measurements at all voltage levels. The method combined deep learning with statistical and mathematical techniques.
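To make the idea concrete, here is a deliberately simplified toy version of that inference loop: parameterize a hypothetical disorder profile, simulate the current it would produce, and adjust the parameters until the simulation matches the measurements at every voltage. The transport model below is a made-up stand-in, not the Oxford group's simulator, which combined deep learning with statistical techniques.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
voltages = np.linspace(0.0, 1.0, 40)
true_disorder = np.array([0.3, -0.2, 0.15])   # hidden profile we pretend not to know

def simulate_current(disorder, v):
    """Stand-in transport model: current suppressed by a disorder-dependent barrier."""
    barrier = disorder[0] + disorder[1] * v + disorder[2] * np.sin(6 * v)
    return np.exp(-np.maximum(barrier, 0.0)) * v

# "Measured" current: the true device plus measurement noise.
measured = simulate_current(true_disorder, voltages) + rng.normal(0, 1e-3, voltages.size)

def residual(disorder):
    # Mismatch between simulation and measurement across *all* voltage settings;
    # requiring one profile to explain every data point is what pins it down.
    return simulate_current(disorder, voltages) - measured

fit = least_squares(residual, x0=np.zeros(3))
print(fit.x)   # recovered parameters, close to true_disorder
```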

Ares added, "In the crazy golf analogy, it would be equivalent to placing a series of sensors along the tunnel, so that we could take measurements of the ball's speed at different points. Although we still can't see inside the tunnel, we can use the data to inform better predictions of how the ball will behave when we take the shot."

The novel model not only identified appropriate internal disorder profiles to explain the observed current levels, but it also demonstrated the ability to precisely forecast the voltage settings necessary for particular device operating regimes.

Most importantly, the model offers a fresh way to measure the differences in variability between quantum devices. This could make it possible to predict device performance more precisely and aid in the development of ideal materials for quantum devices. It could guide compensatory strategies to lessen the undesirable consequences of material flaws in quantum devices.

"Similar to how we cannot observe black holes directly but we infer their presence from their effect on surrounding matter, we have used simple measurements as a proxy for the internal variability of nanoscale quantum devices. Although the real device still has greater complexity than the model can capture, our study has demonstrated the utility of using physics-aware machine learning to narrow the reality gap."

David Craig, Study Co-Author and PhD Student, Department of Materials, University of Oxford

Craig, D. L. et al. (2023). Bridging the Reality Gap in Quantum Devices with Physics-Aware Machine Learning. Physical Review X. doi:10.1103/PhysRevX.14.011001

Source: https://www.ox.ac.uk/

See the rest here:
Minimizing the Reality Gap in Quantum Devices with Machine Learning - AZoQuantum

Machine Learning in Business: 5 things a Data Science course won’t teach you – Towards Data Science

The author shares some important aspects of Applied Machine Learning that can be overlooked in formal Data Science education.

If you feel that I used a clickbaity title for this article, I'd agree with you, but hear me out! I have managed multiple junior data scientists over the years, and for the last few years I have been teaching an applied Data Science course to Master's and PhD students. Most of them have great technical skills, but when it comes to applying Machine Learning to real-world business problems, I realized there were some gaps.

Below are the 5 elements that I wish data scientists were more aware of in a business context:

I'm hoping that reading this will help junior and mid-level data scientists grow their careers!

In this piece, I will focus on a scenario where data scientists are tasked with deploying machine learning models to predict customer behavior. It's worth noting that the insights can apply to scenarios involving product or sensor behaviors as well.

Let's start with the most critical element of all: the "what" that you are trying to predict. All subsequent steps (data cleaning, preprocessing, algorithm selection, feature engineering, hyperparameter optimization) become futile unless you are focusing on the right target.

In order to be actionable, the target must represent a behavior, not a data point.

Ideally, your model aligns with a business use case, where actions or decisions will be based on its output. By making sure your target is a good representation of a customer behavior, you make it easy for the business to understand and utilize these models' outputs.
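For example, a behavioral churn target can be derived from raw purchase events rather than taken from a single data point. The column names and cutoff below are hypothetical, chosen only to illustrate the pattern.

```python
import pandas as pd

# Hypothetical raw purchase events (one row per purchase).
events = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 3],
    "purchase_at": pd.to_datetime(["2023-11-02", "2024-02-10", "2023-12-20",
                                   "2023-10-05", "2023-12-28"]),
})

cutoff = pd.Timestamp("2024-01-01")
window_end = cutoff + pd.Timedelta(days=90)

# Behavior-based target: "no purchase in the 90 days after the cutoff" = churned.
active = set(events.loc[(events.purchase_at > cutoff) &
                        (events.purchase_at <= window_end), "customer_id"])
target = (pd.Series(events.customer_id.unique(), name="customer_id")
          .to_frame()
          .assign(churned_90d=lambda d: (~d.customer_id.isin(active)).astype(int)))
print(target)   # customer 1 purchased in the window; customers 2 and 3 churned
```

A target defined this way maps directly to an action (e.g., a retention offer), which is what makes the model's output usable by the business.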

Read more from the original source:
Machine Learning in Business: 5 things a Data Science course won't teach you - Towards Data Science

Machine Learning for Predicting Oliguria in Intensive Care Units | Healthcare News – Medriva

Intensive care units (ICUs) are critical environments that deal with high-risk patients, where early detection of complications can significantly improve patient outcomes. Oliguria, a condition characterized by low urine output, is a common concern in ICUs and often signals acute kidney injury (AKI). Early prediction of oliguria can lead to timely intervention and better management of patients. Recent studies have shown that machine learning, a branch of artificial intelligence, can be effectively used to predict the onset of oliguria in ICU patients.

A retrospective cohort study aimed to develop and evaluate a machine learning algorithm for predicting oliguria in ICU patients. The study used electronic health record data from 9,241 patients admitted to the ICU between 2010 and 2019. The machine learning model demonstrated high accuracy in predicting the onset of oliguria at 6 hours and 72 hours with Area Under the Curve (AUC) values of 0.964 and 0.916, respectively. This suggests that the machine learning model can be a valuable tool for early identification of patients at risk of developing oliguria, enabling prompt intervention and optimal management of AKI.

The machine learning model identified several important variables for predicting oliguria. These included urine values, severity scores (SOFA score), serum creatinine, oxygen partial pressure, fibrinogen, fibrin degradation products, interleukin 6, and peripheral temperature. By taking into account these variables, the model was able to provide accurate predictions. The use of machine learning also allows for the continuous update and improvement of the model as more data becomes available, increasing its predictive accuracy over time.
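As an illustrative sketch of the modeling setup the study describes (not the authors' code), one could train a classifier on those predictors and evaluate it by AUC. The feature names follow the article; the model choice and the synthetic stand-in data are assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

features = ["urine_output", "sofa_score", "serum_creatinine", "pao2",
            "fibrinogen", "fdp", "il6", "peripheral_temp"]

rng = np.random.default_rng(0)                 # synthetic stand-in for EHR data
X = rng.normal(size=(2000, len(features)))
y = (X[:, 0] * -1.5 + X[:, 1] + rng.normal(size=2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)

# AUC measures ranking quality: 1.0 is perfect discrimination, 0.5 is chance.
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```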

Interestingly, the model's accuracy varied based on several factors, including sex, age, and furosemide administration. This highlights the complex nature of predicting oliguria and the need for personalized, patient-specific models. It also underlines the potential of machine learning to adapt and learn from varying patient characteristics, providing more precise and individualized predictions.

The utilization of machine learning is not limited to predicting oliguria. Another study aimed to develop a machine learning model for early prediction of adverse events and treatment effectiveness in patients with hyperkalemia, a condition characterized by high levels of potassium in the blood. This study, too, achieved promising results, underscoring the potential of machine learning to revolutionize various aspects of patient care in the ICU setting.

The use of machine learning models in healthcare, and particularly in intensive care units, is a promising avenue for improving patient outcomes. By predicting the onset of conditions like oliguria, these models can provide critical early warnings that allow healthcare providers to intervene promptly. However, it's crucial to remember that these models are tools to assist clinicians, not replace their judgment. As research continues and more data becomes available, these models are expected to become even more accurate and valuable in the future.

View post:
Machine Learning for Predicting Oliguria in Intensive Care Units | Healthcare News - Medriva