The 2034 Millionaire’s Club: 3 Machine Learning Stocks to Buy Now – InvestorPlace

Machine learning stocks are gaining traction as the interest in artificial intelligence (AI) and machine learning soars, especially after the launch of ChatGPT by OpenAI. This technology has given us a glimpse of its potential, sparking curiosity about its future applications, and has led to my list of machine learning stocks to buy.

Machine learning stocks present a promising opportunity for growth, with the potential to create significant wealth. Based on analyst forecasts, I think these companies could go parabolic and reach their full growth potential around a decade from now.

These companies leverage machine learning for various applications, including diagnosing life-threatening diseases, preventing credit card fraud, developing chatbots and exploring advanced tech like artificial general intelligence. The future will only get better from here.

So if you're looking for machine learning stocks to buy with substantial upside potential, keep reading to discover three top picks.

Source: Lori Butcher / Shutterstock.com

DraftKings (NASDAQ:DKNG) leverages machine learning to enhance its online sports betting and gambling platform. The company has shown significant growth, with recent revenue increases and expansion in legalized betting markets.

DKNG has significantly revised its revenue outlook for 2024 upwards, expecting it to be between $4.65 billion and $4.9 billion, marking an anticipated year-over-year growth of 27% to 34%. This adjustment reflects higher projections compared to their earlier forecast ranging from $4.50 billion to $4.80 billion. Additionally, the company has increased its adjusted EBITDA forecast for 2024, now ranging from $410 million to $510 million, up from the previous estimate of $350 million to $450 million.

DraftKings has also announced plans to acquire the gambling company Jackpocket for $750 million in a cash-and-stock deal. This acquisition is expected to further enhance DraftKings' market presence and capabilities in online betting.

I covered DKNG before, and I still think it's one of the best meme stocks that investors can get behind. The company's stock price has risen 72.64% over the past year, and it seems there's still plenty of fuel left in the tank to surge higher.

Source: Sundry Photography / Shutterstock

Cloudflare (NYSE:NET) provides a cloud platform that offers a range of network services to businesses worldwide. The company uses machine learning to enhance its cybersecurity solutions.

Cloudflare has outlined a robust strategy for 2024, focusing on advancing its cybersecurity solutions and expanding its network services. The company expects to generate total revenue between $1.648 billion and $1.652 billion for the year. This revenue forecast reflects a significant increase in their operational scale.

NET is another stock that is leveraging machine learning to its full advantage. I've been bullish on this company for some time and continue to be so. Notably, Cloudflare is expanding its deployment of inference-tuned graphics processing units (GPUs) across its global network. By the end of 2024, these GPUs will be deployed in nearly every city within Cloudflare's network.

NET has been quietly integrating many parts of its network within the internet's fabric for millions of users, such as through its DNS service; Cloudflare WARP; reverse proxy for website owners; and much more. Around 30% of the 10,000 most popular websites globally use Cloudflare. Many of NET's services can be accessed free of charge.

It is following a classic tech stock strategy of prioritizing growth in users, influence and reach over immediate profits, and its financials have steadily scaled with this performance.

Source: VDB Photos / Shutterstock.com

CrowdStrike (NASDAQ:CRWD) is a leading cybersecurity company that uses machine learning to detect and prevent cyber threats.

In its latest quarterly report on Mar. 5, CRWD reported a 102% earnings growth to 95 cents per share and a 33% revenue increase to $845.3 million. Analysts expect a 57% earnings growth to 89 cents per share in the next report and a 27% EPS increase for the full fiscal year ending in January.

Adding to the bull case for CRWD is that it has partnered with Google Cloud by Alphabet (NASDAQ:GOOG, GOOGL) to enhance AI-native cybersecurity solutions, positioning itself strongly against competitors like Palo Alto Networks (NASDAQ:PANW).

Many contributors here at InvestorPlace have identified CRWD as one of the best cybersecurity stocks for investors to buy, and I am in agreement. Its aggressive EPS growth and stock price appreciation (140.04% over the past year) make it a very attractive pick for long-term investors.

On the date of publication, Matthew Farley did not have (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Matthew started writing coverage of the financial markets during the crypto boom of 2017 and was also a team member of several fintech startups. He then started writing about Australian and U.S. equities for various publications. His work has appeared in MarketBeat, FXStreet, Cryptoslate, Seeking Alpha, and the New Scientist magazine, among others.

Read the original here:
The 2034 Millionaire's Club: 3 Machine Learning Stocks to Buy Now - InvestorPlace

Tricorder Tech: A.I. / Machine Learning: Adaptive Sampling With PIXL On Mars Perseverance – Astrobiology – Astrobiology News

Model of the Mars Perseverance rover showing the location of PIXL at the end of the robotic arm. NASA

Editor's note: From the paper's Introduction: At the time of this writing, PIXL's adaptive sampling capability has been operating on Mars for over 951 sols (martian days), with over 52 scans analyzed. To our knowledge, it represents the first case of autonomous decision-making through compositional analysis performed by a spacecraft on another planet. And from the Conclusion: We have successfully demonstrated new adaptive sampling technology with PIXL on the Mars Perseverance rover. To our knowledge, this has enabled the first autonomous decision-making based on real-time compositional analysis by an exploration spacecraft. Almost all the rules were implemented through machine learning, trained with compositional analysis of PIXL Mars data taken earlier in the mission.

As we send our droids to other worlds to study local geology and search for biosignatures, we'll need to equip them with as much autonomy as we can provide: autonomy that we can upgrade based on mission experience, as well as autonomy that can be improved by the droid itself based on its own experience. When we join our robotic partners on site, we'll need to adopt the same approach with the tools that we bring with us. We'll need the latest analytical tools in our Base Camp laboratory and, out in the field on Away Team sorties, embedded inside our tricorders and other sensors that we bring along.

These tools will need to be forward/backward compatible so as to be upgradeable. Even with fast communication back to Earth, given the combination of varying message lags and inevitable bandwidth constraints, an emphasis will be placed on in situ capabilities, as will creativity on the part of both humans and robots.

If we are going to expend the large amount of financial and national resources to send these missions, eventually with humans, we need to equip them as best we can with tools that learn just like human crews do. Our droids are leading the way, programmed by smart humans back home. Here is one example already at work in the field: on Mars.

Planetary rovers can use onboard data analysis to adapt their measurement plan on the fly, improving the science value of data collected between commands from Earth.

This paper describes the implementation of an adaptive sampling algorithm used by PIXL, the X-ray fluorescence spectrometer of the Mars 2020 Perseverance rover.

PIXL is deployed using the rover arm to measure X-ray spectra of rocks with a scan density of several thousand points over an area of typically 5 x 7 mm. The adaptive sampling algorithm is programmed to recognize points of interest and to increase the signal-to-noise ratio at those locations by performing longer integrations.

Preparations for the PIXL scan on sol 294 (Quartier). Most PIXL scans (about 60%) have been conducted on an abraded patch that exposes the sub-surface of the rock (an additional 32% are on natural surfaces and 8% on regolith). In this example an abraded patch of 50-mm diameter and approximately 7-mm depth is produced using the Sampling and Caching Subsystem (Moeller et al., 2021). This patch has been cleared of dust to roughly 40-mm diameter using the Gas Dust Removal Tool. The PIXL scan on sol 294 is 7 x 7 mm (placement not shown), about one seventh the diameter of the abraded patch, and has 3299 points spaced 0.125 mm apart.

PIXLISE view of carbonates detected on sol 879 (Gabletop Mountain). Shown here is a compositional map of carbonates derived from PIXLISE expressions. The expressions are equations that can use counts from PIXL's two detectors to estimate weight percents of compositions of interest while taking into account the effects of diffraction. The grey-colored points are where divide-by-zero errors have occurred and should be ignored.

5 x 7 mm PIXL scan of target Lake Haiyaha (sol 851) displaying elemental abundances of Cr2O3 in green. The scan contains only 3 Cr-rich grains (chromites) comprising 9 PMCs total (0.4% of the map scan [9 PMCs/2346 PMCs]). The chromite-bearing PMCs are defined here as PMCs with >2 wt% Cr2O3.

Two approaches are used to formulate the sampling rules based on past quantification data: 1) Expressions that isolate particular regions within a ternary compositional diagram, and 2) Machine learning rules that threshold for a high weight percent of particular compounds.
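As a rough illustration of the second approach, a sampling rule that thresholds on a high weight percent of a compound can be sketched as follows. This is an illustrative sketch, not flight code; the function name and data layout are hypothetical, and the 2 wt% Cr2O3 cutoff mirrors the chromite definition quoted above.

```python
# Illustrative sketch: an adaptive-sampling rule that flags scan points for a
# longer X-ray integration (higher signal-to-noise) when the estimated weight
# percent of a compound of interest exceeds a threshold.

def select_points_for_long_integration(points, compound="Cr2O3", threshold_wt_pct=2.0):
    """Return indices of scan points whose estimated composition
    warrants a longer integration."""
    selected = []
    for i, estimates in enumerate(points):
        # `estimates` maps compound names to estimated weight percents,
        # e.g. quantified from an earlier short integration at that point.
        if estimates.get(compound, 0.0) > threshold_wt_pct:
            selected.append(i)
    return selected

scan = [
    {"Cr2O3": 0.1, "FeO": 12.0},
    {"Cr2O3": 3.4, "FeO": 8.0},   # chromite-bearing point
    {"Cr2O3": 0.2, "FeO": 10.5},
]
print(select_points_for_long_integration(scan))  # [1]
```

The real system combines many such rules, most of them learned from quantified PIXL data taken earlier in the mission.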

The design of the rulesets is outlined, and the performance of the algorithm is quantified using measurements from the surface of Mars.

To our knowledge, PIXL's adaptive sampling represents the first autonomous decision-making based on real-time compositional analysis by a spacecraft on the surface of another planet.

Peter R. Lawson (1), Tanya V. Kizovski (2), Michael M. Tice (3), Benton C. Clark III (4), Scott J. VanBommel (5), David R. Thompson (6), Lawrence A. Wade (6), Robert W. Denise (1), Christopher M. Heirwegh (6), W. Timothy Elam (7), Mariek E. Schmidt (2), Yang Liu (6), Abigail C. Allwood (6), Martin S. Gilbert (6), Benjamin J. Bornstein (6) ((1) Retired, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, USA (2) Brock University, St. Catharines, ON, Canada (3) Texas A&M University, College Station, TX, USA (4) Space Sciences Institute, Boulder, CO, USA (5) Washington University in St. Louis, St. Louis, MO, USA (6) Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, USA (7) University of Washington, Seattle, WA, USA)

Comments: 24 pages including 11 figures and 7 tables. Submitted for publication to the journal Icarus. Subjects: Earth and Planetary Astrophysics (astro-ph.EP); Instrumentation and Methods for Astrophysics (astro-ph.IM). Cite as: arXiv:2405.14471 [astro-ph.EP] (or arXiv:2405.14471v1 [astro-ph.EP] for this version), https://doi.org/10.48550/arXiv.2405.14471. Submission history: From Peter Lawson, [v1] Thu, 23 May 2024 11:57:02 UTC (17,776 KB). https://arxiv.org/abs/2405.14471

Read more:
Tricorder Tech: A.I. / Machine Learning: Adaptive Sampling With PIXL On Mars Perseverance - Astrobiology - Astrobiology News

AI Startup Says California AI Bill Will Hamper Innovation – BroadbandBreakfast.com


The bill increases regulatory requirements for machine learning systems in California.

May 24, 2024: In a Tuesday press release, Haltia AI, an artificial intelligence startup based in Dubai, warned leaders in machine learning that California's new AI bill will cripple innovation with overly burdensome regulations.

Haltia said that the bill throws a wrench into the growth of AI startups with its unrealistic requirements and stifling compliance costs.

The legislation, titled the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was introduced in February and passed the California State Senate on Tuesday. The act mandates that developers of AI tools comply with various safety requirements and report any safety concerns.

AI systems are defined by the act as machine-based systems that can make predictions, recommendations, decisions, and formulate options. Safety tests include ensuring that an AI model does not have the capability to enable harms, such as creation of chemical and biological weapons or cyberattacks on critical infrastructure. Third party testers will be required to determine the safety of these systems.

Haltia said that on the surface, the act aims for responsible AI development. However, its implementation creates a labyrinth of red tape that disproportionately impacts startups. Because the bill requires ongoing annual reviews, Haltia argues that it adds significant technical and financial burdens.

Arto Bendiken, co-founder and CTO at Haltia, said that the act is a prime example of how well-intentioned regulations can morph into a bureaucratic nightmare. He added that the financial penalties for non-compliance only exacerbate the issue, potentially deterring groundbreaking ideas before they even take flight.

Haltia called for other AI startups to follow its lead and move operations to the United Arab Emirates where its thriving ecosystem, coupled with its commitment to the future of AI, makes it the ideal launchpad for the next generation of groundbreaking AI technologies in the Silicon Valley of the East.

In 2023, California Governor Gavin Newsom signed an executive order announcing new directives aimed at understanding the risks of machine learning technologies, ensuring equitable outcomes from their use, and preparing the state's workforce for them.

View post:
AI Startup Says California AI Bill Will Hamper Innovation - BroadbandBreakfast.com

Airbnb using machine learning technology to prevent parties – KYW

PHILADELPHIA (KYW Newsradio) With the help of machine learning technology, Airbnb says it will be cracking down on parties this summer.

"It's really important that those spaces are respected and treated with care, and that, you know, people are not showing up and taking advantage of that," said Airbnb's Global Director of Corporate and Policy Communications Christopher Nulty.

The best part about staying in an Airbnb is often that you're staying in a neighborhood, and the only way to continue staying in a neighborhood is to be a good neighbor.

Nulty says the company will be using the technology to prevent any disruptive parties, paying close attention to bookings on Memorial Day, Fourth of July and Labor Day. It looks at how long guests are staying, past rental ratings, distance from home, and the number of guests.

So far, it has resulted in a 50% reduction in unauthorized parties. In 2023, more than 67,000 people across the U.S., including 950 in Philadelphia, were deterred from booking entire home listings over those weekends.

Those who are flagged, but aren't actually planning on throwing a party, can call Airbnb's customer service line.

Go here to see the original:
Airbnb using machine learning technology to prevent parties - KYW

Collaborative artificial intelligence system for investigation of healthcare claims compliance | Scientific Reports – Nature.com

Formal definition of rule and similarity metrics

A rule is a logical representation of a section of text in a policy document. We formally define a rule \(R=\bigwedge_{i=1}^{n} C_i\) as the conjunction of the set of conditions \(\mathcal{C}(R)=\{C_1, C_2, \dots, C_n\}\), where each condition \(C_i\) is defined by a property in our ontology, for example hasMinAge or hasExcludedService. The ontology also specifies restrictions on properties such as cardinality or expected data types. We refer to the set of values of condition \(C_i\) in rule \(R\) as \(\mathcal{V}(C_i, R)=\{v_1, v_2, \dots, v_m\}\); multiple values are interpreted as a disjunction. If \(\mathcal{V}(C_i, R)\) contains only one numeric value (for example hasMinAge(12)), then we refer to \(C_i\) as a numeric condition.
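This definition maps naturally onto a small data model. Below is a minimal sketch (not the authors' code): a rule is a dict from condition names to sets of values, with the sets interpreted as disjunctions; the ontology property names follow the text, while the service codes are illustrative.

```python
# A rule R as a conjunction of conditions; each condition C_i carries a set
# of values V(C_i, R) interpreted as a disjunction.
rule = {
    "hasMinAge": {12},                         # numeric condition (one value)
    "hasExcludedService": {"D0120", "D0150"},  # disjunction of (illustrative) codes
}

def conditions(rule):
    """The set of conditions C(R) of a rule."""
    return set(rule.keys())

def values(cond, rule):
    """The set of values V(C_i, R) of a condition in a rule."""
    return rule.get(cond, set())

def is_numeric(cond, rule):
    """A condition is numeric if it holds exactly one numeric value."""
    vals = values(cond, rule)
    return len(vals) == 1 and all(isinstance(v, (int, float)) for v in vals)

print(sorted(conditions(rule)))               # ['hasExcludedService', 'hasMinAge']
print(is_numeric("hasMinAge", rule))          # True
print(is_numeric("hasExcludedService", rule)) # False
```

The similarity metrics that follow operate on exactly these two ingredients: the condition sets and the per-condition value sets.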

Given two rules \(R_x\) and \(R_y\), and a condition \(C_i\), we define the condition similarity metric as:

$$\mathcal{S}_{C}\left(C_{i},R_{x},R_{y}\right)=\begin{cases}\dfrac{1}{1+\left|v_{i,x}-v_{i,y}\right|} & \text{if } C_{i}\in\mathcal{C}\left(R_{x}\right) \text{ and } C_{i}\in\mathcal{C}\left(R_{y}\right) \text{ and } C_{i} \text{ is numeric} \\[2pt] & \quad\text{and } \mathcal{V}\left(C_{i},R_{x}\right)=\left\{v_{i,x}\right\} \text{ and } \mathcal{V}\left(C_{i},R_{y}\right)=\left\{v_{i,y}\right\} \\[6pt] \dfrac{\left|\mathcal{V}\left(C_{i},R_{x}\right)\cap\mathcal{V}\left(C_{i},R_{y}\right)\right|}{\left|\mathcal{V}\left(C_{i},R_{x}\right)\cup\mathcal{V}\left(C_{i},R_{y}\right)\right|} & \text{if } C_{i}\in\mathcal{C}\left(R_{x}\right) \text{ and } C_{i}\in\mathcal{C}\left(R_{y}\right) \text{ and } C_{i} \text{ is not numeric} \\[6pt] 0 & \text{otherwise}\end{cases}$$

If \(C_i\) is a numeric condition, and is present in both \(R_x\) and \(R_y\), then the condition similarity \(\mathcal{S}_C(C_i,R_x,R_y)\) is inversely proportional to the distance between \(v_{i,x}\) (the numeric value of \(C_i\) in \(R_x\)) and \(v_{i,y}\) (the numeric value of \(C_i\) in \(R_y\)). If instead \(C_i\) is present in both \(R_x\) and \(R_y\), but it is not numeric, then the condition similarity is the Jaccard similarity48 between the set of values of \(C_i\) in \(R_x\) and the set of values of \(C_i\) in \(R_y\). Finally, when \(C_i\) is missing in either \(R_x\) or \(R_y\), the condition similarity is equal to 0.
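Using the same dict-of-sets rule representation as above (an illustrative sketch, not the authors' implementation), the three-case condition similarity can be written directly:

```python
def condition_similarity(cond, rx, ry):
    """S_C(C_i, R_x, R_y): rules are dicts mapping condition names
    to sets of values (a minimal sketch of the published definition)."""
    if cond not in rx or cond not in ry:
        return 0.0                      # condition missing in either rule
    vx, vy = rx[cond], ry[cond]
    numeric = (len(vx) == 1 and len(vy) == 1
               and all(isinstance(v, (int, float)) for v in vx | vy))
    if numeric:
        (x,), (y,) = tuple(vx), tuple(vy)
        return 1.0 / (1.0 + abs(x - y))  # inverse distance between values
    return len(vx & vy) / len(vx | vy)   # Jaccard similarity of value sets

rx = {"hasMinAge": {12}, "hasExcludedService": {"D0120", "D0150"}}
ry = {"hasMinAge": {14}, "hasExcludedService": {"D0150"}}
print(condition_similarity("hasMinAge", rx, ry))           # 1/(1+2) = 0.333...
print(condition_similarity("hasExcludedService", rx, ry))  # 1/2 = 0.5
print(condition_similarity("hasMaxUnits", rx, ry))         # 0.0 (missing)
```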

Given two rules \(R_x\) and \(R_y\), we also define the rule structure similarity as the Jaccard similarity between the set of conditions in \(R_x\) and the set of conditions in \(R_y\):

$$\mathcal{S}_{S}\left(R_{x},R_{y}\right)=\frac{\left|\mathcal{C}\left(R_{x}\right)\cap\mathcal{C}\left(R_{y}\right)\right|}{\left|\mathcal{C}\left(R_{x}\right)\cup\mathcal{C}\left(R_{y}\right)\right|}$$

When \(R_x\) is a ground-truth rule and \(R_y\) is automatically extracted from the same paragraph of text corresponding to \(R_x\), we use a slightly modified version of the condition similarity and structure similarity, where we replace the Jaccard similarity between the sets of values and the sets of conditions with the Sørensen-Dice coefficient36,37 between the same sets. More precisely:

$$\mathcal{S}_{C}\left(C_{i},R_{x},R_{y}\right)=\begin{cases}\dfrac{1}{1+\left|v_{i,x}-v_{i,y}\right|} & \text{if } C_{i}\in\mathcal{C}\left(R_{x}\right) \text{ and } C_{i}\in\mathcal{C}\left(R_{y}\right) \text{ and } C_{i} \text{ is numeric} \\[2pt] & \quad\text{and } \mathcal{V}\left(C_{i},R_{x}\right)=\left\{v_{i,x}\right\} \text{ and } \mathcal{V}\left(C_{i},R_{y}\right)=\left\{v_{i,y}\right\} \\[6pt] \dfrac{2\left|\mathcal{V}\left(C_{i},R_{x}\right)\cap\mathcal{V}\left(C_{i},R_{y}\right)\right|}{\left|\mathcal{V}\left(C_{i},R_{x}\right)\right|+\left|\mathcal{V}\left(C_{i},R_{y}\right)\right|} & \text{if } C_{i}\in\mathcal{C}\left(R_{x}\right) \text{ and } C_{i}\in\mathcal{C}\left(R_{y}\right) \text{ and } C_{i} \text{ is not numeric} \\[6pt] 0 & \text{otherwise}\end{cases}$$

$$\mathcal{S}_{S}\left(R_{x},R_{y}\right)=\frac{2\left|\mathcal{C}\left(R_{x}\right)\cap\mathcal{C}\left(R_{y}\right)\right|}{\left|\mathcal{C}\left(R_{x}\right)\right|+\left|\mathcal{C}\left(R_{y}\right)\right|}$$

The Sørensen-Dice coefficient gives a better measure of the accuracy of the system when extracting rules from text. When \(R_x\) is a ground-truth rule and \(R_y\) is the corresponding automatically extracted rule, we consider \(\mathcal{C}(R_x)\) and \(\mathcal{C}(R_y)\) as, respectively, the sets of expected and predicted conditions in the rule, and \(\mathcal{V}(C_i,R_x)\) and \(\mathcal{V}(C_i,R_y)\) as, respectively, the sets of expected and predicted values for condition \(C_i\) in the rule. In this scenario the Sørensen-Dice coefficient is equivalent to the F1-score, and it measures the harmonic mean of the precision and recall of Clais when extracting \(R_y\).
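The claimed equivalence between the Sørensen-Dice coefficient and the F1-score over expected/predicted sets is easy to verify numerically; a small sketch (illustrative condition names, not the authors' code):

```python
def dice(expected, predicted):
    """Sørensen-Dice coefficient between two sets."""
    if not expected and not predicted:
        return 1.0
    return 2 * len(expected & predicted) / (len(expected) + len(predicted))

def f1(expected, predicted):
    """F1-score treating `expected` as ground truth, `predicted` as output."""
    tp = len(expected & predicted)     # true positives
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(expected)
    return 2 * precision * recall / (precision + recall)

expected = {"hasMinAge", "hasMaxUnits", "hasApplicableService"}
predicted = {"hasMinAge", "hasApplicableService", "hasUnitOfTime"}
print(dice(expected, predicted))  # 2*2/(3+3) = 0.666...
assert abs(dice(expected, predicted) - f1(expected, predicted)) < 1e-12
```

Both scores reward overlap while penalizing spurious and missing conditions symmetrically, which is why Dice is the natural choice for scoring extraction accuracy here.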

Finally, we define the overall rule similarity \(\mathcal{S}_R(R_x,R_y)\) and the text similarity \(\mathcal{S}_T(R_x,R_y)\) between rules \(R_x\) and \(R_y\) as follows:

$$\mathcal{S}_{R}\left(R_{x},R_{y}\right)=\frac{\mathcal{S}_{S}\left(R_{x},R_{y}\right)+\sum_{C_{i}\in\mathcal{C}\left(R_{x}\right)\cup\mathcal{C}\left(R_{y}\right)}\mathcal{S}_{C}\left(C_{i},R_{x},R_{y}\right)}{1+\left|\mathcal{C}\left(R_{x}\right)\cup\mathcal{C}\left(R_{y}\right)\right|}$$

$$\mathcal{S}_{T}\left(R_{x},R_{y}\right)=1-\frac{\arccos\left(\dfrac{u_{x}\cdot u_{y}}{\lVert u_{x}\rVert\,\lVert u_{y}\rVert}\right)}{\pi}$$

The overall rule similarity is the arithmetic mean of the structure similarity \(\mathcal{S}_S(R_x,R_y)\) and the condition similarities \(\mathcal{S}_C(C_i,R_x,R_y)\) for all conditions \(C_i\) in \(R_x\) or \(R_y\). The text similarity \(\mathcal{S}_T(R_x,R_y)\) is the angular similarity between the embedding vectors \(u_x\) and \(u_y\) encoding the sections of text corresponding to \(R_x\) and \(R_y\), respectively. We use Sentence-BERT (SBERT)49 with the state-of-the-art model all-mpnet-base-v250 to compute the embedding vectors; similar to other work51, we convert the cosine similarity between the embedding vectors into an angular distance in the range [0, 1] using arccos and normalizing by \(\pi\).
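The cosine-to-angular conversion in \(\mathcal{S}_T\) is a few lines of standard math; a self-contained sketch (the embedding step with SBERT is omitted, so the vectors below are toy stand-ins):

```python
import math

def angular_similarity(u, v):
    """S_T as defined above: cosine similarity converted to an angular
    similarity in [0, 1] via arccos, normalized by pi."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    cos = max(-1.0, min(1.0, dot / (norm_u * norm_v)))  # clamp rounding error
    return 1.0 - math.acos(cos) / math.pi

print(angular_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0 (identical direction)
print(angular_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.5 (orthogonal vectors)
```

Angular similarity is preferred over raw cosine here because it is a proper bounded distance-derived score in [0, 1], which makes thresholds easier to interpret.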

Clais uses knowledge graphs to represent rules. A knowledge graph is a directed labelled graph (example in Fig. 1d), and we encode its structure and semantics using RDF52 triples (subject, predicate, object). Our ontology32 (excerpt in Fig. 1b), designed in collaboration with expert policy investigators25, formally defines the meaning of the subjects, predicates and objects used in the rule knowledge graphs. The ontology also specifies restrictions on the predicates (for example, expected domain and range, disjointness or cardinality constraints), which guide our system in building semantically valid RDF triples and meaningful rules. Additionally, the ontology defines rule types, which consist of concept-relationship templates capturing repeatable linguistic patterns in policy documents. The current version of the ontology specifies three rule types: (1) limitations on services, such as units of service or reimbursable monetary amounts that a provider can report for a single beneficiary over a given period; (2) mutually exclusive procedures that cannot be billed together for the same patient over a period; and (3) services not covered by a policy under certain conditions.
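To make the triple encoding concrete, here is a hedged sketch of how a unit-limitation rule might look as (subject, predicate, object) triples. The prefixes and vocabulary are illustrative, not the exact terms of the authors' ontology:

```python
# RDF-style triples encoding a hypothetical unit-limitation rule.
triples = [
    ("rule:R1", "rdf:type", "onto:UnitLimitationRule"),
    ("rule:R1", "onto:hasMaxUnits", "1"),
    ("rule:R1", "onto:hasApplicableService", "code:D0120"),
    ("rule:R1", "onto:hasAmountOfTime", "6"),
    ("rule:R1", "onto:hasUnitOfTime", "month"),
]

def objects(subject, predicate, triples):
    """Query the graph for all objects matching (subject, predicate, ?)."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects("rule:R1", "onto:hasMaxUnits", triples))  # ['1']
print(objects("rule:R1", "onto:hasUnitOfTime", triples))  # ['month']
```

Because every value hangs off the rule node through a labelled predicate, the graph can be rendered as the tree described next, with condition values as leaves.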

The ontology design ensures that every rule knowledge graph can be modelled as a tree (an undirected graph in which any two vertices are connected by exactly one path), where the leaves are values of the conditions in the rule. The tree representation enables Clais to visualize the rule's conditions and their values in an intuitive user interface, which simplifies editing and validation of the rule. The same user interface also supports the interactive creation of new rules: professionals compose a rule by selecting items from a library of conditions based on the properties defined in the ontology; the system asks for condition values and checks their validity in accordance with the restrictions defined for the corresponding ontology predicate (for example domain, range, cardinality).

We build upon recent natural language processing (NLP) techniques24,25 to automatically identify dependencies between relevant entities and relations described in a fragment of policy text, and to assemble them into a rule. Clais uses a configurable NLP extraction pipeline, where each component can be replaced or complemented by others with similar functionalities. The configuration can be customized either manually or using hyper-parameter optimization53,54 to tune the overall performance of the extraction pipeline for a given policy, domain or geographic region (specifically, we use the Optuna55 hyperparameter optimization framework). Clais' NLP extraction pipeline (Fig. 6), which does not require labelled data, consists of the following steps: (1) data preparation according to the policy domain and geography (state/region); (2) automatic annotation of policy text fragments to identify mentions of domain entities and relations in the text; (3) building of rule knowledge graphs corresponding to policy text segments using their domain entities and relations in accordance with the ontology definitions; (4) knowledge graph consolidation and filtering to produce a set of well-formed rules (necessary when different components/approaches are used to build the knowledge graphs, or when rules extend across multiple sentences in the policy text).

Clais' NLP extraction pipeline, which transforms a policy PDF document and related domain-specific tabular data into a knowledge graph describing the policy rules. The four stages of the NLP extraction pipeline rely on the general domain ontology.

We automated the data preparation step with tools that use configurable mappings to translate any existing tabular data (used to define domain-specific values for a target policy in a geographical region) into ontology instance values of a specific entity type. The tabular data sources contain both the surface forms (meaningful textual labels, synonyms, etc.) necessary to identify mentions in the policy text, and their mapping to ontological resources. The system uses the surface forms for entity and relation annotation in the cold-start scenario (lack of labelled data); the surface forms are usually part of existing tabular data, such as the ones describing codes for eligible places of service56 (hospital, clinic, etc.) or the Healthcare Common Procedure Coding System (HCPCS)57 published by the Centers for Medicare and Medicaid Services.

We use state-of-the-art PDF tools to extract the text and the structure of a policy document. The annotation of text relies on ontology-based annotators58 using dictionary-based lemma matches. In addition, we initialize the annotators with entity labels such as UMLS types59, which are useful to fill in values for some ontology properties (depending on their range definition).

The core of the NLP extraction pipeline transforms textual patterns between ontological annotations into sets of RDF triples, which constitute a partial rule knowledge graph; it uses two domain-agnostic approaches: semantic role labeling25 and semantic reasoning24. Semantic role labeling is based on a declarative information extraction system58 that can identify actions and roles in a sentence (who is the agent, what is the theme or context of the action, whether there are any conditionals, the polarity of the action, temporal information, etc.). We use the ontology definitions for the domain and range of properties to reason over the semantic roles, and to identify semantically compatible relation and entity/value pairs in a sentence. Semantic reasoning transforms syntactic dependencies from a dependency tree60,61 into RDF triples. Dependency trees capture fine-grained and relevant distant (not necessarily consecutive in the text) syntactic connections by following the shortest path between previously annotated ontological entities. For each linguistically connected predicate-arguments tuple in the sentence, the system searches the ontology definitions for non-ambiguous paths that could connect the tuple elements in a semantically correct way; the search is based on parametrized templates24.
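The "shortest path between previously annotated entities" step can be sketched with a breadth-first search over a dependency tree treated as an undirected adjacency list. The token indices and sentence below are toy values, not from the authors' pipeline:

```python
from collections import deque

def shortest_path(adj, start, goal):
    """Breadth-first search returning the node path from start to goal
    in an undirected graph given as an adjacency list, or None."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:      # walk predecessors back to start
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in adj.get(node, ()):
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    return None

# Toy dependency tree for "evaluation(2) is payable(4) once(5)
# in a six-month(8) period(9)", keyed by token index.
adj = {2: [4], 4: [2, 5, 9], 5: [4], 9: [4, 8], 8: [9]}
print(shortest_path(adj, 2, 8))  # [2, 4, 9, 8]
```

In a tree the shortest path between two nodes is unique, so the path between two annotated entities yields exactly one candidate predicate-argument chain to check against the ontology.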

Semantic role labeling and semantic reasoning may produce partially overlapping knowledge graphs from the same paragraph of text: the final stage of the extraction pipeline24 consolidates them into one final rule knowledge graph, and it also filters potential inaccuracies, based on heuristics that detect violations of ontological constraints such as disjointness (a procedure code cannot be both reimbursable and non-reimbursable in the same rule) or cardinality restrictions (there can only be one applicable time period for each rule).

We observe that the extraction pipeline does not require training with labelled data. However, as users validate policy rules, they progressively build a reusable, shared library of machine-readable and labelled rules, which Clais gradually incorporates into two deep learning models26 (based on BERT, Bidirectional Encoder Representations from Transformers62) that complement the other components of the extraction pipeline: a classifier that identifies paragraphs of text potentially containing a rule description in new policy documents, and a model that predicts the probability that a text fragment provides conditions (and values) for a rule.

Clais executes rules on claims data through dynamically constructed software pipelines whose components translate the semantics of rule conditions into executable code. The workflow consists of three stages: first, the system normalizes the conditions of a rule \(R\) into a format amenable to execution; second, it transforms the normalized conditions into an evaluation pipeline \(P_R\) by assembling executable components; and third, it executes instances of \(P_R\) and reports the results.

Rules automatically extracted from a policy document (or manually defined by professionals) may be either compliance rules (defining conditions characterizing valid claims) or non-compliance rules (defining conditions that identify at-risk claims). When analyzing claims data, the typical task consists of identifying non-compliant claims, and therefore Clais executes only non-compliance rules. The system transforms a compliance rule into a non-compliance rule by changing one (or more) of its conditions according to their semantics (ontology definitions) and to the logical structure of the rule (logical negation). Figure 1c-e shows an example of such a transformation: the policy text "A periodic oral evaluation is a payable benefit once in a six-month period" is a compliance rule defining a unit limitation: it is compliant to claim at most one unit of service within six months. The corresponding knowledge graph models the unit limitation using the condition hasMaxUnits(1) (for brevity we omit conditions defining the temporal aspect of the rule). Clais transforms the rule into a non-compliance rule by changing hasMaxUnits(1) into hasMinUnits(2): it is not compliant to claim two or more units of service within six months. The preliminary transformation stage may also replace a subset of conditions with a single condition whose semantics maps directly to one of the executable components in the downstream execution pipeline. An example of such a transformation is the following: rules typically have some temporal constraints, which are expressed with two conditions: one defining the amount of time (hasAmountOfTime) and one defining the unit of time (hasUnitOfTime); see examples in Fig. 1d and Fig. 5k. This pair of conditions is replaced with a single condition that defines a time window in days (hasSlidingWindowInDays); see Fig. 1e.
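The two normalizations described above (negating the unit limit, and collapsing the temporal pair into a sliding window) can be sketched as a small rewrite over the dict representation. The condition names mirror the text; the helper function and the 30-day month assumption are hypothetical (consistent with the six-month-to-180-day example in the paper):

```python
# Assumed day counts per unit of time (a sketch; six months -> 180 days,
# matching the hasSlidingWindowInDays(180) example in the text).
DAYS_PER_UNIT = {"day": 1, "week": 7, "month": 30, "year": 365}

def to_non_compliance(rule):
    """Rewrite a compliance rule into an executable non-compliance rule."""
    normalized = dict(rule)
    # hasMaxUnits(n) (compliant) becomes hasMinUnits(n + 1) (non-compliant).
    if "hasMaxUnits" in normalized:
        normalized["hasMinUnits"] = normalized.pop("hasMaxUnits") + 1
    # hasAmountOfTime + hasUnitOfTime collapse into hasSlidingWindowInDays.
    if "hasAmountOfTime" in normalized and "hasUnitOfTime" in normalized:
        amount = normalized.pop("hasAmountOfTime")
        unit = normalized.pop("hasUnitOfTime")
        normalized["hasSlidingWindowInDays"] = amount * DAYS_PER_UNIT[unit]
    return normalized

rule = {"hasMaxUnits": 1, "hasAmountOfTime": 6, "hasUnitOfTime": "month"}
print(to_non_compliance(rule))
# {'hasMinUnits': 2, 'hasSlidingWindowInDays': 180}
```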

Clais dynamically assembles an evaluation pipeline for each rule by chaining executable components that perform operations on an evaluation context; the evaluation context consists of the temporal sequence of claim records belonging to a patient. Clais uses three types of executable components: filter, splitter, and evaluator; these are sufficient to map the semantics of the ontology properties that define normalized rule conditions. Future extensions of the ontology may require new types of executable components. The definition of filters, splitters, and evaluators uses declarative functions that query the fields of a claim record and produce some output value; we support the modular expression language SpEL63 for writing such functions. Filters apply logical conditions to limit the evaluation context to only those claim records that are relevant for the rule being executed. A typical example of a condition that maps to a filter is hasApplicableService (Fig. 1e), which restricts the evaluation context to those claims referring to a specific service code. Splitters divide an evaluation context into a stream of evaluation contexts depending on values in the claim records, or on temporal windows; they may also perform grouping operations and compute aggregations on claim records. The condition hasBillingCommonality(same_provider) (see rule \(R_1\) in Fig. 5k) maps to a splitter that groups claim records by their provider identifier; the condition hasSlidingWindowInDays(180) (Fig. 1e) maps to a splitter that divides the sequence of claim records into sub-sequences whose temporal duration spans at most 180 days. Finally, an evaluator analyses each claim in an evaluation context using an expression that evaluates to true or false; a typical evaluator produces a true value when the cumulative sum of a specified claim field, over the claims in the evaluation context, is greater (or lower) than a given threshold.
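A minimal sketch of the filter / splitter / evaluator chain for the "at most one unit per 180 days" example (claim records as dicts; field names, the service code, and the helper functions are illustrative, not the authors' SpEL-based components):

```python
from datetime import date, timedelta

def filter_claims(claims, service):            # filter component
    """Keep only claims for the applicable service code."""
    return [c for c in claims if c["service"] == service]

def sliding_windows(claims, days):             # splitter component
    """Yield sub-sequences of claims whose dates span at most `days` days,
    anchored at each claim in chronological order."""
    claims = sorted(claims, key=lambda c: c["date"])
    for i, anchor in enumerate(claims):
        limit = anchor["date"] + timedelta(days=days)
        yield [c for c in claims[i:] if c["date"] < limit]

def flag_excess_units(window, min_units=2):    # evaluator component
    """True when the cumulative units in the window reach the threshold."""
    return sum(c["units"] for c in window) >= min_units

claims = [
    {"service": "D0120", "units": 1, "date": date(2021, 1, 10)},
    {"service": "D0150", "units": 1, "date": date(2021, 2, 1)},
    {"service": "D0120", "units": 1, "date": date(2021, 3, 15)},
]
relevant = filter_claims(claims, "D0120")
non_compliant = any(flag_excess_units(w) for w in sliding_windows(relevant, 180))
print(non_compliant)  # True: two D0120 claims fall within 180 days
```

A claim is non-compliant as soon as any one of its evaluation contexts triggers the evaluator, which matches the "at least one evaluation context" semantics described below.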

Clais uses a configurable mapping to transform conditions in a rule into their respective executable components and to assemble them into an evaluation pipeline. In each pipeline, the final evaluator component evaluates each claim in its respective evaluation context. A claim may appear in several evaluation contexts depending on the rule conditions: the claim is non-compliant if the evaluator associates a true value with the claim in at least one evaluation context. Clais builds an evaluation pipeline P_R for every rule R; it deploys an instance of P_R for each patient (the input consists of an evaluation context listing the claims of the patient in chronological order). Each instance of P_R is executed in parallel: the parallel execution of evaluation pipelines (for different rules and for different patients) can be distributed across a computing infrastructure, thus enabling the scalability required to efficiently process very large volumes of claims (distributed execution was not necessary in our experiments).
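The per-patient deployment described above can be sketched with a standard executor; here a trivial evaluate function stands in for a full rule pipeline, and the patient data are hypothetical.

```python
# Sketch: one pipeline instance per patient, executed in parallel.
from concurrent.futures import ThreadPoolExecutor

patients = {
    "p1": [{"units": 1}, {"units": 1}],  # two units claimed -> flagged
    "p2": [{"units": 1}],
}

def evaluate(context, threshold=2):
    # Placeholder for a full P_R pipeline: flag the context when the
    # cumulative units reach the rule's threshold.
    return sum(c["units"] for c in context) >= threshold

with ThreadPoolExecutor() as pool:
    flagged = dict(zip(patients, pool.map(evaluate, patients.values())))

print(flagged)  # {'p1': True, 'p2': False}
```

In a distributed setting the same map-over-patients structure would be handed to a cluster scheduler instead of a local thread pool.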

We used a dataset of 44,913,580 dental claims to train our baseline models; 421,321 (0.94%) claims are labelled as fraudulent. In our experiments we used random under-sampling of the data for each rule, with an 80:20 distribution (80% normal and 20% fraudulent claims), to build the training dataset (as in previous work [15]). We experimented with several strategies to cope with the problem of class imbalance, including different balancing ratios for the dataset and re-weighting of classes in the negative log-likelihood loss; the best performance was obtained with simple random under-sampling at an 80:20 distribution.
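The 80:20 under-sampling amounts to keeping every fraudulent claim and sampling four times as many normal claims. A minimal sketch (with stand-in records, not the actual dataset):

```python
# Sketch: random under-sampling to an 80:20 normal/fraud training set.
import random

def undersample(normal, fraud, normal_ratio=0.8, seed=0):
    # normal_ratio=0.8 -> keep 4 normal claims per fraudulent claim.
    n_normal = int(len(fraud) * normal_ratio / (1 - normal_ratio))
    rng = random.Random(seed)
    return rng.sample(normal, n_normal) + list(fraud)

normal = list(range(10_000))  # stand-ins for normal claim records
fraud = list(range(50))       # stand-ins for fraudulent claim records
train = undersample(normal, fraud)
print(len(train))  # 250 claims: 200 normal (80%) + 50 fraudulent (20%)
```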

For both baseline models, we did not perform feature engineering to create additional features, because manual feature engineering is data-dependent. Different rules require different types of features, and creating manual features for every rule is impractical and as costly as manually implementing algorithms for compliance checks (similar to what policy investigators currently do).

We used the open-source LightGBM [64] implementation of gradient boosting trees with the following hyper-parameters: n_estimators=20,000, max_depth=6, num_leaves=1000, learning_rate=0.01, bagging_fraction=0.8, min_data_in_leaf=5. For the gradient boosting models, we rebalanced the data with balancing ratios ranging from 0.1 to 1.0, but we did not observe an improvement over the reported results (Fig. 3a).
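For reference, the listed hyper-parameters map onto the LightGBM Python API roughly as below; this is a sketch of the configuration, not the authors' training script, and lightgbm itself is not imported so the fragment stays dependency-free.

```python
# Hyper-parameters from the text, as keyword arguments one would pass to
# lightgbm.LGBMClassifier(**params) or lightgbm.train(params, ...).
params = {
    "n_estimators": 20_000,
    "max_depth": 6,
    "num_leaves": 1000,
    "learning_rate": 0.01,
    "bagging_fraction": 0.8,   # LightGBM alias of subsample
    "min_data_in_leaf": 5,     # LightGBM alias of min_child_samples
}
print(sorted(params))
```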

The deep neural network baseline uses a Recurrent Neural Network (RNN) architecture [44,45,46] to learn dependencies between elements in a sequence. The sequence used to classify a claim consists of all the preceding claims and related features in chronological order, in addition to the claim being classified. Each claim in the sequence also contains the time-delta from the previous claim. All categorical features are processed with an embedding layer whose size equals the square root of the feature cardinality; all numerical features are processed by a normalization layer that estimates the variance and the average during the training phase. The RNN implementation relies on a Gated Recurrent Unit [46] with 3 hidden layers, trained with a learning rate of 10e-3 using the negative log-likelihood loss. We use a binary softmax layer as the final claim-classification layer. We trained the network for up to 50 epochs (similarly to previous work [21]), but we did not see further improvements.
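The square-root embedding rule is simple to state in code; the rounding direction is an assumption, since the text does not specify it, and the feature names and cardinalities below are hypothetical.

```python
# Sketch: embedding size = sqrt(feature cardinality), rounded up (assumed).
import math

def embedding_size(cardinality):
    return math.ceil(math.sqrt(cardinality))

sizes = {name: embedding_size(card)
         for name, card in {"provider_id": 900, "service_code": 400}.items()}
print(sizes)  # {'provider_id': 30, 'service_code': 20}
```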

We organized the user study in two sessions: the first with the External Group (participants had no prior knowledge of Clais), and the second with the Internal Group (participants were also involved in the design and prototyping of Clais and in the development of the ground truth). Each session was organized as a video conference: each participant used her/his own computer and answered the questionnaires independently. We started each session by describing the purpose of the user study, the structure of the PUEU and USE questionnaires, and their scope in the context of our user study. The session with the External Group included a one-hour introductory tutorial about Clais. We delivered the questionnaires as spreadsheets; we went through the questions one by one, providing clarifications when necessary, and waited for each participant to answer a question (including writing comments) before moving to the next. We collected the spreadsheets with the answers and removed any reference to the participants' identity (except for job role and group) before processing the results.

Go here to see the original:
Collaborative artificial intelligence system for investigation of healthcare claims compliance | Scientific Reports - Nature.com

Machine Learning for Exoplanet Detection in High-Contrast Spectroscopy: Revealing Exoplanets by Leveraging … – Astrobiology News

Molecular maps of H2O for real PZ Tel B data using cross-correlation for spectroscopy. This figure shows a real-case example where noise structures may reduce the detection capabilities of cross-correlation methods. The brown dwarf was observed under good conditions (airmass: 1.11; seeing, start to end: 0.77 to 0.72) and lower conditions (airmass: 1.12; seeing: 1.73 to 1.54); cf. Appendix A for full details on observing conditions. The upper plots show molecular maps of PZ Tel B, while the lower plots show the cross-correlation series along the radial velocity (RV) support for pixels at the centre of the object and within the object's brightness area. While the brown dwarf should appear at the same spatial coordinates for the respective RV locations in both cases (cf. vertical lines), it is clearly visible when conditions are good, but hardly visible on equal scales under lower conditions.

The new generation of observatories and instruments (VLT/ERIS, JWST, ELT) motivates the development of robust methods to detect and characterise faint and close-in exoplanets. Molecular mapping and cross-correlation for spectroscopy use molecular templates to isolate a planet's spectrum from its host star.

However, reliance on signal-to-noise ratio (S/N) metrics can lead to missed discoveries, due to strong assumptions of Gaussian independent and identically distributed noise. We introduce machine learning for cross-correlation spectroscopy (MLCCS); the method aims to leverage weak assumptions on exoplanet characterisation, such as the presence of specific molecules in atmospheres, to improve detection sensitivity for exoplanets. MLCCS methods, including a perceptron and unidimensional convolutional neural networks, operate in the cross-correlated spectral dimension, in which patterns from molecules can be identified.
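The cross-correlated spectral dimension these methods operate in can be illustrated with a toy example: slide a molecular template across a spectrum (pixel shifts standing in for radial-velocity shifts) and record the correlation at each lag. This is a didactic sketch, not the MLCCS implementation, and the toy spectra are invented.

```python
# Sketch: cross-correlation series of a spectrum against a molecular template.
def cross_correlation_series(spectrum, template, max_lag):
    series = []
    for lag in range(-max_lag, max_lag + 1):
        cc = sum(
            spectrum[i] * template[i - lag]
            for i in range(len(spectrum))
            if 0 <= i - lag < len(template)
        )
        series.append(cc)
    return series

template = [0.0, 1.0, 0.0, 1.0, 0.0]             # toy two-line template
spectrum = [0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0]   # same lines, shifted 2 pixels
series = cross_correlation_series(spectrum, template, max_lag=3)
best_lag = series.index(max(series)) - 3
print(best_lag)  # 2: the template matches when shifted by 2 pixels
```

MLCCS classifiers (perceptron, 1-D CNNs) then look for molecular patterns in exactly this kind of correlation series, rather than thresholding a single S/N value.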

We test on mock datasets of synthetic planets inserted into real SINFONI K-band noise. The results from MLCCS show substantial improvements: on a grid of faint synthetic gas giants, at a false-discovery rate of up to 5%, a perceptron can detect about 26 times as many planets as an S/N metric. This factor increases to 77 with convolutional neural networks, with a statistical sensitivity shift from 0.7% to 55.5%. In addition, MLCCS methods show a drastic improvement in detection confidence and conspicuity on imaging spectroscopy.

Once trained, MLCCS methods offer sensitive and rapid detection of exoplanets and their molecular species in the spectral dimension. They handle systematic noise and challenging seeing conditions, can adapt to many spectroscopic instruments and modes, and are versatile regarding atmospheric characteristics, which can enable identification of various planets in archival and future data.

Emily O. Garvin, Markus J. Bonse, Jean Hayoz, Gabriele Cugno, Jonas Spiller, Polychronis A. Patapis, Dominique Petit Dit de la Roche, Rakesh Nath-Ranga, Olivier Absil, Nicolai F. Meinshausen, Sascha P. Quanz

Comments: 27 pages, 24 figures. Submitted for publication in A&A on January 2, 2024; after a first iteration with the referee, resubmitted May 17, 2024
Subjects: Earth and Planetary Astrophysics (astro-ph.EP); Instrumentation and Methods for Astrophysics (astro-ph.IM); Machine Learning (cs.LG); Applications (stat.AP)
Cite as: arXiv:2405.13469 [astro-ph.EP] (or arXiv:2405.13469v1 [astro-ph.EP] for this version)
Submission history: from Emily Omaya Garvin; [v1] Wed, 22 May 2024 09:25:58 UTC (2,774 KB)
https://arxiv.org/abs/2405.13469

Read the original post:
Machine Learning for Exoplanet Detection in High-Contrast Spectroscopy: Revealing Exoplanets by Leveraging ... - Astrobiology News

Bringing generative artificial intelligence to space – SpaceNews

TAMPA, Fla. – Amazon Web Services is busy positioning its cloud infrastructure business to capitalize on the promise of generative artificial intelligence for transforming space and other industries.

More than 60% of the company's space and aerospace customers are already using some form of AI in their businesses, according to AWS director of aerospace and satellite Clint Crosier, up from single digits around three years ago.

Crosier predicts similar growth over the next few years in space for generative AI, which uses deep-learning models to answer questions or create content based on patterns detected in massive datasets, marking a major step up from traditional machine-learning algorithms.

Mathematical advances, an explosion in the amount of available data, and cheaper and more efficient chips for processing it are "a perfect storm" for the rise of generative AI, he told SpaceNews in an interview, helping drive greater adoption of cloud-based applications.

"In the last year, AWS has fundamentally reorganized itself internally so that we could put the right teams [and] organizational structure in place so that we can really double down on generative AI," he said.

He said AWS has created a "generative AI for space" cell of a handful of people to engage with cloud customers to help develop next-generation capabilities.

These efforts include a generative AI laboratory for customers to experiment with new ways of using these emerging capabilities.

Crosier sees three main areas for using generative AI in space: geospatial analytics, spacecraft design and constellation management.

Earth observation satellite operators such as BlackSky and Capella Space already use these tools to help manage search queries and gain more insights into their geospatial data.

It's early days in the manufacturing sector, but Crosier said engineers are experimenting with how a generative AI model fed with design parameters could produce new concepts by drawing from potentially overlooked data, such as from the automotive industry.

"Whether you're designing a satellite, rocket or spacecraft, you're letting the generative AI go out and do that exploratory work around the globe with decades of data," he said, "and then it will come back and bring you novel design concepts that nobody has envisioned before, for your team to use as a baseline to start refining."

He said generative AI also has the potential to help operators manage increasingly crowded orbits by helping to simulate testing scenarios.

"If I have a constellation of 600 satellites, I want to model how that constellation will behave under various design parameters," he said.

"Well, I can model two concepts, which leaves me woefully inadequate, because it costs time and money to model them, or I can model an infinite number. Gen AI will tell me what are the top 25 cases I should model for my modeling simulation capability that will give me the best design optimization, and so we're seeing it used that way."

AWS' efforts to accelerate the adoption of these emerging computing capabilities also include scholarships and a commitment, announced in November, to provide free AI training for two million people worldwide before the end of 2025.

Go here to read the rest:
Bringing generative artificial intelligence to space - SpaceNews

Machine Learning Approach Uncovers Unreported PFAS in Industrial Wastewater | American Association for the … – AAAS

A new technique can more accurately detect the presence of nondegradable, synthetic chemicals that linger invisibly in ecosystems thanks to their as-yet-unknown structures. These aptly nicknamed "forever chemicals" persist and accumulate in the environment and have been linked to cancers and developmental disorders in a wide range of organisms.

When tested on wastewater samples collected in 2011 from a Chinese industrial park, the framework identified 31 classes of these forever chemicals, which are also known as per- and polyfluoroalkyl substances (PFAS). Within those 31 classes were 17 classes of PFAS that had gone unreported until now.

"[This] reveals a greater presence of these chemicals in the environment than previously known," said Si Wei, a specialist in environmental chemistry, professor at Nanjing University and corresponding author of the research published in Science Advances. "[Our] finding is critical for understanding the potential impact of PFAS on the environment and human health."

During the mid-20th century, PFAS appeared on the scene and rapidly gained popularity because of their innate resistance to heat, water and oil. These qualities made them perfect for nonstick pans, waterproof fabrics, fire-fighting foam, food packaging and more. However, as the 21st century began, national agencies including the Centers for Disease Control and Prevention and international authorities noticed that the substances' ubiquity had led to unexpected side effects: PFAS were furtively entering water sources, land and animals, and they were causing cancer, reproductive defects and more.

Like many drugs that break down in the gut and then disperse throughout the body, PFAS too break down into compounds, or "seeds," as they slink through ecosystems.

"These seeds act as reference points. Once we identify a seed, we can use it to find other PFAS that have similar structures," said Wei. "It's like finding a particular tree in a forest. Once you recognize that tree, you can easily spot others of the same type."

This tactic can provide health and environmental policymakers with information necessary for taking regulatory action against these pollutants. Just this spring, the U.S. Environmental Protection Agency finalized regulations to address PFAS contamination. Yet a problem remains: companies are inventing different PFAS that may circumvent regulatory efforts, and most do not have to disclose any information about these newer chemicals' design.

"New PFAS are kind of like cousins to the old ones: they share similar properties and similar structural features, but [are] not completely consistent," said Wei. "Because they're different, they're not as familiar to scientists and regulators, which makes it more difficult to keep track of them and evaluate their potential risks."

Many new PFAS are so different in structure from their predecessors that they and their seeds are invisible to existing tools.

Noting this clear need for new investigatory technology, Wei and his colleagues developed a platform, called APP-ID, that combines machine learning algorithms with a high-resolution mass spectrometry molecular networking approach. Researchers conventionally use this latter strategy to screen for microbial natural products and metabolites with therapeutic potential that lurk in the deep ocean's hydrothermal vents.

The approaches "cluster structurally similar compounds and identify the unknowns based on the information of knowns," said Wei. "By applying molecular networking, we can map out the relationships between known and unknown PFAS."
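One simple way such relationships can be drawn, sketched below, is to link two measured masses when they differ by a whole number of CF2 repeat units (monoisotopic mass about 49.9968 Da), tying an unknown to a known "seed". This is a hypothetical simplification for illustration, not the APP-ID algorithm, and the seed mass is invented.

```python
# Sketch: link masses separated by n x CF2 (a common PFAS homologue spacing).
CF2 = 49.9968  # monoisotopic mass of a CF2 repeat unit, in Da

def linked_by_cf2(mass_a, mass_b, max_units=10, tol=0.005):
    diff = abs(mass_a - mass_b)
    return any(abs(diff - n * CF2) <= tol for n in range(1, max_units + 1))

seed = 413.9736               # hypothetical known "seed" mass
unknown = seed + 2 * CF2      # candidate two CF2 units heavier
print(linked_by_cf2(seed, unknown))  # True: plausibly the same PFAS family
```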

In tests, the PFAS detection framework discovered unknown chemicals with 58.3% accuracy, an improvement over three other current methods, which achieved 43.8%, 37.5% and 12.5% accuracy, respectively.

The team also had APP-ID evaluate wastewater samples taken over a decade ago from a fluorochemical industrial site in China. It successfully unearthed 733 PFAS belonging to 31 classes. Roughly 54% of those classes seen had never been described before. Notably, 10 of the unreported classes were made of single compounds, which are particularly hard to trace using traditional methods.

Next, the group had the tool retrospectively screen a public repository called MassIVE. This databank is renowned among scientists because it holds 15,000 datasets with environmental and human data from 50 countries. When analyzing a variety of environmental and human samples from 20 of those countries, APP-ID exposed 126 PFAS, comprising 81 unknown, eight legacy, and 37 emergent (newer, already-characterized) types of PFAS chemicals. Essentially, 64% of the 126 had never been catalogued before, a result that underscores how newer generations of forever chemicals are hiding in plain sight.

Wei hopes that APP-ID will aid future efforts to conceptualize unknown PFAS accumulation globally. "It's reasonable to assume that the variety of PFAS has expanded over the past decade," he added. "Further research is necessary to uncover the historical trends and forecast future patterns of both known and unknown PFAS."

Read the rest here:
Machine Learning Approach Uncovers Unreported PFAS in Industrial Wastewater | American Association for the ... - AAAS

Machine learning drafted to aid Phase 3 testing of ALS therapy PrimeC – ALS News Today

NeuroSense Therapeutics is collaborating with PhaseV for insights into how to better design the protocol for the planned Phase 3 trial that will test PrimeC for amyotrophic lateral sclerosis (ALS).

A specialist in machine learning technology for clinical trials, PhaseV used data from the ongoing Phase 2b PARADIGM trial (NCT05357950) as input to a causal machine learning model. This is a form of artificial intelligence that can help unlock insights and identify features that may contribute to a treatment response.

As part of its independent analysis, the company found that PrimeC could work well in multiple subgroups of patients in the Phase 3 study, which should start in the coming months.

Being able to predict treatment outcomes in certain patients may help optimize the design of the upcoming trial by selecting the patients most likely to respond, while reducing costs.

"ALS is a complex disease that manifests in unique ways in each patient. Although there is an improved understanding of the underlying mechanisms of ALS, therapeutic options remain limited," Raviv Pryluk, CEO and co-founder of PhaseV, said in a press release.

NeuroSense plans to submit an end-of-Phase 2 package for review by the U.S. Food and Drug Administration (FDA) and the European Medicines Agency, the FDA's European counterpart, and discuss the clinical protocol for the Phase 3 trial with the regulators.

"There remains a critical need for new innovative approaches to address this devastating neurodegenerative disease," said Alon Ben-Noon, CEO of NeuroSense. "We plan to continue to collaborate with PhaseV as we develop our Phase 3 trial."

PrimeC contains fixed doses of two FDA-approved oral medications: the antibiotic ciprofloxacin and celecoxib, a painkiller that reduces inflammation. The two are expected to work together to slow or stop disease progression by blocking key mechanisms implicated in ALS, such as inflammation, iron accumulation, and impaired RNA processing.

PARADIGM is testing a long-acting formulation of PrimeC in 68 adults with ALS who started to see symptoms up to 2.5 years before enrolling. While continuing their standard ALS treatments, the participants were randomly assigned to PrimeC or a placebo, taken as two tablets twice daily for six months.

An analysis of PARADIGM's per-protocol population (62 adults with ALS who adhered well to the clinical protocol) showed a significant 37.4% reduction in functional decline, as measured by the ALS Functional Rating Scale-Revised (ALSFRS-R).

A subgroup of those patients who were at higher risk for rapid disease progression saw the greatest clinical benefit: those treated with PrimeC for six months showed a significant 43% reduction in functional decline relative to placebo. High-risk patients made up about half the adults in the Phase 2b trial.

Another subgroup of newly diagnosed patients, who'd had their first symptoms of ALS within a year of enrollment, showed a 52% reduction in the rate of disease progression. This translated to a 7.76-point difference in favor of PrimeC on the ALSFRS-R, which has a maximum total of 48 points.
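As a back-of-envelope check (my arithmetic, not a figure from the trial), the two reported numbers pin down the implied declines: a 52% slower decline producing a 7.76-point difference implies a placebo-group decline of roughly 14.9 ALSFRS-R points over the period, versus about 7.2 points on PrimeC.

```python
# Back-of-envelope: implied ALSFRS-R declines from the reported subgroup stats.
point_difference = 7.76    # reported difference in favor of PrimeC
relative_reduction = 0.52  # reported 52% reduction in rate of decline

placebo_decline = point_difference / relative_reduction
primec_decline = placebo_decline - point_difference
print(round(placebo_decline, 1), round(primec_decline, 1))  # 14.9 7.2
```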

"Through our initial collaboration with PhaseV, we gained an even greater understanding of the effect of PrimeC across multiple patient subgroups," Ben-Noon said. "We will apply these insights to optimize the design of our Phase 3 study with the aim of maximizing meaningful clinical results that will differentiate PrimeC in the market."

"Through a unique combination of causal [machine learning], real-world data, and advanced statistical methods, we confirmed the potential clinical benefit of PrimeC," Pryluk said. "Our analysis predicted a high rate of success for PrimeC in the Phase 3 clinical trial for multiple recommended subgroups."

See more here:
Machine learning drafted to aid Phase 3 testing of ALS therapy PrimeC - ALS News Today

Pipe welding integrates machine learning to boost production – TheFabricator.com

The SWR was designed specifically for automated pipe welding.

For Joe White Tank in Fort Worth, Texas, increased demand for construction projects, and more competitive bidding for the pipe fabrication jobs within those projects, recently presented a new challenge.

The company has been in the welding industry since 1942, specializing in fabricating custom tanks, pressure vessels for industrial and ammonia refrigeration, and piping for commercial and industrial construction. It has built a reputation for quality work while consistently delivering products faster than typical market lead times.

That standard recently got put to the test, but President Jeff Yurtin and his management team aren't people used to resting on their laurels. Rather, they're normally on the hunt for new ways to clear the next hurdle.

For Yurtin, the issue at hand was scaling labor for projects that can dampen productivity if theyre not managed correctly. To meet that need, he chose to invest in a Novarc Spool Welding Robot (SWR). The machine offers accurate torch control and machine learning algorithms that can detect different features of a workpiece.

The unit was designed specifically for pipes, pressure vessels, and other types of roll-welded workpieces. It features an adaptive controls system to help ensure accurate torch control, and AI/machine learning algorithms to detect weld pool features.

The SWR can integrate smoothly with customers' production flow and existing manufacturing processes, according to Soroush Karimzadeh, Novarc's co-founder and CEO.

"The SWR is designed with a small footprint and a very long reach, enabling it to be adopted in almost any fabrication shop, no matter the layout or various requirements," Karimzadeh said. "It's designed to be minimally intrusive to the production flow."

The nature of Joe White Tank's bread-and-butter projects can throw a kink into its work process, however. And that includes persistent labor issues.

"Piping projects can require hard starts and stops with little time to ramp your labor up and down," Yurtin said. "Hiring and firing welders for jobs was not our idea of success. We pursued growing our business with a long-term mindset. By adding the SWR to our shop floor, we added capacity strategically and avoided many of the negative implications that come from a short-term, job-to-job labor force. And it has a small footprint; we would have had to install four manual weld cells to do the job of the SWR."

The SWR also helps to address the nationwide shortage of skilled welders by helping less-experienced operators produce high-quality welds.

Novarcs machine includes a user interface that has proven easy to learn for operators, regardless of experience level.

"Balancing a stable workforce with changing customer and industry demands can be difficult," Yurtin said. "Our organization's culture is very important to our management team. So, as we have grown our company's market presence, we have worked to limit high employee turnover."

The benefits of workforce continuity are legion. Not only does it give employees a greater sense of job security, but it also results in a more willing commitment to corporate goals. At a time when fabrication shops across North America are experiencing a shortage of skilled welders, the SWR helped limit the impact of this challenge, Yurtin said.

"We used to have a department dedicated to pipe welding, but now we have our SWR operators working with fitters, supporting them, to ramp up efficiency," he said. "This has freed up welders to work on other projects in our backlog, shrinking our market lead time and significantly increasing our capacity. The Novarc SWR increased our capacity by 400% without reducing quality."

The SWR accommodates users with a set of requirements for the fit-up process and provides comprehensive training for fitters.

"This is another way to ensure the integration of the SWR into our clients' manufacturing processes is as smooth as possible," Karimzadeh said.

With a nod to sustaining their strong corporate culture, the company's employees are buying in, according to Yurtin.

"Anyone that's been in the welding business for more than 10 minutes knows that the physical demands are significant," Yurtin said. "Welders get tired."

"The ergonomics of the SWR are an immediate benefit for the welder," he added. "They're still using their hands, but they don't have to wear a hood, and it's much easier with the joystick control."

The increased productivity also created an unexpected effect on the shop floor that Yurtin recounted with a smile:

"Sometimes it's like a game, where the welders see how much they can get through in one day, and we're all pumped when we have a super productive day."

With that sort of team reaction, the machine could even be seen as an aid in recruitment.

"It's a more attractive place to work, and the younger generation of welders is really excited about automation and working with the SWR," Yurtin said. "Even the older workers find the learning curve easy to handle."

Often with automation comes an initial hesitancy, either about using the new technology or the need to make a change that could be perceived as high risk. Yurtin, however, chooses to focus on the ROI.

"Quick payback," he said. "As we are able to operate four times faster, we have been able to take on more work. We would previously have needed four weld cells to deliver the same capacity as one SWR. And the SWR takes 25% to 30% less space. The SWR has increased our ability to accept jobs with shorter lead times, win more projects, and pursue larger bids."

At the end of the day, safety always comes first for Karimzadeh.

"Novarc cobots are designed to follow the standard for collaborative robots and collaborative robot applications, governed by the ISO 15066 standard, so the cobot is basically equipped with force- and speed-limiting sensors to ensure that if there is a safety event, it can safely stop the work," Karimzadeh said. "In addition, the health hazards for welders are significantly reduced, as the welding torch is moved by the cobot, and welders are not exposed to weld fumes and arc light."

As helpful as those benefits are, the one that stands out for Yurtin is quality.

"We are mainly using the SWR to weld pipe for pressure vessels, industrial refrigeration, ammonia refrigeration; basically, pipe for industrial applications," Yurtin noted. "These need to be ASME-quality, X-ray-quality welds. And the SWR, besides being super easy to operate, has increased the quality and consistency of the welds. The SWR lays the root pass itself, and the penetration is perfect from root to cap. The SWR can handle it all. And it always passes X-ray inspection."

Yurtin also credits the SWR for helping to position Joe White Tank as a future-friendly welding shop.

"We're excited to be a showcase for innovation and believe the manufacturing industry needs to adopt new technology to be successful and meet increasing demands for productivity and competitive bidding," he said. "Our clients are really impressed by the technology of the collaborative robot in our shop, not to mention the quality, productivity, output, and efficiency."

Karimzadeh added, "The end users of pipe spools are pushing harder and harder regarding project delivery timelines, cost of production, and quality of welds. This is ultimately pushing the industry to automate. It's the only way to meet the timelines, manage the cost, and maintain the quality of the work."

For Joe White Tank, the search for new solutions to welding challenges is a constant quest for answers that improve its product, its work environment, and, ultimately, its bottom line while reflecting positively on its reputation in the industry.

Continue reading here:
Pipe welding integrates machine learning to boost production - TheFabricator.com

Here’s how AI and ML are shaping the future of machine design – Interesting Engineering

In the latest episode of Lexicon, the podcast by Interesting Engineering (IE), we sit down with Jaroslaw Rzepecki, Ph.D., Monumo's chief technology officer (CTO).

"Our mission is to improve the efficiency of motor systems in a way that has never been possible before and, in doing so, help us use precious resources more sustainably," Monumo explains. Monumo is working hard to get there, using a unique set of data and machine learning techniques to build one of the world's first large engineering models (LEM).

Once matured, this model will work like an engineering R&D Midjourney or DALL-E, supplying engineers with components or entire machine plans on demand. A quantum leap in computer-aided design (CAD), if you like.

While the interface won't be as dumbed down as you might expect from large language models (LLMs) like ChatGPT, it will leverage an engineer's time to make the best kit they can imagine. And the potential is enormous.

Jaroslaw Rzepecki leads the companys technological development, oversees the hardware and software development pipelines, and directs machine learning (ML) research.

Before joining Monumo, Jaroslaw was an integral part of the Codemasters team behind the racing video games Grid and Dirt 2; he has also worked as a software engineer at Siemens and held senior roles at Microsoft Research and ARM.

As he told Interesting Engineering during our interview, he also spends some of his spare time in martial arts, specifically kickboxing. We asked him if martial arts had helped his professional life.

"Yeah, so it's a bit similar to my professional journey, so I tried several different disciplines as I moved around. You know, I was also changing clubs, and obviously, then you also change the styles a little bit," Jaroslaw told us.

"I did quite a few different ones. I would say that my favorite sport is kickboxing. I've done that for probably the longest out of all of them, and whether it helps, it does. I think it helps with focus. It helps with clearing your mind," he added.

"Afterwards, you probably feel physically exhausted, but you're quite invigorated. You have more energy that day to do something than if you had skipped that training the previous day. So yes, I would say that it does help," Jaroslaw said.

After his extensive and diverse career, including academia, computer game design, and software engineering at Siemens and ARM, Jaroslaw saw the potential for Monumo and jumped ship to become its second-ever employee. He has since worked up the ranks to become its head tech honcho.

When asked if this was a big risk for him, Jaroslaw said, "Um, there is always some risk involved when you change, right? But you know, no risk, no fun, right? So, yes, I think a bit of a risk was involved. But, as I said, I calculated that risk and thought, that's okay."

The main thrust of Monumos work is to combine physics and engineering knowledge with machine learning (ML) and artificial intelligence (AI) to build a computer model that can help sketch out new models for machines. The idea is that, with enough data and training, such a model could conceivably be used to make novel designs never dreamed up before.

And it will be data-driven and built for professionals to boot. Not just any Tom, Dick, and Harry will be able to pick it up and run with it. This is mainly because Monumo plans to keep its software proprietary, but also because, at its heart, the software is a complicated multidisciplinary physics model.

It combines data and understanding from many different engineering fields and branches of physics, and can conceivably integrate many other diverse fields, encompassing nuclear physics, nanotechnology, biology, and geology. If something can be measured or modeled, it could be integrated into the model.

"One-sentence headline here: I'm sure that everybody in the engineering community listening to this podcast will appreciate how difficult it is to find the right balance of different components of a complex engineering system if you want to design it, right?" Jaroslaw said.

"It's a difficult problem. So I like a challenge, I like difficult problems, and applying deep tech to engineering also automatically makes it a multidisciplinary problem because, obviously, you have to combine, you know, the latest developments in computer science, algorithms, optimization, math, and physics," he added.

But the LEM (Large Engineering Model) is the long-term goal. For now, the team is building Anser, a model that can generate designs and, crucially, provide the training data for the LEM later down the line. Monumo is focusing on making electric motors as energy-efficient as possible.

When pressed about the hallucination problems that plague LLMs, Jaroslaw explained that Anser and the eventual LEM would be immune to this: the generated designs are sense-checked using mechanical engineering tools to assess their viability.

If they don't pass muster, the software flags the issues, and the user goes back to the drawing board to amend the design accordingly. The entire design process mirrors real life, with multiple stages yielding the final piece.
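The generate-then-verify loop described above can be sketched abstractly. Everything here, the design parameters, the sanity check, and the loop itself, is a hypothetical illustration; Monumo's actual software is proprietary and far more sophisticated.

```python
import random

def generate_candidate(rng):
    """Propose a motor design as a dict of free parameters (illustrative only)."""
    return {
        "slots": rng.choice([24, 36, 48]),
        "poles": rng.choice([4, 6, 8]),
        "airgap_mm": round(rng.uniform(0.5, 2.0), 2),
    }

def passes_checks(design):
    """Stand-in for the engineering sanity checks that vet each design."""
    # A real tool would run electromagnetic/mechanical simulations here.
    return design["slots"] % design["poles"] == 0

def design_loop(seed=0, max_iters=100):
    """Generate candidates until one passes the checks, going back to the
    drawing board on every failure, as in the staged process described above."""
    rng = random.Random(seed)
    for _ in range(max_iters):
        candidate = generate_candidate(rng)
        if passes_checks(candidate):
            return candidate
    return None

print(design_loop())
```

The key property, as Jaroslaw describes it, is that nothing leaves the loop without passing the checks, which is what makes the output trustworthy in a way raw LLM output is not.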

It is a collaborative approach, like tweaking parameters in Midjourney or DALL-E to get the picture you want. Anser can also integrate certain customer considerations or constraints into the design based on their needs.

Since nearly every aspect of our modern world uses energy in some form or another, even a marginal increase in energy efficiency could yield enormous energy savings worldwide. Less energy wasted is a bonus for the planet as a whole.

"And so any kind of improvements that we can make to electric motors will have a huge positive impact on ecology and our movement as a society towards a more and more green way of life," Jaroslaw said.

The company chose the electric motor because it is at once simple enough and complex enough as a problem. If Anser can be proven on something like this, it can be applied to basically anything (within reason), given enough data and training.

"The techniques that we're applying and the simulation that we build is a multiphysics simulation, so it could be applied to other branches of engineering. We are indeed, yes, laying the foundations and building a simulation that is flexible enough," he explained.

"LLMs (Large Language Models) drive today's AI models to mimic human ability with words and pictures. Tomorrow, LEMs (Large Engineering Models) will create solutions that surpass anything humans have previously achieved. Our ability to run and store large volumes of simulations, combined with our optimization intelligence, means that we are already on the way to building these precious data sets and training new models," Monumo explains.

And don't worry about such a model taking your engineering job. Jaroslaw explained that Anser and its progeny should be considered a new, competent kind of computer-aided design (CAD) software.

"I don't think that engineers have to worry about losing their jobs. We will always need engineers. You know, all of this is, um, a tool, and we've seen in the past that each time a new tool is developed, in principle," he said.

"Humankind has an option: either 'I'm going to use this great new tool and do the same thing that I did before but with fewer humans involved,' or 'I can use this new tool and all the humans that I have to just do more,' and we always go for 'Let's just do more,'" he added.

So, it may be time to brush up on your AI and ML expertise.


Christopher McFadden graduated from Cardiff University in 2004 with a master's degree in geology. Since then, he has worked exclusively within the built environment, occupational health and safety, and environmental consultancy industries. He is a qualified and accredited energy consultant, Green Deal assessor, and Practitioner member of IEMA. Chris's main interests range from science and engineering, and military and ancient history, to politics and philosophy.

See original here:
Here's how AI and ML are shaping the future of machine design - Interesting Engineering

Redox, Snowflake Partner to Streamline Healthcare Data Exchange for AI and Machine Learning – HIT Consultant

What You Should Know:

Redox, a healthcare interoperability company, and Snowflake, the Data Cloud company, have joined forces to simplify the exchange of healthcare data.

This strategic partnership aims to revolutionize how healthcare organizations access and utilize patient data, ultimately leading to improved patient care.

Unifying Legacy Systems for Seamless Data Flow

The collaboration leverages Redox's expertise in unifying healthcare data from various sources, including legacy systems and disparate formats. This unified data stream is then delivered to Snowflake's Healthcare & Life Sciences Cloud in near real-time. This streamlined approach eliminates data silos and ensures a more comprehensive view of patient health information.

Empowering Providers, Payers, and Digital Health with AI and ML

By making healthcare data readily available in Snowflakes secure and scalable cloud environment, Redox and Snowflake empower various healthcare stakeholders. Providers, payers, and digital health organizations can leverage this data for advanced analytics powered by Artificial Intelligence (AI) and Machine Learning (ML).

"The ability to quickly, easily, and securely access health data from a variety of systems is essential for uncovering meaningful insights that are required for better precision-based care and better member outcomes," said Joe Warbington, Industry Principal for Healthcare at Snowflake. "Together, the Snowflake Healthcare and Life Sciences Data Cloud and Redox accelerate interoperability to centralize live healthcare data from often dozens to hundreds of data system silos, equipping our customers to garner deeper data insights, construct comprehensive Patient 360 data products, and push insights back into EHRs and health tech apps. We look forward to seeing how Snowflake's and Redox's technologies drive the future of connected healthcare."

Read more:
Redox, Snowflake Partner to Streamline Healthcare Data Exchange for AI and Machine Learning - HIT Consultant

Machine Learning Stocks to Buy That Are Millionaire-Makers: May – InvestorPlace

Source: Wright Studio / Shutterstock.com

The next phase of technology has been established: machine learning and AI will revolutionize the world for the better. Although it might seem like these stocks are trading in a bubble, investors need to keep a discerning, keen, long-term vision for these disruptive, emerging technologies. One way or another, AI will grow into a secular movement that nearly every industry, if not every company, in the world will incorporate to increase productivity and efficiency.

Of course, anxiety about an AI bubble is not unwarranted. Preparing a well-diversified portfolio of the right stocks is crucial to avoiding major drawdowns. Just because a company mentions AI doesn't mean it instantly becomes a good investment; we've already seen this with pullbacks in industries like EVs and fintech. So, if you want to gain machine learning exposure in your portfolio, consider these three machine learning stocks to buy, and thank us in five or ten years.

Source: Ascannio / Shutterstock.com

Palantir (NYSE:PLTR) went from a meme stock to a legitimate business, earning hundreds of millions each year in profits. The stock is trading right at the average analyst price target of $21.45 and has a street-high price target of $35.00. This high-end target represents a more than 60% upside from the current price.

This stock has been polarizing on Wall Street since its direct listing debut in September 2020. While the first few years were a roller coaster ride for investors, the stock is earning legitimate backing through its machine-learning-integrated production deployment infrastructure. Additionally, the hype doesn't get any more legit than Stanley Druckenmiller, who disclosed that he bought nearly 770,000 shares in the recent quarter! For those who don't know him, Druckenmiller has long supported the ML revolution, with NVIDIA (NASDAQ:NVDA) being his most recent win during its massive rally over the past year.

The problem with Palantir has always been its valuation. Currently, shares trade at 21x sales and 65x forward earnings. Nonetheless, growth prospects are looking strong now, with revenue growing at a five-year compound annual growth rate (CAGR) of 12% and a three-year CAGR of 21%. As multiples begin to compress, investors should consider Palantir to be a legitimate money-making contender in the ML space.
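For readers who want to sanity-check growth figures like the CAGRs quoted above, the compound annual growth rate is simply the constant yearly rate that compounds a starting value into an ending value over n years: CAGR = (end / start)^(1/n) - 1. The revenue numbers below are illustrative only, not Palantir's actual figures.

```python
def cagr(start, end, years):
    """Compound annual growth rate: (end / start) ** (1 / years) - 1."""
    return (end / start) ** (1 / years) - 1

# Sanity check with round numbers: $100 growing to $121 over 2 years
# is exactly 10% per year (100 -> 110 -> 121).
print(f"{cagr(100, 121, 2):.1%}")  # 10.0%

# Illustrative values only (not actual company revenue):
print(f"{cagr(600, 2200, 5):.1%}")
```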

Baidu (NASDAQ:BIDU) is a Chinese technology company that recently amassed over 200 million users on its new Ernie AI chatbot. This year, the stock is down by about 4.0% as Chinese stocks have lagged the broader rally in US equities. Nonetheless, Wall Street has maintained an average analyst price target of $153.36, about 40% higher than the current price.

Baidu recently made headlines after reports that it was interested in partnering with Tesla (NASDAQ:TSLA) to use its robotaxis in China. As China looks to get its hands on some for immediate rollout, investors should keep their eyes peeled for the unveiling of the CyberCabs in America this August. Not only could this become one of the strongest new revenue channels for both companies, but Baidu's race to gain a first-mover advantage could solidify it as a leader in the Chinese automobile space.

As with many Chinese ADR stocks, the multiples for BIDU are low. For example, its P/E ratio of 9.79x sits 25% below its sector's median! On top of such a discounted valuation, Baidu has maintained a strong 10-year revenue CAGR of 14%. Baidu looks like a bargain for investors who can tolerate the risk that comes with Chinese stocks.

Micron Technology (NASDAQ:MU) is an American chip maker seeing a major surge in demand due to AI and machine learning technology. Analysts are bullish on MU, with 28 of 31 recommendations in May coming in as a Buy or Strong Buy rating. The average analyst price target is $145.52, nearly 15% higher than the current price.

This chip maker has already hit new all-time highs this month and is seeing revitalized product demand. This growth potential has largely been attributed to Micron being one of only three companies in the world that make DRAM memory chips. These chips allow for storing massive amounts of data, which will help accelerate the training of AI and machine learning systems. DRAM chips accounted for 71% of Micron's revenue as of Q2 2024, which bodes well for the stock's upward momentum.

Usually, when a stock trades at all-time highs, its valuation also stretches. That's not exactly true for Micron, as shares trade at just 7.5x sales and 17x forward earnings. As revenue growth accelerates, Micron sticks out as one of the more under-the-radar ways to gain exposure to AI and potentially join the million-dollar club.

On the date of publication, Ian Hartana and Vayun Chugh did not hold (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Chandler Capital is the work of Ian Hartana and Vayun Chugh. Ian Hartana and Vayun Chugh are both self-taught investors whose work has been featured in Seeking Alpha. Their research primarily revolves around GARP stocks with a long-term investment perspective encompassing diverse sectors such as technology, energy, and healthcare.

See more here:
Machine Learning Stocks to Buy That Are Millionaire-Makers: May - InvestorPlace

Slack is training its machine learning on your chat behavior unless you opt out via email – TechRadar

Slack has been using customer data to power its machine learning functions, including search result relevance and ranking. The company has been criticized over confusing policy updates that led many to believe their data was being used to train its AI models.

According to the company's policy, those wishing to opt out must do so through their organization's Slack admin, who must email the company to put a stop to data use.

Slack has confirmed in correspondence with TechRadar Pro that the information it uses to power its ML (not its AI) is de-identified and does not access message content.

An extract from the company's privacy principles page reads:

"To develop non-generative AI/ML models for features such as emoji and channel recommendations, our systems analyze Customer Data (e.g. messages, content, and files) submitted to Slack as well as Other Information (including usage information) as defined in our Privacy Policy and in your customer agreement."

Another passage reads: "To opt out, please have your org, workspace owners or primary owner contact our Customer Experience team at feedback@slack.com."

The company does not provide a timeframe for processing such requests.


In response to uproar among the community, the company published a separate blog post to address the concerns, adding: "We do not build or train these models in such a way that they could learn, memorize, or be able to reproduce any customer data of any kind."

Slack confirmed that user data is not shared with third-party LLM providers for training purposes.

The company added in its correspondence to TechRadar Pro that its "intelligent features (not Slack AI) analyze metadata like user behavior data surrounding messages, content and files but they don't access message content."

More:
Slack is training its machine learning on your chat behavior unless you opt out via email - TechRadar

Scientists leverage machine learning to decode gene regulation in the developing human brain – EurekAlert


The study is part of the PsychENCODE Consortium, which brings together multidisciplinary teams to generate large-scale gene expression and regulatory data from human brains across several major psychiatric disorders and stages of brain development. (From left: first authors Sean Whalen and Chengyu Deng, and senior authors Katie Pollard and Nadav Ahituv.)

Credit: Gladstone Institutes / Michael Short

SAN FRANCISCO, May 24, 2024. In a scientific feat that broadens our knowledge of the genetic changes that shape brain development or lead to psychiatric disorders, a team of researchers combined high-throughput experiments and machine learning to analyze more than 100,000 sequences in human brain cells, and identified over 150 variants that likely cause disease.

The study, from scientists at Gladstone Institutes and University of California, San Francisco (UCSF), establishes a comprehensive catalog of genetic sequences involved in brain development and opens the door to new diagnostics or treatments for neurological conditions such as schizophrenia and autism spectrum disorder. Findings appear in the journal Science.

"We collected a massive amount of data from sequences in noncoding regions of DNA that were already suspected to play a big role in brain development or disease," says Senior Investigator Katie Pollard, PhD, who also serves as director of the Gladstone Institute for Data Science and Biotechnology. "We were able to functionally test more than 100,000 of them to find out whether they affect gene activity, and then pinpoint sequence changes that could alter their activity in disease."

Pollard co-led the sweeping study with Nadav Ahituv, PhD, professor in the Department of Bioengineering and Therapeutic Sciences at UCSF and director of the UCSF Institute for Human Genetics. Much of the experimental work on brain tissue was led by Tomasz Nowakowski, PhD, associate professor of neurological surgery in the UCSF Department of Medicine.

In all, the team found 164 variants associated with psychiatric disorders and 46,802 sequences with enhancer activity in developing neurons, meaning they control the function of a given gene.

These enhancers could be leveraged to treat psychiatric diseases in which one copy of a gene is not fully functional, Ahituv says: "Hundreds of diseases result from one gene not working properly, and it may be possible to take advantage of these enhancers to make them do more."

Organoids and Machine Learning Take the Spotlight

Beyond identifying enhancers and disease-linked sequences, the study holds significance in two other key areas.

First, the scientists repeated parts of their experiment using a brain organoid developed from human stem cells and found that the organoid was an effective stand-in for the real thing. Notably, most of the genetic variants detected in the human brain tissue replicated in the cerebral organoid.

"Our organoid compared very well against the human brain," Ahituv says. "As we expand our work to test more sequences for other neurodevelopmental diseases, we now know that the organoid is a good model for understanding gene regulatory activity."

Second, by feeding massive amounts of DNA sequence data and gene regulatory activity to a machine learning model, the team was able to train the computer to successfully predict the activity of a given sequence. This type of program can enable in-silico experiments that allow researchers to predict the outcomes of experiments before doing them in the lab. This strategy enables scientists to make discoveries faster and using fewer resources, especially when large quantities of biological data are involved.

Sean Whalen, PhD, a senior research scientist in the Pollard Lab at Gladstone and a co-first author of the study, says the team tested the machine learning model using sequences held out from model training to see if it could predict the results already gathered on gene expression activity.

"The model had never seen this data before and was able to make predictions with great accuracy, showing it had learned the general principles for how genes are impacted by noncoding regions of DNA in developing brain cells," Whalen says. "You can imagine how this could open up a lot of new possibilities in research, even predicting how combinations of variants might function together."

A New Chapter for Brain Discoveries

The study was completed as part of the PsychENCODE Consortium, which brings together multidisciplinary teams to generate large-scale gene expression and regulatory data from human brains across several major psychiatric disorders and stages of brain development.

Through the consortium's publication of multiple studies, it seeks to shed light on poorly understood psychiatric conditions, from autism to bipolar disorder, and ultimately jumpstart new treatment approaches.

"Our study contributes to this growing body of knowledge, showing the utility of using human cells, organoids, functional screening methods, and deep learning to investigate regulatory elements and variants involved in human brain development," says Chengyu Deng, PhD, a postdoctoral researcher at UCSF and a co-first author of the study.

About the Study

The study, "Massively Parallel Characterization of Regulatory Elements in the Developing Human Cortex," appears in the May 24, 2024 issue of Science. Authors include: Chengyu Deng, Sean Whalen, Marilyn Steyert, Ryan Ziffra, Pawel Przytycki, Fumitaka Inoue, Daniela Pereira, Davide Capauto, Scott Norton, Flora Vaccarino, PsychENCODE Consortium, Alex Pollen, Tomasz Nowakowski, Nadav Ahituv, and Katherine Pollard.

The work was funded in part by the National Institute of Mental Health, the New York Stem Cell Foundation, the National Human Genome Research Institute, and the Coordination for the Improvement of Higher Education Personnel. The data generated were part of the PsychENCODE Consortium.

About Gladstone Institutes

Gladstone Institutes is an independent, nonprofit life science research organization that uses visionary science and technology to overcome disease. Established in 1979, it is located in the epicenter of biomedical and technological innovation, in the Mission Bay neighborhood of San Francisco. Gladstone has created a research model that disrupts how science is done, funds big ideas, and attracts the brightest minds.


Read more from the original source:
Scientists leverage machine learning to decode gene regulation in the developing human brain - EurekAlert

Machine learning winnows memory-care cohort to only the most appropriate nuc-med patients – Health Imaging

An AI-aided way has emerged to confidently select dementia patients who are likely to benefit from amyloid-PET imaging while appropriately de-selecting patients for whom the costly exam would probably be unhelpful.

The selection method uses a computerized decision support (CDS) system that applies supervised machine learning to personalized patient data.

Researchers in the Netherlands designed the tool to help answer one question:

If a clinician already has detailed information on key disease indicators (neuropsychological tests, apolipoprotein E [APOE] genotype status, and brain imaging), would adding amyloid-PET guide the clinician to a more certain diagnosis?

Amyloid-PET is shorthand for positron emission tomography augmented by injection with the radiotracer florbetaben (brand name Neuraceq), which helps neuroimaging specialists visualize beta-amyloid plaques in the brain.

The researchers found their homegrown AI tool narrowed a field of 286 amyloid-PET candidates (all of whom were clients of a memory-care clinic) to the 60 individuals (21%) who stood to benefit most from undergoing the additional imaging exam.

The field included 135 controls, 108 persons with Alzheimer's disease dementia, 33 with frontotemporal lobe dementia, and 10 with vascular dementia.

Across the full cohort of 286 patients, 188 (66%) ended up receiving a diagnosis of sufficient certainty.

Publishing the results May 20 in PLOS One, lead investigator Hanneke Rhodius-Meester, MD, PhD, and colleagues at Amsterdam University Medical Center report that their computerized CDS approach bested the three alternative scenarios against which it was compared.

In their discussion, Rhodius-Meester and co-authors underscore that their computerized CDS approach advised performing an amyloid-PET scan in just 21% of patients without compromising the proportion of correctly classified cases.

More:

"Our approach was thus more efficient than the other scenarios, where we would have performed PET in all patients, in none, or according to the appropriate use criteria (AUC). When implemented in a computer tool, this approach can support clinicians in making a balanced decision in ordering additional (expensive) amyloid-PET testing using personalized patient data."

The study is available in full for free.

Read the original:
Machine learning winnows memory-care cohort to only the most appropriate nuc-med patients - Health Imaging

Machine Learning vs. Deep Learning: What’s the Difference? – Gizmodo

Artificial intelligence is everywhere these days, but the fundamentals of how this influential new technology works can be difficult to wrap your head around. Two of the most important fields in AI development are machine learning and its sub-field, deep learning. The terms are sometimes used interchangeably, which leads to a certain amount of confusion. Here's a quick explanation of what these two important disciplines are and how they're contributing to the evolution of automation.


Proponents of artificial intelligence say they hope to someday create a machine that can think for itself. The human brain is a magnificent instrument, capable of making computations that far outstrip the capacity of any currently existing machine. Software engineers involved in AI development hope to eventually make a machine that can do everything a human can do intellectually but can also surpass it. Currently, the applications of AI in business and government largely amount to predictive algorithms, the kind that suggest your next song on Spotify or try to sell you a similar product to the one you bought on Amazon last week. However, AI evangelists believe that the technology will, eventually, be able to reason and make decisions that are much more complicated. This is where ML and DL come in.

Machine learning (or ML) is a broad category of artificial intelligence that refers to the process by which software programs are taught how to make predictions or decisions. One IBM engineer, Jeff Crume, explains machine learning as a very sophisticated form of statistical analysis. According to Crume, this analysis allows machines to make predictions or decisions based on data. "The more information that is fed into the system, the more it's able to give us accurate predictions," he says.

Unlike general programming where a machine is engineered to complete a very specific task, machine learning revolves around training an algorithm to identify patterns in data by itself. As previously stated, machine learning encompasses a broad variety of activities.

Deep learning is machine learning: it is one of those previously mentioned sub-categories of machine learning that, like other forms of ML, focuses on teaching AI to "think." Unlike some other forms of machine learning, DL seeks to allow algorithms to do much of their work on their own. DL is fueled by mathematical models known as artificial neural networks (ANNs). These networks seek to emulate the processes that naturally occur within the human brain, such as decision-making and pattern identification.

One of the biggest differences between deep learning and other forms of machine learning is the level of supervision that a machine is provided. In less complicated forms of ML, the computer is likely engaged in supervised learning: a process whereby a human helps the machine recognize patterns in labeled, structured data and thereby improve its ability to carry out predictive analysis.

Machine learning relies on huge amounts of training data. Such data is often compiled by humans via data labeling (many of those humans are not paid very well). Through this process, a training dataset is built, which can then be fed into the AI algorithm and used to teach it to identify patterns. For instance, if a company were training an algorithm to recognize a specific brand of car in photos, it would feed the algorithm huge tranches of photos of that car model that had been manually labeled by human staff. A testing dataset is also created to measure the accuracy of the machine's predictive powers once it has been trained.
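The labeled-training-set-plus-held-out-test workflow described above can be sketched in a few lines. The toy one-nearest-neighbor classifier and the (feature, label) pairs here are invented purely for illustration; real pipelines use far larger datasets and dedicated ML libraries.

```python
def nearest_label(x, training):
    """Predict the label of the training example closest to x (1-nearest-neighbor)."""
    return min(training, key=lambda pair: abs(pair[0] - x))[1]

# Labeled (feature, label) pairs; the feature is a made-up 1-D measurement.
data = [(1.0, "A"), (1.2, "A"), (0.9, "A"), (3.0, "B"), (3.3, "B"), (2.8, "B")]
train, test = data[:4], data[4:]  # hold out the last two pairs as the testing dataset

# Accuracy on the held-out examples measures the trained model's predictive power.
correct = sum(nearest_label(x, train) == y for x, y in test)
print(f"accuracy on held-out data: {correct}/{len(test)}")  # 2/2
```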

When it comes to DL, meanwhile, a machine engages in a process called unsupervised learning. Unsupervised learning involves a machine using its neural network to identify patterns in what is called unstructured or "raw" data, which is data that hasn't yet been labeled or organized into a database. Companies can use automated algorithms to sift through swaths of unorganized data and thereby avoid large amounts of human labor.

ANNs are made up of what are called nodes. According to MIT, one ANN can have thousands or even millions of nodes. These nodes can be a little bit complicated, but the shorthand explanation is that they, like the neurons in the human brain, relay and process information. In a neural network, nodes are arranged in an organized form referred to as layers. Thus, deep learning networks involve multiple layers of nodes. Information moves through the network and interacts with its various environs, which contributes to the machine's decision-making process when it is subjected to a human prompt.

Another key concept in ANNs is the "weight," which one commentator compares to the synapses in a human brain. Weights, which are just numerical values, are distributed throughout an AI's neural network and help determine the ultimate outcome of that AI system's final output. Weights are informational inputs that help calibrate a neural network so that it can make decisions. MIT's deep dive on neural networks explains it thusly:

"To each of its incoming connections, a node will assign a number known as a 'weight.' When the network is active, the node receives a different data item (a different number) over each of its connections and multiplies it by the associated weight. It then adds the resulting products together, yielding a single number. If that number is below a threshold value, the node passes no data to the next layer. If the number exceeds the threshold value, the node 'fires,' which in today's neural nets generally means sending the number (the sum of the weighted inputs) along all its outgoing connections."
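The node behavior in the MIT description above translates almost line for line into code: multiply each input by its weight, sum the products, and pass the sum onward only if it clears the threshold. The input values, weights, and threshold below are arbitrary examples.

```python
def node(inputs, weights, threshold):
    """One artificial neuron: weighted sum of inputs, firing only if the
    sum exceeds the threshold; otherwise it passes nothing onward."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return total if total > threshold else None

# Arbitrary example numbers:
print(node([1.0, 0.5, 2.0], weights=[0.4, 0.3, 0.1], threshold=0.7))  # fires (~0.75)
print(node([1.0, 0.5, 2.0], weights=[0.1, 0.1, 0.1], threshold=0.7))  # None
```

Training a network amounts to adjusting those weight values until the firing pattern across many such nodes produces useful outputs.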

In short: neural networks are structured to help an algorithm come to its own conclusions about data that has been fed to it. Based on its programming, the algorithm can identify helpful connections in large tranches of data, helping humans to draw their own conclusions based on its analysis.

Machine and deep learning help train machines to carry out predictive and interpretive activities that were previously the domain of humans alone. This can have a lot of upsides, but the obvious downside is that these machines can (and, let's be honest, will) inevitably be used for nefarious, not just helpful, stuff: things like government and private surveillance systems and the continued automation of military and defense activity. But they're also, obviously, useful for consumer suggestions, coding, and, at their best, medical and health research. Like any other tool, whether artificial intelligence has a good or bad impact on the world largely depends on who is using it.

Link:
Machine Learning vs. Deep Learning: What's the Difference? - Gizmodo

Development and validation of machine learning algorithms based on electrocardiograms for cardiovascular … – Nature.com

Data sources

This study was performed in Alberta, Canada, where there is a single-payer healthcare system with universal access and 100% capture of all interactions with the healthcare system.

ECG data was linked with the following administrative health databases using a unique patient health number: (1) Discharge Abstract Database (DAD) containing data on inpatient hospitalizations; (2) National Ambulatory Care Reporting System (NACRS) database of all hospital-based outpatient clinic, and emergency department (ED) visits; and (3) Alberta Health Care Insurance Plan Registry (AHCIP), which provides demographic information.

We used standard 12-lead ECG traces (voltage-time series, sampled at 500 Hz for a duration of 10 seconds for each of the 12 leads) and ECG measurements (automatically generated by the Philips IntelliSpace ECG system's built-in algorithm). The ECG measurements included atrial rate, heart rate, RR interval, P wave duration, frontal P axis, horizontal P axis, PR interval, QRS duration, frontal QRS axis in the initial 40 ms, frontal QRS axis in the terminal 40 ms, frontal QRS axis, horizontal QRS axis in the initial 40 ms, horizontal QRS axis in the terminal 40 ms, horizontal QRS axis, frontal ST wave axis (equivalent to ST deviation), frontal T axis, horizontal ST wave axis, horizontal T axis, Q wave onset, Fridericia rate-corrected QT interval, QT interval, and Bazett's rate-corrected QT interval.

The study cohort has been described previously25. In brief, it comprises patients hospitalized at 14 sites in Alberta, Canada, between February 2007 and April 2020, and includes 2,015,808 ECGs from 3,336,091 ED visits and 1,071,576 hospitalizations of 260,065 patients. Concurrent healthcare encounters (ED visits and/or hospitalizations) that occurred for a patient within a 48-hour period of each other were considered transfers and part of the same healthcare episode. An ECG record was linked to a healthcare episode if its acquisition date fell between the admission date and discharge date of the episode. After excluding ECGs that could not be linked to any episode, ECGs of patients <18 years of age, and ECGs with poor signal quality (identified via warning flags generated by the ECG machine manufacturer's built-in quality algorithm), our analysis cohort contained 1,605,268 ECGs from 748,773 episodes in 244,077 patients (Fig. 1).
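The 48-hour episode-grouping rule can be illustrated with a minimal sketch. The column names (`patient_id`, `admit_time`) are hypothetical, and the study's actual linkage logic is more involved; this only shows the core idea of merging a patient's encounters that start within 48 hours of the previous one:

```python
import pandas as pd

def build_episodes(encounters: pd.DataFrame) -> pd.DataFrame:
    """Group a patient's encounters that begin within 48 hours of the
    previous one into a single healthcare episode."""
    enc = encounters.sort_values(["patient_id", "admit_time"]).copy()
    gap = enc.groupby("patient_id")["admit_time"].diff()
    # A new episode starts at a patient's first encounter, or when the
    # gap to the previous encounter exceeds 48 hours.
    new_episode = gap.isna() | (gap > pd.Timedelta(hours=48))
    enc["episode_id"] = new_episode.cumsum()
    return enc
```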

We developed and evaluated ECG-based models to predict the probability of a patient being diagnosed with any of 15 specific common CV conditions: AF, SVT, VT, CA, AVB, UA, NSTEMI, STEMI, PTE, HCM, AS, MVP, MS, PHTN, and HF. The conditions were identified based on the record of corresponding International Classification of Diseases, 10th revision (ICD-10) codes in the primary or in any one of 24 secondary diagnosis fields of a healthcare episode linked to a particular ECG (Supplementary Table 5). The validity of ICD coding in administrative health databases has been established previously36,37. If an ECG was performed during an ED or inpatient episode, it was considered positive for all diagnoses of interest that were recorded in the episode. Some diagnoses, such as AF, SVT, VT, STEMI, and AVB, which are typically identified through ECGs, were included in the study as positive controls to showcase the effectiveness of our models in detecting ECG-diagnosable conditions.

The goal of the prediction model was to output calibrated probabilities for each of the selected 15 conditions. These learned models could use ECGs acquired at any time point during a healthcare episode. Note that a single patient visit may involve multiple ECGs. When training the model, we used all ECGs in the training/development set (including multiple ECGs belonging to the same episode) to maximize learning. To evaluate our models, however, we used only the earliest ECG of a given episode in the test/holdout set, with the goal of producing a prediction system that could be employed at the point of care, when the patient's first ECG is acquired during an ED visit or hospitalization (see the "Evaluation" section below for more details).

We used ResNet-based DL for the information-rich voltage-time series and gradient boosting-based XGB for the ECG measurements25. To determine whether demographic features (age and sex) add incremental predictive value to the performance of models trained on ECGs only, we developed and reported the models in the following manner: (a) ECG only (DL: ECG trace); (b) ECG + age, sex (DL: ECG trace, age, sex [which is the primary model presented in this study]); and (c) XGB: ECG measurement, age, sex.

We employed a multi-label classification methodology with binary labels (i.e., presence or absence of each of the 15 diagnoses) to estimate the probability of a new patient having each of these conditions. Since the input for the models that used ECG measurements was structured tabular data, we trained gradient-boosted tree ensemble (XGB)38 models, whereas we used deep convolutional neural networks for the models with ECG voltage-time series traces. For both XGB and DL models, we used 90% of the training data to train the model and the remaining 10% as a tuning set to track the performance loss and stop the training process early, reducing the chance of overfitting39. For DL, we learned a single ResNet model for a multi-class, multi-label task10, which mapped each ECG signal to 15 values corresponding to the probability of each of the 15 diagnoses being present. For gradient boosting, on the other hand, we learned 15 distinct binary XGB models, each mapping the ECG signal to the probability of one of the individual labels. The methodological details of our XGB and DL model implementations have been described previously25.
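The one-binary-model-per-label setup can be sketched with scikit-learn's `GradientBoostingClassifier` standing in for XGBoost. The data here are synthetic stand-ins for the 22 ECG measurements and 15 diagnosis labels:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n_labels = 15  # one binary model per diagnosis

# Synthetic stand-ins for 22 ECG measurements and 15 binary labels.
X = rng.normal(size=(200, 22))
Y = rng.integers(0, 2, size=(200, n_labels))

# 15 independent binary classifiers, one per label; each outputs a
# probability for "diagnosis present".
models = [GradientBoostingClassifier(n_estimators=20, random_state=0).fit(X, Y[:, k])
          for k in range(n_labels)]
probs = np.column_stack([m.predict_proba(X)[:, 1] for m in models])
```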

Evaluation design: we used a 60/40 split on the data for training and evaluation. We divided the overall ECG dataset into random splits of 60% for model development (which used fivefold internal cross-validation for training and fine-tuning the final models) and the remaining 40% as the holdout set for final external validation. We ensured that ECGs from the same patient were not shared between development and evaluation data, or between the train/test folds of internal cross-validation. As mentioned earlier, since we expect the deployment scenario of our prediction system to be at the point of care, we evaluated our models using only the patient's first ECG in a given episode, captured during an ED visit or hospitalization. The number of ECGs, episodes, and patients used in the overall data and in the experimental splits are presented in Fig. 1 and Supplementary Table 5. In addition to the primary evaluation, we extended our testing to include all ECGs from the holdout set, to demonstrate the versatility of the DL model in handling ECGs captured at any point during an episode.
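A patient-level 60/40 split, which guarantees no patient's ECGs appear on both sides, can be sketched with scikit-learn's `GroupShuffleSplit` on synthetic IDs:

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
patient_ids = rng.integers(0, 50, size=300)  # several ECGs per patient
X = rng.normal(size=(300, 22))

# 60/40 split that never places ECGs of the same patient in both the
# development and holdout sets.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.4, random_state=0)
train_idx, holdout_idx = next(splitter.split(X, groups=patient_ids))
```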

Furthermore, we performed leave-one-hospital-out validation using two large tertiary care hospitals to assess the robustness of our model with respect to distributional differences between hospital sites. To guarantee complete separation between our training and testing sets, we omitted ECGs of patients admitted to both the training and testing hospitals during the study period, as illustrated in Supplementary Figure 1. Finally, to underscore the applicability of the DL model in screening scenarios, we present additional evaluations that consolidate the 15 disease labels into a composite prediction, thereby enhancing diagnostic yield20.

We reported the area under the receiver operating characteristic curve (AUROC, equivalent to the C-index) and the area under the precision-recall curve (AUPRC). We also generated the F1 score, specificity, recall, precision (equivalent to PPV), and accuracy after binarizing the prediction probabilities into diagnosis/non-diagnosis classes using optimal cut-points derived from the training set via Youden's index40. In addition, we used the calibration metric Brier score41 (where a smaller score indicates better calibration) to evaluate whether predicted probabilities agree with observed proportions.
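Threshold selection via Youden's index, alongside AUROC, AUPRC, and the Brier score, can be sketched on synthetic predictions:

```python
import numpy as np
from sklearn.metrics import (average_precision_score, brier_score_loss,
                             roc_auc_score, roc_curve)

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)
# Synthetic probabilities, mildly correlated with the labels.
y_prob = np.clip(y_true * 0.3 + rng.uniform(size=500) * 0.7, 0.0, 1.0)

auroc = roc_auc_score(y_true, y_prob)
auprc = average_precision_score(y_true, y_prob)
brier = brier_score_loss(y_true, y_prob)

# Youden's index: the cut-point maximizing sensitivity + specificity - 1
# (derived, in the study, from the training set and applied to holdout data).
fpr, tpr, thresholds = roc_curve(y_true, y_prob)
cutoff = thresholds[np.argmax(tpr - fpr)]
y_pred = (y_prob >= cutoff).astype(int)
```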

Sex and pacemaker subgroups: We investigated our model's performance in specific patient subgroups based on the patient's sex. We also investigated potential bias in ECGs captured in the presence of cardiac pacing (including pacemakers or implantable cardioverter-defibrillators [ICD]) or ventricular assist devices (VAD), since ECG interpretation can be difficult in these situations, by comparing model performance on holdout-set ECGs without pacemakers versus the overall holdout set (including ECGs both with and without pacemakers) (Fig. 1). The diagnosis and procedure codes used for identifying the presence of pacemakers are provided in Supplementary Table 7.

Model comparisons: For each evaluation, we report the performances from the fivefold internal cross-validation as well as the final performances on the holdout set, using the same training and testing splits for the various modeling scenarios. Performances were compared between models by sampling holdout instances with replacement in a pairwise manner to generate a total of 10,000 bootstrap replicates of pairwise differences in AUROC (e.g., each comparing the without-pacemakers subset versus the original). The difference in model performances was considered statistically significant if the 95% confidence interval of the mean pairwise differences in AUROCs did not include zero.
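The paired bootstrap comparison can be sketched as follows. This is a minimal illustration of the resampling idea, not the authors' exact code:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auroc_diff_ci(y, p_a, p_b, n_boot=10_000, seed=0):
    """95% bootstrap CI for the pairwise AUROC difference between two
    models scored on the same holdout instances."""
    rng = np.random.default_rng(seed)
    n = len(y)
    diffs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)    # resample with replacement
        if len(np.unique(y[idx])) < 2:      # AUROC needs both classes
            continue
        diffs.append(roc_auc_score(y[idx], p_a[idx]) -
                     roc_auc_score(y[idx], p_b[idx]))
    lo, hi = np.percentile(diffs, [2.5, 97.5])
    return lo, hi  # significant if the interval excludes zero
```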

Visualizations: We used feature importance values based on information gain to identify the ECG measurements that were key contributors to the diagnosis predictions of the XGB models. Further, we visualized the gradient activation maps that contributed to the DL models' predicted diagnoses using Gradient-weighted Class Activation Mapping (GradCAM)42 on the last convolutional layer.
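Ranking measurements by tree-based feature importance can be sketched with scikit-learn (whose default importance is impurity-based, standing in here for XGBoost's information-gain importance). The measurement names and data below are hypothetical; the label is driven entirely by the first feature:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
names = ["qrs_duration", "pr_interval", "heart_rate", "qt_interval"]
X = rng.normal(size=(400, 4))
y = (X[:, 0] > 0).astype(int)  # only the first feature is informative

model = GradientBoostingClassifier(n_estimators=50, random_state=0).fit(X, y)
# Rank measurements from most to least important.
ranked = sorted(zip(names, model.feature_importances_),
                key=lambda t: -t[1])
```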

Read the original post:
Development and validation of machine learning algorithms based on electrocardiograms for cardiovascular ... - Nature.com

Slack has been using data from your chats to train its machine learning models – Engadget

Slack trains machine-learning models on user messages, files and other content without explicit permission. The training is opt-out, meaning your private data will be leeched by default. Making matters worse, you'll have to ask your organization's Slack admin (human resources, IT, etc.) to email the company to ask it to stop. (You can't do it yourself.) Welcome to the dark side of the new AI training data gold rush.

Corey Quinn, an executive at DuckBill Group, spotted the policy in a blurb in Slack's Privacy Principles and posted about it on X (via PCMag). The section reads (emphasis ours), "To develop AI/ML models, our systems analyze Customer Data (e.g. messages, content, and files) submitted to Slack as well as Other Information (including usage information) as defined in our Privacy Policy and in your customer agreement."

In response to concerns over the practice, Slack published a blog post on Friday evening to clarify how its customers' data is used. According to the company, customer data is not used to train any of Slack's generative AI products (which rely on third-party LLMs) but is fed to its machine learning models for features like channel and emoji recommendations and search results. For those applications, the post says, Slack's traditional ML models use de-identified, aggregate data and do not access message content in DMs, private channels, or public channels. That data may include things like message timestamps and the number of interactions between users.

A Salesforce spokesperson reiterated this in a statement to Engadget, adding that "we do not build or train these models in such a way that they could learn, memorize, or be able to reproduce customer data."

I'm sorry Slack, you're doing fucking WHAT with user DMs, messages, files, etc? I'm positive I'm not reading this correctly. pic.twitter.com/6ORZNS2RxC

Corey Quinn (@QuinnyPig) May 16, 2024

The opt-out process requires you to do all the work to protect your data. According to the privacy notice, "To opt out, please have your Org or Workspace Owners or Primary Owner contact our Customer Experience team at feedback@slack.com with your Workspace/Org URL and the subject line 'Slack Global model opt-out request.' We will process your request and respond once the opt out has been completed."

The company replied to Quinn's message on X: "To clarify, Slack has platform-level machine-learning models for things like channel and emoji recommendations and search results. And yes, customers can exclude their data from helping train those (non-generative) ML models."

How long ago the Salesforce-owned company snuck the tidbit into its terms is unclear. It's misleading, at best, to say customers can opt out when "customers" doesn't include employees working within an organization. They have to ask whoever handles Slack access at their business to do that, and I hope they will oblige.

Inconsistencies in Slack's privacy policies add to the confusion. One section states, "When developing AI/ML models or otherwise analyzing Customer Data, Slack can't access the underlying content. We have various technical measures preventing this from occurring." However, the machine-learning model training policy seemingly contradicts this statement, leaving plenty of room for confusion.

In addition, Slack's webpage marketing its premium generative AI tools reads, "Work without worry. Your data is your data. We don't use it to train Slack AI. Everything runs on Slack's secure infrastructure, meeting the same compliance standards as Slack itself."

In this case, the company is speaking of its premium generative AI tools, separate from the machine learning models it's training without explicit permission. However, as PCMag notes, implying that all of your data is safe from AI training is, at best, a highly misleading statement when the company apparently gets to pick and choose which AI models that statement covers.

Update, May 18 2024, 3:24 PM ET: This story has been updated to include new information from Slack, which published a blog post explaining its practices in response to the community's concerns.

Update, May 19 2024, 12:41 PM ET: This story and headline have been updated to reflect additional context provided by Slack about how it uses customer data.

Here is the original post:
Slack has been using data from your chats to train its machine learning models - Engadget

Transforming manufacturing with AI and machine learning: Real-world applications and data management integration – The Manufacturer

The manufacturing industry is at the cusp of a revolution driven by Artificial Intelligence (AI) and Machine Learning (ML). These technologies are poised to transform operations, enhance efficiency, and reduce costs.

Introducing AI and ML into manufacturing organizations involves practical applications that highlight their potential. Additionally, understanding the critical role of data management is essential for ensuring the success of these technologies.

AI and ML are no longer futuristic concepts; they are essential tools for modern manufacturing. The imperative for adopting these technologies stems from the need to remain competitive in a rapidly evolving market. Manufacturers face increasing pressure to improve productivity, reduce waste, and enhance quality. AI and ML offer solutions by providing insights and automating processes that were previously labour-intensive and error-prone.

In the manufacturing industry, Machine Learning (ML), a critical subset of Artificial Intelligence (AI), involves the use of sophisticated algorithms to learn from and make predictions based on data. These technologies can analyse vast amounts of production data to identify patterns, optimize workflows, and predict equipment failures. For example, ML algorithms can continuously monitor machinery performance, detecting subtle anomalies that may indicate future breakdowns, thus enabling predictive maintenance. Additionally, ML can be used to refine production schedules in real-time based on demand forecasts and resource availability, ensuring maximum efficiency and minimal downtime. By integrating AI and ML, manufacturers can enhance quality control, streamline supply chains, and drive overall operational excellence.

Managing industry standards is a complex task, but AI and ML can simplify it by automating the classification and tagging of data. These technologies can transform standards into digital formats and continuously learn from new data to provide up-to-date compliance guidelines. For instance, AI algorithms can parse through large datasets, identify relevant industry standards, and ensure that manufacturing processes adhere to the latest regulations, reducing compliance costs and enhancing operational efficiency.

AI and ML can enrich business partner information, offering deep profiling that can be leveraged across the value chain. By analysing data from various sources, AI can provide insights into a partner's financial stability, market performance, and strategic alignment. This deep profiling enables manufacturers to make informed decisions about partnerships, negotiate better terms, and predict potential risks. Integrating these insights helps streamline operations and optimize inventory management, leading to cost savings and improved supply chain efficiency.

Predictive maintenance is one of the most impactful applications of AI and ML in manufacturing. These technologies analyse data from sensors and machinery to predict equipment failures before they occur. For example, ML algorithms can monitor the vibration and temperature of a machine to forecast potential issues. By scheduling maintenance activities based on these predictions, manufacturers can prevent unexpected downtime, extend equipment lifespan, and reduce maintenance costs. This proactive approach ensures continuous production and enhances safety.
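As a minimal illustration of the sensor-monitoring idea, a rolling z-score can flag readings that deviate sharply from a machine's recent baseline. This is a generic sketch, not any particular vendor's method:

```python
import numpy as np

def anomaly_flags(readings: np.ndarray, window: int = 50,
                  z_thresh: float = 3.0) -> np.ndarray:
    """Flag readings more than `z_thresh` standard deviations away from
    a rolling baseline of the previous `window` samples -- a minimal
    stand-in for vibration/temperature monitoring."""
    flags = np.zeros(len(readings), dtype=bool)
    for i in range(window, len(readings)):
        base = readings[i - window:i]
        mu, sigma = base.mean(), base.std()
        if sigma > 0 and abs(readings[i] - mu) > z_thresh * sigma:
            flags[i] = True
    return flags
```

In practice such flags would feed a maintenance scheduler rather than trigger alerts directly, smoothing out one-off spikes.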

AI and ML can optimize production scheduling by analysing production data, demand forecasts, and resource availability to create efficient schedules. These systems can dynamically adjust production plans in real-time based on changing conditions, such as delays in raw material supply or shifts in demand. For instance, AI can identify bottlenecks in the production process and suggest adjustments to mitigate delays, ensuring that production targets are met consistently. This flexibility maximizes resource utilization and minimizes idle time.

For AI and ML to function effectively, accurate and consistent data is essential. This is where Master Data Management (MDM) plays a critical role. MDM involves creating a single, authoritative source of truth for critical business data, ensuring that all systems and processes across the organization work with the same accurate information. MDM enhances AI and ML efficiency by providing clean, consistent, and reliable data, which is vital for generating meaningful insights and predictions. For example, in predictive maintenance, the reliability of sensor data is crucial for accurate failure predictions.

The integration of AI and ML into manufacturing processes offers significant benefits, including simplified management of industry standards, enriched business partner profiling, predictive maintenance, and optimized production scheduling. These applications demonstrate how AI and ML can save time and money while enhancing operational efficiency. However, the success of these technologies hinges on the quality of data, underscoring the importance of robust data management practices. By ensuring data accuracy and consistency, MDM enables AI and ML systems to perform at their best, delivering reliable insights and driving informed decision-making. As manufacturers continue to embrace AI and ML, robust MDM practices will be essential to unlocking the full potential of these technologies and achieving sustained operational excellence.

His passion for addressing industry challenges led him to solution provision, working with organisations like Autodesk and Microsoft.

Now, with Stibo Systems, he leverages master data management to help manufacturers thrive in volatile markets.

Continue reading here:
Transforming manufacturing with AI and machine learning: Real-world applications and data management integration - The Manufacturer