Machine learning helps to validate the species, provenance and cut of meat samples – Beef Central

THE analytical power of machine learning models is helping to validate the species, provenance and cut of meat.

Despite the inclusion of analytical testing within meat production systems around the world, meat fraud still happens. The horsemeat infiltration of the European beef supply chain in 2013 is an example.

Australia's most infamous case of substitution took place in 1981, when kangaroo meat was substituted for beef in an export consignment to the US. The events profoundly damaged the industry's reputation at the time, and Australia narrowly avoided losing its beef export license to the US.

The main culprits were charged with forgery, exporting a prohibited product, false trade description, conspiracy, perjury, theft by deception and selling pet meat for human consumption. Several men received up to four and a half years' jail. The events were captured in a popular satirical song of the era, "Who put the roo in the stew?"

A recent 20-year analysis, which examined more than 400 incidences of beef fraud around the world, found that counterfeiting meat species or substituting one cut for another (within the same species) were the most common types of meat fraud.

Prof Louwrens Hoffman

This kind of counterfeiting, with its food safety and consumer-swindling implications, is being tackled by Queensland Alliance for Agriculture and Food Innovation (QAAFI) professors Louwrens Hoffman and Daniel Cozzolino.

"With population growth increasing demand for food, there is considerable economic gain in adulterating food: swapping premium products for inferior products or species," Professor Hoffman said.

"And high-value products, such as meat, are especially susceptible to food fraud."

As QAAFI's chair of meat science, Professor Hoffman is concerned by limitations in the testing technology currently used within meat production systems to detect deliberate fraud or accidental substitution.

Consequently, he has been examining newer technology for its potential to overcome current limitations and says a step-change in testing capability is possible.

"The cut, the species and even the provenance of meat, down to the region of origin and feedlot, can now be rapidly determined using imaging technology that is easy and non-destructive to use," he said.

Professor Hoffman said the needs of industry can be best met by using light-based (spectroscopic) technology to provide data about a meat sample. The analysis of this data is done with advanced machine learning algorithms that QAAFI is helping to develop.

He said that light is especially useful for analytical purposes because of a quirk of physics. Atoms (or more specifically, electrons in atoms) can absorb and emit light. As a result, every atom, molecule and compound in the universe produces a unique spectrum of reflected light. This acts as a signature that can be used to forensically identify any compound.

Prof Hoffman is recommending the use of commercially available devices that emit light in the near-infrared (NIR) range.

Handheld NIR devices project light onto a meat sample and collect the reflected light, called the signature.

The caveat with this approach is that the spectral signatures that can identify meat cuts, species and provenance have to be decoded beforehand. This is where additional R&D is needed.

Meat is a biochemically complex material. By necessity, the imaging-based identifiers of meat traits are equally complex and surpass the ability of human senses to detect them.

In the past, the problem would have hit an impasse because of this complexity. Instead, Prof Hoffman turned to machine learning algorithms to solve what amounts to an enormous statistical jigsaw puzzle.

"To develop the analytical software, we matched the spectral signatures of meat products of known species, cut, provenance or other variable of interest," he said. "That data is used to train machine learning algorithms to detect what distinguishes the different samples from a complex set of spectral clues."
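The article does not include QAAFI's software, but the training step Prof Hoffman describes (matching labelled spectra of known samples to a species, cut or provenance label) can be illustrated with a minimal, hypothetical sketch. The file name, column layout and the choice of a random-forest classifier are assumptions for illustration only, not the actual pipeline.

```python
# Minimal sketch (not QAAFI's pipeline): train a classifier on labelled
# near-infrared (NIR) reflectance spectra to predict meat species.
# Assumes a hypothetical CSV where each row is one scan: reflectance values
# at successive wavelengths plus a "species" label such as "beef" or "kangaroo".
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

df = pd.read_csv("nir_scans.csv")                # hypothetical file of labelled scans
X = df.drop(columns=["species"]).values          # one reflectance spectrum per sample
y = df["species"].values                         # known label for each sample

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Standardise each wavelength band, then fit the classifier.
model = make_pipeline(StandardScaler(),
                      RandomForestClassifier(n_estimators=300, random_state=0))
model.fit(X_train, y_train)

# Held-out accuracy per class is the kind of figure quoted later in the article.
print(classification_report(y_test, model.predict(X_test)))
```

In NIR chemometrics, methods such as partial least squares discriminant analysis are also widely used for this step; the random forest above is simply one convenient stand-in.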

This training process for machine learning could be expanded in the future as industry needs evolve. This could come to include, for example, insects as they start entering the food and feed supply chains, something that Professors Hoffman and Cozzolino also work on from a food-safety perspective.

Prof Hoffman has also been involved in field testing the handheld NIR technology, including in South Africa where it proved highly effective.

"We could rapidly differentiate between South African game species, the muscle type and whether the meat was fresh or frozen," he said.

Accuracies for African species differentiation ranged from 89.8 to 93.2 percent and included ostrich, zebra and springbok game meat. Given that South Africa currently has no game meat quality standards or standardised meat cuts, this kind of technological advance opens up new opportunities to cheaply and effectively provide consumer protection.

Prof Hoffman pointed out that the technology is only as good as the back-end analytical software. "That is where industry should collaboratively focus its attention in terms of R&D investment to effectively stamp out meat fraud," he said.

"Once the machine learning models are operational, the system is fast, cheap, reliable and accurate," Prof Hoffman said.


Source: QAAFI

More:
Machine learning helps to validate the species, provenance and cut of meat samples - Beef Central

How Huupe’s Innovative Smart Basketball Hoop uses Machine Learning and Advanced Analytics to Revolutionize How Basketball Enthusiasts Play the Game -…

LONDON, UK / ACCESSWIRE / March 31, 2022 / As the National Basketball Association (NBA) increases its domestic and international reach, the league and the game of basketball, in general, are maturing. Specifically, there is a growing analytics revolution happening across the NBA, college basketball, & international basketball. This revolution is not just contained to the top professional leagues as semi-pro, amateur, & youth basketball coaches are utilizing analytics as well. Over the last decade, general managers and coaches have increasingly relied on new hyper-specific statistics and advanced analytics to make smarter decisions on and off the court.

As basketball professionals become more intelligent and new technology enables coaches to capture hyper-specific statistics that were previously impossible to log, players can improve their game by training smarter, not harder. Player Efficiency Rating (PER), True Shooting Percentage (TS%), Usage Percentage (USG%), Offensive & Defensive Win Shares (OWS & DWS), and Offensive & Defensive Box Score Plus/Minus (OBPM & DBPM) are just a few of the new statistical categories changing how we understand the sport. By making use of these new statistics and inventive analytics programs, coaches can help players maximize the outcomes of their training hours and reach their true potential.

Huupe is the newest piece of basketball technology driving the sport's analytics revolution forward. With a smart screen replacing the traditional backboard, advanced (and fun) training mechanisms, and a sleek yet weatherproof design, Huupe is the world's first smart basketball hoop. Offering training videos, contests, and video highlights directly from the smart hoop's backboard, Huupe presents basketball enthusiasts with a powerful new basketball experience. Further, by utilizing machine learning and computer vision, Huupe's smart basketball hoop can track players' statistics during play and analyze the data captured.

Co-founders and lifelong friends Paul Anton and Lyth Saeed spent one year prototyping and three years perfecting Huupe's hardware and software with their CTO Dan Hayes, in order to make sure that the smart hoop is a truly revolutionary invention in IoT and smart consumer product technology.


After building and breaking more basketball hoops and smart screens than one can count, the Huupe team has created a game-changing smart basketball hoop that utilizes computer vision and machine learning to capture and analyze important statistics needed to help a player's performance. While the Huupe team is extremely proud of the powerful technology behind the smart hoop, they are also proud of their product's friendly & exciting gamified UX, the contests & leaderboards, as well as other various internet-enabled features. Of course, as many people install basketball hoops outside, the Huupe is extremely durable and weatherproof without compromising the aesthetic appeal.

Whether Huupe owners are shooting around, playing games with friends, competing in challenges, or practicing with one of the hundreds of NBA-level training videos, Huupe's smart basketball hoop stores all of the performance statistics. This allows players to track performance data with ease and intelligently analyze this data with the touch of a finger. Huupe's innovative computer vision captures traditional statistics as well as advanced statistics such as swishes, makes, misses, trajectory, shot position, vertical jump, wingspan, and much more.

With their innovative smart hoop, Anton and Saeed are the perfect individuals to help push the analytics revolution within the NBA and for the entire sport of basketball forward. Anton and Saeed are legitimate lifelong fans of the sport with basketball in their blood. Having bonded throughout their childhood over basketball, Anton and Saeed are actually passionate about the game; they are not simply looking to attach themselves to an innovative piece of technology nor are they simply looking for their next entrepreneurial endeavor. The two co-founders are on a mission to help players improve their skills, increase opportunities for people to access world-class training, and help basketball enthusiasts connect with like-minded individuals.

Further, Anton and Saeed are uniquely equipped to achieve these goals. Anton's previous venture, Real Shot, used AR/VR technology, machine learning, and computer vision to create an innovative basketball experience, earning a spot in Deutsche Telekom's hub:raum accelerator; Saeed has an impressive track record helping marketplace and AI technology startups operate and grow their business.

We are excited to see how these passionate and talented co-founders continue to capture the hearts and minds of basketball players and fans around the world with their game-changing smart hoop.

Media Contact:

Name: Saqib Malik
Company: Prestige Perfections
Location: London, UK
Number: +447935552527

SOURCE: Prestige Perfections

View source version on accesswire.com: https://www.accesswire.com/695440/How-Huupes-Innovative-Smart-Basketball-Hoop-uses-Machine-Learning-and-Advanced-Analytics-to-Revolutionize-How-Basketball-Enthusiasts-Play-the-Game

Read more from the original source:
How Huupe's Innovative Smart Basketball Hoop uses Machine Learning and Advanced Analytics to Revolutionize How Basketball Enthusiasts Play the Game -...

How can reinforcement learning be applied to transportation? – Analytics India Magazine

Reinforcement Learning (RL), a field of machine learning, is based on the principle of trial and error: in simpler terms, it learns from its own mistakes and corrects them. The aim is to build a strategy that guides an intelligent agent to take a sequence of actions leading to some ultimate goal. Deep Reinforcement Learning (DRL) is used to make real-time decisions and strategies not only in Autonomous Driving (AD) but also in sales, management and many other fields. In this article, we will mainly discuss how RL can be used in transportation to deliver more intelligent solutions.

Let's first understand how reinforcement learning works.

Reinforcement Learning (RL) is a decision-making and strategy-building technique that uses trial and error to perform these operations in real time. It differs from the other two machine learning paradigms, supervised and unsupervised learning.

The basic architecture of reinforcement learning consists of five key terms: the agent, the environment, the state, the action and the reward.


Sometimes decisions are too complex for RL alone. So, a new technique, known as DRL, was developed with the help of neural networks and RL; it can handle complex decision making and strategy building.

Deep Reinforcement Learning (DRL) is a machine learning technique that combines reinforcement learning with neural networks, carrying what is learned from previous experience forward to new tasks. As it is derived from reinforcement learning, the basic principle is the same, but neural networks provide the representational power to solve complex problems.

By combining the capacity to tackle large, complex problems with a generic and flexible framework for sequential decision-making, deep reinforcement learning has become increasingly popular in autonomous decision-making and operations control. Let's see how DRL can be implemented to control a taxi fleet.

In the coming decades, ride-sharing companies such as Uber and Ola may aggressively begin to use shared fleets of electric and self-driving cars that could be drafted to pick up passengers and drop them off at their destinations. As appealing as this sounds, it would be complex to implement, and one major operational challenge such systems might encounter is the imbalance of supply and demand. Users' travel patterns are asymmetric both spatially and temporally, causing vehicles to cluster in certain regions at certain times of day, so customer demand may not be satisfied in time.

So, to be optimal, the model has to account for parameters such as customer demand and travel times. The objective of this dispatching system is to provide the optimal vehicle dispatch strategy at the lowest possible operational cost; on the passenger side, there are costs associated with the waiting time experienced by all passengers. To solve this problem, the actor-critic algorithm is implemented.

Actor-critic methods combine the advantages of actor-only (policy function only) and critic-only (value function only) methods. Policy gradient methods are reinforcement learning techniques that optimize a parametrized policy with respect to the expected return, the long-term cumulative reward, by following its gradient. They do not suffer from many of the problems that afflict value-based methods, such as the complexity arising from continuous states and actions.

The general idea of the policy gradient is to generate samples of trajectories (sequences of state, action and reward tuples) from the environment under the current policy, collect the rewards associated with the different trajectories, and then update the parametrized policy so that high-reward paths become more likely than low-reward paths.

Policy gradient methods have strong convergence properties, naturally inherited from gradient descent methods. However, the sampled rewards usually have very large variances, which makes the vanilla policy gradient method inefficient to learn; the actor-critic architecture mitigates this by using the critic's value estimates to reduce the variance of the policy updates. A sketch of the algorithm's plan of action in the model is given below.
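As a concrete illustration of the actor-critic idea (not the taxi-dispatch model from the case study), the following sketch trains a tiny actor and critic on the CartPole toy environment. It assumes the gymnasium and PyTorch packages; the network sizes, learning rate and episode count are arbitrary choices for illustration.

```python
# Minimal actor-critic sketch on a toy environment, not the taxi dispatch model.
import gymnasium as gym
import torch
import torch.nn as nn
from torch.distributions import Categorical

env = gym.make("CartPole-v1")
obs_dim = env.observation_space.shape[0]
n_actions = env.action_space.n

actor = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, n_actions))
critic = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, 1))
optimizer = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=3e-4)
gamma = 0.99

for episode in range(200):
    obs, _ = env.reset()
    log_probs, values, rewards = [], [], []
    done = False
    while not done:
        state = torch.as_tensor(obs, dtype=torch.float32)
        dist = Categorical(logits=actor(state))          # the policy (actor)
        action = dist.sample()
        obs, reward, terminated, truncated, _ = env.step(action.item())
        done = terminated or truncated
        log_probs.append(dist.log_prob(action))
        values.append(critic(state).squeeze())           # state-value estimate (critic)
        rewards.append(reward)

    # Discounted returns, computed backwards over the trajectory.
    returns, G = [], 0.0
    for r in reversed(rewards):
        G = r + gamma * G
        returns.insert(0, G)
    returns = torch.tensor(returns, dtype=torch.float32)
    values = torch.stack(values)
    log_probs = torch.stack(log_probs)

    advantage = returns - values.detach()                # critic reduces gradient variance
    actor_loss = -(log_probs * advantage).mean()         # policy-gradient step
    critic_loss = (returns - values).pow(2).mean()       # value-function regression
    optimizer.zero_grad()
    (actor_loss + critic_loss).backward()
    optimizer.step()
```

Subtracting the critic's value estimate from the sampled return yields the advantage, which is exactly what reduces the variance of the policy-gradient update relative to the vanilla method.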

Let's see the background process of DRL used for dispatching taxis with the help of a case study.

The objective of this case study is to learn the process by which DRL dispatches taxis for a particular region. A fully connected neural network with a total of 8 hidden layers is used, 4 each for the actor function and the critic function. There are 128 units in each hidden layer, with a learning rate of 5×10^-5 and a trajectory (samples of sequences of state tuples) batch size of 1024 for each iteration. A sketch of this network configuration is given below.
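The study's code is not reproduced in the article; the sketch below simply instantiates actor and critic networks with the quoted hyperparameters (4 hidden layers of 128 units each, learning rate 5×10^-5, 1024 trajectories per batch). The state and action dimensions are placeholders, since the study's encodings are not given.

```python
# Sketch of actor/critic networks matching the quoted hyperparameters.
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=128, depth=4):
    """Fully connected network: `depth` hidden layers of `hidden` units each."""
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

state_dim, n_zones = 32, 10                      # placeholder dimensions
actor = mlp(state_dim, n_zones)                  # dispatch policy over zones
critic = mlp(state_dim, 1)                       # state-value function
optimizer = torch.optim.Adam(
    list(actor.parameters()) + list(critic.parameters()), lr=5e-5)
batch_trajectories = 1024                        # trajectory batch size per iteration
```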

Assume that the travel demand is deterministic for this study, i.e. from day to day there are a fixed number of passengers who need to travel between each pair of zones at a certain time of day. The optimal dispatching strategy is solved using a formulation that combines the waiting-time costs for the passengers with the costs of repositioning empty vehicles; this formulation is known as the integer programming (IP) model. A simplified sketch of such a dispatch IP is given below.
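The article does not reproduce the exact IP formulation, so the following is a deliberately simplified, hypothetical version built with the PuLP library: integer variables choose how many idle vehicles to reposition between zones, and the objective trades repositioning cost against a penalty on unmet (waiting) demand. All zone data and cost values are invented.

```python
# Hypothetical, simplified dispatch IP (not the study's actual formulation).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpInteger

zones = ["A", "B", "C"]
supply = {"A": 8, "B": 2, "C": 1}                # idle vehicles per zone
demand = {"A": 2, "B": 5, "C": 4}                # waiting requests per zone
move_cost = {(i, j): 1.0 for i in zones for j in zones if i != j}
wait_cost = 5.0                                  # penalty per unserved request

x = {(i, j): LpVariable(f"move_{i}_{j}", lowBound=0, cat=LpInteger)
     for (i, j) in move_cost}                    # vehicles repositioned i -> j
unmet = {j: LpVariable(f"unmet_{j}", lowBound=0) for j in zones}

prob = LpProblem("dispatch", LpMinimize)
prob += lpSum(move_cost[k] * x[k] for k in x) + wait_cost * lpSum(unmet.values())

for j in zones:
    arrivals = lpSum(x[(i, j)] for i in zones if i != j)
    departures = lpSum(x[(j, k)] for k in zones if k != j)
    # Vehicles in zone j after repositioning must cover its demand,
    # except for whatever is explicitly counted as unmet.
    prob += supply[j] + arrivals - departures + unmet[j] >= demand[j]
    prob += departures <= supply[j]              # cannot send more than are idle

prob.solve()
for k, var in x.items():
    if var.value():
        print(f"reposition {int(var.value())} vehicles {k[0]} -> {k[1]}")
```

A solution of this kind serves as the theoretical baseline against which the DRL dispatcher's converged performance is compared.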

By tracking the convergence of the RL agent, the study found that the converged value was very close to the optimal value calculated by the theoretical method. Now let's allow some stochasticity in the travel demand realization to check the sturdiness of the model: the travel demand distribution was divided into two parts, weekdays and weekends, and on each day one travel demand profile was picked randomly for the network.

The DRL learner has no knowledge of this setup and starts learning without any prior information about the network; the same process as in the deterministic scenario was then applied. Because the travel demand is stochastic and unknown, the actor-critic method may not reach the theoretical optimum, but it can still provide satisfying results.

In this case, the proposed model-free reinforcement learning method (i.e., actor-critic) is an efficient alternative for obtaining reliable and close-to-optimal solutions.

This article has explained a deep reinforcement learning approach to the problem of dispatching autonomous vehicles for taxi services: in particular, a policy-value framework in which neural networks serve as approximations for both the policy and value functions.

Excerpt from:
How can reinforcement learning be applied to transportation? - Analytics India Magazine

Link Machine Learning (LML), High Volatility and Rising Thursday: Is it Time to Cash Out? – InvestorsObserver

Link Machine Learning (LML) has been relatively more volatile than the crypto market according to a recent analysis of the market. So far Thursday, the crypto has advanced 42.02% to $0.006831457729.

The Volatility Gauge tracks volatility over time, which means that one day won't define its volatility rank - a trend will. LML's high volatility reading is coupled with a low reading on the Risk/Reward Gauge, meaning that the token has relatively wide price swings and is well protected from price manipulation.

Link Machine Learning's price is trading above resistance, with support around $0.00400022790391529 and resistance around $0.0064162672552892. This leaves Link Machine Learning out of range and potentially in a volatile position if the rally burns out.



Link:
Link Machine Learning (LML), High Volatility and Rising Thursday: Is it Time to Cash Out? - InvestorsObserver

A Qualitative Thematic Analysis of Addressing the Why: An Artificial Intelligence (AI) in Healthcare Symposium – Cureus

According to a report by Johns Hopkins, medical errors are now the third leading cause of death behind cardiovascular disease and cancer [1]. The study details inefficient processes and distracted and inconsistent care as causative factors, not bad doctors. Medicine is a profoundly personal profession, especially in primary care. Providers take care of patients from the womb to the tomb and everything in between. A patient has an expectation for their primary care provider to be empathetic and knowledgeable in their craft. Instead, individuals often encounter burned-out providers, overburdened by inefficient documentation within electronic medical records, inefficient processes, and inadequate clinic staffing [2]. The pandemic has highlighted the importance of adequate staff, the mental and physical health of staff, and an efficient process for a health system to meet the growing demands of the public. Often, the realities of a complex system that cannot function at the highest level loom large over the reality of the public in desperate need of the proper care at the right time.

The 2021 update of the Commonwealth Fund, which looks into health outcomes among high-income countries, does not cast a favorable view of the United States' healthcare system [3]. The report looked at 71 measures across five areas: access to care, care process, administrative efficiency, equity, and health outcomes [3]. The United States was last overall. The United States came in second on measuring care processes; however, it ranked last on the remaining four measures [3]. This rank is in stark contrast to the number of dollars spent on healthcare in the United States. The United States far outspends the other countries regarding the percentage of gross domestic product on related healthcare dollars [3]. Artificial intelligence (AI) is gaining attention as a disruptor of the status quo in medicine. Great promise and potential lie within AI as a growth agency to improve process efficiency and care within medicine.

AI is considered by many to be the most recent industrial revolution, detailed in an article in Forbes entitled "The 4th Industrial Revolution Is Here, Are You Ready?" [4]. AI has revolutionized the way we communicate and interact with the supply chain and has increased efficiency in multiple industries, ultimately increasing profit margins. According to a white paper from Accenture, AI can increase healthcare profits by 55% by the year 2035 [5]. Integration of AI into primary care is part of this growth. Currently, AI is being used and tested in specialties such as radiology, cardiology, and oncology [6,7]. Specialties that are dependent on imaging have seen the rapid acceptance of AI pilot programs due to the ability of AI to synthesize large data sets, evaluate, and accurately diagnose. Some of these AI programs are showing accuracy in diagnosis to the same degree as or better than human physicians. The appropriate application of this technology continues to be researched [6,7].

Healthcare organizations have begun to adopt AI systems and have successfully implemented aspects of this technology into their daily process. However, AI has yet to gain full acceptance throughout healthcare. AI has the potential to garner mainstream attention; however, it must first gain the trust of patients, providers, and staff while showing viability as a business model within clinics and health systems.

This project looks at the themes garnered from a thematic analysis of an online symposium on AI in medicine. The objectives of the symposium include 1) current trends in AI in medicine; 2) short-term and long-term potential of AI in medicine to address issues such as patient access, patient engagement, and patient safety; and 3) understanding the current barriers to the implementation and utilization of AI in medicine.

In June 2021, five expert speakers convened a web-based symposium to discuss some of the more controversial topics around AI. The industry experts include a data scientist from a university with research around data mining, a senior program engineer from a large electronic medical record company, an executive from a prominent AI healthcare platform, a chief medical information officer with a large local health system, and a fellow from a medical informatics program. The first five 20-minute modules were uploaded to a web-based platform for viewing in advance of the 60-minute moderated roundtable (Zoom, Zoom Video Communications, San Jose, CA, USA), modeling a "flipped classroom" curricular design. The interactive 60-minute moderated roundtable provided an opportunity for participants to engage directly with the presenters, ask questions, and critically analyze the topics in a meaningful way. The panel discussion was transcribed with three authors reviewing the themes (identified here as EK, HP, and JB). An inductive thematic analysis of a semi-structured moderated panel on AI in medicine was performed utilizing an iterative process. The transcription was reviewed multiple times, with codes from each reviewer identified. Common themes from these codes were analyzed and condensed for dissemination and included data privacy and access, process improvement, physician experience, value in data, and bias in healthcare and AI.

For the evaluation of themes, a topical literature search was conducted utilizing Google Scholar (Google, Mountain View, CA, USA) with the following queries: AI and data privacy and data access, AI and process improvement, AI and physician's experience, AI and bias in healthcare, and AI and value in data. Articles with a published date of January 2020 to the present were considered (Table 1).

The following themes emerged after reviewing the transcribed data: data privacy and access (N=3, number of times identified); process improvement (N=2); physician experience (N=1); value in data (N=2); and bias in healthcare and AI (N=3) (Table 2).

Data from the symposium were synthesized utilizing an iterative process. The transcription was analyzed, and the section below reflects the synthesis of themes followed by quotes from presenters supporting the themes. The discussion section applies medical literature to each theme for further evaluation.

Large amounts of data exist within electronic medical records (EMRs), smartphones, and mobile devices; how do we utilize technology to synthesize data for process improvement and quality measures while maintaining patient privacy? In the United States, organizations own their data. How do we share data among health systems in a meaningful way while maintaining privacy? Is there a way to compensate patients for the use of their data? Would this incentivize patients to engage in programs that seek data for purposes of research? The appropriate analysis and dissemination of data between organizations in healthcare provide an opportunity for insight to improve care delivery. Nations with a centralized healthcare system can draw on large amounts of data without privacy concerns when sharing between organizations. The United States has a siloed system where each organization owns the data of the patient population. Interoperability is an essential piece in improving data sharing in the US healthcare system. Steps must be taken to ensure data collection and sharing are done ethically.

One solution discussed by presenters is training in basic algorithms, data governance, and interoperability. Patients and healthcare professionals generally lack an understanding of AI and data management. Tech companies like Google and Amazon have a competitive advantage over health systems concerning data governance and algorithm management. Understanding how data scientists and engineers create and evaluate algorithms is essential for healthcare professionals to engage in data management. Healthcare professionals must engage in conversations around data management if health systems want to be competitive in the health tech market. Training in data management for healthcare professionals, administrators, and patients is an important step to help create and maintain privacy standards and improve data sharing between organizations.

"How are we going to survive and make this transition into sort of a data-centric model, versus having all these silos where, you know, we're very protective of our data, but how do we engage with other organizations, how do we leverage the power of, um, data sharing in a way that maintains privacy?" EK

"AI now relies a lot on EHR data, but we are thinking about smartphones, wearable devices, huge amounts of data that patients are collecting, and patients are not sure they want to share that data. Do they want more transparency on how the data is being used?" AG

"I mean, when we go to a clinician and tell them, hey, here's a bunch of data, I mean, they're going to be interested for about six seconds, because they know that there's power in that, but it's sort of like taking a drowning man in the middle of the Atlantic and handing him a glass of water and saying, "here, this is going to be really good for you." RC

"If we do the work to be able to get information aggregated and accessible, will it actually be useful in a clinical setting? Will it actually improve outcomes?" RC

Clinicians are drowning in data. How does an extensive data set show value in healthcare if a provider does not have the time or ability to analyze the data before them? Auto-summarization can pick out important pieces of a patient's entire chart, including structured and unstructured data, and synthesize them into an easily reviewable document for the provider. AI and machine learning (ML) can improve efficiency in back-office processes in billing, scheduling, and provider documentation. The improved efficiency of the process can decrease scheduling and billing errors, leading to improved profit and patient experience. In addition, the improved efficiency in provider workflow can decrease documentation time, allowing the clinician more time with a patient.

"I think the low-hanging fruit, the problems that are solvable, the ones that are easy to measure, you know, are often financial, did I collect more revenue, can I get more patients seen in a day, can I get better utilization. Those are pretty discrete, right, and we can much more quickly and easily measure that" CF

"I think we have a lot of people in healthcare who just don't want to deal with it and they put their head in the sand and say, 'no, I'm not going to do A.I. because I, no one can explain it,' and I think that that's a mistake, because we're missing an incredible opportunity. That would be like, you know, a hundred plus years ago and somebody says, 'yeah, you know what? I understand what you're talking about with germ theory of disease, but I'm not going to participate in that because I don't believe it. You can't show me a germ, I'm not going to believe it until you can.'" CN

Workflow-augmentation algorithms are being developed, utilizing natural language processing with ambient voice technology that decreases provider documentation time allowing for more time in the room with patients. ML and AI are being utilized to improve a provider's experience with chart documentation and data entry, allowing a provider to spend more time in critical thinking and providing care to patients during office visits and at the hospital's bedside. Increased provider documentation and data entry requirements are linked to an increased percentage of medical error and burnout. With the implementation of algorithms supporting workflow, a health system can improve quality measures of care and physician experience, leading to improved overall patient care.

Knowledge augmentation is another area being explored through AI and ML methods. Companies are developing algorithms to improve quality and decrease error with the ultimate goal to improve outcomes. Chatbots have been developed and are being utilized as patient triage. Algorithms utilize large data sets and deep AI to learn how to read radiographs and allow providers to use diagnosis assist in EMR systems. AI is a tool, and knowledge augmentation is an area with great promise for deep AI algorithms to decrease variability in care with the ultimate goal of improving outcomes. The lack of explanation in how a deep AI algorithm produces a result is an ongoing concern for its use in medicine. Knowledge augmentation continues to be an avenue of research.

". There's this other application, of A.I., around workflow augmentation. Taking things that are really burdensome, but relatively easy, and taking those off of people's plates, making that process a lot easier. And so we've seen that be very successful in other industries. I think it's been arguably even a more successful approach to the application of A.I. in non-healthcare, but healthcare continues to focus on the knowledge augmentation rather than workflow augmentation approach, relatively

"So, if you can walk out of the room with your chart note already written, and ninety percent of your interview complete, and ninety-plus percent of your documentation complete, you're handing the clinician all of the information that they need in the most actionable, usable format possible. That is certainly an application of AI, to know what questions to ask, to know how to translate that information from the patient-friendly interview, into a chart-ready [provider] note." RC

The panelists discussed the importance of finding the right tool for a particular problem. In process improvement, finding the right tool involves understanding the problem compared to the end goal of success: quality improvement, physician experience, and patient outcome. Variability in healthcare leads to inconsistent care. Care should adhere to evidence-based guidelines and be consistent in quality and delivery.

"I think a lot of us in healthcare are in it, yes to care for patients, but also to do it better, right, and we recognize in process improvement, we need to decrease the variability, right. The variation in care needs to get narrowed so that we can recognize if we are doing something right or wrong first, then we can correct it." CN

According to the 2021 Commonwealth Fund report, the United States healthcare system ranked last among industrialized nations when compared across 71 measures [3]. Technology can add value to healthcare, but each program and algorithm must be strategically applied to the appropriate problem. Each organization must consider its ability to implement, maintain, and monitor AI and ML algorithms. Training must occur for clinical and administrative staff. The application and analysis of data sets are as important as, or more important than, the amount of data used. The ability of healthcare staff to understand results from algorithms and apply those results will determine the value of AI in mainstream medicine. RC compared the value perceived by a consumer of social media and healthcare, saying, "... the interesting thing was it was not about building trust, it was about delivering value. And the truth is, Facebook was exciting and fun, and it let you connect with your friends and gave you a whole different way to be able to experience the Internet and interact, and it delivered on the value that people expected the same way as Google. I mean, I think that that's one of those places where healthcare has arguably failed in the past, and I think that that is arguably one of the roots of why we haven't seen more data sharing on the consumer side of things is because they don't see the value of making that data accessible."

"Look at 23andMe. I mean, people pay a hundred plus dollars for the right to give up their genomic information into an aggregate pool. Why'd they do it? Because they made it entertaining. It wasn't even a direct monetary value, in fact it was an inverse monetary value, you had to pay money for the right to get your information into that pool, and people just flowed in there because they made it engaging and entertaining." RC

"How do you see balancing the massive amounts of data that are out there, that need to be able to use it across some of those silos that have come up? But then, perhaps even more functionally, more importantly, is taking that data and turning it into information. Making it actionable, making it valuable." RC

Bias in healthcare affects care pathways and processes and has been an ongoing issue in medicine. AI algorithms highlight the bias already present in the system. An article by Igoe, titled "Algorithmic Bias in Healthcare Exacerbates Social Inequities: How to Prevent It," highlights algorithmic bias in healthcare [8]. The author discusses an example of racial bias in the Framingham risk study, in which nearly 80% of respondents were Caucasian. The bias of this study has the potential to affect outcomes when treating a diverse population [8]. In an article by Panch et al. (2019), titled "Artificial Intelligence and Algorithmic Bias: Implications for Health Systems," the idea of inherent bias in algorithmic processes is discussed further [9]. The authors define algorithmic bias as the implementation of an algorithm that perpetuates the existing inequities in socioeconomic status, race, ethnic background, religion, gender, disability, or sexual orientation [9]. Panch et al. go on to describe the reality of this bias: "If the world looks a certain way, that will be reflected in the data, either directly or through proxies, and thus in the decisions" [9].

Data in healthcare have been biased due to the siloed nature of care the US healthcare system has created. Each organization owns its data, and the sharing of data between various groups is complicated. Populations served by various healthcare organizations may be homogenous, thus potentially creating a bias toward particular groups of people. Evaluating a group of people to note diagnosis and attempt to risk stratify the population is considered by the presenters as a way to help keep the focus of data on patient outcomes. Bias in data can lead to inconsistent care among different populations, and variability can lead to worse outcomes from a population standpoint.

"I want to maybe challenge everybody to think about it differently. It's actually exposing bias that's already there, right. We are already treating people differently, and we're getting away with it because, once again, they're in small scales that nobody has voiced it or it just hasn't been significant enough to be brought to their attention. But, when an algorithm does it, oh my god, everybody has to stop and turn off the algorithm and never use A.I. again." CN

"I'm excited that I think we're going to find all kinds of different possibilities to take better care of our patients, but I think, maybe, to go even back to some of the earlier conversation, we need to gain not just the trust of the doctors on some of the algorithms and this, deep learning black box, you know, unexplainable A.I. We need to gain the trust of our patients too, to let us use the data, because insofar as we don't have a kind of, heterogeneous population in our data, we're going to have nothing but bias, and it's going to exaggerate some of the inconsistent delivery of medicine that we have now." CN

"I think what we need to do instead, if we can't explain all of the nodes through deep learning and the machine learning component of it, we need to understand, what does that patient population look like, right? It's more like a nutrition label, it contains this many of this kind of person, this many diabetics, this many hypertensives, this is their age, and apply it to a similar population to get the best results. And then what you have to do, as somebody using that algorithm, is you need to actually apply it to your population and let it run and make sure it's giving you similar results before you turn it on and you have it start making recommendations in changing care whatsoever." CN

The term artificial intelligence was first coined in 1956 by John McCarthy, who defined AI as "the science and engineering of making intelligent machines" [10]. The field of AI is large and separated into different subfields including machine learning (ML), deep AI, and natural language processing (NLP). Machine learning works with established data sets and excels at pattern identification and analysis [11]. Deep AI or deep learning is composed of neural networks and allows the program or machine to learn and make autonomous decisions [11]. Natural language processing allows a machine to listen to a human voice and synthesize and analyze information based on conversations [11]. A detailed timeline of some aspects of the history of AI in medicine is shown in Figure 1, taken from an article in Gastrointestinal Endoscopy, Vol. 92, Issue 4, titled "History of Artificial Intelligence in Medicine" [11]. The aforementioned panel consisted of five experts, each in a field related to AI and data dissemination to improve processes and outcomes in healthcare. The symposium was designed as an accredited continuing medical education event. The panelists were asked to provide a 30-minute presentation and participate in an online moderated discussion. The moderated discussion centered around the various problems each participant felt AI is equipped to solve. AI is a tool, and one must identify the problem before implementing the tool. Problems exist in medicine that are both process driven and provider/patient experience driven. AI, as a tool, has great promise to create improved pathways for scheduling, billing, and documentation within healthcare. AI can free up time for a provider to have improved conversations with patients, decrease errors in documentation, and potentially decrease burnout [12]. Currently, companies are using chatbots to assist in triage and funnel patients to the right provider. Some chatbots, such as one Babylon Health developed, even give treatment recommendations for low-acuity issues [12]. In medical imaging, AI has been studied and shown to be as effective as specialists and radiologists in reading various image modalities across multiple specialties [6,7].

As highlighted by the panelists, knowledge augmentation is challenging to implement on a large scale in medicine. Great promise to add value to the provider's decision-making process can be realized through AI algorithms, but this can also be a dangerous arena for the continuation of bias found in the sample data population used to create the algorithms, as highlighted by the panelists. The discussion and exploration of knowledge augmentation algorithms will continue, and the potential for bias should not be taken lightly. Companies utilize AI in conjunction with EMR systems to improve a provider's documentation time, make suggestions, and create notes using ambient voice technology, all within a workflow augmentation paradigm [7,13]. One article suggests that ambient voice technology with natural language processing may forever alter the way a provider documents, allowing for more time with the patient and less time on chart documentation [13]. Barriers to implementation include upfront cost, training, data sharing, and compatibility with existing technology [13,14]. The authors also discussed bias and risk to data privacy as potential issues to be aware of [13]. True knowledge augmentation algorithms should be developed without bias and meticulously follow evidence-based diagrams. The algorithms should be explainable to the provider, meaning that the steps taken to reach a diagnosis should be discoverable. As alluded to by the participants, if an algorithm makes medical decisions, it should go through board certification.

Workflow augmentation is perhaps a more promising area for AI to provide meaningful assistance to problem areas in billing, scheduling, and nursing workflows and triage. AI performs faster and more efficiently than humans in billing, scheduling, accounting, and management of processes. AI chatbots can improve efficiency in nursing and triage processes in the clinic [15]. Healthcare is a complex system of professionals, departments, and institutions. Utilizing engineering planning paradigms such as Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM) and the Systems Engineering Initiative for Patient Safety (SEIPS) to evaluate, implement, and monitor AI-enabled systems within the culture of each department the system interacts with provides an opportunity to collect data on how AI has addressed a given process and decreased inefficiencies [16]. Instead of looking just at the tool's effectiveness, Li et al. suggest the importance of understanding the workflows of each individual department and how these will change with the utilization of AI [16]. The entire system, including workflows altered or eliminated, is evaluated, as well as the effect of the particular algorithm on quality and efficiency [16]. The delivery science rests on three principles: healthcare is delivered in complex systems and AI must adapt; AI should be viewed as a tool to be used as part of the broader system and not as the end product; and the problems that AI addresses consist of a complex web of people, processes, and technologies [16]. The implementation of AI and ML in medicine should not be about implementing a particular algorithm. Instead, quality healthcare systems are built by finding the best solution available to each particular set of problems [16]. Finding the right tool for the right problem is essential to consider when evaluating the implementation of AI in healthcare. For example, the authors discuss an algorithm built to predict acute kidney injury with high accuracy [16]. This was built under the premise that the algorithm would improve care by decreasing time to diagnosis. However, when put into practice, the algorithm was a burden to physicians, and the value to the provider and patient was determined to be unclear [16]. This example highlights the importance of understanding the workflows the algorithm will affect [16]. Building an algorithm for the sole purpose of predicting a task in healthcare is not adequate to improve care. The broad view of all persons, departments, and workflows affected must be taken into account [16].

Data privacy and access is a critical issue and one in which policy and process must be developed to ensure the safe acquisition and transfer of data in the healthcare sector. The exchange of patient data among healthcare institutions, academic research, and industry can declassify sensitive information and allow malicious data breaches [17]. Efforts must be taken to limit the risk of such data breaches. Before using an extensive data set of patient information, understanding issues involving patient consent is essential [17]. Some ethics committees do not require consent for deidentified information, while some prefer to use an opt-out method. This method could lead to bias as only the most engaged patients will be involved [17]. Value is an essential factor to consider when implementing a new product or service. The same is true for healthcare and especially AI in healthcare. For a system or provider to change workflow, the value of the change to the new service must be shown. The user and the consumer must see value in the service and gain trust to utilize the service. The perceived risks to the implementation of AI include fear of new technology, ethical or trust issues, and regulatory concerns [18]. Performance and communication concerns were noted to be the highest predictor of risk belief among survey participants [18]. The panelists discussed the data and the reality of data fatigue for the healthcare provider. The analysis of data should be meaningful for the endpoint of a particular problem. Brault and Saxena discussed the large amounts of data collected with mobile health technology and questioned the validity of the data due to inherent bias [19]. The authors suggest the value of data is not just the amount collected but the appropriate evaluation and analysis of said data [19].

Unconscious bias in healthcare is understood as the attitude and opinion toward a particular person or group that affects perceptions by changing the way care is provided [20]. Bias is inherent in the way medicine is practiced. The siloed nature of care in the United States creates homogenous pools of patients from which data are pulled for research and process improvement projects. Algorithms expose bias already embedded in the process due to the particular data set being studied, a theme that emerged in the discussion. Brault and Saxena offer three principles to keep in mind for further research. The first principle is to create a catalog of bias listing the source and ramifications of bias [19]. The second deals with creating standards for the use of AI as a tool in medicine [19]. The third principle the authors discuss concerning the research of AI is to develop an approach to evaluate and analyze the effectiveness of AI in solving a particular problem [19]. The reality of AI and the change to specific processes and workflows in medicine will require training for healthcare professionals in the basic language of AI and algorithms. Training will open up the world of AI to professionals and help to safeguard this growing technology in the complex arena of medicine.

Although approximately 50 people registered for the event, less than half were able to attend. Reasons for this could be online meeting fatigue, time of day, and lack of good reminders. Although the event had technical difficulties initially and the first five minutes were not captured, 52 minutes of discussion were transcribed and analyzed. The symposium drew expert panelists from various aspects of the health technology industry; however, more industry leaders should be included in this discussion, including, but not limited to, experts on interoperability, health policy as it relates to AI and emerging technology, and system strategy and management to evaluate new care models emerging through the use of AI.

The research for the analysis was conducted through Google Scholar. The search could have been expanded to include other search engines to potentially include more articles for study. Words such as machine learning, deep AI, and neural networks were uncovered with the search conducted; however, the search could be completed using these words and others to potentially increase the article pool.

Read the original post:
A Qualitative Thematic Analysis of Addressing the Why: An Artificial Intelligence (AI) in Healthcare Symposium - Cureus

Researchers Use Machine Learning to Model Proteins Linked to Cancer – Livermore Independent

Lawrence Livermore National Laboratory (LLNL) researchers and a multi-institutional team of scientists have developed a machine learning-backed model showing the importance of lipids to the signaling dynamics of RAS, a family of proteins whose mutations are linked to numerous cancers.

Lipids are fatty acid organic compounds that are insoluble in water, but soluble in organic solvents.

In a paper published in the Proceedings of the National Academy of Sciences, researchers detail the methodology behind the Multiscale Machine-Learned Modeling Infrastructure (MuMMI), which simulates the behavior of RAS proteins on a cell membrane, their interactions with lipids which help make up cell membranes and the activation of RAS signaling on a macro and molecular level.

According to the researchers, the data indicates that lipids rather than protein interfaces govern both RAS orientation and the accumulation of RAS proteins.

"We always knew lipids were important," said LLNL computer scientist and lead author Helgi Ingolfsson. "You need some of them, otherwise you don't have this behavior. But after that, scientists didn't know what was important about them."

Normally, RAS proteins receive and follow signals to switch between active and inactive states, but as the proteins move along the cell membrane they combine with other proteins and can activate signaling behavior.

Mutated RAS proteins can become stuck in an uncontrollable, "always on" growth state, which is seen in the formation of about 30% of all cancers, particularly pancreatic, lung and colorectal cancers.

"The research is showing us that lipids are a key player," Ingolfsson said. "By modulating the lipids and different lipid environments, RAS changes its orientation, and you can actually change the signaling (between grow and not grow) by changing the lipids underneath."

Researchers said the MuMMI framework represents a fundamentally new technology in computational biology and could be used to improve their basic understanding of RAS protein binding.

The research is part of a pilot project of the Joint Design of Advanced Computing Solutions for Cancer, a collaboration between the Department of Energy, National Cancer Institute, and other organizations.

Traditional researchers can simulate only a small, fixed number of proteins and one lipid composition at a time, Ingolfsson explained, and they need to know which lipids are important to model beforehand. With the MuMMI framework, researchers can simulate thousands of different cell compositions derived from the macro model, allowing them to answer questions about RAS-lipid interactions that previously would be possible only with a multiscale simulation.

"We're demonstrating that the old way of doing things is starting to be outdated," Ingolfsson said. "At Livermore, we have enormous computing power, we have a lot of people working on this and we can show what can be possible."

Go here to read the rest:
Researchers Use Machine Learning to Model Proteins Linked to Cancer - Livermore Independent

68% of CTOs have Implemented Machine Learning at their Organization – insideBIGDATA

55% of businesses now employ at least one team member dedicated to AI/ML solutions, although only 15% have their own separate AI division

Research from STX Next, Europe's largest software development company specializing in the Python programming language, has found that 68% of chief technical officers (CTOs) have implemented machine learning at their company. This makes it overwhelmingly the most popular subset of AI, with others such as natural language processing (NLP), pattern recognition and deep learning also showing considerable growth.

Despite the popularity of AI and its various subsets, it's also clear that AI implementation is still in its early phases and there's progress to be made in recruiting the talent needed for its development. In fact, 63% of CTOs reported that they aren't actively hiring AI talent, and of those that are, over 50% report facing recruitment challenges.

The findings were taken from STX Next's 2021 Global CTO Survey, which gathered insights from 500 global CTOs about their organization's tech stack and what they're looking to add to it in the future. Other key findings from the research included:

Łukasz Grzybowski, Head of Machine Learning & Data Engineering at STX Next, said: "The implementation of AI and its subsets in many companies is still in its early stages, as evidenced by the prevalence of small AI teams."

It's unsurprising to see machine learning as a definite leader when it comes to future technologies, as its applications are becoming more widespread every day. What's less obvious is the skills that people will need to take full advantage of its growth and face the challenges that will arise alongside it. It's important that CTOs and other leaders are wise to these challenges, and are willing to take the steps to increase their AI expertise in order to maintain their innovative edge.

Deep learning is a good example of where there is plenty of room for progress to be made. It is one of the fastest developing areas of AI, in particular when it comes to its application in natural language processing, natural language understanding, chatbots, and computer vision. Many innovative companies are trying to use deep learning to process unstructured data such as images, sounds, and text.

However, AI is still most commonly used to process structured data, which is evidenced by the high popularity of classical machine learning methods such as linear or logistic regression and decision trees.

Grzybowski concluded: "To adapt AI to unstructured data, the technology will need to mature further. This is why initiatives such as MLOps have a major role to play, as long-term success will only be achieved when data scientists and operations professionals are all on the same page and fully committed to making AI and machine learning work for everyone."



Read more:
68% of CTOs have Implemented Machine Learning at their Organization - insideBIGDATA

Physics and the machine-learning black box | MIT News | Massachusetts Institute of Technology – MIT News

Machine-learning algorithms are often referred to as a black box. Once data are put into an algorithm, it's not always known exactly how the algorithm arrives at its prediction. This can be particularly frustrating when things go wrong. A new mechanical engineering (MechE) course at MIT teaches students how to tackle the black box problem, through a combination of data science and physics-based engineering.

In class 2.C01 (Physical Systems Modeling and Design Using Machine Learning), Professor George Barbastathis demonstrates how mechanical engineers can use their unique knowledge of physical systems to keep algorithms in check and develop more accurate predictions.

"I wanted to take 2.C01 because machine-learning models are usually a black box, but this class taught us how to construct a system model that is informed by physics so we can peek inside," explains Crystal Owens, a mechanical engineering graduate student who took the course in spring 2021.

As chair of the Committee on the Strategic Integration of Data Science into Mechanical Engineering, Barbastathis has had many conversations with mechanical engineering students, researchers, and faculty to better understand the challenges and successes they've had using machine learning in their work.

"One comment we heard frequently was that these colleagues can see the value of data science methods for problems they are facing in their mechanical engineering-centric research; yet they are lacking the tools to make the most out of it," says Barbastathis. "Mechanical, civil, electrical, and other types of engineers want a fundamental understanding of data principles without having to convert themselves to being full-time data scientists or AI researchers."

Additionally, as mechanical engineering students move on from MIT to their careers, many will need to manage data scientists on their teams someday. Barbastathis hopes to set these students up for success with class 2.C01.

Bridging MechE and the MIT Schwarzman College of Computing

Class 2.C01 is part of the MIT Schwarzman College of Computing's Common Ground for Computing Education. The goal of these classes is to connect computer science and artificial intelligence with other disciplines, for example, connecting data science with physics-based disciplines like mechanical engineering. Students take the course alongside 6.C01 (Modeling with Machine Learning: from Algorithms to Applications), taught by professors of electrical engineering and computer science Regina Barzilay and Tommi Jaakkola.

The two classes are taught concurrently during the semester, exposing students to both fundamentals in machine learning and domain-specific applications in mechanical engineering.

In 2.C01, Barbastathis highlights how complementary physics-based engineering and data science are. Physical laws present a number of ambiguities and unknowns, ranging from temperature and humidity to electromagnetic forces. Data science can be used to predict these physical phenomena. Meanwhile, having an understanding of physical systems helps ensure the resulting output of an algorithm is accurate and explainable.

"What's needed is a deeper combined understanding of the associated physical phenomena and the principles of data science, machine learning in particular, to close the gap," adds Barbastathis. "By combining data with physical principles, the new revolution in physics-based engineering is relatively immune to the black box problem facing other types of machine learning."

Equipped with a working knowledge of machine-learning topics covered in class 6.C402 and a deeper understanding of how to pair data science with physics, students are charged with developing a final project that solves for an actual physical system.

Developing solutions for real-world physical systems

For their final project, students in 2.C01 are asked to identify a real-world problem that requires data science to address the ambiguity inherent in physical systems. After obtaining all relevant data, students are asked to select a machine-learning method, implement their chosen solution, and present and critique the results.

Topics this past semester ranged from weather forecasting to the flow of gas in combustion engines, with two student teams drawing inspiration from the ongoing Covid-19 pandemic.

Owens and her teammates, fellow graduate students Arun Krishnadas and Joshua David John Rathinaraj, set out to develop a model for the Covid-19 vaccine rollout.

"We developed a method of combining a neural network with a susceptible-infected-recovered (SIR) epidemiological model to create a physics-informed prediction system for the spread of Covid-19 after vaccinations started," explains Owens.

The team accounted for various unknowns including population mobility, weather, and political climate. This combined approach resulted in a prediction of Covid-19s spread during the vaccine rollout that was more reliable than using either the SIR model or a neural network alone.
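
As a hedged illustration of the general idea (this is not the team's actual code or data; the "observations", parameters, and residual-learning setup below are synthetic assumptions), one simple way to pair a mechanistic SIR model with a neural network is to let the network learn the residual the SIR model misses:

```python
# Sketch of a physics-informed hybrid: a mechanistic SIR model plus a small
# neural network that learns the residual between synthetic "observations"
# and the SIR forecast. All numbers here are illustrative.
import numpy as np
from scipy.integrate import odeint
from sklearn.neural_network import MLPRegressor

def sir(y, t, beta, gamma):
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

t = np.linspace(0, 120, 121)
sir_pred = odeint(sir, [0.99, 0.01, 0.0], t, args=(0.25, 0.1))[:, 1]  # infected fraction

# Pretend observed infections deviate from pure SIR (mobility, weather, policy...).
rng = np.random.default_rng(0)
observed = sir_pred * (1 + 0.3 * np.sin(t / 15)) + rng.normal(0, 0.002, t.size)

# The network learns the residual as a function of time and the SIR output.
features = np.column_stack([t, sir_pred])
residual_model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                              random_state=0).fit(features, observed - sir_pred)

hybrid = sir_pred + residual_model.predict(features)
print("mean abs error, SIR only:", np.abs(observed - sir_pred).mean())
print("mean abs error, hybrid  :", np.abs(observed - hybrid).mean())
```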

Another team, including graduate student Yiwen Hu, developed a model to predict mutation rates in Covid-19, a topic that became all too pertinent as the delta variant began its global spread.

"We used machine learning to predict the time-series-based mutation rate of Covid-19, and then incorporated that as an independent parameter into the prediction of pandemic dynamics to see if it could help us better predict the trend of the Covid-19 pandemic," says Hu.

Hu, who had previously conducted research into how vibrations on coronavirus protein spikes affect infection rates, hopes to apply the physics-based machine-learning approaches she learned in 2.C01 to her research on de novo protein design.

Whatever the physical system students addressed in their final projects, Barbastathis was careful to stress one unifying goal: the need to assess ethical implications in data science. While more traditional computing methods like face or voice recognition have proven to be rife with ethical issues, there is an opportunity to combine physical systems with machine learning in a fair, ethical way.

"We must ensure that collection and use of data are carried out equitably and inclusively, respecting the diversity in our society and avoiding well-known problems that computer scientists in the past have run into," says Barbastathis.

Barbastathis hopes that by encouraging mechanical engineering students to be both ethics-literate and well-versed in data science, they can move on to develop reliable, ethically sound solutions and predictions for physical-based engineering challenges.

Read more:
Physics and the machine-learning black box | MIT News | Massachusetts Institute of Technology - MIT News

Machine Learning and 5G Are Crucial to Scale the Metaverse – BBN Times

Machine learning and 5G can attract more people to the metaverse, blurring the lines between the virtual and real worlds.

The concept of metaverse is closely related to advanced technologies such as artificial intelligence (AI), machine learning (ML), augmented reality (AR), virtual reality (VR), blockchain, 5G and the internet of things (IoT).

Improved technology will allow avatars to use body language effectively and better convey human emotions, producing a feeling of real communication in a virtual space.

AR and VR won't be the only critical components of the metaverse; 5G and machine learning are also crucial.


The metaverse is a future iteration of the internet, made up of 3D virtual spaces linked into a perceived virtual universe. In a broader sense, it may not only refer to virtual worlds but the entire spectrum of augmented and virtual reality.


Users can interact with 3D digital objects and 3D virtual avatars of each other in a complex manner that mimics the real world.

The idea of the metaverse was first coined by science fiction writer Neal Stephenson in the early 1990s and has since been developed in parts by platforms and companies such as Second Life, Decentraland, Microsoft and, most recently, Meta.

In this virtual world, people can interact, hold meetings, buy property and much more.

The metaverse concept relies on augmented and virtual reality (AR/VR) in combination with machine learning, 5G, the internet of things (IoT) and blockchain to create a scalable digital world.

Machine learning is defined as the field of AI that applies statistical methods to enable computer systems to learn from data towards an end goal.

The types of machine learning include supervised, unsupervised, semi-supervised and reinforcement learning.

Supervised Learning: a learning approach that works with labelled (annotated) data. Supervised learning algorithms may perform classification or numeric prediction. Classification (logistic regression, decision trees, KNN, random forests, SVMs, naive Bayes, etc.) is the process of predicting the class of given data points; for example, learning to classify fruits from images labelled as apple, orange, lemon, and so on. Regression algorithms (linear regression, KNN, gradient boosting and AdaBoost, etc.) are used to predict continuous numerical values.
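
As a rough sketch of supervised classification (not from the article; scikit-learn's built-in iris dataset stands in for the labelled fruit images):

```python
# Supervised learning: a classifier trained on labelled (annotated) data.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)                     # every sample carries a label
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```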

Unsupervised Learning: a learning approach that discovers patterns hidden in unlabelled (unannotated) data. An example is segmenting customers into different clusters. Techniques include clustering with K-Means and pattern discovery. A powerful technique from deep learning, known as generative adversarial networks (GANs), uses unsupervised learning.
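
A minimal sketch of the customer-segmentation example (the two features, annual spend and monthly visits, are illustrative assumptions):

```python
# Unsupervised learning: K-Means finds clusters without any labels.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
spend = np.concatenate([rng.normal(200, 30, 100), rng.normal(900, 80, 100)])
visits = np.concatenate([rng.normal(2, 0.5, 100), rng.normal(8, 1.0, 100)])
customers = np.column_stack([spend, visits])           # no labels anywhere

segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(customers)
print("customers per segment:", np.bincount(segments))
```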

Semi-Supervised Learning: a learning approach used when only a small fraction of the data is labelled. An example is provided by DataRobot: "When you don't have enough labelled data to produce an accurate model and you don't have the ability or resources to get more, you can use semi-supervised techniques to increase the size of your training data. For example, imagine you are developing a model for a large bank intended to detect fraud. Some fraud you know about, but other instances of fraud slipped by without your knowledge. You can label the dataset with the fraud instances you're aware of, but the rest of your data will remain unlabelled."
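
A hedged sketch of the same idea (not DataRobot's implementation; scikit-learn's digits dataset stands in for the bank's transactions, and roughly 90% of its labels are hidden to mimic the unlabelled portion):

```python
# Semi-supervised learning: self-training propagates a few known labels
# to the much larger unlabelled portion of the data.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
y_partial = y.copy()
rng = np.random.default_rng(0)
unlabelled = rng.random(y.size) > 0.1                  # hide ~90% of the labels
y_partial[unlabelled] = -1                             # -1 marks "unlabelled"

model = SelfTrainingClassifier(SVC(probability=True, gamma=0.001))
model.fit(X, y_partial)
print("accuracy on the fully labelled set:", model.score(X, y))
```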

Reinforcement Learning: a learning approach in which an agent takes actions to maximize a cumulative reward in a particular situation; Q-learning is a well-known example. It is used by an intelligent agent to work out the optimal behaviour or path to take in a specific environment.
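
As a toy illustration of tabular Q-learning (the corridor environment, rewards and hyperparameters below are illustrative assumptions, not from the article):

```python
# Tabular Q-learning: an agent on a short corridor learns to walk right
# to reach the reward in the last cell.
import numpy as np

n_states, actions = 6, [-1, +1]                # move left or right
q = np.zeros((n_states, len(actions)))
alpha, gamma, epsilon = 0.5, 0.9, 0.1
rng = np.random.default_rng(0)

for _ in range(500):                           # training episodes
    state = 0
    while state != n_states - 1:
        a = rng.integers(len(actions)) if rng.random() < epsilon else int(q[state].argmax())
        nxt = int(np.clip(state + actions[a], 0, n_states - 1))
        reward = 1.0 if nxt == n_states - 1 else 0.0
        q[state, a] += alpha * (reward + gamma * q[nxt].max() - q[state, a])
        state = nxt

print("learned policy (0=left, 1=right):", q.argmax(axis=1))
```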

Machine learning plays a major role in everyday applications via facial recognition, voice search, natural language processing (NLP), faster computing, and all sorts of other under-the-hood processes. It has the potential to parse huge volumes of data at lightning speed to generate insights and drive action, which can significantly improve the interaction of users in the metaverse.


5G is the fifth-generation wireless technology. It can provide higher speed, lower latency and greater capacity than 4G LTE networks.

The clearest impact of 5G on the metaverse is the increased number of devices that can be connected to the network, all of which can communicate with each other and exchange information in real time.

5G is up to 20 times faster than 4G, but it offers more than just faster speeds. Its low latency will allow developers to create applications that take full advantage of improved response times, including near-real-time video transmission for sporting events or security purposes.

The combination of 5G and machine learning could be truly transformative. Replacing traditional wireless algorithms with machine learning algorithms could dramatically reduce power consumption and improve the performance of the 5G networks that support a metaverse environment.

A key piece of the metaverse puzzle is that organisations need advanced data to create the specialised electronic equipment that will let everyone connect to the metaverse. At the moment, VR headsets and AR glasses are experimental products at best. Machine learning can help organisations build modern VR and AR devices that keep improving.


Machine learning and 5G could make the metaverse go viral, as they are already two of the most disruptive technologies the world has seen in decades.

To take the concept of the metaverse to another level, internet connectivity has to improve around the world. While 5G networks are set to roll out in many countries, faster networks are needed for seamless connection. Machine learning is also still in its infancy.

A cultural shift in the tech world has to occur to attract more users. The Covid-19 pandemic also plays a major role, as many people are not ready for a disruptive digital world; patience is key here. Better virtual and augmented reality devices are needed too.

The concept of the metaverse could fail if it is rushed and companies and users aren't prepared for the next version of the internet.

"The hardest technology challenge of our time may be fitting a supercomputer into the frame of normal-looking glasses."

Mark Zuckerberg

Read more:
Machine Learning and 5G Are Crucial to Scale the Metaverse - BBN Times

Advanced data science, machine learning and the power of knowledge graphs: What can we expect from this combination? – IDG Connect

This is a contributed article by Maya Natarajan, Sr. Director of Product Marketing at Neo4j.

From bridging data silos and building data fabrics to accelerating machine learning (ML) and artificial intelligence (AI) adoption, knowledge graphs are foundational and allow businesses to go beyond digital transformation. Defined by The Alan Turing Institute, the UK's national institute for data science and AI, as "the best way to encode knowledge to use at scale in open, evolving, decentralised systems", knowledge graphs are a perfect foundation for advanced data science initiatives. So why aren't they better known and exploited?

This is a problem. Business leaders know the value of their data and are keenly aware that it holds the answers to their most pressing business questions. However, the insights they need to improve decision-making and enhance business performance aren't easy to elicit. Hence the widespread interest in machine learning.

Knowledge graphs can help an organisation trying to get machine learning out of the lab and into useful production. That's because knowledge graphs are a special, non-disruptive insight layer on top of this complex data resource. They drive intelligence into data to significantly enhance its value, but without changing any of the existing data infrastructure. Let's look at how.

Knowledge graphs make existing technologies better by providing better data management, better predictions, and better innovation, in part because they fuel AI and machine learning. In practice, knowledge graph use cases divide into two groupings: actioning and decisioning. The actioning graph's aim is to drive action by providing assurance or insight. Data actioning graphs automate processes for better outcomes by providing data assurance, discovery, and insight, and include examples like data lineage, data provenance, data governance, compliance, and risk management.

A great example of a data actioning graph is a knowledge graph that tracks objects in space, both functional and broken equipment. The ASTRIAGraph project monitors the Earth's orbit for space objects, including functioning hardware and other space junk, striving for safety, security, and sustainability. Using a knowledge graph, the team can categorise a lot of disparate space domain data to locate and track objects from the size of a mobile phone to the largest satellite. ASTRIAGraph predicts their trajectory, minimises risk, and provides complete visibility. With the goal of maximising decision intelligence, ASTRIAGraph curates information and creates models of the space domain and environment.

The real magic of knowledge graphs comes into play as you use them to support AI and machine learning, uncovering patterns and anomalies. A decisioning knowledge graph surfaces data trends to augment analytics, machine learning, and data science initiatives. With all of this, it's not surprising that Gartner recently stated, "Up to 50% of Gartner inquiries on the topic of AI involve discussion of the use of graph technology."

We know this from speaking to customers. Moving from an actioning graph to sophisticated decisioning graphs fuelling AI and machine learning is a typical graph technology journey for many data science teams we work with, with knowledge graphs at the centre.

From data sourcing to training machine learning models to analysing predictions and applying results, knowledge graphs enhance every step of the machine learning process.

In the initial step of data sourcing, knowledge graphs can be used for data lineage to track data that feeds machine learning. In the next phase of training a machine learning model, knowledge graphs allow for graph feature engineering using simple graph queries or more complex graph algorithms, like centrality, community detection, and the like. The results of such algorithms can be written back to the knowledge graph, further enriching it.
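
As a hedged sketch of that feature-engineering step (this uses networkx on a toy graph purely for illustration; in practice the equivalent algorithms would typically run inside the graph database itself), centrality and community features can be computed and written back to the graph as node properties:

```python
# Graph feature engineering: compute centrality and community features,
# then write them back onto the graph as node properties.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()                       # stand-in for a knowledge graph

pagerank = nx.pagerank(G)                        # centrality feature
nx.set_node_attributes(G, pagerank, "pagerank")  # write the result back

for idx, community in enumerate(greedy_modularity_communities(G)):
    for node in community:                       # community-detection feature
        G.nodes[node]["community"] = idx

print(dict(list(G.nodes(data=True))[:3]))        # enriched node properties
```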

The next step forward in sophistication is the use of graph embeddings. Graph embeddings offer a way of encoding the nodes and the relationships in a knowledge graph into a structure that's suitable for machine learning. Effectively, embeddings turn your knowledge graph into numbers and learn all its features. Relationships are highly predictive of behaviour, so using connected, contextualised features maximises the predictive power of machine learning models.
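
A minimal illustration of the embedding idea follows; spectral embedding is used here only because it is compact, whereas knowledge graph platforms more commonly offer methods such as node2vec or FastRP. The graph, the "club" label and the downstream classifier are illustrative assumptions.

```python
# Graph embeddings: turn each node into a numeric vector that a
# downstream machine learning model can consume.
import networkx as nx
from sklearn.ensemble import RandomForestClassifier
from sklearn.manifold import SpectralEmbedding

G = nx.karate_club_graph()
A = nx.to_numpy_array(G)                                  # adjacency matrix
embeddings = SpectralEmbedding(n_components=4,
                               affinity="precomputed").fit_transform(A)

# The vectors can now feed any model, e.g. predicting a node label.
labels = [G.nodes[n]["club"] for n in G.nodes]
clf = RandomForestClassifier(random_state=0).fit(embeddings, labels)
print("training accuracy on node labels:", clf.score(embeddings, labels))
```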

Once a machine learning model has been developed, knowledge graphs can be used for investigations and counterfactual analyses by data scientists to understand if a model is useful and making accurate predictions.

Let's look at decisioning in action. UBS, for example, built a detailed data lineage and governance tool that offers deep transparency into the data flows that feed its risk reporting mechanisms to meet finance compliance regulations.

Another example is NASA, which has decades of mission experience that wasn't well catalogued. NASA built a knowledge graph-enhanced application to comb through millions of documents, reports, project data, lessons learned, scientific research, medical analyses, geospatial data, and IT logs. As a result, an old breakthrough from the Apollo era in the 1960s solved a problematic issue in its 21st-century Orion class of crewed spacecraft. It saved a million dollars of taxpayer money by heading off two years of work reinventing the wheel.

And in the life sciences, one large global pharmaceutical company is working with knowledge graphs to help clinicians know when best to intervene in complex diseases. Its data science team used graph algorithms to find patients with specific journey types and patterns, and to find others with similar experiences. This insight is used to train its machine learning model, analyse predictions, and bring back results that help clinicians make better decisions. And we're talking about scale: this company's knowledge graph holds three years of visits, tests, and patient diagnoses across tens of billions of records.

By using the power of knowledge graphs, AI and machine learning models are better able to represent relationships. That means the organisations using them can find more accurate interpretations of complex data, putting context back into data, and training AI to be a trustworthy partner.

It's a powerful trend that we see more and more in data science. No wonder the C-suite is waking up to this innovation.

Maya Natarajan is Sr. Director of Product Marketing at native graph database leader Neo4j.

Read more from the original source:
Advanced data science, machine learning and the power of knowledge graphs: What can we expect from this combination? - IDG Connect