What is artificial intelligence? – Brookings

Few concepts are as poorly understood as artificial intelligence. Opinion surveys show that even top business leaders lack a detailed sense of AI and that many ordinary people confuse it with super-powered robots or hyper-intelligent devices. Hollywood helps little in this regard by fusing robots and advanced software into self-replicating automatons such as the Terminator franchise's Skynet or the evil HAL of Arthur C. Clarke's 2001: A Space Odyssey, which goes rogue after humans plan to deactivate it. The lack of clarity around the term enables technology pessimists to warn that AI will conquer humans, suppress individual freedom, and destroy personal privacy through a digital 1984.

Part of the problem is the lack of a uniformly agreed upon definition. Alan Turing generally is credited with the origin of the concept when he speculated in 1950 about "thinking machines" that could reason at the level of a human being. His well-known Turing Test holds that a computer can be considered to be thinking if a human interrogator, conversing with it, cannot reliably tell its responses apart from a person's.

Turing was followed up a few years later by John McCarthy, who first used the term "artificial intelligence" to denote machines that could think autonomously. He described the threshold as "getting a computer to do things which, when done by people, are said to involve intelligence."

Since the 1950s, scientists have argued over what constitutes thinking and intelligence, and what is fully autonomous when it comes to hardware and software. Advanced computers such as IBM's Deep Blue and Watson already have beaten humans at chess and at the quiz show Jeopardy!, and are capable of instantly processing enormous amounts of information.


Today, AI generally is thought to refer to machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment, and intention. According to researchers Shubhendu and Vijay, these software systems make decisions which normally require [a] human level of expertise and help people anticipate problems or deal with issues as they come up. As argued by John Allen and myself in an April 2018 paper, such systems have three qualities that constitute the essence of artificial intelligence: intentionality, intelligence, and adaptability.

In the remainder of this paper, I discuss these qualities and why it is important to make sure each accords with basic human values. Each of the AI features has the potential to move civilization forward in progressive ways. But without adequate safeguards or the incorporation of ethical considerations, the AI utopia can quickly turn into dystopia.

Artificial intelligence algorithms are designed to make decisions, often using real-time data. They are unlike passive machines that are capable only of mechanical or predetermined responses. Using sensors, digital data, or remote inputs, they combine information from a variety of different sources, analyze the material instantly, and act on the insights derived from those data. As such, they are designed by humans with intentionality and reach conclusions based on their instant analysis.

An example from the transportation industry shows how this happens. Autonomous vehicles are equipped with LIDAR (light detection and ranging) and remote sensors that gather information from the vehicle's surroundings. The LIDAR uses pulsed laser light to see objects in front of and around the vehicle and make instantaneous decisions regarding the presence of objects, distances, and whether the car is about to hit something. On-board computers combine this information with sensor data to determine whether there are any dangerous conditions, the vehicle needs to shift lanes, or it should slow or stop completely. All of that material has to be analyzed instantly to avoid crashes and keep the vehicle in the proper lane.
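
The following is a minimal sketch of the kind of rule-based decision logic described above. The sensor fields, thresholds, and actions are illustrative assumptions for this sketch, not the logic of any production vehicle, which fuses far more inputs with far more sophisticated models.

```python
from dataclasses import dataclass

# Illustrative fused-sensor snapshot; field names and units are assumptions for this sketch.
@dataclass
class SensorSnapshot:
    obstacle_distance_m: float   # nearest object ahead, e.g. from LIDAR
    closing_speed_mps: float     # how fast the gap to that object is shrinking
    lane_clear_left: bool        # whether side sensors report the adjacent lane as free

def decide(s: SensorSnapshot) -> str:
    """Reduce one snapshot of fused sensor data to a single action using hypothetical thresholds."""
    time_to_collision = (
        s.obstacle_distance_m / s.closing_speed_mps if s.closing_speed_mps > 0 else float("inf")
    )
    if time_to_collision < 1.5:      # imminent collision: brake hard
        return "emergency_brake"
    if time_to_collision < 4.0:      # closing quickly: change lanes if safe, otherwise slow down
        return "shift_lane_left" if s.lane_clear_left else "slow_down"
    return "maintain_speed"

print(decide(SensorSnapshot(obstacle_distance_m=20.0, closing_speed_mps=8.0, lane_clear_left=True)))
```

The point is simply that, on every update cycle, a stream of fused sensor readings is collapsed into one actionable decision.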

With massive improvements in storage systems, processing speeds, and analytic techniques, these algorithms are capable of tremendous sophistication in analysis and decisionmaking. Financial algorithms can spot minute differentials in stock valuations and undertake market transactions that take advantage of that information. The same logic applies in environmental sustainability systems that use sensors to determine whether someone is in a room and automatically adjust heating, cooling, and lighting based on that information. The goal is to conserve energy and use resources in an optimal manner.

As long as these systems conform to important human values, there is little risk of AI going rogue or endangering human beings. Computers can be intentional while analyzing information in ways that augment humans or help them perform at a higher level. However, if the software is poorly designed or based on incomplete or biased information, it can endanger humanity or replicate past injustices.

AI often is undertaken in conjunction with machine learning and data analytics, and the resulting combination enables intelligent decisionmaking. Machine learning takes data and looks for underlying trends. If it spots something that is relevant for a practical problem, software designers can take that knowledge and use it with data analytics to understand specific issues.

For example, there are AI systems for managing school enrollments. They compile information on neighborhood location, desired schools, substantive interests, and the like, and assign pupils to particular schools based on that material. As long as there is little contentiousness or disagreement regarding basic criteria, these systems work intelligently and effectively.


Of course, that often is not the case. Reflecting the importance of education for life outcomes, parents, teachers, and school administrators fight over the importance of different factors. Should students always be assigned to their neighborhood school or should other criteria override that consideration? As an illustration, in a city with widespread racial segregation and economic inequalities by neighborhood, elevating neighborhood school assignments can exacerbate inequality and racial segregation. For these reasons, software designers have to balance competing interests and reach intelligent decisions that reflect values important in that particular community.

Making these kinds of decisions increasingly falls to computer programmers. They must build intelligent algorithms that compile decisions based on a number of different considerations. That can include basic principles such as efficiency, equity, justice, and effectiveness. Figuring out how to reconcile conflicting values is one of the most important challenges facing AI designers. It is vital that they write code and incorporate information that is unbiased and non-discriminatory. Failure to do that leads to AI algorithms that are unfair and unjust.

The last quality that marks AI systems is the ability to learn and adapt as they compile information and make decisions. Effective artificial intelligence must adjust as circumstances or conditions shift. This may involve alterations in financial situations, road conditions, environmental considerations, or military circumstances. AI must integrate these changes in its algorithms and make decisions on how to adapt to the new possibilities.

One can illustrate these issues most dramatically in the transportation area. Autonomous vehicles can use machine-to-machine communications to alert other cars on the road about upcoming congestion, potholes, highway construction, or other possible traffic impediments. Vehicles can take advantage of the experience of other vehicles on the road, without human involvement, and the entire corpus of their achieved experience is immediately and fully transferable to other similarly configured vehicles. Their advanced algorithms, sensors, and cameras incorporate experience in current operations, and use dashboards and visual displays to present information in real time so human drivers are able to make sense of ongoing traffic and vehicular conditions.

A similar logic applies to AI devised for scheduling appointments. There are personal digital assistants that can ascertain a person's preferences and respond to email requests for personal appointments in a dynamic manner. Without any human intervention, a digital assistant can make appointments, adjust schedules, and communicate those preferences to other individuals. Building adaptable systems that learn as they go has the potential of improving effectiveness and efficiency. These kinds of algorithms can handle complex tasks and make judgments that replicate or exceed what a human could do. But making sure they learn in ways that are fair and just is a high priority for system designers.

In short, there have been extraordinary advances in recent years in the ability of AI systems to incorporate intentionality, intelligence, and adaptability in their algorithms. Rather than being mechanistic or deterministic in how the machines operate, AI software learns as it goes along and incorporates real-world experience in its decisionmaking. In this way, it enhances human performance and augments people's capabilities.

Of course, these advances also make people nervous about doomsday scenarios sensationalized by movie-makers. Situations where AI-powered robots take over from humans or weaken basic values frighten people and lead them to wonder whether AI is making a useful contribution or runs the risk of endangering the essence of humanity.


There is no easy answer to that question, but system designers must incorporate important ethical values in algorithms to make sure they correspond to human concerns and learn and adapt in ways that are consistent with community values. This is the reason it is important to ensure that AI ethics are taken seriously and permeate societal decisions. In order to maximize positive outcomes, organizations should hire ethicists who work with corporate decisionmakers and software developers; adopt a code of AI ethics that lays out how various issues will be handled; organize an AI review board that regularly addresses corporate ethical questions; maintain AI audit trails that show how various coding decisions have been made; implement AI training programs so staff operationalize ethical considerations in their daily work; and provide a means for remediation when AI solutions inflict harm or damages on people or organizations.

Through these kinds of safeguards, societies will increase the odds that AI systems are intentional, intelligent, and adaptable while still conforming to basic human values. In that way, countries can move forward and gain the benefits of artificial intelligence and emerging technologies without sacrificing the important qualities that define humanity.


What is Artificial Intelligence (AI) and How Does it Work …

Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition and machine vision.

As the hype around AI has accelerated, vendors have been scrambling to promote how their products and services use AI. Often what they refer to as AI is simply one component of AI, such as machine learning. AI requires a foundation of specialized hardware and software for writing and training machine learning algorithms. No one programming language is synonymous with AI, but a few, including Python, R and Java, are popular.

In general, AI systems work by ingesting large amounts of labeled training data, analyzing the data for correlations and patterns, and using these patterns to make predictions about future states. In this way, a chatbot that is fed examples of text chats can learn to produce lifelike exchanges with people, or an image recognition tool can learn to identify and describe objects in images by reviewing millions of examples.
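
That workflow, learning correlations from labeled examples and reusing them to predict new cases, can be shown in a few lines. The sketch below uses scikit-learn, and the tiny spam-style dataset and its two features are invented purely for illustration; they stand in for whatever labeled data a real system ingests.

```python
# Toy illustration: fit a model on labeled examples, then predict an unseen case.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Labeled training data: [message_length, exclamation_marks] -> spam (1) or not spam (0)
X = [[120, 0], [35, 4], [200, 1], [15, 6], [90, 0], [22, 5], [180, 0], [40, 3]]
y = [0, 1, 0, 1, 0, 1, 0, 1]

# Hold out part of the data so the learned patterns are checked on examples the model never saw.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
print("prediction for a short, shouty message:", model.predict([[18, 7]])[0])
```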

AI programming focuses on three cognitive skills: learning, reasoning and self-correction.

Learning processes. This aspect of AI programming focuses on acquiring data and creating rules for how to turn the data into actionable information. The rules, which are called algorithms, provide computing devices with step-by-step instructions for how to complete a specific task.

Reasoning processes. This aspect of AI programming focuses on choosing the right algorithm to reach a desired outcome.

Self-correction processes. This aspect of AI programming is designed to continually fine-tune algorithms and ensure they provide the most accurate results possible.
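
A toy sketch of that self-correction idea follows: the program repeatedly measures its error on the data it has seen and nudges its single parameter in the direction that reduces the error. The one-parameter model, data points, and learning rate are arbitrary illustrative choices.

```python
# Minimal "self-correction" loop: gradient descent on a one-parameter linear model y_hat = w * x.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # (x, y) pairs, roughly y = 2x

w = 0.0
learning_rate = 0.01

for step in range(500):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad  # the self-correcting update: move w to reduce the error

print(f"learned slope w = {w:.2f} (the data were generated near slope 2.0)")
```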

AI is important because it can give enterprises insights into their operations that they may not have been aware of previously and because, in some cases, AI can perform tasks better than humans. Particularly when it comes to repetitive, detail-oriented tasks like analyzing large numbers of legal documents to ensure relevant fields are filled in properly, AI tools often complete jobs quickly and with relatively few errors.

This has helped fuel an explosion in efficiency and opened the door to entirely new business opportunities for some larger enterprises. Prior to the current wave of AI, it would have been hard to imagine using computer software to connect riders to taxis, but today Uber has become one of the largest companies in the world by doing just that. It utilizes sophisticated machine learning algorithms to predict when people are likely to need rides in certain areas, which helps proactively get drivers on the road before they're needed. As another example, Google has become one of the largest players for a range of online services by using machine learning to understand how people use their services and then improving them. In 2017, the company's CEO, Sundar Pichai, pronounced that Google would operate as an "AI first" company.

Today's largest and most successful enterprises have used AI to improve their operations and gain an advantage over their competitors.

Artificial neural network and deep learning technologies are evolving quickly, primarily because AI can process large amounts of data much faster and make predictions more accurately than is humanly possible.

While the huge volume of data being created on a daily basis would bury a human researcher, AI applications that use machine learning can take that data and quickly turn it into actionable information. As of this writing, the primary disadvantage of using AI is that it is expensive to process the large amounts of data that AI programming requires.


AI can be categorized as either weak or strong.

Arend Hintze, an assistant professor of integrative biology and computer science and engineering at Michigan State University, explained in a 2016 article that AI can be categorized into four types, beginning with the task-specific intelligent systems in wide use today and progressing to sentient systems, which do not yet exist. The categories are as follows: reactive machines, which respond to inputs but retain no memory of past experience; limited memory systems, which use past observations to inform decisions; theory of mind systems, which would understand emotions and intentions; and self-aware systems, which would possess a sense of self. The last two categories remain hypothetical.

AI is incorporated into a variety of different types of technology. Here are six examples:

Artificial intelligence has made its way into a wide variety of markets. Here are nine examples.

AI in healthcare. The biggest bets are on improving patient outcomes and reducing costs. Companies are applying machine learning to make better and faster diagnoses than humans. One of the best-known healthcare technologies is IBM Watson. It understands natural language and can respond to questions asked of it. The system mines patient data and other available data sources to form a hypothesis, which it then presents with a confidence scoring schema. Other AI applications include using online virtual health assistants and chatbots to help patients and healthcare customers find medical information, schedule appointments, understand the billing process and complete other administrative processes. An array of AI technologies is also being used to predict, fight and understand pandemics such as COVID-19.

AI in business. Machine learning algorithms are being integrated into analytics and customer relationship management (CRM) platforms to uncover information on how to better serve customers. Chatbots have been incorporated into websites to provide immediate service to customers. Automation of job positions has also become a talking point among academics and IT analysts.

AI in education. AI can automate grading, giving educators more time. It can assess students and adapt to their needs, helping them work at their own pace. AI tutors can provide additional support to students, ensuring they stay on track. And it could change where and how students learn, perhaps even replacing some teachers.

AI in finance. AI in personal finance applications, such as Intuit Mint or TurboTax, is disrupting financial institutions. Applications such as these collect personal data and provide financial advice. Other programs, such as IBM Watson, have been applied to the process of buying a home. Today, artificial intelligence software performs much of the trading on Wall Street.

AI in law. The discovery process -- sifting through documents -- in law is often overwhelming for humans. Using AI to help automate the legal industry's labor-intensive processes is saving time and improving client service. Law firms are using machine learning to describe data and predict outcomes, computer vision to classify and extract information from documents and natural language processing to interpret requests for information.

AI in manufacturing. Manufacturing has been at the forefront of incorporating robots into the workflow. For example, the industrial robots that were at one time programmed to perform single tasks and were separated from human workers increasingly function as cobots: smaller, multitasking robots that collaborate with humans and take on responsibility for more parts of the job in warehouses, on factory floors and in other workspaces.

AI in banking. Banks are successfully employing chatbots to make their customers aware of services and offerings and to handle transactions that don't require human intervention. AI virtual assistants are being used to improve and cut the costs of compliance with banking regulations. Banking organizations are also using AI to improve their decision-making for loans, and to set credit limits and identify investment opportunities.

AI in transportation. In addition to AI's fundamental role in operating autonomous vehicles, AI technologies are used in transportation to manage traffic, predict flight delays, and make ocean shipping safer and more efficient.

Security. AI and machine learning are at the top of the buzzword list security vendors use today to differentiate their offerings. Those terms also represent truly viable technologies. Organizations use machine learning in security information and event management (SIEM) software and related areas to detect anomalies and identify suspicious activities that indicate threats. By analyzing data and using logic to identify similarities to known malicious code, AI can provide alerts to new and emerging attacks much sooner than human employees and previous technology iterations. The maturing technology is playing a big role in helping organizations fight off cyber attacks.

Some industry experts believe the term artificial intelligence is too closely linked to popular culture, and this has caused the general public to have improbable expectations about how AI will change the workplace and life in general.

While AI tools present a range of new functionality for businesses, the use of artificial intelligence also raises ethical questions because, for better or worse, an AI system will reinforce what it has already learned.

This can be problematic because machine learning algorithms, which underpin many of the most advanced AI tools, are only as smart as the data they are given in training. Because a human being selects what data is used to train an AI program, the potential for machine learning bias is inherent and must be monitored closely.

Anyone looking to use machine learning as part of real-world, in-production systems needs to factor ethics into their AI training processes and strive to avoid bias. This is especially true when using AI algorithms that are inherently unexplainable in deep learning and generative adversarial network (GAN) applications.
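
One simple check that follows from this, shown below as an illustrative sketch, is to compare the rate of favorable outcomes across groups in a model's decisions, so that skew inherited from the training data shows up as a measurable gap. The decision records and group labels are invented, and real bias audits use richer metrics and domain review rather than a single number.

```python
from collections import defaultdict

# Hypothetical model decisions paired with a sensitive attribute.
decisions = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "A", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]

totals, approvals = defaultdict(int), defaultdict(int)
for d in decisions:
    totals[d["group"]] += 1
    approvals[d["group"]] += d["approved"]

rates = {g: approvals[g] / totals[g] for g in totals}
print("approval rate per group:", rates)
# A large gap is a signal to go back and examine the training data and features.
print("demographic parity gap:", max(rates.values()) - min(rates.values()))
```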

Explainability is a potential stumbling block to using AI in industries that operate under strict regulatory compliance requirements. For example, financial institutions in the United States operate under regulations that require them to explain their credit-issuing decisions. When a decision to refuse credit is made by AI programming, however, it can be difficult to explain how the decision was arrived at because the AI tools used to make such decisions operate by teasing out subtle correlations between thousands of variables. When the decision-making process cannot be explained, the program may be referred to as black box AI.

Despite potential risks, there are currently few regulations governing the use of AI tools, and where laws do exist, they typically pertain to AI indirectly. For example, as previously mentioned, United States Fair Lending regulations require financial institutions to explain credit decisions to potential customers. This limits the extent to which lenders can use deep learning algorithms, which by their nature are opaque and lack explainability.

The European Union's General Data Protection Regulation (GDPR) puts strict limits on how enterprises can use consumer data, which impedes the training and functionality of many consumer-facing AI applications.

In October 2016, the National Science and Technology Council issued a report examining the potential role governmental regulation might play in AI development, but it did not recommend specific legislation be considered.

Crafting laws to regulate AI will not be easy, in part because AI comprises a variety of technologies that companies use for different ends, and partly because regulations can come at the cost of AI progress and development. The rapid evolution of AI technologies is another obstacle to forming meaningful regulation of AI. Technology breakthroughs and novel applications can make existing laws instantly obsolete. For example, existing laws regulating the privacy of conversations and recorded conversations do not cover the challenge posed by voice assistants like Amazon's Alexa and Apple's Siri that gather but do not distribute conversation -- except to the companies' technology teams which use it to improve machine learning algorithms. And, of course, the laws that governments do manage to craft to regulate AI don't stop criminals from using the technology with malicious intent.

The terms AI and cognitive computing are sometimes used interchangeably, but, generally speaking, the label AI is used in reference to machines that replace human intelligence by simulating how we sense, learn, process and react to information in the environment.

The label cognitive computing is used in reference to products and services that mimic and augment human thought processes.

The concept of inanimate objects endowed with intelligence has been around since ancient times. The Greek god Hephaestus was depicted in myths as forging robot-like servants out of gold. Engineers in ancient Egypt built statues of gods animated by priests. Throughout the centuries, thinkers from Aristotle to the 13th-century Spanish theologian Ramon Llull to René Descartes and Thomas Bayes used the tools and logic of their times to describe human thought processes as symbols, laying the foundation for AI concepts such as general knowledge representation.

The late 19th and first half of the 20th centuries brought forth the foundational work that would give rise to the modern computer. In 1836, Cambridge University mathematician Charles Babbage and Augusta Ada Byron, Countess of Lovelace, invented the first design for a programmable machine.

1940s. Princeton mathematician John von Neumann conceived the architecture for the stored-program computer -- the idea that a computer's program and the data it processes can be kept in the computer's memory. And Warren McCulloch and Walter Pitts laid the foundation for neural networks.

1950s. With the advent of modern computers, scientists could test their ideas about machine intelligence. One method for determining whether a computer has intelligence was devised by the British mathematician and World War II code-breaker Alan Turing. The Turing Test focused on a computer's ability to fool interrogators into believing its responses to their questions were made by a human being.

1956. The modern field of artificial intelligence is widely cited as starting this year during a summer conference at Dartmouth College. The conference was attended by 10 luminaries in the field, including AI pioneers Marvin Minsky, Oliver Selfridge and John McCarthy, who is credited with coining the term artificial intelligence. Also in attendance were Allen Newell, a computer scientist, and Herbert A. Simon, an economist, political scientist and cognitive psychologist, who presented their groundbreaking Logic Theorist, a computer program capable of proving certain mathematical theorems and referred to as the first AI program.

1950s and 1960s. In the wake of the Dartmouth College conference, leaders in the fledgling field of AI predicted that a man-made intelligence equivalent to the human brain was around the corner, attracting major government and industry support. Indeed, nearly 20 years of well-funded basic research generated significant advances in AI: For example, in the late 1950s, Newell and Simon published the General Problem Solver (GPS) algorithm, which fell short of solving complex problems but laid the foundations for developing more sophisticated cognitive architectures; McCarthy developed Lisp, a language for AI programming that is still used today. In the mid-1960s MIT Professor Joseph Weizenbaum developed ELIZA, an early natural language processing program that laid the foundation for today's chatbots.

1970s and 1980s. But the achievement of artificial general intelligence proved elusive, not imminent, hampered by limitations in computer processing and memory and by the complexity of the problem. Government and corporations backed away from their support of AI research, leading to a fallow period lasting from 1974 to 1980 and known as the first "AI Winter." In the 1980s, research on deep learning techniques and industry's adoption of Edward Feigenbaum's expert systems sparked a new wave of AI enthusiasm, only to be followed by another collapse of government funding and industry support. The second AI winter lasted until the mid-1990s.

1990s through today. Increases in computational power and an explosion of data sparked an AI renaissance in the late 1990s that has continued to present times. The latest focus on AI has given rise to breakthroughs in natural language processing, computer vision, robotics, machine learning, deep learning and more. Moreover, AI is becoming ever more tangible, powering cars, diagnosing disease and cementing its role in popular culture. In 1997, IBM's Deep Blue defeated Russian chess grandmaster Garry Kasparov, becoming the first computer program to beat a world chess champion. Fourteen years later, IBM's Watson captivated the public when it defeated two former champions on the game show Jeopardy!. More recently, the historic defeat of 18-time World Go champion Lee Sedol by Google DeepMind's AlphaGo stunned the Go community and marked a major milestone in the development of intelligent machines.

Because hardware, software and staffing costs for AI can be expensive, many vendors are including AI components in their standard offerings or providing access to artificial intelligence as a service (AIaaS) platforms. AIaaS allows individuals and companies to experiment with AI for various business purposes and sample multiple platforms before making a commitment.

Popular AI cloud offerings include the following:


Uncovering the Secrets of the Big Bang With Artificial Intelligence – SciTechDaily

A quark gluon plasma after the collision of two heavy nuclei. Credit: TU Wien

Can machine learning be used to uncover the secrets of the quark-gluon plasma? Yes, but only with sophisticated new methods.

It could hardly be more complicated: tiny particles whir around wildly with extremely high energy, countless interactions occur in the tangled mess of quantum particles, and this results in a state of matter known as quark-gluon plasma. Immediately after the Big Bang, the entire universe was in this state; today it is produced by high-energy atomic nucleus collisions, for example at CERN.

Such processes can only be studied using high-performance computers and highly complex computer simulations whose results are difficult to evaluate. Therefore, using artificial intelligence or machine learning for this purpose seems like an obvious idea. Ordinary machine-learning algorithms, however, are not suitable for this task. The mathematical properties of particle physics require a very special structure of neural networks. At TU Wien (Vienna), it has now been shown how neural networks can be successfully used for these challenging tasks in particle physics.

"Simulating a quark-gluon plasma as realistically as possible requires an extremely large amount of computing time," says Dr. Andreas Ipp from the Institute for Theoretical Physics at TU Wien. "Even the largest supercomputers in the world are overwhelmed by this. It would therefore be desirable not to calculate every detail precisely, but to recognize and predict certain properties of the plasma with the help of artificial intelligence."

Therefore, neural networks are used, similar to those used for image recognition: Artificial neurons are linked together on the computer in a similar way to neurons in the brain and this creates a network that can recognize, for example, whether or not a cat is visible in a certain picture.

When applying this technique to the quark-gluon plasma, however, there is a serious problem: the quantum fields used to mathematically describe the particles and the forces between them can be represented in various different ways. "This is referred to as gauge symmetries," says Ipp. "The basic principle behind this is something we are familiar with: if I calibrate a measuring device differently, for example, if I use the Kelvin scale instead of the Celsius scale for my thermometer, I get completely different numbers, even though I am describing the same physical state. It's similar with quantum theories, except that there the permitted changes are mathematically much more complicated." Mathematical objects that look completely different at first glance may in fact describe the same physical state.

"If you don't take these gauge symmetries into account, you can't meaningfully interpret the results of the computer simulations," says Dr. David I. Müller. Teaching a neural network to figure out these gauge symmetries on its own would be extremely difficult. "It is much better to start out by designing the structure of the neural network in such a way that the gauge symmetry is automatically taken into account, so that different representations of the same physical state also produce the same signals in the neural network," says Müller. "That is exactly what we have now succeeded in doing: we have developed completely new network layers that automatically take gauge invariance into account." In some test applications, it was shown that these networks can actually learn much better how to deal with the simulation data of the quark-gluon plasma.
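
The published work builds gauge equivariance directly into convolutional layers on the lattice, which is well beyond a short snippet. As a much simpler stand-in for the underlying idea, that a network's output should not change when its input is transformed by a symmetry it cannot physically distinguish, the sketch below enforces invariance by averaging a toy model's output over a small discrete symmetry group (the four 90-degree rotations of a grid). The toy linear "network", grid size, and choice of symmetry group are illustrative assumptions, not the lattice gauge construction from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4))  # a toy, non-invariant "network": a single linear layer

def plain_score(field):
    # Changes when the input grid is rotated, even though the configuration is the same.
    return float(np.sum(weights * field))

def invariant_score(field):
    # Average over the symmetry group, so every transformed version of the
    # same configuration produces exactly the same output by construction.
    return float(np.mean([plain_score(np.rot90(field, k)) for k in range(4)]))

field = rng.normal(size=(4, 4))
rotated = np.rot90(field)

print("plain model disagrees:  ", plain_score(field), plain_score(rotated))
print("invariant model agrees: ", invariant_score(field), invariant_score(rotated))
```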

"With such neural networks, it becomes possible to make predictions about the system, for example, to estimate what the quark-gluon plasma will look like at a later point in time, without really having to calculate every single intermediate step in time in detail," says Andreas Ipp. "And at the same time, it is ensured that the system only produces results that do not contradict gauge symmetry, in other words, results which make sense at least in principle."

It will be some time before it is possible to fully simulate atomic core collisions at CERN with such methods, but the new type of neural networks provides a completely new and promising tool for describing physical phenomena for which all other computational methods may never be powerful enough.

Reference: "Lattice Gauge Equivariant Convolutional Neural Networks" by Matteo Favoni, Andreas Ipp, David I. Müller and Daniel Schuh, 20 January 2022, Physical Review Letters. DOI: 10.1103/PhysRevLett.128.032003


Global Artificial Intelligence (AI) in Beauty and Cosmetics Market worth US$ 13.34 Billion by 2030 – Exclusive Report by InsightAce Analytic -…

JERSEY CITY, N.J., Jan. 28, 2022 /PRNewswire/ -- The newly published report titled "Global Artificial Intelligence (A.I.) in Beauty and Cosmetics Market By Trends, Industry Competition/Company Profiles Analysis, Revenue (US$ Billions) and Forecast Till 2030." features in-depth analysis and an extensive study on the market, exploring its significant factors.

According to the latest market intelligence research report by InsightAce Analytic, the global Artificial Intelligence (A.I.) in Beauty and Cosmetics market size was valued at US$ 2.70 Billion in 2021, and it is expected to reach US$ 13.34 Billion by 2030, recording a promising CAGR of 19.7% from 2021 to 2030.

Request Sample Report: https://www.insightaceanalytic.com/request-sample/1051

The beauty and cosmetic sector has witnessed a massive upsurge in Artificial Intelligence (A.I.) in recent years. Due to advancements in A.I. technologies and the fact that beauty is a personalized and engaging market that generates a large amount of data, A.I. appears to be a solution for dealing with this complex environment, prompting beauty companies to make data-driven decisions on their strategies to remain competitive. The beauty market has changed dramatically over the last decade, owing to the introduction of new technology and a shift in customer shopping behaviors. The beauty sector has been incorporating digital transformation into its business models to give consumers individualized skin regimens and beauty products tailored to their specific needs.

Growth in the Artificial Intelligence (A.I.) in Beauty and Cosmetics Market can be attributed to the integration of advanced technologies like A.I. into the beauty and cosmetic field, providing new ways of engaging with the consumer and bringing efficiency and customised solutions to the beauty client, such as virtual try-ons and personalized products. Increased demand for beauty products and technological advancements is expected to positively impact market growth. The outbreak of Covid-19 has changed consumer purchasing patterns across the beauty and cosmetic industry due to strict lockdown situations and the practice of social distancing across various countries. However, the COVID-19 crisis is likely to create opportunities for beauty and cosmetic brands due to the growing demand for personalized beauty & cosmetic products and the rapidly evolving eCommerce sector. According to one of the fashion-industry trade journals, online sales at Sephora now account for 70-80% of total sales after the pandemic.

Preview for Detailed TOC: https://www.insightaceanalytic.com/report/global-artificial-intelligence-ai-in-beauty-and-cosmetics-market/1051

Competitive Analysis:

There has been an influx of Beauty Tech implementations on the global market with the rapid expansion of the beauty and cosmetic industry. Key companies are constantly testing and launching new features with key strategic partners and innovative services, covering the market's demands. Their focus on serving their clients' needs, both brands and end-consumers, and the constant technological development are the key factors boosting market growth. Companies like L'Oréal and PROVEN, among others, have already recognized such potential and are applying A.I. in different ways. For instance, L'Oréal is implementing A.I. strategies in its business. Following L'Oréal, PROVEN has the largest skincare database and, with input from the consumer, matches their data to create unique and customized products using A.I. mechanisms.

The prominent players in the Artificial Intelligence (A.I.) in Beauty and Cosmetics industry include:

Beiersdorf (NIVEA SKiN GUiDE), L'Oréal (Modiface, Hair Coach), Olay (Skin Care App), CRIXlabs (DBA Quantified Skin), Shiseido (Optune System), Procter & Gamble (Opte Wand), My Beauty Matches, Yours Skincare, EpigenCare Inc., mySkin, Haut.AI, Luna Fofo, Revieve, ANOKAI. CA., Pure & Mine, Youth Laboratories, Spruce Beauty, Nioxin, New Kinpo Group, Perfect Corp, Symrise (Philyra), Sephora USA, Inc. (Virtual Artist), Function of Beauty LLC, Estée Lauder, Coty Inc. (Rimmel), Givaudan, Beautystack and Polyfins Technology Inc and Other Prominent Players.

Key Industry Developments from Leading Players:

Artificial Intelligence (A.I.) in Beauty and Cosmetics Market Regional Analysis:

North America is expected to dominate the growth of A.I. in the beauty and cosmetic market due to the expansion of the beauty and cosmetic industry and prominent e-commerce companies like Amazon and Sephora. Asia Pacific region is expected to experience the fastest growth in the global A.I. in beauty and cosmetic market due to rapidly increasing consumer spending and expansion of the e-commerce sector across the region. In emerging countries like China, India, and Japan, the beauty e-commerce space is adapting to multiple models to enhance the e-commerce shopping experience for consumers.

Enquiry Before Buying: https://www.insightaceanalytic.com/enquiry-before-buying/1051

The Global Artificial Intelligence (A.I.) in Beauty and Cosmetics Market Segments

The Global Artificial Intelligence (A.I.) in Beauty and Cosmetics Market Estimates (Value US$ Billion) & Trend and Forecast Analysis, 2020 to 2030 based on Service/Product

The Global Artificial Intelligence (A.I.) in Beauty and Cosmetics Market Estimates (Value US$ Billion) & Trend and Forecast Analysis, 2020 to 2030 based on Application

The Global Artificial Intelligence (A.I.) in Beauty and Cosmetics Market Estimates (Value US$ Billion) & Trend and Forecast Analysis, 2020 to 2030 based on Region

North America Artificial Intelligence (A.I.) in Beauty and Cosmetics Market Estimates Revenue (US$ Billion) by Country, 2020 to 2030

Europe Artificial Intelligence (A.I.) in Beauty and Cosmetics Market Estimates Revenue (US$ Billion) by Country, 2020 to 2030

Asia Pacific Artificial Intelligence (A.I.) in Beauty and Cosmetics Market Estimates Revenue (US$ Billion) by Country, 2020 to 2030

Latin America Artificial Intelligence (A.I.) in Beauty and Cosmetics Market Estimates Revenue (US$ Billion) by Country, 2020 to 2030

The Middle East & Africa Artificial Intelligence (A.I.) in Beauty and Cosmetics Market Estimates Revenue (US$ Billion) by Country, 2020 to 2030

For Customized Information @ https://www.insightaceanalytic.com/report/global-artificial-intelligence-ai-in-beauty-and-cosmetics-market/1051

Other Related Reports Published by InsightAce Analytic:

Global Next-Generation Personalized Beauty Market

Global Artificial Intelligence (A.I.) In Beauty and Cosmetics Market

Global Personalized Skin Care Market

Global Bio-Based Cosmetics and Personal Care Ingredients Market

About Us:

InsightAce Analytic is a market research and consulting firm that enables clients to make strategic decisions. Our qualitative and quantitative market intelligence solutions inform the need for market and competitive intelligence to expand businesses. We help clients gain a competitive advantage by identifying untapped markets, exploring new and competing technologies, segmenting potential markets, and repositioning products. Our expertise is in providing syndicated and custom market intelligence reports with an in-depth analysis with key market insights in a timely and cost-effective manner.

Contact Us:

Priyanka Tilekar
InsightAce Analytic Pvt. Ltd.
Tel: +1 551 226 6109
Asia: +91 79 72967118
Visit: www.insightaceanalytic.com
Email: [emailprotected]
Follow Us on LinkedIn @ bit.ly/2tBXsgS
Follow Us On Facebook @ bit.ly/2H9jnDZ

SOURCE InsightAce Analytic Pvt. Ltd.


Artificial intelligence (AI): Top trending companies on Twitter Q4 2021 – Verdict

Verdict has listed five companies that trended the most in Twitter discussions related to AI, using research from GlobalData's Technology Influencer platform.

The top companies are the most mentioned companies among Twitter discussions of more than 150 AI experts tracked by GlobalData's Technology Influencer platform during the fourth quarter (Q4) of 2021.

Google discovering that AI becomes more aggressive as it becomes more advanced, Google using machine learning to improve chip design, and a new AI developed by Google interpreting and reading sign language aloud were some of the popular discussions on Alphabet Inc in Q4 2021.

Mario Pawlowski, CEO of trucking industry news and technology website iTrucker, shared a video on how AI becomes more aggressive as it becomes more advanced. Researchers at Google's AI research company DeepMind conducted a study on AI by developing an AI video game called Gathering. The game's goal was to collect more apples than the opponent. Both players were provided with lasers that could be used to incapacitate the opponent for a short time. Researchers wanted to check whether the AI would use co-operation or aggression to achieve the game's goal.

The study found that the AI did not use the laser when there were plenty of apples, but laser usage skyrocketed when the number of apples decreased. The study also found that more complex AIs used the laser more and behaved less cooperatively regardless of the number of apples, providing new insight into the nature of cooperation of both human and artificial intelligence.

Headquartered in Mountain View, California, US, Alphabet is a holding company for various subsidiaries including Google, life sciences company Verily Life Sciences, venture capital firm GV, biotech company Calico, and research and development company X. Google is the biggest subsidiary under the holding company.

Nvidia developing an AI system capable of translating text into landscape images, a robot toolbox released by the company to deepen support for AI-powered robotics in robot operating system (ROS), and Nvidia partnering with non-profit corporation Open Robotics to improve the performance of ROS 2 were some of the trending discussions around Nvidia Corp in the fourth quarter.

Dr. Ganapathi Pulipaka, chief data scientist at technology company Accenture, shared an article on Nvidia developing an AI system called GauGAN2 that can translate text into landscape images. GauGAN2 can combine techniques such as segmentation mapping, inpainting, and text-to-image production into a single tool to create photorealistic art using a combination of words and drawings.

The tool allows users to build realistic landscape photos of scenes that do not exist. It can understand interconnections between objects such as snow, trees, water, flowers, bushes, hills, and mountains. The tool can be used in movies, software, video games, product design, fashion, and interior design, the article noted.

Nvidia Corp is a technology company headquartered in Santa Clara, California, US. The company invented the graphics processing unit (GPU) in 1999, which drove the expansion of the PC gaming business and redefined modern computer graphics. Nvidia has since developed AI-powered GPUs, laptops and supercomputers to power the next generation of computing.

IBM's Squawk Bot AI assisting in the interpretation of massive financial data, IBM using AI to predict Alzheimer's disease, and IBM acquiring McDonald's AI-enabled voice system developer McD Tech Labs were some of the popular discussions on IBM in Q4 2021.

Mike Potter, senior solution architect at technology company Tech Data, shared an article on IBM's Squawk Bot AI that helps in interpreting massive amounts of financial data. The Squawk Bot uses multimodal learning to process large amounts of textual information and extract potentially relevant bits that are related to the financial performance of an entity of interest. The bot automatically detects cross-modality correlations and ranks them based on their importance, thereby providing users with the required guidance to understand the results.

Headquartered in Armonk, New York, US, IBM is a provider of cloud services and cognitive solutions. The company provides advanced AI-driven technologies and services for the financial services, healthcare, video streaming and hosting and business automation sectors through its IBM Watson division.

Microsoft providing businesses access to OpenAI's GPT-3 AI language model, Microsoft partnering with Nvidia to train the AI-powered language model Megatron-Turing Natural Language Generation (MT-NLG), and an AI solution created by Microsoft researchers to help programmers debug their code were some of the popular discussions on Microsoft in the last quarter.

Spiros Margaris, venture capitalist and board member at venture capital firm Margaris Ventures, shared an article on Microsoft allowing businesses to use OpenAI's GPT-3 AI language model as part of its suite of Azure cloud tools. GPT-3 serves as an autocomplete tool, in which the AI system attempts to finish a fragment of text, such as an email or a poem, that a person has started. It will assist users with its capacity to understand language, letting them perform other functions such as summarising papers, detecting text emotion, and generating project and story ideas.

Microsoft is a technology company headquartered in Redmond, Washington, US. The company's AI products and services include Microsoft 365, which incorporates AI tools and techniques; AI-based language models; and the Microsoft AI platform, which provides a framework for developing AI-based solutions for businesses.

Semiconductor company Intel revealing its upcoming high-performance AI accelerators, Intel developing neuromorphic chips that mimic the brain's functions, and the launch of Intel's 12th-generation Intel Core processors were some of the popular discussions on Intel in Q4 2021.

Ronald van Loon, CEO of the Intelligent World, an influencer network that connects businesses and experts to audiences, shared an article on Intel revealing its upcoming high-performance AI accelerators. The AI accelerators include Intel Nervana neural network processors (NNP), with the NNP-T for training and the NNP-I for inference. The Intel Nervana NNP-T was designed with two major real-world considerations in mind: training a network as quickly as possible, and doing so within a given power budget. The processor was designed to be flexible, maintaining a balance between computation, communication and memory, according to the article.

Intel is a semiconductor company headquartered in Santa Clara, California, US. The company offers AI-based software and hardware including visual processing units and processors such as the Xeon scalable processors and Intel oneAPI AI analytics toolkit that provides developers with Python libraries and frameworks.


Our children are growing up with AI: what you need to know – World Economic Forum

A 2019 study conducted by DataChildFutures found that 46% of participating Italian households had AI-powered speakers, while 40% of toys were connected to the internet. More recent research suggests that by 2023 more than 275 million intelligent voice assistants, such as Amazon Echo or Google Home, will be installed in homes worldwide.

As younger generations grow up interacting with AI-enabled devices, more consideration should be given to the impact of this technology on children, their rights and wellbeing.

AI-powered learning tools and approaches are often regarded as critical drivers of innovation in the education sector. Often recognized for its ability to improve the quality of learning and teaching, artificial intelligence is being used to monitor students' level of knowledge and learning habits, such as rereading and task prioritization, and ultimately to provide a personalized approach to learning.

Knewton is one example of AI-enabled learning software that identifies knowledge gaps and curates education content in line with user needs. Algorithms are also behind Microsoft's Presentation Translator that provides real-time translation in 60 different languages as a presentation is being delivered. This software helps increase access to learning, in particular for students who have a hearing impairment. AI, though not always successfully, is also increasingly used to automate grading and feedback activities.

With such broad potential for use in the education system, forecasts by Global Market Insights suggest that the market value of AI in education will reach $20 billion by 2027.

In addition to education, AI is also advancing children's health. In recent years, progress in research on the role of AI in the early detection of autism, signs of depression from children's speech and rare genetic disorders has made headlines. There are also growing examples of the deployment of AI to ensure child safety by identifying online predators and practices such as grooming and child exploitation.

Despite the positive applications of AI, there is still a lot of hesitation towards the technology in certain regions. A 2019 survey conducted by IEEE revealed that 43% of US and 33% of UK millennial parents respectively would be comfortable with leaving their children in the care of an AI-powered nurse during hospitalization. In contrast, millennial parents in China, India and Brazil are more receptive to artificial intelligence where 88%, 83% and 63% respectively would be comfortable with a virtual nurse caring for their child in hospital. Similar findings were found for the use of AI-powered robots in paediatric surgery.

Scepticism on the widespread use of AI is also present in discussions on children's privacy and safety. Children's information, including sensitive and biometric data, is captured and processed by intelligent devices, including virtual assistants and smart toys. In the wrong hands, such data could put children's safety at risk.

For example, amid security fears, in 2017 CloudPets teddy bears were withdrawn from the shelves following a data breach that exposed private information including photos and recordings of more than two million children's voice messages.

The latest figures show that 56% of 8-12-year-olds across 29 countries are involved in at least one of the world's major cyber-risks: cyberbullying, video-game addiction, online sexual behaviour or meeting with strangers encountered on the web.

Using the Forum's platform to accelerate its work globally, #DQEveryChild, an initiative to increase the digital intelligence quotient (DQ) of children aged 8-12, has reduced cyber-risk exposure by 15%.

In March 2019, the DQ Global Standards Report 2019 was launched, the first attempt to define a global standard for digital literacy, skills and readiness across the education and technology sectors.

[Figure: The 8 Digital Citizenship Skills every child needs. Image: DQ Institute]

Our System Initiative on Shaping the Future of Media, Information and Entertainment has brought together key stakeholders to ensure better digital intelligence for children worldwide. Find out more about DQ Citizenship in our Impact Story.

Serious concerns have also been raised over the use of children's data, such as juvenile records in AI systems, to predict future criminal behaviour and recidivism. Other than posing a threat to privacy, civil society representatives and activists have warned against possible discrimination, bias and unfair treatment.

To ensure that AI is child-centred, decision-makers and tech innovators must prioritize children's rights and wellbeing when designing and developing AI systems. UNICEF and OHCHR have been particularly vocal in this regard. As part of its AI for Children project, UNICEF has worked closely with the World Economic Forum to develop policy guidance on artificial intelligence for children featuring a set of recommendations for building AI policies and systems that, among other things, uphold children's rights to privacy and data protection.

As part of its Generation AI initiative and conversations on global standards for children and AI, the World Economic Forum is also spearheading the Smart Toys Awards to maximize the learning opportunities of smart toys and minimize the risks they pose to children and their safety.

[Figure: Rate of automation in the workforce. Image: World Economic Forum]

Estimates suggest that, by 2065, 65% of children in primary school today will work in positions that have not yet been created. From a practical standpoint, AI should be incorporated into school curricula to equip future generations with coding skills and provide them with adequate AI training. At the same time, children should be taught to think critically about the technology and to inform their judgements about related threats and opportunities. Such efforts should be inclusive of all children and therefore should seek to bridge the digital literacy gap between the Global North and Global South.

More global action will be needed to ensure that children's best interests are reflected and implemented in national and international policies, design and development of AI technologies. There is no doubt that artificial intelligence will change the way children interact with their surroundings including their learning, play and development environment. However, it is our responsibility to ensure that this change becomes a force for good.

Written by

Natasa Perucica, Research and Analysis Specialist, Cybersecurity Industry Solutions, World Economic Forum

The views expressed in this article are those of the author alone and not the World Economic Forum.


Artificial intelligence in the management of NPC | CMAR – Dove Medical Press

Introduction

According to the International Agency for Research on Cancer, nasopharyngeal carcinoma (NPC) is the twenty-third most common cancer worldwide. The global number of new cases and deaths in 2020 were 133,354 and 80,008, respectively.1,2 Although it is not uncommon, it has a distinct geographical distribution: it is most prevalent in Eastern and South-Eastern Asia, which accounts for 76.9% of global cases. It was also found that almost half of the new cases occurred in China.2 Because its symptoms appear late and because of its anatomical location, NPC is difficult to detect in the early stages. Radiotherapy is the primary treatment modality, and concomitant/adjunctive chemotherapy is often needed for advanced locoregional disease.3 Furthermore, there are many organs-at-risk (OARs) nearby that are sensitive to radiation; these include the salivary glands, brainstem, optic nerves, temporal lobes and the cochlea.4 Hence, it is of interest whether the use of artificial intelligence (AI) can help improve the diagnosis, treatment process and prediction of outcomes for NPC.

With the advances in AI over the past decade, it has become pervasive in many industries, playing both major and minor roles. This includes cancer treatment, where medical professionals search for methods to utilize it to improve treatment quality. AI refers to any method that allows algorithms to mimic intelligent behavior. It has two subsets: machine learning (ML) and deep learning (DL). ML uses statistical methods to allow an algorithm to learn and improve its performance; examples include the random forest and the support vector machine. The artificial neural network (ANN) is an example of ML and is also a core part of DL.5 DL can be defined as a learning algorithm that can automatically update its parameters through multiple layers of ANN. Deep neural networks such as the convolutional neural network (CNN) and the recurrent neural network are all DL architectures.
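
To make the ML/DL distinction above concrete, the following is a minimal, purely illustrative sketch (not code from any reviewed study) that fits a random forest and a support vector machine (classical ML) and a small multi-layer ANN, the building block of DL, on synthetic feature data with scikit-learn; the dataset and settings are placeholders.

```python
# Illustrative sketch: classical ML models vs a multi-layer ANN on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for extracted imaging/clinical features (samples x features).
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "random forest (ML)": RandomForestClassifier(random_state=0),
    "support vector machine (ML)": SVC(),
    # An ANN with two hidden layers -- the multi-layer structure that DL builds on.
    "multi-layer ANN": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.3f}")
```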

Besides histological, clinical and demographic information, a wide range of data spanning genomics, proteomics, immunohistochemistry and imaging must be integrated by physicians when developing personalized treatment plans for patients. This has led to an interest in developing computational approaches that improve medical management by providing insights that enhance patient outcomes and workflow throughout a patient's journey.

Given the increased use of AI in cancer care, in this systematic literature review, papers on AI applications for NPC management were compiled and studied in order to provide an overview of the current trend. Furthermore, possible limitations discussed within the articles were explored.

A systematic literature search was conducted to retrieve all studies that used AI or its subfields in NPC management. Keywords were developed and combined using Boolean logic to produce the resulting search phrase: (artificial intelligence OR machine learning OR deep learning OR Neural Network) AND (nasopharyngeal carcinoma OR nasopharyngeal cancer). Using this search phrase, a search of research articles from the past 15 years up to March 2021 was performed on PubMed, Scopus and Embase. The results from the three databases were consolidated, and duplicates were removed. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) was followed where possible, and the PRISMA flow diagram and checklist were used as guidelines to cover the key aspects of a systematic literature review.6
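
As a small, hypothetical sketch (not the authors' actual tooling), the Boolean phrase above can be assembled programmatically and the records returned by the three databases deduplicated by normalised title; the record tuples below are invented for illustration.

```python
# Assemble the Boolean search phrase and deduplicate records across databases.
ai_terms = ["artificial intelligence", "machine learning", "deep learning", "Neural Network"]
npc_terms = ["nasopharyngeal carcinoma", "nasopharyngeal cancer"]

search_phrase = f"({' OR '.join(ai_terms)}) AND ({' OR '.join(npc_terms)})"
print(search_phrase)

# Hypothetical (title, year) records returned by PubMed, Scopus and Embase.
pubmed = [("Deep learning for NPC segmentation", 2020)]
scopus = [("Deep learning for NPC segmentation", 2020), ("Radiomics model for NPC prognosis", 2019)]
embase = [("Radiomics model for NPC prognosis", 2019)]

# Consolidate and drop duplicates using the normalised title as the key.
seen, consolidated = set(), []
for title, year in pubmed + scopus + embase:
    key = title.lower().strip()
    if key not in seen:
        seen.add(key)
        consolidated.append((title, year))
print(f"{len(consolidated)} unique records")
```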

Exclusion and inclusion criteria were determined to assess the eligibility of the retrieved publications. The articles were first screened to remove those that met the exclusion criteria: book chapters, conference reports, literature reviews, editorials, letters to the editor and case reports. In addition, articles in languages other than English or Chinese and papers with inaccessible full texts were also excluded.

The remaining studies were then filtered by reading the title and abstract to remove any articles that did not meet the inclusion criteria (application of AI or its subfields and experiments on NPC). A full-text review was further performed to confirm the eligibility of the articles based on both criteria. The process was conducted by two independent reviewers (B.B & H.C.).

Essential information from each article was extracted and placed in a data extraction table (Table 1). This included the author(s), year of publication, country, sample type, sample size, AI algorithm used, application type, study aim, performance metrics reported, results, conclusion and limitations. The AI model with the best performance metrics from each study was selected and included. Moreover, performance results were taken from evaluation on the test cohort rather than the training cohort, to avoid the inflated, overfitted estimates that arise from training and testing a model on the same dataset.
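
The held-out evaluation principle just described can be illustrated with a minimal sketch on placeholder data (not any cohort from the reviewed studies): the metric reported should come from the test cohort, since the training-cohort value is optimistically biased.

```python
# Minimal sketch: report metrics on a held-out test cohort, not the training cohort.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Placeholder cohort standing in for extracted patient features and outcomes.
X, y = make_classification(n_samples=400, n_features=15, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=1)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The training-cohort AUC overstates performance; the test-cohort AUC is what should be reported.
print("training-cohort AUC:", roc_auc_score(y_train, model.predict_proba(X_train)[:, 1]))
print("test-cohort AUC:    ", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```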

The selected articles were assessed for risk of bias and applicability using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS)-2 tool in Table 2.7 Studies with more than one section rated high or unclear were eliminated. Further quality assessment was also completed to ensure the papers met the required standard. This was performed using the guidelines for developing and reporting ML predictive models from Luo et al and Alabi et al (Table 3).8,9 The guidelines were summarised, and a mark was given for each guideline topic followed. The threshold was set at half of the maximum marks, and the scores are presented in Table 4.

Table 2 Quality Assessment via the QUADAS-2 Tool

Table 3 Quality Assessment Guidelines

The selection process was performed using the PRISMA flow diagram in Figure 1. A total of 304 papers were retrieved from the three databases. After 148 duplicates were removed, one inaccessible article was rejected. Papers that did not meet the inclusion criteria (n=59) or that met the exclusion criteria (n=20) were also filtered out. Moreover, two additional studies found in literature reviews were included, after one was removed as a duplicate and another for meeting the exclusion criteria. Finally, 78 papers were assessed for quality (Figure 1).

Figure 1 PRISMA flow diagram 2020.

Notes: Adapted from Page MJ, McKenzie JE, Bossuyt PM, et al.The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71. Creative Commons license and disclaimer available from: http://creativecommons.org/licenses/by/4.0/legalcode.6

Eighteen papers failed due to having more than one section with a high or unclear rating, leaving 60 studies to be further evaluated. The QUADAS-2 tool showed that 48.3% of the articles had an overall low risk of bias, while 98.3% of them had low concern regarding applicability (Table 2).

An additional evaluation was performed based on Table 3, which was adapted from the guidelines by Luo et al and the modified version from Alabi et al.8,9 Of the 60 relevant studies, 52 scored greater than 70% (Table 4). It should also be noted that 23 papers included the evaluation criteria items but did not fully follow the structure of the proposed guidelines.10–32 However, this only affects the ease of reading and extracting information from the articles, not their content or quality.

The characteristics of the 60 articles finally included in the current study are shown in Table 1. The articles were published in either English (n=57)10–66 or Chinese (n=3);67–69 three studies examined sites other than the NPC.10,17,34

When observing the origins of the studies, 45 were published in Asia, while Morocco and France contributed one study each. Furthermore, 13 papers were collaborative works from multiple countries. The majority of the studies were from the endemic regions.

The articles used various types of data to train the models. 66.7% (n=40) used only imaging data such as magnetic resonance imaging, computed tomography or endoscopic images.15,16,18,19,21–24,26–28,30,32,34,37–39,41–43,45–56,58–63,67,69 There were also four studies that included clinicopathological data as well as images for training models,25,31,36,40 while three other studies developed models using images, clinicopathological data, and plasma Epstein-Barr virus (EBV) DNA.29,33,35 Furthermore, four studies used treatment plans,64–66,68 while protein and microRNA expression data were each used by one study.10,44 There were also four articles that trained with both clinicopathological and plasma EBV DNA/serology data,12–14,17 while one article trained its model with clinicopathological and dosimetric data.57 Risk factors (n=2), such as demographic, medical history, familial cancer history, dietary, social and environmental factors, were also used to develop AI models.11,20

The studies could be categorized into four domains: auto-contouring (n=21),15,16,18,22,24,30–32,45–55,67,69 diagnosis (n=17),10,15,16,23,26,27,49,52,54,56–63 prognosis (n=20)12–14,17,19,25,28,29,33–44 and miscellaneous applications (n=7),11,20,21,64–66,68 which included risk factor identification, image registration and radiotherapy planning (Figure 2A). Five studies examined both diagnosis and auto-contouring simultaneously.15,16,49,52,54

Figure 2 Comparison of studies on AI application for NPC management. (A) Application types of AI and its subfields on NPC; (B) Main performance metrics of application types on NPC.

Abbreviations: AI, artificial intelligence; AUC, area under the receiver operating characteristic curve; DSC, dice similarity coefficient; ASSD, average symmetric surface distance; NPC, nasopharyngeal carcinoma.

Notes: aMore than one AI subfield (artificial intelligence, machine learning and deep learning) was used in the same study. bAuto-contouring and diagnosis accuracy values were found in the same study.54

Analyses of the purpose of application showed that auto-contouring was the only category in which DL was the most heavily used subfield (19 out of 22 instances). For the rest of the categories (NPC diagnosis, prognosis and miscellaneous applications), ML was the most common technique, accounting for more than half of the publications in each category (Figure 2A). In addition, the studies applying DL models selected in this literature review were all published from 2017 to 2021, a period with a heavier focus on experimenting with DL. The majority of the papers applying DL models used various forms of CNN (n=30),15,18,19,21–24,28–34,36,45–53,55,56,60,65,67,69 while the main ML method used was the ANN (n=12).13,16,26,42–44,54,61–64,68
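
For orientation only, the sketch below shows the generic convolution-pooling-classifier pattern that CNN-based models extend; it is not a model from any of the reviewed studies, and the input size, channel counts and class count are illustrative assumptions.

```python
# Generic CNN sketch in PyTorch (illustrative; not a published NPC model).
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)  # assumes 64x64 inputs

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

# One forward pass on a dummy batch of 64x64 single-channel "image slices".
model = TinyCNN()
logits = model(torch.randn(4, 1, 64, 64))
print(logits.shape)  # torch.Size([4, 2])
```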

The primary metrics reported were the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, specificity, dice similarity coefficient (DSC) and average symmetric surface distance (ASSD), as shown in Figure 2B.

AUC was used to evaluate the models' capabilities in 25 papers, with the majority measuring prognostic (n=13)12–14,19,28,33–35,37,39,40,42,44 and diagnostic abilities (n=10).15,23,26,27,49,56–60 Similarly, accuracy was the parameter most frequently reported in the diagnosis and prognosis applications: 11 and 5 out of 20 articles, respectively.10,12,15,26–28,35,43,44,49,54,56,60–63 Sensitivity was the most commonly studied parameter for diagnostic performance: 15 out of 23 papers.10,15,16,23,26,27,49,52,54,56,59–63 Specificity was only reported for prognosis (n=7)12,14,28,34,39,40,43 and diagnosis (n=15).10,15,16,23,26,27,49,52,54,56,59–63 In addition, the DSC (n=20)15,18,22,24,30–32,45–53,55,65,67,69 and ASSD (n=10)18,22,24,31,32,45,46,48,51,69 were the primary metrics reported in studies on auto-contouring (Figure 2B).
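
A brief sketch of two of the metrics named above, computed on toy arrays (the values are not taken from any reviewed study): the DSC for a pair of binary contour masks, and sensitivity/specificity from the confusion-matrix counts.

```python
# Sketch of DSC and sensitivity/specificity on toy binary masks.
import numpy as np

def dice_similarity_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSC = 2*|A intersect B| / (|A| + |B|) for binary masks."""
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def sensitivity_specificity(pred: np.ndarray, truth: np.ndarray):
    tp = np.logical_and(pred == 1, truth == 1).sum()
    tn = np.logical_and(pred == 0, truth == 0).sum()
    fp = np.logical_and(pred == 1, truth == 0).sum()
    fn = np.logical_and(pred == 0, truth == 1).sum()
    return tp / (tp + fn), tn / (tn + fp)

# Toy 8x8 "contours": ground truth vs an imperfect prediction shifted by one row.
truth = np.zeros((8, 8), dtype=int); truth[2:6, 2:6] = 1
pred = np.zeros((8, 8), dtype=int); pred[3:7, 2:6] = 1

print("DSC:", dice_similarity_coefficient(pred, truth))            # 0.75
print("sensitivity, specificity:", sensitivity_specificity(pred, truth))
```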

Performance metrics with five or more instances per application method were presented in a boxplot (Figure 3). The median AUC, accuracy, sensitivity and specificity for prognosis were 0.8000, 0.8300, 0.8003 and 0.8070, respectively, while their ranges were 0.6330–0.9510, 0.7559–0.9090, 0.3440–0.9200 and 0.5200–1.000, respectively. For diagnosis, the median AUC was 0.9300, while the median accuracy was 0.9150. In addition, the median sensitivity and specificity were 0.9307 and 0.9413, respectively. The ranges for diagnosis AUC, accuracy, sensitivity and specificity were 0.6900–0.9900, 0.6500–0.9777, 0.0215–1.000 and 0.8000–1.000, respectively. The median DSC value for auto-contouring was 0.7530, while the range was 0.6200–0.9340. Furthermore, the median ASSD for auto-contouring was 1.7350 mm, and the minimum and maximum values found in the studies were 0.5330 mm and 3.4000 mm, respectively.

Figure 3 Performance metric boxplots of AI application types on NPC. (A) Prognosis and diagnosis: accuracy, AUC, sensitivity and specificity metric; (B) Auto-contouring: DSC metric; (C) Auto-contouring: ASSD metric.

Abbreviations: AI, artificial intelligence; ASSD, average symmetric surface distance; AUC, area under the receiver operating characteristic curve; DSC, dice similarity coefficient; NPC, nasopharyngeal carcinoma.

Publications on auto-contouring experimented with segmenting gross tumor volumes, clinical target volumes, OARs and primary tumor volumes. The target most often delineated was the gross tumor volume (n=7),30,48,49,51,53,55,69 followed by the OARs (n=3).50,52,67 The clinical target volume and the primary tumor volume were studied in two articles and one article, respectively.46,55,56 However, nine articles did not mention the specific target volume contoured.15,16,18,22,24,31,32,47,54 Two out of three articles reported that the DSC for delineating the optic nerves was substantially lower than for the other OARs.52,67 In the remaining paper, although segmentation of the optic nerve was not the worst, the three OARs it tested, which included the optic nerves, were reported to be especially challenging to contour.50 This is because of the low soft-tissue contrast in computed tomography images and their diverse morphological characteristics. When analyzing the OARs, automatic delineation of the eyes yielded the best DSC. Furthermore, apart from the spinal cord, optic nerve and optic chiasm, the AI models achieved DSC values greater than 0.8 when contouring OARs.50,52,67

As for the detection of NPC, six papers compared the performance of AI and humans. Two of them found that AI had better diagnostic capabilities than humans (oncologists and experienced radiologists),15,49 while another two reported that AI performed similarly to ear, nose and throat specialists.16,62 However, the last two papers found that the outcome depends on the experience of the clinician: senior-level clinicians performed better than the AI, while junior-level clinicians performed worse.23,60 This is because the variations in possible sizes, shapes, locations and image intensities of NPC make the diagnosis difficult to determine. These factors are particularly challenging for clinicians with less experience, which suggests that AI diagnostic tools could support junior-level clinicians.

Within the 17 papers experimenting with the diagnostic application of AI, three articles analyzed radiation-induced injury diagnosis.27,57,58 Two of these were concerned with radiation-induced temporal lobe injury,57,58 while the remaining one predicted the fibrosis level of neck muscles after radiotherapy.27 It was suggested that, through early detection and prediction of radiation-induced injuries, preventive measures could be taken to minimize the side effects.

For studies on NPC prognosis, 11 out of 20 publications focused on predicting treatment outcomes, with the majority including disease-free survival as one of the study objectives.12,13,17,19,29,33,36,39–42 The rest studied treatment response prediction (n=2),35,43 prediction of patients' risk of survival (n=5),14,25,37,38,44 and T staging prediction and the prediction of distant metastasis (n=2).28,34 This demonstrated the versatility of AI across different functionalities. The performances of the models are reported in Table 1, and the main metric analyzed was the AUC (13 of the 25 AUC-reporting articles; Figure 2B).

In addition to the above aspects, AI was also used for risk factor identification (n=2),11,20 image registration (n=1)21 and dose/dose-volume histogram (DVH) distribution prediction (n=4).64–66,68 In particular, dose/DVH distribution prediction was frequently used for treatment planning. A better understanding of the doses given to the target and OARs can help clinicians produce a more individualized treatment plan with better consistency and a shorter planning duration. However, further development is required to obtain plan qualities similar to those created by people: one paper's model matched the quality of manual planning by an experienced physicist,64 but another study using a different model was unable to match the plan quality of even a junior physicist.68

As is evident in this systematic review, interest in applying AI to the clinical management of NPC is growing rapidly. A large proportion of the articles collected were published from 2019 to 2021 (n=45) compared with 2010 to 2018 (n=15).

A heavier focus is also placed on specific subfields of AI, namely ML and DL. There are only three reports on AI in general, while there are 31 studies on ML and 37 on DL. The choice of AI subfield sometimes depends on the task. For example, 86% of the papers on NPC auto-contouring focused on DL (n=19), whereas, although the majority of the studies in the other applications used ML, the subfields were more evenly distributed (Figure 2A). The marked difference in the type of AI used in auto-contouring may be due to the capability of the algorithms and the nature of the data. The medical images acquired have many factors affecting auto-contouring quality, including varying tumour sizes and shapes, image resolution, contrast between regions, noise and a lack of consistency in data acquisition across different institutions.70 Because of these challenges, ML-based algorithms have difficulty performing automated segmentation of NPC, as time-consuming image processing is required before training. Furthermore, handcrafted features are necessary to precisely contour each organ or tumour, as there are significant variations in size and shape for NPC. DL, on the other hand, does not have this issue, as such models can process the raw data directly without the need for handcrafted features.70

The ANN is the backbone of DL, as DL algorithms are ANNs with multiple (two or more) hidden layers. In the development of AI applications for NPC, 80% of the studied articles incorporated either an ANN or a DL technique in their models12,13,15–19,21–26,28–34,36,38,39,42–56,60–69 because neural networks are generally better suited to image recognition. However, one study cautioned that ANNs were not necessarily better than other ML models in NPC identification.61 Hence, even though DL-based models and ANNs should be considered the primary development focus, other ML techniques should still not be neglected.

Based on the literature collected, the integration of AI applications in each category is beneficial to the practitioner. Automated contouring by AI not only makes contouring less time-consuming for clinicians,46,51,53,64 it can also help to improve the user's accuracy.51 Similarly, AI can be used to reduce the treatment planning time for radiotherapy,64 thus improving the efficiency and effectiveness of the radiotherapy planning process.

For some NPC studies, additional features were extracted from images and parameters to further improve the performance of the models. However, it should be noted that not all features are suitable, as some features have a more significant impact on a model's performance than others.40,57,58,61 Therefore, feature selection should be considered where possible.

In its current state, AI cannot yet replace humans in the most complex and time-consuming tasks, as the articles that compared the performance of their developed models with medical professionals showed conflicting results. One reason is that the experience of the clinician is an important factor in such comparisons. The models developed by Chuang et al and Diao et al performed better than junior-level professionals, but worse than more experienced clinicians.23,60 One article even showed that an AI model had a lower capability than a junior physicist.68 Furthermore, the quality of the training data and the experience of the AI developers are critical.

The review revealed that AI in its current state still has several limitations. The first concern is the uncertainty regarding the generalizability of the models, because the datasets of many studies are retrospective and single-institutional in nature.15,19,28,33,35–38,41,48,57–59 Such a dataset may not represent the true population and may only represent a population subgroup or a region. Hence, this reduces the applicability of the models and affects their performance when applied to other datasets. Another reason is the difference in scan protocols between institutions: variations in tissue contrast or field of view may affect performance, as the model was not trained under the same conditions.45,56 Therefore, consistency of scan protocols among different institutions is important to facilitate AI model training and validation.

Another limitation was the small amount of data used to train the models: 33% (n=20) of the articles chosen had 150 or fewer total samples for both training and testing the model. This was not only because the articles were usually based on single-centre data, but also because NPC is less common than other cancers. This particularly affects DL-based models, as they rely on a much larger dataset to achieve their potential compared with ML models; overfitting is likely to occur when data are limited, so data augmentation is often used to increase the dataset size. In addition, some studies had patient selection bias, while others had concerns about not implementing multi-modality inputs into the training model (Table 1).
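
The data augmentation idea mentioned above can be sketched in a few lines; the snippet below is illustrative only (flips and 90-degree rotations on random arrays standing in for image slices), not the augmentation pipeline of any particular study.

```python
# Minimal augmentation sketch: enlarge a small image dataset with flips and rotations.
import numpy as np

def augment(image: np.ndarray) -> list:
    """Return simple geometric variants of a 2D image (or mask)."""
    variants = [image, np.fliplr(image), np.flipud(image)]
    variants += [np.rot90(image, k) for k in (1, 2, 3)]
    return variants

# A toy dataset of 10 "slices" grows sixfold without any new patients being scanned.
dataset = [np.random.rand(64, 64) for _ in range(10)]
augmented = [variant for img in dataset for variant in augment(img)]
print(len(dataset), "->", len(augmented))  # 10 -> 60
```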

Future work should address these issues when developing new models. Possible solutions include incorporating other datasets or cooperating with other institutions for external validation or to expand the dataset, both of which were lacking in most of the analysed papers in this review. The former can boost generalizability and avoid patient selection bias, while the latter can increase the capability of the AI models by providing more training samples. Other methods to expand datasets have also been explored, one of which is the use of big data at a much larger scale. Big data can be defined as the vast data generated by technology and the internet of things, allowing easier access to information.71 In the healthcare sector, it will allow easier access to an abundance of medical data, which will facilitate AI model training. However, with the large-scale collection of data, privacy protection becomes a serious challenge. Therefore, future studies are required to investigate how to implement it.

The performance of the AI models could also be improved by increasing the amount of data and diversifying it with data augmentation techniques, which were used in some of the studies. However, it should be noted that with an increase in training samples, more data labelling will be required, making the process more time-consuming. Hence, one study proposed the use of continual learning, which it found to boost the model's performance while reducing the labelling effort.47 However, continual learning is susceptible to catastrophic forgetting, which is a long-standing and highly challenging issue.72 Thus, further investigation into methods to resolve this problem would be required to make it easier to implement in other research settings.

There are several limitations in this literature review. The metric performance results extracted from the publications were insufficient to perform a meta-analysis. Hence, the insight obtained from this review is not comprehensive enough. The quality of the included studies was also not consistent, which may affect the analysis performed.

There is growing evidence that AI can be applied in various situations, particularly as a supporting tool in prognostic, diagnostic and auto-contouring applications, and to provide patients with a more individualized treatment plan. DL was found to be the most frequently used AI subfield and usually obtained good results compared with other methods. However, limited datasets and generalizability are key challenges that need to be overcome to further improve the performance and accessibility of AI models. Nevertheless, studies on AI demonstrated highly promising potential in supporting medical professionals in the management of NPC; therefore, more concerted effort in its swift development is warranted.

Dr Nabil F Saba reports personal fees from Merck, GSK, Pfizer, UpToDate, and Springer, outside the submitted work, and research funding from BMS and Exelixis. Professor Raymond KY Tsang reports non-financial support from Atos Medical Inc., outside the submitted work. The authors report no other conflicts of interest in this work.

1. Sung H, Ferlay J, Siegel RL, et al. Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin. 2021;71(3):209–249. doi:10.3322/caac.21660

2. Ferlay J, Ervik M, Lam F, et al. Global cancer observatory: cancer today; 2020. Available from: https://gco.iarc.fr/today. Accessed June 4, 2021.

3. Lee AWM, Ma BBY, Ng WT, Chan ATC. Management of nasopharyngeal carcinoma: current practice and future perspective. J Clin Oncol. 2015;33(29):3356–3364. doi:10.1200/JCO.2015.60.9347

4. Chan JW, Parvathaneni U, Yom SS. Reducing radiation-related morbidity in the treatment of nasopharyngeal carcinoma. Future Oncol. 2017;13(5):425–431. doi:10.2217/fon-2016-0410

5. Shimizu H, Nakayama KI. Artificial intelligence in oncology. Cancer Sci. 2020;111(5):1452–1460. doi:10.1111/cas.14377

6. Page MJ, McKenzie JE, Bossuyt PM, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71. doi:10.1136/bmj.n71

7. Whiting PF, Rutjes AWS, Westwood ME. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med. 2011;155(8):529–536. doi:10.7326/0003-4819-155-8-201110180-00009

8. Luo W, Phung D, Tran T, et al. Guidelines for developing and reporting machine learning predictive models in biomedical research: a multidisciplinary view. J Med Internet Res. 2016;18(12):e323. doi:10.2196/jmir.5870

9. Alabi RO, Youssef O, Pirinen M, et al. Machine learning in oral squamous cell carcinoma: current status, clinical concerns and prospects for future: a systematic review. Artif Intell Med. 2021;115:102060. doi:10.1016/j.artmed.2021.102060

10. Wang HQ, Zhu HL, Cho WCS, Yip TTC, Ngan RKC, Law SCK. Method of regulatory network that can explore protein regulations for disease classification. Artif Intell Med. 2010;48(2):119–127. doi:10.1016/j.artmed.2009.07.011

11. Aussem A, de Morais SR, Corbex M. Analysis of nasopharyngeal carcinoma risk factors with Bayesian networks. Artif Intell Med. 2012;54(1):53–62. doi:10.1016/j.artmed.2011.09.002

12. Kumdee O, Bhongmakapat T, Ritthipravat P. Prediction of nasopharyngeal carcinoma recurrence by neuro-fuzzy techniques. Fuzzy Sets Syst. 2012;203:95–111. doi:10.1016/j.fss.2012.03.004

13. Ritthipravat P, Kumdee O, Bhongmakap T. Efficient missing data technique for prediction of nasopharyngeal carcinoma recurrence. Inf Technol J. 2013;12:1125–1133. doi:10.3923/itj.2013.1125.1133

14. Jiang R, You R, Pei X-Q, et al. Development of a ten-signature classifier using a support vector machine integrated approach to subdivide the M1 stage into M1a and M1b stages of nasopharyngeal carcinoma with synchronous metastases to better predict patients' survival. Oncotarget. 2016;7(3):3645–3657. doi:10.18632/oncotarget.6436

15. Li C, Jing B, Ke L, et al. Development and validation of an endoscopic images-based deep learning model for detection with nasopharyngeal malignancies. Cancer Commun. 2018;38(1):59. doi:10.1186/s40880-018-0325-9

16. Mohammed MA, Abd Ghani MK, Arunkumar N, Mostafa SA, Abdullah MK, Burhanuddin MA. Trainable model for segmenting and identifying Nasopharyngeal carcinoma. Comput Electr Eng. 2018;71:372–387. doi:10.1016/j.compeleceng.2018.07.044

17. Jing B, Zhang T, Wang Z, et al. A deep survival analysis method based on ranking. Artif Intell Med. 2019;98:1–9. doi:10.1016/j.artmed.2019.06.001

18. Ma Z, Zhou S, Wu X, et al. Nasopharyngeal carcinoma segmentation based on enhanced convolutional neural networks using multi-modal metric learning. Phys Med Biol. 2019;64(2):025005. doi:10.1088/1361-6560/aaf5da

19. Peng H, Dong D, Fang M-J, et al. Prognostic value of deep learning PET/CT-based radiomics: potential role for future individual induction chemotherapy in advanced nasopharyngeal carcinoma. Clin Cancer Res. 2019;25(14):4271–4279. doi:10.1158/1078-0432.CCR-18-3065

20. Rehioui H, Idrissi A. On the use of clustering algorithms in medical domain. Int J Artif Intell. 2019;17:236.

21. Zou M, Hu J, Zhang H, et al. Rigid medical image registration using learning-based interest points and features. Comput Mater Continua. 2019;60(2):511–525. doi:10.32604/cmc.2019.05912

22. Chen H, Qi Y, Yin Y, et al. MMFNet: a multi-modality MRI fusion network for segmentation of nasopharyngeal carcinoma. Neurocomputing. 2020;394:27–40. doi:10.1016/j.neucom.2020.02.002

23. Chuang W-Y, Chang S-H, Yu W-H, et al. Successful identification of nasopharyngeal carcinoma in nasopharyngeal biopsies using deep learning. Cancers (Basel). 2020;12(2):507. doi:10.3390/cancers12020507

24. Guo F, Shi C, Li X, Wu X, Zhou J, Lv J. Image segmentation of nasopharyngeal carcinoma using 3D CNN with long-range skip connection and multi-scale feature pyramid. Soft Comput. 2020;24(16):12671–12680. doi:10.1007/s00500-020-04708-y

25. Jing B, Deng Y, Zhang T, et al. Deep learning for risk prediction in patients with nasopharyngeal carcinoma using multi-parametric MRIs. Comput Methods Programs Biomed. 2020;197:105684. doi:10.1016/j.cmpb.2020.105684

26. Mohammed MA, Abd Ghani MK, Arunkumar N, et al. Decision support system for nasopharyngeal carcinoma discrimination from endoscopic images using artificial neural network. J Supercomput. 2020;76(2):1086–1104. doi:10.1007/s11227-018-2587-z

27. Wang J, Liu R, Zhao Y, et al. A predictive model of radiation-related fibrosis based on the radiomic features of magnetic resonance imaging and computed tomography. Transl Cancer Res. 2020;9(8):4726–4738. doi:10.21037/tcr-20-751

28. Yang Q, Guo Y, Ou X, Wang J, Hu C. Automatic T staging using weakly supervised deep learning for nasopharyngeal carcinoma on MR images. J Magn Reson Imaging. 2020;52(4):1074–1082. doi:10.1002/jmri.27202

29. Zhong L-Z, Fang X-L, Dong D, et al. A deep learning MR-based radiomic nomogram may predict survival for nasopharyngeal carcinoma patients with stage T3N1M0. Radiother Oncol. 2020;151:1–9. doi:10.1016/j.radonc.2020.06.050

30. Bai X, Hu Y, Gong G, Yin Y, Xia Y. A deep learning approach to segmentation of nasopharyngeal carcinoma using computed tomography. Biomed Signal Process. 2021;64:102246. doi:10.1016/j.bspc.2020.102246

31. Cai M, Wang J, Yang Q, et al. Combining images and t-staging information to improve the automatic segmentation of nasopharyngeal carcinoma tumors in MR images. IEEE Access. 2021;9:21323–21331. doi:10.1109/ACCESS.2021.3056130

32. Tang P, Zu C, Hong M, et al. DA-DSUnet: dual attention-based dense SU-net for automatic head-and-neck tumor segmentation in MRI images. Neurocomputing. 2021;435:103–113. doi:10.1016/j.neucom.2020.12.085

33. Zhang L, Wu X, Liu J, et al. MRI-based deep-learning model for distant metastasis-free survival in locoregionally advanced Nasopharyngeal carcinoma. J Magn Reson Imaging. 2021;53(1):167–178. doi:10.1002/jmri.27308

34. Wu X, Dong D, Zhang L, et al. Exploring the predictive value of additional peritumoral regions based on deep learning and radiomics: a multicenter study. Med Phys. 2021;48(5):2374–2385. doi:10.1002/mp.14767

35. Zhao L, Gong J, Xi Y, et al. MRI-based radiomics nomogram may predict the response to induction chemotherapy and survival in locally advanced nasopharyngeal carcinoma. Eur Radiol. 2020;30(1):537–546. doi:10.1007/s00330-019-06211-x

36. Zhang F, Zhong L-Z, Zhao X, et al. A deep-learning-based prognostic nomogram integrating microscopic digital pathology and macroscopic magnetic resonance images in nasopharyngeal carcinoma: a multi-cohort study. Ther Adv Med Oncol. 2020;12:1758835920971416. doi:10.1177/1758835920971416

37. Xie C, Du R, Ho JWK, et al. Effect of machine learning re-sampling techniques for imbalanced datasets in 18F-FDG PET-based radiomics model on prognostication performance in cohorts of head and neck cancer patients. Eur J Nucl Med Mol Imaging. 2020;47(12):2826–2835. doi:10.1007/s00259-020-04756-4

38. Liu K, Xia W, Qiang M, et al. Deep learning pathological microscopic features in endemic nasopharyngeal cancer: prognostic value and protentional role for individual induction chemotherapy. Cancer Med. 2020;9(4):1298–1306. doi:10.1002/cam4.2802

39. Cui C, Wang S, Zhou J, et al. Machine learning analysis of image data based on detailed MR image reports for nasopharyngeal carcinoma prognosis. Biomed Res Int. 2020;2020:8068913. doi:10.1155/2020/8068913

40. Du R, Lee VH, Yuan H, et al. Radiomics model to predict early progression of nonmetastatic nasopharyngeal carcinoma after intensity modulation radiation therapy: a multicenter study. Radiology. 2019;1(4):e180075. doi:10.1148/ryai.2019180075

41. Zhang B, Tian J, Dong D, et al. Radiomics features of multiparametric MRI as novel prognostic factors in advanced nasopharyngeal carcinoma. Clin Cancer Res. 2017;23(15):4259–4269. doi:10.1158/1078-0432.CCR-16-2910

42. Zhang B, He X, Ouyang F, et al. Radiomic machine-learning classifiers for prognostic biomarkers of advanced nasopharyngeal carcinoma. Cancer Lett. 2017;403:21–27. doi:10.1016/j.canlet.2017.06.004

43. Liu J, Mao Y, Li Z, et al. Use of texture analysis based on contrast-enhanced MRI to predict treatment response to chemoradiotherapy in nasopharyngeal carcinoma. J Magn Reson Imaging. 2016;44(2):445–455.

44. Zhu W, Kan X, Calogero RA. Neural network cascade optimizes MicroRNA biomarker selection for nasopharyngeal cancer prognosis. PLoS One. 2014;9(10):e110537. doi:10.1371/journal.pone.0110537

45. Wong LM, Ai QYH, Mo FKF, Poon DMC, King AD. Convolutional neural network in nasopharyngeal carcinoma: how good is automatic delineation for primary tumor on a non-contrast-enhanced fat-suppressed T2-weighted MRI? Jpn J Radiol. 2021;39(6):571–579. doi:10.1007/s11604-021-01092-x

46. Xue X, Qin N, Hao X, et al. Sequential and iterative auto-segmentation of high-risk clinical target volume for radiotherapy of nasopharyngeal carcinoma in planning CT images. Front Oncol. 2020;10:1134. doi:10.3389/fonc.2020.01134

47. Men K, Chen X, Zhu J, et al. Continual improvement of nasopharyngeal carcinoma segmentation with less labeling effort. Phys Med. 2020;80:347–351. doi:10.1016/j.ejmp.2020.11.005

48. Wang X, Yang G, Zhang Y, et al. Automated delineation of nasopharynx gross tumor volume for nasopharyngeal carcinoma by plain CT combining contrast-enhanced CT using deep learning. J Radiat Res Appl Sci. 2020;13(1):568–577. doi:10.1080/16878507.2020.1795565

49. Ke L, Deng Y, Xia W, et al. Development of a self-constrained 3D DenseNet model in automatic detection and segmentation of nasopharyngeal carcinoma using magnetic resonance images. Oral Oncol. 2020;110:104862. doi:10.1016/j.oraloncology.2020.104862

50. Zhong T, Huang X, Tang F, Liang S, Deng X, Zhang Y. Boosting-based cascaded convolutional neural networks for the segmentation of CT organs-at-risk in nasopharyngeal carcinoma. Med Phys. 2019;46(12):5602–5611. doi:10.1002/mp.13825

51. Lin L, Dou Q, Jin Y-M, et al. Deep learning for automated contouring of primary tumor volumes by MRI for nasopharyngeal carcinoma. Radiology. 2019;291(3):677–686. doi:10.1148/radiol.2019182012

52. Liang S, Tang F, Huang X, et al. Deep-learning-based detection and segmentation of organs at risk in nasopharyngeal carcinoma computed tomographic images for radiotherapy planning. Eur Radiol. 2019;29(4):1961–1967. doi:10.1007/s00330-018-5748-9

More here:
Artificial intelligence in the management of NPC | CMAR - Dove Medical Press

Posted in Uncategorized

How Artificial Intelligence Will Boost the Cryptocurrency Market to Reach USD 1902.5 Million by 2028 – GlobeNewswire

Pune, India, Jan. 27, 2022 (GLOBE NEWSWIRE) -- The global cryptocurrency market size is expected to gain momentum, reaching USD 1,902.5 million by 2028 while exhibiting a CAGR of 11.1% between 2021 and 2028. In its report titled Cryptocurrency Market, Fortune Business Insights mentions that the market stood at USD 826.6 million in 2020.

The demand for crypto has increased due to rising investments in venture capital. Additionally, the increasing popularity of digital assets such as bitcoin and litecoin is likely to accelerate the market in the upcoming years. Furthermore, digital currencies are also being integrated with blockchain technology to achieve decentralization and control transactions efficiently. Advantages such as these are encouraging people to invest in crypto. For instance, in October 2018, the Qtum Chain Foundation partnered with Amazon Web Services (AWS) China to use blockchain systems on the AWS cloud. With this collaboration, AWS will be able to help its users use Amazon Machine Images (AMI) to develop and publish smart contracts easily and efficiently.

Get Sample PDF Brochure: https://www.fortunebusinessinsights.com/enquiry/request-sample-pdf/cryptocurrency-market-100149

Companies in the Cryptocurrency Market:

What does the Report Provide?

The market report offers an in-depth analysis of the various factors influencing market growth. Additionally, the report provides insights into the regional analysis of different regions. It includes the competitive landscape, covering the leading companies and the strategies they adopt to introduce new products and announce partnerships and collaborations that contribute to boosting the market.

COVID-19 Impact

The COVID-19 pandemic adversely affected the world economy. However, the relationship between Bitcoin and the equity market expanded amid the pandemic. For example, in March 2020, the price of Bitcoin declined and went below USD 4,000 after a decline in the S&P Index in the U.S. Thus, as the Initial Coin Offering (ICO) market crashed, blockchain companies are emerging as a major alternative for raising investment capital.

Click here to get the short-term and long-term impact of COVID-19 on this Market.

Please visit: https://www.fortunebusinessinsights.com/industry-reports/cryptocurrency-market-100149

Market Segmentation:

By component, the market is bifurcated into hardware and software. By type, it is divided into bitcoin, ether, litecoin, ripple, ether classic, and others. By end-use, it is divided into trading, e-commerce and retail, peer-to-peer payment, and remittance.

Based on end use, the trading segment held a market share of 42.8% in 2020, as it covers crypto solutions used for trading, such as Pionex, Cryptohopper, Bitsgap, Coinrule, and others.

Lastly, in terms of geography, the market is divided into North America, Europe, Asia Pacific, the Middle East & Africa and Latin America.

Driving Factor

Focus on Mitigating Financial Crisis and Regional Instability Drives the Demand for Virtual Currency

In recent times, financial crises have been one of the primary issues in the conventional banking system. This financial instability disrupts the economy by lowering the value of money. For instance, ICICI Bank of India was hit by the Lehman Brothers crisis in 2008, which hugely impacted the nation's economy. By using bitcoin and other cryptocurrencies, such economic downturns may be avoided. Therefore, cryptocurrencies are emerging as alternative options in regions with unstable economic structures, and this has been a major driving factor for cryptocurrency market growth.

Speak to Analyst: https://www.fortunebusinessinsights.com/enquiry/speak-to-analyst/cryptocurrency-market-100149

Regional Insights

North America to Dominate Backed by Presence of Prominent Players

North America is expected to remain at the forefront and hold the largest position in the market during the forecast period. This is because, in most parts of the region, bitcoins have become a medium of exchange recognized for tax purposes rather than an actual currency. Although these are not legally regulated by the government, many countries in the region are focused on using digital currencies. The region's market stood at USD 273.0 million in 2020.

Asia Pacific is expected to hold a significant cryptocurrency market share in the upcoming years, owing to several technological developments and the acceptance of virtual currency on some platforms in Japan and Taiwan. Additionally, strategic collaborations and partnerships by key players are also fueling the regional market. For instance, in January 2020, Z Corporation, Inc. and TaoTao, Inc. collaborated with the financial services agency to widen the crypto market by confirming regulatory compliance in the Japanese market.

Competitive Landscape

Key Players to Focus on Introduction of New Services to Strengthen the Market Growth

The market is consolidated by major companies striving to maintain their position by focusing on new launches, collaborations & partnerships and acquisitions. Such strategies taken up by key players are expected to strengthen their market prospects. A recent industry development is given below:

March 2021: Visa Inc. aims to introduce crypto as a direct payment method. With this key initiative, the company aims to accept cryptocurrencies as a payment method in the finance industry.

Quick Buy Cryptocurrency Market Research Report: https://www.fortunebusinessinsights.com/checkout-page/100149

Part II: Artificial Intelligence Market

The global artificial intelligence market size is expected to reach USD 360.36 billion by 2028. As per the report, the market was valued at USD 35.92 billion in 2019 and is estimated to display a stellar CAGR of 31.9% during the forecast period. This information is presented by Fortune Business Insights in its report titled Artificial Intelligence Market, 2021-2028. The increasing number of linked devices and the rising implementation of the Internet of Things (IoT) are steering market growth. The multiplying usage of cloud-based applications in various industries such as medical, online retail, production, and Banking, Financial Services, & Insurance (BFSI), coupled with the rising complexity of cyber-crimes, is presenting exciting opportunities to expand the utilization of artificial intelligence in the market. For example, the use of machine learning (ML) to precisely identify cancerous cells is anticipated to propel its demand in the healthcare industry.

Request a Sample Copy of Report: https://www.fortunebusinessinsights.com/enquiry/request-sample-pdf/artificial-intelligence-market-100114

AI Technology that Traces COVID-19 Patients Set to Promote Market Growth

The medical industry is projected to benefit considerably from AI applications during the COVID-19 pandemic. For example, in clinical health care procedures, AI will assist in improving the precision and efficacy of diagnosing the disease, suggesting treatments, and predicting outcomes. In the United States, the government is employing essential data from detachable devices to trace COVID-19-positive patients. AI assists in studying and mining the coronavirus strain and using this information to improve and scale testing equipment. The extracted data can be useful for drug discovery; for example, the TCSI lab is making use of AI capabilities to recognize potential molecules and target them against the COVID strain. Therefore, amid the pandemic, the artificial intelligence market is anticipated to observe substantial growth.

Report Coverage

The report provides a thorough study of the market segments and detailed analysis of the market overview. A profound evaluation of the current market trends as well as the future opportunities is presented in the report. It further shares an in-depth analysis of the regional insights and how they shape the market growth. The COVID-19 impacts have been added to the report to help investors and business owners understand the threats better. The report sheds light on the key players and their prominent strategies to stay in the leading position.

To get to know more about the short-term and long-term impact of COVID-19 on this market, please visit: https://www.fortunebusinessinsights.com/industry-reports/artificial-intelligence-market-100114

Companies Covered in the Artificial Intelligence Market Report

Segmentation

By component, the artificial intelligence market is divided into hardware, software and services. The services segment is estimated to gain momentum during the forecast period, as incorporating AI into companies' existing systems requires a suitable skillset and expertise, and maintaining and supporting artificial intelligence also demands considerable expertise. Additionally, the software segment held a share of 40.9% in 2019.

On the basis of technology, the market is segregated into computer vision, machine learning and natural language processing. Based on deployment, it is further bifurcated into cloud and on-premise. By industry, the market is separated into healthcare, retail, IT and telecom, BFSI, automotive, advertising and media, and manufacturing among others. In terms of region, the global market is categorized into North America, Europe, Asia Pacific, Latin America, and the Middle East and Africa.

Drivers and Restraints

Budding BFSI Industry to Inflate Opportunities for Artificial Intelligence Market

The BFSI industry is estimated to extend the applications of artificial intelligence (AI). It already uses the technology for trading decisions, chatbots, credit scoring applications, and financial market impact analysis, among others. For example, several banks are utilizing ML tools to build trading robots capable of self-analysis and of learning to trade from past data. Moreover, BFSI is making use of AI technology to provide personalized guidance to its users concerning debt administration, investment tactics, refinancing, and much more. The technology is also efficient at detecting fraudulent activities. This is expected to create widespread opportunities for the application of the technology, thereby driving artificial intelligence market growth in the near future.

Speak To Our Analyst: https://www.fortunebusinessinsights.com/enquiry/speak-to-analyst/artificial-intelligence-market-100114

Regional Insights

North America to Hold Command Backed by Active Government Initiatives

The artificial intelligence market in the North American region stood at USD 11.40 billion in 2019, with the U.S. as a major contributor due to increasing government initiatives and investments in the market. This is expected to boost demand for artificial intelligence in the near future.

Europe is estimated to be an equal contributor to the global artificial intelligence market. Countries in the European region are strategically investing in AI; for example, the European Investment Fund assigned USD 111 million to AI-based start-ups in 2020.

Asia Pacific is estimated to witness speedy growth during the forecast period. In this region, China generates the main income share, owing to collective investments by leading players in the technology. Furthermore, to deliver strong outcomes in the field, China also introduced the New Generation Artificial Intelligence Development Plan.

Competitive Landscape

Partnerships and Mergers to Help Developers Innovate New Ideas and Expand Business

Prominent players in the market often come up with efficient strategies that include partnerships, acquisitions and mergers, product launches, etc. These strategies bolster their position as leading players and also benefit the other companies involved.

For instance, in May 2020, IPsoft Inc. extended its collaboration with Unisys Corporation to apply AI capabilities in InteliServe and Amelia. The integrated suite will help organizations solve workplace concerns with its intelligent technology.

Industry Development

June 2020: Microsoft Corporation made an investment in the Mount Sinai Health System, a healthcare organization that will be using AI to improve COVID-19-related care through its advanced digital tools. This is likely to boost demand for artificial intelligence in the upcoming years.

Quick Buy - Artificial Intelligence Market Research Report: https://www.fortunebusinessinsights.com/checkout-page/100114

About Us:

Fortune Business Insights delivers accurate data and innovative corporate analysis, helping organizations of all sizes make appropriate decisions. We tailor novel solutions for our clients, assisting them to address various challenges distinct to their businesses. Our aim is to empower them with holistic market intelligence, providing a granular overview of the market they are operating in.

Contact Us:

Fortune Business Insights Pvt. Ltd.

308, Supreme Headquarters,

Survey No. 36, Baner,

Pune-Bangalore Highway,

Pune - 411045, Maharashtra, India.

Phone:

US: +1 424 253 0390

UK: +44 2071 939123

APAC: +91 744 740 1245

Email: sales@fortunebusinessinsights.com

LinkedIn: https://www.linkedin.com/company/fortune-business-insights

Facebook: https://www.facebook.com/FortuneBusinessInsightsPvtLtd

Continued here:
How Artificial Intelligence Will Boost the Cryptocurrency Market to Reach USD 1902.5 Million by 2028 - GlobeNewswire

Posted in Uncategorized

Women in AI Awards to Honor Top Female Innovators in the Field of Artificial Intelligence in North America – Business Wire

WASHINGTON--(BUSINESS WIRE)--Women from across the US, Mexico and Canada are launching the Women in AI (WAI) Awards North America to honor female pioneers who take the road less traveled and pave the way for others to reach even further.

The kick-off event takes place virtually on February 1, 2022 at 5pm ET when applications for category nominations will also open for: AI in Startups, AI in Research, AI for Good, AI in Government, AI in Industry, Young Role Model in AI.

Winners will be announced at a hybrid event on May 13, 2022. Please email us at waiawardsna@womeninai.co or visit our website for more information.

Susan Verdiguel, WAI Ambassador to Mexico, says, "Through this collaboration, we are able to amplify the AI ecosystem in Mexico for a more robust, informed, and organized community."

Sponsors/partners include Mila - Quebec Artificial Intelligence Institute, Alberta Machine Intelligence Institute, Topcoder, The Institute for Education, IVOW AI and GET Cities, an initiative designed to accelerate the leadership and representation of women, trans, and nonbinary people in tech.

"We know that AI and machine learning are the future, and we know the risks of not including diverse perspectives in designing solutions for this future. That's why we're thrilled to be a strategic partner for the Women in AI Summit and to celebrate all of the amazing people leading the way in co-creating an inclusive tech economy." - Leslie Lynn Smith, National Director of GET Cities

Davar Ardalan, CEO of IVOW AI and Senior Advisor to WAI North America, says, "Our ultimate goal is to recognize the role women are playing in AI and to encourage more young women to enter the field of computer science and AI."

"This collaborative endeavor of Women in AI will provide an exclusive platform for every woman AI professional across North America, no matter their age, role or field of work, to be recognized for their contributions in AI," says Frincy Clement, WAI Ambassador to Canada.

"Our diverse communities at the grassroots level drive amazing societal impact by applying AI, ML and Data Science integrated with the United Nations Sustainable Development Goals," noted Bhuva Subram, Founder of Wallet Max and the Regional Head of Women in AI North America & USA.

Read the original here:
Women in AI Awards to Honor Top Female Innovators in the Field of Artificial Intelligence in North America - Business Wire

Posted in Uncategorized

Indian Navy ropes in new-age tech with 30 Artificial Intelligence projects in the works – The New Indian Express

Express News Service

NEW DELHI: The Indian Navy has launched major projects and initiatives to incorporate new-age advanced technology into the service at both the systems and processes levels. Along with its centres of excellence, the navy has begun exposing its personnel to academics and experts from outside, keeping the future in mind.

Commander Vivek Madhwal, spokesperson for the Indian Navy, said on Thursday, "The Navy is progressing around 30 AI projects and initiatives encompassing Autonomous Systems, Language Translation, Predictive Maintenance, Inventory Management, Text Mining, Perimeter Security, Maritime Domain Awareness and Decision Making."

"The Indian Navy is focused on the incorporation of Artificial Intelligence (AI) and Machine Learning (ML) in critical mission areas. AI initiatives being steered by the Navy are envisaged to have both tactical and strategic level impact," added Madhwal.

The Indian Navy is organising seminars and workshops with capacity building in mind. The Navy's premier technical training institute, INS Valsura, organised a workshop on the contemporary topic 'Leveraging Artificial Intelligence (AI) for the Indian Navy' from 19 to 21 January 2022. It was conducted under the aegis of the Southern Naval Command, and prominent speakers from renowned IT companies such as Google, IBM, Infosys and TCS shared the industry perspective during the three-day event.

Distinguished academicians from IIT Delhi, New York University, and Indian private universities also spoke about the latest trends and applications of AI. The keynote address was delivered by Vice Admiral MA Hampiholi, Flag Officer Commanding-in-Chief, Southern Naval Command, who stressed the strategic importance of this niche technology and its application in the Indian Navy. The webinar saw online participation by over 500 participants from across the country.

Located at Jamnagar, INS Valsura has already been designated the Center of Excellence (CoE) in the field of Big Data, and a state-of-the-art lab for AI and Big Data Analysis (BDA) was set up in January 2020.

Regarding its future endeavours, the Indian Navy said in a statement, "In addition, the Navy is currently in the process of creating a Center of Excellence (CoE) in the field of AI at INS Valsura, which has been instrumental in the progress of pilot projects pertaining to the adoption of AI and BDA in the domains of maintenance, HR and perception assessment, in collaboration with academia and industry."

"Additionally, the Navy is deeply involved in unifying and reorganising its enterprise data, as data is the fuel for all AI engines," the Navy said.

At the organisational level, the Navy has formed an AI core group that meets twice a year to assess all AI/ML initiatives and keep a tab on timelines. Periodic reviews of AI projects are held to ensure adherence to the promulgated timelines, and the Navy also conducts training in AI/ML across all levels of speciality for its officers and sailors.

This training is held both within the Navy's own training schools and at renowned IITs, and several personnel have undergone AI-linked courses, long and short, over the last three years. These initiatives of the Indian Navy are in sync with the country's vision of making India the global leader in AI, ensuring responsible and transformational 'AI for All'.

More:
Indian Navy ropes in new-age tech with 30 Artificial Intelligence projects in the works - The New Indian Express

Posted in Uncategorized