Evolution, rewards, and artificial intelligence – TechTalks

This article is part of the philosophy of artificial intelligence, a series of posts that explore the ethical, moral, and social implications of AI today and in the future.

Last week, I wrote an analysis of Reward Is Enough, a paper by scientists at DeepMind. As the title suggests, the researchers hypothesize that the right reward is all you need to create the abilities associated with intelligence, such as perception, motor functions, and language.

This is in contrast with AI systems that try to replicate specific functions of natural intelligence such as classifying images, navigating physical environments, or completing sentences.

The researchers go as far as suggesting that with well-defined reward, a complex environment, and the right reinforcement learning algorithm, we will be able to reach artificial general intelligence, the kind of problem-solving and cognitive abilities found in humans and, to a lesser degree, in animals.

The article and the paper triggered a heated debate on social media, with reactions going from full support of the idea to outright rejection. Of course, both sides make valid claims. But the truth lies somewhere in the middle. Natural evolution is proof that the reward hypothesis is scientifically valid. But implementing the pure reward approach to reach human-level intelligence has some very hefty requirements.

In this post, I'll try to explain in simple terms where the line between theory and practice stands.

In their paper, the DeepMind scientists present the following hypothesis: "Intelligence, and its associated abilities, can be understood as subserving the maximisation of reward by an agent acting in its environment."

Scientific evidence supports this claim.

Humans and animals owe their intelligence to a very simple law: natural selection. I'm not an expert on the topic, but I suggest reading The Blind Watchmaker by biologist Richard Dawkins, which provides a very accessible account of how evolution has led to all forms of life and intelligence on our planet.

In a nutshell, nature gives preference to lifeforms that are better fit to survive in their environments. Those that can withstand challenges posed by the environment (weather, scarcity of food, etc.) and other lifeforms (predators, viruses, etc.) will survive, reproduce, and pass on their genes to the next generation. Those that don't are eliminated.

According to Dawkins, "In nature, the usual selecting agent is direct, stark and simple. It is the grim reaper. Of course, the reasons for survival are anything but simple; that is why natural selection can build up animals and plants of such formidable complexity. But there is something very crude and simple about death itself. And nonrandom death is all it takes to select phenotypes, and hence the genes that they contain, in nature."

But how do different lifeforms emerge? Every newly born organism inherits the genes of its parent(s). But unlike in the digital world, copying in organic life is not an exact process. Therefore, offspring often undergo mutations, small changes to their genes that can have a huge impact across generations. These mutations can have a simple effect, such as a small change in muscle texture or skin color. But they can also become the basis for developing new organs (e.g., lungs, kidneys, eyes) or shedding old ones (e.g., tail, gills).

If these mutations help improve the chances of the organism's survival (e.g., better camouflage or faster speed), they will be preserved and passed on to future generations, where further mutations might reinforce them. For example, the first organism that developed the ability to parse light information had an enormous advantage over all the others that didn't, even though its ability to see was not comparable to that of animals and humans today. This advantage enabled it to better survive and reproduce. As its descendants reproduced, those whose mutations improved their sight outmatched and outlived their peers. Through thousands (or millions) of generations, these changes resulted in a complex organ such as the eye.

The simple mechanisms of mutation and natural selection have been enough to give rise to all the different lifeforms that we see on Earth, from bacteria to plants, fish, birds, amphibians, and mammals.
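The loop of imperfect copying plus nonrandom survival described above can be sketched as a toy evolutionary algorithm. This is my own illustration, not code from any of the works discussed; the genome, fitness function, and all parameters are made up for the example:

```python
import random

random.seed(0)

def fitness(genome):
    # Survival proxy: how well the phenotype fits the environment.
    # Here, a hypothetical "ideal" phenotype is the all-ones genome.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Copying is imperfect: each gene may flip with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=50, genome_len=20, generations=100):
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Nonrandom death: only the fitter half survives to reproduce.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Offspring inherit a survivor's genes, with occasional mutations.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
```

Even with no guidance beyond "fitter genomes survive," the population converges on near-optimal genomes within a few dozen generations, which is the crude point Dawkins makes about nonrandom death.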

The same self-reinforcing mechanism has also created the brain and its associated wonders. In her book Conscience: The Origin of Moral Intuition, scientist Patricia Churchland explores how natural selection led to the development of the cortex, the main part of the brain that gives mammals the ability to learn from their environment. The evolution of the cortex has enabled mammals to develop social behavior and learn to live in herds, prides, troops, and tribes. In humans, the evolution of the cortex has given rise to complex cognitive faculties, the capacity to develop rich languages, and the ability to establish social norms.

Therefore, if you consider survival as the ultimate reward, the main hypothesis that DeepMind's scientists make is scientifically sound. However, when it comes to implementing this rule, things get very complicated.

In their paper, DeepMind's scientists make the claim that the reward hypothesis can be implemented with reinforcement learning algorithms, a branch of AI in which an agent gradually develops its behavior by interacting with its environment. A reinforcement learning agent starts by making random actions. Based on how those actions align with the goals it is trying to achieve, the agent receives rewards. Across many episodes, the agent learns to develop sequences of actions that maximize its reward in its environment.
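As a concrete (and deliberately tiny) illustration of this loop, here is a tabular Q-learning agent learning to walk down a five-state corridor toward a reward. The sketch is my own; the paper does not prescribe any particular algorithm or environment, and every number here is arbitrary:

```python
import random

random.seed(1)

# Toy environment: a corridor of 5 states; reward is paid only at the right end.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)  # step left, step right

def step(state, action):
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

# Q-table: estimated cumulative reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.5  # epsilon kept high so the toy agent explores

for _ in range(500):  # episodes
    state, done = 0, False
    while not done:
        # Early on, actions are mostly random; later, mostly greedy.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Nudge the estimate toward reward plus discounted future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy policy heads toward the reward from every state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)}
```

The agent is never told where the goal is; the "step right" behavior is produced entirely by the reward signal propagating back through the Q-table, which is the mechanism the paper generalizes from.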

According to the DeepMind scientists, "A sufficiently powerful and general reinforcement learning agent may ultimately give rise to intelligence and its associated abilities. In other words, if an agent can continually adjust its behaviour so as to improve its cumulative reward, then any abilities that are repeatedly demanded by its environment must ultimately be produced in the agent's behaviour."

In an online debate in December, computer scientist Richard Sutton, one of the paper's co-authors, said, "Reinforcement learning is the first computational theory of intelligence… In reinforcement learning, the goal is to maximize an arbitrary reward signal."

DeepMind has plenty of experience to back this claim. It has already developed reinforcement learning agents that can outmatch humans in Go, chess, Atari, StarCraft, and other games. It has also developed reinforcement learning models to make progress on some of the most complex problems of science.

The scientists further wrote in their paper, "According to our hypothesis, general intelligence can instead be understood as, and implemented by, maximising a singular reward in a single, complex environment" [emphasis mine].

This is where hypothesis separates from practice. The keyword here is "complex." The environments that DeepMind (and its quasi-rival OpenAI) have so far explored with reinforcement learning are not nearly as complex as the physical world. And they still required the financial backing and vast computational resources of very wealthy tech companies. In some cases, the researchers had to dumb down the environments to speed up the training of their reinforcement learning models and cut down the costs. In others, they had to redesign the reward to make sure the RL agents did not get stuck in the wrong local optimum.
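A toy calculation shows how an innocent-looking reward can create the wrong optimum. Suppose a designer adds a small per-step "activity" bonus on top of the goal reward (all numbers here are hypothetical, purely for illustration); an agent that loops forever then collects more discounted reward than one that finishes the task:

```python
# Hypothetical misspecified reward: a per-step bonus meant to encourage
# activity ends up making endless looping more lucrative than finishing.
gamma = 0.99        # discount factor
step_bonus = 0.1    # small reward paid on every step
goal_reward = 1.0   # one-off reward for completing the task in 5 steps

# Discounted return for finishing the task after 5 steps:
finish_return = (sum(step_bonus * gamma**t for t in range(5))
                 + goal_reward * gamma**5)

# Discounted return for circling in place forever (geometric series):
loop_return = step_bonus / (1 - gamma)
```

Here loop_return works out to 10.0 against a finish_return of roughly 1.44, so a reward-maximizing agent learns to circle indefinitely, which is exactly the kind of failure that forces researchers to redesign rewards.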

(It is worth noting that the scientists do acknowledge in their paper that they can't offer theoretical guarantees on the sample efficiency of reinforcement learning agents.)

Now, imagine what it would take to use reinforcement learning to replicate evolution and reach human-level intelligence. First, you would need a simulation of the world. But at what level of detail would you simulate it? My guess is that anything short of quantum scale would be inaccurate. And we don't have a fraction of the compute power needed to create quantum-scale simulations of the world.

Let's say we did have the compute power to create such a simulation. We could start at around 4 billion years ago, when the first lifeforms emerged. We would need an exact representation of the initial state of Earth's environment at the time, and we still don't have a definite theory on that.

An alternative would be to create a shortcut and start from, say, 8 million years ago, when our monkey ancestors still lived on Earth. This would cut down the training time, but we would have a much more complex initial state to start from. At that time, there were millions of different lifeforms on Earth, and they were closely interrelated. They evolved together. Taking any of them out of the equation could have a huge impact on the course of the simulation.

Therefore, you basically have two key problems: compute power and initial state. The further you go back in time, the more compute power you'll need to run the simulation. On the other hand, the further forward you move, the more complex your initial state will be. And since evolution has created all sorts of intelligent and non-intelligent lifeforms, making sure that we could reproduce the exact steps that led to human intelligence, without any guidance and only through reward, is a hard bet.

Many will say that you don't need an exact simulation of the world, and that you only need to approximate the problem space in which your reinforcement learning agent will operate.

For example, in their paper, the scientists mention the example of a house-cleaning robot: "In order for a kitchen robot to maximise cleanliness, it must presumably have abilities of perception (to differentiate clean and dirty utensils), knowledge (to understand utensils), motor control (to manipulate utensils), memory (to recall locations of utensils), language (to predict future mess from dialogue), and social intelligence (to encourage young children to make less mess). A behaviour that maximises cleanliness must therefore yield all these abilities in service of that singular goal."

This statement is true, but downplays the complexities of the environment. Kitchens were created by humans. For instance, the shape of drawer handles, doorknobs, floors, cupboards, walls, tables, and everything else you see in a kitchen has been optimized for the sensorimotor functions of humans. Therefore, a robot that wants to work in such an environment would need to develop sensorimotor skills similar to those of humans. You can create shortcuts, such as avoiding the complexities of bipedal walking or of hands with fingers and joints. But then there would be incongruencies between the robot and the humans using the kitchen. Many scenarios that would be easy for a human to handle (walking over an overturned chair) would become prohibitive for the robot.

Also, other skills, such as language, would require even more infrastructure shared between the robot and the humans in its environment. Intelligent agents must be able to develop abstract mental models of each other to cooperate or compete in a shared environment. Language omits many important details, such as sensory experience, goals, and needs. We fill in the gaps with our intuitive and conscious knowledge of our interlocutors' mental states. We might make wrong assumptions, but those are the exception, not the norm.

And finally, developing a notion of cleanliness as a reward is very complicated because it is very tightly linked to human knowledge, life, and goals. For example, removing every piece of food from the kitchen would certainly make it cleaner, but would the humans using the kitchen be happy about it?

A robot that has been optimized for cleanliness would have a hard time co-existing and cooperating with living beings that have been optimized for survival.

Here, you can take shortcuts again by creating hierarchical goals, equipping the robot and its reinforcement learning models with prior knowledge, and using human feedback to steer it in the right direction. This would go a long way toward making it easier for the robot to understand and interact with humans and human-designed environments. But then you would be cheating on the reward-only approach. And the mere fact that your robot agent starts with predesigned limbs and image-capturing and sound-emitting devices is itself the integration of prior knowledge.

In theory, reward alone is enough for any kind of intelligence. But in practice, there's a tradeoff between environment complexity, reward design, and agent design.

In the future, we might be able to achieve a level of computing power that will make it possible to reach general intelligence through pure reward and reinforcement learning. But for the time being, what works is hybrid approaches that involve learning and complex engineering of rewards and AI agent architectures.


Address artificial intelligence threats, politicians told – Business in Vancouver

B.C. Information and Privacy Commissioner Michael McEvoy: "There is much good that comes from advancing AI technologies but if the public is to have confidence in its use we must first ensure trust and transparency is built into its development" | Photo: Jeremy Hainsworth

Governments' increasing use of artificial intelligence (AI) technology and people's inability to avoid official computer services present threats politicians must address with law, privacy watchdogs say. Regulatory intervention is necessary, the B.C. and Yukon ombudsman and information and privacy commissioners said in a report released June 17.

The regulatory challenge is deciding how to adapt or modernize existing regulatory instruments to account for the new and emerging challenges brought on by governments' use of AI. The increasing automation of government decision-making undermines the applicability or utility of existing regulations or common law rules that would otherwise apply to and sufficiently address those decisions.

"Just as fairness and privacy issues resulting from the use of AI in commercial facial recognition systems have been shown to have bias and infringe people's privacy rights, government use of AI can have serious, long-lasting impacts on people's lives and could create tension with the fairness and privacy obligations of democratic institutions," the report said.

And that, they said, undermines trust in government.

"While we recognize that delivering public services through artificially intelligent machine-based systems can be appealing to public bodies for cost reasons, we are concerned that, if not done right, this perceived efficiency may come at the expense of important rights to fair treatment," said B.C. Ombudsperson Jay Chalke.

The late Prof. John McCarthy of the Massachusetts Institute of Technology and Stanford University coined the term AI. The report said McCarthy's definition frames AI in terms of the development of machines that can perform tasks normally requiring human intelligence, such as visual perception, speech recognition, language translation and decision-making. It's a capacity to respond to challenges and opportunities based on inputs and goals.

What's happening, the officials said, is that AI is replacing the judgment of human decision-makers in governments around the world. Such computer judgments could include predicting criminals' recidivism rates, approving building permits, determining government program eligibility, and deciding car insurance premiums.

"There is much good that comes from advancing AI technologies, but if the public is to have confidence in its use, we must first ensure trust and transparency is built into its development," said B.C. Information and Privacy Commissioner Michael McEvoy.

The report said global spending on AI was US$12.4 billion in 2019 and is expected to reach US$232 billion by 2025. As part of Canada's national AI strategy, Ottawa has invested $355 million to develop synergies between the retail, manufacturing, infrastructure, and information and communications technology sectors to build intelligent supply chains through AI and robotics.

However, another challenge comes from people themselves and a desire for fast service that could put highly personal and private data at risk. The report said Peter Tyndall, former president of the International Ombudsman Institute and the ombudsman of the Republic of Ireland, has argued that one of the biggest challenges facing independent oversight offices and core government alike is people's expectation of speedy results and high levels of interactivity.

"They expect to interact with public services as they do with Amazon or Facebook, to communicate as they do on WhatsApp," Tyndall said.

Other concerns highlighted in the report include the challenge of explaining to the public how decisions are made when algorithms are used, the lack of notice given to people that these systems will be used in decisions that affect them, and the absence of effective appeals from AI-generated decisions.

"When our offices reviewed how AI is being used, we saw there is a real gap in uniform guidance, regulation and oversight that governs the use of AI," said Ombudsman and Information and Privacy Commissioner for Yukon Diane McLeod-McKay. "We are hopeful that public bodies will carefully consider the guidance we are providing when they are using AI."

The report makes several recommendations aimed at public bodies delivering public services including:

the need for public bodies to commit to guiding principles for AI use;

the need for public bodies to notify an individual when an AI system is used to make a decision about them and describe how the AI system operates in a way that is understandable to the individual;

the need for government to promote capacity-building, co-operation and public engagement on AI;

a requirement for all public bodies to complete and submit an artificial intelligence fairness and privacy impact assessment for all existing and future AI programs for review by the relevant oversight body; and

the establishment of special rules or restrictions for the use of highly sensitive information by AI.

jhainsworth@glaciermedia.ca

@jhainswo


Artificial Intelligence ‘Creates’ Its Own PLAYABLE Version of ‘GTA 5’, And It Is FREAKY – Tech Times

Artificial intelligence is still considered a long way from its feared destructive capabilities, but one thing is for certain: big things start from small beginnings. And that small beginning might as well be this.

(Photo : Andrea Verdelli/Getty Images)SHANGHAI, CHINA - JUNE 18: Cutting edge applications of Artificial Intelligence are seen on display at the Artificial Intelligence Pavilion of Zhangjiang Future Park during a state organized media tour on June 18, 2021 in Shanghai, China.

Engadget reports that a certain artificial intelligence managed to somehow "create" its own version of "GTA V" and make it actually playable, at least a short stretch of it. And all they did was make the AI "watch" a portion of gameplay.

YouTuber Harrison Kinsley, who goes by the name sentdex on the video platform, shared a video of artificial intelligence achieving the technically impressive feat. Working with a collaborator named Daniel Kukiela, Kinsley used a program called GameGAN Neural Network to create the simulation.

GameGAN, according to its website, is an artificial intelligence-based program created to simulate game environments on the fly. Made by the NVIDIA Toronto AI Lab (at least according to its Google Results Page heading), GameGAN did the exact same thing last year, only with a different game: Pac-Man.

Before it managed to create its own playable version of "GTA V," GameGAN made its own version of Pac-Man by watching another AI play through it. According to Engadget, the artificial intelligence managed to essentially "develop" an entire video game in merely four days. The real Pac-Man, on the other hand, took over a year to make.

This also isn't really the first time that AI has dabbled in game visuals. NVIDIA's DLSS and the so-called AI upscaling that's turned the graphics of "GTA V" into photorealistic images already exist.

Read also: Artificial Intelligence to Hunt for Dark Energy Using this INSANELY POWERFUL Supercomputer

To achieve the feat, Kinsley needed some major help. No consumer-class computer would be able to deal with this kind of workload, so NVIDIA loaned him and his partner a $200,000 data center, essentially a small-scale supercomputer. This machine contained eight A100 GPUs, which are optimized for hardware-accelerated artificial intelligence, as well as two 64-core server CPUs from AMD.

(Photo : China/Barcroft Media via Getty Images)HANGZHOU, CHINA - JUNE 06 2021: Visitors stop by an AI server based on NVIDIA A100 chips at the 2021 Global Artificial Intelligence Technology Conference (GAITC2021) in Hangzhou in East China.

With the data center, Kinsley and Kukiela "trained" the GameGAN using actual gameplay of "GTA V." They let 12 simultaneous AI instances drive the same stretch of road, allowing the hardware to collect enough data to build its own world. The result was an amazing testament to the power of artificial intelligence, which is also a little bit freaky.

And the AI itself didn't just create an image that moved. It also rendered rudimentary 3D graphics in real time. What this means is that as the car drove around, the shadow underneath it also moved relative to the light source. That's absolutely amazing, since doing that convincingly inside a game engine would take years of hand coding. A machine managed to do it in barely a fraction of the time.

So does this mean that game developers will be replaced by artificial intelligence? No. Absolutely not. For now, AI can stick to doing things like beating humans in DOTA 2, or being funnily dumb in broken, unfinished games.

Related: Super Mario Artificial Intelligence Learns How to Feel, Plays Own Game [Video]

This article is owned by Tech Times

Written by RJ Pierce

2021 TECHTIMES.com All rights reserved. Do not reproduce without permission.


Artificial Intelligence In Healthcare Market Size, Share & Trends Analysis Report By Component, By Application And Segment Forecasts, 2021 – 2028…

Artificial Intelligence In Healthcare Market Size, Share & Trends Analysis Report By Component (Software Solutions, Hardware, Service), By Application (Robot Assisted Surgery, Connected Machines, Clinical Trials), And Segment Forecasts, 2021 - 2028

New York, June 18, 2021 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Artificial Intelligence In Healthcare Market Size, Share & Trends Analysis Report By Component, By Application And Segment Forecasts, 2021 - 2028" - https://www.reportlinker.com/p06096560/?utm_source=GNW

Artificial Intelligence In Healthcare Market Growth & Trends

The global artificial intelligence in healthcare market size is expected to reach USD 120.2 billion by 2028, expanding at a CAGR of 41.8% over the forecast period. Growing technological advancements, coupled with an increasing need for efficient and innovative solutions to enhance clinical and operational outcomes, are contributing to market growth. The pressure to cut down spending is rising globally as the cost of healthcare is growing faster than economies. Advancements in healthcare IT present opportunities to cut down spending by improving care delivery and clinical outcomes. Thus, the demand for AI technologies is expected to increase in the coming years.
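For readers unfamiliar with the CAGR figure, the compound-annual-growth-rate formula can be inverted as a back-of-the-envelope check. The report does not state a base-year value; this sketch simply assumes the quoted 41.8% rate compounds over the seven years from 2021 to 2028:

```python
# CAGR relates a start value to an end value over n compounding years:
#   end = start * (1 + cagr) ** n
cagr = 0.418           # 41.8% per year, quoted by the report
target_2028 = 120.2    # USD billion, quoted by the report
years = 2028 - 2021    # 7 compounding periods (an assumption)

# Invert the formula to estimate the implied 2021 market size.
implied_2021 = target_2028 / (1 + cagr) ** years
# Roughly USD 10-11 billion under these assumptions.
```

Under these assumptions, the 2028 target implies a 2021 market of roughly USD 10.4 billion, which gives a sense of how aggressive a 41.8% compounding rate is.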

Moreover, the ongoing COVID-19 pandemic and the introduction of technologically advanced products to improve patient care are factors anticipated to drive growth further in the coming years. The pandemic is also driving the adoption of AI in applications such as clinical trials, diagnosis, and virtual assistants, which add value to health care by analyzing complicated medical images of patients' complications and supporting clinicians in detection as well as diagnosis.

Moreover, an increase in the number of AI startups coupled with high investments by venture capitalist firms for developing innovative technologies that support fast and effective patient management, due to a significant increase in the number of patients suffering from chronic diseases, is driving the market.

In addition, the shortage of a public health workforce has become a major concern in many countries around the world. This can mainly be attributed to the growing demand for physicians, which is higher than the supply.

As per the WHO estimates in 2019, the global shortage of skilled personnel including nurses, doctors, and other professionals was approximately 4.3 million. Thus, the shortage of a skilled workforce is contributing to the demand for artificial intelligence-enabled systems in the industry.

Artificial Intelligence In Healthcare Market Report Highlights:

The market is anticipated to witness significant growth over the forecast period owing to the rapidly increasing application of artificial intelligence in this space;

the software component segment dominated the market in 2020 owing to the increased development of AI-based software solutions;

the clinical trials segment dominated the market in 2020 owing to the easy commercial availability of AI-based products in clinical trial applications that use AI technology to identify patterns from doctor-patient interaction data to deliver personalized medicine; and

North America dominated the market in 2020 owing to the growing adoption of healthcare IT solutions, increasing funding for the development of AI capabilities, and the well-established healthcare sector.

Read the full report: https://www.reportlinker.com/p06096560/?utm_source=GNW

About Reportlinker

ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.



Artificial intelligence won’t replace the role of financial advisors, UBS CEO says – CNBC

LONDON: One of the world's biggest wealth managers doesn't think artificial intelligence can replace the role of financial advisors.

Ralph Hamers, the CEO of UBS, said Wednesday that technologies like AI were better suited to handling day-to-day functions like opening an account or executing trades than advising clients.

"There is no added value for client advisors to be engaged in a process like that," Hamers told CNBC's Geoff Cutmore at the virtual CNBC Evolve Global Summit. "They're advisors. They should advise."

"Our financial advisors actually should be supported by the technology," Hamers said, adding that AI could be used to make sense of the research and other data that advisors don't have time for.

"That is what artificial intelligence can do, because even our client advisors can't read all the research that is there," he said. "Our client advisors can't comprehend all the product options that are out there."

Europe's banking industry has seen radical change over the last decade, with new entrants like Monzo, Revolut and N26 emerging to take on incumbents with slick, digital-only services.

Covid-19 has further accelerated digital transformation in the banking sector, with many lenders racing to move away from their aging IT systems to cloud-based technology. Some are partnering with tech companies like Microsoft, Amazon and Google, as well as fintech upstarts, to hasten the process.

Hamers said UBS is looking to adopt a "Netflix experience" where clients have access to a "dashboard" of different research and products to choose from.

"That's where things are going, and that's where UBS is making the next step, in terms of dealing with technology to deliver a much better service for our clients," he added.


The promise and perils of Artificial Intelligence partnerships – Hindustan Times

"A period that had been broadly described as engagement has come to an end," Kurt Campbell, the Indo-Pacific Coordinator at the United States (US) National Security Council, told a virtual audience in May on the subject of US-China relations. "The dominant paradigm is going to be competition."

On several occasions, Campbell has highlighted that one of the major arenas of this competition will concern technology. This is increasingly reflected in US national security structures. Today, there is both a senior director and coordinator for technology and national security at the White House; the National Economic Council has briefed the Cabinet on supply chain resilience; and the focus of Department of Defense policy reviews has been on emerging military technologies.

The subject of intensifying technology competition is also making its way into new US avenues for cooperation with partners, including with India. This could take the form of bilateral cooperation, coordination at multilateral institutions, or loose coalitions such as the Quad. At the Quad's virtual summit in March, the four leaders (of India, Japan, Australia, and the US) agreed, among other things, to establish a working group on critical and emerging technologies, which has already convened.

Artificial Intelligence (AI) has emerged as one technology of particular importance because of its role as an accelerator, its versatility, and its wide applicability. Driven by recent breakthroughs in machine learning made possible by plentiful data, cheap computing power, and accessible algorithms, AI is a good bellwether for the possibilities and challenges of international cooperation on emerging technologies. It is also incredibly lucrative, and may generate hundreds of billions of dollars in revenue over the coming decade.

There are some obvious areas of commonality and cooperation between India, the US, and other partners when it comes to AI. For example, there is a similar concern about developing AI in a broadly democratic setting. AI can be used in many positive ways to foster innovation, increase efficiency, improve development, and enhance consumer experience. For India, AI deployment will be tied closely to inclusive growth and its development trajectory, with potentially positive implications for agriculture, health, and education, among other sectors.

But AI can also be used for a host of undesirable purposes: generating misinformation, enabling criminal activity, and encroaching upon personal privacy. Quad countries and others, including in Europe and North America, generally seek partners amenable to broadly upholding a responsible, human-centric approach to AI.

Additionally, despite the nominally more nationalistic rhetoric (e.g. Build Back Better, Atmanirbhar Bharat), there is a fundamental recognition that international partnerships are valuable and necessary. AI development and deployment is inherently international in character.

Basic and applied research involves collaborations across universities, research centres, and countries. Data can be gathered more easily, a lot of development relies on open-source information, and funding for AI start-ups is a global enterprise. There is also a recognition that countries can learn from each other's experiences and mistakes, and that the successful deployment of AI would serve as a model for others. India, for example, is one of the few developing countries large enough to marshal considerable resources for AI, in a manner that could be replicated, including in South Asia or Africa.

India and its partners also confront some similar challenges when it comes to the development and deployment of AI. One imperative involves nurturing, attracting, and retaining the requisite talent. According to Macro Polo's Global AI Talent Tracker, 12% of elite AI researchers in the world received their undergraduate degrees from India, the most after the United States (35%) and more than China (10%). Yet, very little top-tier AI research is being conducted in India (over 90% is taking place in the United States, China, European Union, Canada, and the UK).

Beyond talent, additional challenges lie in securing the necessary infrastructure; ensuring resilient supply chains, especially for components such as microprocessors; alignment on standards, governance, and procurement; and ensuring critical minerals and other raw materials required for the development of the necessary physical infrastructure.

Given that various governments have only recently established AI policies, and in some cases are still formulating them, international cooperation is still very much a work in progress. More detailed efforts will be outlined in the coming months and years.

Nevertheless, the contours of cooperation are already discernible. Some areas are proving relatively easy, such as coordination in the setting of standards at the multilateral level, which is already underway. Other areas will prove more challenging. Supply chain security and building resilience should theoretically be easier, given the political-level agreement on this issue. However, ensuring bureaucratic and regulatory harmonisation remains complicated. India and its partners may have the most trouble aligning their approaches to data, a particularly touchy subject at the moment, and, in the long run, incentivising joint research and development.

The future looks bright for organic cooperation on AI: the demand is there, and India and its partners all hold relative strengths. But critical decisions made in the near future could have transformative effects for international cooperation on AI, which, in turn, could decisively shape the contours of what some have described as the Fourth Industrial Revolution.

Dhruva Jaishankar is executive director, ORF America

The views expressed are personal

Read more:
The promise and perils of Artificial Intelligence partnerships - Hindustan Times

Artificial Intelligence for Rapid Exclusion of COVID-19 Infection – SciTechDaily

Artificial intelligence (AI) may offer a way to accurately determine that a person is not infected with COVID-19. An international retrospective study finds that infection with SARS-CoV-2, the virus that causes COVID-19, creates subtle electrical changes in the heart. An AI-enhanced EKG can detect these changes and potentially be used as a rapid, reliable COVID-19 screening test to rule out COVID-19 infection.

The AI-enhanced EKG was able to detect COVID-19 infection in the test with a positive predictive value (for people infected) of 37% and a negative predictive value (for people not infected) of 91%. When additional normal control subjects were added to reflect a 5% prevalence of COVID-19, similar to a real-world population, the negative predictive value jumped to 99.2%. The findings are published in Mayo Clinic Proceedings.

COVID-19 has a 10- to 14-day incubation period, which is long compared to other common viruses. Many people do not show symptoms of infection, and they could unknowingly put others at risk. Also, the turnaround time and clinical resources needed for current testing methods are substantial, and access can be a problem.

"If validated prospectively using smartphone electrodes, this will make it even simpler to diagnose COVID infection, highlighting what might be done with international collaborations," says Paul Friedman, M.D., chair of Mayo Clinic's Department of Cardiovascular Medicine in Rochester. Dr. Friedman is senior author of the study.

The realization of a global health crisis brought together stakeholders around the world to develop a tool that could address the need to rapidly, noninvasively and cost-effectively rule out the presence of acute COVID-19 infection. The study, which included data from racially diverse populations, was conducted through a global volunteer consortium spanning four continents and 14 countries.

"The lessons from this global working group showed what is feasible, and the need pushed members in industry and academia to partner in solving the complex questions of how to gather and transfer data from multiple centers with their own EKG systems, electronic health records and variable access to their own data," says Suraj Kapa, M.D., a cardiac electrophysiologist at Mayo Clinic. "The relationships and data processing frameworks refined through this collaboration can support the development and validation of new algorithms in the future."

The researchers selected patients with EKG data from around the time their COVID-19 diagnosis was confirmed by a genetic test for the SARS-CoV-2 virus. These data were control-matched with similar EKG data from patients who were not infected with COVID-19.

Researchers used more than 26,000 of the EKGs to train the AI and nearly 4,000 others to validate its readings. Finally, the AI was tested on 7,870 EKGs not previously used. In each of these sets, the prevalence of COVID-19 was around 33%.

To accurately reflect a real-world population, more than 50,000 additional normal EKGs were then added to reach a 5% prevalence rate of COVID-19. This raised the negative predictive value of the AI from 91% to 99.2%.

Zachi Attia, Ph.D., a Mayo Clinic engineer in the Department of Cardiovascular Medicine, explains that prevalence is a variable in the calculation of positive and negative predictive values. Specifically, as the prevalence decreases, the negative predictive value increases. Dr. Attia is co-first author of the study with Dr. Kapa.

"Accuracy is one of the biggest hurdles in determining the value of any test for COVID-19," says Dr. Attia. "Not only do we need to know the sensitivity and specificity of the test, but also the prevalence of the disease. Adding the extra control EKG data was critical to demonstrating how a variable prevalence of the disease, as we have encountered with regions having widely different rates of disease at different stages of the pandemic, would impact how the test would perform."
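The dependence on prevalence that Dr. Attia describes follows directly from Bayes' rule. A minimal sketch (the sensitivity and specificity values below are illustrative placeholders, not figures reported in the study; the point is only the direction of the effect):

```python
def negative_predictive_value(sensitivity, specificity, prevalence):
    """NPV = P(not infected | negative test), computed via Bayes' rule."""
    true_negatives = specificity * (1 - prevalence)       # healthy, test negative
    false_negatives = (1 - sensitivity) * prevalence      # infected, test negative
    return true_negatives / (true_negatives + false_negatives)

# Holding test accuracy fixed, NPV rises as prevalence falls.
high_prev_npv = negative_predictive_value(0.8, 0.9, prevalence=0.33)  # ~33% prevalence
low_prev_npv = negative_predictive_value(0.8, 0.9, prevalence=0.05)   # ~5% prevalence
```

This mirrors the study's observation: diluting the test set with additional normal controls lowers prevalence, which mechanically pushes the negative predictive value up.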

"This study demonstrates the presence of a biological signal in the EKG consistent with COVID-19 infection, but it included many ill patients. While it is a hopeful signal, we must prospectively test this in asymptomatic people using smartphone-based electrodes to confirm that it can be practically used in the fight against the pandemic," notes Dr. Friedman. Studies are underway now to address that question.

Reference: 15 June 2021, Mayo Clinic Proceedings.

This study was designed and conceived by Mayo Clinic investigators, and the work was made possible in part by a philanthropic gift from the Lerer Family Charitable Foundation Inc., and by the voluntary support from participating physicians and hospitals around the world who contributed in an effort to combat the COVID-19 pandemic. Technical support was donated by GE Healthcare, Philips and Epiphany Healthcare for the transfer of EKG data.

5 Uses of Artificial Intelligence to Improve Customer Experience Measurement – Small Business Trends

Customer experience plays an important role in the growth of your brand. That's why it's essential to not only offer a great experience but also understand whether you truly are able to cater to your customers well. That's where you can use artificial intelligence to improve your customer experience measurement.

But why is customer experience so important?

Nearly 84% of consumers say they go out of their way to spend more money on great experiences. So, it's safe to say that a better customer experience translates into higher sales and revenue.

Image via Gladly

But to improve your customer experience, you must know where you stand in the first place. For this, it's important to measure customer experience. Artificial intelligence (AI) plays a major role in automating and speeding up various marketing activities, and it can help improve this process as well.

So, let's take a look at how you can use artificial intelligence to improve your customer experience measurement.

Here are the different ways through which you can use artificial intelligence to improve your customer experience measurement.

To truly get an idea of where you stand in terms of your customer experience, it's essential to collect and analyze customer feedback. The idea here is to hear all about your customer experience from the customers themselves.

It's the best way to understand where you're excelling or lagging in certain aspects of customer experience. Accordingly, you can understand what changes need to be implemented to improve the experience. This, in turn, can help boost the sales of your ecommerce or brick-and-mortar business.

So, how can artificial intelligence help with this?

Collecting customer feedback may be simple. However, analyzing the feedback can take a lot of time and effort, especially if you've got a lot of customers. You'd have to manually go through individual feedback and then analyze that unstructured data.

However, AI can speed up this process of measurement. Using text analytics platforms, you can seamlessly analyze large amounts of feedback data from your customers. This quick analysis will help you derive valuable insights that you can leverage to improve your customer experience strategy.
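As a minimal sketch of what such text analytics might do under the hood, here is a crude keyword-count sentiment scorer. Real platforms use trained language models rather than word lists, and the vocabulary below is purely illustrative:

```python
# Illustrative word lists; a production system would use a trained model.
POSITIVE = {"great", "love", "fast", "helpful", "easy"}
NEGATIVE = {"slow", "broken", "confusing", "rude", "late"}

def feedback_score(text):
    """Score a piece of feedback: +1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

reviews = ["Great app, love the fast checkout", "Support was rude and slow"]
scores = [feedback_score(r) for r in reviews]  # one score per review
```

Even this toy version shows the appeal: once scoring is a function, thousands of reviews can be ranked or aggregated in seconds instead of being read one by one.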

Another way in which artificial intelligence can help you collect, analyze, and improve your customer experience measurement is through the use of chatbots and live chat.

Using AI-powered chatbots, you can converse with your customers in real-time. Using the power of machine learning and natural language processing, these chatbots can understand the questions posed by your customers and answer them.

What's more?

Apart from chatbots, you should also use live chat platforms to offer customer service in case the chatbots aren't able to answer the questions posed by the customers.

But how does customer experience measurement come into the picture here?

When your customers chat with your chatbot or customer support representatives, they can be asked to rate the interaction. The feedback data collected can be analyzed by artificial intelligence-based tools to help you understand how well you were able to answer their questions.

To understand your customer experience, it's important to get an idea of their emotions as well. You need to understand and predict them to find out whether they're satisfied with your brand's services or not.

Until recently, there was no easy way of going about this. You had to rely on the customers telling you about their emotions, and such instances, unfortunately, aren't many.

However, with the advent of artificial intelligence, it's possible to detect the emotions of your customers across multiple channels.

For instance, artificial intelligence tools can seamlessly detect customers' emotions based on the messages they've sent or the conversations they've had with your customer support team.

Emotion AI tools can pick up emotional signals by observing the tone and pitch of the customer's voice. They can also analyze the text written by your customers to understand whether they're happy, sad, stressed out, angry, etc.

What's more?

Even if you've got videos of the customers, these tools can identify their emotions using their body language, changes in facial expressions, etc.

All of this analysis can help you identify how well you're performing when it comes to customer experience.

For instance, Grammarly, the popular writing tool, can recognize the emotions in the text that's written. This helps you better understand the customer experience, and you can accordingly take steps to improve it.

Image via Grammarly

Most call center records are converted into transcripts for review at a later stage. However, the one thing that transcripts can't help you identify is the emotions of the customer at each point in the conversation.

You wouldn't know if the customer raised their voice, had an angry tone, felt sad, or was elated by your service. Transcripts won't be able to tell you these things, and when it comes to customer experience, these are all important cues that you must not miss.

All of these cues would only be available if you've recorded the customer's call in its audio format. With access to this speech, you would be better able to understand whether your customer experience was positive or negative.

Artificial intelligence can help improve your customer experience measurement in this case too. Using AI-powered speech analysis tools, you can understand the tone of each customer and extract further details from every call.

This measurement process would be quick too as artificial intelligence would be able to go through a large number of calls with ease as compared to listening to them manually. All this information would be extremely useful for helping you understand the customers current situation. Based on that, you would be able to determine the future course of action as well.

One of the toughest tasks that you might face as a customer experience professional is measuring the customer experience throughout the sales funnel.

But why is this task difficult?

The customers may go through numerous stages during the sales funnel and they may connect with you at various touchpoints too. As a result, all the customer data would be in different silos. These silos can act as deterrents to determining the customer experience as you wouldn't have a unified database for each customer.

Analytics and insights derived from such segregated data might not be very accurate and won't paint the whole picture of your customer experience.

However, customer journey analytics tools based on artificial intelligence can help you change this. They can unify your customer data from the entire customer journey and analyze it. This singular customer journey view will help you get an accurate measurement of the customer experience.
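A rough illustration of that unification step is a merge on a shared customer ID. The channel names and fields below are hypothetical, and real journey-analytics tools additionally resolve identities across devices and reconcile conflicting records:

```python
from collections import defaultdict

# Hypothetical per-channel records sitting in separate silos.
web_events = [{"customer_id": 1, "page_views": 12}]
support_tickets = [{"customer_id": 1, "tickets": 2},
                   {"customer_id": 2, "tickets": 1}]

def unify(*sources):
    """Collapse channel silos into one profile per customer ID."""
    profiles = defaultdict(dict)
    for source in sources:
        for record in source:
            cid = record["customer_id"]
            # Merge every non-key field into the customer's single profile.
            profiles[cid].update(
                {k: v for k, v in record.items() if k != "customer_id"})
    return dict(profiles)

profiles = unify(web_events, support_tickets)
```

Once each customer has a single merged profile, experience metrics can be computed over the whole journey rather than per channel.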

Customer experience plays a pivotal role in the success of your brand as it influences customer retention. That's why it's essential to measure your customer experience regularly and improve it.

Artificial intelligence can help with this by analyzing customer feedback and deriving insights from it. Also, you can use chatbots and live chat to collect and analyze customer feedback.

What's more?

Tools powered by AI can also recognize customer emotions in text, voice, and videos. This can help you understand their experience and improve it. Finally, these tools can also help unify all your customer data from across their journey and analyze it. As a result, you'll be able to get an accurate measurement of your customer experience.

Do you have any questions about the various methods of using artificial intelligence to improve customer experience measurement mentioned above? Ask them in the comments.

Researchers ask industry for military technologies in artificial intelligence (AI) and unmanned aircraft – Intelligent Aerospace

ARLINGTON, Va. - U.S. military researchers are asking the defense industry to develop revolutionary enabling technologies for land, sea, air, and space applications that would put U.S. forces far ahead of any potential adversaries, John Keller reports for Military & Aerospace Electronics.

The Intelligent Aerospace take:

June 16, 2021 - Potential U.S. adversaries such as Russia and China have developed ways to counter today's U.S. military systems that are built around exquisite, monolithic integrated systems. Instead, DARPA researchers want to develop revolutionary system architectures that are separate, dispersed, disruptive, and that instill doubt in U.S. adversaries.

DARPA experts want to identify promising technologies and move them quickly to the next phase of research and development. Technologies should improve resilience, responsiveness, range, lethality, access, endurance, and affordability to enable new joint force warfighting concepts.

For aircraft, researchers point out that stealth and low-observability technologies simply do not offer the advantages they used to. Adversaries have come up with generations of countermeasures since stealth was invented, and today the ability to make platforms survivable is approaching physical limits, which makes continuing the traditional path of stealth technologies impractical.

Related: Commercial aviation at forefront of innovation in artificial intelligence, digital twins, mobile applications, and unmanned aircraft

Related: Loyal Wingman combat drone powers up engine for the first time

Related: Northrop Grumman invests in artificial intelligence (AI) to promote onboard processing of satellite data

Jamie Whitney, Associate Editor, Intelligent Aerospace

The role of artificial intelligence in the fight against Covid-19 – SmartCitiesWorld

In early January of 2020, the US Centers for Disease Control and Prevention issued its first warnings about the potential spread of a flu-like pandemic. Days later, the World Health Organization notified the public of the dangers of the novel coronavirus Covid-19, and warned about the possibility of a dangerous outbreak.

Despite the far-reaching resources of both the CDC and the WHO, a Canadian health start-up called BlueDot had already broken news of the threat to its users. It was able to do this using artificial intelligence and machine learning to spot patterns and track the spread of the virus.

BlueDot's tracking system was the first of many AI-influenced technologies that are now being employed to fight the first global public health crisis of the decade. But aside from tracking, tracing, and predicting the spread of the virus, how are today's modern smart cities leveraging AI to fight against Covid-19?

Thanks to the abundance of modern smart devices located across cities, from IoT connected sensors to wearable tech and communication devices, cities now collect more data than ever. Small data can be processed by humans, but big data requires machines to make use of it. And that's where AI comes into play.

AI has been deployed across cities to help ease the damage caused by the Covid-19 pandemic. Diagnosing Covid-19 patients has been crucial in the fight against the virus, and AI has played a leading role. Early on, it was assisting the detection of the virus using a deep learning tool that could identify the difference between Covid-19 and pneumonia using 2D and 3D modelling of CT scans.

By building on these models, doctors were able to learn more about the virus and track how it affects individual patients, and give researchers a better idea of the type of transmission and the scale of the spread of the virus. Early detection and diagnosis have been essential to preventing cities and other densely populated areas from being overwhelmed by the virus.

With early diagnosis and the ability to identify symptoms, cities have been able to harness the power of interconnected IoT networks, in partnership with other smart devices, from smart phones to smart bins, to help track, trace, and predict the potential spread of infection too.

Diagnosis is one thing, but actually alleviating the burden being placed on overcrowded hospitals has been an even bigger concern. Fortunately, AI has been able to step in and reduce that burden thanks to the introduction of AI triage systems that can automate medical processes and use the data supplied by patients to minimize the time that health professionals need to spend with individual patients. These triage systems have been able to classify patients depending on the severity and nature of their symptoms, allowing doctors and nurses to handle patients more effectively.

Telemedicine is another way that AI is being harnessed to reduce the burden on urban hospitals and provide better care to citizens in remoter regions. These intelligent platforms can be used to reduce the need for unnecessary hospital trips, either by using consultation calls with real doctors, or via machine-learning enabled chatbots such as the CDC's Clara service.

Similarly, AI has also been used to optimize the use of ventilator settings to ensure that patients are being administered oxygen correctly. Prolonged ventilator use can cause lung damage to patients but any ventilator that is being used longer than necessary deprives another patient of its use, particularly in small hospitals with limited resources.

The use of AI has evolved beyond the realm of data analysis and optimization. In some hospitals, AI-enabled healthcare robots have been used to perform a number of tasks, such as cleaning and disinfecting rooms, monitoring patients, and carrying out other routine tasks. According to some experts, AI-enabled robots will become more prevalent in crisis management in the future, too.

The rapid development of successful Covid-19 treatments and vaccines can also be attributed to the use of artificial intelligence. Vaccines often take years to develop, however, thanks to new ways of analysing data, Covid-19 vaccines were developed relatively quickly. The most significant tool used in the development of these vaccines was the Vaxign reverse vaccinology machine learning platform. By examining vast amounts of data about existing medications and vaccines, AI deep learning processes were able to identify potentially effective drug molecules and combinations, greatly expediting the vaccine production process.

As with many urban and smart city processes, the data required to enact many solutions already exists. However, it requires machine learning and artificial intelligence to adequately process it. By deep diving into vast data repositories filled with information about existing, approved, and validated drugs, AI can cut the time required to develop new vaccines by a significant margin.

Trawling through data to find existing solutions to potential problems is what AI excels at. However, the recent pandemic has also forced governments to respond to an entirely new threat: the infodemic that has grown and spread with Covid-19. With populations more connected than ever before, and the delivery of information being freely available, tools that governments have relied on to raise awareness of societal issues have been used to spread harmful misinformation too.

To help counter this, governments and social media platforms have harnessed the power of AI and machine learning to curtail the spread of rumours and false information. Machine learning programmes have successfully been able to identify information from dubious origins and promote accurate and correct information instead.

On top of that, AI software can accurately identify and predict the threat level and danger of the virus by considering historical data and incorporating a wide range of factors. This accurate information has helped reduce panic and fear within cities, allowing governments and service workers to concentrate their efforts on tackling the virus rather than calming the population during this highly dynamic and changeable situation.

The global pandemic has sparked a greater appreciation for artificial intelligence technologies within cities. While it's easy to feel mistrustful of AI-governed decision-making processes, the pandemic has highlighted their value and the need for modern cities to embrace data to find practical and efficient solutions to present-day and future challenges.

The current situation won't usher in a blanket adoption of AI-enabled technologies, but it has certainly helped to accelerate the trend towards embracing them.
