
Category Archives: Artificial Intelligence

Global Artificial Intelligence in Contact Centers Market Report 2022-2036: Use Cases for AI Today and the Exciting Future of this Technology – PR…

Posted: March 17, 2022 at 2:18 am

DUBLIN, March 14, 2022 /PRNewswire/ -- The "The State of Artificial Intelligence in Contact Centers" report has been added to ResearchAndMarkets.com's offering.

This Report provides a maturity model for the service experience in contact centers, looking ahead to the next 15 years.

It describes how AI can and should be used, application by application, to enhance contact center performance and provides recommendations and best practices for implementing AI-enabled solutions. It offers both a strategic perspective and tactical guidance to help companies realize the maximum benefits from their AI initiatives.

Artificial intelligence is being added to all of the systems and applications used by contact center agents. It has already introduced a basic form of human-like understanding and intelligence into self-service solutions and is on its way to delivering practical and quantifiable improvements to many other applications.

The State of Artificial Intelligence in the Contact Center analyzes how artificial intelligence (AI) can be applied to transform the customer experience (CX), drive a new era in servicing, and significantly improve the performance of contact centers. It explains AI, its underlying technologies and how it enhances contact center systems and applications.

The Report provides use cases for AI today and anticipates the exciting future of this technology, also analyzing the value proposition and payback for its adoption in each application.


Key Topics Covered:

1. Executive Summary

2. Introduction

3. Contact Center AI Defined and Explained
3.1 Rules vs. AI
3.2 Where Automation Fits in the World of AI
3.3 Data is a Key to the Success of AI Initiatives

4. The Role of AI in Enhancing the CX

5. The Vision for AI in Contact Centers
5.1 Operational Impact of the AI Hub in Contact Centers

6. Contact Center AI-Enabled Applications
6.1 Contact Center Portfolio of AI-Enabled Systems and Applications
6.2 AI-Enabled Systems and Applications for Contact Centers
6.2.1 Intelligent Virtual Agent/Conversational AI
6.2.2 Interaction (Speech and Text) Analytics
6.2.3 Analytics-Enabled Quality Management
6.2.4 Virtual Assistant
6.3 Targeted AI Systems and Applications for Contact Centers
6.3.1 Transcription
6.3.2 Real-Time Guidance
6.3.3 Predictive Behavioral Routing
6.3.4 Predictive Analytics
6.4 Emerging AI Systems and Applications for Contact Centers
6.4.1 Workforce Management
6.4.2 Customer Journey Analytics
6.4.3 Customer Relationship Management
6.4.4 Contact Center Performance Management
6.4.5 Automatic Call Distributor
6.4.6 Dialer/Campaign Management
6.5 Contributing AI Systems and Applications for Contact Centers
6.5.1 Robotic Process Automation
6.5.2 Intelligent Hiring
6.5.3 Desktop Analytics
6.5.4 Knowledge Management
6.5.5 Voice Biometrics
6.5.6 Voice-of-the-Customer/Surveying

7. The Contact Center AI Journey
7.1 The Contact Center Maturity Model
7.1.1 Reactive Contact Centers, 2021
7.1.2 Responsive Contact Centers, 2022 - 2025
7.1.3 Real-Time Contact Centers, 2026 - 2030
7.1.4 Proactive Contact Centers, 2031 - 2035
7.1.5 Predictive Contact Centers, 2036
7.2 Role and Contributions of AI in Contact Centers

8. Final Thoughts

For more information about this report visit https://www.researchandmarkets.com/r/pt0g92

Media Contact:

Research and Markets
Laura Wood, Senior Manager
[emailprotected]

For E.S.T. Office Hours Call +1-917-300-0470
For U.S./CAN Toll Free Call +1-800-526-8630
For GMT Office Hours Call +353-1-416-8900

U.S. Fax: 646-607-1907
Fax (outside U.S.): +353-1-481-1716

SOURCE Research and Markets


Breakthrough Study Validates Artificial Intelligence as a Novel Biomarker in Predicting Immunotherapy Response – Published in Journal of Clinical…

Posted: at 2:18 am

The Journal of Clinical Oncology (JCO) is an international, peer-reviewed medical journal published by the American Society of Clinical Oncology (ASCO), with an impact factor of 44.54. This is the first time that research on AI biomarkers has been published in an international SCI-grade journal of JCO's prestige.

"Immune phenotyping of tumor microenvironment is a logical biomarker for immunotherapy, but objective measurement of such would be extremely challenging," said Professor Tony Mok from the Chinese University of Hong Kong, co-senior author of the journal. "This is the first study that adopted AI technology to define the tumor immune phenotype, and to demonstrate its ability in predicting treatment outcomes of anti-PD-L1 therapy in two large cohorts of patients with advanced non-small cell lung cancer."

Immune checkpoint inhibitors (ICI) are a standard therapy for advanced non-small cell lung cancer (NSCLC) with programmed death ligand-1 (PD-L1) expression. However, outcomes vary depending on the patient's tumor microenvironment.

Assessing the PD-L1 tumor proportion score (TPS) can bring predictive benefit for patients with high expression (over 50%), who show superior response to ICI therapy over standard chemotherapy. However, ICIs lose their potency in patients with PD-L1 TPS between 1% and 49%, showing outcomes similar to chemotherapy. Therefore, the development of an accuracy-enhanced biomarker to predict ICI response in NSCLC patients with low PD-L1 expression is highly warranted.
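
As a worked illustration of the stratification described above, here is a minimal sketch in Python. The cutoffs come from the article; the function name and patient records are invented for illustration only.

```python
def pd_l1_group(tps: float) -> str:
    """Stratify a patient by PD-L1 tumor proportion score (TPS).

    Cutoffs follow the article: >= 50% is 'high' (superior ICI response),
    1-49% is 'low' (outcomes similar to chemotherapy), < 1% is 'negative'.
    """
    if tps >= 50:
        return "high"      # strong candidates for ICI therapy
    elif tps >= 1:
        return "low"       # the group where an added biomarker is needed
    else:
        return "negative"

# Example: isolating the patients for whom an AI-derived immune
# phenotype would add the most information (all records invented).
patients = [{"id": 1, "tps": 65}, {"id": 2, "tps": 20}, {"id": 3, "tps": 0}]
low_expressors = [p for p in patients if pd_l1_group(p["tps"]) == "low"]
print(low_expressors)  # [{'id': 2, 'tps': 20}]
```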

While tumor infiltrating lymphocytes (TIL) are promising biomarkers for predicting ICI treatment outcomes apart from PD-L1, clinical application remains challenging as TIL quantification involves a manual evaluation process bound to practical limitations of interobserver bias and intensive labor. Employing AI's superhuman computational capabilities should open new possibilities for the objective quantification of TIL.

To validate immune phenotyping as a complementary biomarker in NSCLC, researchers divided 518 NSCLC patients into three groups based on their tumor microenvironment: inflamed, immune-excluded, and immune-desert. The immune phenotype groups showed statistically significant differences in progression-free survival (PFS) and overall survival (OS).

Furthermore, analysis of NSCLC patients with PD-L1 TPS between 1% and 49% based on their immune phenotype found that the inflamed group showed significantly better objective response rates (ORR) and progression-free survival, compared to the non-inflamed groups. This shows the ability of Lunit SCOPE IO, the AI model used to determine the immune phenotypes, to supplement PD-L1 TPS as a biomarker by accurately predicting immunotherapy response for patients with low PD-L1 TPS.

"Lunit has demonstrated through several abstracts the credibility of Lunit SCOPE IO as a companion diagnostic tool to predict immunotherapy treatment outcomes," said Chan-Young Ock, Chief Medical Officer at Lunit. "This study is a proof-of-concept that compiles all of our past research that elucidates Lunit AI's ability to optimize cancer treatment selection."

Last year, Lunit announced a strategic investment of USD 26 million from Guardant Health, Inc., a leading precision oncology company. Following this major collaboration intended to reshape and innovate the precision oncology landscape, Lunit continues to refine its global position by validating the effectiveness of its AI technology through various studies.

SOURCE Lunit


Production methods of tomorrow: More efficient tool use through artificial intelligence – ETMM Online

Posted: at 2:18 am

16.03.2022
A guest post by Mathias Schmidt*

When machining a component, tool wear and the metal removal rate are the decisive factors. Machine learning can provide a valuable contribution to the optimisation of production costs by supporting the decision-making process for tool changes.



As with all industrial fields, there is ever-growing cost pressure in the machining sector. The more efficiently tools are used, the lower the costs become. However, there are no one-size-fits-all solutions here; individual processes differ too much from one application to the next. Transfer learning can offer a solution: knowledge from related tasks that have already been learned is used to train machine learning (ML) models more quickly for new, but related, tasks. A research project funded by the German Federal Ministry of Education and Research (BMBF) has been running since June 2021 to explore the possibilities of transfer learning in machining and make it usable for industry.

The production costs of a machined component are largely determined by the metal removal rate and tool wear. With constantly increasing cost pressure, optimising tool use is therefore a promising starting point for reducing costs and increasing efficiency. If tools are replaced too late, the wear has a negative effect on the workpiece quality. In addition to deviations from the required geometric tolerances, increased burr formation, increased roughness and the impairment of the metallurgical and mechanical properties of the workpiece edge zone are consequences of worn tools. Therefore, in industrial practice, tools are often replaced far too early as a precaution. But this also has a negative effect on production costs. In addition to the wasted tool life potential, set-up times and tool costs also increase. AI-supported, intelligent tool management can help to optimise tool life.

By first learning suitable models, it is possible to predict tool wear during machining from in-situ measurement of vibrations, acoustic signals or process forces. Conversely, the expected process forces and temperatures can be estimated from a known initial state of wear. In addition, with a known selection of process parameters, it is possible to predict the production costs and component properties such as roughness, burr height and the resulting microstructure or microhardness for different manufacturing processes. This means that tools can be used for much longer without the risk of problematic wear. In this way, a resource-efficient and sustainable improvement in productivity can be realised, which can contribute significantly to an increase in the competitiveness of manufacturing companies.
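
To make the idea concrete, here is a hedged sketch of a wear-prediction model of this kind. The sensor features and data are synthetic stand-ins; a real system would be trained on measured vibration, acoustic-emission and force signals rather than random numbers, and the model choice is illustrative, not the project's actual method.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for in-situ measurements: per-cut RMS vibration,
# acoustic-emission energy, and mean cutting force (invented units).
n = 500
X = rng.normal(size=(n, 3))
# Invented ground truth: wear grows with all three signals plus noise.
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 0.1, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(f"R^2 on held-out cuts: {model.score(X_test, y_test):.2f}")

# A tool-change rule could then compare predicted wear to a threshold
# instead of replacing tools conservatively after a fixed number of cuts.
```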

Not all machining is the same, however. In addition to a variety of materials that can be machined, it is always important to consider the process itself. Even with standard tools, there are significant differences. The tools not only consist of different materials suitable for the respective application, but usually have different geometries and possibly even coatings. The results of one application therefore cannot be easily transferred to other applications. Furthermore, training the systems is often very time-consuming. Up to now, available solutions for optimisation by means of ML usually refer to a specific cutting process on a material with defined tools and a defined range of cutting parameters, usually under laboratory conditions. As a result, it is not possible to transfer the models to real, varying machining processes in manufacturing companies using current methods.

Transfer learning can offer a possible solution. Here, knowledge from related applications that have already been learned is used to train ML models more quickly for new tasks or use cases. However, there are still no procedural models that enable transfer learning to be used for applications in everyday industry. This is where the research project "Control of machining processes through transferable artificial intelligence - basis for process improvements and new business models" (TransKI) comes in; it is part of the funding measure "Learning production technology - use of artificial intelligence (AI) in production" (ProLern) and is funded by the German Federal Ministry of Education and Research (BMBF).
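
The article does not spell out TransKI's methods, but the general transfer-learning pattern it names can be sketched as follows: a model trained on a data-rich process is reused, its feature extractor is frozen, and only its output head is retrained on the small dataset available from the new, related process. Everything below (architecture, data, dimensions) is illustrative.

```python
import torch
import torch.nn as nn

# Source model: assumed to be trained on a data-rich machining process
# (freshly initialized here purely so the sketch runs standalone).
source_model = nn.Sequential(
    nn.Linear(3, 32), nn.ReLU(),   # shared feature extractor
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1),              # wear-prediction head
)

# Transfer step: freeze the feature extractor, retrain only the head
# on the small dataset from the new, related process.
for layer in list(source_model.children())[:-1]:
    for p in layer.parameters():
        p.requires_grad = False

optimizer = torch.optim.Adam(source_model[-1].parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Tiny invented dataset from the new process: 16 cuts, 3 sensor features.
X_new = torch.randn(16, 3)
y_new = torch.randn(16, 1)

for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(source_model(X_new), y_new)
    loss.backward()
    optimizer.step()
```

The payoff of this pattern is exactly what the article describes: far fewer labeled cuts are needed for the new process than training from scratch would require.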

The overall objective of the project, the development of transfer learning for the creation of ML models that can be transferred to new fields of application with little effort, is divided into three sub-goals: first, determining and modelling causal interactions in machining; second, ensuring transferability; and third, making the models usable.


In the first phase of the research project, industrial use cases are defined, and machining tests are carried out and evaluated. With the processed data from these tests, basic ML models can be developed. The second phase is about making the models suitable for new use cases. In this process, the test environment, i.e. the process, the machine and sensor technology as well as the material are changed step by step, wear-dependent commonalities are identified and expert knowledge is included in the investigations. In the third project phase, an assistance system for process pre-control and transfer-learning-based business models will be developed to make the optimised ML models industrially usable.

The knowledge gained will be validated in several heterogeneous pilot applications for drilling and milling. Furthermore, the project not only addresses the specific problem of the tool industry, but also opens up new ways through transfer learning to leverage previously untapped value creation potential, for example with capital goods manufacturers and manufacturing companies in other sectors.

This forward-looking and comprehensive project requires expertise and resources from various fields. That is why a total of seven partners are involved in the joint project. The experts for precision tools from K.-H. Müller Präzisionswerkzeuge are coordinating the project and are responsible for the development of innovative, AI-based business models. Robert Bosch is examining the transferability of the ML models to industrially relevant milling processes and is contributing existing experience in the use of AI/ML methods in production technology to the project. As an industrial partner in the field of precision drilling technology, Botek Präzisionsbohrtechnik is an essential part of the project, both in carrying out the experiments and in validating the transfer learning. Empolis Information Management is responsible for the data preparation as well as the development of the ML models and ensuring the transferability. The analysis of machining mechanisms in drilling and milling using parametric models and machine learning is being carried out at the Chair of Production Engineering and Organisation FBK at the Technical University of Kaiserslautern. The tool manufacturer Paul Horn is responsible for carrying out and evaluating the milling tests and plays a major role in the data preparation. The Institute for Machine Tools IfW at the University of Stuttgart focuses on research into process pre-control and is responsible for the work at the interface between ML models and machine control. The project is scheduled to run until 31 May 2024.

* Mathias Schmidt is managing partner of K.-H. Müller Präzisionswerkzeuge GmbH.



Award-winner warns of the failures of artificial intelligence – The Australian Financial Review

Posted: at 2:18 am

On a positive note, he says AI has been identified as a key enabler for 79 per cent (134) of the targets under the United Nations Sustainable Development Goals (SDGs). However, 35 per cent (59 targets) may experience a negative impact from AI.

Unfortunately, he says, unless we start to address the inequities associated with the development of AI right now, we're in grave danger of not achieving the UN's SDGs and, more pertinently, if AI is not properly governed and proper ethics are applied from the beginning, it will have not only a negative physical impact, it will also have a significant social impact globally.

"There are significant risks to human dignity and human autonomy," he warns.

If AI is not properly governed and it's not underpinned by ethics, it can create socio-economic inequality and impact on human dignity.

A part of the problem at present is that most AI is being developed for a commercial outcome, with estimates suggesting its commercial worth will be $15 trillion a year by 2030.

Unfortunately, the path we're on poses some significant challenges.

Samarawickrama says AI ethics is underpinned by human ethics and the underlying AI decision-making is driven by data and a hypothesis created by humans.

The danger is that much AI is built off the back of the wrong hypothesis, because there is an unintentional bias built into the initial algorithm. Every conclusion the AI reaches comes from that hypothesis, which means every decision it makes, and the quality of that decision, is based on a human's ethics and biases.

For Samarawickrama, this huge flaw in AI can only be rectified if diversity, inclusion and socio-economic inequality are taken into account from the very beginning of the AI process.

We can only get to that point if we ensure we have good AI governance and ethics.

The alternative is that we're basically set up to fail if we do not have that diversity of data.

Much of his work in Australia is with the Australian Red Cross and its parent, the International Federation of Red Cross and Red Crescent Societies (IFRC), where he has built a framework connecting AI to the seven Red Cross principles in a bid to link AI to the IFRC's global goal of mitigating human suffering.

And while this is enhancing data literacy across the Red Cross, it also has potential uses in many other organisations, because it's about increasing diversity and social justice around AI.

It's a complex problem to solve because there are a lot of perspectives as to what mitigating human suffering involves. It goes beyond socio-economic inequality and bias.

For example, the International Committee of the Red Cross is concerned about autonomous weapons and their impact on human suffering.

Samarawickrama says if we are going to achieve the UN SDGs, as well as reap the benefits of a $15 trillion a year global AI economy by 2030, we have to work hard to ensure we get AI right now by focussing on AI governance and ethics.

If we don't, we create a risk of failing to achieve those goals, and we need to reduce those risks by ensuring AI can bring the benefits and value it promises to all of us.

"It's why the Red Cross is a good place to start, because it's all about reducing human suffering, wherever it's found, and we need to link that to AI," Samarawickrama says.


Use of Mobile and Wearable Artificial Intelligence in Child and Adolescent Psychiatry: Scoping Review – Newswise

Posted: at 2:18 am

Background: Mental health disorders are a leading cause of medical disabilities across an individual's lifespan. This burden is particularly substantial in children and adolescents because of challenges in diagnosis and the lack of precision medicine approaches. However, the widespread adoption of wearable devices (eg, smart watches) that are conducive for artificial intelligence applications to remotely diagnose and manage psychiatric disorders in children and adolescents is promising.

Objective: This study aims to conduct a scoping review to study, characterize, and identify areas of innovations with wearable devices that can augment current in-person physician assessments to individualize diagnosis and management of psychiatric disorders in child and adolescent psychiatry.

Methods: This scoping review followed the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. A comprehensive search of several databases from 2011 to June 25, 2021, limited to the English language and excluding animal studies, was conducted. The databases included Ovid MEDLINE and Epub Ahead of Print, In-Process and Other Non-Indexed Citations, and Daily; Ovid Embase; Ovid Cochrane Central Register of Controlled Trials; Ovid Cochrane Database of Systematic Reviews; Web of Science; and Scopus.

Results: The initial search yielded 344 articles, from which 19 (5.5%) articles were left on the final source list for this scoping review. Articles were divided into three main groups as follows: studies with the main focus on autism spectrum disorder, attention-deficit/hyperactivity disorder, and internalizing disorders such as anxiety disorders. Most of the studies used either cardio-fitness chest straps with electrocardiogram sensors or wrist-worn biosensors, such as watches by Fitbit. Both allowed passive data collection of the physiological signals.

Conclusions: Our scoping review found a large heterogeneity of methods and findings in artificial intelligence studies in child psychiatry. Overall, the largest gap identified in this scoping review is the lack of randomized controlled trials, as most studies available were pilot studies and feasibility trials.


Artificial Intelligence and the Future of War – The National Interest

Posted: at 2:18 am

Consider an alternative history for the war in Ukraine. Intrepid Ukrainian Army units mount an effort to pick off Russian supply convoys. But rather than rely on sporadic air cover, the Russian convoys travel under a blanket of cheap drones. The armed drones carry relatively simple artificial intelligence (AI) that can identify human forms and target them with missiles. The tactic claims many innocent civilian lives, as the drones kill nearly anyone close enough to the convoys to threaten them with anti-tank weapons. While the Ukrainians attempt to respond to the setback with their own drones, they are overwhelmed by the more numerous Russian drones.

It is increasingly plausible that this scenario could be seen in the next major war. In fact, the future of AI in war is already here, even if it's not yet being employed in Ukraine. The United States, China, Russia, Britain, Israel, and Turkey are all aggressively designing AI-enabled weapons that can shoot to kill with no humans in the decision-making loop. These include fleets of ghost ships, land-based tanks and vehicles, AI-enabled guided missiles, and, most prominently, aircraft. Russia is even developing autonomous nuclear weapons; the 2018 U.S. Nuclear Posture Review stated that Russia is developing a "new intercontinental, nuclear-armed, nuclear-powered, undersea autonomous torpedo." Lethal autonomous weapons (LAWs) have already been used in offensive operations to attack human combatants. According to a UN Security Council report published in March 2021, a Turkish Kargu-2 drone was used in Libya to mount autonomous attacks on human targets, hunting down retreating logistics and military convoys and "attack[ing] targets without requiring data connectivity between the operator and the munition."

In reality, autonomous weapons that kill without an active human decision are now hundreds of years old. Land and naval mines have been used since at least the 1700s. Missile defense systems such as the Patriot and Phalanx can operate autonomously to attack enemy aircraft or surface vessels. Furthermore, sentry guns that automatically fire at targets in combat patrol zones have been deployed on armored vehicles.

That said, these systems have largely been defensive in nature. The Rubicon the world is now crossing would allow offensive weapons, equipped with enhanced intelligence for more complex decisions, to play a major role in conflicts. This would create a battlefield on which robots and autonomous systems are more numerous than human soldiers.

Why Governments Love Killer Robots

The attraction of killer robots and autonomous systems is clear. Using them to do the dirty work means that valuable soldiers do not have to die and expensive pilots do not have to fly costly equipment. Robots don't go to the bathroom, need water, or miss a shot when they sneeze or shake. Robots make mistakes, but so do humans; protagonists of offensive AI assume that robot mistakes will be more predictable, with little regard for the increasing unpredictability of behavior that arises from the emergent properties of complex systems. Finally, robots can be trained instantly, and replacing them is much faster and cheaper than replacing human combatants.

Most importantly, the political cost of using robots and LAWs is far lower. There would be no footage of captured soldiers or singed corpses, of pilots on their knees in a snowy field begging for mercy. This is why warfare will likely continue to become more remote and faceless. AI on weapons just takes the next logical step along this path. It enables robot weapons to operate at a wider scale and react without needing human inputs. This makes the military rationale crystal clear: not having AI capabilities will put an army at a great disadvantage. Just as software is eating the world of business, it is also eating the military world. AI is the sharp end of the software spear, leveling the playing field and allowing battlefield systems to evolve at the same speed as popular consumer products. The choice not to use AI on the battlefield will become akin to a bad business decision, even if there are tangible moral repercussions.

The Benefits and Risks of Fewer Humans in the Loop

As we explained in our book, The Driver in the Driverless Car, supporters of autonomous lethal force argue that AI-controlled robots and drones might prove to be far more moral than their human counterparts. They claim that a robot programmed not to shoot women or children would not make mistakes in the pressure of battle. Furthermore, they argue, programmatic logic has an admirable ability to reduce the core moral issue down to binary decisions. For example, an AI system with enhanced vision might instantly decide not to shoot a vehicle painted with a red cross as it hurtles toward a checkpoint.

These lines of thought are essentially counterfactuals. Are humans more moral if they can program robots to avoid the weaknesses of the human psyche that can cause experienced soldiers to lose their sense of reason and morality in the heat of battle? When it is hard to discern if an adversary follows any moral compass, such as in the case of ISIS, is it better to rely on the cold logic of the robot warrior rather than on an emotional human being? What if a non-state terrorist organization develops lethal robots that afford them a battlefield advantage? Is that a risk that the world should be willing to take in developing them?

There are clear, unacceptable risks with this type of combat, particularly in cases when robots operate largely autonomously in an environment with both soldiers and civilians. Consider the example of Russian drones flying air cover and taking out anything that moves on the ground. The collateral damage and the deaths of innocent non-combatants would be horrific. In several instances, including a famous 1979 incident in which a human inadvertently set off alarms warning of a Soviet nuclear strike, automated systems have given incorrect information that human operators debunked just in time to avert a nuclear exchange. With AI, decisions are made far too quickly for humans to correct them. As a result, catastrophic mistakes are inevitable.

We also shouldn't expect that LAWs will remain exclusive to nation-states. Because their manufacturing costs follow Moore's Law, they will quickly enter the arsenals of sophisticated non-state actors. Affordable drones can be fitted with off-the-shelf weapons, and their sensors can be tethered to home-grown remote AI systems to identify and target human-like forms.

We currently sit at a crossroads. The horrific brutality of Russia's invasion of Ukraine has demonstrated yet again that even great powers may cast aside morality for national narratives that are convenient to autocrats and compromised political classes. The next great war will likely be won or lost in part due to the smart use of AI systems. What can be done about this looming threat?

While a full ban on AI-based technologies would have been ideal, it is now impossible and counterproductive. For example, a ban would handcuff NATO, the United States, and Japan in future combat and make their soldiers vulnerable. A ban on applying AI systems to weapons of mass destruction is more realistic. Some may say this is a distinction without a difference, but the world has successfully limited weapons that can have global impacts. However, we have crossed the Rubicon and have few choices in a world where madmen like Putin attack innocent civilians with thermobaric rockets and threaten nuclear escalation.

Vivek Wadhwa and Alex Salkever are the authors of The Driver in the Driverless Car and From Incremental to Exponential: How Large Companies Can See the Future and Rethink Innovation. Their work explains how advancing technologies can be used for both good and evil, to solve the grand challenges of humanity or to destroy it.



‘Shadow’ Review: A Darkly Humorous Conversation About Fighting Artificial Intelligence | DMT – DMT

Posted: at 2:18 am

People with any form of intellectual disability have largely been portrayed on-screen by able-bodied actors. Some of the most popular films that are guilty of doing so are Forrest Gump (1994), I Am Sam (2001), Silver Linings Playbook (2012), Barfi! (2012), Rain Man (1988), etc. As you can see, these movies have been widely lauded by critics and audiences alike, to the point that the conversation has shifted from inclusivity to artistic accuracy. And petitions to let people with intellectual disabilities play characters with intellectual disabilities are being thwarted with the sentiment that they're not good actors. While there are many movies and shows to prove that that's not the case at all, Shadow (2022) is the most recent and one of the best examples.

Directed by Bruce Gladwin, Shadow follows Simon, Scott, and Sarah, a trio of activists with intellectual disabilities. They are worried about the future impacts of artificial intelligence and think that it's obviously going to harm able-bodied people. But it's going to be particularly damaging for people with disabilities, since they're already a marginalized part of society. So, they hold a town hall meeting to discuss it. However, for some reason, Simon goes into savior-complex mode. As Scott becomes a little too enamored with his power as a facilitator, he starts dismissing everyone's opinions. Sarah, who feels she has been overlooked and underestimated all her life, explodes after being sidelined again. Now, whether they'll be able to put their differences aside and fulfill the purpose of the meeting is what forms the crux of the story.

Shadow is part documentary and part narrative-driven. The central and supporting cast play themselves in the movie. That's made obvious in the listing, of course. But it's further established in the opening minutes when you see people practicing a piece of dialogue in front of a camera (which you're seeing through another camera). And then, the final version of that dialogue is integrated into the linear narrative. It happens again when Simon is about to give a history lesson about how society has treated people with intellectual disabilities. Cinematographer and editor Rhian Hinkley dollies the camera forward, bringing Simon into focus as he speaks his line. However, Scott thinks he needs to do another take. So, the camera resets back to its original position, and you see the retake instead of just seeing the final take like you do in every other movie. In doing so, Gladwin advances Scott's character as this over-assertive person while presenting Simon's flaws on the proverbial table.

The elements in Shadow's narrative and its overall presentation are hugely educational. There's a massive chunk of the movie that's dedicated to talking about how people with disabilities were experimented upon in Iowa, the exploitation of women with intellectual disabilities in the Irish Magdalene Laundries controversy, and how a Hasbro branch in Ireland subcontracted said laundries to assemble and package their games. It's tied back to the central topic of the dangers of A.I. with the through line that people (or any entity, for that matter) who think that they are superior to a section of society will always look to oppress them. So, if A.I. is allowed to do its thing, it's eventually going to enslave human beings. But the hilarious devolution into a power struggle between Sarah, Scott, and Simon acts as a great counterpoint to Hollywood movies that have depicted people with disabilities as either geniuses or people who aren't able to form coherent thoughts. It shows that people exist between those two extremes and why they need to be represented more.

Performance-wise, it's really tough to judge how much of what we're seeing is acting and how much of it is unscripted riffing between the activists. Which, in and of itself, is a testament to the writing by the screenwriters and the effortlessness with which the people on-screen carry themselves. Sarah, Scott, and Simon are undoubtedly the linchpins of Shadow. Simon is measured throughout the film, which gives his eventual realization about where the whole meeting is going, the melancholic feeling it wants to achieve. Sarah and Scott are given a lot of room to breathe, and they utilize it fully. The whole bit where Scott gets locked out and then has to find his way back to the meeting (while being flipped off by Mark because he was condescending to him earlier) is undoubtedly comedic gold. This again helps the film and the filmmakers to double down on the point that they're trying to make about the representation of people with intellectual disabilities in mainstream discourse.

In conclusion, Shadow is one of the significant movies of the year. Those who are for inclusivity and don't partake in the heinous act of othering people who are, for lack of a better term, atypical will find echoes of their belief system in this film. Those who are fighting for the same will definitely come out of it with a sense of reassurance that they are walking in the right direction. But it's those who do not fall into either of those categories, who are ignorant towards disabilities and think they've been aptly represented by Hollywood, who will probably have a profound revelation. And even if they fail to understand how urgently people with disabilities need equity and equality of opportunity, well, at least they'll walk away with knowledge of how this community has been historically exploited.

Shadow is a 2022 Drama Film directed by Bruce Gladwin. The film had its screening at the SXSW film festival.


People trust AI fake faces more than real ones, according to a new study – World Economic Forum

Posted: at 2:18 am

Fake faces created by artificial intelligence (AI) are considered more trustworthy than images of real people, a new study has found.

The results highlight the need for safeguards to prevent deep fakes, which have already been used for revenge porn, fraud and propaganda, the researchers behind the report say.

Real (R) and synthetic (S) faces were rated for trustworthiness, with statistically significant results. (Image: PNAS)

The study, by Dr Sophie Nightingale from Lancaster University in the UK and Professor Hany Farid from the University of California, Berkeley, in the US, asked participants to identify a selection of 800 faces as real or fake, and to rate their trustworthiness.

After three separate experiments, the researchers found the AI-created synthetic faces were on average rated 7.7% more trustworthy than the real faces. This difference is statistically significant, they add. The three faces rated most trustworthy were fake, while the four faces rated most untrustworthy were real, according to the magazine New Scientist.
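
For readers wondering what "statistically significant" means in practice here: with ratings collected on a 7-point scale, a comparison of the two groups' means is the standard approach. The numbers below are invented to echo the reported gap; they are not the study's data, and the paper's actual analysis may differ.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Invented 1-7 trustworthiness ratings; means chosen only to echo the
# reported ~7.7% gap between synthetic and real faces.
real_ratings = rng.normal(4.48, 0.8, 300)
synthetic_ratings = rng.normal(4.82, 0.8, 300)

t, p = stats.ttest_ind(synthetic_ratings, real_ratings)
gap = (synthetic_ratings.mean() / real_ratings.mean() - 1) * 100
print(f"gap = {gap:.1f}%, t = {t:.2f}, p = {p:.2g}")
```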

The fake faces were created using generative adversarial networks (GANs), AI programmes that learn to create realistic faces through a process of trial and error.
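
As a rough sketch of that trial-and-error process (nowhere near the production-scale models used to synthesize photorealistic faces), a GAN trains two networks against each other: a discriminator learns to separate real samples from generated ones, while a generator learns to fool it. The toy dimensions and data below are invented for illustration.

```python
import torch
import torch.nn as nn

# Toy GAN: 2-D points stand in for images; real face generators scale
# this same adversarial loop up to high-resolution photographs.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
D = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 2) * 0.5 + 2.0   # stand-in "real" data
    fake = G(torch.randn(64, 8))            # generated samples

    # Discriminator: label real samples 1, generated samples 0.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the discriminator call fakes real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```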

The study, "AI-synthesized faces are indistinguishable from real faces and more trustworthy," is published in the journal Proceedings of the National Academy of Sciences of the United States of America (PNAS).

It urges safeguards to be put into place, which could include incorporating robust watermarks into the image to protect the public from deep fakes.
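
The researchers do not prescribe a specific scheme, but as a minimal illustration of what watermarking an image can mean, here is a least-significant-bit embed/extract pair. Note that an LSB mark is simple rather than robust; production-grade watermarks typically embed in the frequency domain so they survive re-encoding and resizing.

```python
import numpy as np

def embed_lsb(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write watermark bits into the least significant bit of the first pixels."""
    flat = image.flatten().copy()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_lsb(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the watermark back out of the least significant bits."""
    return image.flatten()[:n_bits] & 1

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in image
mark = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # "synthetic" tag
tagged = embed_lsb(img, mark)
assert np.array_equal(extract_lsb(tagged, mark.size), mark)
```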

Guidelines on creating and distributing synthesized images should also incorporate ethical guidelines for researchers, publishers, and media distributors, the researchers say.

The four most (top row) and four least (bottom row) trustworthy faces, according to the study. (Image: PNAS)

Using AI responsibly is the immediate challenge facing the field of AI governance, the World Economic Forum says.

In its report, The AI Governance Journey: Development and Opportunities, the Forum says AI has been vital in progressing areas like innovation, environmental sustainability and the fight against COVID-19. But the technology is also challenging us with new and complex ethical issues and racing ahead of our ability to govern it.

The report looks at a range of practices, tools and systems for building and using AI.

These include labelling and certification schemes; external auditing of algorithms to reduce risk; regulating AI applications, and greater collaboration between industry, government, academia and civil society to develop AI governance frameworks.

The World Economic Forum's Centre for the Fourth Industrial Revolution, in partnership with the UK government, has developed guidelines for more ethical and efficient government procurement of artificial intelligence (AI) technology. Governments across Europe, Latin America and the Middle East are piloting these guidelines to improve their AI procurement processes.

Our guidelines not only serve as a handy reference tool for governments looking to adopt AI technology, but also set baseline standards for effective, responsible public procurement and deployment of AI, standards that can eventually be adopted by industries.

We invite organizations that are interested in the future of AI and machine learning to get involved in this initiative. Read more about our impact.

The views expressed in this article are those of the author alone and not the World Economic Forum.


Artificial Intelligence In Agriculture Poses a Number Of Risks; Read To Find Out More – Krishi Jagran

Posted: at 2:18 am


Although artificial intelligence (AI) has the potential to improve crop management and agricultural output, experts warn that there are significant risks associated with implementing new AI technologies that are being overlooked.

"The implications of machine learning (ML) models, expert systems, and autonomous machines for farms, farmers, and food security are little understood and underappreciated," according to the authors of a recent risk studypublished in Nature.

The researchers examined the risks of AI in agriculture, including interoperability, safety and security, data dependability, and unforeseen socio-ecological effects from the deployment of machine learning models to optimize yields.

By quickly recognizing plant illnesses and administering agrochemicals efficiently, AI may be utilized in agriculture to improve crop management and yield. Machine learning can aid in quick plant phenotyping, agricultural monitoring, soil composition assessment, weather forecasting, and yield prediction.
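
As a sketch of what "recognizing plant illnesses" typically means in practice, a small convolutional classifier over leaf photos might look like the following. The label set, image size, and architecture are invented for illustration; real systems are trained on large labeled leaf datasets and far deeper networks.

```python
import torch
import torch.nn as nn

CLASSES = ["healthy", "leaf_blight", "rust"]  # invented label set

# Tiny CNN: two conv/pool stages, then a linear classifier head.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, len(CLASSES)),
)

# One fake 64x64 RGB "leaf photo"; a real pipeline would crop,
# normalize, and batch images captured in the field.
leaf = torch.randn(1, 3, 64, 64)
probs = model(leaf).softmax(dim=1)
print(CLASSES[probs.argmax().item()], probs.max().item())
```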

However, according to Asaf Tzachor of the University of Cambridge's Centre for the Study of Existential Risk (CSER), the implementation of AI and ML design might jeopardize ecosystems and expose producers and agri-food providers to accidents and cyberattacks.

Before safely adopting AI for agriculture, the authors have highlighted a number of risks that must be considered.

According to the experts, cyber-attackers may contaminate databases and, among other things, shut off sprayers, autonomous drones, and robotic harvesters.

The reliability and usefulness of agricultural data is also a challenge, as indigenous farming methods are largely underrepresented in statistics, despite their significant contribution to local food security.

Cognitive computing is being utilised in India to learn, comprehend, and interact with various contexts in order to increase productivity. Microsoft has partnered with 175 farmers in Andhra Pradesh to provide agricultural, land, and fertiliser advisory services, a collaboration that resulted in a 30% improvement in yield per hectare in 2016.

United Phosphorus (UPL), India's largest manufacturer of agrochemicals, has also partnered with Google to build a Pest Risk Prediction API that utilises AI to predict the risk of pest attack in advance.

In the initial phase, the app supplied automated voice calls for cotton harvests to roughly 3,000 marginal farmers in Telangana, Maharashtra, and Madhya Pradesh who had less than five acres of land. Based on weather conditions and sowing recommendations, the calls offered information on pest attack threats. One of the most serious concerns of AI in India is that it may expose such farmers to false information.

Furthermore, due to marginalisation, poor internet penetration, and a digital divide in India, smallholders may not be able to adopt such modern technology, expanding the gap between commercial and subsistence farmers.

The researchers advocate enlisting the help of 'white hat hackers' in uncovering security holes in order to safeguard people from intrusions.

The dangers also highlight the need for "agricultural AI systems and services that are sensitive to context, taking into account potential social and ecological repercussions," according to the report.

Risks could be avoided if extensive risk assessments and governance mechanisms were implemented.


Artificial Intelligence in Internet content The Stute – The Stute

Posted: March 11, 2022 at 11:49 am

Recent developments in artificial intelligence (AI) and computer vision have resulted in the ability to create fake people. These people are either generated entirely from scratch by an AI or are made by digitally altering the appearance of actors that appear similar to them.

A movement called RepresentUs recently tried to publish deep fake advertisements on several news outlets, such as CNN and Fox News. The advertisements depicted Russian president Vladimir Putin and North Korean leader Kim Jong-un, both stating that Americans need to act in order to protect their democracy. The news outlets did not let these advertisements run, pulling them shortly before they were due to air.

While it was intended to be for a good cause, the possibility of this technology being used for nefarious purposes remains. At the end of the advertisements, these leaders stated that the video wasn't real, but there's no reason that this statement would be included if someone wanted to tarnish another person's reputation. Without this information, it becomes difficult to identify which videos are real and which are not.

Artificially generated influencers are also gaining popularity on social media platforms such as Instagram. The AI-generated influencer Rozy is an example of an Instagram influencer developed by a company that is able to represent various companies. Unlike a real person, an influencer generated by an AI has no chance of being involved in a scandal that would negatively impact a company's reputation, and never ages, meaning they could represent a company for decades to come. These Instagram accounts are less like people and more like characters such as Mickey Mouse, with the only difference being the level of detail and photorealism that these accounts possess.

All of this points towards the need for transparency from users of this kind of artificial intelligence. Digital influencers usually indicate that they are artificially generated in their bio or elsewhere, but similarly to deep fakes, there are no laws or regulations regarding their use. We are still in the early stages of this technology, but we are quickly approaching the point where it will become widely accessible and easy to use.

As AI generation of this content improves, similar techniques for detecting the work of an AI are also being developed. However, it is unclear whether these detective AIs will be able to keep up with the rate at which their counterparts are being developed.
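
One published line of detection research looks for statistical artifacts that generators leave in an image's frequency spectrum. The crude statistic below is only meant to illustrate that idea, under invented assumptions; a usable detector would fit a threshold (or a full classifier over many such features) on known real and generated images.

```python
import numpy as np

def high_freq_energy(image: np.ndarray) -> float:
    """Fraction of spectral energy above a radial frequency threshold.

    Some generated images show characteristic high-frequency artifacts;
    this single number is a toy feature for illustration only.
    """
    spec = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spec.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)  # distance from spectrum center
    high = spec[r > min(h, w) / 4].sum()
    return float(high / spec.sum())

# In practice this would be computed over large sets of known real and
# generated images, and a decision rule fit on the resulting features.
print(high_freq_energy(np.random.rand(64, 64)))
```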

Further legislation will be needed in order to regulate these technologies. In the same way that a carton of orange juice has nutritional information on the back, perhaps Instagram accounts will have a required "artificially generated" tag. The societal repercussions of this kind of technology are unclear, but there's no doubt that it will have a huge impact on our lives.

For now, there's not much that we can do in order to control the path that artificial intelligence development takes. Instead, we should focus on trying to identify sources of misinformation and prevent their spread. Sources such as Votesmart and Fact Check are great ways to stay informed and confirm (or deny) certain political facts. Besides this, we can lobby our politicians and push for more regulation and research in the hopes of a brighter and more transparent future.

Senioritis is an Opinion column written by one or two Stevens student(s) in their last year of study to discuss life experiences during their final year at Stevens, and other related subject matter.

