What Makes Food Delivery Companies Bet Big on Artificial Intelligence and Machine Learning | Learn More in Quantzig’s Recent Article – Business Wire

LONDON--(BUSINESS WIRE)--Quantzig, a global data analytics and advisory firm that delivers actionable analytics solutions to resolve complex business problems, presents comprehensive insights into the benefits of machine learning and artificial intelligence for food delivery companies in its recent article.

What's in it for you?

Talk to our analytics experts for comprehensive insights into how artificial intelligence and machine learning in the food delivery industry are transforming the broader food and beverage industry.

After several years of restricted use and confinement to tech labs, artificial intelligence and machine learning have today become dominant focal points for businesses, especially in the food delivery industry. The food delivery industry is immensely popular among millennials because of the convenience it offers. Increasing competition among food delivery players to improve customer retention rates and product quality has forced companies to explore new ways to improve, and this is where big data, artificial intelligence, and machine learning come into the picture.

Request a FREE proposal to learn more about our customized machine learning solutions for the food industry.

According to Quantzig's artificial intelligence and machine learning experts, "Food delivery industry players are now revolutionizing the food industry by leveraging artificial intelligence and machine learning to enhance their market reach and customer satisfaction rates."

Three Reasons Why Machine Learning for Food Delivery Industry is Important

1: Improve operational efficiency: Machine learning helps food delivery companies better understand customer behavior and provide services that match customer preferences. Artificial intelligence and machine learning are of great use when it comes to analyzing factors like the impact of temperature on food or the impact of market trends on consumption.

2: Enhance customer relationships: The rapid spread of artificial intelligence and machine learning in the food delivery industry has contributed significantly to the growing popularity of chatbots. Chatbots enable food delivery companies to strengthen customer relationships, and they are growing immensely popular thanks to their ability to drive better customer experiences.

3: Accurate delivery time estimates: Machine learning helps food delivery companies collect real-time data about traffic and route plans, giving them accurate estimates of delivery times. Moreover, artificial intelligence and machine learning together can help predict the impact of these factors on food items, so food delivery players can take preventive measures against food wastage.
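The delivery-time idea above can be sketched with a toy model. Everything in this snippet is hypothetical: a simple least-squares fit of delivery minutes against distance, trained on invented past orders and scaled by a live traffic factor. Real systems use far richer features (route plans, weather, kitchen load).

```python
# Hypothetical sketch: estimating delivery time from distance and live traffic.
# The history data is invented for illustration.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Past deliveries: (distance_km, minutes)
history = [(1.0, 12), (2.5, 20), (4.0, 28), (6.0, 40)]
a, b = fit_line([d for d, _ in history], [m for _, m in history])

def eta_minutes(distance_km, traffic_factor=1.0):
    """traffic_factor > 1.0 means heavier-than-usual traffic."""
    return (a + b * distance_km) * traffic_factor

print(round(eta_minutes(3.0, traffic_factor=1.2), 1))  # ~27.5 minutes
```

The traffic multiplier is the crude stand-in for the real-time signals the article mentions; in production these would enter the model as features rather than a post-hoc scale factor.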

Book a FREE solution demo to learn how food delivery apps can leverage artificial intelligence and machine learning to evaluate their customers' sentiments and take appropriate action whenever required.

About Quantzig

Quantzig is a global analytics and advisory firm with offices in the US, UK, Canada, China, and India. For more than 15 years, we have assisted our clients across the globe with end-to-end data modeling capabilities to leverage analytics for prudent decision making. Today, our firm serves 120+ clients, including 45 Fortune 500 companies. For more information on our engagement policies and pricing plans, visit: https://www.quantzig.com/request-for-proposal


Perfectly Imperfect: Coping With The Flaws Of Artificial Intelligence (AI) – Forbes


What is the acceptable failure rate of an airplane? Well, it is not zero, no matter how hard we want to believe otherwise. There is a number, and it is a very low number. When it comes to machines, computers, artificial intelligence, etc., they are perfectly imperfect. Mistakes will be made. Poor recommendations will occur. AI will never be perfect. That does not mean it does not provide value. People need to understand why machines may make mistakes and set their beliefs accordingly. This means understanding the three key reasons why AI fails: implicit bias, poor data, and expectations.

The first challenge is implicit bias: the unconscious perceptions people hold that cloud thoughts and actions. Consider the recent protests on racial justice and police brutality and the powerful message that Black Lives Matter. The Forbes article "AI Taking A Knee: Action To Improve Equal Treatment Under The Law" is a great example of how implicit bias has played a role in discrimination and just how hard (but not impossible) it is to use AI to reduce prejudice in our law enforcement and judicial systems. AI learns from people. If implicit bias is in the training, then the AI will learn this bias. Moreover, when the AI performs work, that work will reflect this bias, even if the work is for social good.

Take, for example, the Allegheny Family Screening Tool, which is meant to predict which children in the welfare system might be at risk of abuse by foster parents. The initial rollout of this solution had some challenges, though. The local Department of Human Services acknowledged that the tool might have racial and income bias: triggers like neglect were often confused or misconstrued, associating foster parents who lived in poverty with inattention or mistreatment. Since learning of these problems, tremendous steps have been taken to reduce the implicit bias in the screening tool. Elimination is much harder. When it comes to bias, how do people manage the unknown unknowns? How is social context addressed? What does right or fair behavior mean? If people cannot identify, define, and resolve these questions, then how will they teach the machine? This is a major reason why AI will be perfectly imperfect: implicit bias.


The second challenge is data. Data is the fuel for AI. The machine trains through ground truth (i.e., rules on how to make decisions, not the decisions themselves) and from lots of big data to learn the patterns and relationships within the data. If our data is incomplete or flawed, then AI cannot learn well. Consider COVID-19. Johns Hopkins, The COVID Tracking Project, the U.S. Centers for Disease Control (CDC), and the World Health Organization all report different numbers. With such variation, it is very difficult for an AI to glean meaningful patterns from the data, let alone find those hidden insights. More challenging, what about incomplete or erroneous data? Imagine teaching an AI about healthcare but only providing data on women's health. That impedes how we can use AI in healthcare.

Then there is the challenge that people may provide too much data. It could be irrelevant, meaningless, or even a distraction. Consider when IBM had Watson read the Urban Dictionary: afterwards, it could not distinguish when to use normal language and when to use slang and curse words. The problem got so bad that IBM had to erase the Urban Dictionary from Watson's memory. Similarly, an AI system needs to hear about 100 million words to become fluent in a language. However, a human child only seems to need around 15 million words to become fluent. This implies that we may not know what data is meaningful. Thus, AI trainers may actually focus on superfluous information that could lead the AI to waste time or, even worse, identify false patterns.

The third challenge is expectations. Even though humans make mistakes, people still expect machines to be perfect. In healthcare, experts have estimated that the misdiagnosis rate may be as high as 20%, which means potentially one out of five patients is misdiagnosed. Given this data, as well as a scenario where an AI-assisted diagnosis may have an error rate of one out of one hundred thousand, most people still prefer to see only the human doctor. Why? One of the most common reasons given is that the misdiagnosis rate of the AI is too high (even though it is much lower than a human doctor's). People expect AI to be perfect. Potentially even worse, people expect the human AI trainers to be perfect too.
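The gap in that scenario is worth making concrete. A quick back-of-the-envelope calculation using the figures above (the ~20% human rate is the experts' estimate; the 1-in-100,000 AI rate is the article's hypothetical):

```python
# Rates as reported in the scenario above; the AI figure is hypothetical.
human_rate = 0.20          # roughly one patient in five misdiagnosed
ai_rate = 1 / 100_000      # the scenario's AI-assisted error rate

# How many times higher is the human misdiagnosis rate?
ratio = human_rate / ai_rate
print(round(ratio))  # 20000
```

In this scenario the human error rate is 20,000 times the AI's, yet the AI's rate is still judged "too high" by most respondents.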

On March 23, 2016, Microsoft launched Tay ("Thinking About You"), a Twitter bot. Microsoft had trained its AI to the level of language and interaction of a 19-year-old American girl. In a grand social experiment, Tay was released to the world. Some 96,000 tweets later, Microsoft had to shut Tay down, about 16 hours after launch, because it had turned sexist and racist and promoted Nazism. Regrettably, some individuals had decided to teach Tay seditious language in order to corrupt it. At the same time, Microsoft did not think to teach Tay about inappropriate behavior, so it had no basis (or reason) to know that something like inappropriate behavior and malicious intent might exist. The grand social experiment resulted in failure and, sadly, was probably more a testament to human society than to the limitations of AI.


Implicit bias, poor data, and people's expectations show that AI will never be perfect. It is not the magic-bullet solution many people hope for. AI can still do some extraordinary things for humans, like restore mobility to a lost limb or improve food production while using fewer resources. People should not discount the value we can get. We should always remember: AI is perfectly imperfect, just like us.


Artificial Intelligence and the Fight Against COVID-19 – nesta

This report studies the levels, evolution, geography, knowledge base and quality of AI research in the COVID-19 mission field using a novel dataset taken from the open preprint sites arXiv, bioRxiv and medRxiv, which we have enriched with geographical, topical and citation data.

Although there has been rapid growth in the levels of AI research to tackle COVID-19 since the beginning of the year, AI remains underrepresented in this area compared to its presence in research outside of COVID-19. So far in 2020, 7.1 per cent of research on COVID-19 references AI, while 12 per cent of research on topics outside COVID-19 references it. After growing rapidly earlier in the year, the share of AI papers in COVID-19 research has stagnated in recent weeks.

More than a third of publications to tackle COVID-19 involve predictive analyses of patient data and in particular medical scans. AI is also being deployed to analyse social media data, predict the spread of the disease and develop biomedical applications.

China, the US, the UK, India and Canada are the global leaders in the development of AI applications to tackle COVID-19 research, accounting for 62 per cent of the institutional participations for which we have geographical data. China in particular is overrepresented in COVID-19 AI research. We have also identified a substantial number of publications involving institutions that we are unable to match with the global research institution database we are using. This is consistent with the idea that new actors are entering the COVID-19 mission field.

AI and non-AI researchers working on COVID-19 tend to draw on different bodies of knowledge. AI's share of citations to computer science is five times higher than outside the field, and its share of citations to medicine is a third lower. These differences hold even after we control for the topic within COVID-19 that different publications focus on.

In general, AI papers tackling COVID-19 tend to receive fewer citations than other papers on the same topic. The population of AI researchers active in the COVID-19 mission field also tends to have a less established track record, proxied through the citations they have received in recent years. This result holds when we compare researchers working on the same topics, suggesting that it is not simply driven by variation in the citation behaviours of different communities and disciplines.

Our analysis highlights the velocity with which research communities, including AI researchers, are mobilising to tackle the COVID-19 pandemic. We find many opportunities to apply powerful AI algorithms to prevent, diagnose and treat the virus. At the same time, deep learning algorithms' reliance on big datasets, difficulties interpreting their findings, and a disconnect between AI researchers and relevant bodies of knowledge in the medical and biological sciences may limit the impact of AI in the fight against COVID-19. The persistent underrepresentation of AI research in the COVID-19 field that we evidence, and its focus on computer vision analyses that play to the strengths of current algorithms but require substantial investments in hardware and changes to how hospitals work, are consistent with the notion that AI may play a limited role in tackling this pandemic.

There is also the risk that researchers facing low barriers to entry into the field may produce low-quality contributions, making it harder to find valuable studies and discouraging interdisciplinary contributions that could take longer to develop. Our finding that AI research tends to be less cited than other research, even within the same publication topics, and that AI researchers entering the field have a weaker track record on average than others, lends some support to these concerns.

In the shorter term, creating bigger, higher-quality open datasets related to COVID-19 could make it easier to deploy state-of-the-art deep learning algorithms. Spurring interdisciplinary collaborations that bring together AI researchers and subject experts may help to prioritise those AI applications with the greatest relevance and value. It might also reduce the risk of "AI imperialism", where AI researchers ignore relevant bodies of knowledge about the complex biological and social systems where their techniques will be applied, reducing their value and creating unintended consequences. We also need technological and social solutions to the challenge of navigating a vast and fast-growing body of research of uncertain quality. Going forward, research funders should encourage the development of AI algorithms that are easier to deploy in small-data, high-stakes domains.

Novel data sources and methods, such as those we have used in our analysis, can play an important role in informing these strategies.

The data set used in this report is open for other researchers to analyse and build on.


InfoSystems To Host 2 Virtual Workshops Related To Artificial Intelligence In Business – The Chattanoogan

As artificial intelligence spreads into more industries, InfoSystems, an IBM Platinum Business Partner, is committed to preparing the local business community to use the technology in a meaningful way, officials said.

As part of this mission, they are hosting two free virtual workshops on Wednesday and Thursday. While both will focus on artificial intelligence and how the technology is evolving, the two workshops will have slightly different messaging.

"It is important to us that we continue to find ways to invest in the business community," said Keith Hales, chief operating officer at InfoSystems and event host. "Even in the face of very real health and economic challenges, businesses still must find ways to improve, innovate, and ultimately make enough profit to keep the doors open."

The first workshop will take place from 11:30 a.m.-1 p.m. on Wednesday. It will provide foundational knowledge about artificial intelligence, a technology that has the power to help companies quickly gain competitive insights. During this virtual workshop, participants will learn steps they can take to get their existing technology ready for AI projects in the future.

LinkedIn event link: https://www.linkedin.com/events/6669583402284408832/
Eventbrite event link: https://www.eventbrite.com/e/106093961896

The second workshop will take place from 11:30 a.m.-1 p.m. on Thursday. It will elaborate on artificial intelligence by teaching ways in which businesses can effectively store, manage, and secure their data. Participants will learn actionable solutions that can be deployed to protect their company's data, as well as how to prepare to use that data with AI applications and other new technologies.

LinkedIn event link: https://www.linkedin.com/events/6669594794299277312/
Eventbrite event link: https://www.eventbrite.com/e/106104379054

"There are very few instances in history in which a new technology will create the type of disruption that artificial intelligence is going to," said Josh Davis, vice president of Marketing at InfoSystems. "We feel it's important for businesses in our community to have the information they need to get prepared for this."

"While these workshops include technical topics, they are really a how-to guide intended to help businesses considering these technologies," said Mr. Hales. "Our job is to educate and provide the best technology options available. Ultimately, if we help businesses make better technology decisions, we all win."


Burden of COVID-19 on the Market & Rehabilitation Plan | Global Artificial Intelligence (AI) Market In Retail Sector 2019-2023 | The Increased…

LONDON--(BUSINESS WIRE)--Technavio has been monitoring the global artificial intelligence (AI) market in the retail sector, which is poised to grow by USD 14.05 billion during 2019-2023, progressing at a CAGR of 35% during the forecast period. The report offers an up-to-date analysis of the current market scenario, the latest trends and drivers, and the overall market environment.
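Those two figures can be sanity-checked together. This is a sketch, not from the report: the base-year market size is not given here, and the four-year compounding window is an assumption. Incremental growth I at CAGR r over n years implies a base size B = I / ((1 + r)^n - 1):

```python
# Hypothetical back-calculation: if the market adds USD 14.05 billion over
# four years while growing at a 35% CAGR, the implied base-year size B
# satisfies B * ((1 + r)**n - 1) = 14.05.
r, n, increment = 0.35, 4, 14.05
base = increment / ((1 + r) ** n - 1)
print(round(base, 2))  # implied base-year market size, USD billions (~6.05)
```

Under these assumptions the figures are mutually consistent with a market of roughly USD 6 billion at the start of the forecast period.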

Although the COVID-19 pandemic continues to transform the growth of various industries, the immediate impact of the outbreak is varied. While a few industries will register a drop in demand, numerous others will continue to remain unscathed and show promising growth opportunities. Technavio's in-depth research has all your needs covered, as our research reports include all foreseeable market scenarios, including pre- and post-COVID-19 analysis. Download a Free Sample Report

The market is fragmented, and the degree of fragmentation will accelerate during the forecast period. IBM Corp., Intel Corp., Microsoft Corp., NVIDIA Corp., and Oracle Corp. are some of the major market participants. To make the most of the opportunities, market vendors should focus more on the growth prospects in the fast-growing segments, while maintaining their positions in the slow-growing segments.

Buy 1 Technavio report and get the second for 50% off. Buy 2 Technavio reports and get the third for free.

View market snapshot before purchasing

The increased efficiency of operations has been instrumental in driving the growth of the market.

Technavio's custom research reports offer detailed insights on the impact of COVID-19 at an industry level, a regional level, and subsequent supply chain operations. This customized report will also help clients keep up with new product launches in direct & indirect COVID-19 related markets, upcoming vaccines and pipeline analysis, and significant developments in vendor operations and government regulations. https://www.technavio.com/report/global-artificial-intelligence-ai-market-in-retail-sector-industry-analysis

Artificial Intelligence (AI) Market in Retail Sector 2019-2023: Segmentation

Artificial Intelligence (AI) Market in Retail Sector is segmented as below:

To learn more about the global trends impacting the future of market research, download a free sample: https://www.technavio.com/talk-to-us?report=IRTNTR31763

Artificial Intelligence (AI) Market in Retail Sector 2019-2023: Scope

Technavio presents a detailed picture of the market by way of study, synthesis, and summation of data from multiple sources. The artificial intelligence (AI) market in retail sector report covers the following areas:

This study identifies increased applications in e-commerce as one of the prime reasons driving the artificial intelligence (AI) market's growth in the retail sector during the next few years.

Technavio suggests three forecast scenarios (optimistic, probable, and pessimistic) considering the impact of COVID-19. Technavio's in-depth research includes reports on markets both directly and indirectly impacted by COVID-19.

Register for a free trial today and gain instant access to 17,000+ market research reports.

Technavio's SUBSCRIPTION platform

Artificial Intelligence (AI) Market in Retail Sector 2019-2023: Key Highlights

Table of Contents:

PART 01: EXECUTIVE SUMMARY

PART 02: SCOPE OF THE REPORT

PART 03: MARKET LANDSCAPE

PART 04: MARKET SIZING

PART 05: FIVE FORCES ANALYSIS

PART 06: MARKET SEGMENTATION BY APPLICATION

PART 07: CUSTOMER LANDSCAPE

PART 08: GEOGRAPHIC LANDSCAPE

PART 09: DECISION FRAMEWORK

PART 10: DRIVERS AND CHALLENGES

PART 11: MARKET TRENDS

PART 12: VENDOR LANDSCAPE

PART 13: VENDOR ANALYSIS

PART 14: APPENDIX

PART 15: EXPLORE TECHNAVIO

About Us

Technavio is a leading global technology research and advisory company. Its research and analysis focuses on emerging market trends and provides actionable insights to help businesses identify market opportunities and develop effective strategies to optimize their market positions. With over 500 specialized analysts, Technavio's report library consists of more than 17,000 reports and counting, covering 800 technologies and spanning 50 countries. Its client base consists of enterprises of all sizes, including more than 100 Fortune 500 companies. This growing client base relies on Technavio's comprehensive coverage, extensive research, and actionable market insights to identify opportunities in existing and potential markets and assess their competitive positions within changing market scenarios.


Artificial Intelligence Is Poised to Take More Than Unskilled Jobs – CMSWire


Recently, Microsoft announced that it was terminating dozens of journalists and editorial workers at its Microsoft News and MSN organizations. Instead, the company said, it will rely on artificial intelligence to curate and edit news and content presented on MSN.com, inside Microsoft's Edge browser, and in the company's Microsoft News apps.

Explaining the decision, Microsoft issued a statement to The Verge that reads: "Like all companies, we evaluate our business on a regular basis. This can result in increased investment in some places and, from time to time, re-deployment in others. These decisions are not the result of the current pandemic." The decision will result in the loss of 50 jobs in the US and a further 27 in the UK. Not a huge number, you might think, but Microsoft has been moving steadily in the direction of artificial intelligence, and it would not be surprising to see other jobs in other areas disappear too.

However, the initial results have not been encouraging. Many of the affected workers are part of Microsoft's SANE (Search, Ads, News, Edge) division and are contracted as human editors to help pick stories.

The Guardian newspaper in the UK reported soon after that the newly instated robot editors of MSN.com selected a story about Little Mix singer Jade Thirlwall's experience with racism to appear on the homepage, but used a picture of Thirlwall's bandmate Leigh-Anne Pinnock to illustrate it. Thirlwall herself pointed out the mistake in an Instagram post that read, "@MSN If you're going to copy and paste articles from other accurate media outlets, you might want to make sure you're using an image of the correct mixed race member of the group." She added, "It offends me that you couldn't differentiate the two women of colour out of four members of a group DO BETTER!"

Not an auspicious start, but something that is likely to happen again with other AI users and other AI-managed content. It is also likely to impact other industries as AI slowly makes its way across the enterprise. More to the point, it is also likely to happen in industries where mistakes will have far-reaching and potentially disastrous impacts. Until now it has been argued that the main jobs that will lose out to AI are unskilled, manual labour. However, the MSN decision indicates that this is not the case. So, what other skilled jobs are likely to be impacted?

Irwin Kirsch is director of the Princeton-based Center for Research on Human Capital and Education. He says that there are two issues to be considered when looking at the future of work and the use of AI.

Although many jobs will be replaced by AI, many others will need to evolve to become complementary to new technologies. Production workers, an occupation hit hard by the impacts of technological change, provide a powerful case in point. Firms regularly must hire new workers to replace incumbents who retire, leave the labor force for personal or health reasons, or change careers.

The Bureau of Labor Statistics projects that employment in production will fall by 406,000 workers between 2016 and 2026 due to pressure from automation and trade. Yet, over this same period, 1.97 million currently employed U.S. production workers will attain or exceed retirement age. In other words, U.S. firms will need to hire 1.52 million new production workers, net of the projected decline of 406,000 production jobs, simply to compensate for these retirements. And, over those years, there is little doubt that advancements will continue to shape and morph the roles and levels of skills needed for success in the emerging jobs.

Another way of considering the issue of AI is not just to explore the jobs that will be lost, but also to understand the nature of the jobs that will remain. Can we successfully fill these emerging jobs?

Looking at degree attainment data, it might seem that we are set. But as with concerns about the impact of AI on skilled jobs, there is more to this story. The percentage of the U.S. population 25 and over who completed high school and college has risen dramatically over the past several decades. Yet data from assessments of adult literacy and numeracy skills indicate that skill deficiencies are large, with half of Americans aged 16 to 34 lacking the levels of literacy many experts deem critical for success in the labor market. "While many acknowledge that AI is increasingly important and its impacts increasingly stark, we must also reckon with the fact that too few of America's young adults are well positioned for the emerging jobs of the 21st-century economy," he said. "To address this, we must pave the way for them to develop the skills they need today and into the future."

So, what skilled jobs are likely to disappear, or be subject to substantial change? There are many, but when we asked a few people across different industries, six were identified:

Roy Cohen is a career coach and author of The Wall Street Professional's Survival Guide. Over the next couple of years, he said, technology will likely eliminate thousands of Wall Street jobs that are currently performed by skilled humans. "FinTech, or the financial technology industry, is speeding to create new automated trading systems that outperform and outsmart traditional traders. These folks are likely to become obsolete unless they retool with strong programming skills. One example, IBM's Watson, is a supercomputer that consistently beats the market."

Pete Sosnowski is VP People and co-founder at Poland-based Zety, which publishes high-quality guides and articles for job seekers. He sees two skilled jobs changing drastically or disappearing altogether. They include:

Lawyers very often must dig through and analyze thousands of pages of case files. That is why AI and big data technology is being developed to replace lower-level law firm employees in performing legal research. There are also algorithms that can already better predict court verdicts, something that we often hire the best lawyers and pay them top dollar for. On the bright side, artificial intelligence in law firms can mean a reduction in the costs of handling cases.

With a variety of AI tools available out there, it seems the need for architects may decrease. Using AI technology and virtual-reality tools, we are now able to design our own houses, apartments, and offices. Every major furniture store will have a program available for its customers so that they can design their own interiors. The process is becoming more and more automated, and the human factor is needed less and less.

For Grant Aldrich, CEO of OnlineDegree, there are also two professions that will be hard hit by AI. Bookkeeping clerks will likely become redundant because most bookkeeping is automated. Tools like FreshBooks, QuickBooks, and Microsoft Office already offer bookkeeping procedures at a more affordable price than a person's salary and benefits. Bookkeeping jobs are already expected to decline by 8% by 2024.

Proofreaders are likely to be replaced by AI, despite the skill needed to be a great editor. An editor needs to have a good command of the English language, but many websites and companies are already using grammar-check software like Hemingway App and Grammarly. There are plenty of technologies that make it easy to self-check your writing.

Finally, Ian Kelly is VP of operations at Denver-based NuLeaf Naturals. He says it is likely that general practitioners will also be replaced soon. Although this seems impossible, he says, you need to consider what a doctor does. A doctor sees a patient, the patient tells them their symptoms, and the GP gives a diagnosis. The issue lies in human error, where misdiagnosis isn't just a frivolous mistake, it's a fatal mistake. It's estimated that 40,000 to 80,000 people die annually from complications from misdiagnosis, while women and minorities are 20 to 30 percent more likely to be misdiagnosed.


Artificial Intelligence: how man and machine are progressively working as one – Euronews

Futuris looks at how the relationship between man and machine in modern manufacturing is evolving through the adoption of Artificial Intelligence and automation.

No one knows the job better than the person who is doing it - that is the idea behind a package of novel ideas designed to make the most of factory workers' knowledge and experience.

In Seinäjoki, Finland, metal company Prima Power is trialing two of the EU's Factory2Fit project solutions.

This €4m study explores new ways for people and machines to work together.

Dr Eija Kaasinen from technical research centre VTT says the aim is to put people at the centre and to enable them to participate in designing their own work environment.

Globally, automation and robotics are transforming manufacturing as part of the fourth industrial revolution. But this doesn't mean the human element is removed from work.

"Of course there are manual elements - but the work is changing towards knowledge work," explains Kaasinen. "It's more like working with the virtual counterparts of the physical things in the physical world."

A Pre-training Solution, for example, uses 3D models and cloud-based tutorials, while a so-called Knowledge Sharing solution makes the most of all the experience a worker gathers while running complex machinery, especially when something goes wrong, as Prima Power's Mariia Kreposna explains.

"So here the operator can open the additional dialogue box to get extra information about the situation. This is done by sharing the additional text, description, pictures or videos so the idea is in the future whenever the alarm with the same code happens, the operator will be able to learn not only the standard remedies but also other possible reasons and how to prevent this alarm happening in the future."

At the Elekmerk factory in Keuruu, Finland, workers have tested the Worker Feedback Dashboard, a biometric monitoring tool similar to a Fitbit, together with an app.

It charts someone's work achievements and their well being - such as sleep and steps taken per day - and shows how the two can be linked.

"When we interviewed factory workers during the project we heard that often they had negative feedback when something is not going well," says VTT's Pivi Heikkil.. "So we wanted to develop an application that would also give you positive feedback of the fluency of your work and your accomplishments, so feedback of the things that are going well."

Ville Vuarola was one of five workers who wore the wristband for the three-month pilot scheme. He was happy to take part and says he was surprised at how a good night's sleep had a positive impact on his job.

"I was surprised to see how sleeping well influenced my work performance. Together with leisure activities, sleep was really important for my general performance at work," he says.

Of all the Factory2Fit solutions, this was the one that proved the most controversial with fears expressed over the possible misuse of workers' data.

But Päivi Heikkilä says these concerns are unfounded.

"When we are developing these kind of solutions we always consider the ethics," she says, "and I want to stress and highlight that this should always be voluntary."

The data gathered is kept on a separate server, not in the factory system.

Researchers expect at least some of their Factory2Fit solutions to be commercially available by the end of next year.

Go here to read the rest:
Artificial Intelligence: how man and machine are progressively working as one - Euronews

Artificial intelligence enhances blurry faces into ‘super-resolution images’ – The Independent

Researchers have figured out a way to transform a few dozen pixels into a high resolution image of a face using artificial intelligence.

A team from Duke University in the US created an algorithm capable of "imagining" realistic-looking faces from blurry, unrecognisable pictures of people, with eight-times more effectiveness than previous methods.

"Never have super-resolution images been created at this resolution before with this much detail," said Duke computer scientist Cynthia Rudin, who led the research.

The images generated by the AI do not resemble real people; instead, they are faces that look plausibly real. The system therefore cannot be used to identify people from low-resolution images captured by security cameras.

The PULSE (Photo Upsampling via Latent Space Exploration) system developed by Dr Rudin and her team creates images with 64 times the resolution of the original blurred picture.

The PULSE algorithm achieves such high levels of resolution by working backwards: it searches for high-resolution images that look like the low-resolution input once they are scaled down.

The images generated by enhancing the pixels do not represent real people (Duke University)

Through this process, facial features like eyelashes, teeth and wrinkles that were impossible to see in the low resolution image become recognisable and detailed.

"Instead of starting with the low resolution image and slowly adding detail, PULSE traverses the high resolution natural image manifold, searching for images that downscale to the original low resolution image," states a paper detailing the research.

The AI algorithm is able to enhance a few dozen pixels into a high-resolution picture of a face (Duke University)

"Our method outperforms state-of-the-art methods in perceptual quality at higher resolutions and scale factors than previously possible."

The system could theoretically be used on low resolution images of almost anything, ranging from medicine and microscopy, to astronomy and satellite imagery.

This means noisy, poor-quality images of distant planets and solar systems could be imagined in high resolution.

The research will be presented at the 2020 Conference on Computer Vision and Pattern Recognition (CVPR) this week.

See original here:
Artificial intelligence enhances blurry faces into 'super-resolution images' - The Independent

How to improve cybersecurity for artificial intelligence

In January 2017, a group of artificial intelligence researchers gathered at the Asilomar Conference Grounds in California and developed 23 principles for artificial intelligence, later dubbed the Asilomar AI Principles. The sixth principle states that "AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible." Thousands of people in both academia and the private sector have since signed on to these principles, but, more than three years after the Asilomar conference, many questions remain about what it means to make AI systems safe and secure. Verifying these properties is complicated by the field's rapid development and by highly complex deployments in health care, financial trading, transportation, and translation, among others.

Much of the discussion to date has centered on how beneficial machine learning algorithms may be for identifying and defending against computer-based vulnerabilities and threats by automating the detection of and response to attempted attacks.1 Conversely, concerns have been raised that using AI for offensive purposes may make cyberattacks increasingly difficult to block or defend against by enabling rapid adaptation of malware to adjust to restrictions imposed by countermeasures and security controls.2 These are also the contexts in which many policymakers most often think about the security impacts of AI. For instance, a 2020 report on Artificial Intelligence and UK National Security commissioned by the U.K.'s Government Communications Headquarters highlighted the need for the United Kingdom to incorporate AI into its cyber defenses to proactively detect and mitigate threats that require a speed of response far greater than human decision-making allows.3

A related but distinct set of issues deals with the question of how AI systems can themselves be secured, not just about how they can be used to augment the security of our data and computer networks. The push to implement AI security solutions to respond to rapidly evolving threats makes the need to secure AI itself even more pressing; if we rely on machine learning algorithms to detect and respond to cyberattacks, it is all the more important that those algorithms be protected from interference, compromise, or misuse. Increasing dependence on AI for critical functions and services will not only create greater incentives for attackers to target those algorithms, but also the potential for each successful attack to have more severe consequences.

This policy brief explores the key issues in attempting to improve cybersecurity and safety for artificial intelligence as well as roles for policymakers in helping address these challenges. Congress has already indicated its interest in cybersecurity legislation targeting certain types of technology, including the Internet of Things and voting systems. As AI becomes a more important and widely used technology across many sectors, policymakers will find it increasingly necessary to consider the intersection of cybersecurity with AI. In this paper, I describe some of the issues that arise in this area, including the compromise of AI decision-making systems for malicious purposes, the potential for adversaries to access confidential AI training data or models, and policy proposals aimed at addressing these concerns.

One of the major security risks to AI systems is the potential for adversaries to compromise the integrity of their decision-making processes so that they do not make choices in the manner that their designers would expect or desire. One way to achieve this would be for adversaries to directly take control of an AI system so that they can decide what outputs the system generates and what decisions it makes. Alternatively, an attacker might try to influence those decisions more subtly and indirectly by delivering malicious inputs or training data to an AI model.4

For instance, an adversary who wants to compromise an autonomous vehicle so that it will be more likely to get into an accident might exploit vulnerabilities in the car's software to make driving decisions themselves. However, remotely accessing and exploiting the software operating a vehicle could prove difficult, so instead an adversary might try to make the car ignore stop signs by defacing signs in the area with graffiti, so that the computer vision algorithm no longer recognizes them as stop signs. This process, by which adversaries cause AI systems to make mistakes by manipulating inputs, is called adversarial machine learning. Researchers have found that small changes to digital images, undetectable to the human eye, can be sufficient to cause AI algorithms to completely misclassify those images.5
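The core mechanism can be shown on a deliberately tiny model. The sketch below is not an attack on any real vision system; it applies an FGSM-style perturbation (stepping against the gradient of the model's score, which for a linear classifier is simply its weight vector) to a made-up input, flipping the prediction while changing each feature by at most 0.2:

```python
import numpy as np

# Toy linear classifier with invented weights.
w = np.array([0.5, -0.3, 0.8, 0.2])
b = 0.1

def predict(x):
    """Classify as 1 if the linear score is positive, else 0."""
    return 1 if x @ w + b > 0 else 0

x = np.array([0.2, 0.4, 0.1, 0.3])  # original input; scores positive, class 1

# FGSM-style perturbation: a small step against the gradient of the score
# with respect to the input (for a linear model, the gradient is just w).
eps = 0.2
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # the small perturbation flips the class
```

Real attacks on deep networks work the same way in principle, but compute the input gradient by backpropagation through the full model.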

An alternative to manipulating inputs is data poisoning, which occurs when adversaries train an AI model on inaccurate, mislabeled data. An example of this is pictures of stop signs that are labeled as something else, so that the algorithm will not recognize stop signs when it encounters them on the road. This poisoning can then lead an AI algorithm to make mistakes and misclassifications later on, even if an adversary does not have direct access to manipulate the inputs it receives.6 Even selectively training an AI model on a subset of correctly labeled data may be sufficient to compromise it so that it makes inaccurate or unexpected decisions.
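A toy demonstration of the idea: the classifier below "trains" by placing a threshold halfway between the two class means on a single invented feature. Slipping a handful of extreme, mislabeled points into the training set drags the learned threshold far from where it belongs and degrades accuracy on the clean data, without the attacker ever touching the deployed inputs:

```python
import numpy as np

rng = np.random.default_rng(1)

# Clean training data: class 0 clustered near 0.0, class 1 near 4.0.
x0 = rng.normal(0.0, 0.5, 50)
x1 = rng.normal(4.0, 0.5, 50)

def fit_threshold(xs0, xs1):
    """'Train' a one-feature classifier: threshold halfway between class means."""
    return (xs0.mean() + xs1.mean()) / 2

def accuracy(threshold):
    """Evaluate on the clean data with its true labels."""
    wrong0 = (x0 >= threshold).sum()  # class-0 points wrongly called class 1
    wrong1 = (x1 < threshold).sum()   # class-1 points wrongly called class 0
    return 1 - (wrong0 + wrong1) / 100

clean_acc = accuracy(fit_threshold(x0, x1))

# Poisoning: the attacker inserts 20 extreme points labeled "class 1",
# dragging the class-1 mean (and hence the learned threshold) downward.
poison = np.full(20, -10.0)
poisoned_acc = accuracy(fit_threshold(x0, np.concatenate([x1, poison])))

print(f"clean: {clean_acc:.2f}, poisoned: {poisoned_acc:.2f}")
```
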

These risks speak to the need for careful control over both the training datasets that are used to build AI models and the inputs that those models are then provided with to ensure the security of machine-learning-enabled decision-making processes. However, neither of those goals is straightforward. Inputs to machine learning systems, in particular, are often beyond the control of AI developers: whether there will be graffiti on the street signs that an autonomous vehicle's computer vision system encounters, for instance. Developers typically have much greater control over the training datasets for their models. But in many cases those datasets may contain very personal or sensitive information, raising yet another set of concerns about how that information can best be protected and anonymized. These concerns often create trade-offs for developers about how that training is done and how much direct access to the training data they themselves have.7

Research on adversarial machine learning has shown that making AI models more robust to data poisoning and adversarial inputs often involves building models that reveal more information about the individual data points used to train those models.8 When sensitive data are used to train these models, this creates a new set of security risks, namely that adversaries will be able to access the training data or infer training data points from the model itself. Trying to secure AI models from this type of inference attack can leave them more susceptible to the adversarial machine learning tactics described above and vice versa. This means that part of maintaining security for artificial intelligence is navigating the trade-offs between these two different, but related, sets of risks.
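One simple form of the inference risk described above is a loss-based membership test. The sketch below is a deliberately extreme toy, not an attack on any real system: a "model" that memorizes its training targets exactly assigns near-zero error to training points and ordinary noise-level error to fresh points, so an adversary who can query per-example loss can guess membership almost perfectly:

```python
import numpy as np

rng = np.random.default_rng(2)

# "Training set": noisy observations of y = x^2.
train_x = rng.normal(size=20)
train_y = train_x ** 2 + rng.normal(scale=0.3, size=20)

# Extremely overfit "model": memorises its training targets exactly and
# falls back to the underlying function for unseen points.
memory = dict(zip(train_x.tolist(), train_y.tolist()))

def model(x):
    return memory.get(x, x ** 2)

def loss(x, y):
    return (model(x) - y) ** 2

member_losses = [loss(x, y) for x, y in zip(train_x.tolist(), train_y.tolist())]

fresh_x = rng.normal(size=20)
fresh_y = fresh_x ** 2 + rng.normal(scale=0.3, size=20)
nonmember_losses = [loss(x, y) for x, y in zip(fresh_x.tolist(), fresh_y.tolist())]

# Membership guess: "was in the training set" if the loss is suspiciously low.
threshold = 1e-9
guessed_members = sum(l < threshold for l in member_losses)
guessed_nonmembers = sum(l >= threshold for l in nonmember_losses)
print(guessed_members, guessed_nonmembers)
```

Defenses that blur the model's memory of individual points (such as differential privacy) weaken this signal, but, as the text notes, they can make the model easier to poison or perturb, which is the trade-off at issue.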

In the past four years there has been a rapid acceleration of government interest and policy proposals regarding artificial intelligence and security, with 27 governments publishing official AI plans or initiatives by 2019.9 However, many of these strategies focus more on countries' plans to fund AI research, train more workers in the field, and encourage economic growth and innovation through the development of AI technologies than on maintaining security for AI. Countries that have proposed or implemented security-focused policies for AI have emphasized the importance of transparency, testing, and accountability for algorithms and their developers, although few have gotten to the point of actually operationalizing these policies or figuring out how they would work in practice.

In the United States, the National Security Commission on Artificial Intelligence (NSCAI) has highlighted the importance of building trustworthy AI systems that can be audited through a rigorous, standardized system of documentation.10 To that end, the commission has recommended the development of an extensive design documentation process and standards for AI models, including what data is used by the model, what the model's parameters and weights are, how models are trained and tested, and what results they produce. These transparency recommendations speak to some of the security risks around AI technology, but the commission has not yet extended them to explain how this documentation would be used for accountability or auditing purposes. At the local government level, the New York City Council established an Automated Decision Systems Task Force in 2017 that stressed the importance of security for AI systems; however, the task force provided few concrete recommendations beyond noting that it grappled with finding the right balance between emphasizing opportunities to share information publicly about City tools, systems, and processes, while ensuring that any relevant legal, security, and privacy risks were accounted for.11

A 2018 report by a French parliamentary mission, titled For a Meaningful Artificial Intelligence: Towards a French and European Strategy, offered similarly vague suggestions. It highlighted several potential security threats raised by AI, including manipulation of input data or training data, but concluded only that there was a need for greater collective awareness and more consideration of safety and security risks starting in the design phase of AI systems. It further called on the government to seek the support of specialist actors, who are able to propose solutions thanks to their experience and expertise, and advised that the French Agence Nationale de la Sécurité des Systèmes d'Information (ANSSI) should be responsible for monitoring and assessing the security and safety of AI systems. In a similar vein, China's 2017 New Generation AI Development Plan proposed developing security and safety certifications for AI technologies as well as accountability mechanisms and disciplinary measures for their creators, but the plan offered few details as to how these systems might work.

For many governments, the next stage of considering AI security will require figuring out how to implement ideas of transparency, auditing, and accountability to effectively address the risks of insecure AI decision processes and model data leakage.

Transparency will require the development of a more comprehensive documentation process for AI systems, along the lines of the proposals put forth by the NSCAI. Rigorous documentation of how models are developed and tested and what results they produce will enable experts to identify vulnerabilities in the technology, potential manipulations of input data or training data, and unexpected outputs.
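What such documentation might look like in machine-readable form is sketched below. The field names and values are purely illustrative (not a published standard or an NSCAI artifact); the point is that a structured, serializable record of data, parameters, training, and test results is what would make auditing tractable:

```python
import json

# Hypothetical, illustrative model-documentation record. Every field name
# and value here is invented for the sake of the example.
model_record = {
    "model_name": "loan-risk-classifier",
    "version": "1.3.0",
    "training_data": {
        "source": "internal-applications-2019",
        "num_examples": 250_000,
        "known_limitations": ["under-represents applicants under 25"],
    },
    "architecture": {"type": "gradient-boosted trees", "num_parameters": 48_000},
    "training": {"procedure": "5-fold cross-validation", "random_seed": 42},
    "evaluation": {"test_accuracy": 0.91, "false_positive_rate": 0.06},
    "security_review": {"adversarial_testing": True, "last_audit": "2020-05-01"},
}

# Serialise for auditors; a round-trip check guards against fields that
# would not survive export.
serialised = json.dumps(model_record, indent=2)
restored = json.loads(serialised)
print(restored["model_name"], restored["evaluation"]["test_accuracy"])
```
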

Thorough documentation of AI systems will also enable governments to develop effective testing and auditing techniques as well as meaningful certification programs that provide clear guidance to AI developers and users. These audits would, ideally, leverage research on adversarial machine learning and model data leakage to test AI models for vulnerabilities and assess their overall robustness and resilience to different forms of attacks through an AI-focused form of red teaming. Given the dominance of the private sector in developing AI, it is likely that many of these auditing and certification activities will be left to private businesses to carry out. But policymakers could still play a central role in encouraging the development of this market by funding research and standards development in this area and by requiring certifications for their own procurement and use of AI systems.

Finally, policymakers will play a vital role in determining accountability mechanisms and liability regimes to govern AI when security incidents occur. This will involve establishing baseline requirements for what AI developers must do to show they have carried out their due diligence with regard to security and safety, such as obtaining recommended certifications or submitting to rigorous auditing and testing standards. Developers who do not meet these standards and build AI systems that are compromised through data poisoning or adversarial inputs, or that leak sensitive training data, would be liable for the damage caused by their technologies. This will serve as both an incentive for companies to comply with policies related to AI auditing and certification, and also as a means of clarifying who is responsible when AI systems cause serious harm due to a lack of appropriate security measures and what the appropriate penalties are in those circumstances.

The proliferation of AI systems in critical sectorsincluding transportation, health, law enforcement, and military technologymakes clear just how important it is for policymakers to take seriously the security of these systems. This will require governments to look beyond just the economic promise and national security potential of automated decision-making systems to understand how those systems themselves can best be secured through a combination of transparency guidelines, certification and auditing standards, and accountability measures.

The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Microsoft provides support to The Brookings Institution's Artificial Intelligence and Emerging Technology (AIET) Initiative. The findings, interpretations, and conclusions in this report are not influenced by any donation. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.

Read the original post:
How to improve cybersecurity for artificial intelligence

Propelling Data Analytics with the Power of Artificial Intelligence – Analytics Insight

Can your data talk intelligently? AI plugged into data management systems aims to do just that!

Intelligent analytics offers a systematic approach to uncovering the intelligence hidden in historical and real-time data. This suite of analytical techniques and algorithms can parse enormous volumes of data generated in real time to surface insights that are often missed or go undetected by traditional statistical methods.

The methodology of mixing intelligence with analytics reaches further still: it grounds analysis in algorithmic methods, removing any bias introduced by an individual analyst. What's more, the sheer volume of data adds to the veracity and accuracy of the results rather than creating an unnecessary air of confusion for the analyst.

An artificial intelligence (AI) and analytics platform encapsulates the means to derive untapped value from the wealth of information that data constantly generates. While advanced analytics helps enterprises uncover insights into current business processes and even draw predictions from historical information silos, AI acts as a force multiplier on this data crunching by bringing machine learning capabilities into these data models.

The best artificial intelligence algorithms and analytics software integrate machine learning solutions into big data platforms. In this way they transform data into intelligent pieces of information, self-service data visualization dashboards, and automation-ready capabilities that maximize revenue and operational efficiency.

AI can transform raw data into actionable intelligence

1. Unearthing new insights from data analytics

Artificial intelligence excels at finding hidden patterns and insights in large datasets that are often invisible to human eyes, and it does so at unprecedented speed and scale. AI-powered tools can answer questions about your enterprise operations, for instance, which operations cycle had the quickest turnaround in a specific quarter.

2. Deploy analytics to predict data outcomes

AI-powered algorithms analyze data from multiple sources, offering predictions about an enterprise's next strategic move. They can also deep-dive into data to share insights about your customers, revealing their preferences and which marketing channels would best reach them.
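In its simplest form, "predicting data outcomes" means fitting a model to past observations and extrapolating. A minimal sketch using an ordinary least-squares trend line over invented monthly sales figures:

```python
# Fit y = a*x + b by ordinary least squares and forecast one month ahead.
# The sales figures below are invented for illustration.

def fit_line(xs, ys):
    """Return slope a and intercept b of the least-squares line y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return a, my - a * mx

months = [1, 2, 3, 4, 5, 6]
sales = [100, 108, 113, 122, 130, 137]  # hypothetical monthly sales

a, b = fit_line(months, sales)
forecast = a * 7 + b
print(f"forecast for month 7: {forecast:.1f}")
```

Production systems use far richer models over many variables, but the structure is the same: learn from history, then project forward.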

3. Unifying data across Platforms

Artificial Intelligence unifies data captured from different sources and platforms, accelerating data-driven innovation across data science, business analytics and data engineering categories.

Data analytics software

Think of business intelligence gathered from data analytics software that identifies patterns and formulates data relationships. This paves the way for actionable alerts, smart data discovery and interactive dashboards, delivered through a comprehensive set of data analytics tools on an enterprise-grade analytics platform.

Machine learning and predictive analytics platform

A capable platform lets you analyze structured and unstructured big data stored in data management platforms and external sources. AI and open-source data analytics platforms combine open-source machine learning with self-service and predictive analytics to achieve data intelligence.

Natural language processing and text mining

Unstructured data holds the stories, sentiments and emotions of your customers, employees and stakeholders. NLP and text mining extract terms and concepts from brochures, legal documents, emails, social media messages, videos, audio files and web pages to unlock the value hidden in unstructured text and yield valuable business insights.
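One classic term-extraction technique is TF-IDF: a term scores highly for a document when it is frequent there but rare across the collection. A self-contained toy over invented customer-feedback snippets (real text-mining pipelines add tokenization, stemming and much larger stopword lists):

```python
import math

# Invented customer-feedback snippets.
docs = [
    "delivery was late and the support team was unhelpful",
    "great support team resolved my delivery issue quickly",
    "the invoice total was wrong on my latest order",
]
STOPWORDS = {"and", "the", "was", "my", "on", "is"}

def tfidf(term, doc, corpus):
    """Term frequency in doc, weighted by rarity across the corpus."""
    words = doc.split()
    tf = words.count(term) / len(words)
    df = sum(term in d.split() for d in corpus)
    return tf * math.log(len(corpus) / df)

# Rank the first document's terms by distinctiveness (alphabetical sort
# first so that ties break deterministically).
terms = sorted(set(docs[0].split()) - STOPWORDS)
ranked = sorted(terms, key=lambda t: tfidf(t, docs[0], docs), reverse=True)
print(ranked[:2])
```

Terms like "late" and "unhelpful" surface because they appear only in the complaint, while shared words like "delivery" and "support" score lower.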

Interactive visualizations

Data visualization is the graphic representation of data. Interactive visualizations and rich dashboards are among the major takeaways from intelligent analytics, helping enterprises get to know their data more intimately.

AI solution for sentiment analysis

Intelligent data analytics helps an enterprise understand what people on social networks and the wider web think of its products and services. Intelligent analytics is thus a boon for targeted customer servicing, customer engagement and retention.
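The simplest form of sentiment analysis is lexicon-based scoring: count positive and negative words per message. Real AI solutions use trained language models, but this toy (with an invented lexicon and invented messages) shows the basic shape of the task:

```python
# Invented mini-lexicon; real systems use large curated or learned lexicons.
POSITIVE = {"great", "love", "fast", "excellent", "happy"}
NEGATIVE = {"slow", "broken", "terrible", "refund", "disappointed"}

def sentiment(message):
    """Label a message by counting positive minus negative lexicon hits."""
    words = message.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

messages = [
    "love the new app it is excellent and fast",
    "terrible update everything is broken and slow",
    "received my order today",
]
results = [sentiment(m) for m in messages]
print(results)
```

Aggregating such labels across thousands of posts gives the brand-perception signal the article describes.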

In essence, AI-blended data analytics aims to make the enterprise more efficient and productive, increasing brand loyalty, driving revenues and eliminating the need for manual data processing. With customised business insights that are accessible and tied to the enterprise's most critical objectives, intelligent analytics is here to stay.

Kamalika Some is an NCFM Level 1 certified professional with previous professional stints at Axis Bank and ICICI Bank. An MBA (Finance) and PGP Analytics by education, Kamalika is passionate about writing on the analytics driving technological change.

The rest is here:
Propelling Data Analytics with the Power of Artificial Intelligence - Analytics Insight