Category Archives: Ai

The Media’s Coverage of AI is Bogus – Scientific American

Posted: November 25, 2019 at 2:46 pm

Headlines about machine learning promise godlike predictive power. Here are four examples:

With articles like these, the press will have you believe that machine learning can reliably predict whether you're gay, whether you'll develop psychosis, whether you'll have a heart attack and whether you're a criminal, as well as make other ambitious predictions such as when you'll die and whether your unpublished book will be a bestseller.

It's all a lie. Machine learning can't confidently tell such things about each individual. In most cases, these things are simply too difficult to predict with certainty.

Here's how the lie works. Researchers report high "accuracy," but then later reveal, buried within the details of a technical paper, that they were actually misusing the word "accuracy" to mean another measure of performance related to accuracy but not nearly as impressive.

But the press runs with it. Time and again, this scheme succeeds in hoodwinking the media and generating flagrant publicity stunts that mislead.

Now, don't get me wrong; machine learning does deserve high praise. The ability to predict better than random guessing, even if not with high confidence for most cases, serves to improve all kinds of business and health care processes. That's pay dirt. And, in certain limited areas, machine learning can deliver strikingly high performance, such as for recognizing objects like traffic lights within photographs or recognizing the presence of certain diseases from medical images.

But, in other cases, researchers are falsely advertising high performance. Take Stanford University's infamous "gaydar" study. In its opening summary, the 2018 report claims its predictive model achieves 91 percent accuracy distinguishing gay and straight males from facial images. This inspired journalists to broadcast gross exaggerations. The Newsweek article highlighted above kicked off with "Artificial intelligence can now tell whether you are gay or straight simply by analyzing a picture of your face."

This deceptive media coverage is to be expected. The researchers' opening claim has tacitly conveyed, to lay readers, nontechnical journalists and even casual technical readers, that the system can tell who's gay and who isn't and usually be correct about it.

That assertion is false. The model can't confidently "tell" for any given photograph. Rather, what Stanford's model can actually do 91 percent of the time is much less remarkable: it can identify which of a pair of males is gay when it's already been established that one is and one is not.

This "pairing test" tells a seductive story, but it's a deceptive one. It translates to low performance outside the research lab, where there's no contrived scenario presenting such pairings. Employing the model in the real world would require a tough trade-off. You could tune the model to correctly identify, say, two thirds of all gay individuals, but that would come at a price: When it predicted someone to be gay, it would be wrong more than half of the timea high false positive rate. And if you configure its settings so that it correctly identifies even more than two thirds, the model will exhibit an even higher false positive rate.

The reason for this is that one of the two categories is infrequent, in this case gay individuals, who amount to about 7 percent of males (according to the Stanford report). When one category is in the minority, it is intrinsically more challenging to predict reliably.
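
To see why, it helps to run the numbers. The sketch below is a back-of-the-envelope calculation in Python: the 7 percent base rate comes from the Stanford report and the two-thirds detection rate matches the trade-off described above, but the 10 percent false positive rate is an assumed, illustrative figure, not one from the study.

```python
# Back-of-the-envelope base-rate arithmetic. The 7% base rate is from the
# Stanford report; the false positive rate is an assumed, illustrative figure.
base_rate = 0.07       # fraction of males in the minority category
sensitivity = 2 / 3    # fraction of true positives the model catches
false_pos_rate = 0.10  # assumed fraction of negatives wrongly flagged

population = 10_000
positives = population * base_rate        # 700 actually in the category
negatives = population * (1 - base_rate)  # 9,300 not in it

true_flags = positives * sensitivity      # ~467 correct flags
false_flags = negatives * false_pos_rate  # 930 false alarms

precision = true_flags / (true_flags + false_flags)
print(f"A flagged individual is correct only {precision:.0%} of the time")
```

Even with a false positive rate as low as 10 percent, false alarms outnumber correct flags roughly two to one, simply because negatives outnumber positives about 13 to 1.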

Now, the researchers did report on a viable measure of performance, called AUC, albeit mislabeled in their report as "accuracy." AUC (area under the receiver operating characteristic curve) indicates the extent of performance trade-offs available. The higher the AUC, the better the trade-off options offered by the predictive model.

In the field of machine learning, accuracy means something simpler: how often the predictive model is correct, the percentage of cases it gets right. When researchers use the word to mean anything else, they're at best adopting willful ignorance and at worst consciously laying a trap to ensnare the media.

But researchers face two publicity challenges: how can you make something as technical as AUC sexy, and at the same time sell your predictive model's performance? No problem. As it turns out, the AUC is mathematically equal to the result you get running the pairing test. And so a 91 percent AUC can be explained with a story about distinguishing between pairs that sounds to many journalists like "high accuracy," especially when the researchers commit the cardinal sin of just baldly, and falsely, calling it "accuracy." Voila! Both the journalists and their readers believe the model can "tell" whether you're gay.
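
That equivalence is easy to check numerically. Here is a minimal sketch, using made-up scores, that computes AUC with scikit-learn and then reruns it as the pairing test; the two numbers agree (ties, if any, count as half):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Made-up model scores: positives tend to score higher, with plenty of overlap.
pos = rng.normal(1.0, 1.0, 300)   # scores assigned to positive cases
neg = rng.normal(0.0, 1.0, 3000)  # scores assigned to negative cases

labels = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
scores = np.concatenate([pos, neg])
auc = roc_auc_score(labels, scores)

# The "pairing test": pick one positive and one negative; how often does
# the model rank the positive higher? (Ties count as half a success.)
diffs = pos[:, None] - neg[None, :]
pairing = (diffs > 0).mean() + 0.5 * (diffs == 0).mean()

print(f"AUC:          {auc:.4f}")
print(f"Pairing test: {pairing:.4f}")  # same number as the AUC
```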

This accuracy fallacy scheme is applied far and wide, with overblown claims about machine learning accurately predicting, among other things, psychosis, criminality, death, suicide, bestselling books, fraudulent dating profiles, banana crop diseases and various medical conditions. An addendum to this article covers 20 more examples.

In some of these cases, researchers perpetrate a variation on the accuracy fallacy scheme: they report the accuracy you would get if half the cases were positive, that is, if the common and rare categories occurred equally often. Mathematically, this usually inflates the reported "accuracy" a bit less than AUC does, but it's a similar maneuver that overstates performance in much the same way.

In popular culture, "gaydar" refers to an unattainable form of human clairvoyance. We shouldn't expect machine learning to attain supernatural abilities either. Many human behaviors defy reliable prediction. It's like predicting the weather many weeks in advance. There's no achieving high certainty. There's no magic crystal ball. Readers at large must hone a certain vigilance: be wary of claims of "high accuracy" in machine learning. If it sounds too good to be true, it probably is.

Original post:

The Media's Coverage of AI is Bogus - Scientific American

Posted in Ai | Comments Off on The Media’s Coverage of AI is Bogus – Scientific American

4 Ways to Address Gender Bias in AI – Harvard Business Review

Posted: at 2:46 pm

Any examination of bias in AI needs to recognize the fact that these biases mainly stem from humans' inherent biases. The models and systems we create and train are a reflection of ourselves.

So it's no surprise to find that AI is learning gender bias from humans. For instance, natural language processing (NLP), a critical ingredient of common AI systems like Amazon's Alexa and Apple's Siri, among others, has been found to show gender biases, and this is not a standalone incident. There have been several high-profile cases of gender bias, including computer vision systems for gender recognition that reported higher error rates for recognizing women, specifically those with darker skin tones. To produce technology that is more fair, there must be a concerted effort from researchers and machine learning teams across the industry to correct this imbalance. Fortunately, we are starting to see new work that looks at exactly how that can be accomplished.

Building fair and equitable machine learning systems.

Of particular note is the bias research being carried out with respect to word embeddings, the technique of converting words to numerical representations, which are then used as inputs in natural language processing models. Word embeddings represent words as a sequence, or vector, of numbers. If two words have similar meanings, their associated embeddings will be close to each other in a mathematical sense. The embeddings encode this information by assessing the context in which a word occurs. For example, AI has the ability to objectively fill in the word "queen" in the sentence "Man is to king as woman is to X." The underlying issue arises in cases where AI fills in sentences like "Father is to doctor as mother is to nurse." The inherent gender bias in the remark reflects an outdated perception of women in our society that is not based in fact or equality.
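
For readers who want to see the mechanics, here is a minimal sketch of that analogy arithmetic. The three-dimensional vectors are toy values invented for illustration; real embeddings such as word2vec or GloVe are learned from text and have hundreds of dimensions, but the computation is the same:

```python
import numpy as np

# Toy embeddings, invented for illustration; real vectors are learned from text.
vec = {
    "man":    np.array([0.9, 0.1, 0.1]),
    "woman":  np.array([0.1, 0.9, 0.1]),
    "king":   np.array([0.9, 0.1, 0.9]),
    "queen":  np.array([0.1, 0.9, 0.9]),
    "doctor": np.array([0.5, 0.5, 0.6]),
}

def cosine(a, b):
    # Similarity of direction: 1.0 means the vectors point the same way.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# "Man is to king as woman is to X"  =>  X ~ king - man + woman
target = vec["king"] - vec["man"] + vec["woman"]
candidates = [w for w in vec if w not in ("king", "man", "woman")]
print(max(candidates, key=lambda w: cosine(vec[w], target)))  # queen
```

The bias the authors describe enters through the training text itself: if a corpus pairs "mother" with "nurse" more often than with "doctor," the learned geometry encodes that association, and the analogy machinery faithfully reproduces it.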

Few studies have assessed the effects of gender bias in speech with respect to emotion, and emotion AI is starting to play a more prominent role in the future of work, marketing, and almost every industry you can think of. In humans, bias occurs when a person misinterprets the emotions of one demographic category more often than another, for instance, mistakenly thinking that one gender category is angry more often than another. This same bias is now being observed in machines and in how they misclassify information related to emotions. To understand why this is, and how we can fix it, it's important to first look at the causes of AI bias.

What Causes AI Bias?

In the context of machine learning, bias can mean that there's a greater level of error for certain demographic categories. Because there is no one root cause of this type of bias, there are numerous variables that researchers must take into account when developing and training machine-learning models, with factors that include:

Four Best Practices for Machine-Learning Teams to Avoid Gender Bias

Like many things in life, the causes and solutions of AI bias are not black and white. Even fairness itself must be quantified to help mitigate the effects of unwanted bias. For executives who are interested in tapping into the power of AI but are concerned about bias, it's important to ensure that the following happens on your machine-learning teams:

Although examining these causes and solutions is an important first step, there are still many open questions to be answered. Beyond machine-learning training, the industry needs to develop more holistic approaches that address the three main causes of bias, as outlined above. Additionally, future research should consider data with a broader representation of gender variants, such as transgender and non-binary identities, to help expand our understanding of how to handle expanding diversity.

We have an obligation to create technology that is effective and fair for everyone. I believe the benefits of AI will outweigh the risks if we can address them collectively. It's up to all practitioners and leaders in the field to collaborate, research, and develop solutions that reduce bias in AI for all.

Originally posted here:

4 Ways to Address Gender Bias in AI - Harvard Business Review

Posted in Ai | Comments Off on 4 Ways to Address Gender Bias in AI – Harvard Business Review

What is AI (artificial intelligence)? – Definition from …

Posted: November 17, 2019 at 2:33 pm

Artificial intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions) and self-correction. Particular applications of AI include expert systems, speech recognition and machine vision.

AI can be categorized as either weak or strong. Weak AI, also known as narrow AI, is an AI system that is designed and trained for a particular task. Virtual personal assistants, such as Apple's Siri, are a form of weak AI. Strong AI, also known as artificial general intelligence, is an AI system with generalized human cognitive abilities. When presented with an unfamiliar task, a strong AI system is able to find a solution without human intervention.

Because hardware, software and staffing costs for AI can be expensive, many vendors are including AI components in their standard offerings, as well as access to Artificial Intelligence as a Service (AIaaS) platforms. AI as a Service allows individuals and companies to experiment with AI for various business purposes and sample multiple platforms before making a commitment. Popular AI cloud offerings include Amazon AI services, IBM Watson Assistant, Microsoft Cognitive Services and Google AI services.

While AI tools present a range of new functionality for businesses, the use of artificial intelligence raises ethical questions. This is because deep learning algorithms, which underpin many of the most advanced AI tools, are only as smart as the data they are given in training. Because a human selects what data should be used to train an AI program, the potential for human bias is inherent and must be monitored closely.

Some industry experts believe that the term artificial intelligence is too closely linked to popular culture, causing the general public to have unrealistic fears about artificial intelligence and improbable expectations about how it will change the workplace and life in general. Researchers and marketers hope the label "augmented intelligence," which has a more neutral connotation, will help people understand that AI will simply improve products and services, not replace the humans that use them.

Arend Hintze, an assistant professor of integrative biology and computer science and engineering at Michigan State University, categorizes AI into four types, from the kind of AI systems that exist today to sentient systems, which do not yet exist. His categories are as follows:

AI is incorporated into a variety of different types of technology. Here are seven examples.

Artificial intelligence has made its way into a number of areas. Here are six examples.

The application of AI in the realm of self-driving cars raises security as well as ethical concerns. Cars can be hacked, and when an autonomous vehicle is involved in an accident, liability is unclear. Autonomous vehicles may also be put in a position where an accident is unavoidable, forcing the programming to make an ethical decision about how to minimize damage.

Another major concern is the potential for abuse of AI tools. Hackers are starting to use sophisticated machine learning tools to gain access to sensitive systems, complicating the issue of security beyond its current state.

Deep learning-based video and audio generation tools also present bad actors with the tools necessary to create so-called deepfakes: convincingly fabricated videos of public figures saying or doing things that never took place.

Despite these potential risks, there are few regulations governing the use of AI tools, and where laws do exist, they typically pertain to AI only indirectly. For example, federal Fair Lending regulations require financial institutions to explain credit decisions to potential customers, which limits the extent to which lenders can use deep learning algorithms, which by their nature are typically opaque. Europe's GDPR puts strict limits on how enterprises can use consumer data, which impedes the training and functionality of many consumer-facing AI applications.

In 2016, the National Science and Technology Council issued a report examining the potential role governmental regulation might play in AI development, but it did not recommend specific legislation be considered. Since that time the issue has received little attention from lawmakers.

The rest is here:

What is AI (artificial intelligence)? - Definition from ...

Posted in Ai | Comments Off on What is AI (artificial intelligence)? – Definition from …

Microsoft sends a new kind of AI processor into the cloud – Ars Technica

Posted: at 2:33 pm

Microsoft rose to dominance during the '80s and '90s thanks to the success of its Windows operating system running on Intel's processors, a cosy relationship nicknamed "Wintel."

Now Microsoft hopes that another hardware-software combo will help it recapture that success, and catch rivals Amazon and Google in the race to provide cutting-edge artificial intelligence through the cloud.

Microsoft hopes to extend the popularity of its Azure cloud platform with a new kind of computer chip designed for the age of AI. Starting today, Microsoft is providing Azure customers with access to chips made by the British startup Graphcore.

Graphcore, founded in Bristol, UK, in 2016, has attracted considerable attention among AI researchers, and several hundred million dollars in investment, on the promise that its chips will accelerate the computations required to make AI work. Until now, it has not made the chips publicly available or shown the results of trials involving early testers.

Microsoft, which put its own money into Graphcore last December as part of a $200 million funding round, is keen to find hardware that will make its cloud services more attractive to the growing number of customers for AI applications.

Unlike most chips used for AI, Graphcore's processors were designed from scratch to support the calculations that help machines recognize faces, understand speech, parse language, drive cars, and train robots. Graphcore expects its chips will appeal to companies running business-critical operations on AI, such as self-driving-car startups, trading firms, and operations that process large quantities of video and audio. Those working on next-generation AI algorithms may also be keen to explore the platform's advantages.

Microsoft and Graphcore today published benchmarks that suggest the chip matches or exceeds the performance of the top AI chips from Nvidia and Google when running algorithms written for those rival platforms. Code written specifically for Graphcore's hardware may be even more efficient.

The companies claim that certain image-processing tasks work many times faster on Graphcore's chips than on its rivals' when using existing code. They also say they were able to train a popular AI model for language processing, called BERT, at rates matching those of any other existing hardware.

BERT has become hugely important for AI applications involving language. Google recently said that it is using BERT to power its core search business. Microsoft says it is now using Graphcore's chips for internal AI research projects involving natural language processing.

Karl Freund, who tracks the AI chip market at Moor Insights, says the results show the chip is cutting-edge but still flexible. A highly specialized chip could outperform one from Nvidia or Google but would not be programmable enough for engineers to develop new applications. "They've done a good job making it programmable," he says. "Good performance in both training and inference is something they've always said they would do, but it is really, really hard."

Freund adds that the deal with Microsoft is crucial for Graphcore's business, because it provides an on-ramp for customers to try the new hardware. The chip may well be superior to existing hardware for some applications, but it takes a lot of effort to redevelop AI code for a new platform. With a couple of exceptions, Freund says, the chip's benchmarks are not eye-popping enough to lure companies and researchers away from the hardware and software they are already comfortable using.

Graphcore has created a software framework called Poplar, which allows existing AI programs to be ported to its hardware. Plenty of existing algorithms may still be better suited to software that runs on top of rival hardware, though. Google's TensorFlow AI software framework has become the de facto standard for AI programs in recent years, and it was written specifically for Nvidia and Google chips. Nvidia is also expected to release a new AI chip next year, which is likely to have better performance.

Nigel Toon, cofounder and CEO of Graphcore, says the companies began working together a year after his company's launch, through Microsoft Research Cambridge in the UK. His company's chips are especially well suited to tasks that involve very large AI models or temporal data, he says. One customer in finance supposedly saw a 26-fold performance boost in an algorithm used to analyze market data thanks to Graphcore's hardware.

A handful of other, smaller companies also announced today that they are working with Graphcore chips through Azure. These include Citadel, which will use the chips to analyze financial data, and Qwant, a European search engine that wants the hardware to run an image-recognition algorithm known as ResNext.

The AI boom has already shaken up the market for computer chips in recent years. The best algorithms perform parallel mathematical computations, which can be done more effectively on graphics chips (GPUs), which have hundreds of simple processing cores, than on conventional chips (CPUs), which have a few complex processing cores.
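
The difference is easy to illustrate from Python. The sketch below stays on the CPU, but it shows the shape of the work: element-wise math with no dependencies between elements, which is exactly what can be fanned out across hundreds of simple cores on a GPU (or IPU):

```python
import numpy as np

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

# Sequential style: one multiplication at a time, as a single CPU core would.
out = np.empty_like(a)
for i in range(len(a)):
    out[i] = a[i] * b[i]

# Data-parallel style: one array operation. Every element is independent,
# so the work can be split across as many cores as the hardware offers.
out_vec = a * b

assert np.allclose(out, out_vec)
```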

The GPU-maker Nvidia has ridden the AI wave to riches, and Google announced in 2017 that it would develop its own chip, the Tensor Processing Unit, which is architecturally similar to a GPU but optimized for TensorFlow.

Graphcore's chips, which it calls intelligence processing units (IPUs), have many more cores than GPUs or TPUs. They also feature memory on the chip itself, which removes a bottleneck that comes with moving data onto a chip for processing and off again.

Facebook is also working on its own AI chips. Microsoft has previously touted reconfigurable chips made by Intel and customized by its engineers for AI applications. A year ago, Amazon revealed it was also getting into chipmaking, but with a more general-purpose processor optimized for Amazon's cloud services.

More recently, the AI boom has sparked a flurry of startup hardware companies seeking to develop more specialized chips. Some of these are optimized for specific applications such as autonomous driving or surveillance cameras. Graphcore and a few others offer much more flexible chips, which are crucial for developing AI applications but also much more challenging to produce. Graphcore's last investment round valued the company at $1.7 billion.

Graphcore's chips might first find traction with top AI experts who are able to write the code needed to exploit their benefits. Several prominent AI researchers have invested in Graphcore, including Demis Hassabis, cofounder of DeepMind; Zoubin Ghahramani, a professor at the University of Cambridge and the head of Uber's AI lab; and Pieter Abbeel, a professor at UC Berkeley who specializes in AI and robotics. In an interview with WIRED last December, AI visionary Geoffrey Hinton discussed the potential for Graphcore chips to advance fundamental research.

Before long, companies may be tempted to try out the latest thing, too. As Graphcore's CEO Toon says, "Everybody's trying to innovate, trying to find an advantage."

This story originally appeared on wired.com.

Listing image by Graphcore

Read this article:

Microsoft sends a new kind of AI processor into the cloud - Ars Technica

Posted in Ai | Comments Off on Microsoft sends a new kind of AI processor into the cloud – Ars Technica

"AI washing" threatens to overinflate expectations for the technology – Axios

Posted: at 2:33 pm

Zealous marketing departments, capital-hungry startup founders and overeager reporters are casting the futuristic sheen of artificial intelligence over many products that are actually driven by simple statistics or hidden people.

Why it matters: This "AI washing" threatens to overinflate expectations for the technology, undermining public trust and potentially setting up the booming field for a backlash.

The big picture: The tech industry has always been infatuated with the buzzword du jour. Before AI landed in this role, it belonged to "big data." Before that, everyone was "in the cloud" or "mobile first." Even earlier, it was "Web 2.0" and "social software."

Plenty of companies rely on one or the other of those tactics, which straddle the line between attractive branding and misdirection.

"It's really tempting if you're a CEO of a tech startup to AI-wash because you know you're going to get funding," says Brandon Purcell, a principal analyst at Forrester.

The tech sector's fake-it-till-you-make-it attitude plays into the problem.

The confusion and deception get an assist from the fuzzy definition of AI. It covers everything from state-of-the-art deep learning, which powers most autonomous cars, to 1970s-era "expert systems" that are essentially huge sets of human-coded rules.
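
To see how low the bar can sit, consider what a 1970s-style rule-based system amounts to in code. This is a deliberately trivial, hypothetical sketch with invented thresholds, not any vendor's actual product logic:

```python
# A deliberately trivial "expert system": hand-written rules, no learning.
# The thresholds are hypothetical, chosen only to illustrate the pattern.
def loan_decision(income: float, debt: float) -> str:
    if income <= 0:
        return "deny"
    if debt / income > 0.4:  # a rule a human expert wrote down
        return "deny"
    return "approve"

print(loan_decision(income=50_000, debt=30_000))  # deny (ratio is 0.6)
```

Software like this can be genuinely useful, but calling it "AI" is exactly the kind of rebranding the article describes.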

The rest is here:

"AI washing" threatens to overinflate expectations for the technology - Axios

Posted in Ai | Comments Off on "AI washing" threatens to overinflate expectations for the technology – Axios

Toyota's AI Bets Go Beyond Automotive – Forbes

Posted: at 2:33 pm

Vehicle manufacturers know that they need to invest in autonomous technologies if they want to remain relevant. As such, it should be no surprise that many car companies are investing in AI technologies to keep themselves competitive. Interviewed on an AI Today podcast episode, Jim Adler, Founding Managing Director of Toyota AI Ventures, shared insights into the sort of investments Toyota AI Ventures is making in the industry, how the automotive industry is benefiting from these investments, and what non-automotive AI and ML investments the firm is making.

Jim Adler, Founding Managing Director at Toyota AI Ventures

Why AI-Related Investments are so important for Toyota

Founded in 2017, Toyota AI Ventures raised a $100 million fund to invest in artificial intelligence, cloud-based data, and robotics that may also leverage AI and cloud-based data. Toyota AI Ventures is a subsidiary of the Toyota Research Institute and helps AI ventures around the world bring new artificial intelligence technology to market. There are so many companies working to develop AI technology that can help improve the quality of life around the world.

Most of Toyota AI Ventures' investments have been in very early-stage startups with seed and series A funding. Not only is Toyota looking to get in early, it is also looking for the most promising emerging technologies. In the future, Toyota AI Ventures might consider investing in later-stage startups as well, but for now it is sticking with front-line investments.

Investing in startups helps Toyota learn what is working in the industry and where customers' interests are changing and evolving. The investments can also help Toyota learn about ways its own products might succeed. Adler says that if one of their startups succeeds, they celebrate that success with the startup, and if one fails, they take it as a learning experience. What Toyota looks for when selecting investments are applications that start whole new markets. Companies must show that they are willing to take a detailed, full-spectrum approach to their development.

AI is such a hot area for research and development at the moment, and opportunities for investment are abundant. Many would think that Toyota is focused only on areas that deal with vehicles or other technology that directly impacts Toyota; however, Toyota supports any technology that can change the future, investing in a wide range of technologies.

Specific AI investments

One company that is part of Toyota's AI portfolio is Intuition Robotics, which is focused on developing artificial intelligence solutions that act as a companion for the elderly. These AIs can converse with users to remind them to take their medication, suggest being more active, and otherwise help them live healthier lives as they continue to age. Interactions like this have been shown to help seniors become healthier and build healthy habits. They can also help seniors feel as if they have more socialization, especially if they live alone.

Another company in Toyota's AI investment portfolio is Joby Aviation, which aims to deliver safe and affordable public air travel. This can get people off the road, lower commute times, and be better for the environment. Joby is developing a series of all-electric aircraft capable of vertical take-off and landing (VTOL) to transport people from one place to another. These aircraft will travel faster than helicopters and use complex onboard software to help with flight.

In a similar vein, SLAMcore, another Toyota AI Ventures investment, is a London-based startup that works with drones. Almost all drones currently on the market rely on GPS to fly themselves. With SLAMcore's AI, however, both robots and drones can use spatial sensors to detect where they are and navigate.

These various companies exist in markets and areas outside the current core of Toyota's business. However, Toyota is making these investments to help discover what may be next for the company. The Toyota Research Institute was originally started to help Toyota develop self-driving cars, but the company is now realizing there is potential for more on the horizon, such as home robots or mobility options.

Speaking more toward the automotive industry, Adler says that Toyota is focusing on data-centricity when it comes to the future of Toyota and AI. Companies in other industries that have become more data-focused, such as Netflix and Amazon, are leaders in their industries, so Toyota sees this as important for itself as well. Adler mentioned that you can't fake it in the AI world, especially when it comes to using AI technology in cars. The technology has to be real and has to take a complete approach. Multiple companies have been caught trying to pass off humans as AI, and that won't work for many applications.

The future of AI

Adler acknowledges many of the challenges facing AI and its portfolio investments. Advancements in computer vision, machine learning, AI, and robotics are showing that the technology is able to do more than it has in the past, such as delivering products to customers autonomously. But successfully navigating real streets and terrain while avoiding obstacles and consistently arriving at the correct destination is something that hasn't been as easily achieved.

Autonomous vehicles, as well as more intelligent human-driven vehicles, are areas where a lot of development is going on and even more innovative products are on the way. Toyota is working on a program known as Guardian that works to guard the driver against dangers on the road. The AI-enabled Guardian is designed to help make sure that drivers do not end up in situations that could be dangerous to them.

Toyota is just one of many companies increasing their AI investments and applications. Startups around the world have promised a lot of new AI applications, and some have already delivered on them. The possibilities AI brings to the table make what is in store for the technology very promising.

View original post here:

Toyota's AI Bets Go Beyond Automotive - Forbes

Posted in Ai | Comments Off on Toyota's AI Bets Go Beyond Automotive – Forbes

What Is The Future Of Enterprise AI? – Forbes

Posted: at 2:33 pm

With state players increasingly involved in automation warfare, and AI-driven automation on its way to becoming a weapon of war, what will it mean for an enterprise to stay competitive enough to survive?

Introduction

Artificial intelligence is redefining the very meaning of being an enterprise. The rapidly advancing artificial intelligence (AI) capability is on its way to revolutionizing every aspect of an enterprise. The ability to access data has leveled the playing field and brought every enterprise a unique possibility of progress. What remains to be seen is which enterprises on this level playing field will be able to compete and lay a new foundation for fundamental transformation, and which ones will decline.

Acknowledging this evolving reality, Risk Group initiated a much-needed discussion on The Future of Enterprise AI with Ankur Dinesh Garg on Risk Roundup.

Disclosure: I am the CEO of Risk Group LLC.

Risk Group discusses "The Future of Enterprise AI" with Ankur Dinesh Garg, chief of artificial intelligence at Hotify Inc., board member and chief of artificial intelligence at Sonasoft, a board member at Iamwire, advisor to many companies, and a member of the Forbes Technology Council, based in the United States.

Purpose of Enterprise AI

Enterprises across industries are undergoing a profound and lasting shift in the relative balance of AI adoption. AI application will offer each enterprise as many opportunities as it does challenges. While access to technology, data, and information is common to all enterprises, what is not common is how each enterprise uses that information, and for what reason. While AI has given enterprises across industries and nations the same starting point in access to AI technology, it is crucial to understand the parameters that will define their individual and collective success.

There are many variables in each enterprise ecosystem that will determine whether an enterprise will be able to use the data and information from its ecosystem to develop AI, automate, and transform to succeed. Ankur Garg expands on this notion on Risk Roundup: "All the enterprises are running the race of AI, and who is going to win largely depends on many crucial elements. For example, how accurately enterprise leaders can articulate the problem that they are facing, and the business impact and the value associated with the problem."

As the pace of AI deployment accelerates, it is difficult to grasp what staying competitive means for an enterprise's survival. It is an understatement that enterprises across nations are expected to face extraordinary challenges and changes in the coming years, with automation-driven growth as the only constant in those changes. As a result, it is vital to understand what AI-driven growth means for enterprises.

Emerging Trends

The emerging trends in AI-driven automation reflect significant shifts of players and actions in the AI sphere, revealing reconfigurations of ideas, interests, influence, and investments in the AI domain of enterprise adoption and transformation. Enterprises are beginning to understand the consequences of the evolving artificial intelligence-driven automation ecosystem far beyond narrow artificial intelligence, crossing economic, commerce, education, governance, and trade supply chains. While the relationship between enterprises and automation is complicated, and at times indirect, the force and pace of AI-driven automation change expected in the coming years will present each enterprise with challenges and opportunities for its products, services, processes, operations, and supply chains. From what it seems, the AI applications of tomorrow will be hybrid systems composed of several components and reliant on many different data sets, methodologies, and models.

The growing layers of cyberspace are connecting humans and machines across cyberspace, aquaspace, geospace, and space (CAGS). It is not only human users that are getting connected; a growing number of internet of things (IoT) devices are also becoming active and operational with the rollout of 5G. Individually and collectively, the ever-increasing connectivity of man and machine, living and non-living, is creating enormous amounts of data and driving the rapid expansion of AI across enterprises.

However, until recently there was not enough processing power for enterprises to implement ideal AI techniques. While AI-driven automation emerged a few years ago, it is only now maturing, as cloud computing and massively parallel processing systems advance AI implementation further. As a result, AI-driven automation adoption is now progressing as an essential trend.

Many functional parts of enterprises are already benefiting from the AI transformation. From R&D projects to customer service, finance, accounting, and IT, there are rapid shifts from experimental to applied AI technology across enterprises. There is no doubt that each enterprise will benefit, from intelligent decision making to streamlined supply chains, and from customer relations to recruitment practices. At the same time, AI-driven automation is on its way to becoming a war weapon, as shown by the increasing involvement of state players in automation warfare. This is aimed at crippling AI competition and is progressing rapidly despite the growing complexities and challenges.

As Enterprise AI demand grows, so does the rise of AI-as-a-service. Moreover, AI-driven automation, data analytics, and low-code platforms are converging as AI fundamentally shifts the competitive landscape. New organizational capabilities are becoming critical, and so is the need to effectively manage the growing security risks of dual-use AI.

When common-sense tasks become more straightforward for computers to process, AI-driven intelligent applications and robots will become extremely useful in enterprise operations and supply chains. A limited understanding of use cases (what problems can be solved using AI, where to apply AI, what data sets to use, and how to get credible data and skilled resources) still slows down AI adoption, and company culture also plays a vital role in AI adoption strategies, often proving to be a barrier.

Enterprise Digital Data Infrastructure

While enterprises are taking advantage of AI and beginning to harness these technologies and their benefits, AI growth for any industry is driven and shaped by several variables and external factors, many of which can be amplified or influenced by data choices made at the enterprise or industry level. So how will the availability, affordability, accessibility, and integrity of data affect potential AI growth for enterprises across nations?

As seen, many enterprises lack the necessary digital data infrastructure. The lack of digital support, in turn, discourages opportunities and innovations in AI, making it challenging to address enterprise needs adequately and leaving enterprises with outdated data, information, and intelligence. Moreover, the credibility of data sets is also an emerging concern. That brings us to two important questions: how are enterprises addressing digital data infrastructure challenges, and what data types are important for enterprises?

While enterprises are currently using AI in areas for which they already have some data and analytics in place, many meaningful data partnerships are emerging. Integrated structured data and text, when available to train AI systems, will bring necessary progress in enterprise AI. It will be interesting to see how this new data-driven reality brings each enterprise across industries both opportunities and risks.

What Next?

Enterprise AI has the potential to transform the enterprise ecosystem in many ways. From decision making to supply chain intelligence, and from tracking capabilities to the automation of business processes, AI can change the entire enterprise ecosystem across CAGS. The time is now to understand its risks and rewards.

Read more here:

What Is The Future Of Enterprise AI? - Forbes

Posted in Ai | Comments Off on What Is The Future Of Enterprise AI? – Forbes

Where AI and ethics meet – Cosmos

Posted: at 2:33 pm

By Stephen Fleischresser

Given a swell of dire warnings about the future of artificial intelligence over the last few years, the field of AI ethics has become a hive of activity.

These warnings come from a variety of experts, such as Oxford University's Nick Bostrom, but also from more public figures such as Elon Musk and the late Stephen Hawking. The picture they paint is bleak.

In response, many have dreamed up sets of principles to guide AI researchers and help them negotiate the maze of human morality and ethics. Now, a paper in Nature Machine Intelligence throws a spanner in the works by claiming that such high principles, while laudable, will not give us the ethical AI society we need.

The field of AI ethics is generally broken into two areas: one concerning the ethics guiding humans who develop AIs, and the other machine ethics, guiding the moral behaviour of the AIs or robots themselves. However, the two areas are not so easily separated.

Machine ethics has a long history. In 1950 the great science fiction writer Isaac Asimov clearly articulated his now famous three laws of robotics in his work I, Robot, and proposed them as such:

1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Later a "zeroth" law was added: a robot may not harm humanity, or, by inaction, allow humanity to come to harm.

These laws together were Asimov's (and editor John W Campbell's) musing on how to ensure an artificially intelligent system would not turn on its creators: a safety feature designed to produce friendly and benevolent robots.

Isaac Asimov articulated his three laws of robotics in 1950. Credit: Alex Gotfryd/CORBIS/Corbis via Getty Images

Asimov explored the limits of the three laws in numerous writings, often finding them wanting. While the laws were a literary device, they have nonetheless informed the real-world field of AI ethics.

In 2004, the film adaptation of I, Robot was released, featuring an AI whose interpretation of the three laws led to a plan to dominate human beings in order to save us from ourselves.

To highlight the flaws in the ethical principles of the three laws, an organisation called the Singularity Institute for Artificial Intelligence (now the Machine Intelligence Research Institute), headed up by the American AI researcher Eliezer Yudkowsky, started an online project called Three Laws Unsafe.

Yudkowsky, an early theorist of the dangers of super-intelligent AI and a proponent of the idea of Friendly AI, argued that such principles would be hopelessly simplistic if AI ever developed to the stage depicted in Asimov's fictions.

Despite widespread recognition of the drawbacks of the three laws, many organisations, from private companies to governments, nonetheless persisted with projects to develop principle-based systems of AI ethics, with one paper listing 84 documents containing ethical principles or guidelines for AI that have been published to date.

This continued focus on ethical principles is partly because, while the three laws were designed to govern AI behaviour alone, principles of AI ethics apply to AI researchers as well as the intelligences that they develop. The ethical behaviour of AI is, in part, a reflection of the ethical behaviour of those that design and implement them, and because of this, the two areas of AI ethics are inextricably bound to one another.

AI development needs strong moral guidance if we are to avoid some of the more catastrophic scenarios envisaged by AI critics.

A review published last year by AI4People, an initiative of the international non-profit organisation Atomium-European Institute for Science, Media and Democracy, reports that many of these projects have developed sets of principles that closely resemble those in medical ethics: beneficence (do only good), nonmaleficence (do no harm), autonomy (the power of humans to make individual decisions), and justice.

This convergence, for some, lends a great deal of credibility to these as possible guiding principles for the development of AIs in the future.

However, Brent Mittelstadt of the Oxford Internet Institute and the British Government's Alan Turing Institute, an ethicist whose research concerns primarily digital ethics in relation to algorithms, machine learning, artificial intelligence, predictive analytics, big data and medical expert systems, now argues that such an approach, called principlism, is not as promising as it might look.

Mittelstadt suggests significant differences between the fields of medicine and AI research that may well undermine the efficacy of the former's ethical principles in the context of the latter.

His first argument concerns common aims and fiduciary duties, the duties by which trusted professionals, such as doctors, place others' interests above their own. Medicine is clearly bound together by the common aim of promoting the health and well-being of patients, and Mittelstadt argues that it is a defining quality of a profession for its practitioners to be part of a moral community with common aims, values and training.

For the field of AI research, however, the same cannot be said. "AI is largely developed by the private sector for deployment in public (for example, criminal sentencing) and private (for example, insurance) contexts," Mittelstadt writes. "The fundamental aims of developers, users and affected parties do not necessarily align."

Similarly, the fiduciary duties of the professions and their mechanisms of governance are absent in private AI research.

"AI developers do not commit to public service, which in other professions requires practitioners to uphold public interests in the face of competing business or managerial interests," he writes. "In AI research, public interests are not granted primacy over commercial interests."

In a related point, Mittelstadt argues that while medicine has a professional culture that lays out the necessary moral obligations and virtues stretching back to the physicians of ancient Greece, AI development does not have a comparable history, homogeneous professional culture and identity, or similarly developed professional ethics frameworks.

Medicine has had a long time over which to learn from its mistakes and the shortcomings of the minimal guidance provided by the Hippocratic tradition. In response, it has codified appropriate conduct into modern principlism which provides fuller and more satisfactory ethical guidance.

AI research is obviously a far younger field, devoid of these rich historical opportunities to learn. Further complicating the issue, the context of application for medicine is comparatively narrow, whereas AI can in principle be deployed in any context involving human expertise. That makes it radically multi- and interdisciplinary, with researchers coming from varied disciplines and professional backgrounds, which have incongruous histories, cultures, incentive structures and moral obligations.

This makes it extraordinarily difficult to develop anything other than broadly acceptable principles to guide the people and processes responsible for the development, deployment and governance of AI across radically different contexts of use. The problem, says Mittelstadt, is translating these into actual good practice. At this level of abstraction, he warns, meaningful guidance may be impossible.

Finally, the author points to the relative lack of legal and professional accountability mechanisms within AI research. Where medicine has numerous layers of legal and professional protections to uphold professional standards, such things are largely absent in AI development. Mittelstadt draws on research showing that codes of ethics do not themselves result in ethical behaviour, without those codes being embedded in organisational culture and actively enforced.

"This is a problem," he writes. "Serious, long-term commitment to self-regulatory frameworks cannot be taken for granted."

All of this together leads Mittelstadt to conclude: "We must therefore hesitate to celebrate consensus around high-level principles that hide deep political and normative disagreement."

Instead he argues that AI research needs to develop binding and highly visible accountability structures at the organisational level, as well as encouraging actual ethical practice in the field to inform higher level principles, rather than relying solely on top-down principlism. Similarly, he advocates a focus on organisational ethics rather than professional ethics, while simultaneously calling for the professionalisation of AI development, partly through the licensing of developers of high-risk AI.

His final suggestion for the future of AI ethics is to exhort AI researchers not to treat ethical issues as design problems to be solved. "It is foolish to assume," he writes, "that very old and complex normative questions can be solved with technical fixes or good design alone."

Instead, he writes that intractable principled disagreements should be expected and welcomed, as they reflect both serious ethical consideration and diversity of thought. "They do not represent failure, and do not need to be solved. Ethics is a process, not a destination. The real work of AI ethics begins now: to translate and implement our lofty principles, and in doing so to begin to understand the real ethical challenges of AI."

The rest is here:

Where AI and ethics meet - Cosmos

Posted in Ai | Comments Off on Where AI and ethics meet – Cosmos

Why business leaders are short sighted on AI – ZDNet

Posted: at 2:33 pm

Artificial intelligence is one of the most marketed (hyped, you might say) and ill-defined technology categories being packaged and lobbed at those in the enterprise these days. You might think overexposure would lead to fatigue and a healthy dose of skepticism.

Not so, according to original research from IFS, a global enterprise applications company. IFS recently released the findings of a global research study into attitudes and strategies toward artificial intelligence among business leaders. The study polled 600 business leaders worldwide, across a broad spectrum of industries, who are involved with their companies' enterprise technology, including enterprise resource planning, enterprise asset management, and field service management.

The range of findings is worth a closer look, but the headline is that business leaders across a variety of industries are convinced AI will be an essential component of their companies' success in the near future. In fact, about 90% of respondents reported at least some plans to implement AI in various parts of their business. That's a telling statistic. Whether motivated by fear of missing out or clear-eyed optimism, business leaders seem sold on AI's promise.

According to the study, industrial automation was the most commonly reported area of investment, with 44.6% planning AI projects, while customer relationship management and inventory planning and logistics tied for second place at 38.9%.

"AI is no longer an emerging technology. It is being implemented to support business automation in the here and now, as this study clearly proves," IFS VP of AI and RPA Bob De Caux said. "We are seeing many real-world examples where technology is augmenting existing decision-making processes by providing users with more timely, accurate and pertinent information. In today's disruptive economy, the convergence of technologies such as AI, RPA, and IoT is bolstering a new form of business automation that will provide companies that are brave enough with the tools and services they need to be more competitive and outflank larger competitors."

When asked how they plan to use AI, 60.6% of respondents to the IFS study said they expected it would help them make existing workers more productive. Just under half, 47.9%, said they would use AI to add value to products and services they sell to customers. About 18.1% said they would proactively use it to replace existing workers.

The data suggest industrial and business leaders are enthusiastically planning to involve AI in their business. They may still be coming to terms, however, with the implications of the resulting transformation. While a majority cite increased productivity to justify AI investments, many executives aren't looking ahead to the inevitable reduced demand for labor. As the report points out, it's unlikely consumption levels or demand will increase in proportion to productivity.

That is, if technologies like RPA can really live up to the productivity hype. Generalized confidence in an outcome should never be mistaken for proof of that outcome.

Just how stark is the failure on the part of business leaders to think about the effect the technology could have on workers? Consider that while a majority of respondents anticipated productivity increases from AI, only 29.3% anticipated AI would lead to a reduction in headcount in their industry.

It doesn't take AI to recognize that something seems mighty amiss there.

More here:

Why business leaders are short sighted on AI - ZDNet

Posted in Ai | Comments Off on Why business leaders are short sighted on AI – ZDNet

Perception won’t be reality, once AI can manipulate what we see | TheHill – The Hill

Posted: at 2:33 pm

Voice-spoofing technology was used to steal a quarter-million dollars in March from the unwitting CEO of an energy company, who thought he was talking to his (German) boss. A recent study showed that 72 percent of people reading an AI-generated news story thought it was credible. In September, a smartphone app called Zao became a viral sensation in China; before the government abruptly outlawed it, Zao allowed people to seamlessly swap themselves into famous movie scenes.

Then there is the infamous case of the doctored video of House Speaker Nancy Pelosi (D-Calif.) that went viral before being detected as manipulated to make her appear drunk.

Most of the recent advances in AI (artificial intelligence) have come in the realm of perceptual intelligence. This has enabled our devices to see (and recognize the faces of our friends, for example), to hear (and recognize that song) and even to parse text (and recognize the rough intent of the email in your mailbox). Today's AI technology can also generate these percepts: our devices can generate scenes and faces that never existed, clone a voice to generate speech, and even write pithy (if stilted) responses to the emails in your inbox.

This ability to generate perceptions puts AI in a position of great promise and great peril.

Synthetic media can have many beneficial applications. After all, inducing suspension of disbelief in the audience is the cornerstone of much of entertainment. Nevertheless, it is the potential misuses of the technology, especially those going under the name of "deep fakes," that are raising alarms.

If perception is reality, then what happens to reality when AI can generate or manipulate perceptions? Although forgeries, fakes and spoofs have existed for much of human history, they had to be crafted manually, until now. The advent of perceptual AI technology has considerably reduced the effort needed to generate convincing fakes. As we saw, the Zao app allowed lay users to swap themselves into movie scenes. What is more, as the technology advances, it will become harder to spot the fakes. Sites such as Which Face is Real? show that, already, most people cannot tell AI-generated images from real ones.

Easy generation and widespread dissemination of synthetic media can have quite significant adverse consequences for many aspects of civil society. Elections can be manipulated through the spread of deep fake videos that put certain candidates in compromising positions. Spoofing voice and video calls can unleash a slew of new consumer scams. Individual privacy can be invaded by inserting people's likenesses into compromising (and sometimes pornographic) pictures and videos.

What are our options in fighting this onslaught of AI-enabled synthetic media? To begin with, AI technology itself can help us detect deep fakes by leveraging the known shortfalls of current AI technology; there are techniques that spot fake text, voice, images and video. For example, in the case of images, fakes can be detected by imperceptible pixel-level imperfections or background inconsistencies; it is hard for most fake generators to get the background details correct. (In much the same way, when we remember our dreams in the morning, the parts that don't make sense are often not the faces of the people but, rather, the background story.) For detecting fake videos of people, current techniques focus on the correlations between lip movements, speech patterns and gestures of the original speaker. Once detected, fake media can be added to global databases of known fakes, helping with their faster identification in the future.
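
As a rough illustration of the pixel-level idea, and emphatically not a production detector, the sketch below computes a simple high-pass residual: a crude statistic of the kind that real detection systems feed, among many other features, to a trained classifier. The filter choice and usage here are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import laplace

def noise_residual_score(gray_image: np.ndarray) -> float:
    """Mean absolute high-frequency residual of a grayscale image.

    Generated images often carry noise statistics that differ subtly from
    camera sensor noise; features like this one can help a classifier
    notice the difference. On its own it proves nothing.
    """
    return float(np.abs(laplace(gray_image.astype(float))).mean())

# Stand-in for a decoded video frame; a real pipeline would compare scores
# against distributions learned from known-real and known-fake imagery.
frame = np.random.rand(256, 256)
print(noise_residual_score(frame))
```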

Beyond detection, there are incipient attempts at regulation. California recently passed Assembly Bill 730, making deep fake videos illegal and providing some measure of protection against invasion of individual privacy. Twitter is establishing its own guidelines to tag synthetic media (deep fakes) with community help. Non-profit organizations like the Partnership on AI have established steering committees to study approaches to ensure the integrity of perceptual media. Other technology companies, including Facebook and AI Foundation, have supported gathering and sharing benchmark data sets to help accelerate research into deep fake detection. AI Foundation has released a platform, called Reality Defender 2020, specifically to help combat the impact of deep fakes on the 2020 elections.

While policies are important, so is educating the public about the need to be skeptical of perceptions in this age of AI. After all, the shortcomings of today's generation technology are not likely to persist into the future. In the long term, we should expect AI systems to be capable of producing fakes that cannot be spotted either by us or by our AI techniques. We have to gird ourselves for a future where our AI-generated doppelgangers may come across as more authentic to our acquaintances. Hopefully, by then, we will learn not to trust our senses blindly and, instead, insist on provenance, such as cryptographic authentication techniques, to establish the trustworthiness of what we perceive. Asking our loved ones on the phone to provide authentication may offend our sense of trust, but it may be the price we will have to pay as AI's ability to generate and manipulate media becomes ever more sophisticated.
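
The cryptographic provenance the author alludes to boils down to digital signatures: the capture device or publisher signs the media bytes, and anyone holding the matching public key can later verify that nothing was altered. Here is a minimal sketch using the Python cryptography package's Ed25519 primitives; key distribution and metadata formats, the genuinely hard parts, are omitted.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# The publisher (or camera) holds the private key; viewers hold the public key.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

media_bytes = b"...raw bytes of a video file..."
signature = private_key.sign(media_bytes)  # shipped alongside the media

# A viewer verifies the bytes are exactly what the publisher signed.
try:
    public_key.verify(signature, media_bytes)
    print("authentic: media matches the publisher's signature")
except InvalidSignature:
    print("tampered or unsigned")
```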

As deep fakes increase in sophistication, so will our immunity to them: we will learn not to trust our senses, and to insist on authentication. The scary part of the deep fake future is not the long term but the short term, before we outgrow our "seeing is believing" mindset. One consolation is that the short term may also be the only time when AI can still be an effective part of the solution to the problem it has wrought in this vulnerable period.

Subbarao Kambhampati, PhD, is a professor of computer science at Arizona State University and chief AI officer for AI Foundation, which focuses on the responsible development of AI technologies. He served as president and is now past-president of the Association for the Advancement of Artificial Intelligence and was a founding board member of Partnership on AI. He can be followed on Twitter @rao2z.

More:

Perception won't be reality, once AI can manipulate what we see | TheHill - The Hill

Posted in Ai | Comments Off on Perception won’t be reality, once AI can manipulate what we see | TheHill – The Hill
