
Category Archives: Ai

Jeff Bezos lays out Amazon’s three-pronged approach to AI – Quartz

Posted: April 17, 2017 at 12:53 pm

Amazonians and investors alike, rest assured: Jeff Bezos knows artificial intelligence is a big deal.

The Amazon CEO took a few paragraphs in his annual shareholder letter to explain how the company is using AI throughout its business, having recognized and embraced the external trend. The letter itself focuses on the idea that Amazon is still in Day 1, still a startup that's well aware that irrelevance and excruciating decline begin when you reach Day 2.

Bezos outlined three major categories that artificial intelligence falls into at the company: visible products and moonshots, core operations, and enterprise cloud.

"The outside world can push you into Day 2 if you won't or can't embrace powerful trends quickly. If you fight them, you're probably fighting the future. Embrace them and you have a tailwind," Bezos writes.

The first category, visible products and moonshots, is what Bezos calls a practical application of machine learning: consumer-facing products or services that wouldn't be possible without AI. The Amazon Go grocery store fits in here, as do the Prime Air delivery drones and, of course, the company's virtual personal assistant, Alexa.

While Amazon Go and Prime Air delivery are still at extremely early stages, Alexa has become the virtual assistant du jour; this year, we think it won CES without even showing up. It's easy to see Amazon as a leader here, since the Echo's release practically invented the voice-assistant speaker as a product category.

Artificial intelligence isn't all drones and virtual assistants. Bezos says that a lot of the AI work within the company happens under the hood of its core business, e-commerce. The company uses artificial intelligence to predict product demand, power search rankings, create and recommend deals, detect fraud, and translate the site into other languages.

"Though less visible, much of the impact of machine learning will be of this type: quietly but meaningfully improving core operations," Bezos says.

The letter also doesn't mention Amazon's highly automated warehouses, which use robots to accelerate processing and shipping times.

The third category, enterprise cloud, is typically where people's eyes start to glaze over. It's hard to overstate Amazon's dominance in the sector, though: the company earns more than 5 times the revenue of its nearest competitor, Microsoft. Alongside the S3 storage that supports much of the web (and cripples much of the web when it fails), Amazon offers easy tools for developers to integrate Amazon's artificial intelligence algorithms into their own applications, as well as servers on which AI companies can run their own algorithms.

The cloud is also a great marketing tool for providers like Amazon, because the work that others are doing on the platform usually sounds very impressive, without the provider having to do much of the tinkering and design that makes the final product work. Bezos touts the company's pre-packaged AI algorithms as helping predict disease and estimate crop yields for farmers, with no machine learning expertise required.
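
Bezos's letter doesn't spell out code-level details, so purely as a hedged sketch of what "integrating Amazon's AI algorithms into your own application" can look like, the example below calls one of AWS's managed vision APIs (Amazon Rekognition) through the boto3 SDK. The bucket and file names are placeholders, and the snippet assumes an AWS account with credentials already configured locally.

```python
# A minimal sketch of calling a managed AWS AI service from application code.
# Assumes an AWS account with credentials configured locally and the boto3 SDK
# installed (pip install boto3); bucket and file names below are placeholders.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

# Ask the hosted vision model to label objects in an image stored in S3.
response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-example-bucket", "Name": "photos/storefront.jpg"}},
    MaxLabels=10,
    MinConfidence=80.0,
)

for label in response["Labels"]:
    print(f"{label['Name']}: {label['Confidence']:.1f}%")
```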

Capping the AI section of the letter, Bezos says: "Watch this space. Much more to come."

Read more from the original source:

Jeff Bezos lays out Amazon's three-pronged approach to AI - Quartz

Posted in Ai | Comments Off on Jeff Bezos lays out Amazon’s three-pronged approach to AI – Quartz

To Get Consumers to Trust AI, Show Them Its Benefits – Harvard Business Review

Posted: at 12:53 pm

Executive Summary

Artificial intelligence (AI) is increasingly emerging in applications like autonomous vehicles and medical assistance devices, but consumers don't necessarily trust these applications. Research shows that operational safety and data security are decisive factors in getting people to trust new AI technology. Even more important is the balance between control and autonomy in the technology. And communication is key: it should be proactive and open in the early stages of introducing the public to the technology. Consumers to whom the benefits of an AI application are effectively communicated perceive less risk, which results in greater trust and, ultimately, greater adoption of the technology.

Artificial intelligence (AI) is emerging in applications like autonomous vehicles and medical assistance devices. But even when the technology is ready to use and has been shown to meet customer demands, there's still a great deal of skepticism among consumers. For example, a survey of more than 1,000 car buyers in Germany showed that only 5% would prefer a fully autonomous vehicle. We can find a similar number of skeptics of AI-enabled medical diagnosis systems, such as IBM's Watson. The public's lack of trust in AI applications may cause us to collectively neglect the possible advantages we could gain from them.

In order to understand trust in the relationship between humans and automation, we have to explore trust in two dimensions: trust in the technology and trust in the innovating firm.


In human interactions, trust is the willingness to be vulnerable to the actions of another person. But trust is an evolving and fragile phenomenon that can be destroyed even faster than it can be created. Trust is essential to reducing perceived risk, which is a combination of uncertainty and the seriousness of the potential outcome involved. Perceived risk in the context of AI stems from giving up control to a machine. Trust in automation can only evolve from predictability, dependability, and faith.

Three factors will be crucial to gaining this trust: 1) performance, that is, the application performs as expected; 2) process, that is, we have an understanding of the underlying logic of the technology; and 3) purpose, that is, we have faith in the design's intentions. Additionally, trust in the company designing the AI, and the way the firm communicates with customers, will influence whether the technology is adopted by customers. Too many high-tech companies wrongly assume that the quality of the technology alone will influence people to use it.

In order to understand how firms have systematically enhanced trust in applied AI, my colleagues Monika Hengstler and Selina Duelli and I conducted nine case studies in the transportation and medical device industries. By comparing BMW's semi-autonomous and fully autonomous cars, Daimler's Future Truck project, ZF Friedrichshafen's driving assistance system, as well as Deutsche Bahn's semi-autonomous and fully autonomous trains and VAG Nürnberg's fully automated underground train, we gained a deeper understanding of how those companies foster trust in their AI applications. We also analyzed four cases in the medical technology industry, including IBM's Watson as an AI-empowered diagnosis system, HP's data analytics system for automated fraud detection in the healthcare sector, AiCure's medical adherence app that reminds patients to take their medication, and the Care-O-bot 3 of Fraunhofer IPA, a research platform for upcoming commercial service robot solutions. Our semi-structured interviews, follow-ups, and archival data analysis were guided by a theoretical discussion of how trust in the technology and in the innovating firm and its communication is facilitated.

Based on this cross-case analysis, we found that operational safety and data security are decisive factors in getting people to trust technology. Since AI-empowered technology is based on the delegation of control, it will not be trusted if it is flawed. And since negative events are more visible than positive events, operational safety alone is not sufficient for building trust. Additionally, cognitive compatibility, trialability, and usability are needed:

Cognitive compatibility describes what people feel or think about an innovation as it pertains to their values. Users tend to trust automation if the algorithms are understandable and guide them toward achieving their goals. This understandability of algorithms and the motives in AI applications directly affect the perceived predictability of the system, which, in turn, is one of the foundations of trust.

Trialability points to the fact that people who were able to visualize the concrete benefits of a new technology via a trial run reduced their perceived risk and therefore their resistance to the technology.

Usability is influenced by both the intuitiveness of the technology, and the perceived ease of use. An intuitive interface can reduce initial resistance and make the technology more accessible, particularly for less tech-savvy people. Usability testing with the target user group is an important first step toward creating this ease of use.

But even more important is the balance between control and autonomy in the technology. For efficient collaboration between humans and machines, the appropriate level of automation must be carefully defined. This is even more important in intelligent applications that are designed to change human behaviors (such as medical devices that incentivize humans to take their medications on time). The interaction should not make people feel like they're being monitored, but rather assisted. Appropriate incentives are important to keep people engaged with an application, ultimately motivating them to use it as intended. Our cases showed that technologies with high visibility (e.g., autonomous cars in the transportation industry, or AiCure and Care-O-bot in the healthcare industry) require more intensive efforts to foster trust in all three trust dimensions.

Our results also showed that stakeholder alignment, transparency about the development process, and gradual introduction of the technology are crucial strategies for fostering trust. Introducing innovations in a stepwise fashion can lead to more gradual social learning, which in turn builds trust. Accordingly, the established firms in our sample tended to pursue a more gradual introduction of their AI applications to allow for social learning, while younger companies such as AiCure tended to choose a more revolutionary introduction approach in order to position themselves as a technology leader. The latter approach has a high risk of rejection and the potential to cause a scandal if the underlying algorithms turn out to be flawed.

If you're trying to get consumers to trust a new AI-enabled application, communication should be proactive and open in the early stages of introducing the public to the technology, as it will influence the company's perceived credibility and trustworthiness, which in turn influence attitude formation. In the cases we studied, when the benefits of an AI application were communicated effectively, users perceived less risk, which resulted in greater trust and a higher likelihood of adopting the new technology.

Read the original here:

To Get Consumers to Trust AI, Show Them Its Benefits - Harvard Business Review

Posted in Ai | Comments Off on To Get Consumers to Trust AI, Show Them Its Benefits – Harvard Business Review

Why AI will both increase efficiency and create jobs – CIO

Posted: at 12:53 pm

Artificial Intelligence is already impacting every industry through automation and machine learning, bringing concerns that AI is on the fast track to replacing many jobs. But these fears aren't new, says Dan Jackson, director of Enterprise Technology at Crestron, a company that designs workplace technology.

"I'd argue this is no different than when we moved from an agricultural to an industrial economy at the turn of the last century. The percentage of people working in agriculture significantly decreased, and it was a big shift, but we still have plenty of jobs 100 years later," he says.

Anytime society experiences a major technological advancement, we need to be prepared for it to change the way we live and work. It's hard to imagine what the future of jobs will look like with AI, but that future exists. And optimists suggest that, like the sewing machine to the textile industry, AI will make us better, more efficient and faster workers.


Antonis Papatsaras, PhD, AI expert and CTO at SpringCM, a contract and document management company, agrees that some concern is warranted, noting it's "consistent with historical reactions to innovation." Similar concerns were voiced during the Industrial Revolution, but they never held up -- instead of replacing jobs, humans were needed to operate the machinery.

"Time after time, we see jobs adapt and shift," he says.

Adam Compain, CEO of ClearMetal, a predictive logistics company, agrees that most fears around AI are disproportionate, and -- if we're being honest -- based on movies and TV. Instead of focusing on the fictional "what-ifs" of AI, we should be building strategies to ensure AI doesn't negatively impact employment.

"Artificial Intelligence is named so because it replicates our own way of thinking and, particularly in the application of machine learning, it's a helpful aid in recognizing patterns, managing overwhelming complexity, and handling tasks far too tedious for us to understand," says Compain.

Experts agree that AI has the potential to eliminate mundane, administrative work, while we will always rely on human workers to be empathetic, collaborative, creative and strategic. But its impact on any industry lies in the hands of the business leaders who are responsible for adopting AI strategies.

Tim Estes, CEO of Digital Reasoning, a cognitive computing company, says that "we cannot reasonably expect the jobs market to remain inflexible to a changing world." Instead, businesses who approach AI with an open mind and embrace the change will find ways to create new jobs, while those who "shun opportunity are most at risk."


A recent study of 1,000 global companies by Accenture found that AI is already creating three new categories of jobs: trainers, explainers and sustainers. Trainers are the people who teach AI systems how to act -- whether it's language, human behavior or the intricacies of human interaction. Explainers are the liaison between technology and business leaders, providing more insight and clarity into machine learning for the non-tech workers. Sustainers are the workers required to maintain AI systems and troubleshoot any potential issues.

"Some jobs were highly technical and required advanced degrees, but other roles demanded innately human things such as empathy and interaction. Downstream jobs, such as those in sales, marketing, or service will change to take advantage of the insights from AI, but many of the core skills will remain," says Estes.

It might sound like any job related to AI will require years of technical knowledge, but that isn't the case. We've already seen a shift in tech hiring -- companies often need highly specific skill sets that are hard to find in potential candidates. As a result, more businesses are hiring employees with the right soft skills, and then training them in technical skills.

"This opens a new window of opportunity for a diverse and booming workforce, as many organizations don't necessarily require a college degree from their technical employees. If you onboard a person with a willingness to learn and an understanding of basic technology skills, you can train them on a multitude of systems and applications," says Papatsaras.

Papatsaras also expects to see an overall shift in the education system, where students will be trained from a young age on robotics and AI. It's already happening outside of the education system - games like Minecraft can help teach children the fundamentals of coding to kick start STEM education.

The real takeaway is that any approach to AI will need to consider the human aspect of every business. AI has great potential to increase efficiency and accuracy and it's already been proven in certain industries.

For example, Estes points to the use of AI in banking to identify "rogue traders" and money laundering schemes. It's also improved healthcare by "increasing the speed and accuracy" of cancer diagnostics. AI can also help reduce the cost and length of human trafficking investigations, a situation where time is precious.

In these examples, argues Estes, AI hasn't replaced jobs, but has positively impacted efficiency. Yet, he still cautions against complacency with AI.

"We need to ensure our education system responds to equip young people with the appropriate skills and adaptability, while businesses and public organizations must invest in training. Perhaps most of all, we need to encourage imagination and willingness to experiment. The organizations that can innovate with AI will reap the benefits. Their growth will make them the primary source of future jobs," he says.

Companies have a choice when implementing AI. They can choose to effectively implement systems that make employees' lives easier and find creative ways to leverage the technology, says Papatsaras. It's up to employers to ease workers' fears around AI and build strategies that benefit everyone.

"At the end of the day, as employers and employees, we need to figure this out. If we play our cards right, AI is here to lessen the burden in our lives and create what we all crave today -- a work-life balance," he says.


Read more:

Why AI will both increase efficiency and create jobs - CIO

Posted in Ai | Comments Off on Why AI will both increase efficiency and create jobs – CIO

New AI language hides TensorFlow complexity – InfoWorld

Posted: at 12:53 pm

By Paul Krill

Editor at Large, InfoWorld | Apr 17, 2017

Bonsai's Inkling programming language, which makes it easier to build artificial intelligence applications, is moving closer to a 1.0 release.

Part of the Bonsai Platform for AI, Inkling is a proprietary higher-level language that compiles down to Google's open source TensorFlow library for machine intelligence. Inkling is designed to represent AI in terms of what a developer wants to teach the system instead of focusing on low-level mechanics. It abstracts away the dynamic AI algorithms that would otherwise require expertise in machine learning. Declarative and strongly typed, the language resembles a cross between Python and SQL from a syntactic perspective, said Bonsai CEO Mark Hammond.

"Our core focus right now is on enabling enterprises and industrial companies to build control and optimization systems," which could take forms such as advanced robotics, supply chain optimization systems, or oil exploration, Hammond said. A 1.0 release of the language and the Bonsai Platform is targeted for late June. Plans call for eventually promoting development of additional implementations of Inkling. The company itself is focused on making machine learning technologies accessible to developers and engineers without a background in this area but who do have expertise in domain areas where they want to apply the technology.

The Bonsai Platform is currently in an early-access stage of release. Other components of it include the Bonsai Artificial Intelligence Engine; command line and web interfaces; and simulators, generators, and data as training sources.
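
Inkling's own syntax is proprietary and isn't reproduced here. For contrast, here is a rough sketch of the kind of low-level TensorFlow 1.x plumbing (graphs, placeholders, sessions, optimizers) that a higher-level teaching language aims to hide from the developer; the network shape, data, and hyperparameters are invented for illustration.

```python
# Illustrative only: the low-level TensorFlow 1.x mechanics (graphs, sessions,
# placeholders, optimizers) that a higher-level language like Inkling is meant
# to hide. All layer sizes, data, and hyperparameters are arbitrary.
import numpy as np
import tensorflow as tf  # TensorFlow 1.x API

x = tf.placeholder(tf.float32, shape=[None, 4], name="state")
y = tf.placeholder(tf.float32, shape=[None, 2], name="target_action")

hidden = tf.layers.dense(x, 16, activation=tf.nn.relu)
logits = tf.layers.dense(hidden, 2)

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Toy random data stands in for a real simulator or data feed.
    batch_x = np.random.rand(32, 4).astype(np.float32)
    batch_y = np.eye(2)[np.random.randint(0, 2, 32)].astype(np.float32)
    for step in range(100):
        _, current_loss = sess.run([train_op, loss], feed_dict={x: batch_x, y: batch_y})
    print("final loss:", current_loss)
```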

Paul Krill is an editor at large at InfoWorld, whose coverage focuses on application development.


Originally posted here:

New AI language hides TensorFlow complexity - InfoWorld

Posted in Ai | Comments Off on New AI language hides TensorFlow complexity – InfoWorld

Here’s what Google’s AI software can do, and how you can help improve it – Recode

Posted: at 12:53 pm

The future of Google lies in its artificial intelligence technologies, CEO Sundar Pichai has said multiple times.

That means Google's products increasingly rely on machine learning, programming that allows computers to learn on their own.

You'll notice AI at work in Google Assistant, where it helps understand the questions you ask, as well as in Google Photos, which uses AI to identify things like objects, animals and people.

But Google also wants developers to use its open source AI software, including translation and visual recognition, to build new tools. Google has said making software open source allows outsiders to improve the company's technology.

Google publishes A.I. Experiments to show in simple ways the different things Google's AI software can do. It offers interactive demonstrations of Google's open source technology. The site is aimed at promoting the software and encouraging developers to use it, with some of the code for the experiments available on the site.

The tools are easy to play with if you're not a programmer and can offer a window into what Google is teaching computers to do. Sometimes users' interactions with the tools feed back into Google's own experiments and future development.

At least one of the 10 experiments posted on the site, Quick, Draw!, has played a role in Google's AI research. The widget prompts users to draw a specific thing, like a seesaw, in under 20 seconds. While the user draws, the program tries to guess what the user is drawing.

Google used doodles from Quick, Draw! to teach artificial intelligence software how to draw on its own.

Last week, Google added another drawing tool, this one called AutoDraw, which turns doodles into clip art by comparing them against a database of professional drawings.
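
Google hasn't published AutoDraw's matching pipeline, so the following is only a toy illustration of the general "compare a doodle against a library of drawings" idea: a nearest-neighbor lookup over raw pixel bitmaps, with every drawing invented for the example. A production system would use learned features from a neural network rather than raw pixels.

```python
# Hypothetical illustration of the matching idea behind a tool like AutoDraw:
# compare a rasterized doodle against a library of template drawings and return
# the closest one. Real systems use learned features; this uses raw pixels.
import numpy as np

def nearest_template(doodle: np.ndarray, templates: dict) -> str:
    """Return the name of the template bitmap closest to the doodle (L2 distance)."""
    flat = doodle.astype(float).ravel()
    best_name, best_dist = None, float("inf")
    for name, bitmap in templates.items():
        dist = np.linalg.norm(flat - bitmap.astype(float).ravel())
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name

# Toy 8x8 "drawings": a filled square vs. a diagonal line.
square = np.zeros((8, 8)); square[2:6, 2:6] = 1
diagonal = np.eye(8)
templates = {"square": square, "diagonal line": diagonal}

# A user doodle that roughly resembles the square template.
doodle = np.zeros((8, 8)); doodle[2:6, 1:6] = 1
print(nearest_template(doodle, templates))  # -> "square"
```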

Other experiments include:

Continue reading here:

Here's what Google's AI software can do, and how you can help improve it - Recode

Posted in Ai | Comments Off on Here’s what Google’s AI software can do, and how you can help improve it – Recode

How Companies Are Already Using AI – Harvard Business Review

Posted: April 15, 2017 at 5:37 pm

Executive Summary

A survey by Tata Consultancy Services reveals that while some jobs have been lost to machine intelligence, that's not the major way companies are using AI today. Companies are more likely to be using AI to improve computer-to-computer tasks while employing the same number of people. The 170-year-old news service Associated Press offers a case in point. In 2013, demand for quarterly earnings stories was insatiable, and staff reporters could barely keep up. So that year, AP began working with an AI firm to train software to automatically write short earnings news stories. By 2015, AP's AI system was writing 3,700 quarterly earnings stories, 12 times the number written by its business reporters. No AP business journalist lost a job. In fact, AI has freed up the staff to write more in-depth stories on business trends. That's the next trend in AI, and one more businesses should try to emulate.

Every few months, it seems, another study warns that a big slice of the workforce is about to lose their jobs because of artificial intelligence. Four years ago, an Oxford University study predicted 47% of jobs could be automated by 2033. Even the near-term outlook has been quite negative: a 2016 report by the Organization for Economic Cooperation and Development (OECD) said 9% of jobs in the 21 countries that make up its membership could be automated. And in January 2017, McKinsey's research arm estimated AI-driven job losses at 5%. My own firm recently released a survey of 835 large companies (with an average revenue of $20 billion) that predicts a net job loss of between 4% and 7% in key business functions by the year 2020 due to AI.

Yet our research also found that, in the shorter term, these fears may be overblown. The companies we surveyed in 13 manufacturing and service industries in North America, Europe, Asia-Pacific, and Latin America are using AI much more frequently in computer-to-computer activities and much less often to automate human activities. Machine-to-machine transactions are the low-hanging fruit of AI, not people-displacement.

For example, our survey, which asked managers of 13 functions, from sales and marketing to procurement and finance, to indicate whether their departments were using AI in 63 core areas, found AI was used most frequently in detecting and fending off computer security intrusions in the IT department. This task was mentioned by 44% of our respondents. Yet even in this case, we doubt AI is automating the jobs of IT security people out of existence. In fact, we find it's helping such often severely overloaded IT professionals deal with geometrically increasing hacking attempts. AI is making IT security professionals more valuable to their employers, not less.


In fact, although we saw examples of companies using AI in computer-to-computer transactions, such as in recommendation engines that suggest what a customer should buy next or when conducting online securities trading and media buying, we saw that IT was one of the largest adopters of AI. And it wasn't just to detect a hacker's moves in the data center. IT was using AI to resolve employees' tech support problems, automate the work of putting new systems or enhancements into production, and make sure employees used technology from approved vendors. Between 34% and 44% of the global companies surveyed are using AI in their IT departments in these four ways, monitoring huge volumes of machine-to-machine activities.
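
The survey doesn't describe how any particular recommendation engine works. As a generic, minimal illustration of the "suggest what a customer should buy next" idea, the sketch below computes item-to-item cosine similarity over an invented purchase matrix; every name and number in it is hypothetical.

```python
# A minimal, generic item-to-item recommendation sketch (not any surveyed
# company's system): cosine similarity over a toy customer-by-product matrix.
import numpy as np

# Rows = customers, columns = products; 1 means the customer bought the product.
purchases = np.array([
    [1, 1, 0, 0],   # bought A and B
    [1, 1, 1, 0],   # bought A, B, C
    [0, 0, 1, 1],   # bought C and D
    [0, 1, 1, 1],   # bought B, C, D
], dtype=float)
products = ["A", "B", "C", "D"]

# Item-to-item cosine similarity.
norms = np.linalg.norm(purchases, axis=0)
similarity = (purchases.T @ purchases) / np.outer(norms, norms)
np.fill_diagonal(similarity, 0.0)  # ignore self-similarity

def recommend(bought_index: int) -> str:
    """Suggest the product most similar to one the customer already bought."""
    return products[int(np.argmax(similarity[bought_index]))]

print(recommend(products.index("A")))  # customers who bought A often bought B
```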

In stark contrast, very few of the companies we surveyed were using AI to eliminate jobs altogether. For example, only 2% are using artificial intelligence to monitor internal legal compliance, and only 3% to detect procurement fraud (e.g., bribes and kickbacks).

What about the automation of the production line? Whether assembling automobiles or insurance policies, only 7% of manufacturing and service companies are using AI to automate production activities. Similarly, only 8% are using AI to allocate budgets across the company. Just 6% are using AI in pricing.

So where should your company look to find such low-hanging fruit applications of AI that wont kill jobs yet could bestow big benefits? From our survey and best-practice research on companies that have already generated significant returns on their AI investments, we identified three patterns that separate the best from the rest when it comes to AI. All three are about using AI first to improve computer-to-computer (or machine-to-machine) activities before using it to eliminate jobs:

Put AI to work on activities that have an immediate impact on revenue and cost. When Joseph Sirosh joined Amazon.com in 2004, he began seeing the value of AI to reduce fraud, bad debt, and the number of customers who didn't get their goods and suppliers who didn't get their money. By the time he left Amazon in 2013, his group had grown from 35 to more than 1,000 people who used machine learning to make Amazon more operationally efficient and effective. Over the same time period, the company saw a 10-fold increase in revenue.

After joining Microsoft Corporation in 2013 as corporate vice president of the Data Group, Sirosh led the charge in using AI in the company's database, big data, and machine learning offerings. AI wasn't new at Microsoft. For example, the company had brought in a data scientist in 2008 to develop machine learning tools that would improve its search engine, Bing, in a market dominated by Google. Since then, AI has helped Bing more than double its share of the search engine market (to 20%); as of 2015, Bing generated more than $1 billion in revenue every quarter. (That was the year Bing became a profitable business for Microsoft.) Microsoft's use of AI now extends far beyond that, including to its Azure cloud computing service, which puts the company's AI tools in the hands of Azure customers. (Disclosure: Microsoft is a TCS client.)

Look for opportunities in which AI could help you produce more products with the same number of people you have today. The AI experience of the 170-year-old news service Associated Press is a great case in point. AP found in 2013 a literally insatiable demand for quarterly earnings stories, but its staff of 65 business reporters could write only 6% of the earnings stories possible, given America's 5,300 publicly held companies. The earnings news of many small companies thus went unreported on AP's wire services (other than the automatically published tabular data). So that year, AP began working with an AI firm to train software to automatically write short earnings news stories. By 2015, AP's AI system was writing 3,700 quarterly earnings stories, 12 times the number written by its business reporters. This is a machine-to-machine application of AI. The AI software is one machine; the other is the digital data feed that AP gets from a financial information provider (Zacks Investment Research). No AP business journalist lost a job. In fact, AI has freed up the staff to write more in-depth stories on business trends.

Start in the back office, not the front office. You might think companies will get the greatest returns on AI in business functions that touch customers every day (like marketing, sales, and service) or by embedding it in the products they sell to customers (e.g., the self-driving car, the self-cleaning barbeque grill, the self-replenishing refrigerator, etc.). Our research says otherwise. We asked survey participants to estimate their returns on AI in revenue and cost improvements, and then we compared the survey answers of the companies with the greatest improvements (call them AI leaders) to the answers of companies with the smallest improvements (AI followers). Some 51% of our AI leaders predicted that by 2020 AI will have its biggest internal impact on their back-office functions of IT and finance/accounting; only 34% of AI followers said the same thing. Conversely, 43% of AI followers said AIs impact would be greatest in the front-office areas of marketing, sales, and services, yet only 26% of the AI leaders felt it would be there. We believe the leaders have the right idea: Focus your AI initiatives in the back-office, particularly where there are lots of computer-to-computer interactions in IT and finance/accounting.

Computers today are far better at managing other computers and, in general, inanimate objects or digital information than they are at managing human interactions. When companies use AI in this sphere, they don't have to eliminate jobs. Yet the job-destroying applications of AI are what command the headlines: driverless cars and trucks, robotic restaurant order-takers and food preparers, and more.

Make no mistake: Automation and artificial intelligence will eliminate some jobs. Chatbots for customer service have proliferated; robots on the factory floor are real. But we believe companies would be wise to use AI first where their computers already interact. There's plenty of low-hanging fruit there to keep them busy for years.

The rest is here:

How Companies Are Already Using AI - Harvard Business Review

Posted in Ai | Comments Off on How Companies Are Already Using AI – Harvard Business Review

Science proves that training AI to be ‘human’ makes it sexist and … – BGR

Posted: at 5:37 pm

As we get closer and closer to creating artificial intelligence that can think and reason in ways that mimic a human brain, it's becoming increasingly clear that allowing a machine mind to learn from humans is a very bad idea. We've seen examples of it in the past, but a new study on AI biases reveals that not only does training an artificial brain create biases, but those leanings reinforce many societal issues regarding race and gender that plague humanity today.

The study, which was conducted by scientists at Princeton University and published in the journal Science, sought to determine not just if the behavior of an AI exhibited specific biases, but whether the machine learning systems that determine the outcome inherently lean one way or the other. To do this, the team trained an AI using standard datasets that are popular choices for machine learning. These kinds of sets include millions of words and are often gathered from many sources, including the internet.

The AI studies the words, how they're used, and what words they're used in association with, in order to provide natural language responses and answers in a way we can understand. It also, as it turns out, learns some of our more unfortunate quirks.

After training the AI, the scientists tested how it associates various words with others. For example, flower is more likely to be associated with pleasant than weapon is. That, of course, makes perfect sense. However, the trained AI also had a habit of associating typically Caucasian-sounding names with terms it considered pleasant more often than African-American-sounding names. The AI also shied away from pairing female pronouns with mathematics, and instead often associated them with artistic terms.
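
The kind of measurement behind findings like these is an association test over word embeddings: how much closer a word's vector sits to one set of attribute words than to another. The sketch below shows that core calculation, cosine similarity plus a differential association score, on tiny made-up vectors; the real study ran such tests on embeddings trained from large text corpora.

```python
# Illustration of the kind of association measurement used in embedding-bias
# studies: how much closer a target word sits to one attribute set than another.
# The 3-dimensional vectors below are invented for the example; real tests run
# on embeddings trained from large text corpora.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word_vec, pleasant_vecs, unpleasant_vecs):
    """Mean similarity to 'pleasant' words minus mean similarity to 'unpleasant' words."""
    return (np.mean([cosine(word_vec, p) for p in pleasant_vecs])
            - np.mean([cosine(word_vec, u) for u in unpleasant_vecs]))

vectors = {
    "flower":     np.array([0.9, 0.1, 0.0]),
    "weapon":     np.array([0.1, 0.9, 0.0]),
    "pleasant":   np.array([1.0, 0.0, 0.1]),
    "unpleasant": np.array([0.0, 1.0, 0.1]),
}

for word in ("flower", "weapon"):
    score = association(vectors[word], [vectors["pleasant"]], [vectors["unpleasant"]])
    print(word, round(score, 3))  # flower scores positive, weapon negative
```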

This is obviously a huge issue since, in the name of creating AI that sounds and behaves more human, some of the standard training materials being used carry with them some of the worst parts of us. It's a very interesting problem, and one that you can bet will get a lot of attention now that evidence seems to be mounting.

View original post here:

Science proves that training AI to be 'human' makes it sexist and ... - BGR

Posted in Ai | Comments Off on Science proves that training AI to be ‘human’ makes it sexist and … – BGR

The 3 things that make AI unlike any other technology – Huffington Post

Posted: at 5:37 pm

Artificial intelligence has been around for sixty years, and over that long period it has had many ups and downs, but mostly downs. The AI winters, as they are commonly known, were caused by the insurmountable obstacles that declarative programming presented when building the knowledge base of an intelligent system. Hand-coding a complete description of the world proved to be an impossible task. Systems were limited by the knowledge that could be coded into them, and therefore unable to cope with the unexpected. And then, sometime around the late 1990s, the unexpected happened: a mostly discredited notion in AI, that one should emulate the human brain and its neural intricacies, came back into vogue. This resurrection of old ideas coincided with the cost of parallel processing dropping significantly, thanks to the graphics processing units (GPUs) used to render video game graphics. The AI everyone is talking about is this new kind of brain-like AI, where you do not code the world; instead you teach the machine how to learn about the world.

Brain-like AI, in the guise of machine learning or deep learning systems, is at the heart of the debate on the future of work and humans, as intelligent machines begin to take over many cognitive tasks that used to require human intelligence. This new technology is truly unlike anything else we have ever invented. As we ponder the future, and how we will live and collaborate with those machines, it is important to pin down what is particular and unique about AI. I would like to suggest that there are three unique characteristics of AI that we should consider: self-improvement, prescience, and autonomy.

Let's look first at AI's ability to self-improve. No other technological artefact has ever had such an ability before. For every machine that humans created in the past, its performance was determined in advance; with wear and tear that performance degraded over time, and regular human maintenance was needed just to keep it level. Improvements only occurred when humans replaced parts of the machine or produced new, improved versions. In AI we have machines that do not need humans in order to improve their performance. With machine learning, AI systems become better every time they ingest and process a new set of data.
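
As one deliberately simple, concrete reading of "becoming better with every new set of data", the sketch below trains a scikit-learn model incrementally with partial_fit, so its accuracy on a held-out set can be checked after each new batch arrives. The synthetic dataset and the choice of classifier are illustrative, not tied to any system mentioned here.

```python
# A simple illustration of a model that improves as it ingests new data,
# using scikit-learn's incremental (partial_fit) training. The synthetic
# dataset and classifier choice are illustrative, not a specific product.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = SGDClassifier(random_state=0)
classes = np.unique(y)

# Feed the training data in successive batches, as if it arrived over time.
for batch_start in range(0, len(X_train), 500):
    X_batch = X_train[batch_start:batch_start + 500]
    y_batch = y_train[batch_start:batch_start + 500]
    model.partial_fit(X_batch, y_batch, classes=classes)
    print(f"after {batch_start + len(X_batch)} examples: "
          f"test accuracy = {model.score(X_test, y_test):.3f}")
```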

The second unique characteristic of AI is prescience, its ability to predict. This ability is sometimes based on mathematical approaches that pre-existed the advent of hardware capable of executing them. But the fact that those predictive mathematical algorithms have become executable in machines is a great achievement in itself. More sophisticated approaches, such as reinforcement learning combined with convolutional neural networks, are delivering systems capable of strategizing in complex situations. Just think of what AlphaGo achieved a few months ago, beating a highly skilled and experienced human in the most difficult game of strategy ever invented by humans. Prescience is the prerequisite for strategy. In the biological world only highly advanced predators are capable of strategies that require prediction. AI's prescience furnishes this technology with the ability to adapt its behaviour to unexpected events so that it achieves a final goal. No other technology is capable of such outcomes.
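
As a toy illustration of prediction in the service of strategy (far simpler than anything in AlphaGo), the sketch below runs tabular Q-learning on an invented five-cell corridor: the agent learns to predict the long-run value of each action and adapts its behaviour until it walks straight to the goal. All states, rewards, and hyperparameters are made up for the example.

```python
# A tiny tabular Q-learning sketch: the agent learns to predict the long-run
# value of each action and adapts its behaviour to reach a goal state. The
# 5-cell corridor world and hyperparameters are invented for illustration.
import random

N_STATES, GOAL = 5, 4            # corridor cells 0..4, reward at the right end
ACTIONS = (-1, +1)               # move left or right
alpha, gamma, epsilon = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for episode in range(200):
    state = 0
    while state != GOAL:
        # Epsilon-greedy: mostly exploit the current value predictions.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Q-learning update: nudge the prediction toward reward + discounted future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy policy walks straight toward the goal.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])  # [1, 1, 1, 1]
```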

Finally, and not least because of self-improvement and prescience, AI is also capable of autonomy. Autonomy means that the system can take decisions about its future actions based on its internal states, which change according to perceived sensory data. This makes AI systems similar to biological systems. The military is already testing a number of Lethal Autonomous Weapon Systems (LAWS) that can perform complex combat missions without the need for human guidance. Many of the AI systems currently in use require a human in the loop, mostly because of the need for clean and tagged data sets for supervised learning. But AI systems are becoming increasingly independent. Soon they will be able to explore the world for themselves, purely out of curiosity.

In combination, the three characteristics of AI point to a logical eventuality: a system that can learn, and therefore self-improve, and that is also prescient, will eventually maximize its autonomy. And this is why it is absolutely necessary to think about AI ethics now. The autonomous, intelligent machines of the future must have a code of ethics that limits the autonomy of their decisions.

Visit link:

The 3 things that make AI unlike any other technology - Huffington Post

Posted in Ai | Comments Off on The 3 things that make AI unlike any other technology – Huffington Post

AI Computer Beats Human Poker Players by Nearly $800000 – Voice of America

Posted: at 5:37 pm

An artificial intelligence, or AI, program has again beaten a group of human poker players, winning $792,000 in virtual money.

The AI program won during a recent competition against experienced poker players in China. More than 36,000 hands were played during a five-day competition on China's Hainan Island.

The computer went up against a team of six human players led by Alan Du, a winner in the 2016 World Series of Poker tournament. The human team said it attempted to play against the AI system like a machine, rather than using traditional human methods.

The winning system is called Lengpudashi, or "cold poker master." It was developed by engineers at America's Carnegie Mellon University. A previous version of the AI system beat four top poker players in the world in a U.S. competition last January.

Results of the competition between Lengpudashi AI and top poker players in Hainan, China. (Sinovation Ventures)

Artificial intelligence is the capability of a computer to learn to perform human-like operations and make decisions. This can be achieved by putting large amounts of data into a computer for processing.

Algorithms are also used to help computers learn through experiences the same way humans do. This kind of AI technology is used in machine translation systems like Google Translate.

Last year, Google's AI system AlphaGo beat a Korean champion in the ancient Chinese board game Go.

The two wins show how AI development has greatly increased in the ability to succeed against humans. But poker differs from Go in that a player keeps his cards hidden from the opponent. Poker players also use techniques to trick opponents into thinking they have a better hand than they actually do. This is one area where a computer can find it hard to match human thinking and actions.

But a co-creator of the Lengpudashi program, Noam Brown, said the computer even performed well in this part of the competition.

"People think that bluffing is very human -- it turns out that's not true," said Brown, a computer scientist and student. "A computer can learn from experience that if it has a weak hand and it bluffs, it can make more money," Brown told Bloomberg.
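
Lengpudashi's actual engine relies on counterfactual regret minimization and related techniques that go well beyond a short example. As a toy stand-in for the underlying idea, regret-driven self-play converging to a balanced mixed strategy, the sketch below runs plain regret matching on rock-paper-scissors; the same principle, applied to vastly larger games, is what lets a poker program learn to bluff at sensible frequencies.

```python
# Toy illustration of regret-driven self-play, the family of methods behind
# poker AIs like Lengpudashi (which uses far more elaborate counterfactual
# regret minimization). Plain regret matching on rock-paper-scissors converges
# toward the balanced (uniform) mixed strategy.
import random

ACTIONS = ("rock", "paper", "scissors")
# PAYOFF[a][b]: payoff to the player choosing a when the opponent chooses b.
PAYOFF = {
    "rock":     {"rock": 0, "paper": -1, "scissors": 1},
    "paper":    {"rock": 1, "paper": 0, "scissors": -1},
    "scissors": {"rock": -1, "paper": 1, "scissors": 0},
}

def strategy_from_regret(regret):
    """Play each action in proportion to its positive cumulative regret."""
    positive = [max(regret[a], 0.0) for a in ACTIONS]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1 / 3] * 3

random.seed(0)
regret = {p: {a: 0.0 for a in ACTIONS} for p in (0, 1)}
strategy_sum = {p: {a: 0.0 for a in ACTIONS} for p in (0, 1)}

for _ in range(20000):
    strat = {p: strategy_from_regret(regret[p]) for p in (0, 1)}
    chosen = {p: random.choices(ACTIONS, weights=strat[p])[0] for p in (0, 1)}
    for p in (0, 1):
        opponent = chosen[1 - p]
        realized = PAYOFF[chosen[p]][opponent]
        for i, a in enumerate(ACTIONS):
            # Regret: how much better action a would have done than what was played.
            regret[p][a] += PAYOFF[a][opponent] - realized
            strategy_sum[p][a] += strat[p][i]

average_strategy = [strategy_sum[0][a] / 20000 for a in ACTIONS]
print([round(x, 2) for x in average_strategy])  # approaches [0.33, 0.33, 0.33]
```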

Noam Brown, a Ph.D. candidate at Carnegie Mellon University and co-creator of the Lengpudashi AI system, confers with Alan Du, head of the human team in the competition. (Sinovation Ventures)

Brown and Carnegie Mellon professor Tuomas Sandholm won $290,000 in the competition. The money will go to Strategic Machine Inc., a company started by the two to develop AI.

The company is involved in many other areas besides games. These include AI solutions that can be applied to business, negotiation, cybersecurity, political campaigns, and medical treatment.

I'm Bryan Lynn.

Bryan Lynn wrote this story for VOA Learning English. Hai Do was the editor.

We want to hear from you. What are your thoughts on AI? Have you had any personal experiences with it? Write to us in the Comments section, and visit our Facebook page.

________________________________________________________________

poker n. card game in which players bet money on the value of their cards

virtual adj. representing something without actually being it

hand n. the cards held by a player in a card game

master n. person who becomes very skilled at something

algorithm n. set of steps that are followed in order to solve a mathematical problem or to complete a computer process

card n. small piece of stiff paper used to play games

bluff v. pretend to do or know something to trick someone into doing what you want

Read the rest here:

AI Computer Beats Human Poker Players by Nearly $800000 - Voice of America

Posted in Ai | Comments Off on AI Computer Beats Human Poker Players by Nearly $800000 – Voice of America

As Boston Children’s launches clinical decision support challenge, a warning about AI hype – MobiHealthNews

Posted: April 13, 2017 at 11:49 pm

Boston Children's Hospital held its second Innovation and Digital Health Accelerator Innovators Showcase yesterday, inviting more than 20 startups working with the hospital in some capacity to network and share their work with each other.

At the event, the hospital also kicked off its newest open innovation challenge, which focuses specifically on clinical decision support.

"The idea there is we're sourcing ideas from frontline staff or researchers and others on the administrative side to find new ideas where we can use technology to improve clinical decisions, be it through machine learning, AI, image recognition, or even operational improvements," IDHA Innovation Lead Matt Murphy said.

The hospital will accept applications through April 28, and one or two winners will get $50,000 in grant funding and other support from the hospital.

To get the wheels turning about ways that AI could help improve clinical decision support, Boston Children's invited some doctors doing research in that area to speak to the assembled crowd.

Dr. Garry Steil showed off some results of a project to use artificial intelligence to help people with Type 1 diabetes predict their blood sugar spikes and take insulin accordingly. Steil's system has already shown some previously unknown correlations between exercise, food, and sleep that could help people with diabetes stay on track.

The other speaker, Dr. Doug Perrin, spoke more broadly about artificial intelligence. His biggest warning was to avoid overhyping AI, which in 2017 is just a sophisticated form of computing, not the creation of an actual artificial mind.

Perrin gave an interesting history lesson about AI. He said that some of the first attempts at AI were in 1957 and revolved around a computing element called a Perceptron. So much hype was built around the Perceptron that when Marvin Minsky mathematically proved its limitations in a 1969 book, it led to a dramatic fall-off in all AI research.

"This kicked off what we call the AI Winter," Perrin said. "If you talk to anyone who was doing artificial intelligence in the '80s and '90s, this term comes up. It was an era where you could not get funded if you said you worked on AI, or if any of the words you were using to describe your research looked anything like strong AI. So we came up with other terms: informatics, machine learning, intelligent systems, intelligent agents, and computational intelligence."

The AI Winter ended only recently, when computers became fast enough and storage became cheap enough that the results of the few AI experiments that did get funding were impossible to ignore.

Fighting AI hype is the best way to avoid another winter, Perrin said.

For medical care specifically, Perrin cautioned that AI could only really be used to support clinical decisions, not to take them over, because the cost of failure is so high.

"This is unlikely to change," he said. "These methods mostly rely on probabilistic approaches, so it's going to be mostly right most of the time. But if it failing some of the time is going to be terrible, you don't want to use these methods in an unsupervised fashion. The best approach is collaboration on decision support, not on making the decision. Radiologists should not be replaced. Radiologists should be using these things."
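
Purely as an illustration of that "support, don't replace" point, and not of any system discussed at the event, the sketch below shows a probabilistic classifier that acts only when its confidence clears a threshold and defers everything else to a human reviewer. The dataset, model, and 0.9 threshold are arbitrary choices.

```python
# Illustration of "decision support, not decision making": a probabilistic
# model acts only when its confidence clears a threshold and defers everything
# else to a human reviewer. Dataset, model, and the 0.9 threshold are arbitrary.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
confidence = model.predict_proba(X_test).max(axis=1)

THRESHOLD = 0.9
confident = confidence >= THRESHOLD

auto_accuracy = model.score(X_test[confident], y_test[confident])
print(f"auto-handled: {confident.mean():.0%} of cases, accuracy {auto_accuracy:.3f}")
print(f"deferred to a human reviewer: {(~confident).mean():.0%} of cases")
```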

See more here:

As Boston Children's launches clinical decision support challenge, a warning about AI hype - MobiHealthNews

Posted in Ai | Comments Off on As Boston Children’s launches clinical decision support challenge, a warning about AI hype – MobiHealthNews
