
Category Archives: Artificial Intelligence

How Are Retailers Using Artificial Intelligence? – eMarketer

Posted: February 28, 2017 at 8:08 pm

Artificial intelligence (AI) adoption is still limited, but among retail marketers who do employ the technology, many use it in their search or recommendation engines.

A January 2017 survey from Sailthru, which aggregates and analyzes user data sets for companies to create personalized customer experiences, polled retail marketers in Canada, the UK and the US about how they use AI to accomplish their goals. Respondents were primarily from ecommerce retailers, but the sample also included retailers with both ecommerce and physical retail locations.

The largest share of retail marketers using AI (over a third) said they use it in search efforts. Nearly as many said they do so in recommendation engines for their products or content.

Meanwhile, more than a quarter reported using AI in their programmatic advertising efforts, and another 13% in chatbot efforts.

Overall, marketers, not just retail marketers, use different forms of AI.

A separate survey from Narrative Science found that among the 58% of US business executives who already had AI-powered solutions deployed at their company, nearly a third cited voice recognition and voice response solutions as the AI technology they use most.

Just under a quarter (24%) of respondents used AI primarily for machine learning, and 15% used it for virtual personal assistants.

Rimma Kats




Britain banks on robots, artificial intelligence to boost growth – Information Management

Posted: at 8:08 pm

(Bloomberg) -- Britain is betting that the rise of the machines will boost the economy as the country exits the European Union.

As part of its strategy to champion specific industries, the U.K. government said in a statement on Sunday that it would invest 17.3 million pounds ($21.6 million) in university research on robotics and artificial intelligence. The government cited an estimate from consultancy Accenture that AI could add 654 billion pounds to the U.K. economy by 2035.

"We are already pioneers in today's artificial intelligence revolution and the digital strategy will build on our strengths to make sure U.K.-based scientists, researchers and entrepreneurs continue to be at the forefront," Culture Secretary Karen Bradley said in the statement, which referenced AI's contribution to smartphone voice and touch recognition technologies.

The announcement is part of U.K. Prime Minister Theresa May's plan to identify industries worth supporting to help transform the economy and boost productivity. The government has said it intends to target areas where it thinks the U.K. could excel in the future, including biotechnology and mobile networking.

The U.K.'s digital strategy proposal, set to be unveiled on Wednesday, also includes a review of AI to determine how the government and industry can provide further support.

"Investment in robotics and artificial intelligence will help make our economy more competitive, build on our world-leading reputation in these cutting-edge sectors and help us create new products, develop more innovative services and establish better ways of doing business," Business Secretary Greg Clark said in the statement.



Should artificial intelligence be used in science publishing? – PRI

Posted: at 8:08 pm

Advances in automation technology mean that robots and artificial intelligence programs are capable of performing an ever-greater share of our work, including collecting and analyzing data. For many people, automated colleagues are still just office chatter, not reality, but the technology is already disrupting industries once thought to be just for humans. Case in point: science publishing.

Increasingly, publishers are experimenting with using artificial intelligence in the peer review process for scientific papers. In a recent op-ed for Wired, one editor described how computer programs can handle tasks like suggesting reviewers for a paper, checking an author's conflicts of interest, and sending decision letters.

In 2014 alone, an estimated 2.5 million scientific articles were published in about 28,000 journals (and that's just in English). Given the glut in the industry, artificial intelligence could be a valuable asset to publishers: The burgeoning technology can already provide tough checks for plagiarism and fraudulent data, and address the problem of reviewer bias. But ultimately, do we want artificial intelligence evaluating what new research does and doesn't make the cut for publication?

The stakes are high: Adam Marcus, co-founder of the blog Retraction Watch, has two words for why peer review is so important to science: Fake news.

Peer review is science's version of a filter for fake news, he says. It's the way that journals try to weed out studies that might not be methodologically sound, or they might have results that could be explained by hypotheses other than what the researchers advanced.

The way Marcus sees it, artificial intelligence can't necessarily do anything better than humans can; it can just do it faster and in greater volumes. He cites one system, called statcheck, which was developed by researchers to quickly detect errors in statistical values.

They can do, according to the researchers, in a nanosecond what a person might take 10 minutes to do, he says. So obviously, that could be very important for analyzing vast numbers of papers. But as it trawls through statistics, the statcheck system can also turn up a lot of noise, or false positives, Marcus adds.
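The real statcheck parses APA-style results and recomputes p-values for several test types; as a rough illustration of the underlying idea only, a recomputation check for a single two-sided z statistic might look like the following (the function names and tolerance are hypothetical, not statcheck's actual interface):

```python
import math

def z_test_p_value(z):
    """Two-sided p-value for a z statistic, via the normal CDF (math.erf)."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def check_reported(z, reported_p, tol=0.005):
    """Recompute the p-value and flag a report that disagrees with it."""
    recomputed = z_test_p_value(z)
    return abs(recomputed - reported_p) <= tol, recomputed

# Consistent report: z = 1.96 corresponds to p of about 0.05
ok, _ = check_reported(1.96, 0.05)
# Inconsistent report: z = 1.96 with a claimed p of 0.20 gets flagged
flagged_ok, _ = check_reported(1.96, 0.20)
```

The tolerance is where the "noise" Marcus mentions comes from: too tight and rounding in the paper triggers false positives, too loose and real errors slip through.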

Another area where artificial intelligence could do a lot of good, Marcus says, is in combating plagiarism. "Many publishers, in fact every reputable publisher, should be using right now plagiarism detection software to analyze manuscripts that get submitted." At their most effective, these identify passages in papers that have similarity with previously published passages.
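Similarity scoring of the kind such tools perform is often built on overlapping word n-grams. The sketch below is a minimal illustration of that idea, not any vendor's product; all names and sample passages are invented:

```python
def ngrams(text, n=3):
    """The set of word n-grams in a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a, b, n=3):
    """Jaccard overlap of n-gram sets: 1.0 means identical phrasing."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

source = "the mitochondria is the powerhouse of the cell in most organisms"
near_copy = "the mitochondria is the powerhouse of the cell in many organisms"
unrelated = "stock markets rallied sharply on news of the interest rate cut"

# The near-copy shares most trigrams with the source; the unrelated
# passage shares none, so only the first would be flagged for review.
```

Production systems index n-grams over millions of published passages, but the scoring principle is the same.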

But in the case of systems like statcheck and anti-plagiarism software, Marcus says it's crucial that there's still human oversight, to make sure the program is turning up legitimate red flags. In other words, we need humans to ensure that algorithms aren't mistakenly keeping accurate science from being published.

Despite his caution, Marcus thinks programs can and should be deployed to keep sloppy or fraudulent science out of print. Researchers recently pored over images published in over 20,000 biomedical research papers, and found that about one in 25 of them contained inappropriately duplicated images.

I'd like to see that every manuscript that gets submitted be run through a plagiarism detection software system, [and] a robust image detection software system, Marcus says. In other words, something that looks for duplicated images or fabricated images.

Such technology, he says, is already in the works. And then [we'd] have some sort of statcheck-like program that looks for squishy data.
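Duplicate-image screening of the sort Marcus describes commonly builds on perceptual hashing, which tolerates uniform edits such as brightening. This toy sketch on tiny pixel grids is illustrative only, not any publisher's actual pipeline:

```python
def dhash(pixels):
    """Difference hash: one bit per horizontal neighbor comparison."""
    bits = []
    for row in pixels:
        bits.extend(int(a < b) for a, b in zip(row, row[1:]))
    return tuple(bits)

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[10, 60, 20, 80],
            [5, 5, 90, 90]]
# A uniformly brightened re-save of the same image
brightened = [[v + 15 for v in row] for row in original]
# A genuinely different image
different = [[80, 20, 60, 10],
             [90, 90, 5, 5]]

# The brightened copy hashes identically (distance 0); the different
# image lands at a nonzero distance, so only the true duplicate is flagged.
```

Because the hash encodes relative brightness rather than absolute pixel values, a re-saved or re-exposed duplicate still matches, which is exactly what catches inappropriately reused figures.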

This article is based on an interview that aired on PRI's Science Friday.



Why 2017 Is The Year Of Artificial Intelligence – Forbes

Posted: at 6:18 am

A recent acceleration of innovation in artificial intelligence (AI) has made it a hot topic in boardrooms, government and the media. But it is still early, and everyone seems to have a different view of what AI actually is. Having investigated the ...



Artificial intelligence will change America. Here’s how. – Washington Post

Posted: at 6:18 am

By Jonathan Aberman, February 27 at 7:00 AM

The term artificial intelligence is widely used, but less understood. As we see it permeate our everyday lives, we should deal with its inevitable exponential growth and learn to embrace it before tremendous economic and social changes overwhelm us.

Part of the confusion about artificial intelligence is in the name itself. There is a tendency to think about AI as an endpoint: the creation of self-aware beings with consciousness that exist thanks to software. This somewhat disquieting concept weighs heavily; what makes us human when software can think, too? It also distracts us from the tremendous progress that has been made in developing the software that ultimately drives AI: machine learning.

Machine learning allows software to mimic and then perform tasks that were until very recently carried out exclusively by humans. Simply put, software can now substitute for workers' knowledge to a level where many jobs can be done as well or even better by software. This reality makes a conversation about when software will acquire consciousness somewhat superfluous.

When you combine the explosion in competency of machine learning with a continued development of hardware that mimics human action (think robots), our society is headed into a perfect storm where both physical labor and knowledge labor are equally under threat.

The trends are here, whether through the coming of autonomous taxis or medical diagnostics tools evaluating your well-being. There is no reason to expect this shift towards replacement to slow as machine learning applications find their way into more parts of our economy.

The invention of the steam engine and the industrialization that followed may provide a useful analogue to the challenges our society faces today. Steam power first substituted for the brute force of animals and eventually moved much human labor away from growing crops and into cities. Subsequent technological waves such as coal power, electricity and computerization continued to change the very nature of work. Yet, through each wave, the opportunity for citizens to apply their labor persisted. Humans were the masters of technology and found new ways to earn income and worth through the jobs and roles that emerged as new technologies were applied.

Here's the problem: I am not yet seeing a similar analogy for human workers when faced with machine learning and AI. Where are humans to go when most things they do can be better performed by software and machinery? What happens when human workers are not users of technology in their work but instead replaced by it entirely? I will admit to wanting to have an answer, but not yet finding one.

Some say our economy will adjust, and we will find ways to engage in commerce that relies on human labor. Others are less confident and predict a continued erosion of labor as we know it, leading to widespread unemployment and social unrest.

Other big questions raised by AI include what our expectations of privacy should be when machine learning needs our personal data to be efficient. Where do we draw the ethical lines when software must choose between two people's lives? How will a society capable of satisfying such narrow individual needs maintain a unified culture and look out for the common good?

The potential and promise of AI require a discussion free of ideological rigidity. Whether change occurs as our society makes those conscious choices or while we are otherwise distracted, the evolution is upon us regardless.

Jonathan Aberman is a business owner, entrepreneur and founder of Tandem NSI, a national community that connects innovators to government agencies. He is host of What's Working in Washington on WFED, a program that highlights business and innovation, and he lectures at the University of Maryland's Robert H. Smith School of Business.



Artificial Intelligence: Removing The Human From Fintech – Forbes

Posted: at 6:18 am


As I'm sure many in the technology industry have thought today, there should have been a way to avoid the Oscars Envelopegate. But, is artificial intelligence the answer to all of our human error problems? A recent Accenture report found that the ...



4 challenges Artificial Intelligence must address – The Next Web – TNW

Posted: at 6:18 am

If news, polls and investment figures are any indication, Artificial Intelligence and Machine Learning will soon become an inherent part of everything we do in our daily lives.

Backing up the argument are a slew of innovations and breakthroughs that have brought the power and efficiency of AI into various fields including medicine, shopping, finance, news, fighting crime and more.

But the explosion of AI has also highlighted the fact that while machines will plug some of the holes human-led efforts leave behind, they will bring disruptive changes and give rise to new problems that can challenge the economic, legal and ethical fabric of our societies.

Here are four issues that Artificial Intelligence companies need to address as the technology evolves and invades even more domains.

Automation has been eating away at manufacturing jobs for decades. Huge leaps in AI have accelerated this process dramatically and propagated it to other domains previously imagined to remain indefinitely in the monopoly of human intelligence.

From driving trucks to writing news and performing accounting tasks, AI algorithms are threatening middle class jobs like never before. They might set their eyes on other areas as well, such as replacing doctors, lawyers or even the president.

It's also true that the AI revolution will create plenty of new data science, machine learning, engineering and IT job positions to develop and maintain the systems and software that will be running those AI algorithms. But the problem is that, for the most part, the people who are losing their jobs don't have the skill sets to fill the vacant posts, creating an expanding vacuum of tech talent and a growing deluge of unemployed and disenchanted people. Some tech leaders are even getting ready for the day the pitchforks come knocking at their doors.

In order to prevent things from running out of control, the tech industry has a responsibility to help society adapt to the major shift that is overtaking the socio-economic landscape and smoothly transition toward a future where robots will occupy more and more jobs.

Teaching new tech skills to people who are losing or might lose their jobs to AI in the future can complement the efforts. In tandem, tech companies can employ rising trends such as cognitive computing and natural language generation and processing to help break down the complexity of tasks and lower the bar for entry into tech jobs, making them available to more people.

In the long run governments and corporations must consider initiatives such as Universal Basic Income (UBI), unconditional monthly or yearly payments to all citizens, as we slowly inch toward the day where all work will be carried out by robots.

As has been shown on several occasions in recent years, AI can be just as biased as humans, or even more so.

Machine Learning, the popular branch of AI that is behind face recognition algorithms, product suggestions, advertising engines, and much more, depends on data to train and hone its algorithms.

The problem is, if the information trainers feed to these algorithms is unbalanced, the system will eventually adopt the covert and overt biases that those data sets contain. And at present, the AI industry is suffering from diversity troubles that some label the "white guy problem": the field is largely dominated by white males.

This is the reason why an AI-judged beauty contest turned out to award mostly white candidates, a name-ranking algorithm ended up favoring white-sounding names, and advertising algorithms preferred to show high-paying job ads to male visitors.
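The mechanism behind such skewed outcomes is easy to show in miniature: a model fit to an imbalanced data set can look accurate while never choosing the under-represented outcome. A deliberately degenerate sketch (all labels and counts are invented):

```python
from collections import Counter

def train_majority(labels):
    """A degenerate 'model': always predict the most common training label."""
    return Counter(labels).most_common(1)[0][0]

# An imbalanced training set: 95 'reject' outcomes, 5 'accept'
train_labels = ["reject"] * 95 + ["accept"] * 5
model = train_majority(train_labels)

# The model scores 95% accuracy on similarly skewed data, yet it never
# once chooses the under-represented outcome: the bias is baked in.
accuracy = sum(model == y for y in train_labels) / len(train_labels)
```

Real systems fail more subtly than this majority-vote caricature, but the lesson carries over: a high headline accuracy says nothing about how the minority group in the data is treated.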

Another problem that caused much controversy in the past year was the filter bubble phenomenon seen on Facebook and other social media, which tailored content to the biases and preferences of users, effectively shutting them off from other viewpoints and realities that were out there.

While for the moment many of the cases can be shrugged off as innocent mistakes and humorous flaws, some major changes need to be made if AI is to be put in charge of more critical tasks, such as determining the fate of a defendant in court. Safeguards also have to be put in place to prevent any single organization or company from skewing the behavior of an ML algorithm in its favor by manipulating the data.

This can be achieved by promoting transparency and openness in algorithmic datasets. Shared data repositories that are not owned by any single entity and can be vetted and audited by independent bodies can help move toward this goal.

Who's to blame when software or hardware malfunctions? Before AI, it was relatively easy to determine whether an incident was the result of the actions of a user, developer or manufacturer.

But in the era of AI-driven technologies, the lines are not as clear-cut.

ML algorithms figure out for themselves how to react to events, and while data gives them context, not even the developers of those algorithms can explain every single scenario and decision that their product makes.

This can become an issue when AI algorithms start making critical decisions such as when a self-driving car has to choose between the life of a passenger and a pedestrian.

Extrapolating from that, there are many other conceivable scenarios where determining culpability and accountability will become difficult, such as when an AI-driven drug infusion system or robotic surgery machine harms a patient.

When the boundaries of responsibility are blurred between the user, developer, and data trainer, every involved party can lay the blame on someone else. Therefore, new regulations must be put in place to clearly predict and address legal issues that will surround AI in the near future.

AI and ML feed on data, reams of it, and companies that center their business around the technology will grow a penchant for collecting user data, with or without the latter's consent, in order to make their services more targeted and efficient.

In the hunt for more and more data, companies may trek into uncharted territory and cross privacy boundaries. Such was the case of a retail store that found out about a teenage girl's secret pregnancy, and the more recent case of the UK National Health Service's patient data-sharing program with Google's DeepMind, a move that was supposedly aimed at improving disease prediction.

Theres also the issue of bad actors, of both governmental and non-governmental nature, that might put AI and ML to ill use. A very effective Russian face recognition app rolled out last year proved to be a potential tool for oppressive regimes seeking to identify and crack down on dissidents and protestors. Another ML algorithm proved to be effective at peeking behind masked images and blurred pictures.

Other implementations of AI and ML are making it possible to impersonate people by imitating their handwriting, voice and conversation style, an unprecedented power that can come in handy in a number of dark scenarios.

Unless companies developing and using AI technology regulate their information collection and sharing practices and take the necessary steps to anonymize and protect user data, they'll end up causing more harm than good to users. The use and availability of the technology must also be revised and regulated in a way that prevents or minimizes ill use.

Users should also become more sensible about what they share with companies or post on the Internet. We're living in an era where privacy is becoming a commodity, and AI isn't making it any better.

There are benefits and dark sides to every disruptive technology, and AI is no exception to the rule. What is important is that we identify the challenges that lie before us and acknowledge our responsibility to make sure that we can take full advantage of the benefits while minimizing the tradeoffs.

The robots are coming. Lets make sure they come in peace.

This post is part of our contributor series. It is written and published independently of TNW.




Honda Chases Silicon Valley With New Artificial-Intelligence Center – Wall Street Journal (subscription)

Posted: at 6:18 am


TOKYO: Honda Motor Co. is creating a research arm focused on artificial intelligence, an area where one of its American advisers says it risks falling behind. R&D Center X will open in Tokyo in April as a software-focused counterpart to Honda's ...



‘Artificial intelligence is the next big thing’ – The Hindu – The Hindu

Posted: at 6:18 am


Spencer Kelly, presenter of the BBC's Click technology programme, discusses Indian jugaad, South Korea's jellyfish-hunting robots, and how self-driving cars ...




Christianity is engaging Artificial Intelligence, but in the right way – Crux: Covering all things Catholic

Posted: at 6:18 am

In a recent essay in The Atlantic, Jonathan Merritt laments that theologians and Christian leaders, including Pope Francis, have not addressed what he claims will be the greatest challenge that Christianity has ever faced: Artificial Intelligence, or AI.

In his view, intelligent machines threaten to overturn many Christian beliefs, a trial that theologians seem blind to because they're stuck rehashing old questions instead of focusing on the coming ones.

Such a criticism would be devastating if true, but is it?

A fuller reading of Pope Francis's work suggests that he is actually engaging the issues with AI that most directly affect the contemporary Church and society. Before I get to that, though, it's necessary to give Merritt's argument its due. Most theologians are indeed not addressing the specific aspects of AI that he considers essential, but this is a wise choice on their part.

First, it's important to note that rehashing old questions, or what Catholics like to call the development of tradition, provides many insights into these questions. For example, Merritt claims that Christians have mostly understood the soul to be a uniquely human element, an internal and eternal component that animates our spiritual sides.

This is not an accurate characterization.

Drawing upon the heritage of Greek philosophy, most theologians have understood the soul to be what makes a specific living thing what it is. It is the principle of growth and development in all living things, movements and sensation in animals, and rationality in humans.

Therefore, animals have souls, plants have souls, and an AI that could think and manipulate the world around it would have to have something like a soul.

Merritt qualifies himself in the next sentence to refer to the image of God that each person possesses in her soul. Yet again, major figures in the tradition such as Thomas Aquinas do not see the image of God restricted to humans.

For him (some other theologians have very different interpretations), we image God primarily in our potential for reason and free will, so any being with reason and free will would possess that image: angels, for Aquinas; rational aliens, for Francis; even true AI, if it existed.

Of course, this reason is not mere instrumental reason, but one that understands purposes, meaning, and the moral law.

Still, based on Merritt's argument one might ask: how can such spiritual faculties arise out of silicon circuits (or nanotubes, or any other material)? While a problem, it is no more difficult, nor much different, than the question of how the spiritual arises from lowly flesh, a question that thinkers have wrestled with throughout the Western tradition.

Theologians struggle with this problem in ordinary human development: how and when new life gains a soul is a central theological question, for obvious practical reasons. The predominant answer in the Catholic tradition is that, in the process of procreation in which human parents cooperate, God creates an individual spiritual soul for each human body. Something like this framework could be used to think about AI.

It is true that some issues are more difficult, like how AI could be redeemed.

Christianity argues for God's special care for humanity, with the second person of the Trinity assuming a human nature in the Incarnation. This doctrine raises questions about Christ's relation to any possible AI, but ones not fundamentally different to questions of how Christ redeems all of nonhuman creation, questions that have become ever more pressing given environmental devastation.

Given these resources, why haven't more theologians directly addressed AI?

First, I would guess that most theologians are less optimistic than the ones Merritt quotes about the actual possibility of true AI. Beyond the sixty years of unfulfilled promises that AI is just around the corner, AI theorists have not addressed philosophical concerns as to whether their programs can have consciousness and grasp meaning.

In his Chinese Room argument, John Searle pointed out that while computer programs manipulate symbols (syntax), allowing them to imitate behavior, they cannot really grasp the meaning (semantics) of the things they manipulate, which would be necessary for consciousness.

A second source of skepticism about engaging AI is that, along with many contemporary non-Christian thinkers, theologians recognize that making an AI is an extremely bad idea. If a machine has the free choice necessary for true AI, then it has the possibility of sin, leading to large downside risks, such as human extinction.

This concern about risk raises the final problem with Merritt's analysis: if one reads Francis carefully, one finds that he addresses the problems of today's limited AI that are harming people right now, rather than future speculative possibilities.

Laudato Si', Francis's recent encyclical, is just as much about technology in human ecology as it is about the natural environment.

He addresses contemporary mental pollution and isolation, reflecting concerns in other papal addresses over people only receiving information that confirms their opinions, problems that arise in part due to AI algorithms reflecting our opinions back to us in search results and news feeds, a solipsism whose political effects were chillingly documented in Adam Curtis's documentary HyperNormalisation.

In a second and even more important example, he laments a kind of technological progress in which the costs of production are reduced by laying off workers and replacing them with machines. These are not only issues of automation impacting blue-collar jobs; now even many white-collar jobs are disappearing due to the applications of AI.

Pope Francis demonstrates that dealing with Merritt's speculative problems may distract us from more pressing challenges, such as knowledge workers in their late 40s whose positions become redundant due to AI and who thus won't be able to make their mortgage payments while they retrain.

Problems like that may not be as hot a topic for a TED talk as speculating on the prayer life of AI, but these are the challenges of technology that a Church whose members will be judged by their care for the least in society should be addressing.

Paul Scherz is an assistant professor of moral theology/ethics at The Catholic University of America. He examines how the daily use of biomedical technologies shapes the way researchers, doctors, and patients see and manipulate the world and their bodies. Scherz has a Ph.D. in Genetics from Harvard University and a Ph.D. in moral theology from the University of Notre Dame.


