
Category Archives: Artificial Intelligence

3 types of artificial intelligence, but only 2 are valid – VentureBeat

Posted: May 23, 2017 at 10:50 pm

For all of the visions of robots taking over the world, stealing jobs, and outpacing humans in every facet of existence, we haven't seen many cases of AI drastically changing industries, or even our day-to-day lives, just yet. For this reason, media and AI deniers alike question whether true broad-scale AI even exists. Some go as far as to conclude that it doesn't.

The answer is a bit more nuanced than that.

Current AI applications can be broken down into three loose categories: Transformative AI, DIY (Do It Yourself) AI, and Faux AI. The latter two are the most common and therefore tend to be the measure by which all AI is judged.

The everyday AI applications we've seen most of so far are geared toward accessing and processing data for you, making suggestions based on that data, and sometimes even executing very narrow tasks. Alexa turning on your music, telling you what's happening in your day, and reporting on the weather outside are good examples. Another is your iPhone predicting a phone number for a contact you don't already have saved.

While these applications might not live up to the image of AI we have in our heads, it doesn't mean they're not AI. It just means they're not all that life-changing.

The kind of AI that will take over the world or at least have the most dramatic effect on how people live and work is what I think of as Transformative AI. Transformative AI turns data into insights and insights into instructions. Then, instead of simply delivering those instructions to the user so he or she can make more informed decisions, it gets to work, autonomously carrying out an entire complex process on its own based on what it has learned and continues to learn along the way.

This type of AI isn't yet ubiquitous. The most universally known manifestation of this is likely the self-driving car. Self-driving cars are an accessible example of what it looks like for a machine to take in constantly changing information, process it, and act on it, thereby eliminating the need for human participation at any stage.

Driving is not a fixed process that is easily automated. (If it were, AI wouldn't be necessary.) While there is indeed a finite set of actions involved in driving, the data set the AI must process shifts every single time the passenger gets into the car based on road conditions, destination, route, oncoming and surrounding traffic, street lanes, street closures, proximity to neighboring vehicles, a pedestrian stepping out in front of the car, and so on. The AI must be able to take all of this in, make a decision about it, and act on it right then and there, just like a human driver would.

This is Transformative AI, and we know it's real because it's already happening.

Now imagine the implications of this technology applied elsewhere. Most people will likely experience Transformative AI through their jobs or industries before it directly affects the way they live. In business, the massive amount of big data that companies are collecting will be the fuel that AI uses to single-handedly power processes currently handled by entire teams, and it will do so with far greater precision and efficiency.

We're seeing this in the marketing space, as brands like Cosabella and Dole Asia have replaced their digital account teams and agencies with platforms built on artificial intelligence.

But these are still early days, and it will be a while before these types of stories are commonplace. In the meantime, we'll mostly see different manifestations of DIY AI and Faux AI.

DIY AI is any artificial intelligence platform whose end goal is to make you, the user, more informed so that you can then do the remaining work yourself. This type of AI can take in and process large amounts of data to produce insights, but that's the end of the line for it. Put another way, it's practical and prescriptive but not curative.

Nevertheless, it can be extremely valuable to companies and organizations that have been relying on data scientists to make sense of their data manually. Even the most talented data scientists need far more time to process, analyze, and make recommendations from data than a machine does. A few of the many reasons for this are that humans require things like sleep, food, and weekends off. A more significant reason is that humans simply don't have the same processing power that machines do.

An example of DIY AI is Salesforce's Einstein. In an ad placed in the New York Times in early May, Salesforce described how Einstein qualifies leads, predicts when customers are ready to buy, and helps close more deals. In other words, the AI is reading companies' CRM data, making sense of it, and setting up salespeople for more success than they'd have if they had to wade through the same data on their own. But the execution elements of the sales process are ultimately still DIY for the user.

It's worth noting that DIY AI is often bolt-on, meaning that the AI is essentially bolted onto an existing technology. It then acts as the brain that makes a once dumb (or static) system smart (or insightful). For the sake of comparison, Transformative AI must be built from the ground up, meaning there are no parts of the technology that aren't AI-driven.

The final category of AI we're seeing is the one that spoils it for everyone: Faux AI. While DIY AI might seem lackluster or boring, Faux AI is pretending to be something that it's not. As with any new technology that creates hype and intrigue, AI has inspired companies to prey on the public's lack of understanding. Many of the companies doing this are re-positioning their predictive and automation technologies as AI, when really they are just offering rules-based applications that aren't governed by machine learning.

Not to single out any chatbots, but there are a few culprits in that space. They look and act like AI agents, but they are not really using machine learning. They are pretenders.
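
To make that distinction concrete, here is a minimal sketch contrasting a rules-based responder with a classifier that actually learns from examples. The intents, keywords, and training phrases are invented for illustration and are not drawn from any product mentioned in this article; the learned half assumes scikit-learn is available.

```python
# Illustrative only: the keywords, intents, and training examples are made up.
RULES = {
    "refund": "You can request a refund from the Orders page.",
    "hours": "We are open 9am-5pm, Monday to Friday.",
}

def rules_based_reply(message: str) -> str:
    """'Faux AI': fixed keyword matching, with no learning involved."""
    for keyword, reply in RULES.items():
        if keyword in message.lower():
            return reply
    return "Sorry, I don't understand."

# A learned alternative: a classifier trained on labeled examples can
# generalize to phrasings it has never seen before.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["I want my money back", "can I get a refund",
               "when do you open", "what are your opening hours"]
train_labels = ["refund", "refund", "hours", "hours"]
intent_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
intent_model.fit(train_texts, train_labels)

print(rules_based_reply("Give me my money back"))       # keyword miss -> fallback reply
print(intent_model.predict(["Give me my money back"]))  # learned model generalizes -> ['refund']
```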

Programmatic ad buying is a good example of an insight-driven, predictive technology that many people confuse with AI and which often passes itself off as the same. Because programmatic technology has been around for over a decade, learning that it is AI (which it's not) can leave people feeling like artificial intelligence isn't so special.

The way AI will evolve and begin infiltrating our lives is two-fold.

Some of the more robust DIY AI out there is actually Transformative AI in training. The data being collected and processed will train algorithms over time so that they're ultimately equipped with all the information they need to begin acting on that data (assuming they've been programmed to do so). And technologists who are just getting started on their platforms will build them with AI from the ground up, rather than bolting training wheels on after the fact. The result will be active sources of Transformative AI that ultimately shape up into what we imagine AI can be ideally, in the most positive way possible.

Or Shani is the CEO of Albert, an AI marketing platform.

More here:

3 types of artificial intelligence, but only 2 are valid - VentureBeat

Posted in Artificial Intelligence | Comments Off on 3 types of artificial intelligence, but only 2 are valid – VentureBeat

Killer artificial intelligence returns in ‘Alien: Covenant’ – Reading Eagle

Posted: May 22, 2017 at 3:42 am

LOS ANGELES - Modern movie culture would have you believe artificial intelligence is out to kill us all.

In "2001: A Space Odyssey," Hal, the AI computer aboard a space flight to Jupiter, develops a mind of its own and turns against the crew. "The Terminator" makes his mission clear in the movie's title. Ava, the pretty-faced android in "Ex Machina," has a killer instinct. David, the pretty-faced android in "Prometheus," also doesn't have the best intentions for human survival.

"Prometheus" director Ridley Scott, who further explores the cunning side of artificial intelligence in his new "Alien: Covenant," said, "If you're going to use something that's smarter than you are, that's when it starts to get dangerous."

It has been a running theme through Scott's three films set in the "Alien" universe, dating back to the 1979 original in which Sigourney Weaver battles not only an alien killing machine but also Ash, an android who views his human crewmates as expendable. "Prometheus" in 2012 introduced David, an earlier android version with a similar lack of scruples about protecting humanity.

Filmmakers have long projected that artificial intelligence could spell the end of humanity, and some top scientists and tech leaders - including Stephen Hawking and Elon Musk - share their concern.

Musk, an early investor in the development of AI, told Vanity Fair earlier this year that he worries the technology could ultimately "produce something evil by accident," such as "a fleet of artificial intelligence-enhanced robots capable of destroying mankind."

But astrophysicist, author and film fan Neil deGrasse Tyson said he believes there's nothing to worry about. Killer androids may make for fun film fodder, but he doesn't think they're an imminent, or eventual, reality.

"I'm completely fearless of AI," Tyson said.

Tyson noted that human beings have been inventing machines to replace human labor since the days of the Industrial Revolution, and computers have succeeded in outsmarting people since before Watson beat Ken Jennings at "Jeopardy!"

In movies, artificially intelligent beings might look human, but most real-life robots don't, he said. The robots welding parts on automobile assembly lines look like machines, not mechanics.

"The first thing we think of when we have a machine that has capacity is not to put it into something that looks human," Tyson said. "Because the human form is not very good at anything, so why have it look human?"

An exception would be "sex robots," he said, adding rhetorically, "Is this robot going to take over the world?"

For Scott, the possibility of evil artificial intelligence comes back to the question of the creator: Who is doing the creating, and for what purpose?

"Whoever the inventor is, he's going to want to go the whole nine yards," the filmmaker said. "Hence you get the expression of the mad professor who makes a mistake in going too far where the alien is way smarter than he is or the monsters are way smarter than he is, and that's where you get problems.

"But we will definitely go there. Because what it's leading to is the question of creation. And creation, I don't care who you are, is on everyone's mind."

Tyson is also fascinated with creation. His latest book, "Astrophysics for People in a Hurry," is about the birth of the universe and carbon-based life.

Androids, though, "are just completely pointless," he said. And they couldn't become self-aware without consciousness, something scientists have yet to fully grasp.

"You're saying we're going to end up programming this into a machine and then it's going to decide we shouldn't exist, when we don't even understand our own consciousness? I just don't see it," he said.

Besides, if somehow artificially intelligent androids do go rogue, Tyson has a solution.

"This is America," he said. "I can shoot the robot."

See the original post here:

Killer artificial intelligence returns in 'Alien: Covenant' - Reading Eagle

Posted in Artificial Intelligence | Comments Off on Killer artificial intelligence returns in ‘Alien: Covenant’ – Reading Eagle

5 Ways Artificial Intelligence May Help Us Live At Home Longer – Forbes

Posted: at 3:42 am


That wariness may especially be true when it comes to the digital innovation that seems destined to become the next game-changer: artificial intelligence, or AI. The name alone conjures up notions of talking robots and other brainy devices. That can ...

Continued here:

5 Ways Artificial Intelligence May Help Us Live At Home Longer - Forbes

Posted in Artificial Intelligence | Comments Off on 5 Ways Artificial Intelligence May Help Us Live At Home Longer – Forbes

Uber uses artificial intelligence to figure out your personal price hike – ZDNet

Posted: at 3:42 am

Uber has admitted using artificial intelligence to charge customers based on what they are likely to be willing to pay.

As reported by Bloomberg, the ride-hailing service says that the new system is based on AI and algorithms which estimate fare rates that groups of customers will be willing to pay depending on destination, time of day, and location.

In the past, your fare would be generated based on mileage, time, and geographic demand. However, the new "route-based pricing" system utilizes machine-learning techniques to tweak pricing in relation to a number of sociological factors.

Currently operating in 14 cities, the new system will take into account estimated wealth. For example, should a passenger book a ride from one wealthy area to another, they may be asked to pay more than someone heading to a less affluent area -- no matter the distance, mileage, or time necessary to travel.
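
The mechanics described above can be pictured in a few lines. Everything in the sketch below is hypothetical: Uber has not published its model, features, or rates, so the base fare constants and the willingness_multiplier stand-in are invented purely to show the shape of route-based pricing.

```python
# Hypothetical sketch of route-based pricing: a metered base fare scaled by a
# learned estimate of what riders on a given route tend to tolerate.

def base_fare(miles: float, minutes: float) -> float:
    """Traditional pricing: distance and time only (made-up rates)."""
    return 2.00 + 1.50 * miles + 0.30 * minutes

def willingness_multiplier(route_features: dict) -> float:
    """Stand-in for a machine-learned model scoring a route's price tolerance.
    A real system would train this on historical trip and acceptance data."""
    score = 1.0
    if route_features.get("destination_affluent"):
        score += 0.15
    if route_features.get("peak_hour"):
        score += 0.10
    return score

trip = {"destination_affluent": True, "peak_hour": False}
quoted = base_fare(miles=4.2, minutes=15) * willingness_multiplier(trip)
print(f"Upfront quote: ${quoted:.2f}")  # the rider sees one fixed price before booking
```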

In addition, as noted by the publication, despite the tailored potential price hikes, this does not mean that drivers see a cut of inflated fees.

The issue stems from what Uber calls "upfront pricing," which guarantees customers a fixed fare before they book. Passengers know how much they are expected to pay, but drivers have claimed that they are yet to see any additional compensation from jumped-up prices produced by the AI system.

In a statement, an Uber spokesperson said that routes are priced differently "based on our understanding of riders' choices so we can serve more people in more places at fares they can afford."

"Riders will always know the cost of a trip before requesting a ride, and drivers will earn consistently for the work they perform with full transparency into what a rider pays and what Uber makes on every trip," the spokesperson added.

Uber's product head Daniel Graf said the AI system was first introduced last year as an experiment to stay ahead of the game, due to competition not only from traditional taxi firms but also from rival companies such as Lyft.

Uber pockets the leftover amount between the driver's pay and what a customer is charged in what Graf calls a way to create a "sustainable" business, which the executive argues could one day be the difference between profitability and a red bank balance.

In an attempt to ease driver concerns, Uber plans to report the price that passengers pay for each ride and an updated service agreement will lay out the structure of the new system.

Uber says that skimming off the top is used to pay for driver bonuses and to invest more funds in the system as a whole, but it remains to be seen whether fares being driven by data analytics will alienate drivers.

See also: Uber rejects claims iPhone app tracked users after being deleted

In related news, over the past year, Uber and Google-owned Waymo have been embroiled in a legal dispute related to the alleged theft of trade secrets by the former head of Uber's self-driving car project, former Google engineer Anthony Levandowski.

The executive has now stepped aside from his leadership role, but has refused to deny or acknowledge the theft of files relating to Google's LiDAR sensor and mapping technology.

To lay the matter to rest, Uber has demanded that Levandowski either return the allegedly stolen files or formally deny the theft -- or face losing his job.

Follow this link:

Uber uses artificial intelligence to figure out your personal price hike - ZDNet

Posted in Artificial Intelligence | Comments Off on Uber uses artificial intelligence to figure out your personal price hike – ZDNet

Mobile-first to AI-first: Google’s quest to dominate artificial intelligence arena – Moneycontrol.com

Posted: at 3:42 am

Moneycontrol News

Artificial Intelligence (AI) is the latest frontier for Google as it moves away from a mobile-first approach. Many of Google's offerings will change with this direction which relies on machine learning and deep learning.

"In an AI-first world, we are rethinking all our products," Google CEO Sundar Pichai said at the company's annual developers conference, Google I/O 2017, last week.

The announcements made at the conference suggest that nifty additions to Google's existing products will be driven by machine learning. Take, for example, Google Photos. The company is developing new features for the app that will rely on machine learning and prompt users to share photos with people who appear in them.

Next, Google Home is being empowered to give users suggestions about road conditions, traffic, nearby restaurants, hands-free calling, meeting schedules, and more.

While Pichai announced the AI-first strategy last week, the signs of this change of approach were visible much earlier. Google intensified its search for acquisitions and, as a result, now leads the race for AI dominance: it has bought 11 firms in the past five years.

In 2013, Google acquired the University of Toronto-led deep learning and neural network startup DNNresearch. The acquisition helped Google revamp its image search feature, according to market researcher CB Insights. The next year, Google shelled out USD 600 million for British firm DeepMind Technologies, and in 2016 it bought visual search startup Moodstocks and Api.ai, a bot-based platform. Predictive analytics company Kaggle is one of Google's latest acquisitions.

Tech giants like Apple, Microsoft, IBM, Yahoo and Intel, and more recently, Samsung, Ford and Uber, too, are vying for a slice of the AI pie. "Over 200 private companies using AI algorithms across different verticals have been acquired since 2012, with over 30 acquisitions taking place in the first quarter of 2017," according to the CB Insights report.

Read more here:

Mobile-first to AI-first: Google's quest to dominate artificial intelligence arena - Moneycontrol.com

Posted in Artificial Intelligence | Comments Off on Mobile-first to AI-first: Google’s quest to dominate artificial intelligence arena – Moneycontrol.com

How Artificial Intelligence will impact professional writing – TNW

Posted: May 20, 2017 at 6:50 am

Professional writing isn't easy. As a blogger, journalist or reporter, you have to meet several challenges to stay at the top of your trade. You have to stay up to date with the latest developments and at the same time write timely, compelling and unique content.

The same goes for scientists, researchers, analysts and other professionals whose job involves a lot of writing.

With the deluge of information being published on the web every day, things aren't getting easier. You have to juggle speed, style, quality and content simultaneously if you want to succeed in reaching your audience.

Fortunately, Artificial Intelligence, which is fast permeating every aspect of human life, has a few tricks up its sleeve to boost the efforts of professional writers.

In 2014, George R. R. Martin, the acclaimed writer of the Song of Ice and Fire saga, explained in an interview how he avoids modern word processors because of their pesky autocorrect and spell checkers.

Software vendors have always tried to assist writers by adding proofreading features to their tools. But as writers like Martin will attest, those efforts can be a nuisance to anyone with more-than-moderate writing skills.

However, that is changing as AI is getting better at understanding the context and intent of written text. One example is Microsoft Word's new Editor feature, a tool that uses AI to provide more than simple proofreading.

Editor can understand different nuances in your prose much better than code-and-logic tools do. It flags not only grammatical errors and style mistakes, but also the use of unnecessarily complex words and overused terms. For instance, it knows when you're using the word "really" to emphasize a point or to pose a question.

It also gives eloquent descriptions of its decisions and provides smart suggestions when it deems something incorrect. For example, if it marks a sentence as passive, it will provide a reworded version in active voice.
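
As a rough illustration of the kind of check involved, here is one way to flag passive sentences programmatically. This is not Microsoft's implementation, just a small sketch that assumes spaCy and its en_core_web_sm English model are installed; it only flags the construction and does not attempt the active-voice rewording Editor provides.

```python
# One way to flag passive sentences (not Microsoft's implementation).
# Setup assumed: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def is_passive(sentence) -> bool:
    # The English dependency scheme marks passive subjects and passive
    # auxiliaries with the labels "nsubjpass" and "auxpass".
    return any(tok.dep_ in ("nsubjpass", "auxpass") for tok in sentence)

doc = nlp("The report was written by the committee. The committee wrote the report.")
for sent in doc.sents:
    flag = "PASSIVE" if is_passive(sent) else "active"
    print(f"[{flag}] {sent.text}")
```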

Editor has been well received by professional writers (passive voice intended), though it's still far from perfect.

Nonetheless, AI-powered writing assistance is fast becoming a competitive market. Grammarly, a freemium grammar checker that installs as a browser extension, uses AI to help with all writing tasks on the web. Atomic Reach is another player in the space, which uses machine learning to provide feedback on the readability of written content.

Writing good content relies on good reading. I usually like to go through different articles describing conflicting opinions about a topic before I fire up my word processor. The problem is there's so much material and so little time to read all of it. And things tend to get tedious when you're trying to find key highlights and differences between articles written about a similar topic.

Now, Artificial Intelligence is making inroads in the field by providing smart summaries of documents. An AI algorithm developed by researchers at Salesforce generates snippets of text that describe the essence of long text. Though tools for summarizing texts have existed for a while, Salesforce's solution surpasses others by using machine learning. The system uses a combination of supervised and reinforcement learning to get help from human trainers and learn to summarize on its own.

Other algorithms, such as Algorithmia's Summarizer, provide developers with libraries that easily integrate text summary capabilities into their software.
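
For a sense of what the simpler end of this spectrum looks like, here is a minimal extractive summarizer of my own construction: it scores sentences by the frequency of the words they contain and keeps the top ones. It is a classical baseline, far cruder than the learned, abstractive Salesforce system described above, which generates new sentences rather than selecting existing ones.

```python
# Minimal extractive summarizer: rank sentences by average word frequency.
import re
from collections import Counter

def summarize(text: str, max_sentences: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> float:
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    # Keep the highest-scoring sentences, preserving their original order.
    keep = set(sorted(sentences, key=score, reverse=True)[:max_sentences])
    return " ".join(s for s in sentences if s in keep)

article = ("AI systems can now summarize long documents. "
           "Researchers trained models with supervised and reinforcement learning. "
           "The weather was pleasant that day. "
           "Summaries help editors triage long documents and many emails.")
print(summarize(article))
```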

These tools can help writers skim through a lot of articles and find relevant topics to write about. They can also help editors read through the tons of emails, pitches and press releases they receive every day. This way they'll be better positioned to decide which emails need further attention. Having hundreds of unread emails in my inbox, I fully appreciate the value this can have.

Advances in Natural Language Processing have contributed widely to this trend. NLP helps machines understand the general meaning of text and relations between different elements and entities.

To be fair, nothing short of human-level intelligence can have the commonsense knowledge and mastery of language required to provide a flawless summary of all text. The technology still has more than a few kinks to iron out, but it shows a glimpse of what the future of reading might look like.

No matter how high-quality and relevant your content is, it'll be of no use if you can't reach out to the right audience. Unfortunately, old keyword-based search algorithms pushed online writers toward stuffing their content with keywords in order to increase their relevance for search engine crawlers.

"Although with PageRank, Google did a great job in organizing the web, it also created a web where keywords ruled over content," says Gennaro Cuofano, growth hacker at WordLift, a company that develops tools for the semantic web. Eventually, web writers ended up spending a significant amount of time improving findability. The trend resulted in poor-quality writing getting higher search rankings.

But thanks to Artificial Intelligence, search engines are able to parse and understand content, and the rules of search engine optimization have changed immensely in recent years.

"Since new semantic technologies are now mature enough to read human language, journalists and professional writers can finally go back to writing for people," Cuofano says. This means you can expect more quality content to appear both on websites and in search engine results.

Where do we go from here? "The next revolution (which is already coming) is the leap from NLP to a subset of it called NLU (Natural Language Understanding)," Cuofano says. "In fact, while NLP is more about giving structure to data, defining it and making it readable by machines, NLU instead is about taking unclear, unstructured and undefined inputs and transforming them into an output that is close to human understanding."

We're already seeing glimmers of this next generation in AI-powered journalism. The technology is still in its infancy, but it will not remain so indefinitely. Writing can someday become a full-time machine occupation, just like many other tasks that were believed to be the exclusive domain of human intelligence in the past.

How does this affect writing? "Currently, the web is a place where how-to articles, tutorials and guides are dominant," Cuofano says. "This makes sense in an era where people are still in charge of most tasks. Yet in a future where AI takes over, wouldn't it make more and more sense to write about why we do things? Thus, instead of focusing on content that has a short shelf life, we can focus again on content that has the capability to outlive us."

Read the rest here:

How Artificial Intelligence will impact professional writing - TNW

Posted in Artificial Intelligence | Comments Off on How Artificial Intelligence will impact professional writing – TNW

Now artificial intelligence is inventing sounds that have never been heard before – ScienceAlert

Posted: at 6:50 am

As well as beating us at board games, driving cars, and spotting cancer, artificial intelligence is now generating brand new sounds that have never been heard before, thanks to some advanced maths combined with samples from real instruments.

Before long, you might hear some of these fresh sounds pumping out of your radio, as the researchers responsible say they're hoping to give musicians an almost limitless new range of computer-generated instruments to work with.

The new system is called NSynth, and it's been developed by an engineering team called Google Magenta, a small part of Google's larger push into artificial intelligence.

"Learning directly from data, NSynth provides artists with intuitive control over timbre and dynamics and the ability to explore new sounds that would be difficult or impossible to produce with a hand-tuned synthesizer," explains the team.

You can check out a couple of NSynth samples below, courtesy of Wired:

NSynth takes samples from about a thousand different instruments and blends two of them together, but in a highly sophisticated way. First, the AI program learns to identify the audible characteristics of each instrument so they can be reproduced.

That detailed knowledge is then used to produce a mix of instruments that doesn't sound like a mix of instruments: the properties of the audio are adjusted to create something that sounds like a single, new instrument rather than a mash of multiple sounds.

So instead of having a flute and violin play together, you've got a brand new, algorithm-driven digital instrument somewhere between the two. How much of the flute and how much of the violin are in the final sound is up to the musician.
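
The blending step can be pictured as interpolation in a learned embedding space: encode each instrument's sound, mix the embeddings, then decode the result back into audio. The sketch below only illustrates that shape; the encode and decode functions are invented placeholders, not NSynth's actual neural autoencoder.

```python
# Sketch of latent-space blending. The encoder/decoder here are placeholders
# standing in for a trained neural autoencoder; they produce dummy data.
import numpy as np

EMBEDDING_SIZE = 16

def encode(audio: np.ndarray) -> np.ndarray:
    """Placeholder: a real encoder maps raw audio to an embedding of its timbre."""
    rng = np.random.default_rng(abs(hash(audio.tobytes())) % (2**32))
    return rng.normal(size=EMBEDDING_SIZE)

def decode(embedding: np.ndarray, length: int = 16000) -> np.ndarray:
    """Placeholder: a real decoder synthesizes audio from the embedding."""
    rng = np.random.default_rng(abs(hash(embedding.tobytes())) % (2**32))
    return rng.normal(scale=0.1, size=length)

flute = np.sin(np.linspace(0, 2 * np.pi * 440, 16000))   # stand-in waveforms
violin = np.sin(np.linspace(0, 2 * np.pi * 293, 16000))

mix = 0.5  # how much flute vs. violin ends up in the new instrument
blended_embedding = (1 - mix) * encode(flute) + mix * encode(violin)
new_instrument = decode(blended_embedding)
print(new_instrument.shape)  # one second of (placeholder) audio at 16 kHz
```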

Like many of Google's AI initiatives, NSynth relies on deep learning: a specific approach to AI where vast amounts of data can be processed in a similar way to the human brain, which is why these systems are often described as artificial neural networks.

So not only can deep learning systems use millions of cat pictures to correctly identify a cat, for instance, they can also learn from their mistakes and get better over time, teaching themselves how to improve just like our brains do.
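
The phrase "learn from their mistakes" has a precise, if tiny, illustration: a single artificial neuron that nudges its weights whenever its prediction is wrong. Deep learning systems like the ones described above do essentially this at enormously larger scale, across many layers of such units; the example below is only a toy of my own, not anything from Google Magenta.

```python
# Toy error-driven learning: a single neuron adjusts its weights on mistakes.
def train(samples, labels, epochs=20, lr=0.1):
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), label in zip(samples, labels):
            prediction = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
            error = label - prediction      # the "mistake"
            weights[0] += lr * error * x1   # nudge toward the right answer
            weights[1] += lr * error * x2
            bias += lr * error
    return weights, bias

# Learn the logical OR function from examples.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 1, 1, 1]
w, b = train(samples, labels)
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for x1, x2 in samples])  # [0, 1, 1, 1]
```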

The idea of deep learning has been around for decades, but we're only now seeing the kind of software and computing power appear that can make it a reality.

One consequence of that is that the NSynth demos built by the Google Magenta team all work in real time, allowing new compositions to be created.

Music critic Marc Weidenbaum told Wired that Google's new approach to the traditional trick of combining instruments together shows promise.

"Artistically, it could yield some cool stuff, and because it's Google, people will follow their lead," he said.

Google engineers have just been demoing NSynth at the Moogfest festival, and you can read a paper on their work at arXiv.

Read more:

Now artificial intelligence is inventing sounds that have never been heard before - ScienceAlert

Posted in Artificial Intelligence | Comments Off on Now artificial intelligence is inventing sounds that have never been heard before – ScienceAlert

How artificial intelligence might help achieve the SDGs – Devex

Posted: at 6:50 am

Sid Dixit of Planet presents at Train AI in San Francisco. Photo by: Catherine Cheney / Devex

"Machine learning is the ultimate way to go faster," said Peter Norvig, director of research at Google, as he showed a slide image of a race car to a crowd of professionals gathered to learn more about artificial intelligence.

But speed can also lead to accidents, Norvig warned, clicking to another slide with an image of a dramatic crash. Norvig, who is also the author of a leading textbook on artificial intelligence, or AI, was speaking at Train AI, a conference hosted by CrowdFlower in San Francisco, California, this week.

"In every industry, there's a place where AI can make things better," Norvig told Devex. "Look at all of the AI technologies, and all problems, and it's just a question of fitting them together and figuring out, what's the right technological match and what's the right policy match?"

Machine intelligence will have profound implications for the development sector. "AI is a way to understand data," Norvig continued, and the global development community will be unable to understand and act on information coming in from cell phones to satellites without both human and machine intelligence.

Norvig will also speak at the upcoming AI for Global Good summit hosted at the International Telecommunication Union in Geneva, Switzerland. Gatherings such as these help technologists connect solutions to problems, he said. From Silicon Valley to Hyderabad, India, where the ninth annual Information and Communications Technology for Development, or ICT4D, conference has just taken place, there is growing interest in bringing the technology community together with the global development community in order to leverage AI to achieve the Sustainable Development Goals.

"Every day we see a new news report on how AI is changing the future of every part of society," said Robin Bordoli, chief executive officer of CrowdFlower, which describes itself as an essential platform for data science. "Despite some of the concerns around job loss, we believe in the power of AI to create positive change at all levels of society."

That is the thinking behind the launch of CrowdFlower's AI for Everyone challenge to put the power of AI into the hands of people who want to use machine intelligence to solve social problems. The company expects to see applications addressing global challenges in areas such as health care, food and nutrition, and climate change, and it is just the latest in a number of similar such competitions.

Earlier this month, XPRIZE, which puts on competitions together with partners, announced the 147 teams from 22 countries that would advance in the $5 million IBM Watson AI XPRIZE. The teams entering this global competition are working to develop AI applications that demonstrate how humans, together with AI, can tackle global challenges. Examples include Harvesting, a global intelligence platform for agriculture.

And the Digital Financial Services Innovation Lab, an early stage incubator for entrepreneurs building financial technology companies in developing countries, has open challenges for biometrics and chatbots, with a deadline of May 30. DFS Lab is housed within Caribou Digital, a research and delivery consultancy in Seattle, Washington. The Bill & Melinda Gates Foundation funded the incubator to engage top scientists and engineers in challenges such as these around boosting financial inclusion, said Jake Kendall, director of the DFS Lab, who was formerly on the Financial Services for the Poor team at the Gates Foundation.

"Human interactions are always going to be necessary, but any time you can remove the need for them from a process through automation or NLP [natural language processing] conversational interfaces, that can be a game changer in terms of scalability and efficiency," he told Devex via email. "NLP and bots give people tools to help themselves in the digital realm which can be really empowering. But there are downsides. With any automated and digital system you really have to make sure you are not shutting out certain people or creating unintended problems for people who can't read, don't have devices, or otherwise are not able to access the new system."

In the popular imagination, AI can feel like something that solves the problems of the affluent, with products such as Alexa, the Amazon device that allows users to get information, play music, or control their smart homes using their voices. But many experts in the field believe there is a major role for AI in helping achieve the SDGs, and the founder of Arifu is making the case for the role of chatbots and AI in achieving the SDGs at the ICT4D conference. His education technology company, which launched in Kenya in 2015, points to how a chatbot leveraging AI can deliver personalized learning on mobile devices to provide access to information on topics such as farming, entrepreneurship or financial literacy to the world's least served.

"We're moving from the enterprise and the abstract to the consumer and the personal," said Robert Munro, principal product manager at Amazon AI, at Train AI.

What that means for global health, for example, is a shift toward point of care tests, even in resource limited settings, he said.

Munro's talk in San Francisco revisited AI's progress after a series of talks he gave five years ago called "Where's My Talking Robot." AI is now making more decisions in our lives than most people realize, he said. It's making us smarter, choosing our friends, selecting our news, aiding our health, moving us around, and protecting our security.

For example, he mentioned how the first alert for the swine flu outbreak in Mexico came from an AI system reading reports about potential disease outbreaks.

However, at a conference covering high-definition mapping, AI and medicine, and deep learning, examples of applications of machine learning in developing countries were few and far between.

Lukas Biewald, founder & executive chairman of CrowdFlower, did talk about how one of his clients is using drones for conservation.

And in a series of presentations from CrowdFlower customers, Sid Dixit, director of product program management at Planet, talked about how AI combined with millions of images from its small satellites can determine the health of forests and water resources, and monitor harvests and agriculture everywhere.

Anthony Goldbloom, CEO of Kaggle, talked about how Genentech, a biotechnology corporation, opened a challenge on his platform for machine learning competitions to predict which women would not be screened on schedule for cervical cancer, a largely preventable disease that several leaders in the global health community, including PATH in Seattle, Washington, are saying needs more attention in developing countries.

But the list of examples of applications of AI to the SDGs continues to grow. This week, at ICT4D, the international agriculture research consortium known as CGIAR launched a platform for big data in agriculture. It unites agricultural research institutes and companies with the goal of closing the digital divide between farmers in developed and developing countries. Amazon will bring its cloud computing and data processing capabilities; IBM, creator of the Watson artificial intelligence system, will bring its data analysis; and PepsiCo will bring its use of big data to manage supply chains.

One of the lines that came up at the AI conference in San Francisco was a recent comment by physicist Stephen Hawking, who said that this technology "will be either the best, or the worst thing, ever to happen to humanity."

Silicon Valley is behind a number of initiatives working to ensure that AI benefits humanity, including OpenAI, a nonprofit AI research company founded to ensure that the benefits of machine learning are as widely and evenly distributed as possible.

And increasingly, forward looking thinkers in the global development community are presenting themselves as natural partners in these efforts, as the ITU has done in organizing the AI for Global Good Summit together with XPRIZE and other United Nations agencies.

"As the U.N. specialized agency for information and communication technologies, ITU aims to guide AI innovation towards the achievement of the U.N. SDGs," ITU Secretary-General Houlin Zhao said of the event, which kicks off June 7. "We are providing a neutral platform for international dialogue to build a common understanding of the capabilities of emerging AI technologies."

The ITU's most recent magazine is entirely focused on how AI can boost sustainable global development.

The biggest risk posed by the rise of AI is not so much the singularity, in which machine intelligence matches and then surpasses human intelligence, but wasted projects and dollars, said two venture capitalists from Bloomberg Beta, which makes early stage investments in artificial intelligence startups. Echoing some of the points made by Norvig of Google, they said the key is to use AI to solve real problems. Of course, global development professionals are working on complex problems that might appeal to machine learning experts looking to use their skills for good, which is why any effort to ensure AI benefits humanity might consider bringing these communities together.

Read more international development news online, and subscribe to The Development Newswire to receive the latest from the world's leading donors and decision-makers, emailed to you free every business day.

Link:

How artificial intelligence might help achieve the SDGs - Devex

Posted in Artificial Intelligence | Comments Off on How artificial intelligence might help achieve the SDGs – Devex

Techstars: How Artificial Intelligence Can Make The Music Industry … – Forbes

Posted: at 6:50 am


New darling tech startups are transforming the music value chain to the industry's benefit.

The rest is here:

Techstars: How Artificial Intelligence Can Make The Music Industry ... - Forbes

Posted in Artificial Intelligence | Comments Off on Techstars: How Artificial Intelligence Can Make The Music Industry … – Forbes

Qualified humans must not fear bots: Adobe on Artificial Intelligence – Business Standard

Posted: at 6:50 am

Saying AI will take over the creativity of humans is not right: Adobe

IANS | New Delhi May 20, 2017 Last Updated at 14:09 IST

As artificial intelligence (AI)-powered smart devices and solutions gather momentum globally amid fears of "bots" taking over jobs soon, a top Adobe executive has allayed such fears, saying AI will actually assist people intelligently.

"Saying AI will take over the creativity of humans is not right. It will take away a lot of stuff that you have to do in a mundane way. A human mind is a lot more creative than a machine," Shanmugh Natarajan, Executive Director and Vice President (Products) at Adobe, told IANS in an interview.

"With AI, we are trying to make the work easier. It is not like self-driving cars where your driver is getting replaced. I think creativity is going to stay for a long time," Natarajan added.

Market research firm Gartner recently said that CIOs will have a major role to play in preparing businesses for the impact that AI will have on business strategy and human employment.

Global enterprises like Adobe are now betting on India to boost AI in diverse sectors across the country.

The company has a massive set-up in India, with over 5,200 employees spread across four campuses in Noida and Bengaluru and its R&D labs claim a significant share of global innovations.

According to Natarajan, a lot of work related to AI, machine learning and Internet of Things (IoT) is being done in Adobe's India R&D Labs.

"The way we have structured our India labs is very similar to how larger companies have structured it. There are separate lab initiatives and areas, including digital media, creative lab, Big Data and marketing-related labs and, obviously, document is a big part and we have labs associated with it as well," the executive said.

"With the Cloud platform, we are trying to provide a framework where people with the domain expertise can come and set their data and machine learning algorithms in play and then train the systems and let the systems learn," Natarajan explained.

Speaking on the significance of the India R&D labs, Natarajan said that earlier, the R&D labs were focused on North America, where scientists used to come in from esteemed universities.

With India becoming a crucial market for research and development, Adobe started its data labs in Bengaluru under the leadership of Shriram Revankar.

"Nearly 30 per cent of our total R&D staff is here. Apart from other works, we file patents. Every year, Adobe India has been filing nearly 100 patents from a global perspective. We have eight patents coming in soon," Natarajan told IANS.

Interestingly, a big part of "Adobe Sensei" -- a new framework and set of intelligent services that use deep learning and AI to tackle complex experience challenges -- was developed in India.

On why there is a technology gap between India and other developed economies in terms of use of concepts like AI, machine learning and IoT, Natarajan said that people underestimate the country.

"The transitions and generational things might not be at the same level and sophistication, or the pace as compared to other countries, but here, the changes are dramatic," Natarajan told IANS.

"Everyone has a smartphone now and people have figured out that they can speak to their smartphones and retrieve data. The data may be small compared to the 100 trillion that Adobe gets, but it is a Cloud and IoT device. People are interacting with them and machine is learning from this," the executive noted.

IANS

See original here:

Qualified humans must not fear bots: Adobe on Artificial Intelligence - Business Standard

Posted in Artificial Intelligence | Comments Off on Qualified humans must not fear bots: Adobe on Artificial Intelligence – Business Standard
