Category Archives: Artificial Intelligence

Artificial intelligence market: Weighing the IT channel’s role – TechTarget

Posted: August 16, 2017 at 6:16 pm

Like mobile technology, cloud computing, big data and IoT before it, artificial intelligence may just be the next big thing that channel partners should have on their radars. But as with any new technology that comes along, partners need to ensure they have the right business skill sets for system implementations.

The hype about AI is rising, although the jury is still out on whether the artificial intelligence market will be a great opportunity for the channel, said Seth Robinson, senior director of technology analysis at CompTIA.

"AI is not going to be on its own something that is all that tangible that you can grasp and pursue," he said.

A sure sign of the growing interest is recent vendor product releases, noted Steve White, program vice president of channels and alliances at IDC. "When we spoke to channel and alliance folks 18 months ago, it was IoT, and now it's AI. It's the new golden child," he said.

Once vendors like Microsoft, Google, Salesforce and Cisco have announced offerings it creates greater access and interest for the technology, "and they're not investing unless they see opportunity," White said. "That's definitely why the channel should be interested."

Another appeal of the artificial intelligence market is that the technology is applicable for use in most industries, observers said.

That said, AI is still "very early in overall adoption cycle" and people are mainly curious about the technology and are in the exploratory phase right now, Robinson said.

He believes AI platforms will be complicated and there needs to be a deeper conversation with customers about the business needs for deploying the technology. That conversation should be about what AI is and how it fits into their business, Robinson said. "We haven't seen much readiness to move into that strategic conversation yet."

Without a clear understanding of the business objectives, companies may not utilize the technology's features, such as machine learning and cognitive computing, and then they obviously won't reap the benefits, he said. For example, if a company spends extra on help desk software with AI baked in but uses it in a standard way without utilizing AI, "you haven't moved the needle," he cautioned. "So, channel firms have to be careful about overselling without helping companies transition to new processes and workflows and the best usage of these things that are available today."

Probably the best and earliest example of AI in the enterprise is IBM's Watson technology, Robinson said. Within the channel, he said, there has been a lot of buzz about CrushBank, a spinoff of a managed service provider (MSP) that built an IT help desk application on top of Watson. "They're building a help desk application that utilizes Watson, so if you want to get CrushBank's product or are working with them, you'll get this new app with AI baked into it," he said. This is not an example of reselling or installing AI, but rather, incorporating the technology into apps, "which a lot of MSPs and VARs [value-added resellers] aren't thinking about," Robinson said.

In CrushBank's case, they are helping customers change workflow processes to utilize help desk features in a more efficient way, he said. "And that gets out of the wheelhouse of channel firms," since the channel historically has been built on management of technology, Robinson added.

"AI is a very natural way to supplement the tasks and work we do every day to support our clients," CrushBank co-founder and CTO David Tan said."I think the need has been there, but the growth in technology and platforms has made it more pronounced, and the technology to power the solutions is finally becoming mature."

CrushBank sells its platform to other MSPs, Tan said. The next step the channel needs to focus on is to really integrate technology into a business, which is at the core of what digital transformation is all about, he said. AI, according to Tan, can be particularly effective at making that happen.

Another example of a company using AI to change business processes is Actionable Science. The company has created AI-powered bots to help medium- and large-sized businesses improve productivity, enhance customer experiences, increase employee satisfaction and reduce costs, said Manish Sharma, co-founder and head of business development.

The bots address a range of tasks for sales, servicing, IT help desk, HR help desk and other functions. Actionable Science's advanced bots have natural language conversations, evolve using machine learning and execute tasks by leveraging robotic process automation, Sharma said.

The company has about a dozen partners so far, he said, adding that the artificial intelligence market "has got to be one of the top priorities for channel partners that want to stay relevant and grow their business in the future." They can do that by developing "an expertise in one or several specific applications," Sharma said.

The skills he believes a partner needs for AI work include a combination of process analytics, user experience and "requirements management that is very specific to AI."

White concurred that if a partner is already doing work in business intelligence or analytics, AI "would seem like a fairly obvious add-on that they should be looking at" because it takes the products they're offering to their customers to the next level.

"At the end of the day, AI is even smarter to leverage that platform you've already built,'' and expand upon it as an opportunity for growth, White said.

Partners also need to be able to build a consulting practice around AI, White believes. "This [technology] is going to be a lot more consulting-heavy, so you have to have those professional consulting folks with a depth of knowledge around [AI]. Like most tech trends, we see the partners who act quicker, funnily enough, are the ones who are more successful."

Visit link:

Artificial intelligence market: Weighing the IT channel's role - TechTarget

Posted in Artificial Intelligence | Comments Off on Artificial intelligence market: Weighing the IT channel’s role – TechTarget

Messenger Launches New Artificial Intelligence Features – Huffington Post Australia

Posted: at 6:16 pm

Messaging app 'Messenger' launched a range of new artificial intelligence (AI) features in Australia on Wednesday.

The AI, called 'M', works almost like a prompting service, where it recognises words and phrases used in a conversation and then suggests relevant content and actions based on the chat between the two users.

For example, if you're speaking to someone on their birthday, 'M' will recognise, either through a phrase used or their Messenger profile, when their birthday is and then prompt you to send a birthday message.

Similarly, if you are chatting about making plans or struggling to come to a group decision about something, the AI will suggest you make a plan or start a group poll respectively. If you are chatting in a one-on-one conversation, and one person raises the idea of making a call, 'M' will prompt you to start a video or voice chat.

Other features include stickers for commonly used phrases including 'thank you' or 'bye-bye' and a prompt to share your location with someone if phrases like 'where are you?' and 'see you soon' are used. Messenger also launched a content saving option that encourages you to save videos, Facebook posts and pages from your conversations to look at later.
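
The suggestion mechanism described above can be pictured as a simple lookup from trigger phrases to suggested actions. The sketch below is purely illustrative: the phrase lists, action names and function are assumptions, not Facebook's implementation, which relies on trained models rather than fixed rules.

```python
# Illustrative sketch only -- not Facebook's code. It mimics the behaviour
# described above: scan a message for trigger phrases and return suggested
# actions such as sharing a location or starting a poll.

SUGGESTION_RULES = {
    ("where are you", "see you soon"): "share_location",
    ("happy birthday",): "send_birthday_sticker",
    ("should we", "let's vote"): "start_group_poll",
    ("call me", "can we talk"): "start_voice_or_video_chat",
}

def suggest_actions(message: str) -> list[str]:
    """Return suggested actions for any trigger phrases found in a message."""
    text = message.lower()
    return [action
            for phrases, action in SUGGESTION_RULES.items()
            if any(phrase in text for phrase in phrases)]

print(suggest_actions("Where are you? See you soon!"))   # ['share_location']
```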

If you tire of the notifications and suggestions from Messenger, it's easy to opt-out of the AI technology by adjusting your Messenger settings. It's also possible to dismiss a suggestion made by 'M' if you feel it is irrelevant.

The 'M' artificial intelligence technology was first launched in the U.S. in April and is also currently available in Mexico and Spain. Canada, South Africa and the U.K. will gain access to the technology at the same time as all of us here in Australia.

See more here:

Messenger Launches New Artificial Intelligence Features - Huffington Post Australia

Posted in Artificial Intelligence | Comments Off on Messenger Launches New Artificial Intelligence Features – Huffington Post Australia

Elon Musk Is Wrong Again. AI Isn’t More Dangerous Than North Korea. – Fortune

Posted: at 6:16 pm

Elon Musk's recent remark on Twitter that artificial intelligence (AI) is more dangerous than North Korea is based on his bedrock belief in the power of thought. But this philosophy has a dark side.

If you believe that a good idea can take over the world and if you conjecture that computers can or will have ideas, then you have to consider the possibility that computers may one day take over the world. This logic has taken root in Musk's mind and, as someone who turns ideas into action for a living, he wants to make sure you get on board too. But he's wrong, and you shouldn't believe his apocalyptic warnings.

Here's the story Musk wants you to know but hasn't been able to boil down to a single tweet. By dint of clever ideas, hard work, and significant investment, computers are getting faster and more capable. In the last few years, some famously hard computational problems have been mastered, including identifying objects in images, recognizing the words that people say, and outsmarting human champions in games like Go. If machine learning researchers can create programs that can replace captioners, transcriptionists, and board game masters, maybe it won't be long before they can replace themselves. And, once computer programs are in the business of redesigning themselves, each time they make themselves better, they make themselves better at making themselves better.

The resulting intelligence explosion would leave computers in a position of power, where they, not humans, control our future. Their objectives, even if benign when the machines were young, could be threatening to our very existence in the hands of an intellect dwarfing our own. That's why Musk thinks this issue is so much bigger than war with North Korea. The loss of a handful of major cities wouldn't be permanent, whereas human extinction by a system seeking to improve its own capabilities by turning us into computational components in its mega-brain would be forever.

Musk's comparison, however, grossly overestimates the likelihood of an intelligence explosion. His primary mistake is extrapolating the eventual development of general intelligence from recent successes of machine learning. But machine learning is not as dangerous as it might look on the surface.

For example, you may see a machine perform a task that appears to be superhuman and immediately be impressed. When people learn to understand speech or play games, they do so in the context of the full range of human experiences. Thus when you see something that can respond to questions or beat you soundly in a board game, it is not unreasonable to infer that it also possesses a range of other human capacities. But that's not how these systems work.

In a nutshell, here's the methodology that has been successful for building advanced systems of late: First, people decide what problem they want to solve and they express it in the form of a piece of code called an objective function, a way for the system to score itself on the task. They then assemble perhaps millions of examples of precisely the kind of behavior they want their system to exhibit. After that they design the structure of their AI system and tune it to maximize the objective function through a combination of human insight and powerful optimization algorithms.

At the end of this process, they get a system that, often, can exhibit superhuman performance. But the performance is on the particular task that was selected at the beginning. If you want the system to do something else, you probably will need to start the whole process over from scratch. Moreover, the game of life does not have a clear objective function, so current methodologies are not suited to creating a broadly intelligent machine.
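
The workflow described here (pick a narrow task, define an objective function, gather labeled examples, then optimize against that objective) can be illustrated with a toy supervised-learning script. The dataset, model and settings below are arbitrary illustrative choices, not a reference to any particular production system.

```python
# Toy illustration of the methodology described above: a fixed, narrow task,
# an objective (classification accuracy), labeled examples, and an optimizer
# that tunes model parameters against that objective alone.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)                 # examples of the desired behavior
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)                         # optimize against the objective

print("digit accuracy:", model.score(X_test, y_test))
# The score applies only to this one task; nothing here transfers to any
# other problem without restarting the whole process.
```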

Someday we may inhabit a world with intelligent machines. But we will develop together and will have a billion decisions to make that shape how that world develops. We shouldn't let our fears prevent us from moving forward technologically.

Michael L. Littman is a professor of computer science at Brown University and co-director of Brown's Humanity Centered Robotics Initiative.

Original post:

Elon Musk Is Wrong Again. AI Isn't More Dangerous Than North Korea. - Fortune

Posted in Artificial Intelligence | Comments Off on Elon Musk Is Wrong Again. AI Isn’t More Dangerous Than North Korea. – Fortune

MIT’s new artificial intelligence could kill buffering – Alphr

Posted: at 6:16 pm

For some, the sight of the buffer circle is enough to bring on spasms of existential angst. When that spinning circle of death appears, the digital world cracks, its illusory sense of control slips from your sweaty palm, and you are reminded, however briefly, that you are not the master of this realm, and you have no real idea how the machine you are using works. It's also very annoying if you're trying to show a video to someone.

Researchers at MIT may have come up with a way to stave off techno-existential panic for good, thanks to a new artificial intelligence system that can keep video streaming buttery smooth.

Buffering happens because video streaming occurs in chunks, with your device downloading sequential portions of a file that are then stitched together. This means you can start watching the video before downloading the entire thing, but if the connection wavers you might finish one chunk before the next has been fully downloaded.

Sites like YouTube use Adaptive Bitrate (ABR) algorithms to work out what resolution a video should display at. In a nutshell, these allow the system to maintain the flow of images by measuring a network's speed and lowering the resolution appropriately, or by working to maintain a sufficient buffer at the tip of the video. The issue is that neither of these techniques on their own can prevent annoying pauses in a clip if the network has a sudden drop in traffic flow, say, if you're in a particularly crowded area, or if you're moving in and out of tunnels.
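
The two ABR strategies mentioned above (choosing a resolution from the measured network speed, or from how much video is already buffered) can be sketched as simple heuristics. The bitrates and thresholds below are invented for illustration; production players use considerably more elaborate logic.

```python
# Simplified versions of the two ABR strategies described above.
BITRATES_KBPS = [300, 750, 1200, 2850, 4300]   # available renditions

def rate_based(throughput_kbps: float) -> int:
    """Pick the highest bitrate the measured network speed can sustain."""
    usable = [b for b in BITRATES_KBPS if b <= 0.8 * throughput_kbps]
    return usable[-1] if usable else BITRATES_KBPS[0]

def buffer_based(buffer_seconds: float) -> int:
    """Pick quality from how much video is already buffered."""
    if buffer_seconds < 5:
        return BITRATES_KBPS[0]      # nearly empty: protect against stalls
    if buffer_seconds < 15:
        return BITRATES_KBPS[2]
    return BITRATES_KBPS[-1]         # comfortable buffer: go for quality

print(rate_based(2000), buffer_based(20))   # 1200 4300
```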

MIT's Computer Science and Artificial Intelligence Lab (CSAIL) AI, dubbed Pensieve, takes these algorithms, but uses a neural network to intelligently work out when a system should flip between one and the other. The AI was trained on a month's worth of video content, and was given reward and penalty conditions to push it to calculate the most effective times to switch between ABR algorithms.
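
One plausible form for the "reward and penalty conditions" mentioned above is a quality-of-experience score that rewards higher bitrate while penalising stalls and abrupt quality switches. The function and weights below are assumptions for illustration, not the researchers' published objective.

```python
# Hedged sketch of a reward signal for training such a controller: reward
# bitrate, penalise rebuffering and abrupt quality switches. The weights are
# illustrative assumptions only.
def qoe_reward(bitrate_kbps: float, rebuffer_s: float, prev_bitrate_kbps: float,
               rebuffer_penalty: float = 4.3, smooth_penalty: float = 1.0) -> float:
    return (bitrate_kbps / 1000.0
            - rebuffer_penalty * rebuffer_s
            - smooth_penalty * abs(bitrate_kbps - prev_bitrate_kbps) / 1000.0)

# A chunk played at 2850 kbps after a 1200 kbps chunk, with 0.5 s of stalling:
print(round(qoe_reward(2850, 0.5, 1200), 2))   # 2.85 - 2.15 - 1.65 = -0.95
```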

This system is adjustable, meaning it can be tweaked depending on what a content provider might want to prioritise, such as consistent image quality or smoother playback. "Our system is flexible for whatever you want to optimise it for," commented MIT professor Mohammad Alizadeh in a statement. "You could even imagine a user personalising their own streaming experience based on whether they want to prioritise rebuffering versus resolution."

While the death of the buffer symbol might be cause for celebration, the researchers also point to the benefits the AI system could have for virtual reality, potentially making it much easier for people to stream high-resolution VR games and films. "This is really just the first step in seeing what we can do," noted Alizadeh.

Read more here:

MIT's new artificial intelligence could kill buffering - Alphr

Posted in Artificial Intelligence | Comments Off on MIT’s new artificial intelligence could kill buffering – Alphr

Elon Musk Is Very Freaked Out by This Artificial Intelligence System’s Victory Over Humans – Inc.com

Posted: August 15, 2017 at 12:17 pm

With all that's happening in the world, Elon Musk wants to make sure you don't forget about what he thinks is the biggest danger to humanity.

Over the weekend, Musk returned to tweeting about one of his favorite topics of discussion: artificial intelligence. He referenced the threat of nuclear war with North Korea to help make his point.

Musk's tweets came hours after an A.I. system developed by OpenAI defeated some of the world's best players at a military strategy game called Dota 2. According to a blog post by OpenAI, successfully playing the game involves predicting how an opponent will move, improvising in unfamiliar scenarios, and convincing the opponent's allies to help you instead.

OpenAI is the nonprofit artificial intelligence company Musk co-founded along with Peter Thiel and Sam Altman. The company's purpose is to research and develop A.I. and develop best practices to help ensure that the technology is used for good.

Musk has in the past called A.I. humanity's "biggest existential threat." A known A.I. fear monger, he recently got in a brief public spat with Mark Zuckerberg about the danger that the technology poses to humans. Zuckerberg, whose Facebook--like Tesla--invests heavily in artificial intelligence, referred to Musk's prophesizing about doomsday scenarios as "irresponsible." Musk responded on Twitter the next day by calling Zuckerberg's understanding of the topic "limited."

Comparing the threat of A.I. to that of nuclear war with North Korea is clearly a tactic meant to shock, as Musk has been wont to do on this topic. Earlier this year, he laid out a scenario in which A.I. systems meant to farm strawberries could lead to the destruction of mankind.

Even if Musk is speaking in hyperbole, though, it's not hard to see why an A.I. system that outsmarts humans at military strategy might be cause for concern.

Musk's opinions on the technology have been at odds with those of tech leaders like Zuckerberg, Amazon's Jeff Bezos, and Google co-founders Larry Page and Sergey Brin. All have advocated for A.I. in recent years with few, if any, reservations.

While Tesla relies heavily on artificial intelligence in developing self-driving cars, Musk continues to urge caution. In July, he told a group at the National Governors Association Summer Meeting in Rhode Island that he believes A.I. should be regulated proactively, before the need for such limitations even arises.

"I have exposure to the very cutting-edge A.I.," he said, "and I think people should be really concerned about it."

Read the original:

Elon Musk Is Very Freaked Out by This Artificial Intelligence System's Victory Over Humans - Inc.com

Posted in Artificial Intelligence | Comments Off on Elon Musk Is Very Freaked Out by This Artificial Intelligence System’s Victory Over Humans – Inc.com

What does AI mean for the future of manufacture? – Telegraph.co.uk

Posted: at 12:17 pm

The world is on the brink of the fourth industrial revolution, and it could change the way we use everything from cars to shoes.

The first three industrial revolutions brought us mechanisation, mass production and automation. Now, more than half a century after the first robots worked on production lines, artificial intelligence (AI) and machine learning are shaking things up again.

Industry 4.0 uses technologies such as the internet of things to make manufacturing smarter, allowing companies to revolutionise the way they make and ship goods. "Manufacturing is becoming less about muscle and more about brains," says Greg Kinsey, vice president of Hitachi Insight Group.

"It becomes less place-specific. You start to look at 3D printing. The shoe industry is contemplating: do we actually need to produce all these shoes in lots of variations in southeast Asia, ship them around the world, only to go to the shop and it doesn't have your size? Why not produce them at the point of sale: put your foot in the scanner, measure the size and shape, swipe your credit card and pick your shoes up later that day?"

The digital transformation of manufacturing and supply chains means that data from factories is directly analysed using technologies such as machine learning and AI. The process can lead to drastic efficiency gains, up to 10pc, says Mr Kinsey. Companies can also see manufacturing lead times slashed in half.

"Consumers will see a wider variety of products, to the point of mass customisation, where you can design your own," says Mr Kinsey. "Product will become linked to emerging demand, so we'll never be in a position where things are just out of stock."

The first stage, says Mr Kinsey, is to get rid of paper-based processes, something that many factories still rely on. Once digitised, the data can be crunched to ensure factories are operating efficiently. But the idea isn't to get rid of people; it's to augment what they do.

"When I graduated from university, I was heavily into industrial robots," says Mr Kinsey. "Everyone said that robots were going to take our jobs. But the companies that invested heavily in robots, like German car makers, are now world leaders, employing many more people than they would otherwise have done."

"When we use AI tools to predict bad quality, or to optimise the settings for a production line, we can manage it with more confidence. We have had a lot of clients tell us that this technology helps them improve the way they work. This should be the real driver of innovation."

European companies are currently leading the charge in the digital transformation of industry, says Mr Kinsey. Many are also working closely with start-ups to enhance industrial processes.

"There's a lot of interest in working with start-ups," Mr Kinsey explains. "When you embark on innovation, you don't always know what the solutions are."

The resulting Industry 4.0 may change the way we all think about products, Mr Kinsey says, and the first signs are already here.

"In Europe, you have a lot of people thinking: Do I need to own a car? That would have been unthinkable 20 or 30 years ago. Michelin already has aircraft tyres that are on a pay-per-use basis: people pay based on the number of times the jet takes off."

"You need to embrace this technology; if you don't, because you fear that you might lose some jobs, you are going to lose all the jobs, as your company will no longer be competitive. In fact, digital technologies can improve the workplace and quality of work."

See the rest here:

What does AI mean for the future of manufacture? - Telegraph.co.uk

Posted in Artificial Intelligence | Comments Off on What does AI mean for the future of manufacture? – Telegraph.co.uk

Tiny IDF Unit Is Brains Behind Israeli Army Artificial Intelligence – Haaretz

Posted: at 12:17 pm

The operational research unit of the Military Intelligence Unit, the software unit of the Israeli army's J6/C4i Directorate's Lotem Unit, doesn't look like the kind of place where state-of-the-art artificial intelligence is being put to work.

There are no espresso machines, brightly colored couches or views of Tel Aviv from the top floors of an office tower. The unit conducts its work in the backwater of Ramat Gan and has the look and feel of any other army office.

But the unit is engaged in the same kind of AI work that the world's biggest tech companies, like Google, Facebook and China's Baidu, are doing in a race to apply machine learning to such functions as self-driving cars, analysis of salespeople's telephone pitches and cybersecurity, or to fight Israel's next war more intelligently.

Maj. Sefi Cohen, 34, is head of the unit, which in effect makes him the army's chief data officer. As he explains it, his unit's mission is to provide soldiers in the field data-based insights with the help of smart tools. "We embed these capabilities in applications that help commanders in the field," he said.

One example is a system for predicting rocket launches from the Gaza Strip. "After Operation Protective Edge we developed an app that learns from field sensors and other data we collected what are the most likely areas launchers will be set up and at what hours. That enables us to know in advance what will happen and what areas should be attacked in order to fight them more effectively," he explained.

In one project the unit built a system based on neural networks whose purpose is to extract from a video a suspicious object and describe it in writing. "It won't replace human observers, but instead of looking at five cameras, it will be able to be responsible for dozens," said Cohen.

Cohen said the amount of data at his disposal from the army is endless, reaching into petabytes (one million gigabytes) in some areas. It also makes use of data from outside sources, and the apps it develops use open-source code. "We return to the world things that we use," Cohen says. "Models that are operational obviously do not go out."

Cohen got his start in the combat signals corps. Near the end of his compulsory service he completed a course in Lotem and spent another 10 years at its command and control systems unit. "I've always loved algorithms. I was already involved with them in high school and worked in the field. When I was drafted I wanted to combine the technology with combat," he recalls.

Cohen set up the unit he now leads with the help of local high-tech executives. "I convinced my commanders that we could use machine learning in combat, and from there I started to bring in more and more people," he said. The unit now comprises about 20 officers, all of them in the career army and holding advanced degrees in computer science, focusing on AI.

The unit's only female member left recently, so for the moment it's an all-male team. Cohen says most are graduates of the army's elite Talpiot program; the one who isn't has a master's from the Technion - Israel Institute of Technology. "Everyone who's here is the tops. I learn a lot from them," he said.

Read the original post:

Tiny IDF Unit Is Brains Behind Israeli Army Artificial Intelligence - Haaretz

Posted in Artificial Intelligence | Comments Off on Tiny IDF Unit Is Brains Behind Israeli Army Artificial Intelligence – Haaretz

AI washing muddies the artificial intelligence products market – TechTarget

Posted: at 12:17 pm

Analysts predict that by 2020, artificial intelligence technologies will be in almost every new software and service release. And if they're not actually in them, technology vendors will probably use smoke and mirrors marketing tactics to make users believe they are.

Many tech vendors already shoehorn the AI label into the marketing of every new piece of software they develop, and it's causing confusion in the market. To muddle things further, major software vendors accuse their competitors of egregious mislabeling, even when the products in question truly do include artificial intelligence technologies.

AI mischaracterization is one of the three major problems in the AI market, as highlighted by Gartner recently. More than 1,000 vendors with applications and platforms describe themselves as artificial intelligence products vendors, or say they employ AI in their products, according to the research firm. It's a practice Gartner calls "AI washing" -- similar to the cloudwashing and greenwashing that have become prevalent over the years as businesses exaggerate their association with cloud computing and environmentalism.

When a technology is labelled AI, the vendor must provide information that makes it clear how AI is used as a differentiator and what problems it solves that can't be solved by other technologies, explained Jim Hare, a research VP at Gartner, who focuses on analytics and data science.

"You have to go in with the assumption that it isn't AI, and the vendor has to prove otherwise," Hare said. "It's like the big data era -- where all the vendors say they have big data -- but on steroids."

"What I'm seeing is that anything typically called machine learning is now being labelled AI, when in reality it is weak or narrow AI, and it solves a specific problem," he said.

IT buyers must hold the vendor accountable for its claims by asking how it defines AI and requesting information about what's under the hood, Hare said. Customers need to know what makes the product superior to what is already available, with support from customer case studies. Also, Hare urges IT buyers to demand a demonstration of artificial intelligence products using their own data to see them in action solving a business problem they have.

Beyond that, a vendor must share with customers the AI techniques it uses or plans to use in the product and their strategy for keeping up with the quickly changing AI market, Hare said.

The second problem Gartner highlights is that machine learning can address many of the problems businesses need to solve. The buzz around more complicated types of AI, such as deep learning, gets so much hype that businesses overlook simpler approaches.

"Many companies say to me, 'I need an AI strategy' and [after hearing their business problem] I say, 'No you don't,'" Hare said.

"Really, what you need to look for is a solution to a problem you have, and if machine learning does it, great," Hare said. "If you need deep learning because the problem is too gnarly for classic ML, and you need neural networks -- that's what you look for."

When to use AI versus BI tools was the focus of a spring TDWI Accelerate presentation led by Jana Eggers, CEO of Nara Logics, a Cambridge, Mass., company that describes its "synaptic intelligence" approach to AI as the combination of neuroscience and computer science.

BI tools use data to provide insights through reporting, visualization and data analysis, and people use that information to answer their questions. Artificial intelligence differs in that it's capable of essentially coming up with solutions to problems on its own, using data and calculations.

Companies that want to answer a specific question or problem should use business analytics tools. If you don't know the question to ask, use AI to explore data openly, and be willing to consider the answers from many different directions, she said. This may involve having outside and inside experts comb through the results, perform A/B testing, or even outsource via platforms such as Amazon's Mechanical Turk.

With an AI project, you know your objectives and what you are trying to do, but you are open to finding new ways to get there, Eggers said.

A third issue plaguing AI is that companies don't have the skills on staff to evaluate, build and deploy it, according to Gartner. Over 50% of respondents to Gartner's 2017 AI development strategies survey said the lack of necessary staff skills was the top challenge to AI adoption. That statistic appears to coincide with the data scientist supply and demand problem.

Companies surveyed said they are seeking artificial intelligence products that can improve decision-making and process automation, and most prefer to buy one of the many packaged AI tools rather than build one themselves. That brings IT buyers back to the first problem of AI washing: it's difficult to know which artificial intelligence products truly deliver AI capabilities, and which ones are mislabeled.

After determining a prepackaged AI tool provides enough differentiation to be worth the investment, IT buyers must be clear on what is required to manage it, Hare said: What human services are needed to change code and maintain models over the long term? Is it hosted in a cloud service and managed by the vendor, or does the company need knowledgeable staff to keep it running?

"It's one thing to get it deployed, but who steps in to tweak and train models over time?" he said. "[IBM] Watson, for example, requires a lot of work to stand up and you need to focus the model to solve a specific problem and feed it a lot of data to solve that problem."

Companies must also understand the data and compute requirements to run the AI tool, he added; GPUs may be required and that could add significant costs to the project. And cutting-edge AI systems require lots and lots of data. Storing that data also adds to the project cost.

Read more from the original source:

AI washing muddies the artificial intelligence products market - TechTarget

Posted in Artificial Intelligence | Comments Off on AI washing muddies the artificial intelligence products market – TechTarget

What An Artificial Intelligence Researcher Fears About AI – IFLScience

Posted: at 12:17 pm

As an artificial intelligence researcher, I often come across the idea that many people are afraid of what AI might bring. It's perhaps unsurprising, given both history and the entertainment industry, that we might be afraid of a cybernetic takeover that forces us to live locked away, Matrix-like, as some sort of human battery.

And yet it is hard for me to look up from the evolutionary computer models I use to develop AI, to think about how the innocent virtual creatures on my screen might become the monsters of the future. Might I become "the destroyer of worlds," as Oppenheimer lamented after spearheading the construction of the first nuclear bomb?

I would take the fame, I suppose, but perhaps the critics are right. Maybe I shouldn't avoid asking: As an AI expert, what do I fear about artificial intelligence?

Fear of the unforeseen

The HAL 9000 computer, dreamed up by science fiction author Arthur C. Clarke and brought to life by movie director Stanley Kubrick in 2001: A Space Odyssey, is a good example of a system that fails because of unintended consequences. In many complex systems -- the RMS Titanic, NASA's space shuttle, the Chernobyl nuclear power plant -- engineers layer many different components together. The designers may have known well how each element worked individually, but didn't know enough about how they all worked together.

That resulted in systems that could never be completely understood, and could fail in unpredictable ways. In each disaster -- sinking a ship, blowing up two shuttles and spreading radioactive contamination across Europe and Asia -- a set of relatively small failures combined together to create a catastrophe.

I can see how we could fall into the same trap in AI research. We look at the latest research from cognitive science, translate that into an algorithm and add it to an existing system. We try to engineer AI without understanding intelligence or cognition first.

Systems like IBM's Watson and Google's Alpha equip artificial neural networks with enormous computing power, and accomplish impressive feats. But if these machines make mistakes, they lose on Jeopardy! or don't defeat a Go master. These are not world-changing consequences; indeed, the worst that might happen to a regular person as a result is losing some money betting on their success.

But as AI designs get even more complex and computer processors even faster, their skills will improve. That will lead us to give them more responsibility, even as the risk of unintended consequences rises. We know that to err is human, so it is likely impossible for us to create a truly safe system.

Fear of misuse

I'm not very concerned about unintended consequences in the types of AI I am developing, using an approach called neuroevolution. I create virtual environments and evolve digital creatures and their brains to solve increasingly complex tasks. The creatures' performance is evaluated; those that perform the best are selected to reproduce, making the next generation. Over many generations these machine-creatures evolve cognitive abilities.
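
The loop sketched below follows the process described in this paragraph: score each candidate, keep the best performers and mutate them into the next generation. It is a generic toy with an invented fitness function and parameters, not the author's research code.

```python
# Toy neuroevolution-style loop: evaluate, select, mutate, repeat.
import random

def fitness(weights):
    # Stand-in task: evolve weights toward a fixed target vector.
    target = [0.5, -0.2, 0.8]
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(50)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                                    # selection
    population = [[w + random.gauss(0, 0.05) for w in random.choice(parents)]
                  for _ in range(50)]                            # mutation

best = max(population, key=fitness)
print([round(w, 2) for w in best])   # converges toward the target vector
```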

Right now we are taking baby steps to evolve machines that can do simple navigation tasks, make simple decisions, or remember a couple of bits. But soon we will evolve machines that can execute more complex tasks and have much better general intelligence. Ultimately we hope to create human-level intelligence.

Along the way, we will find and eliminate errors and problems through the process of evolution. With each generation, the machines get better at handling the errors that occurred in previous generations. That increases the chances that we'll find unintended consequences in simulation, which can be eliminated before they ever enter the real world.

Another possibility that's farther down the line is using evolution to influence the ethics of artificial intelligence systems. It's likely that human ethics and morals, such as trustworthiness and altruism, are a result of our evolution and factor in its continuation. We could set up our virtual environments to give evolutionary advantages to machines that demonstrate kindness, honesty and empathy. This might be a way to ensure that we develop more obedient servants or trustworthy companions and fewer ruthless killer robots.

While neuroevolution might reduce the likelihood of unintended consequences, it doesn't prevent misuse. But that is a moral question, not a scientific one. As a scientist, I must follow my obligation to the truth, reporting what I find in my experiments, whether I like the results or not. My focus is not on determining whether I like or approve of something; it matters only that I can unveil it.

Read more:

What An Artificial Intelligence Researcher Fears About AI - IFLScience

Posted in Artificial Intelligence | Comments Off on What An Artificial Intelligence Researcher Fears About AI – IFLScience

Three ways your business can leverage artificial intelligence – The Globe and Mail

Posted: at 12:17 pm

Chris Catliff, president and CEO, BlueShore Financial

Artificial intelligence (AI) is all around us, largely unseen, but it is fundamentally changing our world and our perceptions. The algorithms that Facebook or other apps use to hook us with our preferred news are based on our past clicks and preferences within those apps. We no longer receive the news that's fit to print, but information that's not necessarily grounded in fact or objectively true: the mind candy and click bait we have shown we have an appetite for. The deep divisions of political tribalism that are gripping our neighbour down South are in part a reflection of individuals' chosen feedback loops.

We are in an Information Age where AI will affect and shape how we absorb information. Stephen Hawking and Elon Musk have said the development of full artificial intelligence could spell the end of the human race. While regulation and ethical considerations need to be factored in as AI continues to rapidly evolve, there are ways that AI can be used effectively in business today.

AI is simply the mixture of machine learning, language processing and adaptive or cognitive computing. With this sophisticated blend, AI simulates the intelligence of humans into machines. Trained humans can perform tasks but never at the speed or scale of machines, so AI plays a major role in overcoming this unfortunate limitation to human intelligence.

Technology can open up a world of limitless possibilities. Yet it can be hard to separate hype from reality. Getting a realistic picture of AI's current practical application and its most relevant benefits can be challenging. In the past, AI has stumbled in pinning down the subtle nuances of images or voice recognition. Despite the limitations, 8 out of 10 businesses have already implemented, or are planning to adopt, AI solutions. Nowhere is this truer than in banking and wealth management.

As the financial services industry is so data intensive and the ability to analyze all of the data has become more crucial, financial institutions have been early adopters. Increasing work-force productivity, identifying opportunities and accelerating innovation are critical to meeting clients' changing needs and maintaining competitive advantage. AI is also levelling the playing field between small and large businesses. At BlueShore Financial, a full-service financial boutique with $5-billion in assets under administration, we use AI to pro-actively assist our client service. We are also exploring the launch of a new robo-advice platform in wealth management next year.

High-tech and high-touch: the dynamic duo

According to Gartner, 85 per cent of all customer interactions with a business will be managed without human interaction by 2020. But despite the incredible potential of AI to transform the customer service experience, teaching a machine human intuition is not yet feasible, so AI will only serve to enhance the human element rather than eliminate it.

Instead of viewing AI as a threat, businesses need to embrace this disruptor and leverage its benefits to increase efficiency and provide more customized service. There are a multitude of ways most businesses can leverage AI, but here are three:

1. Customizing the client experience

Recommendation engines (think Netflix) can personalize the customer experience, especially for front-line employees interacting with clients. Using data about our preferences, algorithms suggest, and then employees filter with their emotional intelligence, to offer highly customized recommendations. Recommendation engines boost revenue and will continue to play a pivotal role. For employees, AI simplifies decisions and eases work flow, a case of automation complementing the human element.
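
A recommendation engine of the kind described can be sketched as similarity scoring over the products a client already holds. The product names, feature values and scoring below are invented for illustration and do not represent any vendor's engine.

```python
# Minimal content-similarity recommender sketch (illustrative data only):
# rank products a client doesn't hold by similarity to what they already own.
import numpy as np

products = ["tfsa", "mortgage", "travel_card", "term_deposit"]
features = np.array([        # rows: products, columns: hypothetical attributes
    [1.0, 0.2, 0.0],
    [0.1, 1.0, 0.3],
    [0.0, 0.4, 1.0],
    [0.9, 0.1, 0.1],
])
client_owns = {"tfsa"}

profile = features[[products.index(p) for p in client_owns]].mean(axis=0)
scores = features @ profile / (np.linalg.norm(features, axis=1) * np.linalg.norm(profile))
ranked = sorted(((s, p) for s, p in zip(scores, products) if p not in client_owns),
                reverse=True)

print([p for _, p in ranked])   # most similar unowned products first
```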

2. Accuracy in detecting fraud

AI-based systems, compared with traditional software systems used for detecting fraud, are more accurate in detecting fraudulent patterns. By using machine learning algorithms, companies can spot emerging anomalies in the data. Financial institutions are particularly vulnerable to cybercrime, where global losses from card fraud are expected to reach $31-billion in three years, and cyberattacks are becoming increasingly sophisticated. Security goals and customer experience goals need to be in sync for fraud prevention technologies to be effective.
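
The machine-learning anomaly detection described can be illustrated with an off-the-shelf algorithm such as scikit-learn's IsolationForest run over synthetic transactions. The fields and figures below are assumptions for illustration, not a production fraud model.

```python
# Illustrative anomaly-detection sketch on synthetic data: flag transactions
# that don't fit the pattern the model learned from the bulk of the data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: amount in dollars, hour of day. Mostly small daytime purchases...
normal = np.column_stack([rng.normal(60, 20, 1000), rng.normal(14, 3, 1000)])
# ...plus a couple of large small-hours transactions mixed in.
odd = np.array([[4200.0, 3.0], [3900.0, 2.5]])
transactions = np.vstack([normal, odd])

model = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
flags = model.predict(transactions)          # -1 marks suspected anomalies
print("flagged:", transactions[flags == -1][:5])
```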

3. Increasing client engagement

While chat bots are AI-based automated chat systems that can simulate human chat without human intervention, they are being extensively applied to revolutionize customer interactions. By identifying context and emotions in a text chat by the human end-user, chat bots respond with the most appropriate reply. In addition, chat bots, over time, collect data on the behaviour and habits of that individual and can learn their preferred behaviour, adapting even more to their needs and moods. By improving customized communication, customers are more likely to be far more engaged with your company.
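
The behaviour described (detect intent and emotion in a message, reply accordingly and accumulate context over time) can be sketched with simple rules. The keywords, replies and function below are invented for illustration; production chat bots rely on trained language models rather than keyword matching.

```python
# Rule-based sketch of the chat-bot behaviour described above (illustrative
# keywords and replies only).
NEGATIVE_WORDS = {"angry", "frustrated", "terrible", "unacceptable"}

INTENT_REPLIES = {
    "balance": "Your current balance is available in the app under Accounts.",
    "card": "I can block your card right away. Should I go ahead?",
    "hours": "Our branches are open 9am-5pm on weekdays.",
}

def reply(message: str, history: list[str]) -> str:
    text = message.lower()
    history.append(text)                        # accumulate context over time
    intent = next((k for k in INTENT_REPLIES if k in text), None)
    answer = INTENT_REPLIES.get(intent, "Let me connect you with an advisor.")
    if NEGATIVE_WORDS & set(text.split()):      # crude emotion check
        answer = "Sorry about the trouble. " + answer
    return answer

history: list[str] = []
print(reply("I am so frustrated - my card was declined", history))
```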

While AI and data analytics can appear daunting, the strategic benefits of investing in AI are a no-brainer for larger companies and those looking to scale fast. AI is the portal to a future that will continue to improve our lives faster than we appreciate. I look forward to it.

Executives, educators and human resources experts contribute to the ongoing Leadership Lab series.

View original post here:

Three ways your business can leverage artificial intelligence - The Globe and Mail

Posted in Artificial Intelligence | Comments Off on Three ways your business can leverage artificial intelligence – The Globe and Mail
