
Category Archives: Ai

AI is here to save your career, not destroy it – VentureBeat

Posted: February 7, 2017 at 8:15 am

Imagine: Humans waging an epic battle against technology, with human intelligence inevitably subjugated by artificial overlords. Plenty of folks would line up with front-row tickets and popcorn in hand. But it's also the very real manifestation of a universal fear: jobs relegated to machines, livelihoods handed over to bots.

But when we take a closer look at bots and other forms of artificial intelligence, our worst fears are a far cry from the truth. We've built bots to help us succeed. And instead of viewing them as our grand reckoning, we should view AI and bots as tools to exponentially expand our human capabilities in and out of the workplace. Yes, bots can make us more human in our daily lives.

Those who use bots as superhuman digital assistants will find the most success. It'll be humans to the bot-th power, rather than humans versus bots.

Much of our understanding of AI and the future is rooted in misconception. We're trepidatious about the future. It's a valid and human response that shouldn't go ignored. But the truth is, the future is already here.

Anyone who's tagged a photo of a friend on Facebook has used AI. But do people think of it that way? While 86 percent of people say they're interested in trying AI tools, 63 percent don't realize they're already using AI.

Machines are much better at quickly surfacing the most relevant information the internet holds. It's on us humans to take that knowledge and make the most informed decisions. But finding information isn't all our bot friends can help us with; they can do much more than just answer direct questions.

Soon, bots will work in the background on our behalf and initiate a conversation when something interesting has happened. We'll be prompted with a notable result, and then we'll make the choice to move forward.

It's simple, but so powerful. As technology should be.

Computers now have the ability to do what we once thought only human intelligence could handle. In the near future, AI is going to feel less artificial and more intelligent.

Humans learn from example and experience. So do machines. Machine learning allows you to tell a system what you want, not how to do it.
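As a toy illustration of that "what, not how" idea, here is a tiny nearest-neighbor classifier: we supply labeled examples of the outcome we want and never write the decision rules ourselves. The features and labels below are invented purely for illustration.

```python
import math

# Invented labeled examples: (hour_sent, email_length) -> got a reply? (1/0)
examples = [((9, 120), 1), ((14, 45), 1), ((22, 300), 0),
            ((10, 80), 1), ((23, 250), 0)]

def predict(features):
    """Classify a new case by its single nearest labeled example (1-NN)."""
    nearest = min(examples, key=lambda ex: math.dist(ex[0], features))
    return nearest[1]

print(predict((11, 100)))  # prints 1: the "how" was inferred, never coded
```

The only thing we specified was the goal (replicate the labels); the rule for new inputs falls out of the examples.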

Once something only a few PhDs wrote about, machine learning is now something millions of people benefit from. Everything from predictive learning and lead scoring to content recommendations and email optimization will get much easier for marketers and salespeople alike.

Already, 40 percent of people don't care whether they're served by an AI tool or a human, so long as the question gets answered. Only 26 percent say the same for more complicated customer requests. But serving those customers well will take (you guessed it) bots.

If you want your employees and business to benefit from all this machine learning, you'll need to invest in getting the data into one centralized place. After all, the data is what gives machine learning the "learning" part. There's no learning without the data.

Not only is AI the future of marketing and sales, it's the future of the inbound movement. AI and bots allow you to provide highly personalized, helpful, and human experiences for your customers. It may not be a summer blockbuster fit for theaters, but AI and bots sure feel like they're fit for businesses.

Originally posted here:

AI is here to save your career, not destroy it - VentureBeat


AI For Matching Images With Spoken Word Gets A Boost From MIT – Fast Company

Posted: at 8:15 am

Children learn to speak, as well as recognize objects, people, and places, long before they learn to read or write. They can learn from hearing, seeing, and interacting without being given any instructions. So why shouldn't artificial intelligence systems be able to work the same way?

That's the key insight driving a research project under way at MIT that takes a novel approach to speech and image recognition: Teaching a computer to successfully associate specific elements of images with corresponding sound files in order to identify imagery (say, a lighthouse in a photographic landscape) when someone in an audio clip says the word "lighthouse."

Though the project is in the very early stages of what could be a years-long process of research and development, the implications of the MIT effort, led by PhD student David Harwath and senior research scientist Jim Glass, are substantial. Along with being able to automatically surface images based on corresponding audio clips and vice versa, the research opens a path to language-to-language translation without the laborious step of training AI systems on the correlation between two languages' words.

That could be particularly important for deciphering languages that are dying because there aren't enough native speakers to warrant the expensive investment in manual annotation of vocabulary by bilingual speakers, which has traditionally been the cornerstone of AI-based translation. Of 7,000 spoken languages, Harwath says, speech recognition systems have been applied to fewer than 100.

It could even eventually be possible, Harwath suggested, for the system to translate languages with little to no written record, a breakthrough that would be a huge boon to anthropologists.

"Because our model is just working on the level of audio and images," Harwath told Fast Company, "we believe it to be language-agnostic. It shouldn't care what language it's working on."

t-SNE analysis of the 150 lowest-variance audio pattern cluster centroids for k = 500. Displayed is the majority-vote transcription of each audio cluster. All clusters shown contained a minimum of 583 members and an average of 2,482, with an average purity of 0.668.

The MIT project isn't the first to consider the idea that computers could automatically associate audio and imagery. But the research being done at MIT may well be the first to pursue it at scale, thanks to the "renaissance" in deep neural networks, which involve multiple layers of neural units that mimic the way the human brain solves problems. The networks require churning through massive amounts of data, and so they've only taken off as a meaningful AI technique in recent years as computers' processing power has increased.

That's led just about every major technology company to go on hiring sprees in a bid to automate services like search, surfacing relevant photos and news, restaurant recommendations, and so on. Many consider AI to be perhaps the next major computing paradigm.

"It is the most important computing development in the last 20 years," Jen-Hsun Huang, the CEO of Nvidia, one of the world's largest makers of the kinds of graphics processors powering many AI initiatives, told Fast Company last year, "and [big tech companies] are going to have to race to make sure that AI is a core competency."

Now that computers are powerful enough to begin applying deep neural networks to speech recognition, the key is to develop better algorithms. In the case of the MIT project, Harwath and Glass believe that by employing more organic speech recognition algorithms, they can move faster down the path to truly intelligent artificial systems along the lines of what characters like C-3PO portray in the Star Wars movies.

To be sure, we're many years away from such systems, but the MIT project is aiming to excise one of the most time-consuming and expensive pieces of the translation puzzle: requiring people to train models by manually labeling countless collections of images or vocabularies. That laborious process involves people going through large collections of imagery and annotating them, one by one, with descriptive keywords.

Harwath acknowledges that his team spent quite a lot of time starting in late 2014 doing that kind of manual, or supervised, learning on sound files and imagery, and that afforded them a "big collection of audio."

Now, they're on to the second version of the project: building algorithms that can learn both language and the real-world concepts the language is grounded in, and do so using very unstructured data.

Here's how it works: The MIT team sets out to train neural networks on what amounts to a game of "which one of these things is not like the other," Harwath explains.

They want to teach the system to understand the difference between matching pairs (an image of a dog with a fluffy hat and an audio clip with the caption "dog with a fluffy hat") and mismatched pairs, like the same audio clip and a photo of a cat.

Matches get a high score and mismatches get a low score, and when the goal is for the system to learn individual objects within an image and individual words in an audio stream, they apply the neural network to small regions of an image, or small intervals of the audio.
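In code, that pair-scoring idea boils down to a similarity score between an image-region embedding and an audio-interval embedding, trained so matches outscore mismatches. The sketch below is a toy illustration, not the MIT team's actual model: the embeddings are hand-picked stand-ins for what trained encoders would produce, and the margin loss is one common way such a ranking objective is written.

```python
# Pretend "trained" embeddings in a shared space (invented numbers):
dog_image = [0.9, 0.1, 0.0]   # embedding of a dog region in a photo
dog_audio = [0.8, 0.2, 0.1]   # embedding of someone saying "dog"
cat_audio = [0.1, 0.1, 0.9]   # embedding of someone saying "cat"

def score(image_emb, audio_emb):
    """Dot product in the shared space: high = match, low = mismatch."""
    return sum(x * y for x, y in zip(image_emb, audio_emb))

print(score(dog_image, dog_audio) > score(dog_image, cat_audio))  # True

# A margin ranking loss pushes matched pairs above mismatched ones;
# training would adjust the encoders to drive this toward zero.
margin = 1.0
loss = max(0.0, margin - score(dog_image, dog_audio) + score(dog_image, cat_audio))
```

Applying the same score to small image regions and short audio intervals, rather than whole images and clips, is what lets the system localize individual objects and words.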

Right now the system is trained on only about 500 words, yet it's often able to recognize those words in new audio clips it has never encountered. The system is nowhere near perfect: for some word categories, Harwath says, the accuracy is in the 15%-20% range. But in others, it's as high as 90%.

"The really exciting thing," he says, "is it's able to make the association between the acoustic patterns and the visual patterns. So when I say 'lighthouse,' I'm referring to a particular [area] in an image that has a lighthouse, [and it can] associate it with the start and stop time in the audio where you say 'lighthouse.'"

A different task that they frequently run the system through is essentially an image retrieval task, something like a Google image search. They give it a spoken query, say, "Show me an image of a girl wearing a blue dress in front of a lighthouse," and then wait for the neural network to search for an image that's relevant to the query.

Here's where it's important not to get too excited about the technology being ready for prime time. Harwath says the team considers the results of a query accurate if the appropriate image comes up in the top 10 results from a library of only about 1,000 images. The system is currently able to do that just under 50% of the time.
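The measure described here is what retrieval researchers usually call recall@10: a query counts as a hit if the correct image lands anywhere in the top 10 of the ranked candidates. A minimal sketch of how that number would be computed (the rankings and IDs below are hypothetical):

```python
def recall_at_k(ranked_ids_per_query, relevant_id_per_query, k=10):
    """Fraction of queries whose correct item appears in the top k results."""
    hits = sum(rel in ranked[:k]
               for ranked, rel in zip(ranked_ids_per_query, relevant_id_per_query))
    return hits / len(relevant_id_per_query)

# Two toy queries over a small candidate pool, ranked by descending score
rankings = [[5, 2, 9, 1, 7], [3, 8, 4, 6, 0]]
truth    = [9, 0]  # the correct image ID for each query
print(recall_at_k(rankings, truth, k=3))  # prints 0.5: one hit, one miss
```

On the real task the pool is about 1,000 images and k is 10, so the just-under-50% figure means roughly every other spoken query surfaces its image in the top ten.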

The number is improving, though. When Harwath and Glass wrote a paper on the project for an upcoming conference in France, it was 43%. Still, he believes that although accuracy increases every time they train a new model, they're held back by the available computational power. Even with a set of eight powerful GPUs, it can still take two weeks to train a single model.

An example of our grounding method. The left image displays a grid defining the allowed start and end coordinates for the bounding box proposals. The bottom spectrogram displays several audio region proposals drawn as the families of stacked red line segments. The image on the right and spectrogram on the top display the final output of the grounding algorithm. The top spectrogram also displays the time-aligned text transcript of the caption, so as to demonstrate which words were captured by the groundings. In this example, the top three groundings have been kept, with the colors indicating the audio segment that is grounded to each bounding box.

Perhaps the most exciting potential of the research is in breakthroughs for language-to-language translation.

"The way to think about it is this," Harwath says. "If you have an image of a lighthouse, and if we speak different languages but describe the same image, and if the system can figure out the word I'm using and the word you're using, then implicitly, it has a model for translating my word to your word . . . It would bypass the need for manual translations and a need for someone who's bilingual. It would be amazing if we could just completely bypass that."

To be sure, that is entirely theoretical today. But the MIT team is confident that at some point in the future, the system could reach that goal. It could be 10 years, or it could be 20. "I really have no idea," he says. "We're always wrong when we make predictions."

In the meantime, another challenge is coming up with enough quality data to satisfy the system. Deep neural networks are very hungry models.

Traditional machine learning models were limited by diminishing returns on additional data. "If you think of a machine learning algorithm as an engine, data is like the gasoline," he says. "Then, traditionally, the more gas you pour into the engine, the faster it runs, but it only works up to a point, and then levels off.

"With deep neural networks, you have a much higher capacity. The more data you give it, the faster and faster it goes. It just goes beyond what older algorithms were capable of."

But he thinks no one is sure of the outer limits of deep neural networks' capacity. The big question, he says, is how far a deep neural network will scale. Will it saturate at some point and stop learning, or just keep going?

"We haven't reached this point yet," Harwath says, "because people have been consistently showing that the more data you give them, the better they work. We don't know how far we can push it."

Read more:

AI For Matching Images With Spoken Word Gets A Boost From MIT - Fast Company


Teach undergrads ethics to ensure future AI is safe – compsci boffins – The Register

Posted: at 8:15 am

Universities should step up efforts to educate students about AI ethics, according to a panel of experts speaking at the AAAI conference in San Francisco on Monday.

Machine learning is constantly advancing as new algorithms are developed and as hardware to accelerate computations improves. As the capabilities of AI systems increase, so do fears that this progressing technology will be abused to trample on people's privacy and other rights.

Sure, there are magazines and blogs full of academics wringing their hands about seemingly impossible conscious computers wrestling with moral dilemmas. But before we get to that point in AI development, there are still modern-day practical problems to consider. Say a program decides which medication you should take: shouldn't you be able to pick apart how it came to that conclusion? What if the prescription is based on a paid-for bias in the model in favor of a particular pharmaceutical giant?

When a machine harms a person, who is at fault? How do you, as an engineer, design your system so that a machine doesn't hurt or cause damage?

Several groups, such as the Partnership on AI and The Ethics and Governance of Artificial Intelligence Fund, have sprung up to try to keep tech in check. More directly, though, undergrads should be made aware of the moral and ethical issues surrounding technology; good practices should be drilled into the next generation of engineers, the conference was told.

Robots are particularly worrying. It's already difficult to explain decisions made by algorithms, but when they are applied to physical machines capable of directly affecting the environment, it's no wonder that alarm bells are ringing.

"More robots and AI are functioning as members of society," said Ben Kuipers, a professor of computer science and engineering at the University of Michigan.

"We worry about robot behavior," he told the audience. "With no sense of what's appropriate, and what's not, they may do great harm." Prof Kuipers uses the example of Robot from the sci-fi comedy flick Robot & Frank, who willingly lies and breaks the law in pursuit of its goals.

Even if a robot's mission is a human-given, top-level goal, it will create subgoals and execute them in unexpected ways to fulfill its main task. To design robots to be trustworthy, a solid grounding in engineering is not enough; philosophy is needed.

Prof Kuipers pointed to the theories of utilitarianism, deontology, and virtue ethics as sources of useful clues for ethical frameworks.

Illah Nourbakhsh, a professor of robotics at Carnegie Mellon University, agreed. In his online robotics and ethics teaching guide, he wrote: "First, students need access to formal ethical frameworks that they can use to study and evaluate ethical consequence in robotics well enough to make their own well-informed decisions. Second, students need to understand the downstream impact of media-making well enough to help the field as a whole communicate with the public authentically and effectively about robotics and its ramifications on society."

But rigid ethical frameworks aren't always the best way to model moral problems in AI, Judy Goldsmith, a professor of computer science at the University of Kentucky, told the audience.

"Case studies are rarely memorable, emotionally gripping, or subtle. There is no character development, and often there's a right answer," she said. Prof Goldsmith prefers science fiction, as it provides a rich vein of ethical dilemmas, and an emotional connection to stories makes discussions memorable when real-world dilemmas arise.

Read more from the original source:

Teach undergrads ethics to ensure future AI is safe compsci boffins - The Register


Wearable AI Detects Tone Of Conversation To Make It Navigable (And Nicer) For All – Forbes

Posted: February 6, 2017 at 3:22 pm


Made possible in part by the Samsung Strategy and Innovation Center, the work centered on using both physical feedback and audio data to train AI for the task of analyzing, and recognizing, when conversations take a turn. Study participants were asked ...

Link:

Wearable AI Detects Tone Of Conversation To Make It Navigable (And Nicer) For All - Forbes


Who Leads On AI: The CIO Or The CDO? – Forbes

Posted: at 3:22 pm


It's clear that AI isn't an IT-only focus; it requires a combination of teams and personnel to work together. At independent investment research firm Morningstar, the data, technology and analytics teams are all involved with different aspects of AI ...

Go here to read the rest:

Who Leads On AI: The CIO Or The CDO? - Forbes


AI and the Ghost in the Machine – Hackaday

Posted: at 3:22 pm


Over the last 50 years, interest in AI development has waxed and waned with public interest and the successes and failures of the industry. Predictions made by researchers in the field, and by science fiction visionaries, have often fallen short of ...

Read more here:

AI and the Ghost in the Machine - Hackaday


Why Google, Ideo, And IBM Are Betting On AI To Make Us Better Storytellers – Fast Company

Posted: at 3:22 pm

Sharing emotion-driven narratives that resonate with other people is something humans are quite good at. We've been sitting around campfires telling stories for tens of thousands of years, and we still do it. One reason is that it's an effective way to communicate: We remember stories.

But what makes for good storytelling? Mark Magellan, a writer and designer at Ideo U, puts it this way: "To tell a story that someone will remember, it helps to understand his or her needs. The art of storytelling requires creativity, critical-thinking skills, self-awareness, and empathy."

All those traits are fundamentally human, but as artificial intelligence (AI) becomes more commonplace, even experts whose jobs depend on possessing those traits (people like Magellan) foresee it playing a bigger role in what they do.

Connecting with an audience has always been something of an art form; it's part of the magic of a great storyteller. But AI is steadily converting it into a science. The AI-driven marketing platform Influential uses IBM's Watson to connect brands with audiences. It finds social media influencers who can help spread a brand's message to target demographics in a way that feels authentic and, well, human.

Ryan Detert, Influential's CEO and cofounder, says that the tool uses two of Watson's services, Personality Insights and AlchemyLanguage, to look at the content written by an influencer, analyze that text, and score it across 52 personality traits, like "adventurousness," "achievement striving," and "openness to change." To date, says Detert, Influential has gathered these insights on 10,000 social media influencers with over 4 billion followers altogether.

Once a brand comes to Influential with its marketing goals, the platform uses Watson to identify the traits most strongly expressed by that brand, then matches it with influencers whose personalities, social media posts, and followers best reflect those traits. If a brand narrative wants to project adventurousness, Influential will find influencers who score highly on that characteristic and whose followers respond well to it.
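A rough sketch of how that kind of trait matching could work: represent the brand and each influencer as vectors over the same personality traits and rank candidates by cosine similarity. The trait names, handles, and scores below are invented for illustration; this is not Influential's actual algorithm or data.

```python
import math

# Hypothetical scores over three of the 52 traits, on a 0-1 scale
traits = ["adventurousness", "achievement striving", "openness to change"]

brand = [0.9, 0.3, 0.7]
influencers = {
    "@trailblazer": [0.8, 0.4, 0.6],
    "@deskbound":   [0.1, 0.9, 0.2],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the trait profiles point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Pick the influencer whose profile best aligns with the brand's
best = max(influencers, key=lambda name: cosine(brand, influencers[name]))
print(best)  # prints @trailblazer
```

An adventurous brand lands on the adventurous influencer; the follower-response data the article mentions would layer on top of a profile match like this one.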

Influential worked with Kia on a 2016 Super Bowl ad featuring Christopher Walken, and Detert notes, "We saw a 30% higher level of engagement on FTC posts, which are branded posts [flagged] with [a hashtag like] #Ad or #Sponsored. The more the brand and influencers' voices are aligned," he says, "the greater the engagement, sentiment, ad recall, virality, and clicks." The influencers that the AI technology pinpointed, says Detert, "outperformed their regular organic content with these branded posts." In other words, the machine learned how to connect with the influencers' fans even better than the influencers themselves did.

Influential's Watson-powered AI tool figured out how to get this Kia ad to resonate with influencers' followers more powerfully than those influencers' own posts did.

Influential also uses Watson's AI to analyze social buzz and tell brands how they're being perceived. Sometimes, says Detert, that means telling brands, "You're not the brand you think you are," and going back to the drawing board to come up with a better story.

Somatic is a digital marketing company whose experiments with machine learning show the technology's potential in visually driven storytelling, too. One of its tools, called "Creative Storyteller," uses AI to scan photos and generate short text descriptions of what it sees, but not in generic prose.

The tool, says Somatic founder and CEO Jason Toy, can write about visual data in different styles or genres, even mimicking the prose styles of celebrities. As long as there's enough written content out there for Creative Storyteller to be trained on, Toy says it can do a pretty good impression.

Creative Storyteller has been used with major companies to turn an ordinary marketing campaign into an interactive one. In one case, says Toy, "We built an interactive ad where a user uploads a picture and a model talks to them in a style of someone else about that pic."

Such short-form stories work well, but longer text often fails because the AI lacks context, notes Toy. "These machines are able to learn the information you give them. It seems magical at first, but then cracks appear with longer text."

Google AI researcher Margaret Mitchell's work may eventually fill cracks like those. She hopes her research, which is geared toward "helping AI start to understand things about everyday human life," can start to push machines beyond just generating "literal content, like you get in image captioning," toward anticipating how those descriptions will make people feel.

Says Mitchell, "There is increasing interest in developing humanistic AI that can understand human behaviors and relations."

[Image: via Somatic]

Now for the inevitable question: Will this "humanistic AI" ever beat humans at their own game? Suzanne Gibbs Howard, a partner at Ideo and founder of Ideo U, believes collaboration between human storytellers and machines is more likely in the near term. Some of the questions she's considering include, "How might the world's storytellers leverage knowledge and insights via AI to make their stories even more powerful, faster? Might AI be a prototyping tool?"

Magellan, Gibbs Howard's colleague at Ideo U, believes the answer is yes; AI has already shown its ability to "explore unmet or latent needs" in an audience that a human storyteller might miss. That could prove helpful for planning and refining a story. "It's not hard to imagine AI crowdsourcing story plots from the internet and identifying people's needs from social media," he muses.

Jason Toy also sees collaboration with AI as the model to strive toward. "I see them as systems that work with humans. They'll always need the human as high-level architect. Storytellers need to think about how the story will be felt, told, and the medium."

"It's all about practicing empathy," stresses Magellan. And for all the strides in AI research that he's seen, empathy just doesn't appear to be a skill machines will pick up too soon. "There's a level of emotional intelligence you must possess as a storyteller," he says. "Until robots gain that, we've got a leg up on them!"

In fact, storytelling may be one way to future-proof your job. Spend some more time around the campfire, but don't be afraid if a robot turns up to help.

Darren Menabney lives in Tokyo, where he leads global employee engagement at Ricoh, teaches MBA students at GLOBIS University, coaches online for Ideo U, and supports the Japanese startup scene. Follow him on Twitter at @darmenab.

The rest is here:

Why Google, Ideo, And IBM Are Betting On AI To Make Us Better Storytellers - Fast Company


Roses are red, violets are blue. Thanks to this AI, someone’ll fuck you. – The Next Web

Posted: at 3:22 pm

One of the most interesting companies I've had the pleasure to discover over the past few months is Atomic Reach: a Toronto-based startup with an AI that can understand, contextualize, and improve upon language. The service itself can cost thousands of dollars each month, and it's aimed at large enterprises with sizable content marketing budgets.

And now for a limited time, Atomic Reach is letting you use its AI platform to improve your dating profile, which is a bit like using a sledgehammer to crack a walnut.


The aim behind this initiative, which Atomic Reach is calling Atomic Love, is to demonstrate that its proprietary machine learning platform can be applied to a variety of different texts, not just corporate blog posts, tutorials, and web copy.

"We wanted to show the impact that our Atomic AI platform has on all pieces of written language, including dating profiles," said Kerri Henneberry, director of marketing for Atomic Reach, in a statement.

The way Atomic Love works is pretty simple. First, you select the type of person you want to meet. It gives you five distinct categories to choose from, namely specialist, genius, knowledgeable, academic, and general. Then, sign up with your email address, and copy in your profile text. Atomic AI will then parse it through its machine learning algorithm, making suggestions that will (at least, in theory) make it more attractive to your target audience.

So, how well does it work? Too lazy (and too engaged) to write my own profile, I grabbed a template eHarmony profile and copied it in. The profile, while admittedly a little schmaltzy, read well. It was earnest. Funny, even.

But Atomic Love found areas for improvement. Some language could be simplified, while other words could be more emotionally intense.

For example, it suggested I replace "connecting" with "hitting," which is pretty reasonable. Connecting is what you do on LinkedIn. But if a date goes well, you hit it off.

It's worth emphasizing that Atomic Reach isn't making any promises as to its efficacy. While Atomic AI has been able to increase pageviews and engagement in corporate environments, the online dating world is untested territory for the company.

You can check out Atomic Love from today. Be quick, though: the site is only available until the end of February.


Go here to read the rest:

Roses are red, violets are blue. Thanks to this AI, someone'll fuck you. - The Next Web


Realdoll builds artificially intelligent sex robots with programmable personalities – Fox News

Posted: at 3:22 pm

Sex doll manufacturer Realdoll is dipping its toe (and we don't want to know which other body parts) into the world of artificial intelligence and robotics with a forthcoming robot sex assistant that promises to form a "real bond" with its, erm, users.

The new system is made up of several components, which will roll out over the course of this year and next. It will begin with the Harmony AI app, scheduled for release on April 15, followed by the company's "first robotic head systems," set to launch by the end of the year. A virtual reality platform will ship sometime in 2018.

It's not going to be cheap, mind you: the head alone will set you back $10,000. No pun intended.


"We are developing the Harmony AI system to add a new layer to the relationships people can have with a Realdoll," Realdoll CEO Matt McMullen told Digital Trends. "Many of our clients rely on their imaginations to a great degree to impose imagined personalities on their dolls. With the Harmony AI, they will be able to actually create these personalities instead of having to imagine them. They will be able to talk to their dolls, and the AI will learn about them over time through these interactions, thus creating an alternative form of relationship. The scope of conversations possible with the AI is quite diverse, and not limited to sexual subject matter."

From the sound of things, the Harmony system may be backward-compatible with some of the existing dolls the company offers, although we're sure that more clarification will be made available at a later date. With the AI system, users can choose from a range of personality traits (kind, sexual, shy, naive, brainy, etc.) and then choose how strongly these characteristics are ingrained in their new acquaintance.

"We feel that this system, and this technology, will appeal to a segment of the population that struggles with forming intimate connections with other people, whether by choice or circumstance," McMullen continued. "Furthermore, it will likely attract those who seek to explore uncharted and new territory where relationships and sex are concerned."

We do worry about the resale value, though.

See more here:

Realdoll builds artificially intelligent sex robots with programmable personalities - Fox News


ZeroStack Launches AI Suite for Self-Driving Clouds – Yahoo Finance

Posted: at 3:22 pm

MOUNTAIN VIEW, Calif.--(BUSINESS WIRE)--

ZeroStack, the leader in making self-driving private cloud affordable for all companies, today announced its roadmap and first suite of artificial intelligence (AI) capabilities derived from machine learning. These capabilities build the foundation for self-driving clouds, making deploying, running, and managing an on-premises cloud as hands-off as using a public cloud. Whereas other on-premises clouds require major investments in IT infrastructure and internal skills, ZeroStack's intelligent cloud platform, aimed at empowering application developers, leverages self-healing software and algorithms developed from over one million datagrams. This economic disruption frees businesses to choose clouds for application development based on data locality, governance, performance, and costs without technology adoption restricting their choices.

"With ZeroStack's vision for automated cloud, and this first release of real capabilities, I believe they are the only credible cloud vendor to employ artificial intelligence in the service of enterprise customers," said Torsten Volk, senior analyst at Enterprise Management Associates. "Given the increasing complexity of IT operations, deploying AI is an optimal way of managing costs."

"The future of the datacenter is AI, because fewer and fewer companies want to manage any infrastructure. As a result, the responsibility to manage increasing complexity is shifting from the customer to the vendor," said Dr. Jim Metzler, principal analyst at Ashton, Metzler and Associates. "By incorporating AI technology into their software, ZeroStack is at the forefront of these tidal changes in IT."

ZeroStack's AI Suite

Designed by senior engineers from VMware and Google, ZeroStack's intelligent cloud platform collects operational data and leverages machine learning to help customers make decisions about capacity planning, troubleshooting, and optimized placement of applications. ZeroStack's vision is to extend existing functionality in three phases:

"ZeroStack has continually worked to reduce IT's I&O burden for enterprise customers, and our AI software strategy points the way to the future of IT operations," said Kamesh Pemmaraju, vice president of product management at ZeroStack. "As placement and management of customer workloads increase datacenter complexity, AI will be a key requirement for cost-effective management, and we are at the forefront of using this technology."


About ZeroStack

ZeroStack uses smart software and artificial intelligence to deliver a self-driving, fully integrated private cloud platform that offers the agility and simplicity of public cloud at a fraction of the cost. On premises, ZeroStack's cloud operating system converts bare-metal servers into a reliable, self-healing cloud cluster. This cluster is consumed via a self-service SaaS portal. The SaaS portal also collects operational data and uses artificial intelligence to create models that help customers make decisions about capacity planning, troubleshooting, and optimized placement of applications. The integrated AppStore enables 1-click deployment of many applications that provide the platform for most modern cloud native applications. This solution is fully integrated with public clouds to offer seamless migration between clouds. The company is funded by Formation 8 and Foundation Capital, and is based in Mountain View, California. For more information, visit http://www.zerostack.com or follow us on Twitter @ZeroStackInc.

View source version on businesswire.com: http://www.businesswire.com/news/home/20170206005249/en/

Read more:

ZeroStack Launches AI Suite for Self-Driving Clouds - Yahoo Finance

