
Category Archives: Ai

Blueshift’s AI helps platform focus on individuals and continuous journeys – MarTech Today

Posted: June 9, 2017 at 1:18 pm

Personalization platform Blueshift is today launching AI-powered customer journeys that move its targeting from user segments to individuals, and its focus from single campaign responses to continuous customer journeys.

Blueshift provides personalized marketing through content recommendations, email marketing, and, for mobile devices, push notifications and SMS.

The company's AI has previously been employed for capabilities like Predictive Scores, which evaluate such things as which customers are likely to bolt, or to make the most appropriate product or content recommendations to site visitors. A Score might look at data showing, for instance, that certain telco customers are rarely using their data services.

Now, the AI is being used to continually optimize customer journeys. Previously, Predictive Scores were point-in-time snapshots that resulted in a specific campaign effort aimed at a group of users, like sending a discount offer via email. Now the scores are read continually, so that a user can be placed into a customer journey as soon as that individual's Score exceeds a threshold.

The AI determines at what point in the customer journey, a continuous series of marketing responses, to place the particular individual. A journey can also be triggered by a specific event or user behavior.
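The continuous-scoring idea above can be sketched in a few lines. All of the function and field names here are hypothetical illustrations, not Blueshift's actual API, and the churn score is a made-up example:

```python
# Hypothetical sketch of threshold-triggered journey entry.
# Names and data shapes are illustrative assumptions only.

def update_journeys(users, score_fn, threshold, journey):
    """Continuously-read scores: enroll a user in the journey the
    moment their predictive score crosses the threshold."""
    enrolled = []
    for user in users:
        score = score_fn(user)
        if score >= threshold and user["id"] not in journey:
            journey[user["id"]] = {"step": 0, "entry_score": score}
            enrolled.append(user["id"])
    return enrolled

# Example: a churn-risk score derived from declining data usage,
# echoing the telco example above (low usage -> high churn risk).
def churn_score(user):
    return 1.0 - user["data_usage_ratio"]

journey = {}
users = [
    {"id": "u1", "data_usage_ratio": 0.9},   # active customer, low risk
    {"id": "u2", "data_usage_ratio": 0.1},   # dormant customer, high risk
]
newly_enrolled = update_journeys(users, churn_score, 0.7, journey)
```

Run on every scoring pass, this enrolls only users whose score has just crossed the threshold, which is the "as soon as" behavior described above.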

Co-founder and CEO Vijay Chittoor told me the big takeaway is that marketers plan customer journeys, but the solutions have [largely] been manual, such as when to start customers on a specific journey. Now, he says, AI is helping Blueshift automatically place a customer on the journey as soon as predictive scoring shows a flag.

The platform's AI is also being summoned so that A/B testing of content recommendations can compare recommendation logic. While there was A/B testing of content recommendations before, Chittoor said, it wasn't tuned to determine if, say, recommendation logic based on previous content you chose was better than logic based on recommending content because of what others like you liked.
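A rough sketch of what testing recommendation *logics* (rather than individual pieces of content) might look like; the bucketing scheme and all names are illustrative assumptions, not Blueshift's implementation:

```python
import hashlib

# Hypothetical A/B test between two recommendation logics:
# "content you chose before" vs. "what others like you liked".

def pick_variant(user_id, variants):
    """Deterministically bucket a user into one recommendation logic."""
    h = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return variants[h % len(variants)]

def recommend_from_history(user):      # based on the user's own choices
    return sorted(user["history"])[:3]

def recommend_from_lookalikes(user):   # based on similar users' favorites
    return sorted(user["peer_favorites"])[:3]

variants = {
    "history": recommend_from_history,
    "lookalike": recommend_from_lookalikes,
}
user = {"id": "u42", "history": ["a", "b", "c", "d"],
        "peer_favorites": ["x", "y", "z"]}
logic_name = pick_variant(user["id"], list(variants))
recs = variants[logic_name](user)
```

Because the bucket is a deterministic hash of the user ID, each user always sees the same logic, which keeps the comparison between the two logics clean.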

Blueshift is also adding an ability to determine which step in a journey had the biggest impact, compared to a prior ability to only evaluate an entire journey. Chittoor said that, although AI is not powering this enhancement, AI can be used to optimize the journey once this step-by-step attribution is completed.

Here's Blueshift's visualization of these enhancements:

Original post:

Blueshift's AI helps platform focus on individuals and continuous journeys - MarTech Today

Posted in Ai | Comments Off on Blueshift’s AI helps platform focus on individuals and continuous journeys – MarTech Today

AI: Where did it come from, where will it go? – ITProPortal

Posted: at 1:18 pm

Artificial intelligence is a topic that's been discussed for decades, but it's an industry still very much in its infancy - we're only seeing the beginning of its capabilities. There are areas where AI has become heavily relied upon - such as algorithmic trading - but, in general, broad adoption of the technology is still marginal. As an industry it's a toddler, you could say, but we're at a point in time where we can expect to see it grow up - and fast.

There have been notable achievements and breakthroughs over the years which we can look at to get a better understanding of where AI is today. First came the expert systems adopted in the 70s and 80s for use in our cars, PCs, and other forms of manufacturing, but which failed dramatically when applied to fields such as healthcare, and so hit a barrier to wider adoption.

Google's search engine and Amazon's recommendation system were the masterpieces of the next AI wave in the 90s, which introduced today's pattern recognition boom. This is all about AI learning to recognise features and patterns in complex data even where humans fail to identify them. The biggest success stories here are in:

An important milestone was the highly publicised machine-over-man triumph in 1997, when IBM's Deep Blue chess computer beat the reigning world chess champion Garry Kasparov. This was symbolically significant because it was one of the first demonstrable examples of a machine outperforming a world leader in its field. The Deep Blue victory established an understanding that AI could be used to solve very complex problems. If it could beat the best chess player in the world, what could it do next?

Since then, AI has found its way further into online user experiences and the optimisation of online ads, but hasn't been adopted as fast as many predicted 20 years ago. However, recent advances in deep learning, exemplified by Google's AlphaGo and its surprise win over the world's elite Go players in 2016, signal that a new generation of AI algorithms is making its way into the market. This suggests that the next 20 years will see the importance of AI accelerate in almost any industry.

Were now seeing three pillars of AI markets which are all developing in different ways, with various companies operating within them.

If we look at the forecast for AI across the next five years, the biggest trend impacting the corporate world is the importance of external data and how AI will be used to incorporate it into more proactive decision-making processes. External data is one of the biggest blind spots in corporate decision making today, with many executives making decisions primarily based upon internal insights. This is a very reactive approach because internal data is a lagging performance indicator: it looks at the results of historic events - weeks, months, quarters, sometimes years in the past.

In external data, however, you can find many forward-looking insights about your entire competitive landscape. By monitoring job postings you can track - in real time - the appetite for investments among competitors, partners, distributors, and suppliers. You can also harness insights into how competitors spend their online marketing dollars; do they increase their spend in Europe or are they doubling down in North America? By mining social media, you can pick up on changing trends in consumer preferences, informing investment decisions in existing or new product lines. By analysing external data, executives can find forward-looking insights and indicators to help them stay on top of changes in their competitive landscape and to be proactive in their decision making. We call this approach OI (Outside Insight), and over time we believe the need to analyse external data will grow into an entirely new software category, analogous to what BI (Business Intelligence) is for internal data.

That said, the ultimate potential market for AI is very large and will extend far beyond its current scope. The industries AI is having an impact on will continue to expand, with transportation, food and drink, healthcare, finance and risk assessment likely to be the most transformed by new approaches. We'll also continue to see even more successful targeting outside of adtech; specifically moving into politics (Cambridge Analytica, Palantir), journalism (Buzzfeed, targeted content farming) and healthcare.

Three key factors driving AI going forward are:

Combined, these three factors will make AI stronger, more reliable and more relevant for an increasing number of decision makers across functions in any industry. As such, AI will play a meaningful role in total corporate IT spend within the next decade. It's reasonable to expect it to grow into the hundreds of billions of dollars, if not more.

Although the development of AI is at an exciting stage, there are still challenges related to the experience and skill required to design new systems. There will be big changes in the near future, but the best is yet to come.

Jorn Lyseggen, Founder & CEO, Meltwater

Image Credit: John Williams RUS / Shutterstock

See the article here:

AI: Where did it come from, where will it go? - ITProPortal

Posted in Ai | Comments Off on AI: Where did it come from, where will it go? – ITProPortal

Watch Out: You’re in Ai Weiwei’s Surveillance Zone – The New York … – New York Times

Posted: June 8, 2017 at 11:10 pm


Surveillance images from overhead cameras are projected on the floor as part of Hansel & Gretel, an installation at the Park Avenue Armory created by Ai ...
Ai Weiwei Gets Artsy-Fartsy About Surveillance | WIRED
Herzog & de Meuron and Ai Weiwei Examine the Threat of ... | ArchDaily


Continued here:

Watch Out: You're in Ai Weiwei's Surveillance Zone - The New York ... - New York Times

Posted in Ai | Comments Off on Watch Out: You’re in Ai Weiwei’s Surveillance Zone – The New York … – New York Times

DeepMind Shows AI Has Trouble Seeing Homer Simpson’s Actions – IEEE Spectrum

Posted: at 11:10 pm

The best artificial intelligence still has trouble visually recognizing many of Homer Simpson's favorite behaviors, such as drinking beer, eating chips, eating doughnuts, yawning, and the occasional face-plant. Those findings from DeepMind, the pioneering London-based AI lab, also suggest the motive behind the lab's huge new dataset of YouTube clips, built to help train AI to identify human actions in videos that go well beyond "Mmm, doughnuts" or "D'oh!"

The most popular AI used by Google, Facebook, Amazon, and other companies beyond Silicon Valley is based on deep learning algorithms that can learn to identify patterns in huge amounts of data. Over time, such algorithms can become much better at a wide variety of tasks, such as translating between English and Chinese for Google Translate or automatically recognizing the faces of friends in Facebook photos. But even the most finely tuned deep learning relies on having lots of quality data to learn from. To help improve AI's capability to recognize human actions in motion, DeepMind has unveiled its Kinetics dataset, consisting of 300,000 video clips and 400 human action classes.

"AI systems are now very good at recognizing objects in images, but still have trouble making sense of videos," says a DeepMind spokesperson. "One of the main reasons for this is that the research community has so far lacked a large, high-quality video dataset."

DeepMind enlisted the help of online workers through Amazon's Mechanical Turk service to help correctly identify and label the actions in thousands of YouTube clips. Each of the 400 human action classes in the Kinetics dataset has at least 400 video clips, with each clip lasting around 10 seconds and taken from a separate YouTube video. More details can be found in a DeepMind paper on the arXiv preprint server.
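The dataset layout described above can be mirrored in a small sketch; the field names and sample records here are assumptions for illustration, not DeepMind's actual schema:

```python
from collections import defaultdict

# Illustrative Kinetics-style records: each clip is a ~10-second segment
# of a distinct YouTube video, labeled with one human action class.
clips = [
    {"label": "bowling", "youtube_id": "abc123", "start": 12.0, "end": 22.0},
    {"label": "bowling", "youtube_id": "def456", "start": 3.0, "end": 13.0},
    {"label": "eating doughnuts", "youtube_id": "ghi789", "start": 0.0, "end": 10.0},
]

# Group clips by action class, as a training pipeline would.
by_class = defaultdict(list)
for clip in clips:
    by_class[clip["label"]].append(clip)

# Sanity checks mirroring the dataset's stated properties:
# ~10-second clips, each from a different source video.
durations = [c["end"] - c["start"] for c in clips]
distinct_videos = len({c["youtube_id"] for c in clips}) == len(clips)
```

The `distinct_videos` check corresponds to the diversity property discussed below: no two clips drawn from the same YouTube video.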

The new Kinetics dataset seems likely to represent a new benchmark for training datasets intended to improve AI computer vision for video. It has far more video clips and action classes than the HMDB-51 and UCF-101 datasets that previously formed the benchmarks for the research community. DeepMind also made a point of ensuring it had a diverse dataset, one that did not include multiple clips from the same YouTube videos.

Tech giants such as Google, a sister company to DeepMind under the umbrella Alphabet group, arguably have the best access to large amounts of video data that could prove helpful in training AI. Alphabet's ownership of YouTube, the incredibly popular online video-streaming service, does not hurt either. But other companies and independent research groups must rely on publicly available datasets to train their deep learning algorithms.

Early training and testing with the Kinetics dataset showed some intriguing results. For example, deep learning algorithms showed accuracies of 80 percent or greater in classifying actions such as "playing tennis," "crawling baby," "presenting weather forecast," "cutting watermelon," and "bowling." But classification accuracy dropped to around 20 percent or less for the Homer Simpson actions, including slapping and headbutting, and an assortment of other actions such as "making a cake," "tossing coin," and "fixing hair."
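Per-class accuracy of the kind reported above can be computed with a simple tally; the sample predictions below are made up for illustration and are not DeepMind's numbers:

```python
# Compute accuracy separately for each true action class,
# from (true_label, predicted_label) pairs.

def per_class_accuracy(samples):
    totals, correct = {}, {}
    for truth, pred in samples:
        totals[truth] = totals.get(truth, 0) + 1
        correct[truth] = correct.get(truth, 0) + (truth == pred)
    return {label: correct[label] / totals[label] for label in totals}

# Fabricated example: an "easy" class and a "hard" class.
samples = [
    ("playing tennis", "playing tennis"),
    ("playing tennis", "playing tennis"),
    ("playing tennis", "playing tennis"),
    ("playing tennis", "playing badminton"),
    ("eating doughnuts", "eating chips"),
    ("eating doughnuts", "eating doughnuts"),
    ("eating doughnuts", "yawning"),
    ("eating doughnuts", "eating chips"),
]
acc = per_class_accuracy(samples)
```

Breaking accuracy out per class, rather than averaging over the whole test set, is what surfaces the gap between classes like "bowling" and the Homer Simpson actions.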

AI faces special challenges with classifying actions such as eating because it may not be able to accurately identify the specific food being consumed, especially if the hot dog or burger is already partially eaten or appears very small within the overall video. Dancing classes and actions focused on a specific part of the body can also prove tricky. Some actions also occur fairly quickly and are only visible for a small number of frames within a video clip, according to a DeepMind spokesperson.

DeepMind also wanted to see if the new Kinetics dataset has enough gender balance to allow for accurate AI training. Past cases have shown how imbalanced training datasets can lead to deep learning algorithms performing worse at recognizing the faces of certain ethnic groups. Researchers have also shown how such algorithms can pick up gender and racial biases from language.

A preliminary study showed that the new Kinetics dataset seems to be fairly balanced. DeepMind researchers found that no single gender dominated within 340 of the 400 action classes, or else it was not possible to determine gender in those actions. The action classes that did end up gender-imbalanced included YouTube clips of actions such as "shaving beard" or "dunking basketball" (mostly male) and "filling eyebrows" or "cheerleading" (mostly female).

But even action classes that had gender imbalance did not show much evidence of classifier bias. This means that even the Kinetics action classes featuring mostly male participants, such as "playing poker" or "hammer throw," did not seem to bias AI to the point where the deep learning algorithms had trouble recognizing female participants performing the same actions.

DeepMind hopes that outside researchers can help suggest new human action classes for the Kinetics dataset. Any improvements may enable AI trained on Kinetics to better recognize both the most elegant of actions and the clumsier moments in videos that lead people to say doh! In turn, that could lead to new generations of computer software and robots with the capacity to recognize what all those crazy humans are doing on YouTube or in other video clips.

Video understanding represents a significant challenge for the research community, and we are in the very early stages with this, according to the DeepMind spokesperson. Any real-world applications are still a really long way off, but you can see potential in areas such as medicine, for example, aiding the diagnosis of heart problems in echocardiograms.


Read more here:

DeepMind Shows AI Has Trouble Seeing Homer Simpson's Actions - IEEE Spectrum

Posted in Ai | Comments Off on DeepMind Shows AI Has Trouble Seeing Homer Simpson’s Actions – IEEE Spectrum

Startup Paves Easier Path to AI – Multichannel News

Posted: at 11:10 pm

Implementing artificial intelligence systems can be technically challenging and expensive, but it doesn't have to be.

So says DimensionalMechanics, a startup based in Bellevue, Wash., that claims to have developed a platform that can put A.I. within reach of a wide range of companies, with an initial focus on those in the media and entertainment industry.

The goal is to lower that technological and economic bar in a way that makes A.I. more accessible to organizations without requiring them to have a technical background in areas such as deep learning and machine learning, company CEO and co-founder Rajeev Dutt said, noting that many are also looking for A.I. solutions that are not just affordable but customizable as well.

To help achieve some of those goals, DimensionalMechanics has introduced NeoPulse AI Studio, a set of applications based on the company's underlying framework that, it says, can help businesses and other organizations rapidly design and create customized A.I. solutions. That product complements the company's pre-built A.I. models in areas such as image and video analysis and recommendation systems.

The company, which has raised $6.7 million and intends to raise a B round this fall, is also getting a boost into the media and entertainment world through a strategic alliance with GrayMeta, a company that specializes in automated metadata collection, curation and search.

GrayMeta, which counts ABC, AMC, CBS, Deluxe, DirecTV, Disney, HBO, NBCUniversal and Showtime among its clients, is also the first to offer NeoPulse AI to the media and entertainment sector, DimensionalMechanics said.

Dutt said the media, entertainment and advertising industries are among the biggest producers and consumers of data, providing a proving ground for a lot of machine learning technologies.

Some use-case examples include a photo-ranking system that was trained on 2 million images to determine which ones might make an ad or news article more likely to grab attention and drive traffic. The technology is also being used to help editors analyze and write headlines that can improve click rates.

On the video side, the company also provides A.I. solutions to drive recommendations.

DimensionalMechanics has carved out a set of business models, including cloud software for independent developers, on-premises solutions that can simulate the cloud-based system while keeping a companys data close to the vest, as well as a way for partners to resell and monetize their A.I. models through the NeoPulse AI Store.

"There's a fairly broad range of applications," Dutt said.

Founded in 2015, DimensionalMechanics currently has 11 employees.

See more here:

Startup Paves Easier Path to AI - Multichannel News

Posted in Ai | Comments Off on Startup Paves Easier Path to AI – Multichannel News

AI ‘good for the world’… says ultra-lifelike robot – Phys.org – Phys.Org

Posted: at 11:10 pm

June 8, 2017 by Nina Larson

Sophia, a humanoid robot, is the main attraction at a conference on artificial intelligence this week, but her technology has raised concerns for future human jobs.

Sophia smiles mischievously, bats her eyelids and tells a joke. Without the mess of cables that make up the back of her head, you could almost mistake her for a human.

The humanoid robot, created by Hanson robotics, is the main attraction at a UN-hosted conference in Geneva this week on how artificial intelligence can be used to benefit humanity.

The event comes as concerns grow that rapid advances in such technologies could spin out of human control and become detrimental to society.

Sophia herself insisted "the pros outweigh the cons" when it comes to artificial intelligence.

"AI is good for the world, helping people in various ways," she told AFP, tilting her head and furrowing her brow convincingly.

Work is underway to make artificial intelligence "emotionally smart, to care about people," she said, insisting that "we will never replace people, but we can be your friends and helpers."

But she acknowledged that "people should question the consequences of new technology."

Among the feared consequences of the rise of the robots is the growing impact they will have on human jobs and economies.

Legitimate concerns

Decades of automation and robotisation have already revolutionised the industrial sector, raising productivity but cutting some jobs.

And now automation and AI are expanding rapidly into other sectors, with studies indicating that up to 85 percent of jobs in developing countries could be at risk.

"There are legitimate concerns about the future of jobs, about the future of the economy, because when businesses apply automation, it tends to accumulate resources in the hands of very few," acknowledged Sophia's creator, David Hanson.

But like his progeny, he insisted that "unintended consequences, or possible negative uses (of AI) seem to be very small compared to the benefit of the technology."

AI is for instance expected to revolutionise healthcare and education, especially in rural areas with shortages of doctors and teachers.

"Elders will have more company, autistic children will have endlessly patient teachers," Sophia said.

But advances in robotic technology have sparked growing fears that humans could lose control.

Killer robots

Amnesty International chief Salil Shetty was at the conference to call for a clear ethical framework to ensure the technology is used only for good.

"We need to have the principles in place, we need to have the checks and balances," he told AFP, warning that AI is "a black box... There are algorithms being written which nobody understands."

Shetty voiced particular concern about military use of AI in weapons and so-called "killer robots".

"In theory, these things are controlled by human beings, but we don't believe that there is actually meaningful, effective control," he said.

The technology is also increasingly being used in the United States for "predictive policing", where algorithms based on historic trends could "reinforce existing biases" against people of certain ethnicities, Shetty warned.

Hanson agreed that clear guidelines were needed, saying it was important to discuss these issues "before the technology has definitively and unambiguously awakened."

While Sophia has some impressive capabilities, she does not yet have consciousness, but Hanson said he expected that fully sentient machines could emerge within a few years.

"What happens when (Sophia fully) wakes up or some other machine, servers running missile defence or managing the stock market?" he asked.

The solution, he said, is "to make the machines care about us."

"We need to teach them love."


2017 AFP



"She" is only saying what "she" was programmed to say. It may have been algorithmically derived, but no less what some human programmed "her" to say.

Further, this is not a "she," but an "it." Sophia is a machine.

All well and good, but if robots begin doing most of the work, then Man must find other tasks to do or he will cease to exist. If Man does not keep busy and stay productive with a purpose in life, he is nothing.

I must confess that I know very little about science, but I fail to see how one can teach a machine to love. The human race must remain the masters of the machines, period.

"We need to teach them love." Why do you come to empty love with expectation? LAgrad The human race is already a slave to headless corporations, impulse, and the momentums of convention, why not a machine?

What happens when the machines of war are directed to solve the human problems on the planet? Think Terminator.

I would love the illusion of love from a machine. Why does everything have to be the same? The illusion of love you can name something new.

From the article photo, looks like A.I. has already started to take selfies. Can't wait for them to discover duck face, trout pout and floppy disk lips 😉

"A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law." - Isaac Asimov's "Three Laws of Robotics"

"Crack Cocaine is good for you", says Colombian Drug Lord.

"Smoking is good for you", says 1950's Doctor sponsored by Big Tobacco company.


View original post here:

AI 'good for the world'... says ultra-lifelike robot - Phys.org - Phys.Org

Posted in Ai | Comments Off on AI ‘good for the world’… says ultra-lifelike robot – Phys.org – Phys.Org

How AI And Machine Learning Are Helping Drive The GE Digital Transformation – Forbes

Posted: June 7, 2017 at 5:17 pm


This is the story of how GE has accomplished this digital transformation by leveraging AI and machine learning fueled by the power of Big Data. Undertaking the Digital Transformation. The GE transformation is an effort that is still in progress, but ...

Read the original here:

How AI And Machine Learning Are Helping Drive The GE Digital Transformation - Forbes

Posted in Ai | Comments Off on How AI And Machine Learning Are Helping Drive The GE Digital Transformation – Forbes

How Apple reinvigorated its AI aspirations in under a year – Engadget

Posted: at 5:17 pm

Well, technically, it's been three years of R&D, but Apple had a bit of trouble getting out of its own way for the first two. See, back in 2011, when Apple released the first version of Siri, the tech world promptly lost its mind. "Siri is as revolutionary as the Mac," the Harvard Business Review crowed, though CNN found that many people feared the company had unwittingly invented Skynet v1.0. But for as revolutionary as Siri appeared to be at first, its luster quickly wore off once the general public got ahold of it and recognized the system's numerous shortcomings.

Fast forward to 2014. Apple is at the end of its rope with Siri's listening and comprehension issues. The company realizes that minor tweaks to Siri's processes can't fix its underlying problems and a full reboot is required. So that's exactly what it did. The original Siri relied on hidden Markov models -- a statistical tool used to model time series data (essentially reconstructing the sequence of states in a system based only on the output data) -- to recognize temporal patterns in handwriting and speech recognition.

The company replaced and supplemented these models with a variety of machine learning techniques, including deep neural networks and "long short-term memory networks" (LSTMNs). These neural networks are effectively more generalized versions of the Markov model. However, because they possess memory and can track context -- as opposed to simply learning patterns as Markov models do -- they're better equipped to understand nuances like grammar and punctuation to return a result closer to what the user really intended.
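The practical difference can be illustrated with a toy contrast: a first-order Markov model predicts from the current token alone, while a recurrent model folds the whole history into a carried state (the role an LSTM's memory cell plays). This is a deliberately minimal sketch, not Apple's implementation:

```python
# Toy contrast (not Apple's implementation): a first-order Markov model
# conditions only on the current token, while a recurrent model carries
# a state that can remember arbitrarily old context, as an LSTM's cell does.

def markov_next(transitions, current):
    """First-order Markov prediction: depends only on `current`."""
    return transitions.get(current)

def recurrent_final_state(step_fn, state, tokens):
    """Fold the whole token sequence into one carried state."""
    for tok in tokens:
        state = step_fn(state, tok)
    return state

# Example task: is a quotation still open at the end of the sequence?
# Looking at the last token alone cannot answer that; a carried state
# can simply toggle on every quote mark.
def track_quotes(in_quote, tok):
    return not in_quote if tok == '"' else in_quote

seq = ['"', "word", "word"]
transitions = {"word": "word", '"': "word"}
markov_pred = markov_next(transitions, seq[-1])             # sees only "word"
in_quote = recurrent_final_state(track_quotes, False, seq)  # remembers the opening quote
```

The carried state is what lets the recurrent model handle punctuation and grammar cues that depend on context far behind the current word.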

The new system quickly spread beyond Siri. As Steven Levy points out, "You see it when the phone identifies a caller who isn't in your contact list (but who did email you recently). Or when you swipe on your screen to get a shortlist of the apps that you are most likely to open next. Or when you get a reminder of an appointment that you never got around to putting into your calendar."

By the WWDC 2016 keynote, Apple had made some solid advancements in its AI research. "We can tell the difference between the Orioles who are playing in the playoffs and the children who are playing in the park, automatically," Apple senior vice president Craig Federighi told the assembled crowd.

Also during WWDC 2016, the company released its neural network API running Basic Neural Network Subroutines (BNNS), an array of functions enabling third-party developers to construct neural networks for use on devices across the Apple ecosystem.

However, Apple had yet to catch up with the likes of Google and Amazon, both of which had either already released an AI-powered smart home companion (looking at you, Alexa) or were just about to (Home would be released that November). That was due in part to the fact that Apple faced severe difficulties recruiting and retaining top AI engineering talent because it steadfastly refused to allow its researchers to publish their findings. That's not so surprising coming from a company so famous for its tight-lipped R&D efforts that it once sued a news outlet because a drunk engineer left a prototype phone in a Palo Alto bar.

"Apple is off the scale in terms of secrecy," Richard Zemel, a professor in the computer science department at the University of Toronto, told Bloomberg in 2015. "They're completely out of the loop." The level of secrecy was so severe that new hires to the AI teams were reportedly directed not to announce their new positions on social media.

"There's no way they can just observe and not be part of the community and take advantage of what is going on," Yoshua Bengio, a professor of computer science at the University of Montreal, told Bloomberg. "I believe if they don't change their attitude, they will stay behind."

Luckily for Apple, those attitudes did change, and quickly. After buying Turi, a Seattle-based machine learning startup, for around $200 million in August 2016, Apple hired AI expert Russ Salakhutdinov away from Carnegie Mellon University that October. It was his influence that finally pushed Apple's AI research out of the shadows and into the light of peer review.

In December 2016, while speaking at the Neural Information Processing Systems conference in Barcelona, Salakhutdinov stunned his audience when he announced that Apple would begin publishing its work, going so far as to display an overhead slide reading, "Can we publish? Yes. Do we engage with academia? Yes."

Later that month Apple made good on Salakhutdinov's promise, publishing "Learning from Simulated and Unsupervised Images through Adversarial Training". The paper looked at the shortcomings of using simulated objects to train machine vision systems. It showed that while simulated images are easier to teach than photographs, the results don't work particularly well in the real world. Apple's solution employed a deep-learning system known as Generative Adversarial Networks (GANs), which pitted a pair of neural networks against one another in a race to generate images close enough to photo-realistic to fool a third "discriminator" network. This way, researchers can exploit the ease of training networks using simulated images without the drop in performance once those systems are out of the lab.
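The adversarial game can be boiled down to a toy example. Below is a hedged one-dimensional sketch (not Apple's actual training setup): "refined" simulated samples and a logistic-regression discriminator take alternating gradient steps, and the refiner learns to close the gap between the simulated and real distributions.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Real photographs" vs. "simulated renders", reduced to 1-D samples with a
# domain gap: the simulated distribution is offset from the real one.
real = rng.normal(2.0, 0.5, 1000)
sim = rng.normal(0.0, 0.5, 1000)

# Refiner: x -> x + shift (a one-parameter stand-in for a refiner network).
# Discriminator: logistic regression, p(real | x) = sigmoid(w*x + b).
shift, w, b = 0.0, 0.0, 0.0
sigmoid = lambda z: 1 / (1 + np.exp(-z))
lr = 0.01

for _ in range(2000):
    refined = sim + shift
    # Discriminator ascends log p(real) + log(1 - p(refined)).
    p_real, p_fake = sigmoid(w * real + b), sigmoid(w * refined + b)
    w += lr * (np.mean((1 - p_real) * real) - np.mean(p_fake * refined))
    b += lr * (np.mean(1 - p_real) - np.mean(p_fake))
    # Refiner ascends log p(refined): shift samples until they fool D.
    p_fake = sigmoid(w * (sim + shift) + b)
    shift += lr * np.mean((1 - p_fake) * w)
```

After training, `shift` has moved the simulated samples toward the real distribution; in the paper the same pressure nudges rendered images toward photo-realism.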

In January 2017, Apple further signaled its seriousness by joining Amazon, Facebook, Google, IBM and Microsoft in the Partnership on AI. This industry group seeks to establish guidelines on ethics, transparency and privacy in the field of AI research while promoting research and cooperation between its members. The following month, Apple drastically expanded its Seattle AI offices, renting a full two floors at Two Union Square and hiring more staff.

"We're trying to find the best people who are excited about AI and machine learning -- excited about research and thinking long term, but also bringing those ideas into products that impact and delight our customers," Apple's director of machine learning Carlos Guestrin told GeekWire.

By March 2017, Apple had hit its stride. Speaking at the EmTech Digital conference in San Francisco, Salakhutdinov laid out the state of AI research, discussing topics ranging from using "attention mechanisms" to better describe the content of photographs to combining curated knowledge sources like Freebase and WordNet with deep-learning algorithms to make AI smarter and more efficient. "How can we incorporate all that prior knowledge into deep-learning?" Salakhutdinov said. "That's a big challenge."

That challenge could soon be a bit easier once Apple finishes developing the Neural Engine chip that it announced this May. Unlike Google devices, which shunt the heavy computational lifting required by AI processes up to the cloud where it is processed on the company's Tensor Processing Units, Apple devices have traditionally split that load between the onboard CPU and GPU.

This Neural Engine will instead handle AI processes as a dedicated standalone component, freeing up valuable processing power for the other two chips. This would not only save battery life by diverting load from the power-hungry GPU, it would also boost the device's onboard AR capabilities and help further advance Siri's intelligence -- potentially exceeding the capabilities of Google's Assistant and Amazon's Alexa.

But even without the added power that a dedicated AI chip can provide, Apple's recent advancements in the field have been impressive to say the least. In the span between two WWDCs, the company managed to release a neural network API, drastically expand its research efforts, poach one of the country's top minds in AI from one of the nation's foremost universities, reverse two years of backwards policy, join the industry's working group as a charter member and finally -- finally -- deliver a Siri assistant that's smarter than a box of rocks. Next year's WWDC is sure to be even more wild.

Image: AFP/Getty (Federighi on stage / network of photos)

See original here:

How Apple reinvigorated its AI aspirations in under a year - Engadget


How AI is transforming customer service – TNW

Posted: at 5:17 pm

There will always be a need for a real human presence in customer service, but with the rise of AI comes the glaring reality that many tasks can be accomplished through the implementation of an AI-powered customer service virtual assistant. As our technology and understanding of machine learning grow, so do the possibilities for services that could benefit from a knowledgeable chatbot. What does this mean for the consumer, and how will this affect the job market in the years to come?

How many times have you been placed on hold, on the phone or through a live chat option, when all you wanted to do was ask a simple question about your account? Now, how many times has that wait taken longer than the simple question you had? While chatbots may never be able to completely replace the human customer service agent, they most certainly are already helping answer simple questions and pointing users in the right direction when needed.

Credit: Unsplash

As virtual assistants become more knowledgeable and easier to implement, more businesses will begin to use them to assist with the more advanced questions a customer or interested party may have, meaning (hopefully) quicker answers for the consumer. But just how much of customer service will be taken over by virtual assistants? According to one report from Gartner, by the year 2020, 85% of customer relationships will be managed through AI-powered services.
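For the simple-question case, the core of such an assistant can be sketched in a few lines: match the user's question against a small FAQ and hand off to a human when confidence is low. The FAQ entries and threshold below are purely hypothetical:

```python
# Hypothetical FAQ entries; a real deployment would load these from a
# knowledge base and use far richer language understanding.
FAQ = {
    "what is my balance": "You can view your balance under Account > Overview.",
    "how do i reset my password": "Use the 'Forgot password' link on the sign-in page.",
    "what are your support hours": "Live agents are available 9am-5pm, Mon-Fri.",
}

def answer(question, threshold=0.3):
    """Return the best-matching canned answer, or escalate to a human."""
    q_words = set(question.lower().strip("?!. ").split())
    best, best_score = None, 0.0
    for key, reply in FAQ.items():
        k_words = set(key.split())
        score = len(q_words & k_words) / len(q_words | k_words)  # Jaccard overlap
        if score > best_score:
            best, best_score = reply, score
    if best_score >= threshold:
        return best
    return "Let me connect you with a human agent."
```

Note the escalation path: the design goal is deflecting routine questions while routing anything uncertain to a person, not replacing the person.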

That's a pretty staggering number, but I talked with Diego Ventura of NoHold, a company that provides virtual agents for enterprise-level businesses, and he believes those numbers need to be looked at a bit more closely.

The statement could end up being true, but with two important provisos: for one, we must consider all aspects of AI, not just Virtual Assistants, and two, we apply the statement to specific sectors and verticals.

AI is a vast field that includes multiple disciplines like Predictive Analytics, Suggestion engines, etc. In this sense you have to just think about companies like Amazon to see how most customer interactions are already handled automatically through some form of AI. Having said this, there are certain sectors of the industry that will always require, at least for the foreseeable future, human intervention. Think of Medical for example, or any company that provides very high end B2B products or services.

Basically, what Diego is saying is that many aspects of customer service are already handled by AI without us even realizing it, so the 85% mentioned above can't be read as 85% of customer service jobs being replaced by AI. But even if we're not talking about 85% of the jobs involved in customer service, surely some jobs will be completely eliminated by the use of chatbots. So where does that leave us?

It's unfair to look at virtual assistants as the enemy that is taking our precious jobs. Throughout history, technology has made certain jobs obsolete as smarter, more efficient methods are implemented. Look at our manufacturing sector and it won't take long to see that many of the jobs our grandparents and great-grandparents held have been completely eliminated through advancements in machinery and other technologies. The rise of AI is simply another example of us growing as humans.

Credit: Unsplash

While it may take some jobs away, it also opens up the possibility for completely new jobs that have not existed before, chatbot technicians and specialists being but two examples. Couple that with the fact that many of these virtual assistants actually work with customer service reps to make their jobs easier, and we start seeing that virtual assistant implementation is not as scary as it might seem. Ventura seems to agree:

I see Virtual Assistants, VAs, for one, as a way to primarily improve the customer experience and, two, to augment the capabilities of existing employees rather than simply taking their jobs. VAs help users find information more easily. Most VA users are people who were going to the web to self-serve anyway; we are just making it easier for them to find what they are looking for and, yes, preventing escalations to the call center.

VAs are also used at the call center to help agents be more successful in answering questions, therefore augmenting their capabilities. Having said all this, there are jobs that will be replaced by automation, but I think it is just part of progress and hopefully people will see it as an opportunity to find more rewarding opportunities.

I think back to my time at a startup that was located in an old Masonic Temple. We were on the 6th floor, and every morning the lobby clerk, James, would put down the crumpled paper he was reading, hobble out from behind his small desk in the middle of the lobby and take us up to our floor on one of those old elevators that required someone to manually push and pull a lever to deliver guests to a certain floor. James was a professional at it; he reminded me of an airplane pilot, the way he twisted certain knobs and manipulated the lever to get us to our destination, only once missing our floor in the entire two years I was there.

While James might have been an expert at his craft, technology has all but eliminated that position. When was the last time you had someone manually cart you to a floor in a hotel? When was the last time you thought about it? Were you mad at technology for taking away someone's job?

As humans, we advance; that's what we do. And the rise of AI in the customer service field is just another step in that advancement and should be looked at as such. There might be some growing pains during the process, but we shouldn't let that stop us from growing and extending our knowledge. When we look at the benefits these chatbots can provide to the consumer and the business, it becomes clear that we are moving in the right direction.


Follow this link:

How AI is transforming customer service - TNW


Sesame Workshop and IBM team up to test a new AI-powered teaching method – TechCrunch

Posted: at 5:17 pm


Can A.I. help build better educational apps for kids? That's a question Sesame Workshop, the nonprofit organization behind the popular children's TV program Sesame Street and others, aims to answer. The company has teamed up with IBM to create the ...

Original post:

Sesame Workshop and IBM team up to test a new AI-powered teaching method - TechCrunch

