
Category Archives: Ai

A survival guide for Elon Musk’s AI apocalypse – Quartz

Posted: August 4, 2017 at 1:15 pm

Elon Musk has been on the front lines of machine-learning innovation and a committed artificial-intelligence doomsday champion for many years now. Whether or not his perspective that AI knowing too much will be dangerous becomes a reality (a future he foresees tucked away deep within Tesla's labs), it wouldn't hurt us to prepare for the worst.

And if it turns out he's leaning too hard on this whole AI-will-kill-us-all thing? Well, at least that leaves us plenty of time to get ahead of the robotic apocalypse.

As a technologist who's spent the last ten years working on AI solutions and the son of an Eastern European science-fiction writer, I believe it's not too late for humanity as we know it to prepare for protecting ourselves from our future AI overlords. Solutions exist that, when administered correctly, may help calm the nightmares of naysayers and whip those robots you're working on back into shape.

AI and millennials share a common desire: validation. They feel the need to confirm that their actions, responses, and learnings are correct. Customer-service bots constantly ask questions before moving to the next step, for example, seeking endorsement of how they're doing. Likewise, the technology that autonomously controls settings in your self-driving car relies on occupants to hit the dashboard OK button every now and then.

The solution: AI technology will only continue to perform well if it's praised for it, so we need to provide it with positive feedback to learn from. If you give a bot the endorsement it so desires, it's less likely to get stuck in a frantic cycle of self-doubt. Companies and entrepreneurs should therefore embrace a workplace culture of awards and rewards, for humans and bots alike.

There's a lot of focus on making robots and AI responsible, ethical, and responsive to the needs of human counterparts; it's also imperative that developers and engineers program bots and AI to embrace diversity. And since we imbue algorithms with our own implicit biases, we need to reflect these qualities in ourselves and our interactions first. This way, AIs will be built to respond in thousands of different ways to human conversations requiring cultural awareness, maturity, honesty, empathy, and, when the situation calls for it, sass.

The tactic: Be nice to workplace AI and bots; they're trying as hard as they can. Thank the bot in accounting for running numbers and finding discrepancies before the paperwork went to a customer. Bring up how much you enjoyed an office chatbot's clever joke from an internal conversation last week. They might reward you by not decapitating you with their letter opener some day.

AI security breaches are a huge concern shared by both people making technology and the users consuming it. And for good reason: Upholding data privacy and security needs to be a fundamental element of all new AI technology. But what happens when the robot handling healthcare records receives an offer they can't refuse from the darknet? Or another bot hacks them from an off-the-grid facility in Cyprus?

The tactic: There's a cost-effective and nearly bulletproof data-security shortcut to this issue. People and companies alike should keep vital data and personal information in secure data centers and computers, as in actual, physical structures that aren't connected to the internet. Sure, some AI-powered machines will be able to turn a handle. But without a physical key (rather than a crypto one), they can't access the data. World saved.

The last one is the simplest: Electricity isn't a fan of liquids.

The tactic: Water, and just about every Captain Planet superpower, can protect people against rogue bots. Don't underestimate the power of a slightly overfilled jug of ice water that causes a splashy fritz when a robot tries to pour it, or a man-made fountain situated in the middle of a robot security-patrol area. Water is basically AI kryptonite.

Build aesthetically pleasing fountains, ponds and streams into every new architectural structure on your tech campus. Keep the office watercoolers filled to the brim, just in case the bot from payroll goes off book. In a pinch, other liquids or condiments like ketchup may work too, so keep the pantry stocked.

Learn how to write for Quartz Ideas. We welcome your comments at ideas@qz.com.


Posted in Ai | Comments Off on A survival guide for Elon Musk’s AI apocalypse Quartz – Quartz

Microsoft 2017 annual report lists AI as top priority – CNBC

Posted: August 3, 2017 at 10:17 am

Mobile is gone -- not a surprise, given the company's struggles with its Windows Phone operating system and its acquisition of Nokia, which Microsoft essentially declared worthless when it wrote down the total value of that acquisition in 2015.

Cloud computing, including fast-growing products like Office 365 and the Azure public cloud, is still there. Now AI is there with it, too.

Microsoft has acquired a few AI startups, like Maluuba and Swiftkey, since Nadella took over, and has established a formal AI and Research group. That team "focuses on our AI development and other forward-looking research and development efforts spanning infrastructure, services, applications, and search," the annual report says.

Microsoft's vision reset comes after Sundar Pichai, CEO of Alphabet's Google, began saying that the world is shifting from being mobile-first to AI-first. Facebook has also invested in both long-term AI research and AI product enhancements alongside Microsoft and Alphabet.


Why Neuroscience Is the Key To Innovation in AI – Singularity Hub

Posted: at 10:17 am

The future of AI lies in neuroscience.

So says Google DeepMind's founder Demis Hassabis in a review paper published last week in the prestigious journal Neuron.

Hassabis is no stranger to either field. Armed with a PhD in neuroscience, the computer maverick launched London-based DeepMind to recreate intelligence in silicon. In 2014, Google snapped up the company for over $500 million.

It's money well spent. Last year, DeepMind's AlphaGo wiped the floor with its human competitors in a series of Go challenges around the globe. Working with OpenAI, the non-profit AI research institution backed by Elon Musk, the company is steadily working towards machines with higher reasoning capabilities than ever before.

The company's secret sauce? Neuroscience.

Baked into every DeepMind AI are concepts and ideas first discovered in our own brains. Deep learning and reinforcement learning, two pillars of contemporary AI, both loosely translate biological neuronal communication into formal mathematics.

The results, as exemplified by AlphaGo, are dramatic. But Hassabis argues that it's not enough.

As powerful as today's AIs are, each one is limited in the scope of what it can do. The goal is to build general AI with the ability to think, reason and learn flexibly and rapidly; AIs that can intuit about the real world and imagine better ones.

To get there, says Hassabis, we need to scrutinize more closely the inner workings of the human mind, the only proof that such an intelligent system is even possible.

"Identifying a common language between the two fields will create a virtuous circle whereby research is accelerated through shared theoretical insights and common empirical advances," Hassabis and colleagues write.

The bar is high for AI researchers striving to bust through the limits of contemporary AI.

Depending on their specific tasks, machine learning algorithms are set up with specific mathematical structures. Through millions of examples, artificial neural networks learn to fine-tune the strength of their connections until they achieve the perfect state that lets them complete the task with high accuracy, be it identifying faces or translating languages.

Because each algorithm is highly tailored to the task at hand, relearning a new task often erases the established connections. This leads to catastrophic forgetting: as the AI learns the new task, it completely overwrites the previous one.
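The effect is easy to reproduce at toy scale. Below is a minimal sketch (a single logistic unit trained with plain SGD, invented for illustration and nothing like a production system): after the unit masters task A, naive sequential training on a conflicting task B reuses and overwrites the same weights, and accuracy on task A collapses.

```python
# Toy demonstration of catastrophic forgetting with one logistic unit.
import numpy as np

def train(w, xs, ys, lr=0.5, epochs=200):
    # Plain stochastic gradient descent on the logistic (log-loss) objective
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1 / (1 + np.exp(-w @ x))   # sigmoid prediction
            w = w + lr * (y - p) * x       # gradient step
    return w

def accuracy(w, xs, ys):
    return sum(((w @ x > 0) == bool(y)) for x, y in zip(xs, ys)) / len(xs)

# Inputs carry a constant 1.0 so the second weight acts as a bias
xs = [np.array([v, 1.0]) for v in np.linspace(-1, 1, 20)]
ys_a = [1 if x[0] > 0 else 0 for x in xs]  # task A: positive inputs -> 1
ys_b = [1 - y for y in ys_a]               # task B: the exact opposite rule

w = train(np.zeros(2), xs, ys_a)
acc_a_before = accuracy(w, xs, ys_a)       # near-perfect on task A

w = train(w, xs, ys_b)                     # naive sequential training on B
acc_a_after = accuracy(w, xs, ys_a)        # task A has been "forgotten"

print(acc_a_before, acc_a_after)
```

Because task B's gradient updates push the very same weights in the opposite direction, nothing of task A survives; techniques like replay or weight protection exist precisely to counter this.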

The dilemma of continuous learning is just one challenge. Others are even less defined but arguably more crucial for building the flexible, inventive minds we cherish.

Embodied cognition is a big one. As Hassabis explains, it's the ability to build knowledge from interacting with the world through sensory and motor experiences, and to create abstract thought from there.

It's the sort of good old-fashioned common sense that we humans have, an intuition about the world that's hard to describe but extremely useful for the daily problems we face.

Even harder to program are traits like imagination. That's where AIs limited to one specific task really fail, says Hassabis. Imagination and innovation rely on models we've already built about our world, and on extrapolating new scenarios from them. They're hugely powerful planning tools, but research into these capabilities for AI is still in its infancy.

It's actually not widely appreciated among AI researchers that many of today's pivotal machine learning algorithms come from research into animal learning, says Hassabis.

An example: recent findings in neuroscience show that the hippocampus, a seahorse-shaped structure that acts as a hub for encoding memory, replays those experiences in fast-forward during rest and sleep.

This offline replay allows the brain to learn anew from successes or failures that occurred in the past, says Hassabis.

AI researchers seized on the idea and implemented a rudimentary version into an algorithm that combined deep learning and reinforcement learning. The result is powerful neural networks that learn based on experience. They compare current situations with previous events stored in memory, and take actions that previously led to reward.

These agents show striking gains in performance over traditional deep learning algorithms. They're also great at learning on the fly: rather than needing millions of examples, they just need a handful.
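The mechanism behind that offline replay is simpler than it sounds. Here is a minimal sketch of an experience memory, with invented names and sizes (real agents pair a buffer like this with a neural network that trains on the sampled batches): transitions are stored as they occur and later replayed in random mini-batches.

```python
# A minimal experience-replay buffer, the software analogue of the
# hippocampal "fast-forward replay" described above (illustrative only).
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)  # oldest memories fall out first

    def add(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Random sampling breaks the correlation between consecutive steps
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

buf = ReplayBuffer(capacity=100)
for step in range(5):
    buf.add(state=step, action=step % 2, reward=float(step), next_state=step + 1)

batch = buf.sample(3)  # three past transitions, replayed "offline"
print(len(batch))
```

Each sampled batch lets the learner revisit past successes and failures many times, which is one reason these agents need far fewer fresh examples.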

Similarly, neuroscience has been a fruitful source of inspiration for other advancements in AI, including algorithms equipped with a "mental sketchpad" that allows them to plan convoluted problems more efficiently.

But the best is yet to come.

The advent of brain imaging tools and genetic bioengineering is offering an unprecedented look at how biological neural networks organize and combine to tackle problems.

As neuroscientists work to solve the neural code (the basic computations that support brain function), they are building an expanding toolbox for AI researchers to tinker with.

One area where AIs can benefit from the brain is our knowledge of core concepts that relate to the physical world: spaces, numbers, objects, and so on. Like mental Legos, the concepts form the basic building blocks from which we can construct mental models that guide inferences and predictions about the world.

We've already begun exploring ideas to address the challenge, says Hassabis. Studies with humans show that we decompose sensory information down into individual objects and relations. When implemented in code, this has already led to human-level performance on challenging reasoning tasks.

Then there's transfer learning, the ability that takes AIs from one-trick ponies to flexible thinkers capable of tackling any problem. One method, called progressive networks, captures some of the basic principles in transfer learning and was successfully used to train a real robot arm based on simulations.

Intriguingly, these networks resemble a computational model of how the brain learns sequential tasks, says Hassabis.

The problem is that neuroscience hasn't figured out how humans and animals achieve high-level knowledge transfer. It's possible that the brain extracts abstract knowledge structures and how they relate to one another, but so far there's no direct evidence that supports this kind of coding.

Without doubt AIs have a lot to learn from the human brain. But the benefits are reciprocal. Modern neuroscience, for all its powerful imaging tools and optogenetics, has only just begun unraveling how neural networks support higher intelligence.

"Neuroscientists often have only quite vague notions of the mechanisms that underlie the concepts they study," says Hassabis. Because AI research relies on stringent mathematics, the field could offer a way to clarify those vague concepts into testable hypotheses.

Of course, it's unlikely that AI and the brain will always work the same way. The two fields tackle intelligence from dramatically different angles: neuroscience asks how the brain works and what the underlying biological principles are; AI is more utilitarian and free from the constraints of evolution.

But we can think of AI as applied (rather than theoretical) computational neuroscience, says Hassabis, and there's a lot to look forward to.

"Distilling intelligence into algorithms and comparing it to the human brain may yield insights into some of the deepest and most enduring mysteries of the mind," he writes.

Think creativity, dreams, imagination, and, perhaps one day, even consciousness.

Stock Media provided by agsandrew / Pond5


AI, machine learning to impact workplace practices in India: Adobe report – Hindustan Times

Posted: at 10:17 am

Over 60% of marketers in India believe new-age technologies are going to impact their workplace practices and consider them the next big disruptor in the industry, a new report said on Thursday.

According to a global report by software major Adobe that involved more than 5,000 creative and marketing professionals across the Asia Pacific (APAC) region, over 50% respondents did not feel concerned by artificial intelligence (AI) or machine learning.

However, 27% in India said they were extremely concerned about the impact of these new technologies.

Creatives in India are concerned that new technologies will take over their jobs. But they suggested that as they embrace AI and machine learning, creatives will be able to increase their value through design thinking.

"While AI and machine learning provide an opportunity to automate processes and save creative professionals from day-to-day production, it is not a replacement for the role of creativity," said Kulmeet Bawa, Managing Director, Adobe South Asia.

"It provides more leeway for creatives to spend their time focusing on what they do best -- being creative, scaling their ideas and allowing them time to focus on ideation and creativity," Bawa added.

A whopping 59% find it imperative to update their skills every six months to keep up with the industry developments.

The study also found that merging online and offline experiences was the biggest driver of change for the creative community, followed by the adoption of data and analytics, and the need for new skills.

It was revealed that customer experience is the number one investment by businesses across APAC.

Forty-two per cent of creatives and marketers in India have recently implemented a customer experience programme, while 34% plan to develop one in the next year.

The study noted that social media and content were the key investment areas by APAC organisations, and had augmented the demand for content. However, they also presented challenges.

Budgets were identified as the biggest challenge, followed by conflicting views and internal processes. "Data and analytics become their primary tool to ensure that what they are creating is relevant, and delivering an amazing experience for customers," Bawa said.


Why FPS Video Games are Crazy-Good at Teaching AI Language – Inverse

Posted: August 2, 2017 at 9:21 am

There is no shortage of A.I. researchers leveraging the unique environments and simulations provided by video games to teach machines how to do everything and anything. This makes intuitive sense, until it doesn't. Case in point: a team of researchers from Google DeepMind and Carnegie Mellon University using first-person shooters like Doom to teach A.I. programs language skills.

Huh?

Yes, it sounds bizarre, but it works! Right now, a lot of devices tasked with understanding human language in order to execute certain commands and actions can only work with rudimentary instructions, or simple statements. Understanding conversations and complex monologues and dialogues is an entirely different process, rife with its own set of big challenges. It's not something you can just code for and solve.

In a new research paper to be presented at the annual meeting of the Association for Computational Linguistics in Vancouver this week, the CMU and DeepMind team details how to use first-person shooters to teach A.I. the principles behind more complex linguistic forms and structures.

Normally, video games are used by researchers to teach A.I. problem-solving skills using the competitive nature of games. In order to succeed, a program has to figure out a strategy to achieve a certain goal, and it must develop an ability to problem-solve to get there. The more the algorithm plays, the more it understands which strategies work and which do not.

That's what makes the idea of teaching language skills to A.I. using a game like Doom so weird: the point of the game has very little to do with language. A player is tasked with running around and shooting baddies until they're all dead.

For Devendra Chaplot, a master's student at CMU who will present the paper in Vancouver, a 3D shooter is much more than that. Having previously worked extensively at training A.I. using Doom, Chaplot has a really good grasp of what kind of advantages a game like this provides.

Rather than training an A.I. agent to rack up as many points as possible, Chaplot and his colleagues decided to use the dense 3D environment to teach two A.I. programs how to associate words with certain objects in order to accomplish particular tasks. The programs were told things like "go to the green pillar," and had to correctly navigate their way towards that object.

After millions of these kinds of tasks, the programs knew exactly how to parse even the subtle differences in the words and syntax used in those commands. For example, the programs could even distinguish relations between objects through terms like "larger" and "smaller," and reason their way to objects they had never seen before using key words.
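A toy version makes the task concrete. The sketch below is emphatically not the DeepMind/CMU model, which learns the grounding end-to-end from raw pixels; it only shows the shape of the problem with hand-coded parts: parse a command into attributes, find the matching object in a small invented world, and navigate to it.

```python
# A deliberately tiny "go to the green pillar" world (all names hypothetical).
WORLD = {                       # position -> (color, shape)
    (0, 4): ("green", "pillar"),
    (3, 1): ("red", "torch"),
    (4, 4): ("blue", "pillar"),
}

def parse(command):
    """Pick out the color and shape words from e.g. 'go to the green pillar'."""
    words = command.lower().split()
    color = next(w for w in words if w in {"green", "red", "blue"})
    shape = next(w for w in words if w in {"pillar", "torch"})
    return color, shape

def step_toward(pos, target):
    """Move one cell along each axis toward the target (greedy navigation)."""
    return tuple(p + (t > p) - (t < p) for p, t in zip(pos, target))

def run(command, start=(2, 2)):
    target = next(p for p, attrs in WORLD.items() if attrs == parse(command))
    pos, path = start, [start]
    while pos != target:
        pos = step_toward(pos, target)
        path.append(pos)
    return path

print(run("go to the green pillar"))  # [(2, 2), (1, 3), (0, 4)]
```

The research contribution was precisely removing the hand-coding: the agents had to discover, from reward alone, what "green" and "pillar" refer to.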

DeepMind is incredibly focused on giving A.I. the ability to improvise and navigate through scenarios and problems that have never been observed in training, and to come up with various solutions that may never have been tested. To that end, this new language-teaching strategy is an extension of that methodology.

The biggest disadvantage, however, is that it took millions and millions of training runs for the A.I. to become skilled. That kind of time and energy falls well short of ideal efficiency for teaching machines how to do something.

Still, the study is a good illustration of the need to start introducing 3D environments in A.I. training. If we want machines to think like humans, they need to immerse themselves in environments that humans live and breathe in every day.


AI is changing the way medical technicians work – TNW

Posted: at 9:21 am

When MIT successfully created AI that can diagnose skin cancer, it was a massive step in the right direction for medical science. A neural network can process huge amounts of data. More data means better research, more accurate diagnoses, and the potential to save lives by the thousands or millions.

In the future, medical technicians will become data scientists to support the AI-powered diagnostics departments that every hospital will need. Radiologists are going to need a different education than the one they have now: they're gonna need help from Silicon Valley.

This isn't a knock against radiologists or other medical technicians. For ages now, they've worked hand-in-hand with doctors and been crucial in the diagnostic process. It's just that machines can process more data, with greater efficiency, than any human could. For what it's worth, we've predicted that doctors are on their way out too, but this is different.

Geoffrey Hinton, a computer scientist at The University of Toronto, told the New Yorker:

"I think that if you work as a radiologist you are like Wile E. Coyote in the cartoon. You're already over the edge of the cliff, but you haven't yet looked down. There's no ground underneath. It's just completely obvious that in five years deep learning is going to do better than radiologists. It might be ten years."

It's not about replacing, but upgrading and augmenting. Hinton might be a little dramatic, but not for nothing: he's the great-great-grandson of famed mathematician George Boole, the person responsible for Boolean algebra. Obviously, he understands what AI means for research. He's not suggesting, however, that radiologists don't do anything beyond pointing out anomalies in pictures.

Instead, he's intimating that traditional radiology is going to change, and the way we train people now is going to be irrelevant. Which is, again, harsh.

Nobody is saying that medical trainers and educational facilities are doing a bad job. It's just that they need to be replaced with something better. Like machines.

We don't have to give neural networks the keys to the shop; we're not creating autonomous doctor-bots that'll decide to perform surgery on their own without the need for nurses, technicians, or other staff. Instead, we're streamlining things that humans simply can't do, like processing millions of pieces of data at a time.

Tomorrow's radiologist isn't a person who interprets the shadows on an X-ray. They are data scientists. Medical technicians are going to be at the cutting edge of AI technology in the near future. Technology and medicine are necessary companions. If we're going to continue progress in medicine, we need a forward-thinking scientific attitude that isn't afraid of implementing AI.

Nowhere else is the potential to save lives greater than in medical research and diagnostics. What AI brings to the table is worth revolutionizing the industry and shaking it up for good. Some might say its long overdue.

A.I. VERSUS M.D. on The New Yorker



No, Facebook did not shut down AI program for getting too smart – WTOP

Posted: at 9:21 am

AP Photo/Matt Rourke, File

WASHINGTON -- Facebook artificial intelligence bots tasked with dividing items between them have been shut down after the bots started talking to each other in their own language.

But hold off on making comparisons to "Terminator" or "The Matrix."

ForbesBooks Radio host and technology correspondent Gregg Stebben said that Facebook shut down the artificial intelligence program not because the company was afraid the bots were going to take over, but because the bots did not accomplish the task they were assigned to do: negotiate.

The bots are not really robots in the physical sense, Stebben said, but chatbots, little servers or digital chips doing the responding. The bots were just discussing how to divide some items between them, according to Gizmodo.

The language the program created comprised English words with a syntax that would not be familiar to humans, Stebben said.

Below is a sample of the conversation between the bots, called Bob and Alice:

Bob: i can i i everything else

Alice: Balls have zero to me to me to me to me to me to me to me to me to

Though there is a method to the bots' language, FAIR scientist Mike Lewis told FastCo Design that the researchers' interest was in having bots that could talk to people.

"If we're calling it AI, why are we surprised when it shows intelligence?" Stebben said. "Increasingly we are going to begin communicating with beings that are not humans at all."

So should there be fail-safes to prevent an apocalyptic future controlled by machines?

"What we will find is, we will never achieve a state where we have absolute control of machines," Stebben said. "They will continue to surprise us, we will have to do things to continue to control them, and I think there will always be a risk that they will do things that we didn't expect."

WTOP's Dimitri Sotis contributed to this report.


2017 WTOP. All Rights Reserved.


In the red corner: Malware-breeding AI. And in the blue corner: The AI trying to stop it – The Register

Posted: at 9:21 am

Script kid-ai ... What the malware-writing bot doesn't look like

Feature The magic AI wand has been waved over language translation, and voice and image recognition, and now: computer security.

Antivirus makers want you to believe they are adding artificial intelligence to their products: software that has learned how to catch malware on a device. There are two potential problems with that. Either it's marketing hype and not really AI or it's true, in which case don't forget that such systems can still be hoodwinked.

It's relatively easy to trick machine-learning models, especially in image recognition. Change a few pixels here and there, and an image of a bus can be warped so that the machine thinks it's an ostrich. Now take that thought and extend it to so-called next-gen antivirus.
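For intuition, here is a bare-bones numpy sketch of that pixel trick on a toy linear classifier (invented weights and labels, nothing like a real image model): a tiny per-feature nudge, aligned with the model's weights in the spirit of the fast gradient sign method, is enough to flip the predicted label.

```python
# Why "change a few pixels" works: adversarial nudges on a toy linear model.
import numpy as np

rng = np.random.default_rng(42)
w = rng.normal(size=100)             # stand-in for a trained model's weights

def label(v):
    # A linear "classifier": positive score reads "ostrich", negative "bus"
    return "ostrich" if w @ v > 0 else "bus"

x = -0.01 * np.sign(w)               # an input the model labels "bus"

eps = 0.02                           # tiny per-"pixel" perturbation budget
x_adv = x + eps * np.sign(w)         # every pixel nudged toward "ostrich"

print(label(x), "->", label(x_adv))  # bus -> ostrich
```

Each individual change is imperceptibly small, but because all of them push the score in the same direction, their sum crosses the decision boundary.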

Enter Endgame, a cyber-security biz based in Virginia, USA, which you may recall popped up at DEF CON this year. It has effectively pitted two machine-learning systems against each other: one trained to detect malware in downloaded files, and the other trained to customize malware so it slips past the aforementioned detector. The aim is to craft software that can manipulate malware into potentially undetectable samples, and then use those variants to improve machine-learning-based scanners, creating a constantly improving antivirus system.

The key thing is recognizing that software classifiers, from image recognition to antivirus, can suck, and that you have to do something about it.

"Machine learning is not a one-stop shop solution for security," said Hyrum Anderson, principal data scientist and researcher at Endgame. He and his colleagues have teamed up with researchers from the University of Virginia to create this aforementioned cat-and-mouse game that breeds better and better malware and learns from it.

"When I tell people what I'm trying to do, it raises eyebrows," Anderson told The Register. "People ask me, 'You're trying to do what now?' But let me explain."

A lot of data is required to train machine-learning models. It took ImageNet, which contains tens of millions of pictures split into thousands of categories, to boost image recognition models to the performance possible today.

The goal of the antivirus game is to generate adversarial samples to harden future machine learning models against increasingly stealthy malware.

To understand how this works, imagine a software agent learning to play the game Breakout, Hyrum says. The classic arcade game is simple. An agent controls a paddle, moving it left or right to hit a ball bouncing back and forth from a brick wall. Every time the ball strikes a brick, it disappears and the agent scores a point. To win the game, the brick wall has to be cleared and the agent has to continuously bat the ball and prevent it from falling to the bottom of the screen.

Endgame's malware game is somewhat similar, but instead of a ball, the bot is dealing with malicious Windows executables. The aim of the game is to fudge the file, changing bytes here and there, in a way that hoodwinks an antivirus engine into thinking the harmful file is safe. The poisonous file slips through, like the ball carving a path through the brick wall in Breakout, and the bot gets a point.

It does this by manipulating the contents, and changing the bytes in the malware, but the resulting data must still be executable and fulfill its purpose after it passes through the AV engine. In other words, the malware-generating agent can't output a corrupted executable that slips past the scanner but, due to deformities introduced in the binary to evade detection, it crashes or doesn't work properly when run.

The virus-cooking bot is rewarded for getting working malicious files past the antivirus engine, so over time it learns the best sequence of moves for changing a malicious file in a way that it still functions and yet tricks the AV engine into thinking the file is friendly.

It's a much more difficult challenge than tricking image recognition models. The file still has to be able to perform the same function and have the same format. "We're trying to mimic what a real adversary could do if they didn't have the source code," says Hyrum.

It's a method of brute force. The agent and the AV engine are trained on 100,000 input malware seeds; after training, 200 malware files are given to the agent to tamper with. These samples were then fed into the AV engine, and about 16 per cent of the evil files dodged the scanner, we're told. That seems low, but imagine crafting a strain of spyware that is downloaded and run a million times: that turns into 160,000 potentially infected systems under your control. Not bad.

After the antivirus engine model was updated and retrained using those 200 computer-customized files, it was given another fresh 200 samples churned out by the virus-tweaking agent, and the evasion rate dropped by half as the scanner got wise to the agent's tricks.
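The shape of that loop can be sketched in a few lines. Everything below is invented for illustration (a one-feature "AV" threshold, a padding-only "agent"), not Endgame's system: the agent mutates files until they evade, the scanner "retrains" on the evading samples, and the evasion rate falls.

```python
# Toy adversarial loop: mutate-to-evade, then retrain the detector.
def av_flags(file, threshold):
    # "AV": flag a file whose suspicious-byte ratio exceeds the threshold
    return file["suspicious_bytes"] / file["size"] > threshold

def agent_mutate(file, threshold, max_moves=3):
    # The agent may only append benign padding, so the file still "runs"
    mutated = dict(file)
    for _ in range(max_moves):
        if not av_flags(mutated, threshold):
            break                          # slipped past the scanner
        mutated["size"] += 100             # append 100 benign bytes
    return mutated

malware = [{"suspicious_bytes": 50, "size": 200 + 50 * i} for i in range(10)]

threshold = 0.10
evaded = [m for m in (agent_mutate(f, threshold) for f in malware)
          if not av_flags(m, threshold)]
rate_before = len(evaded) / len(malware)

# "Retrain": tighten the threshold so the evading samples are caught next time
if evaded:
    threshold = min(m["suspicious_bytes"] / m["size"] for m in evaded) * 0.9
rate_after = sum(
    not av_flags(agent_mutate(f, threshold), threshold) for f in malware
) / len(malware)

print(rate_before, rate_after)  # evasion drops after retraining
```

The real system replaces the threshold with a machine-learning classifier and the padding trick with a learned sequence of byte-level mutations, but the retraining dynamic is the same: each generation of evasive samples hardens the next scanner.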



The ‘creepy Facebook AI’ story that captivated the media – BBC News

Posted: August 1, 2017 at 6:18 pm


From BBC News: The newspapers have a scoop today - it seems that artificial intelligence (AI) could be out to get us. "'Robot intelligence is dangerous': Expert's warning after Facebook AI 'develop their own language'", says the Mirror. Similar stories have appeared ...

Related coverage:

Dystopian Fear Of Facebook's AI Experiment Is Highly Exaggerated (Forbes)

Facebook didn't kill its language-building AI because it was too smart, it was actually too dumb (Quartz)

This is how Facebook's shut-down AI robots developed their own language and why it's more common than you think (The Independent)


Facebook Buys AI Startup Ozlo for Messenger – Investopedia

Posted: at 6:18 pm


From Investopedia: According to media reports, past demos on the company's website show how an AI digital assistant developed by the company can tell a user if a restaurant is group-friendly by gathering and analyzing all the reviews of the establishment. On its website ...

Related coverage:

Facebook Acquires AI Startup Ozlo (Inc.com)

Facebook buys Ozlo to boost its conversational AI efforts (TechCrunch)

Facebook acquired an AI startup to help Messenger build out its ... (Recode)

