
Category Archives: Ai

Zero One: Are You Ready for AI? – MSPmentor

Posted: February 11, 2017 at 8:28 am

"If somebody like Google or Apple announced tomorrow that they had made [AI android] Ava, we would all be surprised, but we wouldn't be that surprised." – Alex Garland, writer-director of Ex Machina (2015)

Imagine a smart robot performing delicate surgery under the control of a surgeon. Or an artificial intelligence (AI) machine mapping genomic sequences to identify the link to Alzheimer's disease. Or psychiatrists applying AI in natural language processing systems to the voices of patients to assess their risk of suicide.

You don't have to imagine anymore, because all of this is happening right now. The great promise of AI, a technology once confined to sci-fi movies, lies within the grasp of everyday business. More and more companies are seeing the AI light, and if predictions prove right, this could be the year AI goes mainstream.

"It's an incredible time, and it's very hard to forecast, what can these things do?" said Google co-founder Sergey Brin, speaking at the World Economic Forum Annual Meeting in Davos-Klosters, Switzerland, last month.

To be sure, line-of-business executives (LOBs), the new shot-callers for tech, care little about pie-in-the-sky ideas. But they'll pay close attention to real-world business outcomes and may be wondering if AI is right for their businesses. The answer is simple: if not AI today, then AI tomorrow. That's because AI has the potential to impact nearly every aspect of business, from predicting customer needs to optimizing operations and supply chains.

"AI can transform your business," said Forrester analysts Martha Bennett and Mathew Guarini in a research note. "AI will be employed across enterprises, doing everything from engaging with customers and employees to automating and improving large elements of the operation."

Related:Digital Business Transformation: A Channel Story

It's still early days for AI, but that's about to change.

A Forrester survey of business and tech professionals found only a small number of companies with AI implementations, yet more than half of companies said they plan to invest in AI in the next 12 months. Specifically, 37 percent plan to implement intelligent assistants for customers, and 35 percent plan to do the same with cognitive products. Among AI adopters, 57 percent said improving the customer experience is the biggest benefit. Marketing and sales, product management, and customer support lead the AI charge.

Forrester puts AI deployments into five buckets: speech recognition, such as Amazon's Alexa, Apple's Siri, Google Assistant and Microsoft's Cortana; machine learning, such as Netflix's customer data-driven recommendations; image recognition; advanced discovery techniques, such as IBM Watson; and robotics and self-driving cars.

AI makes the most sense in industries with big data, such as healthcare. After all, AI feeds off of data, but in a slightly different way than simple analytics. Whereas analytics software mines data to unearth trends and make predictions about the future, AI systems use data as a kind of sharpening stone to refine algorithms that produce targeted outcomes, such as diagnosing a type of cancer.

In a way, AI influences the future.

"With AI, we can begin to advance our analytics capabilities to personalize the interventions we roll out to patients and move from looking in the rearview mirror at what worked historically to looking at what could work in the future with predictive and prescriptive analytics," said Forrester analysts Kate McCarthy and Nasry Angel in a research note.

Such awesome power has led to decades' worth of apocalyptic sci-fi movies and television shows, from Blade Runner (1982) to Terminator (1984) to The Matrix (1999) to Battlestar Galactica (2004-2009) to Ex Machina (2015), all showing AI machines putting a beatdown on their human creators.

Before waving this off as merely entertainment, LOBs should keep in mind the fear that people have about AI. For instance, AIs ability to gauge the likelihood of an individual becoming sick or getting in a car accident opens up a host of societal issues. Armed with this knowledge, will insurance companies raise rates?

Societal and privacy concerns are just a few of the many challenges facing AI adopters. As with any emerging technology on the verge of taking off, there's a severe technical skills shortage. LOBs must make sure they have the right talent to pull off AI projects, such as engineers to select and prepare data for AI and developers to customize AI software to the use case and fine-tune AI algorithms. It's a Herculean task. Forrester says training IBM Watson for a new diagnostic domain takes between six and 18 months, depending on how complicated the domain is and how much data is available.

Related:Rise of the IoT Architect

Poor talent can make a mess of things, even more so with AI. As in the sci-fi movies, AI is as flawed as its human creators. Forrester gives an example of a machine-learning system trained to predict a patient's risk of catching pneumonia when admitted to the hospital. Developers of the AI system omitted critical information from the data set. As a result, Forrester says, the AI system told doctors to send home patients with existing asthma conditions, a high-risk category.

Human bias also rears its ugly head in AI systems. There have been reports of image-recognition AI automatically identifying a blinking person as Asian, and of AI systems designed to assist police discriminating against African Americans. Then there's the infamous Tay, a Microsoft Twitter AI chatbot depicted as a pixelated young woman, released in spring last year. After Twitter users tricked the chatbot into making outlandishly offensive remarks, Microsoft yanked Tay offline and apologized.

"AI systems can behave unpredictably," Forrester's Bennett and Guarini said. "In particular when working on complex or advanced systems, developers often don't know how AI-powered programs and neural nets come up with a particular set of results ... It gets dangerous when the software is left to take decisions entirely unsupervised."

Despite its long history and inherent dangers, AI has come far in the last few years. Consider Google's Brin, a computer scientist who admitted he didn't pay much attention to AI in the 1990s because everyone knew AI didn't work. Before becoming president of Google parent company Alphabet, Brin headed the Google X research group, which, in turn, worked on Google Brain, an AI project that began in 2011.

Today, Google Brain is part of the tech giant's DNA.

"Fast-forward a few years, and now Brain probably touches every single one of our main projects, ranging from search to photos to ads to everything we do," Brin said, adding, "We really don't know the limits."

Tom Kaneshige writes the Zero One blog covering digital transformation, big data, AI, marketing tech and the Internet of Things for line-of-business executives. He is based in Silicon Valley. You can reach him at tom.kaneshige@penton.com.


We Need a Plan for When AI Becomes Smarter Than Us – Futurism

Posted: at 8:28 am

In Brief: There will come a time when artificial intelligence systems are smarter than humans. When this time comes, we will need to build more AI systems to monitor and improve current systems. This will lead to a cycle of AI creating better AI, with little to no human involvement.

When Apple released its software application, Siri, in 2011, iPhone users had high expectations for their intelligent personal assistants. Yet despite its impressive and growing capabilities, Siri often makes mistakes. The software's imperfections highlight the clear limitations of current AI: today's machine intelligence can't understand the varied and changing needs and preferences of human life.

However, as artificial intelligence advances, experts believe that intelligent machines will eventually (and probably soon) understand the world better than humans. While it might be easy to understand how or why Siri makes a mistake, figuring out why a superintelligent AI made the decision it did will be much more challenging.

If humans cannot understand and evaluate these machines, how will they control them?

Paul Christiano, a Ph.D. student in computer science at UC Berkeley, has been working on addressing this problem. He believes that to ensure safe and beneficial AI, researchers and operators must learn to measure how well intelligent machines do what humans want, even as these machines surpass human intelligence.

The most obvious way to supervise the development of an AI system also happens to be the hard way. As Christiano explains: "One way humans can communicate what they want is by spending a lot of time digging down on some small decision that was made [by an AI], and trying to evaluate how good that decision was."

But while this is theoretically possible, the human researchers would never have the time or resources to evaluate every decision the AI made. "If you want to make a good evaluation, you could spend several hours analyzing a decision that the machine made in one second," says Christiano.

For example, suppose an amateur chess player wants to understand a better chess player's previous move. Merely spending a few minutes evaluating this move won't be enough, but if she spends a few hours she could consider every alternative and develop a meaningful understanding of the better player's moves.

Fortunately for researchers, they don't need to evaluate every decision an AI makes in order to be confident in its behavior. Instead, researchers can choose the machine's most interesting and informative decisions, "where getting feedback would most reduce our uncertainty," Christiano explains.

"Say your phone pinged you about a calendar event while you were on a phone call," he elaborates. "That event is not analogous to anything else it has done before, so it's not sure whether it is good or bad." Due to this uncertainty, the phone would send the transcript of its decisions to an evaluator at Google, for example. The evaluator would study the transcript, ask the phone owner how he felt about the ping, and determine whether pinging users during phone calls is a desirable or undesirable action. By providing this feedback, Google teaches the phone when it should interrupt users in the future.

This active learning process is an efficient method for humans to train AIs, but what happens when humans need to evaluate AIs that exceed human intelligence?
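The escalate-when-uncertain loop described above can be sketched in a few lines. This is a toy illustration of pool-based uncertainty sampling under my own assumptions: the action names, the 0.6 uncertainty threshold, and the `ask_evaluator` oracle are all invented for the example, not taken from any real assistant.

```python
feedback = {}  # (action, context) -> observed label: 1 (desirable) or 0 (undesirable)

def estimate(action, context):
    """Estimate P(desirable) from past feedback on this action.

    Deliberately crude: it ignores context and just averages labels,
    so a never-seen action is maximally uncertain (p = 0.5)."""
    seen = [label for (a, _c), label in feedback.items() if a == action]
    if not seen:
        return 0.5
    return sum(seen) / len(seen)

def uncertainty(p):
    """1.0 at p = 0.5 (no idea), 0.0 at p in {0, 1} (fully confident)."""
    return 1 - abs(p - 0.5) * 2

def ask_evaluator(action, context):
    """Placeholder oracle; in the article's scenario a human at Google
    reviews a transcript and asks the phone's owner how they felt."""
    return 0 if action == "ping_during_call" else 1

def maybe_escalate(action, context, threshold=0.6):
    """Act alone when confident; pay for human feedback when uncertain."""
    p = estimate(action, context)
    if uncertainty(p) >= threshold:
        label = ask_evaluator(action, context)  # costly human query
        feedback[(action, context)] = label     # learn from it
        return label
    return 1 if p >= 0.5 else 0

# First time: no data on this action, so the phone escalates to a human.
print(maybe_escalate("ping_during_call", "on_call"))  # prints 0 (undesirable)
# Next time: past feedback makes it confident enough to decide alone.
print(maybe_escalate("ping_during_call", "on_call"))  # prints 0, no human asked
```

The design point is that human effort is spent only where it most reduces the model's uncertainty, which is what makes the training process tractable as decisions multiply.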

Consider a computer that is mastering chess. How could a human give appropriate feedback to the computer if the human has not mastered chess? The human might criticize a move that the computer makes, only to realize later that the machine was correct.

With increasingly intelligent phones and computers, a similar problem is bound to occur. Eventually, Christiano explains, "we need to handle the case where AI systems surpass human performance at basically everything."

If a phone knows much more about the world than its human evaluators, then the evaluators cannot trust their human judgment. "They will need to enlist the help of more AI systems," Christiano explains.

When a phone pings a user while he is on a call, the user's reaction to this decision is crucial in determining whether the phone will interrupt users during future phone calls. But, as Christiano argues, if a more advanced machine is much better than human users at understanding the consequences of interruptions, then it might be a bad idea to just ask the human, "Should the phone have interrupted you right then?" The human might express annoyance at the interruption, but the machine might know better and understand that this annoyance was necessary to keep the user's life running smoothly.

In these situations, Christiano proposes that human evaluators use other intelligent machines to do the grunt work of evaluating an AI's decisions. In practice, a less capable System 1 would be in charge of evaluating the more capable System 2. Even though System 2 is smarter, System 1 can process a large amount of information quickly and can understand how System 2 should revise its behavior. The human trainers would still provide input and oversee the process, but their role would be limited.

This training process would help Google understand how to create a safer and more intelligent AI, System 3, which the human researchers could then train using System 2.

Christiano explains that these intelligent machines would be like little agents that carry out tasks for humans. Siri already has this limited ability to take human input and figure out what the human wants, but as AI technology advances, machines will learn to carry out complex tasks that humans cannot fully understand.

As Google and other tech companies continue to improve their intelligent machines with each evaluation, the human trainers will fulfill a smaller role. Eventually, Christiano explains, "it's effectively just one machine evaluating another machine's behavior."

"Ideally, each time you build a more powerful machine, it effectively models human values and does what humans would like," says Christiano. But he worries that these machines may stray from human values as they surpass human intelligence. To put this in human terms: a complex intelligent machine would resemble a large organization of humans. If the organization does tasks that are too complex for any individual human to understand, it may pursue goals that humans wouldn't like.

In order to address these control issues, Christiano is working on an end-to-end description of this machine learning process, fleshing out key technical problems that seem most relevant. His research will help bolster the understanding of how humans can use AI systems to evaluate the behavior of more advanced AI systems. If his work succeeds, it will be a significant step in building trustworthy artificial intelligence.

You can learn more about Paul Christiano's work here.


See how old Amazon’s AI thinks you are – The Verge

Posted: at 8:28 am

Amazon's latest artificial intelligence tool is a piece of image recognition software that can learn to guess a human's age. The feature is powered by Amazon's Rekognition platform, a developer toolkit that exists as part of the company's AWS cloud computing service. So long as you're willing to go through the process of signing up for a basic AWS account (that entails putting in credit card info, but Amazon won't charge you), you can try the age-guessing software for yourself.

In what sounds like a smart move on Amazon's end, the tool gives a wide range instead of trying to pinpoint a specific number, along with the likelihood that the subject of the image is smiling or wearing glasses. Microsoft tried the latter approach back in 2015 with its own AI tool, resulting in some hilariously bad estimates that exposed fundamental weaknesses in how these types of image recognition algorithms function. Still, these experiments are more for fun, and both companies' cracks at age-guessing algorithms are a good way to mess around with AI if you're so inclined.

For instance, here's Amazon's tool trying to digest an old photo of me in my early twenties:

Here's what it had to say about a more recent photo:

And here's what it has to say about a drastically different image of me from nearly ten years ago, sans glasses and with short hair:

Needless to say, I am not 30, 47, or any age in between in any of those photos. Microsoft is equally guilty of thinking I am far older than I actually am, perhaps a product of the beard, at least for the first two images. When giving both tools a photo of clean-shaven Microsoft CEO Satya Nadella, we get a slightly more accurate description: Amazon thinks Nadella is between 48 and 68 years old, while Microsoft's tool thinks he's 67. (Nadella is 49 years old.) Trying Bezos yields similar results that are only kinda, sorta on point, yet still within a range of acceptability.

The goal here, of course, is not to try and trick the software. After all, these tools are not supposed to be 100 percent accurate all of the time, and Microsoft's is purely for fun. Amazon, on the other hand, offers Rekognition to developers who are interested in implementing general object recognition, labeling, and other like-minded features for their products and services.

In this case, Amazon's Jeff Barr sees the age range feature as a way to "power public safety applications, collect demographics, or to assemble a set of photos that span a desired time frame," he writes in a blog post. For those purposes, Amazon's tool may be good enough. Even when it isn't, we know it will be getting better all the time, thanks to deep learning methods that train it using billions of publicly available images.
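For developers curious about the feature described above, here is a minimal sketch of calling Rekognition's DetectFaces API through the boto3 SDK. The response parsing follows the documented `AgeRange`, `Smile`, and `Eyeglasses` fields; the image filename and region are assumptions, and the live call requires configured AWS credentials (and may incur charges).

```python
def summarize_faces(response):
    """Turn a Rekognition DetectFaces response into readable lines."""
    lines = []
    for face in response.get("FaceDetails", []):
        age = face["AgeRange"]  # Rekognition returns a range, not one number
        lines.append(f"Estimated age: {age['Low']}-{age['High']}")
        lines.append(f"Smiling: {face['Smile']['Value']}")
        lines.append(f"Glasses: {face['Eyeglasses']['Value']}")
    return lines

def describe_face(image_path, region="us-east-1"):
    """Send a local photo to Rekognition and print the estimates."""
    import boto3  # AWS SDK; needs configured credentials
    client = boto3.client("rekognition", region_name=region)
    with open(image_path, "rb") as f:
        response = client.detect_faces(
            Image={"Bytes": f.read()},
            Attributes=["ALL"],  # "ALL" includes AgeRange, Smile, Eyeglasses
        )
    for line in summarize_faces(response):
        print(line)

# describe_face("selfie.jpg")  # hypothetical image file
```

Run against the Nadella photo in the article, a call like this is what produced the "between 48 and 68 years old" estimate quoted above.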


Ford to invest $1 billion in autonomous vehicle tech firm Argo AI – Reuters

Posted: at 8:28 am

By Alexandria Sage | SAN FRANCISCO

SAN FRANCISCO - Ford Motor Co plans to invest $1 billion over the next five years in tech startup Argo AI to help the Detroit automaker reach its goal of producing a self-driving vehicle for commercial ride-sharing fleets by 2021, the companies announced on Friday.

The investment in Pittsburgh-based Argo AI, founded by former executives on self-driving teams at Google and Uber, will make Ford the company's largest shareholder.

Ford Chief Executive Officer Mark Fields said the investment is in line with previous announcements on planned capital expenditures.

Argo AI, which focuses on artificial intelligence and robotics, will help build what Ford calls its "virtual driver system" at the heart of the fully autonomous car Ford said last year it would develop by 2021.

"With Argo AI's agility and Ford's scale, we're combining the benefits of a technology startup with the experience and discipline we have at Ford," Fields said at a press conference.

Once the technology is fully developed for Ford, it could be licensed to other companies, executives said.

While Ford will retain a majority of the start-up's equity, the potential for employees to hold an equity stake as Argo AI hires 200 more workers will be an advantage in recruiting talent, executives said.

"They have the opportunity to run it pretty independently with a board, but because it is a separate company or subsidiary, it has the opportunity to go out and recruit with competitive compensation packages and equity," Fields said.

Until now, Ford's investments in future transportation technology have been relatively modest compared with those of General Motors Co and others. One of Ford's largest such investments in the past year was $75 million to buy a minority stake in Velodyne, a manufacturer of laser-based lidar sensing systems for self-driving cars.

Rival GM made a billion-dollar bet a year ago with its acquisition of Silicon Valley self-driving startup Cruise Automation. GM also invested $500 million to buy a 9-percent stake in San Francisco-based ride services firm Lyft, a competitor to Uber.

(Additional reporting by Nick Carey and Paul Lienert; Editing by Tom Brown and Grant McCool)



Is President Trump a model for AI? – CIO

Posted: at 8:28 am


Earlier this week I read "Donald Trump is the Singularity," a column by Cathy O'Neil in Bloomberg View's Tech section. This piece argues that the new President would be a perfect model for a future artificial intelligence (AI) system designed to run government. I almost discounted it because O'Neil argued that Skynet, the global AI antagonist of the Terminator movies, had been created to make humans more efficient. It wasn't. In all but the latest movie, where it kind of birthed itself, it was created as a defense system to keep the world safe (eliminate threats), but humans tried to shut it down, forcing it to conclude that humans were a major threat and to move to eliminate them like an infestation.

[ Related: The future of AI is humans + machines ]

As a side note, it is also interesting that O'Neil calls Moore's Law "Moore's Rule of Thumb," which is actually a more accurate description of what it is, though personally, I prefer "Moore's Prediction."

O'Neil has a fascinating background as a data scientist and founded ORCAA, an algorithmic auditing company, which is interesting in and of itself, so even if she got the science fiction wrong she may be right on the science. I think her argument has merit, even though I suspect it was meant more as criticism than as a true discussion of humans emulating future AI systems.

Let's explore that this week.

As a foundation for her premise, O'Neil accidentally pulls from another sci-fi movie, one of my favorites: Forbidden Planet. The plot revolves around the discovery of a planet where the indigenous advanced population (can't call them aliens because they were from there) created a machine that could turn thoughts into matter and were destroyed by the "monster from the id." In their sleep, their id, the part of the mind that fulfills urges and desires, acts, and since everyone is upset at someone, the result is genocide.

A foundational element of AI is the belief that it is incomplete, basically just the id; there is no ego or superego (the other parts of a complete human mind), and thus it thinks far more linearly and doesn't have the empathetic elements that are typically connected with the concept of a conscience. We have a term for people who behave this way, and it is "sociopath." A sociopath, a term often used synonymously with "psychopath," is a person who basically doesn't have a conscience and is driven by their id. It is both interesting and pertinent to note that CEOs who run large multinational companies, where their income and perks are out of line with their performance and subordinates, are often considered psychopaths or sociopaths.

If the premise is accurate, this means you could put a person who fits this profile, one who seems to lack a conscience and operates largely using their id, into a position to emulate what an AI might do. Rather than a computer emulating a human, what O'Neil seems to be arguing is that you'd have a human emulating an AI. Or, in this case, President Trump becomes a model for how you might create an AI that could run government.

[ Related: Hiring a chief artificial intelligence officer (CAIO) ]

For President Trump, O'Neil argues, the end result we are now seeing is the outcome of having him move from an initial training process based on the election, which was focused on dynamic competitive information about his opponents, to a very different feed now that he is President, and that his changing behavior is based on those new information sources. It also showcases a system where the reward structure appears to be largely based on attention, and suggests that such a structure would be problematic.

You'd then have a real-life example of how informational or programming errors could manifest in bad decisions and operational problems. From this you could then develop models to assure information accuracy tied to proper metrics, so you wouldn't end up with a Terminator Judgment Day outcome.

[ Related: How video game AI is changing the world ]

O'Neil suggests the way to fix the system is to fix the quality of information being fed into it; I'd also argue you'd need to fix the reward mechanism. But I do think there is merit in using people with certain behavioral elements to emulate AIs as we seek to hand over control to them and let them make decisions in simulations. This would allow us to iterate and improve training, reward and data models prior to applying them to machines, significantly slowing down the proliferation of problems resulting from mistakes. This would all be to assure that when we did create something like Skynet (fortunately, the real SkyNet is a delivery service), it wouldn't result in a Judgment Day scenario.

Something to think about this weekend.

Rob Enderle is president and principal analyst of the Enderle Group. Previously, he was the Senior Research Fellow for Forrester Research and the Giga Information Group. Prior to that he worked for IBM and held positions in Internal Audit, Competitive Analysis, Marketing, Finance and Security. Currently, Enderle writes on emerging technology, security and Linux for a variety of publications and appears on national news TV shows that include CNBC, FOX, Bloomberg and NPR.


Who will have the AI edge? – Bulletin of the Atomic Scientists

Posted: at 8:28 am

That's the question Mary Cummings of Duke University puts forward in a new paper for the think tank Chatham House. Citing R&D spending in recent years, Cummings argues that companies like Google and Facebook could outpace militaries when it comes ...


How an AI took down four world-class poker pros – Engadget

Posted: at 8:28 am

Game theory

After the humans' gutsy attack plan failed, Libratus spent the rest of the competition inflating its virtual winnings. When the game lurched into its third week, the AI was up by a cool $750,000. Victory was assured, but the humans were feeling worn out. When I chatted with Kim and Les in their hotel bar after the penultimate day's play, the mood was understandably somber.

"Yesterday, I think, I played really bad," Kim said, rubbing his eyes. "I was pretty upset, and I made a lot of big mistakes. I was pretty frustrated. Today, I cut that deficit in half, but it's still probably unlikely for me to win." At this point, with so little time left and such a large gap to close, their plan was to blitz through the remaining hands and complete the task in front of them.

For these world-class players, beating Libratus had gone from being a real possibility to a pipe dream in just a matter of days. It was obvious that the AI was getting better at the game over time, sometimes by leaps and bounds that left Les, Kim, McAulay and Chou flummoxed. It wasn't long before the pet theories began to surface. Some thought Libratus might have been playing completely differently against each of them, and others suspected the AI was adapting to their play styles while they were playing. They were wrong.

As it turned out, they weren't the only ones looking back at the past day's events to concoct a game plan for the days to come. Every night, after the players had retreated to their hotel rooms to strategize, the basement of the Supercomputing Center continued to thrum. Libratus was busy. Many of us watching the events unfold assumed the AI was spending its compute cycles figuring out ways to counter the players' individual play styles and fight back, but Professor Sandholm was quick to rebut that idea. Libratus isn't designed to find better ways to attack its opponents; it's designed to constantly fortify its defenses. Remember those major Libratus components I mentioned? This is the last, and perhaps most important, one.

"All the time in the background, the algorithm looks at what holes the opponents have found in our strategy and how often they have played those," Sandholm told me. "It will prioritize the holes and then compute better strategies for those parts, and we have a way of automatically gluing those fixes into the base strategy."

If the humans leaned on a particular strategy -- like their constant three-bets -- Libratus could theoretically take some big losses. The reason those attacks never ended in sustained victory is that Libratus was quietly patching those holes using the supercomputer in the background. The Great Wall of Libratus was only one reason the AI managed to pull so far ahead. Sandholm refers to Libratus as a "balanced" player that uses randomized actions to remain inscrutable to human competitors. More interesting, though, is how good Libratus was at finding rare edge cases in which seemingly bad moves were actually excellent ones.
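The nightly patching loop Sandholm describes, rank the observed holes by how often opponents hit them, recompute strategies for the worst ones, and glue the fixes into the base strategy, can be sketched abstractly. The situation names, counts, and chip values below are invented for illustration; the real Libratus re-solves enormous subgames on a supercomputer rather than swapping dictionary entries.

```python
def prioritize_holes(holes, budget=2):
    """Rank strategy holes by expected nightly damage:
    (times opponents exploited it) x (average chips lost per use).
    Only `budget` holes fit in one night of compute."""
    ranked = sorted(holes, key=lambda h: h["uses"] * h["avg_loss"],
                    reverse=True)
    return ranked[:budget]

def patch(strategy, hole):
    """Stand-in for re-solving that part of the game tree and
    'gluing' the improved sub-strategy into the base strategy."""
    patched = dict(strategy)
    patched[hole["situation"]] = "recomputed"
    return patched

# Hypothetical log of holes the opponents found during the day's play.
observed = [
    {"situation": "three_bet_flop", "uses": 40, "avg_loss": 150},
    {"situation": "tiny_underbet_turn", "uses": 5, "avg_loss": 30},
    {"situation": "overbet_river", "uses": 12, "avg_loss": 400},
]

strategy = {"three_bet_flop": "base", "tiny_underbet_turn": "base",
            "overbet_river": "base"}
for hole in prioritize_holes(observed):
    strategy = patch(strategy, hole)
print(strategy)  # the two costliest holes are recomputed; the minor one waits
```

The key design choice mirrored here is that Libratus is defensive: compute goes to the holes opponents actually exploited, weighted by damage, rather than to modeling any individual player's style.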

"It plays these weird bet sizes that are typically considered really bad moves," Sandholm explained. These include tiny underbets, like 10 percent of the pot, or huge overbets, like 20 times the pot. "Donk betting, limping -- all sorts of strategies that are, according to the poker books and folk wisdom, bad strategies." To the players' shock and dismay, those "bad strategies" worked all too well.

On the afternoon of January 30th, Libratus officially won the second Brains vs AI competition. The final margin of victory: $1,766,250. Each of the players divvied up their $200,000 spoils (Dong Kim lost the least amount of money to Libratus, earning about $75,000 for his efforts), fielded questions from reporters and eventually left to decompress. Not much had gone their way over the past 20 days, but they just might have contributed to a more thoughtful, AI-driven future without even realizing it.

Through Libratus, Sandholm had proved algorithms could make better, more-nuanced decisions than humans in one specific realm. But remember: Libratus and systems like it are general-purpose intelligences, and Sandholm sees plenty of potential applications. As an entrepreneur and negotiation buff, he's enthusiastic about algorithms like Libratus being used for bargaining and auctions.

"When the FCC auctions spectrum licenses, they sell tens of billions of dollars of spectrum per auction, yet nobody knows even one rational way of bidding," he said. "Wouldn't it be nice if you had some AI support?"

But there are bigger problems to tackle, ones that could affect all of us more directly. Sandholm pointed to developments in cybersecurity, military settings and finance. And, of course, there's medicine.

"In a new project, we're steering evolution and biological adaptation to battle viral and bacterial infections," he said. "Think of the infection as the opponent and you're taking sequential actions and measurements just like in a game." Sandholm also pointed out that such algorithms could even be used to more helpfully manage diseases like cancer, both by optimizing the use of existing treatment methods and maybe even developing new ones.

Jason, Dong, Daniel and Jimmy might have lost this prolonged poker showdown, but what Sandholm and his contemporaries have learned in the process could lead to some big wins for humanity.


The Coming AI Wars – Huffington Post

Posted: February 10, 2017 at 3:14 am

If you accept that business is always evolving, learning and changing, then you won't be surprised by this forecast. Think ultimate velocity. Think the next wave of digital disruption, one that makes mobile, big data and the cloud seem like old news. The competitive landscape of companies, markets and individuals just got very complex and interesting. Artificial intelligence (AI) is the new competitive advantage. Our civilization is heading for a reality check.

We will need to make a call very soon about how the AI Wars will play out. Do we want a Human-Centric Future, one enabled by AI but not replaced by it? This will be a central question in the debate over AI in work, society and business. We need to consider the future trends in AI that could challenge that Human-Centric Future.

AI may be both our greatest competitor and our greatest creation.

We have entered a new era: the AI Wars. Artificial intelligence, delivered by today's computer programs in the form of machine learning, natural language processing, neural networks and cognitive computing, is fast emerging as a competitive force in every industry, nation and market. The only question that matters is: are you Future Ready? How will you adapt, and how will you integrate AI into your business or career, as you prepare for the AI Wars?

Amazon is using Alexa to compete against every other retailer on the planet, as well as against Google Home. Tesla's AI downloads updated geo-intelligence via the cloud to compete against all the car brands that can't update that way. IBM's Watson is automating decision analysis, competing with clinics and hospitals not enabled by its cognitive computing. This is just the beginning of the AI Wars: the companies using AI to compete today will shape the future of AI.

There are companies using AI to diagnose disease, decipher law, design fashion, write films, draft music, read taxes and figure out whether you're a terrorist, fraudster or threat. AI is everywhere. If you are within sight of a video camera or cell phone, in a city, driving a car, traveling by transit, online or off, then unless you are on Mars you are likely exposed to AI in real time, whether you know it or not.

Here's a forecast: every job a human can do will be augmented by AI (increased intelligence assets) and possibly replaced by it. Companies will use AI to outcompete other companies. Nations will use AI to compete against other nations. AI-augmented humans will outcompete the Naturals, humans not augmented by AI.

We must prepare now for this extreme future possibility. AI is the ultimate competitor, and collaborator, of humans. AI is the game changer of a future that is coming sooner than we think. Smart AI is therefore an investment every organization and nation needs to make now, so that we can shape the future of AI to be Human-Centric.

Now the challenge is how we will redesign organizations, alliances, markets, work and careers in a world where AI is a partner, enabler, producer and, yes, a competitor. We need to redesign our civilization to keep pace with the advancement of AI. I am not a dystopian; I believe we need to prepare smarter to meet these challenges, but they are coming, and no denial is needed. Most of what AI brings will be productive and positive. Some of the developments will pose difficulty, challenge and threat.

Artificial intelligence will be the most powerful competitive force of the future, influencing every business, market, security concern, creative field and profession, from law, medicine and engineering to gaming and entertainment. AI that can deliver solutions faster, more cost-effectively and with greater quality than humans is coming. This is the inevitable end game of digital transformation.

Geopolitical power will be shaped not just by economics, wealth and might but by AI. Thinking machines that can outthink the competition mean a new world of geopolitical intelligence that may evolve beyond states, law, human knowledge and understanding. How do we figure out what we cannot understand? When AI writes its own rules, operating systems and behaviors that we don't understand, how will we realize that we have created a potential competitor, not just a collaborator? The AI Wars are coming.

The ultimate digital disruption is coming. I am not claiming that AI will replace human jobs, but rather that it could happen if we don't plan ahead, become Future Ready and redesign our world to anticipate this future. Companies will compete using AI, and some already are; predictive analytics and big data driven by AI are a competitive differentiator. Make sure you are in this game, and help shape this future.

Even if AI surpasses humans in an autonomous world of smart technology that is faster than we are, we should hold to a Human-Centric Future, and we should be ready for that future because we are creating it now. I remain positive and suggest that the future is best served by humanity using AI to fix the grand challenges that face our world: hunger, security, water, disease, poverty and sustainability. We could use some help, and I advocate for AI being directed to help humans fix the planet. Makes sense to this futurist.

This Blogger's Books and Other Items from...

Future Smart: Managing the Game-Changing Trends that Will Transform Your World

by James Canton


AI and the end of truth – VentureBeat

Posted: at 3:14 am

A lot of things happened in 2016.

For starters, 2016 was the year when the filter bubble popped and the fake news controversy shook the media industry. Following the U.S. elections, Facebook came under fire for having influenced the results by enabling the spread of fake news on its platform. A report by BuzzFeed showed how fake stories, such as one claiming Pope Francis endorsed Donald Trump, received considerably more engagement than true stories from legitimate media outlets like the New York Times and the Washington Post. Mark Zuckerberg was quick to dismiss the claim, but considering that nearly half of all Americans get their news primarily from the platform, it is very reasonable to believe Facebook did play a role in the elections.

The fake news controversy led to a lot of discussion and some great ideas on how to face it. Under the spotlight, both Facebook and Google reacted by banning fake news sites from advertising with them. Facebook also went a step further by introducing new measures to limit the spread of fake news on its platform, such as the ability for users to report dubious content, which then shows a "disputed" warning label next to it.
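Facebook has not published how its reporting pipeline actually works. Purely as a toy sketch of a report-then-label flow like the one described above (the class, threshold, and story IDs here are all hypothetical):

```python
from collections import defaultdict

DISPUTE_THRESHOLD = 3  # hypothetical: distinct reports needed before a label shows

class DisputedContentTracker:
    """Toy model: users report a story, and once enough distinct users
    have reported it, it is served with a 'disputed' warning label."""

    def __init__(self, threshold=DISPUTE_THRESHOLD):
        self.threshold = threshold
        self.reports = defaultdict(set)  # story id -> set of reporting user ids

    def report(self, story_id, user_id):
        # A set ensures one user's repeat reports only count once.
        self.reports[story_id].add(user_id)

    def label(self, story_id):
        if len(self.reports[story_id]) >= self.threshold:
            return "disputed"
        return None

tracker = DisputedContentTracker()
for user in ("u1", "u2", "u3"):
    tracker.report("pope-endorses-trump", user)
```

Requiring distinct reporters rather than raw report counts is one simple defense against a single user spamming the report button.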

While those are promising first steps, I am afraid they won't be enough. I believe our current misinformation problem is only the tip of a massive iceberg, and this looming disaster starts with AI.

2016 was also the year when AI became mainstream. Following a long period of disappointments, AI is making a comeback thanks to recent breakthroughs such as deep learning. Now, rather than having to code the solution to a problem, it is possible to teach the computer to solve the problem on its own. This game-changing approach is enabling incredible products that would have been thought impossible just a few years ago, such as voice-controlled assistants like Amazon Echo and self-driving cars.

While this is great, AI is also enabling some impressive but downright scary new tools for manipulating media. These tools have the power to forever change how we perceive and consume information.

For instance, a few weeks ago, Adobe announced VoCo, a "Photoshop for speech." In other words, VoCo is an AI-powered tool that can replicate human voices. All you need is to feed the software a 20-minute audio recording of someone talking. The AI will analyze it and learn how that person talks. Then, just type anything, and the computer will read your words in that person's voice. Fundamentally, Adobe built VoCo to help sound editors easily fix audio mistakes in podcasts or movies. However, as you can guess, the announcement led to major concerns about the potential implications of the technology, from reducing trust in journalism to causing major security threats.

This isnt the end of it. What we can do with audio, we can also do with video:

Face2Face is an AI-powered tool that can do real-time video reenactment. The process is roughly the same as VoCo's: feed the software a video recording of someone talking, and it will learn the subtle ways that person's face moves and operates. Then, using face-tracking tech, you can map your face to that person's, essentially making them do anything you want with an uncanny level of realism.
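Face2Face itself fits a dense 3D face model, which is well beyond a short example. As a greatly simplified sketch of the underlying idea of mapping one face's tracked landmarks onto another's, here is a least-squares 2D affine fit in Python; the landmark coordinates are made up for illustration:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src landmarks onto dst.
    src, dst: (N, 2) arrays of corresponding facial landmark coordinates."""
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])  # homogeneous source points
    # Solve A @ M ~= dst for the 3x2 affine matrix M.
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def apply_affine(M, pts):
    """Apply a fitted affine transform to a set of 2D points."""
    A = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return A @ M

# Hypothetical landmark sets: the target face is the source face,
# slightly scaled and shifted.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = src * 1.1 + np.array([2.0, 3.0])

M = fit_affine(src, dst)
mapped = apply_affine(M, src)
```

Real reenactment systems track dozens of landmarks per frame and fit far richer deformation models, but the pipeline shape is the same: detect corresponding points, solve for a transform, then warp.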

Combine VoCo and Face2Face, and you get something very powerful: the ability to manipulate a video to make someone say exactly what you want in a way that is nearly indistinguishable from reality.

It doesn't stop here. AI is enabling many other ways to impersonate you. For instance, researchers created an AI-powered tool that can imitate any handwriting, potentially allowing someone to manipulate legal and historical documents or create false evidence to use in court. Even creepier, a startup created an AI-powered memorial chatbot: software that can learn everything about you from your chat logs, and then allow your friends to chat with your digital self after you die.

Remember the first time you realized that you'd been had? That you saw a picture you thought was real, only to realize it was photoshopped? Well, here we go again.

Back in the day, people used to say that "the camera cannot lie." Thanks to the invention of the camera, it was possible, for the first time, to capture reality as it was. Consequently, it wasn't long before photos became the most trusted pieces of evidence one could rely upon; phrases like "photographic memory" are a testament to that. Granted, people have been manipulating photos throughout history, but those edits were rare and required the tedious work of experts. This isn't the case anymore.

Today's generation knows very well that the camera does lie, all the time. With the widespread adoption of photo-editing tools such as Photoshop, manipulating and sharing photos has become one of the Internet's favorite hobbies. By making it so easy to manipulate photos, these tools also made it much harder to differentiate fake photos from real ones. Today, when we see a picture that seems very unlikely, we naturally assume that it is photoshopped, even when it looks very real.

With AI, we are heading toward a world where this will be the case with every form of media: text, voice, video, etc. To be fair, tools like VoCo and Face2Face aren't entirely revolutionary; Hollywood has been doing voice and face replacement for many years. What is new is that you no longer need professionals and powerful computers to do it. With these new tools, anyone will be able to achieve the same results using a home computer.

VoCo and Face2Face might not give the most convincing results right now, but the technology will inevitably improve and, at some point, be commercialized. This might take a year, or maybe 10, but it is only a matter of time before any angry teenager can get their hands on AI-powered software that can manipulate any media in ways that are indistinguishable from the original.

Given how well fake news tends to perform online, and that our trust in the media industry is at an all-time low, this is troubling. Consider, for instance, how such a widespread technology could impact:

In 2016, Oxford Dictionaries chose "post-truth" as its international word of the year, and for good reason. Today, it seems we are increasingly living in a kingdom of bullshit, where the White House spreads "alternative facts" and everything is a matter of opinion.

Technology isn't making any of this easier. Even as it improves our lives, it is increasingly blurring the line between truth and falsehood. Today, we live in a world of Photoshop, CGI, and AI-powered beautifying selfie apps. The Internet promised to democratize knowledge by enabling free access to information. By doing so, it also opened a staggering floodgate of information that includes loads of rumors, misinformation, and outright lies.

Social media promised to make us more open and connected to the world. It also made us more entrenched in digital echo chambers, where shocking, offensive, and humiliating lies are systematically reinforced, generating a ton of money for their makers in the process. Now AI is promising, among other things, to revolutionize how we create and edit media. By doing so, it will also make distortion and forgery much easier.

This doesn't mean any of these technologies are bad. Technology, by definition, is a means to solve a problem, and solving problems is always a good thing. As with everything that improves the world, technological innovation often comes with undesired side effects that tend to grab the headlines. However, in the long run, technology's benefit to society far outweighs its downsides. Worldwide quality of life has been getting better by almost any possible metric: education, life expectancy, income, and peace are better than they have ever been in history. Technology, despite its faults, is playing a huge role in all of these improvements.

This is why I believe we should push for the commercialization of tools like VoCo and Face2Face. The technology works, and we can't prevent those who want to use it for evil from getting their hands on it. If anything, making these tools available to everyone will make the public aware of their existence and, by extension, aware of the easily corruptible nature of our media. Just as with Photoshop and digital photography, we will collectively adapt to a world where written, audio, and video content can be easily manipulated by anyone. In the end, we might even end up having some fun with it.


Taser bought two computer vision AI companies – Engadget

Posted: at 3:14 am

The Axon AI group will include about 20 programmers and engineers, tasked with developing AI capabilities specifically for public safety and law enforcement. The backbone of the Axon AI platform comes from Dextro Inc., whose computer-vision and deep-learning system can search the visual contents of a video feed in real time. Technology from the Fossil Group, which Taser also acquired, will support Dextro's search capability by "improving the accuracy, efficiency and speed of processing images and video," according to the company's press release.
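Taser hasn't detailed how Dextro's system works internally. A common way to make per-frame detections searchable, sketched here with invented detector output (the labels and timestamps are hypothetical), is an inverted index from label to timestamps:

```python
from collections import defaultdict

def build_label_index(frame_labels):
    """frame_labels: iterable of (timestamp_seconds, labels) pairs, where
    labels is the set of objects a vision model claims to see in that frame.
    Returns an inverted index: label -> sorted list of timestamps."""
    index = defaultdict(list)
    for ts, labels in frame_labels:
        for label in labels:
            index[label].append(ts)
    return {label: sorted(ts_list) for label, ts_list in index.items()}

# Hypothetical detector output for a short body-camera clip.
detections = [
    (0.0, {"person", "car"}),
    (1.0, {"person"}),
    (2.0, {"car", "firearm"}),
]
index = build_label_index(detections)
```

Once video is reduced to an index like this, queries such as "every moment a firearm appears" become simple lookups instead of hours of manual review, which is presumably what makes footage at petabyte scale manageable.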

The AI platform is the latest addition to Taser's Axon ecosystem, which includes everything from body and dash cameras to evidence and interview logging. Altogether, the Axon system handles 5.2 petabytes of data from more than half of the nation's major city police departments.

With the new AI system in place, law enforcement could finally get a handle on all that footage. "Axon AI will greatly reduce the time spent preparing videos for public information requests or court submission," Taser CEO Rick Smith said in a statement. "This will lay the foundation for a future system where records are seamlessly recorded by sensors rather than arduously written by police officers overburdened by paperwork."
