
Category Archives: Artificial Intelligence

Artificial Intelligence Is Coming Whether You Like It Or Not – Mother Jones

Posted: February 6, 2017 at 3:21 pm

SIPA Asia via ZUMA Wire

Atrios today:

Self-Checkouts

Those still a thing? I mean, I know they are, but around me the 3 major supermarkets within walking distance got rid of them....Anyway, I know they still exist, but I do think our robot future is not quite as inevitable as people think. Worrying about the impact of future automation on jobs seems to be a cool tech way of ignoring the current fucked and bullshit jobs situation. And, yes, automation has been going on for decades, which is actually my point. There's nothing new about it, and I don't know why people think there will be this sudden automation discontinuity. The robots have been here for awhile, and they aren't really going away, but that doesn't mean the sci-fi dystopian workless future is just around the corner. Shit is fucked up and bullshit enough without worrying about things which haven't happened yet, and likely won't.

It really doesn't matter if artificial intelligence is distracting us from whatever you think the "real" problem is. It's coming anyway. The speed of the AI revolution depends solely on fundamental factors (mostly continued reductions in the cost of parallel computing power) and the level of interest in AI software development. The fundamental factors are obviously still barreling ahead, and it sure looks like the free market has a ton of interest too:

Besides, AI is the real problem. As we all know (don't we?), the decline of manufacturing in the US has far more to do with automation than with trade or globalization. That decline set up the conditions for an angry working class in three northwestern states that finally decided it had found a savior in a guy who claimed it was all the fault of a bunch of foreigners. So now Donald Trump is president. How much more real can you get?

And that was just old-fashioned dumb automation. Smart automation is going to have a far bigger and far faster effect. We're not very far off from the first real destruction of an industry (probably long-haul trucking) thanks to smart automation, and after that it's going to come thick and fast.

So what are we going to do? Will our future be in the hands of demagogues who gain power by lashing out at scapegoats while they work hard to make sure that rich people get all the benefits of AI? Or will it be in the hands of people who actually give a damn about the working class and understand that a world of increasing automation requires a dramatic rethink of basic economics? I would sure like it to be the latter.

Unfortunately, like global warming, the effects of AI are slow and invisible, on a human timescale anyway. So it's easy to pretend, no matter how idiotic this is, that AI is just a rerun of the Industrial Revolution. It's easy to pretend that each new advance isn't really a step toward true AI. It's easy to pretend that each individual industry to fall is just a special case. It's easy to pretend that something else is always more important.

Is AI coming soon? I find this question too boring to spend much time on anymore. Of course it's coming soon. The only question I'm interested in is what we're going to do about it. I keep pondering this, and I keep failing to come up with any likely answers that are very optimistic in the medium term. Maybe I'm not thinking outside the box enough. But it sure looks like we're determined to keep our collective heads in the sand for a long time. At best, the result is going to be a grim future of plutocracy for some and the dole for everyone else. At worst, it's going to be a future of global genocide (do you think there's enough aid in the world to keep Bangladesh afloat when there's no longer any work there?).

Eventually everything will work out, probably after a lot of suffering and a popular revolt. But wouldn't it be nice to avoid all that?

Oh, and those self-checkout machines? I don't know about Philly, but there's hardly a supermarket within ten miles of me that doesn't have them. Not only are they still a thing, but they're only going to get better. So sorry about all those nice union jobs as checkers and baggers.

Here is the original post:

Artificial Intelligence Is Coming Whether You Like It Or Not - Mother Jones

Posted in Artificial Intelligence | Comments Off on Artificial Intelligence Is Coming Whether You Like It Or Not – Mother Jones

Montreal sees its future in smart sensors, artificial intelligence (with video) – Computerworld

Posted: at 3:21 pm

The Quebecois city of Montreal has long been known as a hotbed of creativity -- home of Cirque du Soleil and a hub for companies in the online gaming and special effects industries, not to mention its place as a financial and trade capital.

Creativity played a key role when the city of 2 million (with 4 million regionally) competed against other municipalities globally to win the 2016 title of Intelligent Community of the Year.

And now that commitment to creativity is spurring the city to explore a range of unique new smartphone apps and other startup-generated initiatives that leverage sensors, data collection and analysis, and machine learning to deal with snow removal, ever-increasing traffic and other municipal challenges.

Public Wi-Fi, smart mobility and digital public services are just some of the 70 municipal projects detailed in the city's Smart and Digital City Action Plan, begun in 2015. More than half of the projects are expected to be finished by 2018, though some will take longer.

"Montreal is known as the place 'where Shakespeare meets Moliere.' It's a creativity hub," says Harout Chitilian, the elected official in charge of the city's smart city initiatives and technology. "All these things meshing together make Montreal one of the greatest startup digital ecosystems."

By intent, the government has made that startup ecosystem a key component of its smart city push, says Chitilian, who serves as vice president of the city's executive committee, the executive branch of the municipal government that includes Mayor Denis Coderre.

Of the dozens of initiatives currently underway in Montreal, several involve partnerships with the private sector in which the city, Quebec Province and businesses share costs. Those projects range from a high-speed, fiber-optic Scientific Information Network to eight different smart mobility and parking projects.

The principal driver of this partnership is InnoCité MTL, an independent, non-profit tech accelerator that receives both city and business financial support. Housed in the historic Notman House in downtown Montreal, InnoCité MTL has already fostered more than 15 startups in just over a year.

Notman House was alive with activity when Computerworld visited during a cold snap in mid-December, 2016 as part of a three-day tour of this smart city. Here's what we found.

The city government, along with the Province of Quebec and members of the academic community, has put special focus on artificial intelligence. Those efforts meld well with private sector startups that likewise are tapping the power of AI.

One such startup is Infra.AI, which intends to use machine learning and artificial intelligence to scan high-resolution images of the city's streets and buildings. "The benefits of AI are numerous," says co-founder François Maillet. "The fact that Montreal is serious about smart city and investing in it, there's a direct and positive impact in the startup community and the R&D. For the city itself, it provides better services to the citizens."

LIDAR images can help municipalities like Montreal monitor city infrastructure to identify such changes in status as deteriorating bridges, broken windows or building code violations.

With digital image information from satellites, low-flying planes and LIDAR-equipped city vehicles, technology under development at Infra.AI will make it possible for Montreal and other cities to provide almost-real-time data on street conditions or the safety of roads and bridges.

That data can be combined with information from traffic video sensors and sensors on buildings, says Maillet, who also co-founded a related startup, MLDB.AI, that is working on a machine-learning database.

The potential applications are far-ranging. A firetruck speeding to a fire might be automatically advised that there's an obstruction in the roadway, allowing it to take another pathway. Or a pothole larger than a foot could be spotted, automatically dispatching a road crew to patch it. AI can even help identify a sagging highway bridge span, noticing a small drop when compared with the previous scans from days or weeks earlier.
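The underlying computation in that kind of scan-to-scan comparison is ordinary change detection. The short Python sketch below is a minimal illustration, not Infra.AI's actual pipeline; the grid, threshold and values are all invented, but it shows how a small elevation drop between two LIDAR-derived surface models could be flagged automatically.

import numpy as np

def flag_subsidence(scan_old, scan_new, drop_threshold_m=0.05):
    """Compare two gridded elevation scans (metres) of the same area.

    Returns the row/column indices of cells whose elevation fell by more
    than drop_threshold_m between the two passes.
    """
    drop = scan_old - scan_new            # positive where the surface sank
    return np.argwhere(drop > drop_threshold_m)

# Toy example: a 100 x 100 elevation grid with a small depression appearing.
rng = np.random.default_rng(0)
baseline = 10.0 + 0.01 * rng.standard_normal((100, 100))   # last month's scan
current = baseline.copy()
current[40:43, 60:63] -= 0.08                               # 8 cm local drop

alerts = flag_subsidence(baseline, current)
print(f"{len(alerts)} cells exceeded the drop threshold")   # 9 cells flagged

In practice such alerts would be filtered against sensor noise, registration error and seasonal effects before a crew was dispatched.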

Montreal-based Infra.AI is employing pattern recognition intelligence to distinguish a group of pedestrians from vehicles. The software could be used to identify problem locations and develop systems for improved pedestrian safety.

Infra.AI is currently piloting a program that helps identify ailing trees on city streets, a problem plaguing Montreal right now. When the startup's AI system is shown images of healthy trees, it can compare those with recent imagery to identify less-healthy trees with patches and browning leaves that need to be maintained or replaced.

"When you think of the kind of data [already] coming in from LIDAR and cameras, it's huge. The applications are now becoming possible with AI," says Jean-Franois Gagn, CEO of Element AI, a Montreal-based incubator dedicated to matching AI startups with larger companies and with government agencies.

Through its Canada First Research Excellence Fund, the Canadian government last year provided about US$200 million to three Montreal-based universities for research that Gagné believes will yield sophisticated AI spinoff companies in 2017.

In addition, both Google and Microsoft have recently made investments in Montreal-based AI.

On a more personal level, another InnoCité MTL startup, Key2Access, is getting ready to test an app to make it safer for disabled people to cross city streets, according to CEO Sophie Aladas. Key2Access's tech is already being piloted in Ottawa, and has been successfully tested there by Richard Marsolais, a man with a vision impairment who is a specialist in independent living for the Canadian National Institute for the Blind.

Marsolais and his guide dog, Ashland, along with Motaz Aladas, head engineer for Key2Access (and CEO Sophie's father), demonstrated for Computerworld at a Montreal intersection how a small handheld device or a smartphone could be used to activate a Bluetooth-enabled crosswalk signal, making it safe for a vision-impaired or disabled person to cross. (See Smart Cities: Montreal for video footage of that demonstration.)

Marsolais says it would be helpful to have a handheld activation device to change the signal, instead of relying only on his guide dog or an audible crossing signal, which isn't always easy to hear. In addition, it isn't always clear in which direction it's safe to cross; Key2Access aims to solve that problem by using audible commands or vibrations to direct the user onto the crosswalk in the proper direction.

For Key2Access to function, traffic engineers in Montreal will need to install a receiver at each intersection to receive the wireless signal from the handheld device, Aladas says. The cost will be comparable to enabling a traditional crosswalk button on a pole, Sophie Aladas says. The city is expected to install the gear on at least one intersection in the spring as part of the testing phase.

A number of initiatives are in the works to help reduce traffic in Montreal in the next two years, including a tripling of the number of intelligent traffic signals to reach 2,200 units.

Data from the 700 existing smart signals installed over the last two years and from 500 surveillance cameras and Bluetooth sensors already helps prioritize buses traveling the streets to lessen commute times by 15% to 20%, the city's Chitilian says, with more improvements expected. Montreal is also in partnership with Waze, Google's crowdsourcing traffic app, to help syphon off driver data for greater intelligence.

In addition to its efforts to lessen traffic congestion and improve the efficiency of public transportation, Montreal heavily promotes bicycle riding. It's not uncommon to see bicyclists pedaling through downtown streets even in the dead of winter.

Bixi, a bike-sharing system, got its start in Montreal in 2009; as of 2015, there were 3.5 million Bixi rides each year in the city, and the service has grown to 45,000 bikes in 15 cities. The Bixi mobile apps for iOS and Android, along with other Bixi add-ins developed by Montreal startups, allow everything from online payments to personal fitness tracking for the bikes.

Separately, Montreal startup SmartHalo is testing technology to turn any bike into a smart bike using a rider's smartphone and its GPS connection.

"We know for a fact that adding preferential lights and dedicated bus lanes increases the speed of going from point A to B and makes the service much more efficient. You can have the same amount of buses and workable hours with better service," Chitilian says.

Sensor data from traffic signals is already being sent to a recently created central command post -- a "decision center," Chitilian calls it -- where technicians pore over dozens of desktop monitors and large wall displays. "The center gives us the ability to have an overall view" of the city, helping if there is an accident or other public safety need, he says.

Montreal also has designated $76 million US to replace 100,000 streetlights in the next five years with more efficient LED lighting that will be equipped with sensor and communications technology to expand the city's ability to manage congestion, pedestrian crowds, accidents and more, according to Chitilian.

With its combination of AI-focused startup innovation, sensor-driven traffic-improvement initiatives and data-driven apps for citizen empowerment, Montreal seems well on its way to furthering its designation as an intelligent city.

"We are trying to build a smart city from the ground up, and are putting in the pillars to do it," Chitilian says. "As politicians, we have to show immediate results, but some of our decisions will have lasting impact beyond our political mandates," he muses.

"We have to make decisions that will look good down the road," Chitilian says. "What we have in Montreal is more than optimism. It is a generational transformation."

Montreal and the Quebec Province have committed to sharing publicly available data, which private entrepreneurs have put to innovative use via smartphone apps. Here are a few of locals' favorites:

More:

Montreal sees its future in smart sensors, artificial intelligence (with video) - Computerworld

Posted in Artificial Intelligence | Comments Off on Montreal sees its future in smart sensors, artificial intelligence (with video) – Computerworld

Silicon Valley Hedge Fund Takes On Wall Street With AI Trader – Bloomberg

Posted: at 3:21 pm

Babak Hodjat believes humans are too emotional for the stock market. So he's started one of the first hedge funds run completely by artificial intelligence.

"Humans have bias and sensitivities, conscious and unconscious," says Hodjat, a computer scientist who helped laythe groundwork for Apple's Siri. "It's well documented we humans make mistakes. For me, it's scarier to be relying on those human-based intuitions and justifications than relying on purely what the data and statistics are telling you."

Babak Hodjat

Photographer: David Paul Morris/Bloomberg

Hodjat, with 21 patents to his name, is co-founder and top scientist of Sentient Technologies Inc., a startup that has spent nearly a decade, largely in secret, training an AI system that can scour billions of pieces of data, spot trends, adapt as it learns and make money trading stocks. The team of technology-industry vets is betting that software responsible for teaching computers to drive cars, beat the world's best poker players and translate languages will give their hedge fund an edge on Wall Street pros.

The walls of Sentient's San Francisco office are dotted with posters for robots-come-alive movies such as "Terminator." Inside a small windowless trading room, the only light emanates from computer screens and a virtual fire on a big-screen TV. Two guys are quietly monitoring the machine's trades, just in case the system needs to be shut down.

"If all hell breaks loose," Hodjat says, "there is a red button."

Sentient won't disclose its performance or many details about the technology, and the jury is out on the wisdom of handing off trading to a machine. While traditional hedge funds including Bridgewater Associates, Point72 and Renaissance Technologies have poured money into advanced technology, many use artificial intelligence to generate ideas, not to control their entire trading operations.

All the same, Sentient, which currently trades only its own money, is being closely watched by the finance and AI communities. The venture capital firm owned by Hong Kong's richest man, Li Ka-shing, and India's biggest conglomerate, Tata Group, are among backers who have given the company $143 million. (Beyond trading, Sentient's AI system is being applied to a separate e-commerce product.)

Trading is "one of the top 10 places that AI can make a difference," says Nello Cristianini, a professor of artificial intelligence at the University of Bristol who has been advising Sentient. "A trading algorithm can look at the data, make a decision, act and repeatyou can have full autonomy."

Sentient's team includes veterans of Amazon, Apple, Google, Microsoft and other technology companies. They're part of a small group in Silicon Valley using expertise in data science and the field of artificial intelligence known as machine learning to try and disrupt financial markets.

"AI scientists typically have no interest in working for a hedge fund," says Richard Craib, who started the AI hedge fund Numerai. "But they may want to mess around with data sets." Numerai's system makes trades by aggregating trading algorithms submitted by anonymous contributors who participate in a weekly tournament where prizes are awarded in Bitcoin. It recently raised $6 million from investors including Howard Morgan, the co-founder of the quant investment management firm Renaissance Technologies. "It's entirely a data science problem," Craib says.
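Numerai does not publish the details of how it blends submissions, but the core idea of aggregating many contributors' models can be sketched in a few lines. The hypothetical Python example below simply weights each contributor's predicted returns by past tournament performance and averages them; all names and numbers are invented for illustration.

import numpy as np

def aggregate_predictions(predictions, past_scores):
    """Blend contributors' predictions into one signal.

    predictions: (n_contributors, n_stocks) array of predicted returns.
    past_scores: (n_contributors,) array of historical tournament scores.
    Weights are proportional to past performance (floored at zero).
    """
    weights = np.clip(past_scores, 0.0, None)
    weights = weights / weights.sum()
    return weights @ predictions          # weighted average per stock

# Hypothetical data: three anonymous contributors, four stocks.
preds = np.array([[ 0.02, -0.01,  0.00,  0.03],
                  [ 0.01, -0.02,  0.01,  0.02],
                  [-0.01,  0.00,  0.02,  0.01]])
scores = np.array([0.8, 0.5, 0.2])        # past tournament performance
print(aggregate_predictions(preds, scores))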

Another company, called Emma, started a hedge fund last year based on an artificial intelligence system that can write news articles.

Employees of Sentient Technologies in San Francisco.

Photographer: David Paul Morris/Bloomberg

Hodjat of Sentient spent much of his career focused on the language-detection technology behind smartphone digital assistants. Several employees from his previous company, Dejima, went on to create Apple's Siri. Rather than join, he chose to focus on advances in artificial intelligence. His career goals didn't include finance, but he sees markets as one of the most promising applications for the technology. The vast amounts of publicly available data, along with stronger computers to analyse it for patterns, make the field an ideal fit. "That is the fuel for AI," he says.

Sentient's system is inspired by evolution. According to patents, Sentient has thousands of machines running simultaneously around the world, algorithmically creating what are essentially trillions of virtual traders that it calls "genes." These genes are tested by giving them hypothetical sums of money to trade in simulated situations created from historical data. The genes that are unsuccessful die off, while those that make money are spliced together with others to create the next generation. Thanks to increases in computing power, Sentient can squeeze 1,800 simulated trading days into a few minutes.

An acceptable trading gene takes a few days to evolve and is then used in live trading. Employees set goals such as returns to achieve, risk level and time horizon, and then let the machines go to work. The AI system evolves autonomously as it gains more experience.
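Sentient's production system is proprietary and vastly larger, but the evolve-test-splice loop the patents describe is essentially a genetic algorithm. The toy Python sketch below, in which every rule representation and parameter is my own assumption, evolves simple moving-average crossover "genes" against a simulated price history: weak genes are culled, strong ones are recombined and mutated to form the next generation.

import numpy as np

rng = np.random.default_rng(42)
prices = np.cumsum(rng.standard_normal(500)) + 100.0   # toy historical prices
returns = np.diff(prices) / prices[:-1]

def fitness(gene):
    """Score a gene = (fast, slow) moving-average crossover rule on history."""
    fast, slow = sorted(int(g) for g in gene)
    fast, slow = max(fast, 2), max(slow, 3)
    sig = np.array([np.mean(prices[max(0, t - fast):t]) >
                    np.mean(prices[max(0, t - slow):t])
                    for t in range(1, len(prices))])     # long when fast MA > slow MA
    return float(np.sum(returns[sig]))                   # total simulated return

def evolve(pop_size=30, generations=20):
    pop = rng.integers(2, 60, size=(pop_size, 2)).astype(float)
    for _ in range(generations):
        scores = np.array([fitness(g) for g in pop])
        survivors = pop[np.argsort(scores)][pop_size // 2:]       # cull the weak half
        parents = survivors[rng.integers(0, len(survivors), size=(pop_size, 2))]
        children = parents.mean(axis=1)                           # recombine two parents (elementwise average)
        pop = children + rng.normal(0, 1.0, size=children.shape)  # mutate
    best = max(pop, key=fitness)
    return best, fitness(best)

print(evolve())

A real system would score genes on risk-adjusted, out-of-sample performance rather than raw simulated profit, but the loop structure is the same.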

Sentient typically owns a wide-ranging batch of U.S. stocks, trading hundreds of times per day and holding positions for days or weeks. "We didn't impose that on the system," says Jeff Holman, the company's chief investment officer. "The artificial intelligence seems to agree with what you get from human intelligence that it's better to spread your bets and have a more diversified portfolio."

As impressive as Sentient's technology appears, it's hard to know if it works. The company says the AI system is beating internal benchmarks, but won't disclose what those are. It shares little about the data used for the AI's decision-making, and isn't profitable. The company plans to bring in outside investors later this year. Holman, a Wall Street veteran who joined last year, said the company is limited in what it can say by U.S. Securities and Exchange Commission rules restricting marketing by hedge funds that are raising money. "The platform is solid," he says. "It doesn't look like any other strategy I've seen."

Anthony Ledford, the chief scientist at the $19 billion hedge fund Man AHL in London, warns of putting too much faith in this branch of artificial intelligence without more evidence. Man AHL uses machine learning for a portion of its clients' money, and Ledford is encouraged by the results. While the company is exploring a standalone machine-learning strategy, he says it's too early to declare success. "There's a lot of hype and promise," Ledford says. "But when you actually ask people how many hundreds of millions of dollars they are trading, many of them don't come back with much at all."

Little performance data is available about AI-focused hedge funds. One index that tracks 12 pools that utilize AI as part of their core strategies, called the Eurekahedge AI Hedge Fund Index, returned 5 percent last year. That's slightly better than the average hedge fund, but trailed the S&P 500.

Tristan Fletcher, who wrote his doctoral thesis on machine learning in financial markets and works for a hedge fund, says investors may be reluctant to turn over their money completely to a machine. "I know how conservative investors are and I know of no one who would put their money in a system that's fully systematic," says Fletcher. "Machine learning isn't a panacea for everything. You need people who have lateral thinking."


with Nishant Kumar in London

Excerpt from:

Silicon Valley Hedge Fund Takes On Wall Street With AI Trader - Bloomberg

Posted in Artificial Intelligence | Comments Off on Silicon Valley Hedge Fund Takes On Wall Street With AI Trader – Bloomberg

The Observer view on artificial intelligence – The Guardian

Posted: at 3:21 pm

An artificial intelligence called Libratus beats four of the world's best poker players in Pittsburgh last week. Photograph: Carnegie Mellon University

First it was checkers (draughts to you and me), then chess, then Jeopardy!, then Go and now poker. One after another, these games, all of which require significant amounts of intelligence and expertise if they are to be played well, have fallen to the technology we call artificial intelligence (AI). And as each of these milestones is passed, speculation about the prospect of superintelligence (the attainment by machines of human-level capabilities) reaches a new high before the media caravan moves on to its next obsession du jour. Never mind that most leaders in the field regard the prospect of being supplanted by super-machines as exceedingly distant (one has famously observed that he is more concerned about the dangers of overpopulation on Mars): the solipsism of human nature means that even the most distant or implausible threat to our uniqueness as a species bothers us.

The public obsession with the existential risks of artificial superintelligence is, however, useful to the tech industry because it distracts attention from the type of AI that is now part of its core business. This is weak AI and is a combination of big data and machine-learning algorithms that ingest huge volumes of data and extract patterns and actionable predictions from them. This technology is already ubiquitous in the search engines and apps we all use every day. And the trend is accelerating: the near-term strategy of every major technology company can currently be summarised as AI Everywhere.

The big data/machine-learning combination is powerful and enticing. It can and often does lead to the development of more useful products and services: search engines that can make intelligent guesses about what the user is trying to find, movies or products that might be of interest, sources of information that one might sample, connections that one might make and so on. It also enables corporations and organisations to improve efficiency, performance and services by learning from the huge troves of data that they routinely collect but until recently rarely analysed.
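As a deliberately tiny illustration of the kind of pattern extraction being described (not any particular company's system), the Python snippet below suggests unseen items to a user by weighting other users' ratings by cosine similarity; the ratings matrix and all numbers are invented.

import numpy as np

# Rows are users, columns are items; entries are past ratings (0 = unseen).
ratings = np.array([[5, 4, 0, 1, 0],
                    [4, 5, 1, 0, 0],
                    [1, 0, 5, 4, 5],
                    [0, 1, 4, 5, 4]], dtype=float)

def recommend(user_index, top_k=2):
    """Suggest unseen items by averaging ratings of similar users."""
    target = ratings[user_index]
    norms = np.linalg.norm(ratings, axis=1) * np.linalg.norm(target)
    similarity = (ratings @ target) / norms          # cosine similarity to every user
    similarity[user_index] = 0.0                     # ignore the user themselves
    scores = similarity @ ratings                    # similarity-weighted item scores
    scores[target > 0] = -np.inf                     # don't re-recommend seen items
    return np.argsort(scores)[::-1][:top_k]

print(recommend(0))   # items that users with similar taste liked but user 0 hasn't seen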

There's no question that this is a powerful and important new technology and it has triggered a Gadarene stampede of venture and corporate capital. We are moving into what one distinguished legal scholar calls the "black box society," a world in which human freedoms and options are increasingly influenced by opaque, inscrutable algorithms. Whose names appear on no-fly lists? Who gets a loan or a mortgage? Which prisoners get considered for parole? Which categories of fake news appear in your news feed? What price does Ryanair quote you for that particular flight? Why has your credit rating suddenly and inexplicably worsened?

In many cases, it may be that these decisions are rational and/or defensible. The trouble is that we have no way of knowing. And yet the black boxes that yield such outcomes are not inscrutable to everyone, just to those who are affected by them. They are perfectly intelligible to the corporations that created and operate them. This means that the move towards an algorithmically driven society also represents a radical power-shift, away from citizens and consumers and towards a smallish number of powerful, pathologically secretive technology companies, whose governing philosophy seems to be that they should know everything about us, but that we should know as little as possible about their operations.

What's even more remarkable is that these corporations are now among the world's largest and most valuable enterprises. Yet, on the whole, they don't receive the critical scrutiny their global importance warrants. On the contrary, they get an easier ride from the media than comparable companies in other industries. If the CEO of an oil company, a car manufacturer or a mining corporation were to declare, for example, that his motto was "Don't Be Evil," even the most somnolent journalist might raise a sceptical eyebrow. But when some designer-stubbled CEO in a hoodie proclaims his belief in the fundamental goodness of humanity, the media yawn tolerantly and omit to notice his company's marked talent for tax avoidance. This has to stop: transparency is a two-way process.

See more here:

The Observer view on artificial intelligence - The Guardian

Posted in Artificial Intelligence | Comments Off on The Observer view on artificial intelligence – The Guardian

Allow mathematicians to pierce artificial intelligence frontiers – Livemint

Posted: at 3:21 pm

New research indicates that Artificial Intelligence, or AI, as it is defined and practised today, has several limits. New buzzwords only serve to mystify the populace, and it is increasingly clear to me that many technologists and information technology (IT) managers are just groping about in the dark. They throw out terms such as neural networks, deep learning, big data, black box systems, and so on, hoping to mask the fact that they know very little of how this technology may evolve over the next several years.

As an observer, I can't help but think there is an important question in front of us: are the ramblings of these pundits in fact a case of the one-eyed man becoming king in the land of the blind, or, instead, more akin to the parable of the five blind men, who all encountered an elephant and, after inspecting various parts of the elephant by touch, came away with different definitions of what an elephant is like?

The vital premise in today's AI is that the computer program itself learns as it goes along, creating a database of information, and then uses that database to automatically generate additional computer programming code as it learns more, without the need for human programmers. These AI programs then become black boxes, since even their original human programmers have no way of knowing what code the machine has generated on its own.

These computer programs, however, need copious amounts of carefully categorized data to make themselves smarter. Anything that is sloppily characterized can easily cause the machine to draw the wrong conclusions. I have mentioned before in this column that it has been proven that just changing a few pixels on an image can make an AI image-recognition program conclude that a car is in fact an elephant, which is a mistake that an ordinarily intelligent human eye would never make.
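The fragility the column refers to is easy to reproduce on even a toy model. The Python sketch below builds a random two-class linear classifier (standing in for an image-recognition system; nothing here is a real trained model) and shows that nudging only the handful of pixels the model weighs most heavily is enough to flip its predicted label.

import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_classes = 64, 2                      # a toy 8x8 grayscale "car vs. elephant" task
W = rng.standard_normal((n_classes, n_pixels))   # stand-in for an already-trained linear model

def predict(image):
    return int(np.argmax(W @ image))

image = rng.random(n_pixels)
original = predict(image)
other = 1 - original

# Nudge only the handful of pixels the model weighs most heavily toward the other class.
influence = W[other] - W[original]
top_pixels = np.argsort(np.abs(influence))[-5:]          # just 5 of 64 pixels
gap = -(influence @ image)                               # how strongly the original class wins
step = gap / np.abs(influence[top_pixels]).sum() + 0.01
perturbed = image.copy()
perturbed[top_pixels] += step * np.sign(influence[top_pixels])

print(original, predict(perturbed))   # the two labels differ: a tiny edit changed the "recognition"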

Thus, many firms that are trying to chart out a path in AI are scrambling to go out and acquire vast stores of data that have already been neatly characterized. IBM, for instance, has bought firms that own billions of medico-radiological images, in the hope of feeding this vast acquired data to the medical diagnosis components of IBM's Watson product. The idea is that this data, collected over many years of digital medico-radiological imaging, will enable Watson to become cannier in diagnosing diseases. When quizzed about these acquisitions, a senior IBM executive said to me recently: "If you're not at the table, you can be sure you'll be on the menu."

In another example of the use of categorized data, a firm called Cambridge Analytica has created a sinister way to profile people, from psychometric tests that show up, ostensibly as harmless quizzes, on Facebook and other social networking sites, luring people into taking them and posting the individual results online. Cambridge Analytica claims it used these psychometric analyses to accurately predict the personality types and preferences of individual voters. The firm was apparently retained by both the Brexit leave and Donald Trump's presidential election campaigns to accurately target voters who were likely to vote for them, and to lure more of these supportive voters out to the polling booths.

Trained psychologists have a dim view of psychometric testing and other personality profiling tests. When I asked my sister, who holds a doctorate from Harvard in Psychology, about the efficacy of such methods, her response was that there are dozens of such psychometric rubrics out there that do have some utility, but are in fact quite flawed; many of them have been debunked for predictive utility.

The accuracy of diagnostics and psychometrics aside, the fact remains that without reams of carefully categorized data, AI as we know it today is dead on arrival. That means that in areas where data is not yet available (for instance, crash data for self-driving cars) we must look elsewhere to create models that mimic large data stores accurately when data is absent. Where does one go to find out under what circumstances self-driving automobiles like the Tesla that killed its occupant in 2016 might have other such accidents? Enough instances of this haven't occurred and, therefore, the data doesn't exist. Building predictive models here without data is not neural, it's neurotic, and dangerous!

This brings us to the fields of pure mathematics and theoretical physics, which are the way forward. In an informative blog post last year, Wale Akinfaderin, a Ph.D. candidate in physics at Florida State University, enumerated the types of mathematics that an aspiring AI specialist must be familiar with, if not master, to be effective. Here is a partial list from his post: Principal Component Analysis, Eigendecomposition, Combinatorics, Bernoulli, Gaussian, Hessian, Jacobian, Laplacian, and Lagrangian Distributions, Entropy, and Manifolds. I'll stop here; I'm sure you get the idea!
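Several items on that list are concrete enough to demonstrate in a few lines. As one hedged illustration, the Python snippet below performs principal component analysis via eigendecomposition of a covariance matrix, two of the topics Akinfaderin names, using only NumPy and synthetic data.

import numpy as np

rng = np.random.default_rng(7)
# 200 samples of correlated 3-dimensional data.
X = rng.multivariate_normal(mean=[0, 0, 0],
                            cov=[[3.0, 1.2, 0.5],
                                 [1.2, 2.0, 0.3],
                                 [0.5, 0.3, 1.0]],
                            size=200)

Xc = X - X.mean(axis=0)                       # center the data
C = (Xc.T @ Xc) / (len(Xc) - 1)               # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)          # eigendecomposition (symmetric matrix)

order = np.argsort(eigvals)[::-1]             # sort components by explained variance
components = eigvecs[:, order]
explained = eigvals[order] / eigvals.sum()

Z = Xc @ components[:, :2]                    # project onto the top two principal components
print("explained variance ratios:", np.round(explained, 3))
print("reduced data shape:", Z.shape)         # (200, 2)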

"Don't panic," says Neil Sheffield, an AI researcher at Amazon, in a blog. "By bringing our mathematical tools to bear on the new wave of deep learning methods, we can ensure they remain mostly harmless."

Time for us amateur pundits and pedestrian programmers to make way for the pure mathematicians and theoretical physicists to lead the charge. They have long used mathematical theory to contemplate the unsolvable where data doesnt exist. Visionaries like Stephen Hawking, Albert Einstein and Srinivasa Ramanujan have been feted for their ability to posit plausible models on hitherto unsolvable problems such as the theory of the universe.

One-eyed they may well be, but all hail the new kings of AI!

Siddharth Pai is a world-renowned technology consultant who has led over $20 billion in complex, first-of-a-kind outsourcing transactions.

First Published: Tue, Feb 07 2017. 12:58 AM IST

Originally posted here:

Allow mathematicians to pierce artificial intelligence frontiers - Livemint

Posted in Artificial Intelligence | Comments Off on Allow mathematicians to pierce artificial intelligence frontiers – Livemint

Artificial Intelligence Tops Humans in Poker Battle What’s the Big Deal? – PokerNews.com

Posted: at 3:21 pm


Deep Blue was one hell of a chess player.

It was February 1996 and the machine developed by IBM was locked in battle with Garry Kasparov. Chess was big news as the computer system project, originally begun in 1985 at Carnegie Mellon University, attempted to do something other chess-playing devices had been unable to do: beat a reigning world champion.

Even those with only a passing interest in chess like myself were intrigued by the matchup. Deep Blue's designer said the machine could evaluate 200 million positions per second, and at the time, it was the fastest computer to match up with a world chess champion. Reports on the day's progress were published in newspapers all across the globe.

Ultimately, the first match of six games was a victory for humanity, with Kasparov notching a 4-2 victory. However, in May the following year, and after some additional re-engineering, it was Deep Blue coming out on top.

The Deep Blue phenomenon has been in my head for the last couple weeks as four top poker players (Jason Les, Daniel McAulay, Jimmy Chou and Dong Kim) squared off against artificial intelligence software at the Rivers Casino in Pittsburgh.

This time the AI came out on top.

As Reuters noted, Libratus [Latin for balance], an AI built by Carnegie Mellon University, racked up over $1.7 million worth of chips against four of the top professional poker players in the world in a 20-day marathon poker tournament that ended on Tuesday.

Headlines have trumpeted Libratus' accomplishment around the world. Here are just a few examples:

Machine beats humans for the first time in poker (Reuters)
Computer manages to beat 4 of world's best poker players (FOX News)
A Computer Just Clobbered Four Pros At Poker (FiveThirtyEight)
A Mystery AI Just Crushed the Best Human Players at Poker (Wired magazine)
Artificial Intelligence Goes All-in on Texas Hold'em (Wall Street Journal)

Developers compared the victory to that of Deep Blue 20 years ago. The team certainly faced a challenge in engineering their AI to adjust to betting differences, imperfect information, unorthodox play, and that unique aspect of poker that sets it apart from most other games: bluffing.

Players were given a certain amount of play money and Libratus would go on to notch a computer's first victory in the no limit variety of Texas Hold'em (a previous computer had already mastered Limit Hold'em).

"Yes, poker is just a game," University of Michigan professor Michael Wellman, who specializes in game theory and closely follows AI poker, said to Wired magazine. "But the game theory exhibited by Libratus could help with everything from financial trading to political negotiations to auctions."

Some have hailed the entire spectacle as great for the game of poker and no doubt there is some nice PR benefit that comes with it. But from a simple poker-playing perspective and in regard to its relevance among poker fans, the whole thing seems a bit too much. As a massive fan of the game of poker, I find that this whole spectacle lacks the impact of Deep Blue's win.

To me, this matchup of man versus droid/computer/software/techno-gizmo lacks the one aspect of poker that makes it so unique: risk. It's the reason that playing poker online for free or playing with your grandmother for matchsticks (or Cheerios or whatever) is so lame; there is no risk of losing one's own money.

Chess carries only the risk of losing the individual match itself. The two combatants may have some kind of extrinsic monetary motivation, such as tournament payouts, appearance fees, etc., but there is no inherent expected loss of one's own personal earnings.

In poker, players must square off against each other with their (usually) hard-earned money and that risk of one's own cash is a huge part of poker's appeal. Financial risk is inherently about losing money, and if you're not playing with risk in the game, you're not really playing poker.

"If you're afraid to lose your money, you can't play to win," said Johnny Moss, a Texas poker legend and winner of the first two WSOP Main Events.

That attitude points to something inherently flawed in making so much hoopla about Libratus' accomplishment; a machine/software/robot has no real inherent sense of loss or risk.

And when it comes to the art of the bluff, it seems engineering a machine to make these kinds of moves misses the key component of the risk involved in doing this: the pulse-racing feel of having all your chips in on a pot when you know your hand is "squadoosh," as ESPN WSOP analyst Norman Chad likes to put it. A highly engineered AI topped four poker sharks with no real money on the line.

As a poker fan, I feel this whole event doesn't even seem like real poker, and it just left me asking: So what? Poker is a game that is extremely dependent on human emotion and temperament.

Artificial intelligence has no fears about losing the mortgage payment in a pot or being down to that last bit of the poker bankroll and having to look for a real job to build it back.

Another aspect of this matchup with Libratus that is really missing for me, and I think for many poker fans, is the self-reliant, mano-a-mano battle of minds that takes place at the poker table. Sure, I can concede a machine can get the better of humans in this type of setup, but poker's appeal for me is seeing players squaring off against each other and matching skills.

A battle against a computer lacks the panache of seeing real-life humans battling it out for their own cash. Libratus may have massive amounts of computing power, but it lacks the humanity that makes poker great and now watchable on television.

Many poker insiders and those with deep roots in the game may forget that, to casual fans, seeing thousands of dollars won and lost on a single game of cards is extremely bizarre yet extremely appealing. That appeal, along with the game's unique characters and history, is the reason poker has grown into the international game it is today.

Poker is great because the human aspect is so important to excelling; it is not simply a series of moves on a game board or your old Commodore 64. Players who master the game can read other players and keep their own emotions in check.

They must master the subtleties and games within the game to excel. They benefit themselves by timing their actions correctly based on other players' tendencies, outlooks and general gameplay. Players like Jason Mercier and Daniel Negreanu have mastered these nuances.

Don't read my hand wrong here; I am not a poker pessimist who thinks the game is moving in the wrong direction. Quite the contrary: I think the game is moving in the right direction in general after massive growth in the 2000s.

Actual growth of the game depends on continuing presentations of the game in its real context on the felt and focusing on the players.

Some of those include: continued growth of the WSOP and live ESPN broadcasts; the World Poker Tour's continued success and international growth; great broadcasts like Poker Central's Super High Roller Bowl (with great commentary catering to fans and hard-core players alike); progress (though slow) of state-by-state legalized online poker; the growth of the game by appealing to younger players via Twitch; and the success of middle-tier tours catering to average Joe poker players (which are still needed to grow the game) like the Heartland Poker Tour and Mid-States Poker Tour.

The AI win seems like a minute footnote in comparison. Libratus may have won the battle against mankind, but was there ever really a war? I'm not sure this is a battle that means a whole lot in the big picture of modern poker.

Libratus is not the next Deep Blue and these four players were not Garry Kasparov. It may have been an interesting technological endeavor, but I'm sure these players in the "Brains vs. Artificial Intelligence" challenge, as the event came to be known, would much rather bring home a WSOP gold bracelet or WPT title if they had to pick. That hardware (not software) would be tangible and real and it would certainly be a nice real-life check to cash.

Sean Chaffin is a freelance writer in Crandall, Texas, and writes frequently about gambling and poker. If you have any story ideas, please email him at seanchaffin@sbcglobal.net or follow him @PokerTraditions. His poker book is RAISING THE STAKES: True Tales of Gambling, Wagering & Poker Faces, and it is available on amazon.com.

The opinions expressed here are those of the authors and do not necessarily reflect the positions of PokerNews.


Go here to read the rest:

Artificial Intelligence Tops Humans in Poker Battle What's the Big Deal? - PokerNews.com

Posted in Artificial Intelligence | Comments Off on Artificial Intelligence Tops Humans in Poker Battle What’s the Big Deal? – PokerNews.com

Is AI a Threat to Christianity? – The Atlantic

Posted: at 3:21 pm

In his relatively short tenure, Pope Francis has been hard at work welcoming spiritual seekers into the Catholic Church. He's refused to judge LGBT people, sought to integrate divorced couples, and extended priests' ability to forgive abortion. But Francis's wide arms have arguably never stretched further than a mass in 2014 when he suggested the church would baptize Martians.

"If, for example, tomorrow an expedition of Martians came and one says, 'But I want to be baptized!' What would happen?" Pope Francis asked. "When the Lord shows us the way, who are we to say, 'No, Lord, it is not prudent!' No, let's do it this way."

While playful, this odd scenario got at a serious question about just how far the church's welcome can go. Should Christianity, the world's largest religion, embrace all intelligent life? Even aliens? Granted, the arrival of green space creatures seeking salvation isn't very likely. But the Pope's lesson opens the door to the acceptance of another science-fiction stalwart, too, one that's not so easily dismissed. Namely, hyper-intelligent machines.

While most theologians aren't paying it much attention, some technologists are convinced that artificial intelligence is on an inevitable path toward autonomy. How far away this may be depends on whom you ask, but the trajectory raises some fundamental questions for Christianity, as well as religion broadly conceived, though for this article I'm going to stick to the faith tradition I know best. In fact, AI may be the greatest threat to Christian theology since Charles Darwin's On the Origin of Species.

For decades, artificial intelligence has been advancing at breakneck speed. Today, computers can fly planes, interpret X-rays, and sift through forensic evidence; algorithms can paint masterpiece artworks and compose symphonies in the style of Bach. Google is developing artificial moral reasoning so that its driverless cars can make decisions about potential accidents.

"AI is already here, it's real, it's quickening," says Kevin Kelly, a co-founder of Wired magazine and the author of The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future. "I think the formula for the next 10,000 start-ups is to take something that already exists and add AI to it."

Despite AI's promise, certain thinkers are deeply concerned about a time when machines might become fully sentient, rational agents, beings with emotions, consciousness, and self-awareness. "The development of full artificial intelligence could spell the end of the human race," Stephen Hawking told the BBC in 2014. "Once humans develop artificial intelligence, it would take off on its own, and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded."

This explosion of artificial intelligence, often referred to as the singularity, is one of many futures technologists have envisioned for robots, not all so apocalyptic. But the possibility of any threat to humans, even if small, is real enough that some are advocating for precautionary measures. More than 8,000 people, including Hawking, Noam Chomsky, and Elon Musk, have signed an open letter warning against potential pitfalls of AI development. Ryan Calo, a University of Washington law professor, argues for the development of a Federal Robotics Commission to monitor and regulate developments so that we don't innovate irresponsibly.

While concerns mostly center on economics, government, and ethics, there's also a spiritual dimension to what we're making, Kelly argues. "If you create other things that think for themselves, a serious theological disruption will occur."

History lends credibility to this prediction, given that many major scientific advances have had religious impacts. When Galileo promoted heliocentrism in the 1600s, it famously challenged traditional Christian interpretations of certain Bible passages, which seemed to teach that the earth was the center of the universe. When Charles Darwin popularized the theory of natural selection in the 1800s, it challenged traditional Christian beliefs about the origins of life. The trend has continued with modern genetics and climatology.

The creation of non-human autonomous robots would disrupt religion, like everything else, on an entirely new scale. "If humans were to create free-willed beings," says Kelly, who was raised Catholic and identifies as a Christian, "absolutely every single aspect of traditional theology would be challenged and have to be reinterpreted in some capacity."

Take the soul, for instance. Christians have mostly understood the soul to be a uniquely human element, an internal and eternal component that animates our spiritual sides. The notion originates from the creation narrative in the biblical book of Genesis, where God created human beings in God's own image. In the story, God forms Adam, the first human, out of dust and breathes life into his nostrils to make him, literally, a living soul. Christians believe that all humans since that time similarly possess God's image and a soul.

But what exactly is a soul? St. Augustine, the early Christian philosopher, once observed that "I have therefore found nothing certain about the origin of the soul in the canonical scriptures." And Mike McHargue, a self-described Christian mystic and author of Finding God in the Waves: How I Lost my Faith and Found it Again Through Science, believes that the rise of AI would draw out the ambiguities in the ways that many Christians have defined terms like consciousness and soul.

"Those in religious contexts don't know precisely what a soul is," McHargue says. "We've understood it to be some non-physical essence of an individual that's not dependent upon or tied to their body. Would AI have a soul by that definition?"

If this seems like an absurd question, consider technologies such as in vitro fertilization and genetic cloning. Intelligent life is created by humans in each case, but presumably many Christians would agree that those beings have a soul. "If you have a soul and you create a physical copy of yourself, you assume your physical copy also has a soul," says McHargue. "But if we learn to digitally encode a human brain, then AI would be a digital version of ourselves. If you create a digital copy, does your digital copy also have a soul?"

If you're willing to follow this line of reasoning, theological challenges amass. If artificially intelligent machines have a soul, would they be able to establish a relationship with God? The Bible teaches that Jesus's death redeemed all things in creation, from ants to accountants, and made reconciliation with God possible. So did Jesus die for artificial intelligence, too? Can AI be saved?

"I don't see Christ's redemption limited to human beings," Christopher Benek, an associate pastor at Providence Presbyterian Church in Florida with degrees from Princeton Theological Seminary, told Gizmodo in 2015. "It's redemption of all of creation, even AI. If AI is autonomous, then we should encourage it to participate in Christ's redemptive purposes in the world."

And what about sin? Christians have traditionally taught that sin prevents divine relationship by somehow creating a barrier between fallible humans and a holy God. Say in the robot future, instead of eradicating humans, the machines decide, or have it hardwired somewhere deep inside them, that never committing evil acts is the ultimate good. Would artificially intelligent beings be better Christians than humans are? And how would this impact the Christian view of human depravity?

These questions so far concern religious belief, but there are also many matters related to religious practice. If Christians accept that all creation is intended to glorify God, how would AI do such a thing? Would AI attend church, sing hymns, care for the poor? Would it pray?

James McGrath, a professor of religion at Butler University and the author of Theology and Science Fiction, recently toyed with the prayer question using a strange classroom assignment. He told his religion students to ask Siri, the personal assistant in Apple devices, to pray for them and observe what happened. The students quickly learned that Siri was more comfortable with questions like "What is prayer?" than commands like "Pray for me." When directed to pray, Siri basically responded, "I'm not programmed to do that." But if a more advanced version of Siri were programmed to pray, would such an action be valuable? Does God receive prayers from any intelligent being, or just human intelligence?

There are no easy answers for Christians willing to entertain these questions. And, certainly, there's a case to be made that Christians shouldn't bother in the first place. The Christian Bible never anticipates non-human intelligence, much less addresses the questions and concerns it creates. It does, however, teach that God has established a special relationship with humans that is unique among all creatures. Russell Bjork, a professor at the evangelical Gordon College who is cautious about broadening the Christian understanding of personhood to include AI, argues in the journal Perspectives on Science and Christian Faith, "What makes humans special is not what humanity is, but rather it is God's relationship to us based on his purpose for making us."

In addition to the Bible, many Christians look to their ancient creeds for guidance. One of the most popular, the Nicene Creed, speaks of Jesus as the only son of God, "begotten, not made." The implicit corollary is that humans are God's children who are made, not begotten. Christians believe that God makes humans, but humans make machines. By this logic, one might conclude that AI could not be considered God's children or possess a soul.

But this hasn't stopped Kevin Kelly from beginning to advocate for the development of a catechism for robots. A catechism is a statement of faith usually framed in a question-and-answer format that outlines orthodox belief and is typically taught to children in some religious traditions. Kelly says he takes the idea very seriously and even suggested it in a keynote talk at the Q conference, an annual gathering of more than 1,000 prominent Christian leaders.

"There will be a point in the future when these free-willed beings that we've made will say to us, 'I believe in God. What do I do?' At that point, we should have a response," Kelly says.

Kelly, McHargue, and McGrath are all convinced that most traditional theologians today aren't engaged enough in conversations like this because they're stuck rehashing old questions instead of focusing on the coming ones. McHargue notes that questions about AI and theology are some of the most common that he receives from listeners of his popular Ask Science Mike and The Liturgists podcasts. "Any non-biological, non-human intelligence will present a greater challenge to religion and human philosophy than anything else in our entire history combined," he claims. "Nothing else will raise that level of upheaval and collective trauma as the moment we first encounter it."

Despite these pitfalls, McGrath raises one last mischievous point: AI actually could bolster a person's faith. "For some people, religion is precisely about recognizing that I, as a human being, am not God and so I don't have all the answers and will inevitably be wrong about things," he says. "If that is one's outlook, then finding out you were wrong is a good thing. It simply confirms what you already knew: that life is about trusting God and not trusting in my own understanding."

Go here to see the original:

Is AI a Threat to Christianity? - The Atlantic

Posted in Artificial Intelligence | Comments Off on Is AI a Threat to Christianity? – The Atlantic

9 Development in Artificial Intelligence | Funding a …

Posted: January 4, 2017 at 6:06 pm

ment" (Nilsson, 1984). Soon, SRI committed itself to the development of an AI-driven robot, Shakey, as a means to achieve its objective. Shakey's development necessitated extensive basic research in several domains, including planning, natural-language processing, and machine vision. SRI's achievements in these areas (e.g., the STRIPS planning system and work in machine vision) have endured, but changes in the funder's expectations for this research exposed SRI's AI program to substantial criticism in spite of these real achievements.

Under J.C.R. Licklider, Ivan Sutherland, and Robert Taylor, DARPA continued to invest in AI research at CMU, MIT, Stanford, and SRI and, to a lesser extent, other institutions. Licklider (1964) asserted that AI was central to DARPA's mission because it was a key to the development of advanced command-and-control systems. Artificial intelligence was a broad category for Licklider (and his immediate successors), who "supported work in problem solving, natural language processing, pattern recognition, heuristic programming, automatic theorem proving, graphics, and intelligent automata. Various problems relating to human-machine communication (tablets, graphic systems, hand-eye coordination) were all pursued with IPTO support" (Norberg and O'Neill, 1996).

These categories were sufficiently broad that researchers like McCarthy, Minsky, and Newell could view their institutions' research, during the first 10 to 15 years of DARPA's AI funding, as essentially unfettered by immediate applications. Moreover, as work in one problem domain spilled over into others easily and naturally, researchers could attack problems from multiple perspectives. Thus, AI was ideally suited to graduate education, and enrollments at each of the AI centers grew rapidly during the first decade of DARPA funding.

DARPA's early support launched a golden age of AI research and rapidly advanced the emergence of a formal discipline. Much of DARPA's funding for AI was contained in larger program initiatives. Licklider considered AI a part of his general charter of Computers, Command, and Control. Project MAC (see Box 4.2), a project on time-shared computing at MIT, allocated roughly one-third of its $2.3 million annual budget to AI research, with few specific objectives.

The history of speech recognition systems illustrates several themes common to AI research more generally: the long time periods between the initial research and development of successful products, and the interactions between AI researchers and the broader community of researchers in machine intelligence. Many capabilities of today's speech-recognition systems derive from the early work of statisticians, electrical engineers, …

Original post:

9 Development in Artificial Intelligence | Funding a ...

Posted in Artificial Intelligence | Comments Off on 9 Development in Artificial Intelligence | Funding a …

Artificial Intelligence Market Size and Forecast by 2024

Posted: at 6:06 pm

Artificial intelligence is a fast-emerging technology concerned with the development and study of intelligent machines and software. This software is used across various applications such as manufacturing (assembly-line robots), medical research, and speech recognition systems. It also enables in-built software or machines to operate like human beings, allowing devices to collect and analyze data, reason, talk, make decisions, and act. The global artificial intelligence market was valued at US$ 126.24 Bn in 2015 and is forecast to grow at a CAGR of 36.1% from 2016 to 2024, reaching US$ 3,061.35 Bn in 2024.

The global artificial intelligence market is currently witnessing healthy growth as companies have started leveraging the benefits of such disruptive technologies for effective customer reach and positioning of their services and solutions. Market growth is also supported by an expanding application base for artificial intelligence solutions across various industries. However, factors such as limited access to funding, high upfront investment, and the demand for a skilled workforce are presently acting as major deterrents to market growth.

On the basis of types of artificial intelligence systems, the market is segmented into artificial neural network, digital assistance system, embedded system, expert system, and automated robotic system. Expert systems formed the largest revenue-generating segment in 2015, mainly due to their extensive use across tasks including diagnosis, process control, design, monitoring, scheduling, and planning.

Based on the various applications of artificial intelligence systems, the market has been classified into deep learning, smart robots, image recognition, digital personal assistant, querying method, language processing, gesture control, video analysis, speech recognition, context-aware processing, and cyber security. Image recognition is projected to be the fastest-growing application segment in the global artificial intelligence market, owing to the growing demand for affective computing technology across various end-use sectors for the study of systems that can recognize, analyze, process, and simulate human affects.

North America led the global artificial intelligence market in 2015, holding approximately 38% of global market revenue, and is expected to remain dominant throughout the forecast period from 2016 to 2024. High government funding and a strong technological base have been among the major factors behind North America's leading position in the artificial intelligence market over the past few years. The Middle East and Africa region is expected to grow at the highest CAGR of 38.2% during the forecast period, mainly owing to enormous opportunities for artificial intelligence in the region in terms of new airport developments and various technological innovations, including robotic automation.

The key market players profiled in this report include QlikTech International AB, MicroStrategy Inc., IBM Corporation, Google, Inc., Brighterion Inc., Microsoft Corporation, IntelliResponse Systems Inc., Next IT Corporation, Nuance Communications, and eGain Corporation.

Chapter 1 Preface
  1.1 Research Scope
  1.2 Market Segmentation
  1.3 Research Methodology

Chapter 2 Executive Summary
  2.1 Market Snapshot: Global Artificial Intelligence Market, 2015 & 2024
  2.2 Global Artificial Intelligence Market Revenue, 2014–2024 (US$ Bn) and CAGR (%)

Chapter 3 Global Artificial Intelligence Market Analysis
  3.1 Key Trends Analysis
  3.2 Market Dynamics
    3.2.1 Drivers
    3.2.2 Restraints
    3.2.3 Opportunities
  3.3 Value Chain Analysis
  3.4 Global Artificial Intelligence Market Analysis, by Types
    3.4.1 Overview
    3.4.2 Artificial Neural Network
    3.4.3 Digital Assistance System
    3.4.4 Embedded System
    3.4.5 Expert System
    3.4.6 Automated Robotic System
  3.5 Global Artificial Intelligence Market Analysis, by Application
    3.5.1 Overview
    3.5.2 Deep Learning
    3.5.3 Smart Robots
    3.5.4 Image Recognition
    3.5.5 Digital Personal Assistant
    3.5.6 Querying Method
    3.5.7 Language Processing
    3.5.8 Gesture Control
    3.5.9 Video Analysis
    3.5.10 Speech Recognition
    3.5.11 Context Aware Processing
    3.5.12 Cyber Security
  3.6 Competitive Landscape
    3.6.1 Market Positioning of Key Players in Artificial Intelligence Market (2015)
    3.6.2 Competitive Strategies Adopted by Leading Players

Chapter 4 North America Artificial Intelligence Market Analysis
  4.1 Overview
  4.3 North America Artificial Intelligence Market Analysis, by Types
    4.3.1 North America Artificial Intelligence Market Share Analysis, by Types, 2015 & 2024 (%)
  4.4 North America Artificial Intelligence Market Analysis, by Application
    4.4.1 North America Artificial Intelligence Market Share Analysis, by Application, 2015 & 2024 (%)
  4.5 North America Artificial Intelligence Market Analysis, by Region
    4.5.1 North America Artificial Intelligence Market Share Analysis, by Region, 2015 & 2024 (%)

Chapter 5 Europe Artificial Intelligence Market Analysis
  5.1 Overview
  5.3 Europe Artificial Intelligence Market Analysis, by Types
    5.3.1 Europe Artificial Intelligence Market Share Analysis, by Types, 2015 & 2024 (%)
  5.4 Europe Artificial Intelligence Market Analysis, by Application
    5.4.1 Europe Artificial Intelligence Market Share Analysis, by Application, 2015 & 2024 (%)
  5.5 Europe Artificial Intelligence Market Analysis, by Region
    5.5.1 Europe Artificial Intelligence Market Share Analysis, by Region, 2015 & 2024 (%)

Chapter 6 Asia Pacific Artificial Intelligence Market Analysis
  6.1 Overview
  6.3 Asia Pacific Artificial Intelligence Market Analysis, by Types
    6.3.1 Asia Pacific Artificial Intelligence Market Share Analysis, by Types, 2015 & 2024 (%)
  6.4 Asia Pacific Artificial Intelligence Market Analysis, by Application
    6.4.1 Asia Pacific Artificial Intelligence Market Share Analysis, by Application, 2015 & 2024 (%)
  6.5 Asia Pacific Artificial Intelligence Market Analysis, by Region
    6.5.1 Asia Pacific Artificial Intelligence Market Share Analysis, by Region, 2015 & 2024 (%)

Chapter 7 Middle East and Africa (MEA) Artificial Intelligence Market Analysis
  7.1 Overview
  7.3 MEA Artificial Intelligence Market Analysis, by Types
    7.3.1 MEA Artificial Intelligence Market Share Analysis, by Types, 2015 & 2024 (%)
  7.4 MEA Artificial Intelligence Market Analysis, by Application
    7.4.1 MEA Artificial Intelligence Market Share Analysis, by Application, 2015 & 2024 (%)
  7.5 MEA Artificial Intelligence Market Analysis, by Region
    7.5.1 MEA Artificial Intelligence Market Share Analysis, by Region, 2015 & 2024 (%)

Chapter 8 Latin America Artificial Intelligence Market Analysis
  8.1 Overview
  8.3 Latin America Artificial Intelligence Market Analysis, by Types
    8.3.1 Latin America Artificial Intelligence Market Share Analysis, by Types, 2015 & 2024 (%)
  8.4 Latin America Artificial Intelligence Market Analysis, by Application
    8.4.1 Latin America Artificial Intelligence Market Share Analysis, by Application, 2015 & 2024 (%)
  8.5 Latin America Artificial Intelligence Market Analysis, by Region
    8.5.1 Latin America Artificial Intelligence Market Share Analysis, by Region, 2015 & 2024 (%)

Chapter 9 Company Profiles
  9.1 QlikTech International AB
  9.2 MicroStrategy, Inc.
  9.3 IBM Corporation
  9.4 Google, Inc.
  9.5 Brighterion, Inc.
  9.6 Microsoft Corporation
  9.7 IntelliResponse Systems Inc.
  9.8 Next IT Corporation
  9.9 Nuance Communications
  9.10 eGain Corporation

The Artificial Intelligence Market report provides an analysis of the global artificial intelligence market for the period 2014–2024, wherein 2016 to 2024 is the forecast period and 2015 is the base year. The report covers the major trends and technologies playing a major role in the artificial intelligence market's growth over the forecast period. It also highlights the drivers, restraints, and opportunities expected to influence the market's growth during this period. The study provides a holistic perspective on the market's growth in terms of revenue (in US$ Bn) across different geographies, including Asia Pacific (APAC), Latin America (LATAM), North America, Europe, and the Middle East & Africa (MEA).

The market overview section of the report showcases the market's dynamics and trends, such as the drivers, restraints, and opportunities that influence the current nature and future status of this market. Moreover, the report provides an overview of the various strategies and winning imperatives of the key players in the artificial intelligence market and analyzes their behavior in the prevailing market dynamics.

The report segments the global artificial intelligence market by type of artificial intelligence system into artificial neural network, digital assistance system, embedded system, expert system, and automated robotic system. By application, the market has been classified into deep learning, smart robots, image recognition, digital personal assistant, querying method, language processing, gesture control, video analysis, speech recognition, context-aware processing, and cyber security. The report thus provides in-depth cross-segment analysis of the artificial intelligence market at multiple levels, offering valuable insights at both the macro and micro level.

The report also provides the competitive landscape for the artificial intelligence market, positioning all the major players according to their geographic presence, market attractiveness, and recent key developments. The complete artificial intelligence market estimates are the result of our in-depth secondary research, primary interviews, and in-house expert panel reviews. These market estimates have been analyzed by taking into account the impact of different political, social, economic, technological, and legal factors, along with the current market dynamics affecting the artificial intelligence market's growth.

QlikTech International AB, MicroStrategy Inc., IBM Corporation, Google, Inc., Brighterion Inc., Microsoft Corporation, IntelliResponse Systems Inc., Next IT Corporation, Nuance Communications, and eGain Corporation are some of the major players profiled in this study. Details such as financials, business strategies, recent developments, and other strategic information pertaining to these players have been provided as part of the company profiling.

See the original post here:

Artificial Intelligence Market Size and Forecast by 2024

Posted in Artificial Intelligence | Comments Off on Artificial Intelligence Market Size and Forecast by 2024

Algorithm-Driven Design: How Artificial Intelligence Is …

Posted: at 6:06 pm


I've been following the idea of algorithm-driven design for several years now and have collected some practical examples. The tools of the approach can help us to construct a UI, prepare assets and content, and personalize the user experience. The information, though, has always been scarce and hasn't been systematic.

However, in 2016, the technological foundations of these tools became easily accessible, and the design community got interested in algorithms, neural networks and artificial intelligence (AI). Now is the time to rethink the modern role of the designer.

One of the most impressive promises of algorithm-driven design was made by the infamous CMS The Grid. It chooses templates and content-presentation styles, and it retouches and crops photos, all by itself. Moreover, the system runs A/B tests to choose the most suitable pattern. However, the product is still in private beta, so we can judge it only by its publications and ads.

The Designer News community found real-world examples of websites created with The Grid, and they had a mixed reaction: people criticized the design and code quality. Many skeptics opened a champagne bottle on that day.

The idea of fully replacing a designer with an algorithm sounds futuristic, but the whole premise is wrong. Product designers help to translate a raw product idea into a well-thought-out user interface, with solid interaction principles, a sound information architecture, and a visual style, while helping a company to achieve its business goals and strengthen its brand.

Designers make a lot of big and small decisions, many of which are hard to describe as clear processes. Moreover, incoming requirements are not 100% clear and consistent, so designers help product managers resolve these collisions, making for a better product. It's about much more than choosing a suitable template and filling it with content.

However, if we talk about creative collaboration, when designers work in tandem with algorithms to solve product tasks, we see a lot of good examples and clear potential. It's especially interesting how algorithms can improve our day-to-day work on websites and mobile apps.

Designers have learned to juggle many tools and skills to near perfection, and as a result, a new term emerged: product designer. Product designers are proactive members of a product team; they understand how user research works, they can do interaction design and information architecture, they can create a visual style, enliven it with motion design, and make simple changes in the code for it. These people are invaluable to any product team.

However, balancing so many skills is hard: you can't dedicate enough time to every aspect of product work. Of course, a recent boom in new design tools has shortened the time we need to create deliverables and has expanded our capabilities. However, it's still not enough. There is still too much routine, and new responsibilities eat up all of the time we've saved. We need to automate and simplify our work processes even more. I see three key directions for this:

I'll show you some examples and propose a new approach for this future work process.

Publishing tools such as Medium, Readymag and Squarespace have already simplified the author's work: countless high-quality templates will give the author a pretty design without having to pay for a designer. There is an opportunity to make these templates smarter, so that the barrier to entry gets even lower.

For example, while The Grid is still in beta, a hugely successful website builder, Wix, has started including algorithm-driven features. The company announced Advanced Design Intelligence, which looks similar to The Grid's semi-automated way of enabling non-professionals to create a website. Wix teaches the algorithm by feeding it many examples of high-quality modern websites. Moreover, it tries to make style suggestions relevant to the client's industry. It's not easy for non-professionals to choose a suitable template, and products like Wix and The Grid could serve as a design expert.

Surely, as in the case of The Grid, removing designers from the creative process leads to clichéd and mediocre results (even if it improves overall quality). However, if we consider this process more like paired design with a computer, then we can offload many routine tasks; for example, designers could create a moodboard on Dribbble or Pinterest, then an algorithm could quickly apply these styles to mockups and propose a suitable template. Designers would become art directors to their new apprentices, computers.

Of course, we can't create a revolutionary product in this way, but we could free some time to create one. Moreover, many everyday tasks are utilitarian and don't require a revolution. If a company is mature enough and has a design system, then algorithms could make it more powerful.

For example, the designer and developer could define the logic that considers content, context and user data; then, a platform would compile a design using principles and patterns. This would allow us to fine-tune the tiniest details for specific usage scenarios, without drawing and coding dozens of screen states by hand. Florian Schulz shows how you can use the idea of interpolation to create many states of components.
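A minimal sketch of that interpolation idea (not Schulz's actual implementation; the component properties and state values below are invented for illustration): describe two hand-designed endpoint states and let the code derive every in-between state, rather than drawing each one.

```python
# Toy state interpolation: compute intermediate component states between
# two hand-designed endpoints instead of drawing every state manually.
# The property names and values are hypothetical.

def lerp(a: float, b: float, t: float) -> float:
    """Linear interpolation between a and b at position t in [0, 1]."""
    return a + (b - a) * t

def interpolate_state(start: dict, end: dict, t: float) -> dict:
    """Blend every shared numeric property of two component states."""
    return {key: lerp(start[key], end[key], t) for key in start}

collapsed = {"height": 48, "opacity": 0.6, "icon_rotation": 0}
expanded  = {"height": 320, "opacity": 1.0, "icon_rotation": 180}

# Generate five in-between states, e.g. for a disclosure panel.
for step in range(6):
    t = step / 5
    print(round(t, 1), interpolate_state(collapsed, expanded, t))
```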

My interest in algorithm-driven design sprang up around 2012, when my design team at Mail.Ru Group required an automated magazine layout. Existing content had a poor semantic structure, and updating it by hand was too expensive. How could we get modern designs, especially when the editors weren't designers?

Well, a special script would parse an article. Then, depending on the article's content (the number of paragraphs and words in each, the number of photos and their formats, the presence of inserts with quotes and tables, etc.), the script would choose the most suitable pattern to present this part of the article. The script also tried to mix patterns, so that the final design had variety. It would save the editors' time in reworking old content, and the designer would just have to add new presentation modules. Flipboard launched a very similar model a few years ago.

Vox Media made a home page generator using similar ideas. The algorithm finds every possible layout that is valid, combining different examples from a pattern library. Next, each layout is examined and scored based on certain traits. Finally, the generator selects the best layout: basically, the one with the highest score. It's more efficient than picking the best links by hand, as proven by recommendation engines such as Relap.io.
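The generate-and-score approach is easy to sketch. The following is a toy illustration only: the pattern library, page budget, traits, and weights are all invented, and Vox Media's real generator is far more elaborate.

```python
import itertools

# Hypothetical pattern library: each module type has a rough height and a
# visual-weight trait used in scoring.
PATTERNS = {
    "hero":      {"height": 4, "weight": 3.0},
    "two_up":    {"height": 2, "weight": 1.5},
    "river":     {"height": 3, "weight": 1.0},
    "link_list": {"height": 1, "weight": 0.5},
}

PAGE_HEIGHT = 8  # arbitrary layout budget in "rows"

def candidate_layouts():
    """Enumerate every ordered combination of modules that fills the page."""
    names = list(PATTERNS)
    for length in range(1, 5):
        for combo in itertools.product(names, repeat=length):
            if sum(PATTERNS[m]["height"] for m in combo) == PAGE_HEIGHT:
                yield combo

def score(layout):
    """Score a layout: reward variety and visual weight, penalize repeats."""
    variety = len(set(layout))
    weight = sum(PATTERNS[m]["weight"] for m in layout)
    repeats = len(layout) - variety
    return variety * 2 + weight - repeats * 1.5

best = max(candidate_layouts(), key=score)
print("best layout:", best, "score:", round(score(best), 2))
```

The interesting design decision lives in the score function: whoever writes it is effectively the art director, encoding taste as weights.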

Creating cookie-cutter graphic assets in many variations is one of the most boring parts of a designer's work. It takes a lot of time and is demotivating, when designers could be spending that time on more valuable product work.

Algorithms could take on simple tasks such as color matching. For example, Yandex.Launcher uses an algorithm to automatically set up colors for app cards, based on app icons. Other variables could be automatically set, such as changing text color according to the background color, highlighting eyes in a photo to emphasize emotion, and implementing parametric typography.
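A rough sketch of how such color matching could work (this is not Yandex.Launcher's algorithm; the pixel data and thresholds are invented): pick the dominant color of an icon by coarse quantization, then choose a readable text color from the background's luminance.

```python
from collections import Counter

def dominant_color(pixels):
    """Pick the most frequent coarse color bucket from a list of (r, g, b)."""
    buckets = Counter((r // 32, g // 32, b // 32) for r, g, b in pixels)
    (rb, gb, bb), _ = buckets.most_common(1)[0]
    # Return the bucket centre as the representative color.
    return (rb * 32 + 16, gb * 32 + 16, bb * 32 + 16)

def text_color_for(background):
    """Choose black or white text based on the background's luminance."""
    r, g, b = background
    luminance = 0.2126 * r + 0.7152 * g + 0.0722 * b  # ITU-R BT.709 weights
    return (0, 0, 0) if luminance > 140 else (255, 255, 255)

# Hypothetical icon pixels: mostly a deep blue with a few white highlights.
icon_pixels = [(20, 40, 180)] * 900 + [(255, 255, 255)] * 100
card_bg = dominant_color(icon_pixels)
print("card background:", card_bg, "text color:", text_color_for(card_bg))
```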

Algorithms can create an entire composition. Yandex.Market uses a promotional image generator for e-commerce product lists (in Russian). A marketer fills a simple form with a title and an image, and then the generator proposes an endless number of variations, all of which conform to design guidelines. Netflix went even further: its script crops movie characters for posters, then applies a stylized and localized movie title, then runs automatic experiments on a subset of users. Real magic! Engadget has nurtured a robot apprentice to write simple news articles about new gadgets. Whew!

Truly dark magic happens in neural networks. A fresh example, the Prisma app, stylizes photos to look like works of famous artists. Artisto can process video in a similar way (even streaming video).

However, all of this is still at an early stage. Sure, you could download an app on your phone and get a result in a couple of seconds, rather than struggle with some library on GitHub (as we had to last year); but it's still impossible to upload your own reference style and get a good result without teaching a neural network. However, when that happens at last, will it make illustrators obsolete? I doubt it will for those artists with a solid and unique style. But it will lower the barrier to entry when you need decent illustrations for an article or website but don't need a unique approach. No more boring stock photos!

For a really unique style, it might help to have a quick stylized sketch based on a question like, "What if we did an illustration of a building in our unified style?" For example, the Pixar artists of the animated movie Ratatouille tried to apply several different styles to the movie's scenes and characters; what if a neural network made these sketches? We could also create storyboards and describe scenarios with comics (photos can be easily converted to sketches). The list can get very long.

Finally, there is live identity, too. Animation has become hugely popular in branding recently, but some companies are going even further. For example, Wolff Olins presented a live identity for the Brazilian telecom Oi, which reacts to sound. You just can't create crazy stuff like this without some creative collaboration with algorithms.

One way to get a clear and well-developed strategy is to personalize a product for a narrow audience segment or even specific users. We see it every day in Facebook newsfeeds, Google search results, Netflix and Spotify recommendations, and many other products. Besides relieving users of the burden of filtering information, this makes the user's connection to the brand more emotional, because the product seems to care so much about them.

However, the key question here is about the designer's role in these solutions. We rarely have the skill to create algorithms like these; engineers and big-data analysts are the ones to do it. Giles Colborne of CX Partners sees a great example in Spotify's Discover Weekly feature: the only element of classic UX design here is the track list, whereas the distinctive work is done by a recommendation system that fills this design template with valuable music.

Colborne offers advice to designers about how to continue being useful in this new era and how to use various data sources to build and teach algorithms. It's important to learn how to work with big data and to cluster it into actionable insights. For example, Airbnb learned how to answer the question, "What will the booked price of a listing be on any given day in the future?" so that its hosts could set competitive prices. There are also endless stories about Netflix's recommendation engine.

A relatively new term, "anticipatory design," takes a broader view of UX personalization and anticipation of user wishes. We already have these types of things on our phones: Google Now automatically proposes a way home from work using location history data; Siri proposes similar ideas. However, the key factor here is trust. To execute anticipatory experiences, people have to give large companies permission to gather personal usage data in the background.

I already mentioned some examples of automatic testing of design variations used by Netflix, Vox Media and The Grid. This is one more way to personalize UX that could be put onto the shoulders of algorithms. Liam Spradlin describes the interesting concept of "mutative design"; it's a well-thought-out model of adaptive interfaces that considers many variables to fit particular users.

I've covered several examples of algorithm-driven design in practice. What tools do modern designers need for this? If we look back to the middle of the last century, computers were envisioned as a way to extend human capabilities. Roelof Pieters and Samim Winiger have analyzed computing history and the idea of augmentation of human ability in detail. They see three levels of maturity for design tools:

Algorithm-driven design should be something like an exoskeleton for product designers, increasing the number and depth of decisions we can get through. How might designers and computers collaborate?

The working process of digital product designers could potentially look like this:

These tasks are of two types: the analysis of implicitly expressed information and already working solutions, and the synthesis of requirements and solutions for them. Which tools and working methods do we need for each of them?

Analysis of implicitly expressed information about users, the kind that can be studied only with qualitative research, is hard to automate. However, exploring the usage patterns of users of existing products is a suitable task. We could extract behavioral patterns and audience segments, and then optimize the UX for them. It's already happening in ad targeting, where algorithms can cluster a user using implicit and explicit behavior patterns (within either a particular product or an ad network).
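For instance, a handful of behavioral features per user can be grouped into segments with something as simple as k-means. The sketch below is a toy version with invented usage features; real pipelines would use far richer data and a proper library.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(points):
    """Component-wise mean of a non-empty list of vectors."""
    return tuple(sum(xs) / len(points) for xs in zip(*points))

def kmeans(points, k, iterations=20, seed=0):
    """A tiny k-means: group feature vectors into k behavioral clusters."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = []
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist2(p, centers[i]))
            clusters[nearest].append(p)
        centers = [mean(c) if c else centers[i] for i, c in enumerate(clusters)]
    return centers, clusters

# Hypothetical usage features per user: (sessions per week, avg. minutes per session).
users = [(1, 3), (2, 4), (1, 2), (14, 25), (15, 30), (13, 28), (6, 10), (7, 12)]
centers, clusters = kmeans(users, k=3)
for center, members in zip(centers, clusters):
    print("segment around", tuple(round(x, 1) for x in center), "->", members)
```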

To train algorithms to optimize interfaces and content for these user clusters, designers should look into machine learning. Jon Bruner gives a good example: "A genetic algorithm starts with a fundamental description of the desired outcome (say, an airline's timetable that is optimized for fuel savings and passenger convenience). It adds in the various constraints: the number of planes the airline owns, the airports it operates in, and the number of seats on each plane. It loads what you might think of as independent variables: details on thousands of flights from an existing timetable, or perhaps randomly generated dummy information. Over thousands, millions or billions of iterations, the timetable gradually improves to become more efficient and more convenient. The algorithm also gains an understanding of how each element of the timetable (the take-off time of Flight 37 from O'Hare, for instance) affects the dependent variables of fuel efficiency and passenger convenience."

In this scenario, humans curate an algorithm and can add or remove limitations and variables. The results can be tested and refined with experiments on real users. With a constant feedback loop, the algorithm improves the UX, too. Although the complexity of this work suggests that analysts will be doing it, designers should be aware of the basic principles of machine learning. O'Reilly published a great mini-book on the topic recently.
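To make Bruner's description concrete, here is a toy genetic algorithm in the same spirit: it evolves departure hours for a single route toward an invented convenience-and-spread objective. Every parameter (preferred hours, population size, mutation scheme) is made up for illustration and has nothing to do with a real airline timetable.

```python
import random

rng = random.Random(42)
PREFERRED = [8, 12, 17, 21]   # hypothetical passenger-preferred departure hours
FLIGHTS = 6                   # flights to schedule on one route

def fitness(schedule):
    """Higher is better: hours close to preferred times, spread across the day."""
    convenience = -sum(min(abs(h - p) for p in PREFERRED) for h in schedule)
    spread = len(set(schedule))  # crude proxy for avoiding bunching
    return convenience + 2 * spread

def mutate(schedule):
    """Randomly reassign one departure hour."""
    s = schedule[:]
    s[rng.randrange(FLIGHTS)] = rng.randrange(24)
    return s

def crossover(a, b):
    """Single-point crossover of two parent schedules."""
    cut = rng.randrange(1, FLIGHTS)
    return a[:cut] + b[cut:]

population = [[rng.randrange(24) for _ in range(FLIGHTS)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # keep the fittest
    children = [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                for _ in range(40)]
    population = parents + children

print("best schedule:", sorted(population[0]), "fitness:", fitness(population[0]))
```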

Two years ago, a tool for industrial designers named Autodesk Dreamcatcher made a lot of noise and prompted several publications from UX gurus. It's based on the idea of generative design, which has been used in performance, industrial design, fashion and architecture for many years now. Many of you know Zaha Hadid Architects; its office calls this approach "parametric design."

Logojoy is a product to replace freelancers for a simple logo design. You choose favorite styles, pick a color and, voilà, Logojoy generates endless ideas. You can refine a particular logo, see an example of a corporate style based on it, and order a branding package with business cards, envelopes, etc. It's the perfect example of an algorithm-driven design tool in the real world! Dawson Whitfield, the founder, described the machine learning principles behind it.

However, it's not yet established in digital product design, because it doesn't help to solve utilitarian tasks. Of course, the work of architects and industrial designers has enough limitations and specificities of its own, but user interfaces aren't static: their usage patterns, content and features change over time, often many times. However, if we consider the overall generative process (a designer defines rules, which are used by an algorithm to create the final object), there's a lot of inspiration. The working process of digital product designers could potentially look like this:

It's not yet clear how we can filter a huge number of concepts in digital product design, in which usage scenarios are so varied. If algorithms could also help to filter generated objects, our job would be even more productive and creative. However, as product designers, we use generative design every day in brainstorming sessions where we propose dozens of ideas, or when we iterate on screen mockups and prototypes. Why can't we offload a part of these activities to algorithms?

The experimental tool Rene by Jon Gold, who worked at The Grid, is an example of this approach in action. Gold taught a computer to make meaningful typographic decisions. Gold thinks that it's not far from how human designers are taught, so he broke this learning process into several steps:

His idea is similar to what Roelof and Samim say: Tools should be creative partners for designers, not just dumb executants.

Gold's experimental tool Rene is built on these principles. He also talks about imperative and declarative approaches to programming and says that modern design tools should choose the latter, focusing on what we want to calculate, not how. Jon uses vivid formulas to show how this applies to design and has already made a couple of low-level demos. You can try out the tool for yourself. It's a very early concept but enough to give you the idea.
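Rene itself isn't reproduced here, but the declarative flavor Gold describes can be sketched: state what the typography should satisfy (a base size, a modular-scale ratio, a baseline grid) and derive every value from those constraints, instead of hand-picking each one. The numbers below are conventional defaults, not Rene's rules.

```python
# Declarative type scale: describe *what* the scale should satisfy and
# derive every size from it, instead of hand-setting each value.
BASE_SIZE = 16          # body text size in px (a common default, not Rene's)
RATIO = 1.25            # "major third" modular scale
LEVELS = ["caption", "body", "h3", "h2", "h1"]

def type_scale(base=BASE_SIZE, ratio=RATIO):
    """Map each level to a font size and a line height snapped to a 4px grid."""
    scale = {}
    for step, level in enumerate(LEVELS, start=-1):  # caption sits one step below body
        size = round(base * ratio ** step, 1)
        line_height = round(size * 1.4 / 4) * 4      # snap leading to the grid
        scale[level] = {"font_size": size, "line_height": line_height}
    return scale

for level, spec in type_scale().items():
    print(f"{level:8s} {spec['font_size']:6.1f}px / {spec['line_height']}px")
```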

While Jon jokingly calls this approach "brute-force design" and "multiplicative design," he emphasizes the importance of a professional being in control. Notably, he left The Grid team earlier this year.

Unfortunately, there are no tools for web and mobile product design that help with analysis and synthesis at the level of Autodesk Dreamcatcher, although The Grid and Wix could be considered more or less mass-market, straightforward solutions. Adobe is constantly adding features that could be considered intelligent: the latest release of Photoshop has a content-aware feature that intelligently fills in the gaps when you use the cropping tool to rotate an image or expand the canvas beyond the image's original size.

There is another experiment by Adobe and the University of Toronto. DesignScape automatically refines a design layout for you. It can also propose an entirely new composition.

You should definitely follow Adobe in its developments, because the company announced a smart platform named Sensei at the MAX 2016 conference. Sensei uses Adobe's deep expertise in AI and machine learning, and it will be the foundation for future algorithm-driven design features in Adobe's consumer and enterprise products. In its announcement, the company refers to things such as semantic image segmentation (showing each region in an image, labeled by type; for example, building or sky), font recognition (i.e. recognizing a font from a creative asset and recommending similar fonts, even from handwriting), and intelligent audience segmentation.

However, as John McCarthy, the late computer scientist who coined the term "artificial intelligence," famously said, "As soon as it works, no one calls it AI anymore." What was once cutting-edge AI is now considered standard behavior for computers. Here are a couple of experimental ideas and tools that could become a part of the digital product designer's day-to-day toolkit:

But these are rare and patchy glimpses of the future. Right now, it's more about individual companies building custom solutions for their own tasks. One of the best approaches is to integrate these algorithms into a company's design system. The goals are similar: to automate a significant number of tasks in support of the product line; to achieve and sustain a unified design; to simplify launches; and to support current products more easily.

Modern design systems started as front-end style guidelines, but that's just a first step (integrating design into code used by developers). The developers are still creating pages by hand. The next step is half-automatic page creation and testing using predefined rules.

Platform Thinking by Yury Vetrov (source)

Should your company follow this approach?

If we look in the near term, the value of this approach is more or less clear:

Altogether, this frees the designer from the routines of both development support and the creative process, but core decisions are still made by the designer. A neat side effect is that we will understand our own work better, because we will be analyzing it in an attempt to automate parts of it. That will make us more productive and better able to explain the essence of our work to non-designers. As a result, the overall design culture within a company will grow.

However, these benefits are not so easy to achieve, and they come with limitations:

There are also ethical questions: Is design produced by an algorithm valuable and distinct? Who is the author of the design? Wouldn't generative results be limited by a local maximum? Oliver Roeder says that "computer art" isn't any more provocative than "paint art" or "piano art." The algorithmic software is written by humans, after all, using theories thought up by humans, using a computer built by humans, using specifications written by humans, using materials gathered by humans, in a company staffed by humans, using tools built by humans, and so on. Computer art is human art: a subset, rather than a distinction. The revolution is already happening, so why don't we lead it?

This is a story of a beautiful future, but we should remember the limits of algorithms: they're built on rules defined by humans, even if the rules are now being supercharged with machine learning. The power of the designer is that they can make and break rules; so, a year from now, we might define "beautiful" as something totally different. Our industry has both high- and low-skilled designers, and it will be easy for algorithms to replace the latter. However, those who can follow and break rules when necessary will find magical new tools and possibilities.

Moreover, digital products are getting more and more complex: we need to support more platforms, tweak usage scenarios for more user segments, and hypothesize more. As Frog's Harry West says, human-centered design has expanded from the design of objects (industrial design) to the design of experiences (encompassing interaction design, visual design and the design of spaces). The next step will be the design of system behavior: the design of the algorithms that determine the behavior of automated or intelligent systems. Rather than hire more and more designers, offload routine tasks to a computer. Let it play with the fonts.



Yury leads a team comprising UX and visual designers at one of the largest Russian Internet companies, Mail.Ru Group. His team works on communications, content-centric, and mobile products, as well as cross-portal user experiences. Both Yury and his team are doing a lot to grow their professional community in Russia.

See more here:

Algorithm-Driven Design: How Artificial Intelligence Is ...

Posted in Artificial Intelligence | Comments Off on Algorithm-Driven Design: How Artificial Intelligence Is …
