
Category Archives: Artificial Intelligence

Shogi: A measure of artificial intelligence – The Japan Times

Posted: July 8, 2017 at 9:11 pm

Though last Sunday's Tokyo assembly elections garnered the most media attention, another contest came in a close second, even if only two people were involved. Fourteen-year-old Sota Fujii's record-setting winning streak of 29 shogi games was finally broken on July 2 when he lost a match to 22-year-old Yuki Sasaki.

Fujii has turned into a media superstar in the past year because of his youth and exceptional ability in a game that non-enthusiasts may find too cerebral to appreciate. Fujii's ascension to headline status has been purposely accelerated by the media, which treats him not just as a prodigy, but as the vanguard figure of a pastime in which the media has a stake.

Press photos of Fujii's matches show enormous assemblies of reporters, video crews and photographers hovering over the kneeling opponents. Such attention may seem ridiculous to some people, given the solemnity surrounding shogi, which is played much like chess, but if Fujii succeeds in attracting new fans, then the media is all for it.

That's because all the national dailies and some broadcasters cover shogi regularly and in detail. In fact, most major shogi tournaments are sponsored by media outlets. The Ryuo Sen championship, toward which Fujii was aiming when he lost last week, is the biggest in terms of prize money and is sponsored by the Yomiuri Shimbun. NHK also has a tournament and airs a popular shogi instructional program several times a week.

The Fujii fuss, however, is about more than his prodigious skills. Fujii ushers an old game with a stuffy image into the present by accommodating the 21st century's most fickle god: artificial intelligence. Much has been made in the past few weeks of Fujii's style of play, which is described as counterintuitive and abnormally aggressive. What almost all the critics agree on is that he honed this style through self-training with dedicated shogi software incorporating AI.

But before Fujii's revolutionary strategic merits could be celebrated, AI needed to be accepted, and a scandal last July put the technology into focus. One of the top players in the game, Hiroyuki Miura, was accused by his opponent of cheating after he won a match. Miura repeatedly left the room during play and was suspected of consulting his phone when he did so. The Japan Shogi Association (JSA) suspended him while it investigated the charges.

As outlined by Toru Takeda in the Nov. 22 online version of the Asahi Shimbun, the JSA checked the moves Miura had made in previous games against moves made by popular shogi software to see if there was a pattern. In four of his victories there was a 90 percent rate of coincidence. Miura's smartphone was also checked by a third party, which found no shogi app. Moreover, there was no communications activity recorded for the phone on the day of the contested match, because it had been shut off the whole time.
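For illustration, a coincidence check of this kind reduces to a simple move-matching rate. The JSA's actual methodology is not public, so the sketch below is a rough stand-in, and the game record in it is invented:

```python
def coincidence_rate(player_moves, engine_best_moves):
    """Fraction of positions where the player's move matched the
    engine's first choice. A rough stand-in for the check described
    above; the JSA's real procedure is not public."""
    matches = sum(p == e for p, e in zip(player_moves, engine_best_moves))
    return matches / len(player_moves)

# Hypothetical ten-move game record in simple coordinate notation:
player = ["P-7f", "P-2f", "G-7h", "S-4h", "B-8h",
          "R-2h", "K-6i", "P-5f", "S-3h", "N-7g"]
engine = ["P-7f", "P-2f", "G-7h", "S-4h", "B-8h",
          "R-2h", "K-6i", "P-5f", "S-3h", "P-1f"]

print(coincidence_rate(player, engine))  # 9 of 10 moves match -> 0.9
```

A 90 percent rate on a sample this small would mean little; the suspicion arose because the rate held across entire games.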

Miura was officially exonerated on May 24, at the height of the media's Sota fever, but that doesn't mean Miura was not using shogi software to change his game strategy. In November last year, Takeda theorized that, given the prevalence of the software and the amount of progress programmers had made in improving its AI functions, it's impossible to believe that there is a professional shogi player who has not yet taken advantage of the technology. Miura, he surmised, had become what chess grandmaster Garry Kasparov once called a "centaur": half man, half computerized beast. By studying the way shogi programs played, Miura had likely appropriated the AI functions' own learning curve. He didn't have to check the software to determine moves; it was already in his nervous system. Miura is, in fact, one of the pros who battled computerized shogi programs in past years. In 2013, he played against shogi software developed by the University of Tokyo and lost.

The evolution of shogi software was covered in a recent NHK documentary about AI. Amahiko Sato, one of the game's highest-ranked players, has played the shogi robot Ponanza several times without a victory. The robot's programmer told NHK that he input 20 years of moves by various professionals into the program, and it has since been playing itself. Since computers decide moves exponentially faster than humans do, the software has played itself about 7 million times, learning more with each game.

"It's like using a shovel to compete with a bulldozer," Yoshiharu Habu, Japan's top shogi player, commented to NHK after describing Ponanza's moves as "unbelievable."

Fujii is simply the human manifestation of this evolution, and what's disconcerting for the shogi establishment is that he didn't reach that position because of a mentor. As with most skills in Japan, shogi hopefuls usually learn by sitting at the feet of masters and copying their technique in rote fashion until they've developed it into something successful and idiosyncratic. Fujii leapfrogged the mentor phase thanks to shogi software.

An article in the June 27 Asahi Shimbun identified Shota Chida as the player who turned Fujii on to AI a year ago, just before Fujii turned pro. On the NHK program, Habu noticed something significant as a result: Fujii's moves became faster and more decisive. He achieved victory with fewer moves by abandoning the conventional strategy of building a defense before going on the offensive. Fujii constantly looks for openings in his opponent's game and immediately strikes when he sees one, which is the main characteristic of AI shogi.

Fujii's defeat obviously means that his type of play is no longer confounding. Masataka Sugimoto, his shogi teacher, told the Tokyo Shimbun that he doesn't think Fujii uses software as a weapon, since he now faces players who also practiced with AI. But that doesn't mean his game play hasn't been changed by AI. Before the Miura scandal, pros who used software were considered the board-game equivalents of athletes who took performance-enhancing drugs. Now they're the norm, and the media couldn't be happier.


In Edmonton, companies find a humble hub for artificial intelligence – CBC.ca

Posted: at 9:11 pm

There's a hall of champions at the University of Alberta that only computer science students know where to find: more of a hallway, really, one office after the next, the achievements archived on hard drives and written in code.

It's there you'll find the professors who solved the game of checkers, beat a top human player in the game of Go, and used cutting-edge artificial intelligence to outsmart a handful of professional poker players for the very first time.

But lately, it's Richard Sutton who is catching people's attention on the Edmonton campus.

He's a pioneer in a branch of artificial intelligence research known as reinforcement learning: the computer science equivalent of treat-training a dog, except in this case the dog is an algorithm that's been incentivized to behave in a certain way.
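The treat-training analogy can be made concrete with a minimal tabular Q-learning sketch. Everything below is illustrative, not anything from Sutton's research code: the "dog" is a table of action values, and the "treat" is a +1 reward for reaching the right end of a five-cell corridor.

```python
import random

N_STATES, ACTIONS = 5, [-1, +1]          # move left / move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1    # learning rate, discount, exploration

# Q-table: the learned value of each (state, action) pair, all zero to start.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move along the corridor; the treat (+1) waits at the right end."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for episode in range(200):
    s, done = 0, False
    while not done:
        # Mostly act greedily, but explore occasionally.
        a = random.choice(ACTIONS) if random.random() < EPSILON \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r, done = step(s, a)
        best_next = max(Q[(nxt, act)] for act in ACTIONS)
        # The core update: nudge the value toward reward plus discounted future.
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# After training, the greedy policy moves right (+1) from every state.
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)])
```

The algorithm is never told "go right"; it is only rewarded when it happens to get there, and the behavior follows, which is what makes the approach attractive for problems where the right moves are unknown.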

U of A computing science professors and artificial intelligence researchers (left to right) Richard Sutton, Michael Bowling and Patrick Pilarski are working with Google's DeepMind to open the AI company's first research lab outside the U.K., in Edmonton. (John Ulan/University of Alberta)

It's a problem that's preoccupied Sutton for decades, one on which he literally wrote the book, and it's this wealth of experience that's brought a growing number of the tech industry's AI labs right to his doorstep.

Last week, Google's AI subsidiary DeepMind announced it was opening its first international office in Edmonton, where Sutton, alongside professors Michael Bowling and Patrick Pilarski, will work part-time. And earlier in the year, the research arm of the Royal Bank of Canada announced it was also opening an office in the city, which Sutton will also advise.

Dr. Jonathan Schaeffer, dean of the school's faculty of science, says there are more announcements to come.

Edmonton, which Schaeffer describes as "just off the beaten path," has not experienced the same frenzied pace of investment as cities like Toronto and Montreal, nor are tech companies opening offices or acquiring startups there with the same fervour. But the city, and the university in particular, has been a hotbed for world-class artificial intelligence research longer than outsiders might realize.

Those efforts date all the way back to the 1980s, when some of the school's researchers first entertained the notion of building a computer program that could play chess.

The faculty came together "organically" over the years, Schaeffer says. "It wasn't like there was a deliberate, brilliant strategy to build a strong group here."

While artificial intelligence is linked nowadays with advances in virtual assistants, robotics and self-driving vehicles, students and faculty at the university have spent decades working on one of the field's oldest challenges: games.

In 2007, Schaeffer and his team solved the game of checkers with a program they developed named Chinook, finishing a project that began nearly 20 years earlier.

In 2010, researcher Martin Muller and his colleagues detailed their work on Fuego, then one of the world's most advanced computer programs capable of playing Go. The ancient Chinese game is notoriously difficult, owing to the incredible number of possible moves a computer has to evaluate, but Fuego managed to beat a top professional on a smaller version of the game's board.

Fans of the 3,000-year-old Chinese board game Go watch a showdown between South Korean Go grandmaster Lee Sedol and the Google-developed supercomputer AlphaGo, in Seoul, March 9, 2016. (Jung Yeon-Je/AFP/Getty Images)

And earlier this year, a team led by Bowling presented DeepStack, a poker-playing program they taught to bluff and to learn from its previously played games. DeepStack beat 11 professional poker players, making Bowling's group one of two academic teams to recently pull off the task, a feat the school's Computer Poker Research Group has been working toward since its founding in 1996.

David Churchill, an assistant professor at Memorial University in Newfoundland and formerly a PhD student at the U of A, says that games are particularly well suited to artificial intelligence research, in part because they have well-defined rules, a clear goal and no shortage of human players to evaluate a program's progress and skill.

"We're not necessarily playing games for the sake of games," says Churchill, who spent his PhD teaching computers to play the popular real-time strategy video game StarCraft, but rather "using games as a test bed" to make artificial intelligence better.

The school's researchers haven't solely been focused on games, Schaeffer says, even if those are the projects that get the most press. He points to a professor named Russ Greiner, who has been using AI to more accurately identify brain tumours in MRI scans, and to Pilarski, who has been working on algorithms that make it easier for amputees to control their prosthetic limbs.

But it is Sutton's work on reinforcement learning that has the greatest potential to turn the city into Canada's next budding AI research hub.

Montreal and Toronto have received the bulk of attention in recent years, thanks to the rise of a particular branch of artificial intelligence research known as deep learning. Pioneered by the University of Toronto's Geoffrey Hinton, and the Montreal Institute for Learning Algorithms' Yoshua Bengio, among others, the technique has transformed everything from speech recognition to the development of self-driving cars.

But reinforcement learning, which some say is complementary to deep learning, is now getting its fair share of attention too.

Carnegie Mellon used the technique this year in its poker-playing program Libratus, which beat one of the best players in the world. Apple's director of artificial intelligence, Ruslan Salakhutdinov, has called it an "exciting area of research" that he believes could help solve challenging problems in robotics and self-driving cars.

And most famously, DeepMind relied on reinforcement learning and the handful of U of A graduates it hired to develop AlphaGo, the AI that beat Go grandmaster Lee Sedol.

"We don't seek the spotlight," says Schaeffer. "We're very proud of what we've done. We don't necessarily toot our own horn as much as other people do."


Why artificial intelligence is far too human – The Boston Globe

Posted: at 4:14 am


Have you ever wondered how the Waze app knows shortcuts in your neighborhood better than you do? It's because Waze acts like a superhuman air traffic controller: it measures distance and traffic patterns, it listens to feedback from drivers, and it compiles massive data sets to get you to your location as quickly as possible.

Even as we grow more reliant on these kinds of innovations, we still want assurances that we're in charge, because we still believe our humanity elevates us above computers. Movies such as "2001: A Space Odyssey" and the "Terminator" franchise teach us to fear computers programmed without any understanding of humanity; when a human sobs, Arnold Schwarzenegger's robotic character asks, "What's wrong with your eyes?" They always end with the machines turning on their makers.


What most people don't know is that artificial intelligence ethicists worry the opposite is happening: We are putting too much of ourselves, not too little, into the decision-making machines of our future.

God created humans in his own image, if you believe the scriptures. Now humans are hard at work scripting artificial intelligence in much the same way: in their own image. Indeed, today's AI can be just as biased and imperfect as the humans who engineer it. Perhaps even more so.


We already assign responsibility to artificial intelligence programs more widely than is commonly understood. People are diagnosed with diseases, kept in prison, hired for jobs, extended housing loans, and placed on terrorist watch lists, in part or in full, as a result of AI programs we've empowered to decide for us. Sure, humans might have the final word. But computers can control how the evidence is weighed.

And no one has asked you what you want.

That was by design. Automation was done in part to remove human bias from the equation. So why does a computer algorithm reviewing bank loans exhibit racial prejudice against applicants?

It turns out that algorithms, the building blocks of AI, acquire bias the same way that humans do: through instruction. In other words, they've got to be taught.


Computer models can learn by analyzing data sets for relationships. For example, if you want to train a computer to understand how words relate to each other, you can upload the entire English-language Web and let the machine assign relational values to words based on how often they appear next to other words; the closer together, the greater the value. Through this pattern recognition, the computer begins to paint a picture of what words mean.
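A toy version of that co-occurrence idea can be written in a few lines. The four-sentence corpus below is invented and stands in for the Web-scale text real systems use; real systems also turn the counts into dense vectors (word embeddings) rather than stopping at raw tallies.

```python
from collections import Counter

# Invented miniature corpus standing in for the English-language Web.
corpus = [
    "the flower smelled pleasant in the garden",
    "the insect bite was unpleasant",
    "a pleasant flower arrangement",
    "an unpleasant insect crawled by",
]

WINDOW = 3  # count pairs of words appearing within 3 positions of each other
pair_counts = Counter()
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for other in words[i + 1 : i + 1 + WINDOW]:
            pair_counts[frozenset((w, other))] += 1

print(pair_counts[frozenset(("flower", "pleasant"))])    # -> 2
print(pair_counts[frozenset(("insect", "unpleasant"))])  # -> 2
print(pair_counts[frozenset(("flower", "unpleasant"))])  # -> 0
```

Even at this scale the mechanism is visible: whatever associations the text happens to contain, the counts faithfully reproduce.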

Teaching computers to think keeps getting easier. But there's a serious miseducation problem as well. While humans can be taught to differentiate between implicit and explicit bias, and to recognize both in themselves, a machine simply follows a series of if-then statements. When those instructions reflect the biases and dubious assumptions of their creators, a computer will execute them faithfully while still looking superficially neutral. "What we have to stop doing is assuming things are objective and start assuming things are biased. Because that's what our actual evidence has been so far," says Cathy O'Neil, data scientist and author of the recent book "Weapons of Math Destruction."

As with humans, bias starts with the building blocks of socialization: language. The magazine Science recently reported on a study showing that implicit associations, including prejudices, are communicated through our language. "Language necessarily contains human biases, and the paradigm of training machine learning on language corpora means that AI will inevitably imbibe these biases as well," writes Arvind Narayanan, co-author of the study.

The scientists found that words like "flower" are more closely associated with pleasantness than "insect." Female words were more closely associated with the home and arts than with career, math, and science. Likewise, African-American names were more frequently associated with unpleasant terms than names more common among white people were.
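The association being measured can be sketched as a cosine-similarity comparison between word vectors. The three-dimensional vectors below are invented purely for illustration; the study used real embeddings with hundreds of dimensions and averaged over whole sets of attribute words.

```python
import math

def cosine(u, v):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# Invented toy embeddings (real ones have hundreds of dimensions).
vectors = {
    "flower":     (0.9, 0.1, 0.2),
    "insect":     (0.1, 0.9, 0.3),
    "pleasant":   (0.8, 0.2, 0.1),
    "unpleasant": (0.2, 0.8, 0.2),
}

def association(word):
    """Positive -> leans pleasant, negative -> leans unpleasant."""
    return cosine(vectors[word], vectors["pleasant"]) - \
           cosine(vectors[word], vectors["unpleasant"])

print(association("flower") > 0)  # True: "flower" leans pleasant
print(association("insect") < 0)  # True: "insect" leans unpleasant
```

Swap in names instead of "flower" and "insect" and the same arithmetic reproduces the racial and gender associations the study reported.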

This becomes an issue when job-recruiting programs trained on language sets like this are used to select resumes for interviews. If the program associates African-American names with unpleasant characteristics, its algorithmic training will make it more likely to select European-named candidates. Likewise, if the job-recruiting AI is told to search for strong leaders, it will be less likely to select women, because their names are associated with homemaking and mothering.

The scientists took their findings a step further and found a 90 percent correlation between how feminine or masculine a job title ranked in their word-embedding research and the actual number of men versus women employed in 50 different professions, according to Department of Labor statistics. The biases expressed in language directly relate to the roles we play in life.

"AI is just an extension of our culture," says co-author Joanna Bryson, a computer scientist at the University of Bath in the United Kingdom and Princeton University. "It's not that robots are evil. It's that the robots are just us."

Even AI giants like Google can't escape the impact of bias. In 2015, the company's facial recognition software tagged dark-skinned people as gorillas. Executives at FaceApp, a photo-editing program, recently apologized for building an algorithm that whitened users' skin in their pictures. The company had dubbed it the "hotness" filter.

In these cases, the error grew from data sets that didn't have enough dark-skinned people, which limited the machine's ability to learn variation within darker skin tones. Typically, a programmer instructs a machine with a series of commands, and the computer follows along. But if the programmer tests the design only on his peer group, coworkers, and family, he limits what the machine can learn and imbues it with whichever biases shape his own life.

Photo apps are one thing, but when their foundational algorithms creep into other areas of human interaction, the impacts can be as profound as they are lasting.

The faces of one in two adult Americans have been processed through facial recognition software. Law enforcement agencies across the country are using this gathered data with little oversight. Commercial facial-recognition algorithms have generally done a better job of telling white men apart than they do with women and people of other races, and law enforcement agencies offer few details indicating that their systems work substantially better. Our justice system has not decided if these sweeping programs constitute a search, which would restrict them under the Fourth Amendment. Law enforcement may end up making life-altering decisions based on biased investigatory tools with minimal safeguards.

Meanwhile, judges in almost every state are using algorithms to assist in decisions about bail, probation, sentencing, and parole. Massachusetts was sued several years ago because an algorithm it uses to predict recidivism among sex offenders didn't consider a convict's gender. Since women are less likely to reoffend, an algorithm that did not consider gender likely overestimated recidivism by female sex offenders. The intent of the scores was to replace human bias and increase efficiency in an overburdened judicial system. But, as journalist Julia Angwin reported in ProPublica, these algorithms are using biased questionnaires to come to their determinations and yielding flawed results.

A ProPublica study of the recidivism algorithm used in Fort Lauderdale found that 23.5 percent of white men were labeled as being at elevated risk of getting into trouble again but didn't re-offend. Meanwhile, 44.9 percent of black men were labeled higher risk for future offenses but didn't re-offend, showing how these scores are inaccurate and favor white men.
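The comparison ProPublica drew is a gap in false-positive rates: among people who did not re-offend, what share were flagged as higher risk? A sketch with invented records that mirror the reported pattern (the real analysis used thousands of Broward County cases):

```python
def false_positive_rate(records):
    """Share of people who did NOT re-offend but were flagged high risk."""
    no_reoffense = [r for r in records if not r["reoffended"]]
    flagged = [r for r in no_reoffense if r["high_risk"]]
    return len(flagged) / len(no_reoffense)

# Invented records: 100 non-reoffenders per group, flagged at rates
# chosen to echo the 23.5% vs. 44.9% figures reported above.
white = [{"high_risk": i < 24, "reoffended": False} for i in range(100)]
black = [{"high_risk": i < 45, "reoffended": False} for i in range(100)]

print(false_positive_rate(white))  # -> 0.24
print(false_positive_rate(black))  # -> 0.45
```

The point of the metric is that it conditions on the outcome: both groups here contain only people who stayed out of trouble, yet one group was flagged nearly twice as often.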

While the questionnaires don't ask specifically about skin color, data scientists say they back into race by asking questions like: "When was your first encounter with police?"

The assumption is that someone who comes in contact with police as a young teenager is more prone to criminal activity than someone who doesn't. But this hypothesis doesn't take into consideration that policing practices vary, and therefore so does the police's interaction with youth. If someone lives in an area where the police routinely stop and frisk people, he will be statistically more likely to have had an early encounter with the police. Stop-and-frisk is more common in urban areas, where African-Americans are more likely to live than whites. This measure doesn't calculate guilt or criminal tendencies, but it becomes a penalty when AI calculates risk. In this example, the AI is not just computing the individual's behavior; it is also considering the police's behavior.

"I've talked to prosecutors who say, 'Well, it's actually really handy to have these risk scores, because you don't have to take responsibility if someone gets out on bail and they shoot someone. It's the machine, right?'" says Joi Ito, director of the Media Lab at MIT.

It's even easier to blame a computer when the guts of the machine are trade secrets. Building algorithms is big business, and suppliers guard their intellectual property tightly. Even when these algorithms are used in the public sphere, their inner workings are seldom open for inspection. "Unlike humans, these machine algorithms are much harder to interrogate, because you don't actually know what they know," Ito says.

Whether such a process is fair is difficult to discern if a defendant doesn't know what went into the algorithm. With little transparency, there is limited ability to appeal the computer's conclusions. "The worst thing is the algorithms where we don't really even know what they've done, and they're just selling it to police and they're claiming it's effective," says Bryson, co-author of the word-embedding study.

Most mathematicians understand that the algorithms should improve over time. As they're updated, they learn more if they're presented with the right data. In the end, the relatively few people who manage these algorithms have an enormous impact on the future. They control the decisions about who gets a loan, who gets a job, and, in turn, who can move up in society. And yet from the outside, the formulas that determine the trajectories of so many lives remain as inscrutable as the will of the divine.


The AI revolution in science – Science Magazine

Posted: at 4:14 am

Just what do people mean by artificial intelligence (AI)? The term has never had clear boundaries. When it was introduced at a seminal 1956 workshop at Dartmouth College, it was taken broadly to mean making a machine behave in ways that would be called intelligent if seen in a human. An important recent advance in AI has been machine learning, which shows up in technologies from spellcheck to self-driving cars and is often carried out by computer systems called neural networks. Any discussion of AI is likely to include other terms as well.

ALGORITHM A set of step-by-step instructions. Computer algorithms can be simple ("if it's 3 p.m., send a reminder") or complex ("identify pedestrians").

BACKPROPAGATION The way many neural nets learn. They find the difference between their output and the desired output, then adjust the calculations in reverse order of execution.
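The definition above can be shown numerically with a single linear "neuron"; all of the numbers below are arbitrary, chosen only so the arithmetic is easy to follow.

```python
# One neuron: output y = w * x, squared error against a target,
# then one backward pass that adjusts w against the gradient.
w, x, target, lr = 0.5, 2.0, 3.0, 0.1

y = w * x                 # forward pass: 1.0
error = y - target        # difference from the desired output: -2.0
loss = error ** 2         # 4.0

grad_w = 2 * error * x    # d(loss)/dw, computed in reverse order: -8.0
w -= lr * grad_w          # update: 0.5 - 0.1 * (-8.0) = 1.3

new_loss = (w * x - target) ** 2
print(w, new_loss)        # 1.3 0.16 -- the error shrank from 4.0
```

A real network repeats this chain of "output error, walk the gradient backward" through millions of weights, but each step is this same arithmetic.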

BLACK BOX A description of some deep learning systems. They take an input and provide an output, but the calculations that occur in between are not easy for humans to interpret.

DEEP LEARNING How a neural network with multiple layers becomes sensitive to progressively more abstract patterns. In parsing a photo, layers might respond first to edges, then paws, then dogs.

EXPERT SYSTEM A form of AI that attempts to replicate a humans expertise in an area, such as medical diagnosis. It combines a knowledge base with a set of hand-coded rules for applying that knowledge. Machine-learning techniques are increasingly replacing hand coding.

GENERATIVE ADVERSARIAL NETWORKS A pair of jointly trained neural networks that generates realistic new data and improves through competition. One net creates new examples (fake Picassos, say) as the other tries to detect the fakes.

MACHINE LEARNING The use of algorithms that find patterns in data without explicit instruction. A system might learn how to associate features of inputs such as images with outputs such as labels.

NATURAL LANGUAGE PROCESSING A computers attempt to understand spoken or written language. It must parse vocabulary, grammar, and intent, and allow for variation in language use. The process often involves machine learning.

NEURAL NETWORK A highly abstracted and simplified model of the human brain used in machine learning. A set of units receives pieces of an input (pixels in a photo, say), performs simple computations on them, and passes them on to the next layer of units. The final layer represents the answer.

NEUROMORPHIC CHIP A computer chip designed to act as a neural network. It can be analog, digital, or a combination.

PERCEPTRON An early type of neural network, developed in the 1950s. It received great hype but was then shown to have limitations, suppressing interest in neural nets for years.

REINFORCEMENT LEARNING A type of machine learning in which the algorithm learns by acting toward an abstract goal, such as earn a high video game score or manage a factory efficiently. During training, each effort is evaluated based on its contribution toward the goal.

STRONG AI AI that is as smart and well-rounded as a human. Some say it's impossible. Current AI is "weak," or "narrow." It can play chess or drive but not both, and lacks common sense.

SUPERVISED LEARNING A type of machine learning in which the algorithm compares its outputs with the correct outputs during training. In unsupervised learning, the algorithm merely looks for patterns in a set of data.

TENSORFLOW A collection of software tools developed by Google for use in deep learning. It is open source, meaning anyone can use or improve it. Similar projects include Torch and Theano.

TRANSFER LEARNING A technique in machine learning in which an algorithm learns to perform one task, such as recognizing cars, and builds on that knowledge when learning a different but related task, such as recognizing cats.

TURING TEST A test of AI's ability to pass as human. In Alan Turing's original conception, an AI would be judged by its ability to converse through written text.


Artificial intelligence-based system warns when a gun appears in a video – Phys.Org

Posted: at 4:14 am

July 7, 2017 Credit: University of Granada

Scientists from the University of Granada (UGR) have designed a computer system based on new artificial intelligence techniques that automatically detects in real time when a subject in a video draws a gun.

Their work, pioneering on a global scale, has numerous practical applications, from improving security in airports and malls to automatically controlling violent content in which handguns appear in videos uploaded to social networks such as Facebook, YouTube or Twitter, or classifying public videos on the internet that contain handguns.

Francisco Herrera Triguero, Roberto Olmos and Siham Tabik, researchers in the Department of Computer Science and Artificial Intelligence at the UGR, developed this work. To ensure the proper functioning and efficiency of the model, the authors analyzed low-quality videos from YouTube and movies from the '90s such as Pulp Fiction, Mission: Impossible and James Bond films. The algorithm showed an effectiveness of over 96.5 percent and is capable of detecting guns with high precision, analyzing five frames per second in real time. When a handgun appears in the image, the system sends an alert in the form of a red box on the screen where the weapon is located.
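The real-time loop described here might be sketched as follows. The detector is a hypothetical stand-in for the UGR team's trained deep-learning model (detailed in their arXiv paper, linked below), and the frame format is invented; a real deployment would read frames from a camera and draw the red box on screen.

```python
def detect_handguns(frame):
    """Placeholder detector: a trained model would return bounding
    boxes [(x, y, w, h), ...] for any handguns in the frame."""
    return frame.get("boxes", [])

def monitor(frames, fps=30, analyzed_per_second=5):
    """Analyze ~5 frames per second, as the UGR system does, and
    record an alert whenever a handgun is detected."""
    stride = fps // analyzed_per_second
    alerts = []
    for idx, frame in enumerate(frames):
        if idx % stride != 0:
            continue  # skip frames to keep the analysis real-time
        for box in detect_handguns(frame):
            alerts.append((idx, box))  # the UI would draw a red box here
    return alerts

# Simulated 18-frame clip: a gun becomes visible at frame 12.
clip = [{"boxes": []}] * 12 + [{"boxes": [(40, 60, 32, 20)]}] * 6
print(monitor(clip))  # -> [(12, (40, 60, 32, 20))]
```

Sampling every sixth frame of 30 fps video is what yields the five-frames-per-second analysis rate the researchers report.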

A fast and inexpensive model

UGR full professor Francisco Herrera explained that the model can easily be combined with an alarm system and implemented inexpensively using video cameras and a computer with moderately high capacities.

Additionally, the system can be implemented in any area where video cameras can be placed, indoors or outdoors, and does not require direct human supervision.

Researcher Siham Tabik noted that deep learning models like this one represent a major breakthrough of the last five years in the detection, recognition and classification of objects in the field of computer vision.

A pioneering system

Until now, the principal weapon detection systems were based on metal detection and found in airports and public events in closed spaces. Although these systems have the advantage of being able to detect a firearm even when it is hidden from sight, they unfortunately have several disadvantages.

Among these drawbacks is the fact that these systems can only control the passage through a specific point (if the person carrying the weapon does not pass through this point, the system is useless); they also require the constant presence of a human operator and generate bottlenecks when there is a large flow of people. They also detect everyday metallic objects such as coins, belt buckles and mobile phones. This makes it necessary to use conveyor belts and x-ray scanners in combination with these systems, which is both slow and expensive. In addition, these systems cannot detect weapons that are not made of metal, which are now possible because of 3-D printing.

For this reason, handgun detection through video cameras is a new complementary security system that is useful for areas with video surveillance.


More information: Automatic Handgun Detection Alarm in Videos Using Deep Learning. arxiv.org/abs/1702.05147


Adjust slider to filter visible comments by rank

Display comments: newest first

It should be able to tell John McClane when to duck or Robocop when to shoot first by analysing their film footage: If they can train it to shout at the TV screen.

1/5??? The above mentioned videos would be a much better test, with sub-optimal lighting, and fast moving objects, than the sideways presented still frame featured in the article.

Please sign in to add a comment. Registration is free, and takes less than a minute. Read more

Go here to see the original:

Artificial intelligence-based system warns when a gun appears in a video - Phys.Org

Posted in Artificial Intelligence | Comments Off on Artificial intelligence-based system warns when a gun appears in a video – Phys.Org

Google gives journalists money to use artificial intelligence in reporting – The Hill

Posted: at 4:14 am

Google is giving British journalists more than 700,000 pounds to help them incorporate artificial intelligence into their work.

Google awarded the grant to The Press Association (PA), the national news agency for the United Kingdom and Ireland, and Urbs Media, a data-driven news startup. It's one of the largest grants handed out by Google's Digital News Initiative Innovation Fund.

The funding, announced on Thursday, will specifically go to Reporters And Data And Robots (RADAR), a news service that aims to create 30,000 local stories a month.

"Skilled human journalists will still be vital in the process, but RADAR allows us to harness artificial intelligence to scale up to a volume of local stories that would be impossible to provide manually," PA editor-in-chief Peter Clifton said in a statement.

The news organizations expressed optimism for development of their AI tools with the new grant.

"PA and Urbs Media are developing an end-to-end workflow to generate this large volume of news for local publishers across the UK and Ireland," they said in a release.

The funds will also help develop capabilities to auto-generate graphics and video to accompany text-based stories, as well as related pictures. PA's distribution platforms will also be enhanced to make sure that all local outlets can find and use the large volume of localised news stories.
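At its simplest, this kind of data-driven local reporting fans a national dataset out into one templated story per locality. The toy sketch below illustrates the idea; the template, field names and figures are invented for illustration, and RADAR's actual pipeline is considerably more sophisticated, with journalists writing the editorial templates.

```python
# A minimal sketch of template-based "robo-journalism": one national
# dataset produces a localised story per area. Template and data are
# invented; this is not RADAR's real workflow.

TEMPLATE = ("{area} recorded {count} new business registrations last "
            "quarter, a {direction} of {change}% on the previous quarter.")

def localise(row):
    direction = "rise" if row["change"] >= 0 else "fall"
    return TEMPLATE.format(area=row["area"], count=row["count"],
                           direction=direction, change=abs(row["change"]))

data = [
    {"area": "Leeds", "count": 412, "change": 7},
    {"area": "Cardiff", "count": 198, "change": -3},
]
for row in data:
    print(localise(row))
```

One dataset and one template yield hundreds of locally relevant variants, which is how a small team can plausibly target 30,000 stories a month.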

PA and Urbs's AI push is not the first time media outlets have taken advantage of the technology to supplement their reporting. Reporters at the Los Angeles Times have been working with AI since 2014 to assist them in writing and reporting stories about earthquakes.

"It saves people a lot of time, and for certain types of stories, it gets the information out there in usually about as good a way as anybody else would," then-Los Angeles Times journalist Ken Schwencke, who wrote a program for automated earthquake reporting, told the BBC.

"The way I see it is, it doesn't eliminate anybody's job as much as it makes everybody's job more interesting."

Follow this link:

Google gives journalists money to use artificial intelligence in reporting - The Hill

Posted in Artificial Intelligence | Comments Off on Google gives journalists money to use artificial intelligence in reporting – The Hill

Google’s artificial intelligence masterminds move into Edmonton – CBC.ca

Posted: at 4:14 am

One of the world's leading artificial intelligence companies is opening an office in Edmonton, signaling a boon for the city's science and technology sector.

DeepMind Ltd., Google's high-profile AI research firm, announced this week that it will open its first lab outside the United Kingdom in the capital region.

As DeepMind co-founder and CEO Demis Hassabis explained in a recent blog entry, the choice of location was no accident.

DeepMind considers the capital region a key hub for AI research in Canada, Hassabis said.

"Collaborating with (the University of Alberta) to open a lab feels like a natural extension of what we do here in London," Hassabis wrote.

"It was a big decision for us to open our first non-U.K. research lab, and the fact we're doing so in Edmonton is a sign of the deep admiration and respect we have for the Canadian research community."

The satellite office, named DeepMind Alberta, will work in close partnership with the University of Alberta, and three computing science professors, Rich Sutton, Michael Bowling and Patrick Pilarski, have been recruited to lead the effort.

The men will continue to work in the university's Alberta Machine Intelligence Institute while overseeing the new firm. They will be joined by seven other researchers hired by DeepMind from around the world.

The connections between London-based DeepMind and the U of A are longstanding.

Nearly a dozen University of Alberta graduates have joined the company's ranks over the years, and it has sponsored a machine learning lab to provide additional funding for PhDs.

"There is incredible alignment between DeepMind and the University of Alberta, both famed for their boundary-pushing research," Pilarski said in a statement.

"Their complementary areas of expertise are now being combined through DeepMind Alberta, and I look forward to making new scientific breakthroughs together."

Having been acquired by Google in 2014, DeepMind is now part of its parent company Alphabet.

DeepMind made headlines in 2015 when it engineered a computer program, AlphaGo, capable of beating a professional player at the ancient Chinese board game of Go, a feat many in the industry expected would be out of reach for many years to come.

The company has also won acclaim for its Atari program, a neural network that learns to play video games the way humans do, without external instruction, and that mimics the short-term memory of the human brain.

The firm is on a scientific mission to push the boundaries of AI, developing programs that can learn to solve complex problems without being taught how.
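The "learning without being taught how" the article describes is reinforcement learning: the program receives only a reward signal and discovers good behaviour on its own. The sketch below is tabular Q-learning on a toy five-state corridor, a far simpler cousin of DeepMind's Atari system (which replaces the table with a deep network fed raw pixels); all parameter values are illustrative choices, not DeepMind's.

```python
import random

# Tabular Q-learning on a toy corridor: the agent starts at state 0 and
# earns a reward only on reaching state 4. It is never told that moving
# right is correct; it discovers this from the reward alone.

random.seed(0)
N_STATES = 5
ACTIONS = (-1, +1)                          # step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.3           # learning rate, discount, exploration

for _ in range(400):                        # training episodes
    s = 0
    while s != N_STATES - 1:
        if random.random() < eps:           # occasionally explore at random
            a = random.choice(ACTIONS)
        else:                               # otherwise act greedily
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy should point right (+1) in every state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

Swap the lookup table for a deep network and the five states for screen pixels, and this same update rule is, in spirit, how the Atari program learns.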

The new DeepMind Alberta team will open for business this month in temporary facilities close to the university's main campus.

Their research has ramifications for such cutting-edge technologies as driverless cars and fully autonomous robots.

"They have the big picture," said Jonathan Schaeffer, dean of the faculty of science and a computing science professor.

"They're looking to understand where artificial intelligence will be five, 10, 20 years from now and fund those kind of projects with the expectation that they're going to do revolutionary, not evolutionary research."

DeepMind's decision to expand into Edmonton will allow the university to stay on the cutting edge of new and exciting technological advances in the development of human-like robots, Schaeffer said.

"We're bringing world-class technology to the city of Edmonton," he said.

"We have, for the last couple of decades, had one of the world's best artificial intelligence research groups in Edmonton.

"It's been a best-kept secret, but with DeepMind coming to town, it's almost like a calling card, an international group recognizes how great we are.

"It's going to allow us to attract and retain the brightest and the best in the world."

Originally posted here:

Google's artificial intelligence masterminds move into Edmonton - CBC.ca

Posted in Artificial Intelligence | Comments Off on Google’s artificial intelligence masterminds move into Edmonton – CBC.ca

Karandish: Problems Artificial Intelligence must overcome – St. Louis Business Journal

Posted: at 4:14 am


It's graduation season, and Bill Gates recently said that artificial intelligence is among the top fields for 2017 graduates to enter. A chorus of business leaders and executives have echoed these sentiments. What problems and issues will these recent ...

Excerpt from:

Karandish: Problems Artificial Intelligence must overcome - St. Louis Business Journal

Posted in Artificial Intelligence | Comments Off on Karandish: Problems Artificial Intelligence must overcome – St. Louis Business Journal

Is Artificial Intelligence A (Job) Killer? | HuffPost – HuffPost

Posted: July 7, 2017 at 2:13 am

There's no shortage of dire warnings about the dangers of artificial intelligence these days.

Modern prophets, such as physicist Stephen Hawking and investor Elon Musk, foretell the imminent decline of humanity. With the advent of artificial general intelligence and self-designed intelligent programs, new and more intelligent AI will appear, rapidly creating ever smarter machines that will, eventually, surpass us.

When we reach this so-called AI singularity, our minds and bodies will be obsolete. Humans may merge with machines and continue to evolve as cyborgs.

Is this really what we have to look forward to?

AI, a scientific discipline rooted in computer science, mathematics, psychology, and neuroscience, aims to create machines that mimic human cognitive functions such as learning and problem-solving.

Since the 1950s, it has captured the public's imagination. But, historically speaking, AI's successes have often been followed by disappointments, caused in large part by the inflated predictions of technological visionaries.

In the 1960s, one of the founders of the AI field, Herbert Simon, predicted that "machines will be capable, within twenty years, of doing any work a man can do." (He said nothing about women.)

Marvin Minsky, a neural network pioneer, was more direct: "Within a generation," he said, "the problem of creating artificial intelligence will substantially be solved."

But it turns out that Niels Bohr, the early 20th-century Danish physicist, was right when he (reportedly) quipped, "Prediction is very difficult, especially about the future."

Today, AI's capabilities include speech recognition, superior performance at strategic games such as chess and Go, self-driving cars, and revealing patterns embedded in complex data.

These talents have hardly rendered humans irrelevant.

Reuters

But AI is advancing. The most recent AI euphoria was sparked in 2009 by much faster learning of deep neural networks.

Deep neural networks consist of large collections of connected computational units called artificial neurons, loosely analogous to the neurons in our brains. To train such a network to "think," scientists provide it with many solved examples of a given problem.

Suppose we have a collection of medical-tissue images, each coupled with a diagnosis of cancer or no-cancer. We would pass each image through the network, asking the connected neurons to compute the probability of cancer.

We then compare the network's responses with the correct answers, adjusting the connections between neurons with each failed match. We repeat the process, fine-tuning all along, until most responses match the correct answers.

Eventually, this neural network will be ready to do what a pathologist normally does: examine images of tissue to predict cancer.
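The loop described above (predict, compare with the correct answer, adjust the connections) can be seen in miniature with a single artificial neuron trained on invented two-feature data standing in for the tissue images. This is a hedged sketch of the training principle, not a real diagnostic model; all data and parameter values are made up.

```python
import math
import random

# The train-compare-adjust loop, shrunk to one artificial neuron
# classifying toy two-feature examples (a stand-in for image features
# paired with cancer / no-cancer labels). Real diagnostic networks
# stack millions of such neurons, but the loop has exactly this shape.

random.seed(1)
examples = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
labels = [1.0 if x1 + x2 > 0 else 0.0 for x1, x2 in examples]  # toy ground truth

w1 = w2 = b = 0.0
lr = 0.5
for _ in range(300):                                  # training passes
    for (x1, x2), y in zip(examples, labels):
        p = 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + b)))  # predicted probability
        err = p - y                                   # mismatch with correct answer
        w1 -= lr * err * x1                           # adjust each connection
        w2 -= lr * err * x2                           # to shrink future mismatches
        b -= lr * err

correct = sum(
    (1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + b))) > 0.5) == (y == 1.0)
    for (x1, x2), y in zip(examples, labels)
)
print(f"training accuracy: {correct / len(labels):.2f}")
```

After enough passes the neuron classifies the training examples almost perfectly, just as the article's pathology network eventually matches most of its solved examples; the knowledge lives entirely in the adjusted weights.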

This is not unlike how a child learns to play a musical instrument: she practices and repeats a tune until perfection. The knowledge is stored in the neural network, but it is not easy to explain the mechanics.

Networks with many layers of neurons (hence the name "deep" neural networks) only became practical when researchers started using many parallel processors on graphics chips for their training.

Another condition for the success of deep learning is the large sets of solved examples. Mining the internet, social networks and Wikipedia, researchers have created large collections of images and text, enabling machines to classify images, recognize speech, and translate language.

Already, deep neural networks are performing these tasks nearly as well as humans.

But their good performance is limited to certain tasks.

Scientists have seen no improvement in AI's understanding of what images and text actually mean. If we showed a Snoopy cartoon to a trained deep network, it could recognize the shapes and objects, a dog here, a boy there, but it would not decipher the cartoon's significance (or see the humor).

We also use neural networks to suggest better writing styles to children. Our tools suggest improvement in form, spelling, and grammar reasonably well, but are helpless when it comes to logical structure, reasoning, and the flow of ideas.

Current models do not even understand the simple compositions of 11-year-old schoolchildren.

AI's performance is also restricted by the amount of available data. In my own AI research, for example, I apply deep neural networks to medical diagnostics, which has sometimes resulted in slightly better diagnoses than in the past, but nothing dramatic.

In part, this is because we do not have large collections of patient data to feed the machine. But the data hospitals currently collect cannot capture the complex psychophysical interactions causing illnesses like coronary heart disease, migraines or cancer.

So, fear not, humans. Febrile predictions of AI singularity aside, we're in no immediate danger of becoming irrelevant.

AI's capabilities drive science fiction novels and movies and fuel interesting philosophical debates, but we have yet to build a single self-improving program capable of general artificial intelligence, and there's no indication that intelligence can be infinite.

Deep neural networks will, however, indubitably automate many jobs. AI will take our jobs, jeopardising the livelihoods of manual labourers, medical diagnosticians and, perhaps someday, to my regret, computer science professors.

Robots are already conquering Wall Street. Research shows that artificial intelligence agents could lead some 230,000 finance jobs to disappear by 2025.

In the wrong hands, artificial intelligence can also cause serious danger. New computer viruses can detect undecided voters and bombard them with tailored news to swing elections.

Already, the United States, China, and Russia are investing in autonomous weapons using AI in drones, battle vehicles, and fighting robots, leading to a dangerous arms race.

Now thats something we should probably be nervous about.

Marko Robnik-Šikonja, Associate Professor of Computer Science and Informatics, University of Ljubljana

This article was originally published on The Conversation. Read the original article.

See the article here:

Is Artificial Intelligence A (Job) Killer? | HuffPost - HuffPost

Posted in Artificial Intelligence | Comments Off on Is Artificial Intelligence A (Job) Killer? | HuffPost – HuffPost

A Look At The Artificial Intelligence Companies And My Top 5 – Seeking Alpha

Posted: at 2:12 am

Note: This article first appeared on my Trend Investing Marketplace service on June 8. All data is therefore as of that date.

I wrote previously about the relatively new trend of artificial intelligence (AI) in an earlier article. This time I plan to take a look at the main companies to consider for investing in AI, and to select my top 5.

AI - "machines with brains"

Many AI companies are unlisted and acquired before IPO

According to CB Insights, "over 200 private companies using AI algorithms across different verticals have been acquired since 2012, with over 30 acquisitions taking place in Q1'17 alone." Perhaps CB Insights will be next. The graph below gives further details. The left side list also gives an idea of the current AI leaders and acquirers.

Source: CB Insights

Alphabet Inc (NASDAQ: GOOG, GOOGL)

Wikipedia quotes: "According to Bloomberg's Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within Google increasing from 'sporadic usage' in 2012 to more than 2,700 projects."

Google's research projects often have the idea of automating everything, and connecting everything and everyone online (such as "Project Loon").

Google search is already using complex algorithms (e.g. RankBrain) and deep learning techniques.

Google Home is a recent example of Google's move into AI, with its inbuilt "Google Assistant" (similar to Siri, Alexa, Bixby, M, Cortana and Watson). The assistant responds to any sentence beginning "Hey Google". Key to this are the voice recognition technology and microphones, so that the assistant can correctly understand you and respond appropriately.

Google announced its first chip, called the Tensor Processing Unit (TPU), in 2016. That chip worked in Google's data centers to power search results and image-recognition. A new version will be available to clients of its cloud business.

Google is a leader in autonomous car systems, which use plenty of AI. It helps that they already have Google Maps. Google also has Android Auto for in-car entertainment and internet capability.

Some AI acquisitions include: DNNresearch (voice and image recognition), DeepMind Technologies (deep learning, memory), Moodstocks (visual search), Api.ai (bot platform), and Kaggle (predictive analytics platform).

Amazon's Echo Dot has Alexa, Google Home has Google Assistant, and Apple's iPhone has Siri

Amazon (NASDAQ: AMZN)

Amazon's home speaker Echo (and Echo Dot) and personal assistant "Alexa" are examples of Amazon's move into AI. Alexa is perhaps the most successful AI-powered personal assistant thus far, especially as it is able to perform over 10,000 online and offline functions.

Amazon Web Services ("cloud") offers deep-learning capabilities, allowing users to use up to 16 of Nvidia's Tesla K80 GPUs.

Amazon's AI acquisitions include Orbeus (automated facial, object and scene recognition), Angel.ai (chatbots), and Harvest.ai (cyber security).

Advanced Micro Devices (NASDAQ:AMD)

AMD are a semiconductor manufacturer. The company's new AI chips are the Radeon Instinct series, and AMD are racing to put their chips into AI applications. AMD will release their ROCm deep-learning framework, as well as an open-source GPU-accelerated library called MIOpen. The company plans to launch three products under the new brand in 2017.

Apple (NASDAQ: AAPL)

Apple has been an AI leader with their voice (and face) recognition software used by their personal assistant Siri (on your smartphone), introduced in 2011. In October 2016, Apple hired Russ Salakhutdinov from Carnegie Mellon University as its director of AI research.

Apple is working on a processor devoted specifically to AI-related tasks. The chip is known internally as the "Apple Neural Engine". Apple have an autonomous vehicle development team that uses AI, and they are also said to be working on augmented reality using AI chips. Apple is reportedly developing a specific AI chip for mobile devices.

Apple is also a leader in the areas of virtual reality (VR) and augmented reality (AR) headsets. Apple's ARKit will be available soon, and AR may one day replace the smartphone. Apple iPhone users could get their first taste of AR technology later this year, as the 10-year-anniversary Apple iPhone will be enhanced with AR features.

Apple has made several AI acquisitions including Perceptio (deep learning technology for smartphones, that allows phones to independently identify images without relying on external data libraries), Emotient (can assess emotions by reading facial expressions), and RealFace (facial recognition).

VR and AR headsets are the next potential big thing

Baidu (NASDAQ: BIDU)

Baidu are the Google of China, so not surprisingly they have followed in Google's footsteps developing deep learning search functionality, as well as autonomous driving.

Facebook (NASDAQ: FB)

Mark Zuckerberg has become very interested in AI after initially using it for simple tasks. Facebook has chatbots, as well as its own personal assistant, "M". The Facebook site uses AI to direct targeted advertising that matches your likes.

Facebook holds more than $32 billion in cash on its balance sheet and produced more than $13 billion in free cash flow over the last year. The company does not pay any dividends and has no debt. This means Facebook can pretty much buy up any rivals or promising new AI or tech companies. Some acquisitions include Face.com (facial recognition), Masquerade (a selfie app that lets you do face-swaps), Occulus (virtual reality), Eye Tribe (eye tracking software) and Zurich Eye (enables machines to see).

Facebook's Mark Zuckerberg has recently said that "the next big thing is augmented reality." He sees a world where we use AR glasses to project an image like a computer screen. The tricky part is the mouse, so Facebook's team is looking at using direct brain-to-glasses technology, or something like eye movements, to control your screen.

International Business Machines (NYSE: IBM)

IBM should not be underestimated in AI, as they have previously led the AI industry. They became famous in this area when their supercomputer/personal assistant Watson beat two champions of the quiz show Jeopardy! live on television. Apparently Watson can read 40 million documents in 15 seconds.

IBM have acquired Cognea (virtual assistants with a depth of personality), AlchemyAPI (deep learning for natural language processing, including semantic text and sentiment analysis, and for computer vision, including face detection and recognition), and Explorys (a healthcare intelligence cloud company that has built one of the largest clinical data sets in the world, representing more than 50 million lives).

Intel (NASDAQ: INTC)

Intel are the leading global semiconductor manufacturer, with a dominant position in the desktop/PC market.

Intel also acquired Indisys (intelligent dialogue systems), Saffron (cognitive computing), Itseez (vision systems), and recently paid $15.3b for Mobileye (used in autonomous driving). Intel also recently spent $400 million to buy deep-learning startup Nervana. The company intends to integrate Nervana technology into Xeon and Xeon Phi processor lineups. Intel claims Nervana will deliver up to a 100-fold reduction in the time it takes to train a deep learning model within the next three years.

Microsoft (NASDAQ: MSFT)

Microsoft's background is in PC/desktop software (Office etc), and in gaming (XBox).

AI was first adopted by hyperscale data center operators such as Microsoft, Facebook, and Google, which used AI for image recognition and voice processing. Microsoft's Azure cloud offers deep-learning capabilities with support for up to four of Nvidia's slightly older GPUs.

Microsoft also have a personal assistant named "Cortana".

Microsoft AI acquisitions include: Netbreeze (social media monitoring and analytics), Equivio (machine learning), SwiftKey (analyzes data to predict what a user is typing and what they'll type next), Genee (an AI app that acts as a digital personal assistant to schedule meetings), and Maluuba (deep learning, natural language processing).

Nvidia (NASDAQ: NVDA)

Nvidia design and sell industry leading AI chips, which puts them at the top of the AI pyramid. They collaborate, design and sell their various types of chips to almost all the top tech companies.

Nvidia's background is as a designer and seller of the graphics processing units (GPUs) that dominate the gaming industry.

The company has expanded their products to include AI chips such as the Tesla P100 GPU, making them a leader in AI. Those chips are popular with data centers, and autonomous and semi-autonomous vehicles. Tesla recently decided to drop Mobileye and go for Nvidia technology for their autonomous vehicles.

Bloomberg summarizes it well: "Nvidia has become one of the chipmakers leading the charge to provide the underpinnings of machine intelligence in everything from data centers to automobiles. As the biggest maker of graphics chips, Nvidia has proved that type of processor's ability to perform multiple tasks in parallel has value in new markets, where artificial intelligence is increasingly important."

Qualcomm (NASDAQ: QCOM)

Qualcomm's revenue has mostly come from wireless modem licensing fees, especially as a key supplier to Apple. However, the company is currently facing legal issues (U.S. FTC and Apple litigation) as well as pressure from declining smartphone-related revenues. Qualcomm also operate in the IoT, security and networking industries. Given the coming boom in autonomous vehicles, Qualcomm are in the process of buying NXP Semiconductors (NASDAQ:NXPI) for $47b; NXP is the leader in high-performance, mixed-signal semiconductor electronics and a leading solutions supplier to the automotive industry.

Qualcomm Inc.'s latest Snapdragon chip for smartphones has a module for handling artificial intelligence tasks.

Samsung (OTC: SSNLF)

Samsung are the global number 2 semiconductor manufacturer, and the global number 1 smartphone seller. The rise of AI will lead to a huge increase in demand for both computer processing chips and memory chips - which Samsung can supply.

Samsung's "Bixby" is similar to Apple's personal assistant Siri.

In virtual reality (VR) headsets, as of Q1 2017, Samsung is the current market leader with 21.5% market share. Sony is second with 18.8%, followed by HTC with 8.4%, Facebook with 4.4%, and TCL with 4%. Total worldwide shipments of augmented reality and virtual reality headsets reached 2.3M in the first quarter, according to IDC, with VR headsets representing over 98% of the sales. An IDC report from March predicts the number of AR and VR headsets shipped will reach 99.4M units by 2021.

Tesla (NASDAQ: TSLA)

Tesla are the global leader in autonomous vehicles (AVs). As AVs progress from level 1 to level 5 (full autonomy), Tesla can be a large beneficiary in terms of electric car sales and transport as a service (as a taxi company). Tesla may also benefit from selling their AV technology to other car manufacturers, or from expanding into using AI in the home, as they already have the Powerwall and solar roof. That means Tesla may end up being your taxi company, your energy supplier, and your content provider, both in your car and at home, all run by AI programs from your smartphone. Elon Musk's latest venture, Neuralink, is looking at how we can connect the brain directly to a device.

Others to consider

Ebay Inc (NASDAQ:EBAY), General Electric (NYSE:GE), Nice Ltd (NASDAQ:NICE), Oracle (NYSE:ORCL), Salesforce.com (NYSE:CRM), Skyworks Solutions (NASDAQ:SWKS), Softbank (OTC:SFBTF)(they bought out chip designer ARM, and own 4.95% of Nvidia), Sophos Group (LN:SOPH) (IT security), and Twitter (NYSE:TWTR).

The companies that make the hardware behind the AI boom - especially the optics (transceivers, transponders, amplifiers, lasers and sensors)

With all booms often it is wisest to buy those that make the picks and shovels. In this case it is the optics (transceivers, transponders, amplifiers, lasers and sensors), cameras, semiconductors and so on. I have already discussed Samsung, Nvidia, Intel, Qualcomm, and AMD above, as they will do well from semiconductor design and sales.

Some of the major optics providers that can do very well include:

Note: I will most likely write a separate article on how to benefit from the AI boom by buying the picks and shovels stocks behind the boom.

Conclusion

AI will be invisible, yet it will be everywhere. AI appears on our smartphones, with virtual assistants, augmented and virtual reality headsets, robots, in data centers, and in semi and fully autonomous vehicles.

My top 5 AI stocks to play the coming AI boom are Apple, Samsung, Alphabet Google, Facebook, and Nvidia. If I could find the next Nvidia, or a listed small-cap AI company with potential, I would include it in a top 6. For now I have not found one, as such companies mostly get bought out by the tech giants before going public.

Apple and Samsung are chosen as they are the global top two smartphone sellers (the smartphone and AR/VR devices will mostly be the platforms for mass-market retail AI), they have very loyal customer bases to cross-sell new AI products to, and they control what chips go into their devices. Apple may move towards its own AI chip, and Samsung are already the global number 2 chip maker.

Alphabet Google and Facebook dominate the internet, and therefore have a large influence on the retail and business market. They are both already leaders in AI, with the financial backing to buy out any competitive threats. We could perhaps add Amazon to this group also.

Nvidia are the clear chip design leader in the AI space, and have an excellent track record. They are a must have in any AI portfolio. AMD would be a cheaper alternative for those worried about Nvidia's valuation.

Finally an equally wise move would be to buy the "pick and shovel" makers behind the boom such as Applied Optoelectronics and Fabrinet.

I am interested to hear your favorite AI stock and why. As usual all comments are welcome.

Disclosure: I am/we are long GOOG, FB, SSNLF, FN, AAOI.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

Additional disclosure: The information in this article is general in nature and should not be relied upon as personal financial advice.

Editor's Note: This article discusses one or more securities that do not trade on a major U.S. exchange. Please be aware of the risks associated with these stocks.

Continued here:

A Look At The Artificial Intelligence Companies And My Top 5 - Seeking Alpha

Posted in Artificial Intelligence | Comments Off on A Look At The Artificial Intelligence Companies And My Top 5 – Seeking Alpha

Page 152«..1020..151152153154..160170..»