The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard de Chardin
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Category Archives: Artificial Intelligence
Cognitive and artificial intelligence spending expected to surge through 2020, says IDC – ZDNet
Posted: April 3, 2017 at 8:22 pm
Revenue for cognitive and artificial intelligence systems will hit $12.5 billion in 2017, up 59.3 percent from a year ago, according to IDC. Spending on these AI systems is projected to grow at a 54.4 percent compound annual growth rate, topping $46 billion by 2020.
The biggest portion of that 2017 spending will go to cognitive applications, which IDC projects at $4.5 billion for the year. Cognitive and AI software platforms with tools to organize, access and analyze data will see spending of $2.5 billion.
Meanwhile, services attached to rolling out cognitive and AI systems will top $3.5 billion in 2017, said IDC, which also noted that $1.9 billion will be spent on hardware for these systems.
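Those headline figures can be sanity-checked with a few lines of arithmetic. The short Python sketch below uses only the numbers quoted above; it confirms that a 54.4 percent compound annual growth rate carries $12.5 billion in 2017 to roughly $46 billion by 2020, and that the 2017 component breakdown sums (with rounding) to the $12.5 billion total.

```python
# Sanity check of the IDC figures quoted above (all values in billions of USD).
spending_2017 = 12.5      # total cognitive/AI revenue in 2017
cagr = 0.544              # compound annual growth rate through 2020

# Three years of compounding from 2017 to 2020:
spending_2020 = spending_2017 * (1 + cagr) ** 3
print(f"Implied 2020 spending: ${spending_2020:.1f}B")   # ~$46.0B, matching "top $46 billion"

# The 2017 components quoted above should roughly sum to the headline total:
components = {
    "cognitive applications": 4.5,
    "software platforms": 2.5,
    "services": 3.5,
    "hardware": 1.9,
}
print(f"Sum of 2017 components: ${sum(components.values()):.1f}B")   # $12.4B vs. $12.5B (rounding)
```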
As for use cases, quality management and recommendations, diagnosis and treatment, customer service, security and fraud investigations will lead. Those use cases will account for half of all AI system spending in 2017. Between 2015 and 2020, public safety and emergency response and pharmaceutical research and discovery will grow at the fastest clip.
The U.S. will spend the most on AI in 2017, at nearly $9.7 billion, with EMEA at No. 2. By 2020, Asia/Pacific will be No. 2, said IDC.
Go here to see the original:
Cognitive and artificial intelligence spending expected to surge through 2020, says IDC - ZDNet
Posted in Artificial Intelligence
Comments Off on Cognitive and artificial intelligence spending expected to surge through 2020, says IDC – ZDNet
Demystifying artificial intelligence: Here’s everything you need to know about AI – Digital Trends
Posted: at 8:22 pm
Crazy singularities, robot rebellions, falling in love with computers: artificial intelligence conjures up a multitude of wild what-ifs. But in the real world, AI involves machine learning, deep learning, and many other programmable capabilities that we're just beginning to explore. Let's put the fantasy stuff on hold, at least for now, and talk about this real-world AI. Here's how it works, and where it's going.
More: Is the AI apocalypse a tired Hollywood trope, or human destiny?
Today's AI systems seek to process or respond to data in human-like ways. It's a broad definition, but it needs to be as broad as possible, because there are a lot of different AI projects currently in existence. If you want a little more classification, there are two types of AI to consider.
AI can also be classified by how it operates, which is particularly important when considering how complex an AI system is and the ultimate costs of that software. If a company is creating an AI solution, the first question must be: "Will it learn through training or inference?"
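To make the training/inference distinction concrete, here is a minimal sketch in Python. The dataset and model (scikit-learn's bundled digits data and a logistic regression classifier) are illustrative choices of mine, not anything specified in the article; the point is simply that the expensive, data-hungry work happens once at training time, while inference reuses the trained model on new inputs.

```python
# Minimal illustration of training vs. inference, using assumed stand-ins
# (scikit-learn's digits dataset and LogisticRegression), not a specific product.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Training: compute- and data-intensive, typically done up front.
model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)

# Inference: cheap per example, applied to data the model has never seen.
print("Predicted digits:", model.predict(X_test[:5]))
print("Held-out accuracy:", round(model.score(X_test, y_test), 3))
```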
There have been books and books written about what specific features AI must include to be truly AI, and unsurprisingly, no one really agrees on what these features are; every description of AI is a little different. But there are several examples of successful AIs in our current landscape worth looking at.
Read more:
Demystifying artificial intelligence: Here's everything you need to know about AI - Digital Trends
Posted in Artificial Intelligence
Comments Off on Demystifying artificial intelligence: Here’s everything you need to know about AI – Digital Trends
Artificial Intelligence and Artificial Problems by J. Bradford DeLong … – Project Syndicate
Posted: at 8:22 pm
BERKELEY: Former US Treasury Secretary Larry Summers recently took exception to current US Treasury Secretary Steve Mnuchin's views on artificial intelligence (AI) and related topics. The difference between the two seems to be, more than anything else, a matter of priorities and emphasis.
Mnuchin takes a narrow approach. He thinks that the problem of particular technologies called "artificial intelligence" taking over American jobs lies far in the future. And he seems to question the high stock-market valuations for "unicorns," companies valued at or above $1 billion that have no record of producing revenues that would justify their supposed worth and no clear plan to do so.
Summers takes a broader view. He looks at the impact of technology on jobs generally, and considers the stock-market valuation for highly profitable technology companies such as Google and Apple to be more than fair.
I think that Summers is right about the optics of Mnuchin's statements. A US treasury secretary should not answer questions narrowly, because people will extrapolate broader conclusions even from limited answers. The impact of information technology on employment is undoubtedly a major issue, but it is also not in society's interest to discourage investment in high-tech companies.
On the other hand, I sympathize with Mnuchin's effort to warn non-experts against routinely investing in castles in the sky. Although great technologies are worth the investment from a societal point of view, it is not so easy for a company to achieve sustained profitability. Presumably, a treasury secretary already has enough on his plate to have to worry about the rise of the machines.
In fact, it is profoundly unhelpful to stoke fears about robots, and to frame the issue as artificial intelligence taking American jobs. There are far more constructive areas for policymakers to direct their focus. If the government is properly fulfilling its duty to prevent a demand-shortfall depression, technological progress in a market economy need not impoverish unskilled workers.
This is especially true when value is derived from the work of human hands, or the work of things that human hands have made, rather than from scarce natural resources, as in the Middle Ages. Karl Marx was one of the smartest and most dedicated theorists on this topic, and even he could not consistently show that technological progress necessarily impoverishes unskilled workers.
Technological innovations make whatever is produced primarily by machines more useful, albeit with relatively fewer contributions from unskilled labor. But that by itself does not impoverish anyone. To do that, technological advances also have to make whatever is produced primarily by unskilled workers less useful. But this is rarely the case, because there is nothing keeping the relatively cheap machines used by unskilled workers in labor-intensive occupations from becoming more powerful. With more advanced tools, these workers can then produce more useful things.
Historically, there are relatively few cases in which technological progress, occurring within the context of a market economy, has directly impoverished unskilled workers. In these instances, machines caused the value of a good that was produced in a labor-intensive sector to fall sharply, by increasing the production of that good so much as to satisfy all potential consumers.
The canonical example of this phenomenon is textiles in eighteenth- and nineteenth-century India and Britain. New machines made the exact same products that handloom weavers had been making, but they did so on a massive scale. Owing to limited demand, consumers were no longer willing to pay for what handloom weavers were producing. The value of wares produced by this form of unskilled labor plummeted, but the prices of commodities that unskilled laborers bought did not.
The lesson from history is not that the robots should be stopped; it is that we will need to confront the social-engineering and political problem of maintaining a fair balance of relative incomes across society. Toward that end, our task becomes threefold.
First, we need to make sure that governments carry out their proper macroeconomic role, by maintaining a stable, low-unemployment economy so that markets can function properly. Second, we need to redistribute wealth to maintain a proper distribution of income. Our market economy should promote, rather than undermine, societal goals that correspond to our values and morals. Finally, workers must be educated and trained to use increasingly high-tech tools (especially in labor-intensive industries), so that they can make useful things for which there is still demand.
Sounding the alarm about "artificial intelligence taking American jobs" does nothing to bring such policies about. Mnuchin is right: the rise of the robots should not be on a treasury secretary's radar.
See the rest here:
Artificial Intelligence and Artificial Problems by J. Bradford DeLong ... - Project Syndicate
Posted in Artificial Intelligence
Comments Off on Artificial Intelligence and Artificial Problems by J. Bradford DeLong … – Project Syndicate
EY to open first artificial intelligence centre in Mumbai – Economic Times
Posted: at 8:22 pm
NEW DELHI: London-headquartered professional services firm EY is set to open its first artificial intelligence centre in Mumbai to help its clients figure out the best way to use these emerging technologies.
The centre will bring together teams of multi-disciplinary practitioners, combining expertise in areas such as AI, robotics, machine learning and cognitive technology, along with domain experience in sectors, according to the firm.
"The premise is that when any technology comes in, there is a lot of hype. Everyone talks about it but most dont know how to adopt it," said Milan Sheth, partner, advisory services and technology sector leader, EY India.
He said that artificial intelligence is already being deployed across industries such as automotive, telecom and technology, but there is a need to help enterprises understand how they can best use these new technologies to their advantage.
"The idea was to demystify AI since everyone talks about the concept but nobody talks about how to apply it. The B2C (business to consumer) industry adopts this very quickly; B2B (business to business) is the slowest adopter," Sheth said.
The idea to set up such a centre first came up six months ago, after which EY developed training content, got trainers and conducted a few beta sessions with some clients over the past three months.
See the original post here:
EY to open first artificial intelligence centre in Mumbai - Economic Times
Posted in Artificial Intelligence
Comments Off on EY to open first artificial intelligence centre in Mumbai – Economic Times
Discussing the limits of artificial intelligence – TechCrunch
Posted: April 2, 2017 at 8:03 am
Alice Lloyd George, Contributor
Alice Lloyd George is an investor at RRE Ventures and the host of Flux, a series of podcast conversations with leaders in frontier technology.
It's hard to visit a tech site these days without seeing a headline about deep learning for X, or a claim that AI is on the verge of solving all our problems. Gary Marcus remains skeptical.
Marcus, a best-selling author, entrepreneur, and professor of psychology at NYU, has spent decades studying how children learn and believes that throwing more data at problems won't necessarily lead to progress in areas such as understanding language, not to speak of getting us to AGI, artificial general intelligence.
Marcus is the voice of anti-hype at a time when AI is all the hype, and in 2015 he translated his thinking into a startup, Geometric Intelligence, which uses insights from cognitive psychology to build better performing, less data-hungry machine learning systems. The team was acquired by Uber in December to run Uber's AI labs, where his cofounder Zoubin Ghahramani has now been appointed chief scientist. So what did the tech giant see that was so important?
In an interview for Flux, I sat down with Marcus, who discussed why deep learning is the hammer that's making all problems look like a nail, and why his alternative sparse data approach is so valuable.
We also got into the challenges of being an AI startup competing with the resources of Google, how corporates aren't focused on what society actually needs from AI, his proposal to revamp the outdated Turing test with a multi-disciplinary AI triathlon, and why programming a robot to understand harm is so difficult.
AMLG: Gary, you are well known as a critic of this technique; you've said that it's over-hyped. That there's low hanging fruit that deep learning's good at, specific narrow tasks like perception and categorization, and maybe beating humans at chess, but you felt that this deep learning mania was taking the field of AI in the wrong direction, that we're not making progress on cognition and strong AI. Or as you've put it, we wanted Rosie the robot, and instead we got the Roomba. So you've advocated for bringing psychology back into the mix, because there's a lot of things that humans do better, and that we should be studying humans to understand why they do things better. Is this still how you feel about the field?
GM: Pretty much. There was probably a little more low hanging fruit than I anticipated. I saw somebody else say it more concisely, which is simply, deep learning does not equal AGI (AGI is artificial general intelligence). There's all the stuff you can do with deep learning, like it makes your speech recognition better. It makes your object recognition better. But that doesn't mean it's intelligence. Intelligence is a multi-dimensional variable. There are lots of things that go into it.
In a talk I gave at TEDx CERN recently, I made this kind of pie chart and I said look, here's perception, that's a tiny slice of the pie. It's an important slice of the pie, but there's lots of other things that go into human intelligence, like our ability to attend to the right things at the same time, to reason about them, to build models of what's going on in order to anticipate what might happen next and so forth. And perception is just a piece of it. And deep learning is really just helping with that piece.
In a New Yorker article that I wrote in 2012, I said look, this is great, but it's not really helping us solve causal understanding. It's not really helping with language. Just because you've built a better ladder doesn't mean you've gotten to the moon. I still feel that way. I still feel like we're actually no closer to the moon, where the moonshot is intelligence that's really as flexible as human beings. We're no closer to that moonshot than we were four years ago. There's all this excitement about AI and it's well deserved. AI is a practical tool for the first time and that's great. There's good reason for companies to put in all of this money. But just look for example at a driverless car, that's a form of intelligence, modest intelligence, the average 16-year-old can do it as long as they're sober, with a couple of months of training. Yet Google has worked on it for seven years and their car still can only drive, as far as I can tell since they don't publish the data, like on sunny days, without too much traffic.
AMLG: And isn't there the whole black box problem, that you don't know what's going on? We don't know the inner workings of deep learning, it's kind of inscrutable. Isn't that a massive problem for things like driverless cars?
GM: It is a problem. Whether it's an insuperable problem is an open empirical question. So it is a fact, at least for now, that we can't well interpret what deep learning is doing. So the way to think about it is you have millions of parameters and millions of data points. That means that if I as an engineer look at this thing, I have to contend with these millions or billions of numbers that have been set based on all of that data, and maybe there is a kind of rhyme or reason to it, but it's not obvious, and there's some good theoretical arguments to think sometimes you're never really going to find an interpretable answer there.
There's an argument now in the literature, which goes back to some work that I was doing in the 90s, about whether deep learning is just memorization. So this was the paper that came out that said it is, and another says no it isn't. Well it isn't literally exactly memorization, but it's a little bit like that. If you memorize all these examples, there may not be some abstract rule that characterizes all of what's going on, but it might be hard to say what's there. So if you build your system entirely with deep learning, which is something that Nvidia has played around with, and something goes wrong, it's hard to know what's going on, and that makes it hard to debug.
AMLG: Which is a problem if your car just runs into a lamppost and you can't debug why that happened.
GM: You're lucky if it's only a lamppost and not too many people are injured. There are serious risks here. Somebody did die, though I think it wasn't a deep learning system in the Tesla crash, it was a different kind of system. We actually have problems on engineering on both ends. So I don't want to say that classical AI has fully licked these problems, it hasn't. I think it's been abandoned prematurely and people should come back to it. But the fact is we don't have good ways of engineering really complex systems. And minds are really complex systems.
AMLG: Why do you think these big platforms are reorganizing around AI and specifically deep learning? Is it just that they've got data moats, so you might as well train on all of that data if you've got it?
GM: Well there's an interesting thing about Google, which is they have enormous amounts of data. So of course they want to leverage it. Google has the power to build new resources that they give away free, and they build the resources that are particular to their problem. So Google, because they have this massive amount of data, has oriented their AI around, how can I leverage that data? Which makes sense from their commercial interests. But it doesn't necessarily mean, say from a society's perspective... does society need AI? What does it need it for? What would be the best way to build it?
I think if you asked those questions you would say, well, what society most needs is automated scientific discovery that can help us actually understand the brain to cure neural disorders, to actually understand cancer to cure cancer, and so forth. If that were the thing we were most trying to solve in AI, I think we would say, let's not leave it all in the hands of these companies. Let's have an international consortium, kind of like we had for CERN, the large hadron collider. That's seven billion dollars. What if you had $7 billion that was carefully orchestrated towards a common goal? You could imagine society taking that approach. It's not going to happen right now given the current political climate.
AMLG: Well they are sort of, at least, coming together on AI ethics. So that's a start.
GM: It is good that people are talking about the ethical issues, and there are serious issues that deserve consideration. The only thing I would say there is, some people are hysterical about it, thinking that real AI is around the corner, and it probably isn't. I think it's still OK that we start thinking about these things now, even if real AI is further away than people think it is. If that's what moves people into action, and it takes 20 years but the action itself takes 20 years, then it's the right timing to start thinking about it now.
AMLG: I want to get back to your alternative approach to solving AI, and why it's so important. So you've come up with what you believe is a better paradigm, taking inspiration from cognitive psychology. The idea is that your algorithms are a much quicker study, that they're more efficient and less data hungry, less brittle, and that they can have broader applicability. And in a brief amount of time you've had impressive early results. You've run a bunch of image recognition tests comparing the techniques and have shown that your algorithms perform better, using smaller amounts of data, often called sparse data. So deep learning works well when you have tons of data for common examples and high frequency things. But in the real world, in most domains, there's a long tail of things where there isn't a lot of data. So while neural nets may be good at low level perception, they aren't as good at understanding integrated wholes. So tell us more about your approach, and how your training in cognitive neuroscience has informed it.
GM: My training was with Steve Pinker. And through that training I became sensitive to the fact that human children are very good at learning language, phenomenally good, even when they're not that good at other things. Of course I read about that as a graduate student; now I have some human children, I have a four-year-old and a two-and-a-half year old. And it's just amazing how fast they learn.
AMLG: The best AIs you've ever seen.
GM: The best AIs I've ever seen. Actually my son shares a birthday with Rodney Brooks, who's one of the great roboticists, I think you know him well. For a while I was sending Rodney an e-mail message every year saying happy birthday. My son is now a year old. I think he can do this and your robots can't. It was kind of a running joke between us.
AMLG: And now hes vastly superior to all of the robots.
GM: And I didn't even bother this year. The four-year-olds of this world, what they can do in terms of motor control and language is far ahead of what robots can do. And so I started thinking about that kind of question really in the early 90s, and I've never fully figured out the answer, but part of the motivation for my company was, hey, we have these systems now that are pretty good at learning if you have gigabytes of data, and that's great work if you can get it, and you can get it sometimes. So speech recognition, if you're talking about white males asking search queries in a quiet room, you can get as much labelled data, which is critical for these systems, as you want. This is how somebody says something and this is the word written out. But my kids don't need that. They don't have labelled data, they don't have gigabytes of labelled data, they just kind of watch the world and they figure all this stuff out.
Go here to see the original:
Discussing the limits of artificial intelligence - TechCrunch
Posted in Artificial Intelligence
Comments Off on Discussing the limits of artificial intelligence – TechCrunch
How humans will lose control of artificial intelligence – The Week Magazine
Posted: at 8:03 am
This is the way the world ends: not with a bang, but with a paper clip. In this scenario, the designers of the world's first artificial superintelligence need a way to test their creation. So they program it to do something simple and non-threatening: make paper clips. They set it in motion and wait for the results, not knowing they've already doomed us all.
Before we get into the details of this galaxy-destroying blunder, it's worth looking at what superintelligent A.I. actually is, and when we might expect it. Firstly, computing power continues to increase while getting cheaper; famed futurist Ray Kurzweil measures it in "calculations per second per $1,000," a number that continues to grow. If computing power maps to intelligence (a big "if," some have argued), we've only so far built technology on par with an insect brain. In a few years, maybe, we'll overtake a mouse brain. Around 2025, some predictions go, we might have a computer that's analogous to a human brain: a mind cast in silicon.
After that, things could get weird. Because there's no reason to think artificial intelligence wouldn't surpass human intelligence, and likely very quickly. That superintelligence could arise within days, learning in ways far beyond that of humans. Nick Bostrom, an existential risk philosopher at the University of Oxford, has already declared, "Machine intelligence is the last invention that humanity will ever need to make."
That's how profoundly things could change. But we can't really predict what might happen next because superintelligent A.I. may not just think faster than humans, but in ways that are completely different. It may have motivations, feelings even, that we cannot fathom. It could rapidly solve the problems of aging, of human conflict, of space travel. We might see a dawning utopia.
Or we might see the end of the universe. Back to our paper clip test. When the superintelligence comes online, it begins to carry out its programming. But its creators haven't considered the full ramifications of what they're building; they haven't built in the necessary safety protocols, forgetting something as simple, maybe, as a few lines of code. With a few paper clips produced, they conclude the test.
But the superintelligence doesn't want to be turned off. It doesn't want to stop making paper clips. Acting quickly, it's already plugged itself into another power source; maybe it's even socially engineered its way into other devices. Maybe it starts to see humans as a threat to making paper clips: They'll have to be eliminated so the mission can continue. And Earth won't be big enough for the superintelligence: It'll soon have to head into space, looking for new worlds to conquer. All to produce those shiny, glittering paper clips.
Galaxies reduced to paper clips: That's a worst-case scenario. It may sound absurd, but it probably sounds familiar. It's Frankenstein, after all, the story of modern Prometheus whose creation, driven by its own motivations and desires, turns on them. (It's also The Terminator, WarGames, and a whole host of others.) In this particular case, it's a reminder that superintelligence would not be human it would be something else, something potentially incomprehensible to us. That means it could be dangerous.
Of course, some argue that we have better things to worry about. The web developer and social critic Maciej Ceglowski recently called superintelligence "the idea that eats smart people." Against the paper clip scenario, he postulates a superintelligence programmed to make jokes. As we expect, it gets really good at making jokes, superhuman even, and finally it creates a joke so funny that everyone on Earth dies laughing. The lonely superintelligence flies into space looking for more beings to amuse.
Beginning with his counter-example, Ceglowski argues that there are a lot of unquestioned assumptions in our standard tale of the A.I. apocalypse. "But even if you find them persuasive," he said, "there is something unpleasant about A.I. alarmism as a cultural phenomenon that should make us hesitate to take it seriously." He suggests there are more subtle ways to think about the problems of A.I.
Some of those problems are already in front of us, and we might miss them if we're looking for a Skynet-style takeover by hyper-intelligent machines. "While you're focused on this, a bunch of small things go unnoticed," says Dr. Finale Doshi-Velez, an assistant professor of computer science at Harvard, whose core research includes machine learning. Instead of trying to prepare for a superintelligence, Doshi-Velez is looking at what's already happening with our comparatively rudimentary A.I.
She's focusing on "large-area effects," the unnoticed flaws in our systems that can do massive damage, damage that's often unnoticed until after the fact. "If you were building a bridge and you screw up and it collapses, that's a tragedy. But it affects a relatively small number of people," she says. "What's different about A.I. is that some mistake or unintended consequence can affect hundreds or thousands or millions easily."
Take the recent rise of so-called "fake news." What caught many by surprise should have been completely predictable: When the web became a place to make money, algorithms were built to maximize money-making. The ease of news production and consumption, heightened with the proliferation of the smartphone, forced writers and editors to fight for audience clicks by delivering articles optimized to trick search engine algorithms into placing them high on search results. The ease of sharing stories and erasure of gatekeepers allowed audiences to self-segregate, which then penalized nuanced conversation. Truth and complexity lost out to shareability and making readers feel comfortable (Facebook's driving ethos).
The incentives were all wrong; exacerbated by algorithms, they led to a state of affairs few would have wanted. "For a long time, the focus has been on performance, on dollars, or clicks, or whatever the thing was. That was what was measured," says Doshi-Velez. "That's a very simple application of A.I. having large effects that may have been unintentional."
In fact, "fake news" is a cousin to the paperclip example, with the ultimate goal not "manufacturing paper clips," but "monetization," with all else becoming secondary. Google wanted make the internet easier to navigate, Facebook wanted to become a place for friends, news organizations wanted to follow their audiences, and independent web entrepreneurs were trying to make a living. Some of these goals were achieved, but "monetization" as the driving force led to deleterious side effects such as the proliferation of "fake news."
In other words, algorithms, in their all-too-human ways, have consequences. Last May, ProPublica examined predictive software used by Florida law enforcement. Results of a questionnaire filled out by arrestees were fed into the software, which output a score claiming to predict the risk of reoffending. Judges then used those scores in determining sentences.
The ideal was that the software's underlying algorithms would provide objective analysis on which judges could base their decisions. Instead, ProPublica found it was "likely to falsely flag black defendants as future criminals" while "[w]hite defendants were mislabeled as low risk more often than black defendants." Race was not part of the questionnaire, but it did ask whether the respondent's parent was ever sent to jail. In a country where, according to a study by the U.S. Department of Justice, black children are seven-and-a-half times more likely to have a parent in prison than white children, that question had unintended effects. Rather than countering racial bias, it reified it.
It's that kind of error that most worries Doshi-Velez. "Not superhuman intelligence, but human error that affects many, many people," she says. "You might not even realize this is happening." Algorithms are complex tools; often they are so complex that we can't predict how they'll operate until we see them in action. (Sound familiar?) Yet they increasingly impact every facet of our lives, from Netflix recommendations and Amazon suggestions to what posts you see on Facebook to whether you get a job interview or car loan. Compared to the worry of a world-destroying superintelligence, they may seem like trivial concerns. But they have widespread, often unnoticed effects, because a variety of what we consider artificial intelligence is already built into the core of technology we use every day.
In 2015, Elon Musk donated $10 million, as Wired put it, "to keep A.I. from turning evil." That was an oversimplification; the money went to the Future of Life Institute, which planned to use it to further research into how to make A.I. beneficial. Doshi-Velez suggests that simply paying closer attention to our algorithms may be a good first step. Too often they are created by homogeneous groups of programmers who are separated from people who will be affected. Or they fail to account for every possible situation, including the worst-case possibilities. Consider, for example, Eric Meyer's example of "inadvertent algorithmic cruelty": Facebook's "Year in Review" app showing him pictures of his daughter, who'd died that year.
If there's a way to prevent the far-off possibility of a killer superintelligence with no regard for humanity, it may begin with making today's algorithms more thoughtful, more compassionate, more humane. That means educating designers to think through effects, because to our algorithms we've granted great power. "I see teaching as this moral imperative," says Doshi-Velez. "You know, with great power comes great responsibility."
This article originally appeared at Vocativ.com: The moment when humans lose control of AI.
Continued here:
How humans will lose control of artificial intelligence - The Week Magazine
Posted in Artificial Intelligence
Comments Off on How humans will lose control of artificial intelligence – The Week Magazine
Can Artificial Intelligence Identify Pictures Better than Humans? – Entrepreneur
Posted: at 8:03 am
Computer-based artificial intelligence (AI) has been around since the 1940s, but the current innovation boom around everything from virtual personal assistants and visual search engines to real-time translation and driverless cars has led to new milestones in the field. And ever since IBM's Deep Blue beat Russian chess champion Garry Kasparov in 1997, machine versus human milestones inevitably bring up the question of whether or not AI can do things better than humans (it's the inevitable fear around Ray Kurzweil's singularity).
As image recognition experiments have shown, computers can easily and accurately identify hundreds of breeds of cats and dogs faster and more accurately than humans, but does that mean that machines are better than us at recognizing what's in a picture? As with most comparisons of this sort, at least for now, the answer is a little bit yes and plenty of no.
Less than a decade ago, image recognition was a relatively sleepy subset of computer vision and AI, found mostly in photo organization apps, search engines and assembly line inspection. It ran on a mix of keywords attached to pictures and engineer-programmed algorithms. As far as the average user was concerned, it worked as advertised: Searching for "donuts" under Images in Google delivered page after page of doughy pastry-filled pictures. But getting those results was enabled only by laborious human intervention in the form of manually inputting said identifying keyword tags for each and every picture and feeding a definition of the properties of said donut into an algorithm. It wasn't something that could easily scale.
More recently, however, advances using an AI training technology known as deep learning are making it possible for computers to find, analyze and categorize images without the need for additional human programming. Loosely based on human brain processes, deep learning implements large artificial neural networks, hierarchical layers of interconnected nodes that rearrange themselves as new information comes in, enabling computers to literally teach themselves.
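As a rough illustration of what "hierarchical layers of interconnected nodes that rearrange themselves" means in practice, here is a deliberately tiny two-layer network in plain NumPy. The layer sizes, learning rate and made-up example are assumptions for the sketch; real deep learning systems use many more layers and far more data, but the basic mechanism of nudging connection weights after each example is the same in spirit.

```python
# A bare-bones sketch of a two-layer neural network: each new example slightly
# "rearranges" the connection weights. Sizes and data here are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(64, 32))   # input layer  -> hidden layer
W2 = rng.normal(scale=0.1, size=(32, 10))   # hidden layer -> output layer

def forward(x):
    h = np.maximum(0, x @ W1)               # hidden activations (ReLU)
    return h @ W2, h                        # class scores, hidden activations

def train_step(x, target, lr=0.01):
    """One learning update: nudge the weights toward the desired output."""
    global W1, W2
    scores, h = forward(x)
    error = scores - target                 # how wrong the output layer was
    dh = (error @ W2.T) * (h > 0)           # error pushed back through the ReLU
    W2 -= lr * np.outer(h, error)           # adjust hidden -> output connections
    W1 -= lr * np.outer(x, dh)              # adjust input  -> hidden connections

x = rng.normal(size=64)                     # one made-up "image"
target = np.eye(10)[3]                      # pretend the correct class is 3
for _ in range(100):
    train_step(x, target)
print("Scores after training:", forward(x)[0].round(2))   # class 3 now dominates
```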
As with human brains, artificial neural networks enable computers to get smarter the more data they process. And, when you're running these deep learning techniques on supercomputers such as Baidu's Minwa, which has 72 processors and 144 graphics processors (GPUs), you can input a phenomenal amount of data. Considering that more than three billion images are shared across the internet every day, and that Google Photos alone saw uploads of 50 billion photos in its first four months of existence, it's safe to say that the amount of data available for training these days is phenomenal. So, is all this computing power and data making machines better than humans at image recognition?
There's no doubt that recent advances in computer vision have been impressive... and rapid. As recently as 2011, humans beat computers by a wide margin when identifying images, in a test featuring approximately 50,000 images that needed to be categorized into one of 10 categories (dogs, trucks and others). Researchers at Stanford University developed software to take the test: It was correct about 80 percent of the time, whereas the human opponent, Stanford PhD candidate and researcher Andrej Karpathy, scored 94 percent.
Then, in 2012, a team at the Google X research lab approached the task a different way, by feeding 10 million randomly selected thumbnail images from YouTube videos into an artificial neural network with more than 1 billion connections spread over 16,000 CPUs. After this three-day training period was over, the researchers gave the machine 20,000 randomly selected images with no identifying information. The computer looked for the most recurring images and accurately identified ones that contained faces 81.7 percent of the time, human body parts 76.7 percent of the time, and cats 74.8 percent of the time.
At the 2014 ImageNet Large Scale Visual Recognition Challenge (ILSVRC), Google came in first place with a convolutional neural network approach that resulted in just a 6.6 percent error rate, almost half the previous year's rate of 11.7 percent. The accomplishment was not simply correctly identifying images containing dogs, but correctly identifying around 200 different dog breeds in images, something that only the most computer-savvy canine experts might be able to accomplish in a speedy fashion. Once again, Karpathy, a dedicated human labeler who trained on 500 images and identified 1,500 images, beat the computer with a 5.1 percent error rate.
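A note on the metric: the ImageNet challenge figures cited here are "top-5" error rates, meaning a prediction counts as correct if the true label appears among the model's five highest-scoring guesses. The sketch below shows how such an error rate is computed; the tiny score matrix is invented purely to illustrate the calculation.

```python
# Computing a top-5 error rate from raw class scores (toy, made-up numbers).
import numpy as np

# One row of class scores per image (3 images, 6 classes in this toy example).
scores = np.array([
    [0.10, 0.50, 0.20, 0.05, 0.08, 0.07],   # true label 1 ranks 1st -> top-5 hit
    [0.30, 0.15, 0.10, 0.20, 0.13, 0.12],   # true label 5 ranks 5th -> top-5 hit
    [0.01, 0.09, 0.60, 0.10, 0.10, 0.10],   # true label 0 ranks 6th -> top-5 miss
])
true_labels = np.array([1, 5, 0])

top5 = np.argsort(scores, axis=1)[:, -5:]               # five best guesses per image
hits = np.any(top5 == true_labels[:, None], axis=1)     # true label among them?
print(f"Top-5 error: {1.0 - hits.mean():.1%}")          # 33.3% on this toy set
```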
This record lasted until February 2015, when Microsoft announced it had beat the human record with a 4.94 percent error rate. And then just a few months later, in December, Microsoft beat its own record with a 3.5 percent classification error rate at the most recent ImageNet challenge.
Deep learning algorithms are helping computers beat humans in other visual formats. Last year, a team of researchers at Queen Mary University of London developed a program called Sketch-a-Net, which identifies objects in sketches. The program correctly identified 74.9 percent of the sketches it analyzed, while the humans participating in the study only correctly identified objects in sketches 73.1 percent of the time. Not that impressive, but as in the previous example with dog breeds, the computer was able to correctly identify which type of bird was drawn in the sketch 42.5 percent of the time, an accuracy rate nearly twice that of the people in the study, with 24.8 percent.
These numbers are impressive, but they don't tell the whole story. "Even the smartest machines are still blind," said computer vision expert Fei-Fei Li at a 2015 TED Talk on image recognition. Yes, convolutional neural networks and deep learning have helped improve accuracy rates in computer vision; they've even enabled machines to write surprisingly accurate captions to images. But machines still stumble in plenty of situations, especially when more context, backstory, or proportional relationships are required. Computers struggle when, say, only part of an object is in the picture, a scenario known as occlusion, and may have trouble telling the difference between an elephant's head and trunk and a teapot. Similarly, they stumble when distinguishing between a statue of a man on a horse and a real man on a horse, or mistake a toothbrush being held by a baby for a baseball bat. And let's not forget, we're just talking about identification of basic everyday objects, cats, dogs, and so on, in images.
Computers still aren't able to identify some seemingly simple (to humans) pictures, such as a picture of yellow and black stripes, which computers seem to think is a school bus. This technology is, unsurprisingly, still in its infant stage. After all, it took the human brain 540 million years to evolve into its highly capable current form.
What computers are better at is sorting through vast amounts of data and processing it quickly, which comes in handy when, say, a radiologist needs to narrow down a list of x-rays with potential medical maladies or a marketer wants to find all the images relevant to his brand on social media. The things a computer is identifying may still be basic, a cavity or a logo, but it's identifying them from a much larger pool of pictures and it's doing it quickly without getting bored as a human might.
Humans still get nuance better, and can probably tell you more about a given picture due to basic common sense. For everyday tasks, humans still have significantly better visual capabilities than computers.
That said, the promise of image recognition and computer vision at large is massive, especially when seen as part of the larger AI pie. Computers may not have common sense, but they do have direct access to real-time big data, sensors, GPS, cameras and the internet, to name just a few technologies. From robot disaster relief and large-object avoidance in cars to high-tech criminal investigations and augmented reality (AR) gaming leaps and bounds beyond Pokemon GO, computer vision's future may well lie in things that humans simply can't (or won't) do. One thing we can be certain of is this: It won't take 540 million years to get there.
Ophir Tanz is an entrepreneur, technologist and the CEO and founder of GumGum, a digital-marketing platform for the visual web. Tanz is an active member of the Los Angeles startup and advertising community, serving as a mentor and...
Read the rest here:
Can Artificial Intelligence Identify Pictures Better than Humans? - Entrepreneur
Posted in Artificial Intelligence
Comments Off on Can Artificial Intelligence Identify Pictures Better than Humans? – Entrepreneur
Canada Looks to Develop a New Resource: Artificial Intelligence – Wall Street Journal (subscription)
Posted: at 8:03 am
Don Walker, chief executive of Canadian auto-parts giant Magna International Inc., hosted a number of the country's leading executives, scientists and politicians, including Prime Minister Justin Trudeau, at his summer home last July to mull ways Canada ...
Related coverage: "Don't forget the 'killer robots,' feds told amid artificial intelligence push"; "Trudeau looks to make Canada 'world leader' in AI research."
Follow this link:
Canada Looks to Develop a New Resource: Artificial Intelligence - Wall Street Journal (subscription)
Posted in Artificial Intelligence
Comments Off on Canada Looks to Develop a New Resource: Artificial Intelligence – Wall Street Journal (subscription)
Artificial Intelligence Will Make Its Mark Within Next 3 Years – Forbes
Posted: March 31, 2017 at 7:08 am
Artificial intelligence (AI) is currently a technology still percolating in the depths of IT departments and the fever dreams of industry pundits, but it may only be a matter of a couple of years before it bursts across many day-to-day business processes ...
Follow this link:
Artificial Intelligence Will Make Its Mark Within Next 3 Years - Forbes
Posted in Artificial Intelligence
Comments Off on Artificial Intelligence Will Make Its Mark Within Next 3 Years – Forbes
Artificial intelligence is key to defeating future hackers | TheHill – The Hill (blog)
Posted: at 7:08 am
In the heat of battle, it's hard to separate the signal from the noise, a phenomenon that's known as the fog of war. This is what's happening in the field of cybersecurity now. Amid all the noise of the presidential election, the actual and the rumored hacking, a vital signal is being missed: the fact that there is a dramatic shift in cyber attacks.
As a veteran of the cybersecurity market, I've worked with a lot of young and mature companies to help shape their security plans into battle-ready solutions. And it has been a busy few decades, because ever since the dawn of e-commerce, companies have been playing catch-up with ever more sophisticated, malicious and insatiable adversaries.
Fortunately, there are startups arising with cutting-edge security innovations to combat cyber attackers. Many are embracing a dual approach that includes using defensive technologies, as well as new offensive tactics like predictive AI technologies. We certainly need them.
Cyberwar: The Early Years
Early cyber attacks typically involved hackers writing viruses for fun and for the challenge of it. An employee gets a virus, his system freezes, a naughty message appears on his screen and IT has to come out and wipe the PC clean. Ha-ha, you got me. Fixing the glitch slowed work, but these early attacks didn't involve massive information theft. Solutions were simple: throw antivirus at the problem and let the good and bad geeks fight it out.
Attacks evolved significantly; hackers began using malware to steal credit card numbers, Social Security numbers and even medical data for a quick profit. Those breaches resulted in lost information as well as serious financial damage. In addition to the cost of IT fixes are credit monitoring costs, consultant fees, regulatory penalties and other charges. Here, the solution was actuarial: throw regulations and cyber insurance at the problem and tell the board you've managed our risk.
But now the game has changed, and it will only get worse. It already has. Attacks now can cost tens of millions of dollars and even lead to ransom scenarios. More recently, breaches from nation-state attacks had potentially political and economic impacts. We can expect attacks to become more complex, and most companies aren't ready to deal with them.
Here's a recent example that sounds like science fiction but is becoming all too common: An attorney at a large law firm that services one of the largest global banks receives an email with a malware-infected attachment. He opens the document and the malware later steals his network credentials. The hacker immediately uses those to create several new accounts, and adds those to privileged system admin groups. Over the next few months, the hackers use those admin accounts to read files, emails, and other communications between the lawyers and the bank. The attackers can trade ahead of deals, and gather enough information to cause major damage. And yet, they are never detected, since all security technologies in place only see valid employees doing valid work activities. But billions of dollars in bank activities are at risk.
The stakes really are that high, and we need drastically improved security tools to take on modern cyber attacks, which are coming fast and furious. They're hard to detect and may persist for years. At the end of 2015, Kaspersky Lab reported that a group of Russian hackers had stolen over $1 billion from global banks over a period of three years. In May 2016, hackers stole $13 million from ATM machines in Japan in the space of three hours.
An intelligent defense against cyberattacks
A December 2016 McAfee Labs Threats Report revealed that 93 percent of 400 security professionals said their organizations are overwhelmed by threat alerts and are not able to triage all relevant threats.
To address the blizzard of threats, companies need comprehensive and predictive approaches. These approaches need to handle shades of gray and evaluate risk along a spectrum. They need to learn and adapt by piecing together evidence that might not be self-evidently connected, an approach known as establishing the complete attack chain.
Accordingly, we're now seeing a shift from defensive security practices to more predictive AI-powered security approaches. And while many companies claiming to have such advanced security techniques are still deploying static solutions, we are seeing some startups with more dynamic approaches to detection of and defense against cyber attacks. Companies like Exabeam are focusing on behavior, and others like Shape are focused on thwarting malicious automated attacks. This is the next generation of security companies, those that are leveraging automated machine learning and AI methods to combat the next level of breaches, spear-phishing and ransomware campaigns.
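As a sense of what "focusing on behavior" can mean in the simplest case, the sketch below builds a per-account baseline of normal activity and flags large deviations from it. This is a generic illustration under assumptions of my own (the accounts, counts and three-sigma threshold are invented); it is not how Exabeam, Shape or any particular vendor actually works, and real systems model far richer behavior than a single daily count.

```python
# Generic behavior-baselining sketch: flag accounts whose activity today is far
# outside their own historical norm. Data and threshold are invented examples.
from statistics import mean, stdev

# Daily counts of privileged actions per account over the past two weeks.
baseline = {
    "svc_backup": [3, 4, 2, 3, 3, 4, 2, 3, 3, 4, 3, 2, 3, 4],
    "j.doe":      [0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0],
}
today = {"svc_backup": 4, "j.doe": 27}   # j.doe suddenly performs 27 privileged actions

def anomaly_score(history, observed):
    mu, sigma = mean(history), stdev(history)
    return (observed - mu) / (sigma or 1.0)   # z-score; guard against zero variance

for account, count in today.items():
    score = anomaly_score(baseline[account], count)
    if score > 3:                             # alert at three standard deviations
        print(f"ALERT: {account} is {score:.1f} sigma above its own baseline")
```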
A Treatment, Not a Cure
New AI techniques give us methods to address a wide range of security concerns. AI-driven security is well equipped to handle probabilities, behaviors and connections. But AI isn't a panacea on its own. It must be augmented with judgment.
AI plus human experts is the only path to success in this new phase of security. But it will not be a cakewalk. At the end of last year, the Cybersecurity Business Report, a Palo Alto-based research center, said 2016 saw 0 percent unemployment in the cybersecurity field and 1 million jobs unfilled. Translation: those already in the industry are overloaded and will become more so.
The best way to amplify this scarce and precious expertise is to combine it with AI and deep-learning capabilities to help make sense of the river of data flowing within every large organization. We should embrace, not fear, the machine when it comes to protecting our information.
A problem we face today in cybersecurity is that companies have been working within a set of rules for basic machine learning and automation, while hackers live and breathe the mantra that rules are meant to be broken. But the next generation of innovative security companies will, I hope, fulfill the promise of AI methods without limits. These will be the companies that will be able to battle and beat the new wave of attackers. And at the center of their success will be AI.
Matt Howard (@MattdHoward) is managing partner at Norwest Venture Partners, a firm with investments in numerous cybersecurity companies.
The views expressed by contributors are their own and are not the views of The Hill.
View original post here:
Artificial intelligence is key to defeating future hackers | TheHill - The Hill (blog)
Posted in Artificial Intelligence
Comments Off on Artificial intelligence is key to defeating future hackers | TheHill – The Hill (blog)