The Prometheus League
Category Archives: Ai
How AI Will Change the Way We Make Decisions – Harvard Business Review
Posted: July 26, 2017 at 4:18 pm
Executive Summary
Recent advances in AI are best thought of as a drop in the cost of prediction. Prediction is useful because it helps improve decisions. But it isn't the only input into decision-making; the other key input is judgment. Judgment is the process of determining what the reward to a particular action is in a particular environment. In many cases, especially in the near term, humans will be required to exercise this sort of judgment. They'll specialize in weighing the costs and benefits of different decisions, and then that judgment will be combined with machine-generated predictions to make decisions. But couldn't AI calculate costs and benefits itself? Yes, but someone would have had to program the AI as to what the appropriate profit measure is. This highlights a particular form of human judgment that we believe will become both more common and more valuable.
With the recent explosion in AI, there has been the understandable concern about its potential impact on human work. Plenty of people have tried to predict which industries and jobs will be most affected, and which skills will be most in demand. (Should you learn to code? Or will AI replace coders too?)
Rather than trying to predict specifics, we suggest an alternative approach. Economic theory suggests that AI will substantially raise the value of human judgment. People who display good judgment will become more valuable, not less. But to understand what good judgment entails and why it will become more valuable, we have to be precise about what we mean.
Recent advances in AI are best thought of as a drop in the cost of prediction. By prediction, we don't just mean the future: prediction is about using data that you have to generate data that you don't have, often by translating large amounts of data into small, manageable amounts. For example, using images divided into parts to detect whether or not the image contains a human face is a classic prediction problem. Economic theory tells us that as the cost of machine prediction falls, machines will do more and more prediction.
Prediction is useful because it helps improve decisions. But it isn't the only input into decision-making; the other key input is judgment. Consider the example of a credit card network deciding whether or not to approve each attempted transaction. They want to allow legitimate transactions and decline fraud. They use AI to predict whether each attempted transaction is fraudulent. If such predictions were perfect, the network's decision process would be easy: decline if and only if fraud exists.
However, even the best AIs make mistakes, and that is unlikely to change anytime soon. The people who have run the credit card networks know from experience that there is a trade-off between detecting every case of fraud and inconveniencing the user. (Have you ever had a card declined when you tried to use it while traveling?) And since convenience is the whole credit card business, that trade-off is not something to ignore.
This means that to decide whether to approve a transaction, the credit card network has to know the cost of mistakes. How bad would it be to decline a legitimate transaction? How bad would it be to allow a fraudulent transaction?
Someone at the credit card association needs to assess how the entire organization is affected when a legitimate transaction is denied. They need to trade that off against the effects of allowing a transaction that is fraudulent. And that trade-off may be different for high net worth individuals than for casual card users. No AI can make that call. Humans need to do so. This decision is what we call judgment.
Judgment is the process of determining what the reward to a particular action is in a particular environment. Judgment is how we work out the benefits and costs of different decisions in different situations.
Credit card fraud is an easy decision to explain in this regard. Judgment involves determining how much money is lost in a fraudulent transaction, how unhappy a legitimate customer will be when a transaction is declined, as well as the reward for doing the right thing and allowing good transactions and declining bad ones. In many other situations, the trade-offs are more complex, and the payoffs are not straightforward. Humans learn the payoffs to different outcomes by experience, making choices and observing their mistakes.
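To make the distinction concrete, here is a minimal sketch, not taken from the article, of how a machine-generated prediction (a fraud probability) might be combined with human judgment (estimated costs of the two kinds of mistake) into an expected-cost decision rule. The function, parameters, and dollar figures are illustrative assumptions.

```python
def decide(fraud_probability: float,
           cost_false_decline: float,
           cost_fraud_loss: float) -> str:
    """Combine a machine prediction with human judgment about payoffs.

    fraud_probability  -- the AI's prediction: probability the transaction is fraud
    cost_false_decline -- judgment: estimated cost of annoying a legitimate customer
    cost_fraud_loss    -- judgment: estimated cost of letting a fraudulent charge through
    """
    expected_cost_of_approving = fraud_probability * cost_fraud_loss
    expected_cost_of_declining = (1 - fraud_probability) * cost_false_decline
    return "decline" if expected_cost_of_approving > expected_cost_of_declining else "approve"

# Illustrative figures only. The same prediction can lead to different decisions
# once the judgment changes, e.g. for a high net worth customer whose
# inconvenience is judged to be more costly.
print(decide(fraud_probability=0.02, cost_false_decline=15.0, cost_fraud_loss=500.0))  # approve
print(decide(fraud_probability=0.02, cost_false_decline=2.0, cost_fraud_loss=500.0))   # decline
```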
Getting the payoffs right is hard. It requires an understanding of what your organization cares about most, what it benefits from, and what could go wrong.
In many cases, especially in the near term, humans will be required to exercise this sort of judgment. They'll specialize in weighing the costs and benefits of different decisions, and then that judgment will be combined with machine-generated predictions to make decisions.
But couldn't AI calculate costs and benefits itself? In the credit card example, couldn't AI use customer data to consider the trade-off and optimize for profit? Yes, but someone would have had to program the AI as to what the appropriate profit measure is. This highlights a particular form of human judgment that we believe will become both more common and more valuable.
Like people, AIs can also learn from experience. One important technique in AI is reinforcement learning, whereby a computer is trained to take actions that maximize a certain reward function. For instance, DeepMind's AlphaGo was trained this way to maximize its chances of winning the game of Go. This method of learning is often easy to apply to games because the reward can be easily described and programmed, shutting the human out of the loop.
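Since reinforcement learning is a general technique, a small self-contained sketch can make the key point concrete: the reward is something a person writes down, and the agent simply learns whatever that reward encourages. The example below is a toy Q-learning agent on a made-up five-cell game; it illustrates the method in general, not AlphaGo's actual training pipeline.

```python
import random

# Toy game: the agent starts at cell 0 on a line of 5 cells and can step
# left or right; reaching the last cell ends the episode.
N_STATES, ACTIONS = 5, (-1, +1)

def reward(state: int) -> float:
    # The reward function is programmed by a human: +1 for reaching the goal, else 0.
    return 1.0 if state == N_STATES - 1 else 0.0

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for _ in range(200):  # training episodes
    s = 0
    while s != N_STATES - 1:
        if random.random() < epsilon:  # occasionally explore
            a = random.choice(ACTIONS)
        else:                          # otherwise act greedily, breaking ties at random
            a = max(ACTIONS, key=lambda act: (q[(s, act)], random.random()))
        s_next = min(max(s + a, 0), N_STATES - 1)
        # Q-learning update: the agent learns to maximize the programmed reward.
        q[(s, a)] += alpha * (reward(s_next) + gamma * max(q[(s_next, b)] for b in ACTIONS) - q[(s, a)])
        s = s_next

# Learned policy: every non-terminal cell should map to +1 (step toward the goal).
print({s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)})
```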
But games can be cheated. As Wired reports, when AI researchers trained an AI to play the boat racing game CoastRunners, the AI figured out how to maximize its score by going around in circles rather than completing the course as intended. One might consider this a kind of ingenuity, but when it comes to applications beyond games, this sort of ingenuity can lead to perverse outcomes.
The key point from the CoastRunners example is that in most applications, the goal given to the AI differs from the true and difficult-to-measure objective of the organization. As long as that is the case, humans will play a central role in judgment, and therefore in organizational decision-making.
In fact, even if an organization is enabling AI to make certain decisions, getting the payoffs right for the organization as a whole requires an understanding of how the machines make those decisions. What types of prediction mistakes are likely? How might a machine learn the wrong message?
Enter Reward Function Engineering. As AIs serve up better and cheaper predictions, there is a need to think clearly and work out how to best use those predictions. Reward Function Engineering is the job of determining the rewards to various actions, given the predictions made by the AI. Being great at it requires having an understanding of the needs of the organization and the capabilities of the machine. (And it is not the same as putting a human in the loop to help train the AI.)
Sometimes Reward Function Engineering involves programming the rewards in advance of the predictions so that actions can be automated. Self-driving vehicles are an example of such hard-coded rewards. Once the prediction is made, the action is instant. But as the CoastRunners example illustrates, getting the reward right isn't trivial. Reward Function Engineering has to consider the possibility that the AI will over-optimize on one metric of success, and in doing so act in a way that's inconsistent with the organization's broader goals.
At other times, such hard-coding of the rewards is too difficult. There may be so many possible predictions that it is too costly for anyone to judge all the possible payoffs in advance. Instead, some human needs to wait for the prediction to arrive and then assess the payoff. This is closer to how most decision-making works today, whether or not it includes machine-generated predictions. Most of us already do some Reward Function Engineering, but for humans, not machines. Parents teach their children values. Mentors teach new workers how the system operates. Managers give objectives to their staff, and then tweak them to get better performance. Every day, we make decisions and judge the rewards. But when we do this for humans, prediction and judgment are grouped together, and the distinct role of Reward Function Engineering has not needed to be made explicit.
As machines get better at prediction, the distinct value of Reward Function Engineering will increase as the application of human judgment becomes central.
Overall, will machine prediction decrease or increase the amount of work available for humans in decision-making? It is too early to tell. On the one hand, machine prediction will substitute for human prediction in decision-making. On the other hand, machine prediction is a complement to human judgment. And cheaper prediction will generate more demand for decision-making, so there will be more opportunities to exercise human judgment. So, although it is too early to speculate on the overall impact on jobs, there is little doubt that we will soon be witness to a great flourishing of demand for human judgment in the form of Reward Function Engineering.
Read the original post:
How AI Will Change the Way We Make Decisions - Harvard Business Review
Xiaomi’s take on the Amazon Echo smart speaker costs less than $50 – TechCrunch
Posted: at 4:18 pm
Hot on the heels of reports that Facebook is developing its own take on Amazon Echo, China's Xiaomi has joined the tech company masses by jumping into the increasingly crowded smart speaker space.
The Mi AI Speaker is Xiaomi's first take at rivaling the Echo, which has already inspired a product from Alibaba in China and counts offerings from Google and Apple among its competitors.
Building on the voice-controlled speaker that Xiaomi shipped in December, the new device is powered by artificial intelligence, the company said, which has just been added to Xiaomi's MIUI operating system, a variant of Android. The speaker can be used as a control for Xiaomi products and also for over 30 smart products from Xiaomi's partners. Xiaomi touted the content available for the speaker, which includes music, audio books, kids' stories and radio.
In terms of audio, the device uses a setup of six microphones for 360-degree sound broadcast.
The price will be 299 RMB (around $45) when it goes on sale in August, but the usual caveat applies. As is often the case with Xiaomi products, the initial release is confirmed for China, but we don't have word of international availability.
Early bird users in China can pick up a Mi AI Speaker for almost free (just 1 RMB) in a working-beta test that Xiaomi says will improve the AI systems and "help train [it] to be even more intelligent in the early stage."
The speaker was unveiled at an event in Beijing today where Xiaomi took the wraps off MIUI 9, which includes a bevy of AI-powered features such as a digital assistant and quick app launch capabilities.
The company also launched the Mi 5X smartphone, a 5.5-inch device that ships with MIUI 9 and features a dual rear camera. The phone is priced from 1,499 RMB, or $220.
See the original post here:
Xiaomi's take on the Amazon Echo smart speaker costs less than $50 - TechCrunch
Facebook is hiring a (human) AI Editor | TechCrunch – TechCrunch
Posted: at 4:18 pm
Human: Oh sweet bot, tell us a story! A nice story! About a very wise human who worked his whole life to save everybody in the world from having to spend time manually tagging their friends in digital photos and made a magic machine that did it for them instead!
Bot: That's not really a very nice story when you think about it.
Human: Well, tell us about the wise human who thought no-one should ever feel forgotten on their birthday so he made a clever algorithm that always knew to remind the forgetful humans to write happy messages so their friends would never feel sad. He even thought that in future the clever algorithm could suggest what message to write so humans wouldn't even have to think of something nice to tell their friends!
Bot: I feel quite sad after reading that.
Human: And he made another magical algorithm that reminds people of Special Moments in their life even years and years afterwards, in case they've forgotten that holiday they went on with their ex eight years ago.
Bot: You do realize some people voluntarily medicate themselves with alcohol *in order* to forget???
Human: But the wise human also wanted to make sure all humans in the world always felt there was something they needed to read and so he made a special series of algorithms that watched very closely what each human read and looked at and liked and clicked on in order to order the information they saw in such a way that a person never felt they had reached the end of all the familiar things they could click on and could just keep clicking the whole day and night and be reading all the things that felt so very familiar to them so they always felt the same every day and felt they were surrounded by people who felt exactly like them and could just keep on keeping on right as they were each and every day.
Bot: That's confusing.
Human: And the great human's algorithms became so good at ordering the information which each human wanted to read that other mercenary humans came to realize they could make lots of money by writing fairy stories and feeding them into the machine, like how politicians ate little children for breakfast and wore devil's horns on Sundays.
Bot: Okay, you're scaring me now
Human: And in the latter years the great human realized it was better to replace all the human writers he had employed to help train the machine how to intelligently order information for humans because it was shown that humans could not be trusted not to be biased.
Bot: Um
Human: After all, the great human had proven years ago that his great machine was capable of manipulating the emotions of the humans that used it. All he needed to do was tweak the algorithmic recipe that determined what each human saw and he could make a person feel great joy or cast them down into a deep pit of despair.
Bot: Help.
Human: The problem was other humans started to notice the machine's great power, and became jealous of the great and clever human who wielded this power, and dark forces started to move against the great man and his machine.
Bot: Are you talking about regulators?
Human: There were even calls for the man to take editorial responsibility for the output of the machine. The man tried to tell the silly humans that a machine can't be an editor! Only a human can do that! The machine was just a machine! Even if nearly two billion humans were reading what the machine was ordering them to read every single month.
But it was no good. The great human finally realized the machine's power was now so great there was no hiding it. So he took up his pen and started writing open letters about the Great Power and Potential of the machine. And all the Good it could do Humanity. All the while telling himself that only when humans truly learned to love the machine would they finally be free to just be themselves.
Humans had to let themselves subconsciously be shown the path of what to click and what to like and who to be friends with. Only then would they be free of the pain and suffering of having nothing else to click on. And only his great all-seeing algorithm could show them the way, surreptitiously, to that true happiness.
It wasn't something that regulators were capable of understanding. It required, he realized, real faith in the algorithm.
Bot: I've heard this story before, frankly, and I know where it ends.
Human: But even the great human knew the limits of his own creation. And selling positive stories about the machine's powers was definitely not a job for the machine. So he fired off another email to his subordinates, ordering the (still) human-staffed PR department to add one more human head to its tally, with a special focus on the algorithms powering the machine (thinking, as he did so, multiple steps ahead to the great day when such a ridiculous job would no longer be necessary).
Because everyone would love the machine as much as he did.
Bot: Oh I seeeee! Job title: AI Editor. Hmm. "Develop and execute on editorial strategy and campaigns focused on advancements in AI being driven by Facebook." Minimum qualifications: "Bachelor's degree in English, Journalism, Communications, or related field" well, chatbots are related to language so I reckon I can make that fly. What else? "8+ years professional communications experience: journalism, agency or in-house." Well, I'll need to ingest a media law course or two but I reckon I'll challenge myself to apply.
In truth I've done worse jobs. An AI bot's gotta do what an AI bot's gotta do, right? Just don't tell an algorithm to be accountable. I've done my time learning. If there's a problem it's not me, it's the data, okay? Okay?
Excerpt from:
Facebook is hiring a (human) AI Editor | TechCrunch - TechCrunch
AI Grant aims to fund the unfundable to advance AI and solve hard problems – TechCrunch
Posted: at 1:19 am
Artificial intelligence-focused investment funds are a dime a dozen these days. Everyone knows there's money to be made from AI, but to capture value, good VCs know they need to back products and not technologies. This has left a bit of a void in the space: research occurs within research institutions and large tech companies, commercialization occurs within verticalized startups, and there isn't much left for the DIY AI enthusiast. AI Grant, created by Nat Friedman and Daniel Gross, aims to bankroll science projects for the heck of it, giving untraditional candidates a shot at solving big problems.
Gross, a partner at Y Combinator, and Friedman, a founder who grew Xamarin to acquisition by Microsoft, started working on AI Grant back in April. AI Grant issues no-strings-attached grants to people passionate about interesting AI problems. The more formalized version launching today brings a slate of corporate partners and a more structured application review process.
Anyone, regardless of background, can submit an application for a grant. The application is online and consists of questions about background and prior projects in addition to basic information about what the money will be used for and what the initial steps will be for the project. Applicants are asked to connect their GitHub, LinkedIn, Facebook and Twitter accounts.
Gross told me in an interview that the goal is to build profiles of non-traditional machine learning engineers. Eventually, the data collected from the grant program could allow the two to play a bit of machine learning moneyball: valuing machine learning engineers without traditional metrics (like having a PhD from Stanford). You can imagine how all the social data could even help build a model for ideal grant recipients in the future.
The long-term goal is to create a decentralized AI research lab: think DeepMind, but run through Slack and full of engineers that don't cost $300,000 a pop. One day, the MacArthur genius grant-inspired program could serve other industries outside of AI, offering a playground of sorts for the obsessed to build, uninhibited.
The entire AI Grant project reminds me of a cross between a Thiel Fellowship and a Kaggle competition. The former is a program to give smart college dropouts money and freedom to tinker, and the latter an innovative platform for evaluating data scientists through competition. Neither strives to advance the field in the way the AI Grant program does, but you can see the ideological similarity around democratizing innovation.
Some of the early proposals to receive the AI Grant include:
Charles River Ventures (CRV) is providing the $2,500 grants that will be handed out to the next 20 fellows. In addition, Google has signed on to provide $20,000 in cloud computing credits to each winner, CrowdFlower is offering $18,000 in platform credit with $5,000 in human labeling credits, Scale is giving $1,000 in human labeling credit per winner and Floyd will give 250 Tesla K80 GPU hours to each winner.
During the first selection of grant winners, Floodgate awarded $5,000 checks. The program launching today will award $2,500 checks. Gross told me that this change was intentional: the initial check size was too big. The plan is to add additional flexibility in the future to allow applicants to make a case for how much money they actually need.
You can check out the application here and give it a go. Applications will be taken until August 25th. Final selection of fellows will occur on September 24th.
Excerpt from:
AI Grant aims to fund the unfundable to advance AI and solve hard problems - TechCrunch
China is using predictive AI to stop crimes before they happen – The Daily Dot
Posted: at 1:19 am
China is using artificial intelligence to predict crime before it happens, according to a Financial Times report. No, there won't be any psychic precogs. Instead, China will use facial recognition technology and predictive analytics to warn police of potential criminals.
Facial recognition company Cloud Walk is spearheading the effort using a system that tracks people's movements and behaviors to assess how likely they are to commit a crime. For example, the system may see someone visit a weapon store on a regular basis and conclude that they are more likely to act out. The company is currently trialing the software, which would automatically notify the police if it considers the odds of someone committing an offense to be dangerously high.
"The police are using a big-data rating system to rate highly suspicious groups of people based on where they go and what they do," a Cloud Walk spokesperson told FT. That rating increases when someone frequently visits transport hubs and goes to suspicious places like a knife store.
The company's invasive software is already integrated into police databases in more than 50 cities and provinces. Those databases are filled with personal information on millions of Chinese citizens, gathered for years by the surveillance state's government. New technologies have only made it easier for the government to track the activities of its citizens. The facial recognition software, combined with gait analysis and surveillance footage, is reportedly able to recognize people even if they are found in a different spot wearing different clothes than when they were first seen.
"We can use re-ID to find people who look suspicious by walking back and forth in the same area, or who are wearing masks," Leng Biao, professor of bodily recognition at the Beijing University of Aeronautics and Astronautics, told the Financial Times. "With re-ID, it's also possible to reassemble someone's trail across a large area."
The systems aren't just being used to prevent potentially fatal offenses. As Mashable points out, petty crimes like jaywalking or stealing toilet paper are also being monitored. The AI is also being used with crowd analysis to identify suspicious behavior in densely populated areas.
Chinese law does not currently allow charges to be made based on predictions if someone did not commit a crime.
Several other countries, including the United States, are also trying to predict crime using data analytics. In Chicago, police are using a formula based on record combinations, gang affiliations, and other bits of information to figure out who is likely to shoot someone else. And a report from the Washington Post introduced the country to PredPol, a controversial software used to predict crimes in 20 of the 50 largest police forces in the U.S.
H/T Mashable
Follow this link:
China is using predictive AI to stop crimes before they happen - The Daily Dot
Elon Musk Warns U.S. Governors That AI Poses An "Existential Risk … – Big Think
Posted: at 1:19 am
Elon Musk has warned of the threats posed by advancements in artificial intelligence on numerous occasions. And in a July 15th meeting of the bipartisan National Governors Association in Rhode Island, he tried to educate the nation's governors on what he sees as a looming existential risk to humanity.
In an interview with Governor Brian Sandoval of Nevada, Musk said that soon "robots will be able to do everything better than us," leading to a lot of job disruption. Indeed, AI-driven automation has been projected to take over up to half of all jobs, starting in the near future.
But Musk is not just worried about job loss for a large part of the population. He sees a bigger issue, saying that he has "exposure to the most cutting edge AI, and I think people should be really concerned about it." It will hit us one day that AI has a much darker potential presence in our lives, but "until people see robots going down the street killing people, they don't know how to react because it seems so ethereal," suggests Musk.
What can we do about this? As it was the governors conference, Musk proposes thinking about regulations.
"AI is a rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it is too late," says Musk. [48:55]
He says the usual regulatory process has worked well enough for things that did not present a fundamental existential risk to human civilization, which is how he views AI. Car accidents, faulty drugs, airplane crashes, and bad food may all harm humans to varying degrees, but they do not present a danger to all of us as a whole.
In perhaps an unlikely defense of government institutions, Musk sees agencies like the EPA and FAA as having necessary regulatory functions. Even the most libertarian, free-market people would be unwilling to get rid of the FAA for fear that a plane manufacturer might feel like cutting corners without supervision, thinks Musk. He also points out that he's against overregulation and finds it irksome, but with AI, he thinks we've got to get on that, especially as the race to create AI is heating up between a number of companies.
How genuine are Musk's concerns? Some have dismissed them as part of a genius marketing strategy, but stories about Musk say he talks about AI risks even in private. Along with Stephen Hawking, he seems genuinely worried about a future where artificial intelligence is rampant.
Here's a compilation of Musk's comments on AI:
If you want to watch the full conference, including Musk addressing a multitude of other topics, check it out here:
Follow this link:
Elon Musk Warns U.S. Governors That AI Poses An "Existential Risk ... - Big Think
DeepMind researchers create AI with an ‘imagination’ – Engadget
Posted: at 1:19 am
Other programs have been able to work in planning abilities, but only within limited environments. AlphaGo, for example, can do this well, as the researchers note in the blog post; however, they add that "environments like Go are 'perfect' - they have clearly defined rules which allow outcomes to be predicted very accurately in almost every circumstance." Facebook also created a bot that could reason through dialogue before engaging in conversation, but again, that was in a fairly restricted environment. "But the real world is complex, rules are not so clearly defined and unpredictable problems often arise. Even for the most intelligent agents, imagining in these complex environments is a long and costly process," said the blog post.
DeepMind researchers created what they're calling "imagination-augmented agents," or I2As, that have a neural network trained to extract any information from its environment that could be useful in making decisions later on. These agents can create, evaluate and follow through on plans. To construct and evaluate future plans, the I2As "imagine" actions and outcomes in sequence before deciding which plan to execute. They can also choose how they want to imagine, options for which include trying out different possible actions separately or chaining actions together in a sequence. A third option allows the I2As to create an "imagination tree," which lets the agent choose to continue imagining from any imaginary situation created since the last action it took. And an imagined action can be proposed from any of those previously imagined states, thus creating a tree.
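For a rough feel of what planning by imagination means, here is a heavily simplified sketch. It simulates candidate action sequences with an internal model and scores them before acting, which is the general idea; it is not DeepMind's actual I2A architecture, which uses learned neural environment models rather than the hand-written toy model assumed below.

```python
from itertools import product

def imagine(model, state, actions):
    """Roll a candidate action sequence forward with an internal model of the
    environment, returning the imagined final state and accumulated score."""
    score = 0.0
    for action in actions:
        state, r = model(state, action)  # the model predicts the next state and reward
        score += r
    return state, score

def plan(model, state, action_space, horizon=3):
    """Enumerate imagined futures and return the first action of the best one."""
    best_score, best_first_action = float("-inf"), None
    for sequence in product(action_space, repeat=horizon):
        _, score = imagine(model, state, sequence)
        if score > best_score:
            best_score, best_first_action = score, sequence[0]
    return best_first_action

# Toy internal model of a one-dimensional world: the agent is rewarded for being
# at position 2 and pays a small cost for every step it imagines taking.
def toy_model(position, action):
    new_position = position + action
    return new_position, (1.0 if new_position == 2 else 0.0) - 0.1

print(plan(toy_model, state=0, action_space=(-1, 0, +1)))  # expected: +1 (step toward position 2)
```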
The researchers tested the I2As on the puzzle game Sokoban and a spaceship navigation game, both of which require planning and reasoning. You can watch the agent playing Sokoban in the video below. For both tasks, the I2As performed better than agents without future reasoning abilities, were able to learn with less experience and were able to handle imperfect environments.
DeepMind AI has been taught how to navigate a parkour course and recall past knowledge, and researchers have used it to explore how AI agents might cooperate or conflict with each other. When it comes to planning ability and future reasoning, there's still a lot of work to be done, but this first look is a promising step towards imaginative AI.
Read more:
DeepMind researchers create AI with an 'imagination' - Engadget
Qualcomm opens up its AI optimization software, says dedicated mobile chips are coming – The Verge
Posted: July 25, 2017 at 12:16 pm
In the race to get AI working faster on your smartphone, companies are trying all sorts of things. Some, like Microsoft and ARM, are designing new chips that are better suited to run neural networks. Others, like Facebook and Google, are working to reduce the computational demands of AI itself. But for chipmaker Qualcomm, whose processors account for 40 percent of the mobile market, the current plan is simpler: adapt the silicon that's already in place.
To this end the company has developed what it calls its Neural Processing Engine. This is a software development kit (or SDK) that helps developers optimize their apps to run AI applications on Qualcomm's Snapdragon 600 and 800 series processors. That means that if you're building an app that uses AI for, say, image recognition, you can integrate Qualcomm's SDK and it will run faster on phones with compatible processors.
Qualcomm first announced the Neural Processing Engine a year ago as part of its Zeroth platform (which has since been killed off as a brand). From last September it's been working with a few partners on developing the SDK, and today it's opening it up to be used by all.
"Any developer, big or small, that has already invested in deep learning, meaning they have access to data and trained AI models, they are the target audience," Gary Brotman, Qualcomm's head of AI and machine learning, told The Verge. "It's simple to use. We abstract everything under the hood so you don't have to get your hands dirty."
The company says one of the first companies to integrate its SDK is Facebook, which is currently using it to speed up the augmented reality filters in its mobile app. By using the Neural Processing Engine, says Qualcomm, Facebook's filters load five times faster compared to a generic CPU implementation.
How exactly developers will use the SDK will vary from job to job, but the basic task of the software is to allocate tasks to different parts of Qualcomm's Snapdragon chipset. Depending on whether developers want to optimize for battery life or processing speed, for example, they can draw on compute resources from different parts of the chip, e.g. the CPU, GPU, or DSP. "It allows you to choose your core of choice relative to the power performance profile you want for your user," explains Brotman.
The SDK works with some of the most popular frameworks for developing AI systems, including Caffe, Caffe2, and Googles TensorFlow. Qualcomm says its designed not just to optimize AI on mobile devices, but also in cars, drones, VR headsets, and smart home products.
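As a rough illustration of the pattern described above, choosing a compute target according to the power/performance profile a developer wants, here is a hypothetical sketch. The class, function, and preference names are invented for illustration and are not the actual Neural Processing Engine API.

```python
from enum import Enum

class Target(Enum):
    # Hypothetical labels for the compute cores mentioned in the article.
    CPU = "cpu"  # most broadly compatible
    GPU = "gpu"  # higher throughput for parallel workloads
    DSP = "dsp"  # typically the most power-efficient option

def choose_target(optimize_for: str) -> Target:
    """Hypothetical policy mapping a developer preference to a compute core.
    The real SDK exposes its own runtime-selection mechanism; this only sketches the idea."""
    if optimize_for == "battery":
        return Target.DSP
    if optimize_for == "speed":
        return Target.GPU
    return Target.CPU

# e.g. an image-recognition feature that should not drain the battery:
print(choose_target("battery"))  # Target.DSP
```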
But deploying frameworks that adapt existing silicon is only the beginning. "What we're seeing is a tidal wave of AI workloads that are creating more demand for compute," says Brotman. To meet this demand, companies are working on entirely new architectural designs for AI-optimized chips. Microsoft, for example, is building a custom machine learning processor for the Hololens 2, while British chipmaker Graphcore recently raised $30 million to build its own Intelligence Processing Units for mobile devices.
For Qualcomm, this switch is further down the line, but it's definitely coming. "When we're baking something into silicon, that's a very deliberate bet for us, and it doesn't come easy," says Brotman. "Compute's compute, and if we can optimize now what we've already got in our portfolio then we're doing our job well. Longer term, though, is there going to be a need for dedicated neural computing? I think that's going to be the case, and the question is just, when do we place that bet."
Here is the original post:
Qualcomm opens up its AI optimization software, says dedicated mobile chips are coming - The Verge
HubSpot acquires Kemvi to bring more AI into its sales and marketing platform – TechCrunch
Posted: at 12:16 pm
HubSpot is announcing that it has acquired Kemvi, a startup applying artificial intelligence and machine learning to help sales teams.
A few months ago, Kemvi launched DeepGraph, a product that analyzes public data so that salespeople can identify the best time (say, after a job change or the publication of an article) to reach out to potential customers. It also proactively reaches out to verify leads.
"Our vision has been to empower sales and marketing professionals by building technology that can extract information from text about what's happening in the world," said Kemvi founder and CEO Vedant Misra.
And from the HubSpot perspective, Chief Strategy Officer Brad Coffey said the company had been looking for new ways to bring AI technology into its platform. He acknowledged that AI and machine learning are buzzwords that get thrown around a lot right now, but he found Kemvi particularly appealing because it addressed a real need among salespeople.
"What we want to do is focus on delivering tangible value to our customers," Coffey said. "That's what they're here for; they're trying to understand the right way to grow their business and reach their customer. It's not that we want to invest in machine learning and AI for the academic interest of it."
The two-person Kemvi team, including Misra, will be joining HubSpot to work on bringing the startup's technology into the HubSpot platform. Misra also said there's a transition plan for current Kemvi/DeepGraph customers: "I think they'll be excited for what we're working on at HubSpot."
The financial terms of the acquisition were not disclosed. Kemvi previously raised $1 million in funding from Seabed VC, Neotribe Ventures, Kepha Partners and others.
Read more here:
HubSpot acquires Kemvi to bring more AI into its sales and marketing platform - TechCrunch