The Prometheus League
Breaking News and Updates
Category Archives: Artificial Intelligence
I Built an Artificial-Intelligence System for Investing — and It Showed How Smart Warren Buffett Is – Motley Fool
Posted: August 20, 2017 at 6:16 pm
I was curious.
The idea of driverless cars has been, and still is, fascinating to me. And so is the technology that makes driverless cars possible -- artificial intelligence (AI). Like many, I grew up enjoying books, movies, and TV shows that featured machines that could think in ways similar to humans. But what was once science fiction has now become reality.
A couple of years ago, I began a quest to really understand how AI works. I already knew the general concepts, but I wanted to delve into the nitty-gritty of one of the most revolutionary technologies of all time. So I read everything I could get my hands on, from fairly high-level books to textbooks to websites geared toward AI developers.
Along the way, I decided to build my own AI system. One thing even more interesting to me than AI is investing, so I decided to combine the two and develop an AI system that could make investment recommendations. That system is now up and running. And it told me just how smart Warren Buffett really is.
AI includes quite a few approaches. One that especially intrigued me was artificial neural networks. The idea for artificial neural networks originated back in the 1940s, when a neurophysiologist and a mathematician teamed up to write a paper about how neurons in the human brain might work. Based on their research, they built a simple neural network using electrical circuits.
Fast-forward to today. Artificial neural networks are used in many AI applications. Facebook (NASDAQ:FB), for example, uses neural networks to recognize the faces of people in photos and to decide which advertisements to display to which users. Apple (NASDAQ:AAPL) uses neural networks to enable Siri to recognize what people ask and to respond to questions.
Artificial neural networks work in a way similar to your neurons. Each neuron is connected to multiple other neurons. When there is input (for example, a bee sting), the neuron transmits a signal to the neurons to which it's linked. In artificial neural networks, though, the inputs are data -- like images and speech. The artificial neural network learns when it gets things wrong, self-adjusts, and gets better at recognition the more data it handles.
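The learn-and-self-adjust loop described here can be sketched with a single artificial neuron (a perceptron). This is a minimal illustration, not the author's system; the AND task, learning rate, and epoch count are all invented for the example:

```python
# One artificial neuron: predict, compare with the known answer, and nudge
# the weights in whatever direction reduces the error.

def train_neuron(samples, epochs=20, lr=0.1):
    """samples: list of (inputs, target) pairs with 0/1 targets."""
    n_inputs = len(samples[0][0])
    weights = [0.0] * n_inputs
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            activation = bias + sum(w * x for w, x in zip(weights, inputs))
            prediction = 1 if activation > 0 else 0
            error = target - prediction          # nonzero only when wrong
            bias += lr * error                   # self-adjust on mistakes
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
    return weights, bias

# Learn the logical AND function from labelled examples.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train_neuron(data)

def predict(inputs):
    return 1 if bias + sum(w * x for w, x in zip(weights, inputs)) > 0 else 0
```

Real networks stack thousands of such units and use gradient-based updates, but the learn-from-mistakes principle is the same.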
My artificial neural network was child's play compared to what Apple and Facebook use. I created a relatively simple network that received financial input. This input included price, earnings, and valuation history for the S&P 500 index. I also threw consumer price index (CPI) data, prime lending rates, three-month Treasury bill rates, industrial productivity index data, unemployment rates, and other financial data into the mix.
The kind of artificial neural network I built used what's called "supervised learning," where the AI system is trained using a lot of data for which the desired outputs are known. I trained my system using over 50 years of data, going back to the 1940s. I then tested it using data from 2000 through today.
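The core of that supervised setup is keeping the training period separate from the later test period. A minimal sketch of such a split by date; the field names and synthetic values are invented, not the article's actual data:

```python
# Split historical rows into a training period and a later, held-out test
# period, as in the supervised-learning setup described above.

def split_by_year(rows, cutoff_year):
    train = [r for r in rows if r["year"] < cutoff_year]
    test = [r for r in rows if r["year"] >= cutoff_year]
    return train, test

# Synthetic stand-in for roughly 70 years of financial data.
rows = [{"year": y, "sp500_return": 0.01 * (y % 5)} for y in range(1945, 2018)]
train, test = split_by_year(rows, 2000)   # train on 1945-1999, test on 2000-2017
```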
What I wanted the artificial neural network to determine was whether a person should be invested in the S&P 500 or in cash on a month-by-month basis. After a few stumbles along the way, I finally received an answer from the AI system. That answer was: Always be invested in stocks.
I ran that system all kinds of ways. I pared down the inputs. I changed out some data for other data. I experimented with several variables that the AI experts recommend tweaking. And the answer always came back the same.
It occurred to me that my AI system was basically saying to do what legendary investor Warren Buffett wrote in his letter to Berkshire Hathaway (NYSE:BRK-A) (NYSE:BRK-B) shareholders in 2014. He related the instructions in his will for the trustee of his estate to follow upon his death: Invest most of the money in an S&P 500 index fund and let it ride.
Here's the really interesting part. I examined in more detail the recommendations from the artificial neural network. The system had more confidence in being in stocks when the market was going down and less confidence when the market was going up. That's basically what Buffett had in mind when he said to "be fearful when others are greedy and greedy when others are fearful." I realized that I had created a "Buffett-bot"!
The more you think about it, though, the more my AI system -- and Warren Buffett -- makes sense. Historically, the stock market has risen a lot more months than it's fallen. Every time the market has fallen, it's come roaring back. There's every reason in the world to be confident when the market is down, because better days will surely be ahead. That's been Buffett's philosophy his entire career.
Of course, Buffett hasn't followed the advice that he is leaving for his heirs. Instead of parking his money in an S&P 500 index fund, he has used Berkshire Hathaway as a vehicle to build his own investment fund of sorts. If you pick the right stocks, your success will be even greater than going only with the S&P 500. Buffett's track record proves the point.
I haven't asked my AI system yet, by the way, which stocks it would recommend. My hunch is that, if it's as smart as I think it is, it would come up with suggestions pretty close to the stock picks made by The Motley Fool's investing newsletters, which have trounced the S&P 500's performance. (For what it's worth, The Motley Fool long ago recommended several stocks of AI leaders, including both Apple and Facebook, as well as Buffett's Berkshire Hathaway.)
What's the key takeaway from my experiment with AI? Invest in stocks, stay invested in stocks, and buy even more when others are too afraid to do so. The concept applies to the S&P 500 or solid individual stocks like The Motley Fool recommends. That's an intelligent approach -- whether the intelligence is artificial or not.
Keith Speights owns shares of Apple and Facebook. The Motley Fool owns shares of and recommends Apple, Berkshire Hathaway (B shares), and Facebook. The Motley Fool has a disclosure policy.
Posted in Artificial Intelligence
America Can’t Afford to Lose the Artificial Intelligence War | The … – The National Interest Online
Posted: at 6:16 pm
Today, the question of artificial intelligence (AI) and its role in future warfare is becoming far more salient and dramatic than ever before. Rapid progress in driverless cars in the civilian economy has helped us all see what may become possible in the realm of conflict. All of a sudden, it seems, terminators are no longer the stuff of exotic and entertaining science-fiction movies, but a real possibility in the minds of some. Innovator Elon Musk warns that we need to start thinking about how to regulate AI before it destroys most human jobs and raises the risk of war.
It is good that we start to think this way. Policy schools need to start making AI a central part of their curriculums; ethicists and others need to debate the pros and cons of various hypothetical inventions before the hypothetical becomes real; military establishments need to develop innovation strategies that wrestle with the subject. However, we do not believe that AI can or should be stopped dead in its tracks now; for the next stage of progress, at least, the United States must rededicate itself to being the first in this field.
First, a bit of perspective. AI is of course not entirely new. Remotely piloted vehicles may not really qualify -- after all, they are humanly, if remotely, piloted. But cruise missiles already fly to an aimpoint and detonate their warheads automatically. So would nuclear warheads on ballistic missiles, if, God forbid, nuclear-tipped ICBMs or SLBMs were ever launched in combat. Semi-autonomous systems are already in use on the battlefield, like the U.S. Navy Phalanx Close-In Weapon System, which is capable of "autonomously performing its own search, detect, evaluation, track, engage, and kill assessment functions," according to the official Defense Department description, along with various other fire-and-forget missile systems.
But what is coming are technologies that can learn on the job -- not simply follow prepared plans or detailed algorithms for detecting targets, but develop their own information and their own guidelines for action based on conditions they encounter that were not specifically foreseeable.
A case in point is what our colleague at Brookings, retired Gen. John Allen, calls "hyperwar." He develops the idea in a new article in the journal Proceedings, coauthored with Amir Husain. They imagine swarms of self-propelled munitions that, in attacking a given target, deduce patterns of behavior of the target's defenses and find ways to circumvent them, aware all along of the capabilities and coordinates of their teammates in the attack (the other self-propelled munitions). This is indeed about the place where the word "robotics" seems no longer to do justice to what is happening, since that term implies a largely prescripted process or series of actions. What happens in hyperwar is not only fundamentally adaptive, but also so fast that it far supersedes what could be accomplished by any weapons system with humans in the loop. Other authors, such as former Brookings scholar Peter Singer, have written about related technologies in a partly fictional sense. Now, Allen and Husain are not just seeing into the future, but laying out a near-term agenda for defense innovation.
The United States needs to move expeditiously down this path. People have reasons to fear fully autonomous weaponry, but if a Terminator-like entity is what they are thinking of, their worries are premature. That software technology is still decades away, at the earliest, along with the required hardware. However, what will be available sooner is technology that will be able to decide what or who is a target -- based on the specific rules laid out by the programmer of the software, which could be highly conservative and restrictive -- and fire upon that target without any human input.
To see why outright bans on AI activities would not make sense, consider a simple analogy. Despite many states having signed the Non-Proliferation Treaty, which is intended to prevent the spread of nuclear weapons, the treaty has not prevented North Korea from building a nuclear arsenal. But at least we have our own nuclear arsenal with which we can attempt to deter other such countries, a tactic that has been generally successful to date. A preemptive ban on AI development would not be in the United States' best interest because non-state actors and noncompliant states could still develop it, leaving the United States and its allies behind. The ban would not be verifiable, and it could therefore amount to unilateral disarmament. If Western countries decided to ban fully autonomous weaponry and a North Korea fielded it in battle, it would create a highly fraught and dangerous situation.
To be sure, we need the debate about AI's longer-term future, and we need it now. But we also need the next generation of autonomous systems -- and America has a strong interest in getting them first.
Michael O'Hanlon is a senior fellow at the Brookings Institution. Robert Karlen is a student at the University of Washington and an intern in the Center for Twenty-First Century Security and Intelligence at the Brookings Institution.
Artificial intelligence is coming to medicine – don't be afraid – STAT
Posted: at 6:16 pm
Automation could replace one-third of U.S. jobs within 15 years. Oxford and Yale experts recently predicted that artificial intelligence could outperform humans in a variety of tasks by 2045, ranging from writing novels to performing surgery and driving vehicles. A little human rage would be a natural response to such unsettling news.
Artificial intelligence (AI) is bringing us to the precipice of an enormous societal shift. We are collectively worrying about what it will mean for people. As a doctor, I'm naturally drawn to thinking about AI's impact on the practice of medicine. I've decided to welcome the coming revolution, believing that it offers a wonderful opportunity for increases in productivity that will transform health care to benefit everyone.
Groundbreaking AI models have bested humans in complex reasoning games, like the recent victory of Google's AlphaGo AI over the human Go champion. What does that mean for medicine?
To date, most AI solutions have solved minor human issues -- playing a game or helping order a box of detergent. The innovations need to matter more. The true breakthroughs and potential of AI lie in real advancements in human productivity. A McKinsey Global Institute report suggests that AI is helping us approach an unparalleled expansion in productivity that will yield five times the increase introduced by the steam engine and about 1 1/2 times the improvements we've seen from robotics and computers combined. We simply don't have a mental model to comprehend the potential of AI.
Across all industries, an estimated 60 percent of jobs will have 30 percent of their activities automated; about 5 percent of jobs will be 100 percent automated.
What this means for health care is murky right now. Does that 5 percent include doctors? After all, medicine is a series of data points of a knowable nature with clear treatment pathways that could be automated. That premise, though, fantastically overstates and misjudges the capabilities of AI and dangerously oversimplifies the complexity underpinning what physicians do. Realistically, AI will perform many discrete tasks better than humans can -- which, in turn, will free physicians to focus on accomplishing higher-order tasks.
If you break down the patient-physician interaction, its complexity is immediately obvious. Requirements include empathy, information management, application of expertise in a given context, negotiation with multiple stakeholders, and unpredictable physical response (think of surgery), often with a life on the line. These are not AI-applicable functions.
I mentioned AlphaGo AI beating human experts at the game. The reason this feat was so impressive is the high branching factor and complexity of the Go game tree -- there are an estimated 250 choices per move, permitting estimates of 10 to the 170th different game outcomes. By comparison, chess has a branching factor of 35, with 10 to the 47th different possible game outcomes. Medicine, with its infinite number of moves and outcomes, is decades away from medical approaches safely managed by machines alone.
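Those complexity figures follow from the fact that a game tree grows roughly as branching_factor ** depth. As a sanity check, we can back out the depth implied by the article's numbers; the derived depths are just arithmetic, not claims about typical real-game lengths:

```python
import math

def implied_depth(branching_factor, log10_outcomes):
    """Depth d such that branching_factor ** d == 10 ** log10_outcomes."""
    return log10_outcomes / math.log10(branching_factor)

chess_depth = implied_depth(35, 47)     # depth implied by ~10^47 outcomes
go_depth = implied_depth(250, 170)      # depth implied by ~10^170 outcomes
print(f"chess: ~{chess_depth:.0f} plies, go: ~{go_depth:.0f} plies")
```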
We still need the human factor.
That said, more than 20 percent of a physician's time is now spent entering data. Since doctors are increasingly overburdened with clerical tasks like electronic health record entry, prior authorizations, and claims management, they have less time to practice medicine, do research, master new technology, and improve their skills. We need a radical enhancement in productivity just to sustain our current health standards, much less move forward. Thoughtfully combining human expertise and automated functionality creates an augmented physician model that scales and advances the expertise of the doctor.
Physicians would rather practice at the top of their license and address complex patient interactions than waste time entering data, faxing (yes, faxing!) service authorizations, or tapping away behind a computer. The clerical burdens pushed onto physicians and other care providers by fickle health care systems are both unsustainable and a waste of our best and brightest minds. It's the equivalent of asking an airline pilot to manage the ticket counter, count the passengers, handle the standby and upgrade lists, and give the safety demonstrations -- and then fly the plane. AI can help with such support functions.
But to radically advance health care productivity, physicians must work alongside innovators to atomize the tasks of their work. Understanding where they can let go to unlock time is essential, as is collaborating with technologists to guide truly useful development.
Perhaps it makes sense to start with automated interpretation of basic labs, dose adjustments for given medications, speech-to-text tools that simplify transcription or document face-to-face interactions, or even automated wound closure. And then move on from there.
It will be important for physicians and patients to engage and help define the evolution of automation in medicine in order to protect patient care. And physicians must be open to how new roles for them can be created by rapidly advancing technology.
If it all sounds a bit dreamy, I offer an instructive footnote about experimentation with AlphaGo AI. The recent game summit proving AlphaGo's prowess also demonstrated that human talent increases significantly when paired with AI. This hybrid model of humans and machines working together presents a scalable automation paradigm for medicine, one that creates new tasks and roles for essential medical and technology professionals, increasing the capabilities of the entire field as we move forward.
Physicians should embrace this opportunity rather than fear it. It's time to rage with the machine.
Jack Stockert, M.D., is a managing director and leader of strategy and business development at Health2047, a Silicon Valley-based innovation company.
Study: Government Should Think Carefully About Those Big Plans for Artificial Intelligence – Government Technology
Posted: at 6:16 pm
Government is always being asked to do more with less -- less money, less staff, just all around less -- and that makes the idea of artificial intelligence (AI) a pretty attractive row to hoe. If a piece of technology could reduce staff workload or walk citizens through a routine process or form, you could effectively multiply a workforce without ever actually adding new people.
But for every good idea, there are caveats, limitations, pitfalls, and the desire to push the envelope. While innovating anything in tech is generally a good thing, when it comes to AI in government, there is a fine line to walk between improving a process and potentially making it more convoluted.
Outside of a few key government functions, a new white paper from the Harvard Ash Center for Democratic Governance and Innovation finds that AI could actually increase the burden of government and muddy up the functions it is so desperately trying to improve.
Hila Mehr, a Center for Technology and Democracy fellow, explained that there are five key government problems that AI might be able to assist with reasonably: resource allocation, large data sets, expert shortages, predictable scenarios, and procedural and diverse data.
And governments have already started moving into these areas. In Arkansas and North Carolina, chatbots are helping those states connect with their citizens through Facebook. In Utah and Mississippi, Amazon Alexa skills have been introduced to better connect constituents to the information and services they need.
Unlike Hollywood representations of AI in film, Mehr said, the real applications for artificial intelligence in a government organization are generally far from sexy. The administrative aspects of governing are where tools like this will excel.
When it comes to things like expert shortages, she said she sees AI as a means to support existing staff. In a situation where doctors are struggling to meet the needs of all of their patients, AI could act as a research tool. The same is true of lawyers dealing with thousands of pages of case law. AI could be used as a research assistant.
"If you're talking about government offices that are limited in staff and experts," Mehr said, "that's where AI trained on niche issues could come in."
But, she warned, AI is not without its problems, namely making sure that it is not furthering human bias written in during the programming process and played out through the data it is fed. Rather than rely on AI to make critical decisions, she argues that any algorithms and decisions made for or as a result of AI should retain a human component.
"We can't rely on them to make decisions, so we need that check. The way we have checks in our democracy, we need to have checks on these systems as well, and that's where the human group or panel of individuals comes in," Mehr said. "The way that these systems are trained, you can't always know why they are making the decision they are making, which is why it's important to not let that be the final decision, because it can be a black box depending on how it is trained, and you want to make sure that it is not running on its own."
But past the fear that the technology might disproportionately impact certain citizens or might somehow complicate the larger process, there is the somewhat legitimate fear that the implementation of AI will mean lost jobs. Mehr said it's a thought that even she has had.
"On the employee side, I think a lot of people view this, rightly so, as something that could replace them," she added. "I worry about that in my own career, but I know that it is even worse for people who might have administrative roles. But I think early studies have shown that you're using AI to help people in their work so that they are spending less time doing repetitive tasks and more time doing the actual work that requires a human touch."
In both her white paper and on the phone, Mehr is careful to advise against going whole hog into AI with the expectation that it can replace costly personnel. Instead she advocates for the technology as a tool to build and supplement the team that already exists.
As for where the technology could run afoul of human jobs, Mehr advises that government organizations and businesses alike start considering labor practices in advance.
"Inevitably, it will replace some jobs," she said. "People need to be looking at fair labor practices now, so that they can anticipate these changes to the market and be prepared for them."
With any blossoming technology, there are barriers to entry and hurdles that must be overcome before a useful tool is in the hands of those best fit to use it. And as with anything, money and resources present a significant challenge, but Mehr said large amounts of data are also needed to get AI, especially learning systems, off the ground successfully.
"If you are talking about simple automation or [answering] a basic set of questions, it shouldn't take that long. If you are talking about really training an AI system with machine learning, you need a big data set, a very big data set, and you need to train it, not just feed the system data and then it's ready to go," she said. "The biggest barriers are time and resources, both in the sense of data and trained individuals to do that work."
How artificial intelligence conquered democracy – The Independent
Posted: August 18, 2017 at 5:15 am
There has never been a better time to be a politician. But it's an even better time to be a machine-learning engineer working for a politician.
Throughout modern history, political candidates have had only a limited number of tools to take the temperature of the electorate. More often than not, they've had to rely on instinct rather than insight when running for office.
Now big data can be used to maximise the effectiveness of a campaign. The next level will be using artificial intelligence in election campaigns and political life.
Machine-learning systems are based on statistical techniques that can automatically identify patterns in data. These systems can already predict which US congressional bills will pass by making algorithmic assessments of the text of the bill as well as other variables such as how many sponsors it has and even the time of year it is being presented to congress.
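A predictor like the one described reduces to scoring a few features of a bill and squashing the score into a probability. Here is a hand-rolled logistic sketch; every feature name and weight is invented for illustration and not taken from any real system:

```python
import math

# Toy bill-passage model: weighted sum of features -> logistic probability.
# In a real system the weights would be learned from historical bills.
WEIGHTS = {
    "num_sponsors": 0.15,
    "bipartisan": 1.2,
    "introduced_early_in_session": 0.6,
}
BIAS = -2.0

def passage_probability(bill):
    score = BIAS + sum(w * bill.get(name, 0) for name, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-score))      # squash score into (0, 1)

bill = {"num_sponsors": 12, "bipartisan": 1, "introduced_early_in_session": 1}
p = passage_probability(bill)
```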
Machine intelligence is also now being carefully deployed in election campaigns to engage voters and help them be more informed about key political issues.
This of course raises ethical questions. There is evidence, for example, to suggest that AI-powered technologies were used to manipulate citizens in Donald Trump's 2016 election campaign. Some even claim these tools were decisive in the outcome of the vote.
And it remains unclear what role AI played in campaigning ahead of the Brexit referendum in the UK.
Artificial intelligence can be used to manipulate individual voters. During the 2016 US presidential election, the data science firm Cambridge Analytica rolled out an extensive advertising campaign to target persuadable voters based on their individual psychology.
This highly sophisticated micro-targeting operation relied on big data and machine learning to influence people's emotions. Different voters received different messages based on predictions about their susceptibility to different arguments. The paranoid received ads with messages based around fear. People with a conservative predisposition received ads with arguments based on tradition and community.
This was enabled by the availability of real-time data on voters, from their behaviour on social media to their consumption patterns and relationships. Their internet footprints were being used to build unique behavioural and psychographic profiles.
The problem with this approach is not the technology itself but the covert nature of the campaigning and the insincerity of the political messages being sent out. A candidate with flexible campaign promises, like Trump, is particularly well-suited to this tactic. Every voter can be sent a tailored message that emphasises a different side of a particular argument. Each voter gets a different Trump. The key is simply to find the right emotional triggers to spur each person into action.
We already know that AI can be used to manipulate public opinion. Massive swarms of political bots were used in the 2017 general election in the UK to spread misinformation and fake news on social media. The same happened during the US presidential election in 2016 and several other key political elections around the world.
These bots are autonomous accounts that are programmed to aggressively spread one-sided political messages to manufacture the illusion of public support. This is an increasingly widespread tactic that attempts to shape public discourse and distort political sentiment.
Typically disguised as ordinary human accounts, bots spread misinformation and contribute to an acrimonious political climate on sites like Twitter and Facebook. They can be used to highlight negative social media messages about a candidate to a demographic group more likely to vote for them, the idea being to discourage them from turning out on election day.
Technology first: Trump's presidential campaign team were able to present a different version of him to different voters (EPA)
In the 2016 election, pro-Trump bots even infiltrated Twitter hashtags and Facebook pages used by Hillary Clinton supporters to spread automated content.
Bots were also deployed at a crucial point in the 2017 French presidential election, throwing out a deluge of leaked emails from candidate Emmanuel Macron's campaign team on Facebook and Twitter. The information dump also contained what Macron says was false information about his financial dealings. The aim of #MacronLeaks was to build a narrative that Macron was a fraud and a hypocrite -- a common tactic used by bots to push trending topics and dominate social feeds.
It is easy to blame AI technology for the world's wrongs (and for lost elections), but the underlying technology itself is not inherently harmful. The algorithmic tools that are used to mislead, misinform and confuse could equally be repurposed to support democracy.
AI can be used to run better campaigns in an ethical and legitimate way. We can, for example, programme political bots to step in when people share articles that contain known misinformation. They could issue a warning that the information is suspect and explain why. This could help to debunk known falsehoods, like the infamous article that falsely claimed the pope had endorsed Trump.
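A sketch of how such a fact-checking bot might work, reduced to its core: a shared link is checked against a list of known debunked stories, and a warning is issued on a match. The URL list and function name here are hypothetical; a production system would query a live fact-checking database and handle far messier inputs.

```python
# Hypothetical set of URLs known to carry debunked stories; a real bot
# would query a live fact-checking database instead of a fixed list.
KNOWN_MISINFORMATION = {
    "example.com/pope-endorses-trump",
}

def check_shared_link(url):
    """Return a warning string if the shared URL matches a known
    falsehood, or None if nothing is flagged."""
    normalized = url.lower()
    for prefix in ("https://", "http://", "www."):
        normalized = normalized.removeprefix(prefix)
    normalized = normalized.rstrip("/")
    if normalized in KNOWN_MISINFORMATION:
        return ("Warning: this article has been debunked by "
                "fact-checkers. Consider reading the correction "
                "before sharing.")
    return None
```

The hard part in practice is not the lookup but keeping the misinformation list current and matching the many URL variants of the same story.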
We can use AI to better listen to what people have to say and make sure their voices are being clearly heard by their elected representatives. Based on these insights, we can deploy micro-targeting campaigns that help to educate voters on a variety of political issues, helping them make up their own minds.
People are often overwhelmed by political information in TV debates and newspapers. AI can help them discover the political positions of each candidate based on what they care about most. For example, if a person is interested in environment policy, an AI targeting tool could be used to help them find out what each party has to say about the environment. Crucially, personalised political ads must serve their voters and help them be more informed, rather than undermine their interests.
The use of AI techniques in politics is not going away anytime soon. It is simply too valuable to politicians and their campaigns. However, they should commit to using AI ethically and judiciously to ensure that their attempts to sway voters do not end up undermining democracy.
Vyacheslav W Polonski is a researcher at the University of Oxford. This article was originally published on The Conversation (www.theconversation.com)
Original post:
How artificial intelligence conquered democracy - The Independent
Posted in Artificial Intelligence
Comments Off on How artificial intelligence conquered democracy – The Independent
The Ethics of Artificial Intelligence – HuffPost
Posted: at 5:15 am
Many experts believe that artificial intelligence (AI) might lead to the end of the world, just not in the way that Hollywood films would have us believe. Movie plots, for example, feature robots increasing in intelligence until they take over the human race. The reality is far less dramatic, but may nonetheless cause some incredible cultural shifts.
Last year, industry leaders like Elon Musk, Stephen Hawking, and Bill Gates wrote a letter to the International Joint Conference on Artificial Intelligence in Argentina stating that the successful adoption of AI might be one of humankind's biggest achievements, and maybe its last. They noted that AI poses unique ethical dilemmas which, if not considered carefully, could prove more dangerous than nuclear capabilities.
How can we implement AI technology while remaining faithful to our ethical obligations? The solution requires systematic effort.
Transparency is the key to integrating AI effectively. Companies may mistakenly assume that ethics is merely a practice in risk mitigation. This mindset only serves to deadlock innovation.
Create a company ethics committee that works with your shareholders to determine what's ethical and what's not from the outset. Align this moral code with your business's cultural values to create innovative products while increasing public trust. An ethics committee member should participate in the design and development stages of all new products, including anything that incorporates AI. Integrity is essential to the foundation of an organization. Your ethical mindset must therefore be proactive, not reactive.
A solid ethical foundation leads to good business decisions. It wouldn't make sense, for example, to build a product that you later determine will affect the industry negatively. By applying your ethical code from the start, you create a positive impact while wisely allocating resources.
An ethics committee, however, doesn't tell a design and development team what it can and can't do. Instead, the committee encourages the team to pursue innovation without infringing on the company's cultural values. Think of it as an important system of checks and balances; one department may be so focused on the potential of a new innovation that members of the department never pause to consider the larger ramifications. An ethics committee can preserve your business's integrity in light of exciting new developments that have the potential to completely reshape your organization.
AI is still a relatively new concept, so it's possible to do something legal, yet unethical. Ethical conversations are more than just a checklist for team members to follow. They require hard questions and introspection about new products and the company's intentions. This Socratic method takes time and may create tension between team members, but it's worth the effort.
Don't know where to begin with your ethical code? Start by reading the One Hundred Year Study on Artificial Intelligence from Stanford. This report reviews the impact of AI on culture in five-year timespans, outlines society's opportunities and challenges in light of AI innovation, and envisions future changes. It's intended to guide decision-making and policy-making to ensure AI benefits humankind as a whole.
Use this report as an informed framework for your AI initiatives. Other ethical framework essentials include:
One tech industry concern is that failure to self-police will only lead to external regulation. The Stanford report maintains it will be impossible to adequately regulate AI: risks and opportunities vary too widely in scope and domain. While the tech industry balks at the idea of oversight, the Stanford report suggests that all levels of government should be more aware of AI's potential.
A committee of tech leaders plans to convene this month to discuss the ethics of AI, and the possibility of creating a large-scale best-practices guide for companies to follow. The hope? That discussion will breed introspection, leading all AI companies to make ethical decisions benefitting society. The process will take time, and tech companies are notoriously competitive. But on this we universally agree: it's worth the effort.
Article first seen on Futurum here. Photo Credit: HoursDeOuvre via Compfight cc
Tom Siebel is Back! A Software Pioneer Explores IoT and AI – Barron’s
Posted: at 5:15 am
Tom Siebel, who was employee number 20 at Oracle and later sold his company to Larry Ellison for billions, has a new startup that is riding the convergence of artificial intelligence with the Internet of Things, as the world zooms toward perhaps 50 ...
Read the original post:
Tom Siebel is Back! A Software Pioneer Explores IoT and AI - Barron's
The artificial Intelligence wave is upon us. We better be prepared – Hindustan Times
Posted: August 16, 2017 at 6:16 pm
The AI (artificial intelligence) revolution is well and truly upon us, and we are at a significant watershed moment where AI could become the new electricity: pervasive, touching every aspect of our lives. While many industries including healthcare, education, retail and banking have already started adopting AI in key business aspects, there are also new business models that are predicated on AI.
With the global market of AI expected to grow at 36% annually, reaching a valuation of $3 trillion by 2025 from $126 bn in 2015, new age disruption is not only redefining the way traditional businesses are run, but is also unfolding as a new factor of production.
However, the fear of what might happen once AI evolves into artificial general intelligence, which can perform any intellectual task that a human can, has now taken centre stage with the ongoing debate between two tech titans, Elon Musk and Mark Zuckerberg. Similarly, Microsoft co-founder Bill Gates has also voiced the view that in a few years AI will have evolved enough to warrant wide attention, while Facebook ended up shutting down one of its AI projects after chatbots developed their own language (unintelligible to humans) to communicate.
Beyond this, the common citizen wants to know whether she should be worried about AI taking away her job. This calls for broader thinking, including the evolution of industry protocols, while making sure that the public is ready for these futuristic advancements.
Will AI move my cheese?
The emergence of AI has drawn criticism because of the probability that it could replace human jobs through automation. However, as we see AI shift from the R&D stage to various real-life business prototypes, it seems evident that the goal of most AI applications is to augment human abilities through hybrid business models.
According to McKinsey, AI would raise global labour productivity by 0.8% to 1.4% a year between now and 2065. I believe that both policymakers and corporates must recognise AI's potential to empower the workforce and invest in creating training programmes/workshops to help the labour force adapt to these newer models.
For instance, Ocado, the UK online supermarket, has embedded robotics at the core of its warehouse management. Robots steer thousands of product-filled bins to human packers just in time to fill shopping bags, which are then sent to delivery vans whose drivers use AI applications to pick the best route based on traffic conditions and weather.
Technology will create more new jobs than it eliminates
We must learn from the history of the industrial and technological revolutions over the last 500 years that jobs eliminated in one sector have been replaced by newer jobs requiring refreshed skill-sets. As a corollary, countries such as Japan, Korea or Germany, which have the highest levels of automation, should have seen large-scale unemployment over the past 4-5 decades. This is not necessarily the case.
Having said that, in the near future, every routine operational task is certainly likely to become digitised and AI could be running the back-office of most businesses. Over the next few decades, many middle-skill jobs are also likely to be eliminated. However, AI is unlikely to replace jobs which require human-to-human interaction. Consequently, fundamental human thinking skills such as entrepreneurship, strategic thinking, social leadership, connected salesmanship, philosophy, and empathy, among others, would be in even greater demand.
Further, until a point of singularity is reached, AI will not be able to service or program itself, leading to new, high-skilled jobs for technicians and computing experts.
Let's be prepared
Globally, policymakers and corporations will need to significantly revamp the education system to address technology gaps.
In India, this represents an enormous opportunity for policymakers to make better informed decisions, tackle some of the toughest socio-economic challenges, and address the woeful shortage of qualified doctors, teachers etc.
We need to immediately plan for state and nation-wide university hubs, and MOOCs (massive open online courses) built on the framework of DICE (design, innovation, and creativity-led entrepreneurship). Curricula should be focussed on developing basic skills in STEM (science, technology, engineering and mathematics) fields, coupled with a new emphasis on creativity, critical and strategic thinking. Adaptive and individualised learning systems need to be established to help students at different levels work collaboratively amongst themselves as well as with AI in the classroom.
The National Skills Development Corporation will need to evolve into National Future Skills Development, as we as a civil society prepare to bring the future into the present!
Rana Kapoor is MD and CEO, YES Bank; and Chairman, YES Global Institute
The views expressed are personal
Visit link:
The artificial Intelligence wave is upon us. We better be prepared - Hindustan Times
How Artificial Intelligence is reshaping art and music – The Hindu
Posted: at 6:16 pm
In the mid-1990s, Douglas Eck worked as a database programmer in Albuquerque, New Mexico, while moonlighting as a musician. After a day spent writing computer code inside a lab run by the Department of Energy, he would take the stage at a local juke joint, playing what he calls "punk-influenced bluegrass": Johnny Rotten crossed with Johnny Cash. But what he really wanted to do was combine his days and nights, and build machines that could make their own songs. "My only goal in life was to mix AI and music," Mr. Eck said.
It was a naive ambition. Enrolling as a graduate student at Indiana University, in Bloomington, not far from where he grew up, he pitched the idea to Douglas Hofstadter, the cognitive scientist who wrote the Pulitzer Prize-winning book on minds and machines, Gödel, Escher, Bach: An Eternal Golden Braid. Mr. Hofstadter turned him down, adamant that even the latest artificial intelligence techniques were much too primitive.
But during the next two decades, working on the fringe of academia, Mr. Eck kept chasing the idea, and eventually, the AI caught up with his ambition.
Last spring, a few years after taking a research job at Google, Mr. Eck pitched the same idea he had pitched to Mr. Hofstadter all those years ago. The result is Project Magenta, a team of Google researchers who are teaching machines to create not only their own music but also many other forms of art, including sketches, videos and jokes.
With its empire of smartphones, apps and internet services, Google is in the business of communication, and Mr. Eck sees Magenta as a natural extension of this work. "It's about creating new ways for people to communicate," he said during a recent interview inside the small two-story building here that serves as headquarters for Google AI research.
Growing effort
The project is part of a growing effort to generate art through a set of AI techniques that have only recently come of age. Called deep neural networks, these complex mathematical systems allow machines to learn specific behaviour by analysing vast amounts of data.
By looking for common patterns in millions of bicycle photos, for instance, a neural network can learn to recognise a bike. This is how Facebook identifies faces in online photos, how Android phones recognise commands spoken into phones, and how Microsoft Skype translates one language into another. But these complex systems can also create art. By analysing a set of songs, for instance, they can learn to build similar sounds.
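The "learning from data" loop sketched above can be illustrated with a toy example: a single artificial neuron, vastly simpler than the deep networks the article describes, that learns the logical-AND rule by repeatedly comparing its predictions to labelled examples and nudging its weights toward the right answers. The data and learning rate are illustrative, not drawn from any system mentioned here.

```python
# A single artificial neuron learning the logical-AND rule from
# labelled examples -- a toy version of "learning from data".
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1  # weights, bias, learning rate

def predict(x):
    # Fire (output 1) when the weighted sum of inputs crosses zero.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for epoch in range(20):              # sweep the data repeatedly
    for x, target in examples:
        error = target - predict(x)  # compare prediction to label
        w[0] += lr * error * x[0]    # nudge weights toward the label
        w[1] += lr * error * x[1]
        b += lr * error
```

Deep networks differ in scale (millions of such units, stacked in layers) and in the update rule, but the principle of adjusting parameters to reduce prediction error is the same.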
As Mr. Eck says, these systems are at least approaching the point, still many, many years away, when a machine can instantly build a new Beatles song, or perhaps trillions of new Beatles songs, each sounding a lot like the music the Beatles themselves recorded, but also a little different.
Tools for artists
But that end game is not what he is after. There are so many other paths to explore beyond mere mimicry. The ultimate idea is not to replace artists but to give them tools that allow them to create in entirely new ways.
In the 1990s, at that juke joint in New Mexico, Mr. Eck combined Johnny Rotten and Johnny Cash. Now, he is building software that does much the same thing. Using neural networks, he and his team are cross-breeding sounds from very different instruments (say, a bassoon and a clavichord), creating instruments capable of producing sounds no one has ever heard.
Much as a neural network can learn to identify a cat by analysing hundreds of cat photos, it can learn the musical characteristics of a bassoon by analysing hundreds of notes. It creates a mathematical representation, or vector, that identifies a bassoon. So, Mr. Eck and his team have fed notes from hundreds of instruments into a neural network, building a vector for each one.
Now, simply by moving a button across a screen, they can combine these vectors to create new instruments. One may be 47% bassoon and 53% clavichord. Another might switch the percentages. And so on.
For centuries, orchestral conductors have layered sounds from instruments atop one another. But this is different. Rather than layering sounds, Mr. Eck and his team combine them to form something that did not exist before, creating new ways for artists to work. NYT
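The blending described above amounts to interpolating between two learned vectors. A minimal sketch, with made-up three-dimensional vectors standing in for the much larger embeddings a real neural network would produce:

```python
def blend(vec_a, vec_b, weight_a):
    """Linearly interpolate two instrument embeddings:
    weight_a of vec_a plus (1 - weight_a) of vec_b."""
    return [weight_a * a + (1 - weight_a) * b
            for a, b in zip(vec_a, vec_b)]

# Hypothetical 3-dimensional embeddings, for illustration only.
bassoon = [0.9, 0.1, 0.4]
clavichord = [0.2, 0.8, 0.6]

# "47% bassoon and 53% clavichord", as in the article.
hybrid = blend(bassoon, clavichord, 0.47)
```

In the real system the blended vector would then be decoded back into audio by the network; the slider the researchers describe is effectively moving `weight_a` between 0 and 1.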
Follow this link:
How Artificial Intelligence is reshaping art and music - The Hindu
It’s a bird! It’s a plane! It’s Microsoft using artificial intelligence to teach a machine to stay aloft – GeekWire
Posted: at 6:16 pm
Microsoft's autonomous glider soars through the air above Hawthorne, Nev. Once airborne, the glider uses artificial intelligence to find and rely on thermals, or columns of air that rise due to heat, to stay aloft. (Microsoft Photo / John Brecher)
Paying attention to the rise of the machines increasingly means scanning the skies for things other than conventional aircraft or birds. But what if the line between the two begins to blur and autonomous planes can somehow be taught to mimic nature?
That's the hope of researchers from Microsoft who are using artificial intelligence to keep a sailplane aloft without the help of a motor. A new report on the Redmond, Wash.-based tech giant's website details the efforts of scientists launching test flights in a Nevada desert.
The researchers have found that through a complex set of AI algorithms, they can get their 16 1/2-foot, 12 1/2-pound aircraft to soar much like a hawk would, by identifying things like air temperature and wind direction to locate thermals, invisible columns of air that rise due to heat.
"Birds do this seamlessly, and all they're doing is harnessing nature. And they do it with a peanut-sized brain," Ashish Kapoor, a principal researcher at Microsoft, said in the report.
Kapoor said it's probably one of the few AI systems operating in the real world that's not only making predictions but also taking action based on those predictions. He said the planes could eventually be used for such things as monitoring crops in rural areas or providing mobile Internet service in hard-to-reach places.
Beyond those practical tasks, Andrey Kolobov, the Microsoft researcher in charge of the project's research and engineering efforts, said the sailplane is charting a course for how intelligent learning itself will evolve over the coming years, calling the project a "testbed for intelligent technologies". It's becoming increasingly important for systems of all kinds to make complex decisions based on a number of variables without making costly or dangerous mistakes.
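As a rough illustration of "making predictions and taking action", a naive soaring rule might look like the following. The threshold and window values are invented for illustration; Microsoft's actual system uses learned probabilistic models of where thermals are likely to be, not a fixed rule like this.

```python
def thermal_policy(climb_rates, threshold=0.2, window=5):
    """Naive soaring rule: if the average climb rate (m/s) over the
    last `window` sensor readings exceeds `threshold`, circle to stay
    inside the suspected thermal; otherwise keep gliding on course."""
    recent = climb_rates[-window:]
    average = sum(recent) / len(recent)
    return "circle" if average > threshold else "glide"
```

A real controller would also have to estimate where the thermal's core is, and how confident that estimate is, before committing to a turn, which is where the learned models come in.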
Read more about what Microsoft is learning this summer in the desert via the story from the company's News Center.