The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard De Chardin
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Category Archives: Ai
Will the real AI startups please stand up? – YourStory.com
Posted: March 31, 2017 at 7:09 am
AI is touted to change the world in the same way electricity did. But are we moving too fast? While AI has created impact in some sectors, there are some documented instances of enterprises claiming to have automated systems, but actually relying on humans behind the scenes. For our future to stay bright, the real AI startups will need to stand up.
On March 22, 2017, Andrew Ng, who had been heading Artificial Intelligence (AI) efforts at Chinese tech giant Baidu, announced that he was resigning from the company. Within a few hours of his announcement, Baidu's stock dipped by about $1.5 billion, showcasing the value that his work in AI brought to the company. Andrew is also one of the co-founders of Coursera and has taught machine learning classes online to over 100,000 students. He had earlier led the Google Brain project, which developed massive-scale deep learning algorithms at Google. So, we can take his word when he makes this prediction:
"Just as electricity transformed many industries roughly 100 years ago, AI will also now change nearly every major industry - healthcare, transportation, entertainment, manufacturing - enriching the lives of countless people. I am more excited than ever about where AI can take us."
Data may be the new oil, but AI seems to be the tool of choice to drill and leverage it. Almost every major company in the world, from Google to Facebook and Microsoft, is arming itself for the new AI-powered race that is almost here.
But like any new technology, there are a lot of misconceptions surrounding AI and the potential impact that AI-powered startups and enterprises will have on the world. Many startups around the world, and in India, are looking to innovate in this space, and VCs, too, are swooping in to invest in AI-powered startups.
YourStory referred to multiple industry reports and also spoke to startups and VCs in this space to better understand where AI is headed and how startups could benefit from it. Here is an overview.
Defining AI in the modern context is a tough task. Ben Thompson, the well-known media analyst, recently noted on Stratechery that AI could be categorised into two main categories.
Machine learning, on the other hand, could be described as a type of AI that provides computers with the ability to learn without being explicitly programmed. This can be broadly classified into two categories: supervised learning and unsupervised learning. Most of the major mainstream progress in AI so far has been under supervised learning.
A-to-B, or supervised, systems have been progressing rapidly, and the best ones today are built with a technology called 'deep learning' or 'deep neural networks', which were loosely inspired by the brain and how it works. Andrew Ng noted in a Harvard Business Review post:
"Today's supervised learning software has an Achilles heel: it requires a huge amount of data. You need to show the system a lot of examples of both A and B."
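To make the A-to-B idea concrete, here is a minimal supervised-learning sketch. The tooling (scikit-learn) and the toy numbers are illustrative assumptions, not anything from the article: the model is shown labelled (A, B) pairs, learns the mapping, and then predicts B for an input it has never seen.

```python
# A minimal sketch of Ng's A-to-B supervised learning, using scikit-learn.
# The feature vectors and labels below are synthetic, for illustration only.
from sklearn.linear_model import LogisticRegression

A = [[5.1, 3.5], [4.9, 3.0], [6.2, 2.9], [6.7, 3.1]]  # inputs (A)
B = [0, 0, 1, 1]                                      # labelled outcomes (B)

model = LogisticRegression()
model.fit(A, B)                     # learn the A -> B mapping from examples
print(model.predict([[6.0, 3.0]]))  # predict B for an input never seen before
```

The quote's point falls directly out of these mechanics: the learned mapping is only as good as the volume and quality of the (A, B) examples the system is shown.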
Getting large volumes of good-quality data is one of the biggest challenges for AI startups today, as the quality of input data has a direct impact on the results. Apart from machine learning, some of the other important focus areas where AI advances are happening include speech, natural language processing (NLP), computer vision and knowledge graphs.
The rise of AI has also brought multiple misconceptions in the market. Talking to YourStory, Ashwini Asokan, Co-founder of Mad Street Den, feels that the biggest myths include:
AI is automation
Ashwini noted, "Automation is automation. Machines performing repetitive tasks is not necessarily AI." Many use the terms interchangeably.
APIs make an AI company
The AI community around the world is vibrant, and many researchers open-source their research and code. Ashwini noted that there are some startups that plug these AI APIs into their systems and then claim to be AI startups. Building AI-based applications by leveraging APIs and other tools and building AI systems from the ground up are different feats, and she believes that not many are aware of the difference.
AI needs to be invisible
Some startups and businesses are generally secretive about what goes on under the hood, and that is their right if they so wish. But Ashwini believes that making AI invisible and not wanting people to know that your startup is leveraging AI behind the scenes may not be a great idea. Being open and transparent about which aspects are AI-powered generally sets the right expectations for customers.
According to a report from CB Insights, over 550 startups using AI as a core part of their products raised $5 billion in funding in 2016. Since 2012, deals involving AI startups have been on the rise, and 2016 was a record year for startups globally. The US accounts for about 61 percent of the funds raised, while India is fifth on the list, at 3.5 percent.
While the funding dollars may not reflect the current situation, there are many interesting AI startups and projects in India. Some prominent and interesting Indian or India-focused startups include Mad Street Den, Niki.ai, Neuron.me, Locus.sh and Artifacia (more popularly known as Snapshopr).
On the other side of the table, many VCs and angel investors are keeping a close watch on the AI space, according to the general pulse that YourStory has been able to sense. Predicting a lot of growth in this space, Manish Singhal and Umakant Soni founded pi Ventures, an AI-focused fund for India. They recently closed $13 million of their $30 million AI-focused fund, and have invested or co-invested in three startups: Ten3t, Zenatix and Sigtuple.
In an earlier conversation with YourStory, Manish had noted that he had seen a lot of great applications in the AI space, in sectors such as healthcare and energy. About the maturity of the space, he had said,
"Whether it is winter, spring, or summer, enough stuff is happening in the AI space in India. There is enough critical mass now."
Given that some VC firms burnt their fingers in the funding boom of 2015, VCs are more cautious now and do a lot more due diligence before writing a cheque. Talking to YourStory, Anurag Ramdasan, an investor at 3one4 Capital, noted that when it comes to investing in startups that are tech-heavy, his team ensures it spends sufficient time with the startup team to understand what they do. He noted,
"It is really important for VCs to be able to understand the details of what the startup is building, including their publications and source code. This is what makes it challenging to be a VC investing in the AI space, as it requires very focused competencies to gauge the startups' novel progress, and to remain on the cutting edge."
Anurag opined that startups that deal with AI can be broadly categorised into those that use AI as an enabler of the core value proposition and those that conduct AI research to push the boundaries of the field. He also noted that AI need not always be the best approach to solving a problem. He said,
"For the ones that use AI as an enabler for their features, it is more important to understand the need for AI, and the optimisations that it delivers, as opposed to the implementation itself. If a non-AI solution delivers a more optimal result, then an AI-based implementation is unlikely to get any special consideration."
But given that AI is still in its nascent stages, and not completely understood, VCs may have to pay special attention to the Reverse Turing Test, where a human pretends to be AI to fool a fellow human into thinking he/she is a machine. This is a variation of the more popular Turing Test, where a machine's ability to exhibit intelligent behaviour indistinguishable from that of a human is tested.
There are some documented instances of enterprises claiming to have automated systems while actually relying on humans behind the scenes, thus giving cause to paraphrase Eminem's hit single from 2000: 'Will the real AI startups please stand up?'
The general public has likely interacted with AI in only a few avenues, with chatbots being the most prominent. But based on Andrew Ng's belief that AI is the new electricity, almost every sector will face disruption. Ashwini agrees with this view and believes it is more a matter of 'when' than 'if'. Talking about India, she believes that we have enough talent and resources to catch up and leapfrog ahead to a better position in the global AI race.
Anurag of 3one4 Capital noted, "While covering disruption, always look out for the blast radius instead of the impact centre. The effects are usually far more spread out than you would imagine."
Citing a CrunchBase report, Anurag elaborated on how advances in the driverless car space could impact other sectors like insurance, cab aggregators and the airline industry, among others.
But there is currently a big pushback against the adoption of AI because of the impact it could have on the global workforce. At a recent event in Bengaluru, Microsoft chief Satya Nadella had spoken about the impact AI will have on jobs, and how people should cope. He had said,
"Predicting the skills needed for mid-level jobs in the future won't be easy. To better prepare people, we will need to help them get life-long learning skills."
Bill Gates believes that a good way to make the transition to an AI-powered world is to tax AI or robots. In an interview with Quartz, Gates noted, "Right now, the human worker who does, say, $50,000 worth of work in a factory, that income is taxed and you get income tax, social security tax, all those things. If a robot comes in to do the same thing, you'd think that we'd tax the robot at a similar level."
So in the long term, startups leveraging AI for gain may have to heed strict guidelines and regulations. AI 'beings', too, will likely have to follow rules and regulations. Taking a page out of Isaac Asimov's playbook, a futuristic representation of his Three Laws of Robotics could one day apply to AI startups.
Baidu’s AI team taught a virtual agent just like a human would their … – TechCrunch
Posted: at 7:09 am
Baidu's artificial intelligence research team has achieved a significant milestone: teaching a virtual agent "living" in a 2D environment how to navigate its surroundings, the way a parent teaches a child.
Google bets on AI in Canada with Google Brain Toronto and Vector Institute investment – TechCrunch
Posted: at 7:09 am
The Vector Institute is a dedicated AI research facility, and will use its amassed resources to fund research by postgraduate researchers working on projects in the field. The areas of focus for the Vector Institute include healthcare and financial services. Related coverage: "Government, business leaders launch Toronto-based AI initiative"; "Facebook and Google Are Backing a $150 Million Canadian AI Research Facility"; "New AI facility opens in Toronto, with up to $100M from feds, province".
Ghost in the Sell: Hollywood’s Mischievous Vision of AI – Scientific American
Posted: at 7:09 am
Watch enough science fiction movies and you'll probably come to the conclusion that humans are living on borrowed time. Whether it's HAL 9000's murderous meltdown in 2001: A Space Odyssey or Skynet's sadistic self-preservation tactics in the Terminator franchise, artificial intelligence usually comes off as a well-intentioned attempt to serve humanity that, through some overlooked technical flaw, ends up trying to extinguish it.
The latest dystopian prophecy arrives Friday with the release of Ghost in the Shell, one of a few major releases this year to feature AI prominently in its plot. The film, based on the 1995 anime movie and Kodansha Comics manga series of the same name, tells the story of a special-ops human-cyborg hybrid known as the Major (Scarlett Johansson). She leads an elite crime-fighting task force whose main mission is to protect a company that makes AI robots. Ghost depicts a technologically advanced society in which a person's brain, including the Major's, is susceptible to hacking, and one's consciousness can be copied into a new body. Over time the Major begins to question whether her memories are real or were implanted by someone else.
Hollywood's vision of AI is often entertaining, generally pessimistic and rarely realistic. With that in mind, Scientific American asked several prominent real-world AI researchers which movies, if any, have come closest to hitting the mark over the years.
[An edited transcript of the interviews follows.]
Selmer Bringsjord, director of Rensselaer Polytechnic Institute's Rensselaer AI and Reasoning Laboratory
Year after year I keep holding out hope that someone will make a film to compete with the predictive power of Blade Runner, but it never happens. The point of my [1992 book] What Robots Can and Can't Be can be distilled to this stark but, by my lights, accurate claim: we are sliding inexorably toward a time when AI will supply - despite demanding tests of unmasking [like the movie's Voight-Kampff test] - creatures behaviorally indistinguishable from human persons, such as Blade Runner's replicants. People used to object to this claim by saying: "No, Selmer, there isn't any point in making embodied AIs look like us, so you're wrong there." Well, not a lot of people express that objection any longer, and just as the long-term job prospects of driving for a living are dismal, the same prospects, as the Westworld television program shows, are in place for the oldest profession, in which what one looks like can be regarded as important by clients. This theme is more than touched upon in A.I. Artificial Intelligence, which I also regard to have an almost uncanny level of predictive power. It fails as high art despite the pretensions (and reputations) of some who brought it to life, but even a cursory scan today of the world of lifelike toys, and its history, shows plainly what track we're on.
Brian David Johnson, a professor at Arizona State University's School for the Future of Innovation in Society
The narrative is typically that once you create something that's sentient, it rises up and kills you. I look at what movies are giving us a different narrative. One recent example is Robot and Frank: this guy gets a health care robot, and he and his robot go and rob places. Another is Her: it wasn't about a robot, it's about an AI that's aware, but it didn't rise up and kill us. Instead it breaks up with us and moves on. It's about a person who's healed by his relationship with AI. The last I'll mention is Interstellar, in which the robots' humor/honesty settings give them personality. In that movie the characters are having social relationships with robots, even though they know they are robots. It shows you can have a working relationship with artificial intelligence and still be aware that it's AI. Those types of movies matter because they set our mental model for how we see our future.
Daniela Rus, director of Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Lab (CSAIL)
Eternal Sunshine of the Spotless Mind is a visionary story about reprogramming the human brain, and how such a development could impact how we understand ourselves and interact with the world. The movie raises the question of what it would mean to reprogram our brains as if they were machines. Computer memory can be added, manipulated or wiped clean. Could similar things be done one day with human memory? Imagine if veterans could overcome their PTSD by forgetting battles or if abuse victims could unexperience traumas. Like any new technology, of course, it would be up to us to decide how to use it responsibly to help rather than harm. The film inspired me to think more about the nature of memory, and how unlocking its mysteries could help us better understand our own behaviors and motivations.
Yann LeCun, director of Facebook AI Research and founding director of the New York University Center for Data Science
I think one that reflects what might well happen, although not exactly, is Her. There are no major blatant mistakes that I saw in that movie. Of course, we're extremely far from having the technology that's shown in the movie. We don't have truly intelligent machines, and I don't know how long it will take for us to get anywhere near that. But the idea that you would have a personal virtual assistant that you interact with, and with whom you have a relationship like a digital friend - that is something that is actually fairly realistic. Then there's a list of movies that depict all kinds of crazy stuff that there's no way in hell will happen. That's pretty much every movie that portrays AI: The Terminator, The Matrix, all the popular ones. Ex Machina - that's a beautiful film, but the AI depiction is completely wrong.
Manuela Veloso, head of Carnegie Mellon University's Machine Learning Department
I like Bicentennial Man and the television program Humans, without the complicated bad robots/synthetics. Robots coexist with people and are helpful. And I like Robot and Frank, except for the fact that the robot learns to rob.
Timothy Persons, chief scientist at the U.S. Government Accountability Office
I thought Steven Spielberg's A.I. Artificial Intelligence in 2001 was powerful - not in the sense that it portrayed a dystopic, post-apocalyptic world. The context was dystopic, but it wasn't like the machines were all out to kill us or anything like that. Particularly compelling was the idea of having the machine be able to understand what you're feeling, and you being able to have love and affection for your machine. The powerful thing that Spielberg captured was the human compassion dimension to that, even when it's a machine.
Yoshua Bengio, head of the University of Montreal's Montreal Institute for Learning Algorithms (MILA)
2001: A Space Odyssey. Most of the recent science fiction movies about AI are not very good. Less bad than others: Her.
Andrew Moore, dean of the Carnegie Mellon University School of Computer Science and former director of Google Pittsburgh
I like Robot and Frank, which, like all great AI movies, is really about humans. It gently portrays a world that has intelligent devices in it and looks at the mismatch between what a naive engineer would consider a useful device versus what a real user values.
Stuart Russell, director of the University of California, Berkeley's Center for Human-Compatible Artificial Intelligence
My favorite movie AI is TARS, the robot in Interstellar. TARS does exactly what humans need it to do, including sacrificing itself to save the humans. There's no danger of confusing it with a human, and little temptation to think of it as conscious - even though the humans have a hard time letting it commit suicide. My favorite AI movie is Ex Machina. It is very effective in portraying some of the unanswered questions about consciousness in machines and our own reactions to machines, including the way those reactions are conditioned on our built-in response to the human form - a really good reason not to build humanoid robots! The movie also conveys the difficulty of controlling a machine that can easily outwit you if it's designed with objectives that are eventually in conflict with yours. And it does all this with a seamless, low-key narrative that operates at several levels.
Tuomas Sandholm, creator of Carnegie Mellon's Libratus, the AI that recently outplayed four top poker pros
I liked Her for many reasons. It was refreshing to see an AI movie that was not about violent robots and that raised many interesting AI issues in the broader public sphere - such as scalability (dating at massive scale), the realistic and sad aspect of human loneliness being filled by machines (already happening in China via chatbots) and the issues that arise as AI surpasses human intelligence. I also liked Blade Runner, a fun action movie that addressed the question of what it means to be human versus machine, and how one could tell, even about oneself.
Oren Etzioni, chief executive officer of the Allen Institute for Artificial Intelligence
That is the hardest question you've asked me today because, for example, Ex Machina is fun in terms of discussing issues around the Turing test [in which a machine tries to convince an interrogator that it is human]. There are a lot of movies that I've enjoyed, but if you ask me what movie has done a good job depicting AI, I'm still waiting for that to come out - if only because it's easy to cast AI as the villain. Ask me the three movies in the past 20 years where AI was the good guy, and I can think of WALL-E - about a robot that's trying to create peace - and then I draw a blank. If there are any Hollywood producers out there reading this, call me and we'll put together a script where AI does good things. There are very real possibilities, whether it's avoiding traffic accidents or preventing medical errors. I think there'd be a good script out there. At least it would be refreshing.
Trudeau looks to make Canada ‘world leader’ in AI research – Phys.Org
Posted: at 7:09 am
March 30, 2017 - Canadian PM Justin Trudeau said the government would spend US$30 million on a new artificial intelligence research center in Toronto, Ontario.
Prime Minister Justin Trudeau announced his hopes Thursday of making Canada a "world leader" in artificial intelligence and so-called "deep learning" research and development.
The government, he said, would spend Can$40 million (US$30 million) on a new artificial intelligence research center in Toronto, Ontario.
The Ontario provincial government will also contribute Can$50 million to the new Vector Institute, which will be led by Geoffrey Hinton, a British-born Canadian cognitive psychologist and computer scientist at the University of Toronto who also works for Google.
In its recent budget, Ottawa had also earmarked Can$125 million over five years to bolster clusters of scientists in Edmonton, Montreal and Toronto devoted to the "futuristic-sounding" research that hopes to create machines that learn like humans.
It is hoped these will lead to collaborations and breakthroughs in artificial neural networks and algorithms that seek to mimic human brain functions.
As well, the government aimed to broaden education in the field and create new AI research chairs at universities across the country.
"In the same way that electricity revolutionized manufacturing and the microprocessor reinvented how we gather, analyze and communicate information, artificial intelligence will cut across nearly every industry," Trudeau said.
"It will shape the world that our kids and our grandkids grow up in," he said.
The field of artificial intelligence dates back to the mid-20th century when a group of scientists held the first conference devoted to the subject at Dartmouth College in the US state of New Hampshire.
Interest and investment in AI accelerated in the last decade alongside advancements in robotics and automation.
A 2013 University of Oxford study of about 700 occupations in the United States concluded that 47 percent of them were likely to become automated.
"In the years to come, we will see this leadership pay dividends in everything from manufacturing improvements to health-care breakthroughs, to stronger and more sustained economic and job growth," Trudeau said.
© 2017 AFP
‘Machine folk’ music shows the creative side of AI – The Conversation UK
Posted: at 7:09 am
Folk music is part of a rich cultural context that stretches back into the past, encompassing the real and the mythical, bound to the traditions of the culture in which it arises. Artificial intelligence, on the other hand, has no culture, no traditions. But it has shown great ability: beating grand masters at chess and Go, for example, or demonstrating uncanny wordplay skills when IBM Watson beat human competitors at Jeopardy. Could the power of AI be put to use to create music?
This is not entirely unprecedented: an artificial intelligence co-wrote a piece of musical theatre, from the storyline to the music and lyrics. It premiered in London in 2016. The advancement of AI techniques and ever-larger collections of data to train them present broad opportunities for creative research. The AI co-wrote its musical based on an analysis of hundreds of other successful musicals, for example. There are other projects aimed at providing creators of art and music with new artificial intelligence-based tools for their craft, such as Google's Magenta project, Sony's Flow Machines, or British start-up Jukedeck. And long before those was The Illiac Suite, a string quartet programmatically composed by a supercomputer in 1957.
Our research examines how state-of-the-art AI techniques can contribute to musical practice, specifically the Celtic folk tradition of session music. Enthusiasts transcribe versions of folk tunes using ABC, a reduced form of music notation developed by Chris Walshaw of the University of Greenwich that uses text characters as a rough guide for the musician. We trained our AI system on more than 23,000 ABC transcriptions of folk music, crowd-sourced from the excellent online resource thesession.org. And at our recent workshop at the Inside Out festival we had accomplished folk musicians perform some of this machine folk music.
Our AI is trained so that, given one ABC symbol, it can predict the next, which means it can generate new tunes that draw upon patterns and structures learned from the original tunes. We have generated more than 100,000 new machine folk tunes, and it's interesting to see what the AI has and has not learned. Many tunes have the typical structure of this style: two repeated parts of the same eight-bar length that often complement each other musically. The AI also shows some ability to repeat and vary musical patterns in a way that is very characteristic of Celtic music. It was not programmed to do this with rules; it learned to do so because these patterns exist in the data we fed it.
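To illustrate the generation loop, the sketch below uses a bigram chain over ABC characters. This is a drastic simplification - the researchers' actual system is a neural network trained on the 23,000 real transcriptions, while this toy learns from one invented fragment - but the loop has the same shape: predict the next symbol from the current one, sample it, and feed it back in.

```python
# Toy stand-in for next-symbol tune generation over ABC notation.
# The training fragment is made up; the real system is a trained neural net.
import random
from collections import defaultdict

abc = "X:1\nM:4/4\nK:Gmaj\n|:GABc dedB|dedB dedB|c2ec B2dB:|"

follows = defaultdict(list)
for prev, nxt in zip(abc, abc[1:]):
    follows[prev].append(nxt)               # record which symbol follows which

def generate(seed="G", length=40):
    out = [seed]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))  # sample the predicted next symbol
    return "".join(out)

print(generate())
```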
However, unlike a human, the system isn't immediately able to generalise these properties beyond the immediate context. Much of what we originally thought the system had learned about basic musical features (for example, how rhythm works) it in fact hadn't learned; it was simply able to reproduce those conventions. Venture slightly outside the conventions of the data and the system begins to act unusually. This is where things can get musically interesting.
To evaluate the AI's compositions we consulted the experts: folk musicians. We asked for feedback on The Endless Traditional Music Session, and later about a volume of 3,000 tunes generated by our system. Feedback from members of the thesession.org forums shows divided opinions: some found the idea intriguing and identified machine folk tunes they liked and could work with. Others were dead against the entire notion of computer-generated music.
One obstacle was that not only was this music composed by computers, it was also played by computer synthesis, and so lacked the interpretation and expressivity of human musicians who bring each tune to life - elements not incorporated in the data the AI had trained on. So we recruited professional folk musicians and asked them to look at our volume of 3,000 tunes. One musician observed that about one in five tunes is actually fairly good.
Folk tunes are by nature less fixed and are treated as a frame upon which to elaborate: performers develop their own versions and change elements in performance. The musicians found interesting features and some patterns that are unusual but work well within the style. Perhaps there are regions of this musical space that humans have not yet discovered and that can be reached with the help of a machine.
Much discussion around AI focuses on computers as competitors to humans. We seek to harness the same technology as a creative tool to enrich, not replace.
A concert, Partnerships, on May 23, 2017, will feature music co-created by humans and computers.
Boffins give ‘D.TRUMP’ an AI injection – The Register
Posted: March 29, 2017 at 11:23 am
Let's give this points in the Academic Sense of Humour stakes for 2017: the wryly-named Data-mining Textual Responses to Uncover Misconception Patterns, or D.TRUMP, looks to automate the process of working out just how confused someone might be, from how they answer open-response questions.
The problem the three boffins from Rice University and Princeton are trying to solve arises from the rise of large-scale online learning.
At a smaller scale - for example, in a lecture theatre or tutorial gathering - it's relatively easy for a capable instructor to work out from a student's question what part of a topic they're finding hard to grasp.
That scales badly: in a MOOC (massive open online course), the tutor-to-student ratio could be thousands to one or more, but there's an upside, since the scale of the student body is also a rich source of data.
D.TRUMP seeks to mine that student data for evidence of clue deficit, with the authors of this paper writing: "The scale of this data presents a great opportunity to revolutionise education by using machine learning algorithms to automatically deliver personalised analytics and feedback to students and instructors in order to improve the quality of teaching and learning."
To achieve that, D.TRUMP transforms answers into low-dimensional textual feature vectors using tools like Word2Vec; the authors' contribution is a statistical model that jointly models both the transformed response feature vectors and expert labels on whether a response exhibits one or more misconceptions.
The researchers tested their work against 386 students' answers to a total of 1,668 questions in the AP Biology high-school level classes at OpenStax Tutor, giving them a total of 60,000 labelled responses.
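A hedged sketch of that pipeline: each free-text response is embedded as a low-dimensional vector (Word2Vec via gensim here) and a classifier is fitted against the expert labels. The paper's model is a purpose-built joint statistical model, not the off-the-shelf logistic regression below, and the responses and labels are invented for illustration.

```python
# Illustrative D.TRUMP-style pipeline: text -> vectors -> misconception label.
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression
import numpy as np

responses = [
    "inbreeding brings together recessive mutations".split(),
    "interbreeding can lead to harmful mutations".split(),
]
labels = [0, 1]  # 0 = correct mechanism, 1 = exhibits the misconception

w2v = Word2Vec(responses, vector_size=16, min_count=1, seed=7)

def embed(tokens):
    # Average word vectors into one fixed-length vector per response.
    return np.mean([w2v.wv[t] for t in tokens], axis=0)

X = np.stack([embed(r) for r in responses])
clf = LogisticRegression().fit(X, labels)
print(clf.predict([embed("interbreeding harmful mutations".split())]))
```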
It's probably helpful at this point to identify just how fine the line can be between correct and an almost-correct misconception. From the paper:
Question 1: People who breed domesticated animals try to avoid inbreeding even though most domesticated animals are indiscriminate. Evaluate why this is a good practice.

Correct Response: A breeder would not allow close relatives to mate, because inbreeding can bring together deleterious recessive mutations that can cause abnormalities and susceptibility to disease.

Student Response 1: Inbreeding can cause a rise in unfavorable or detrimental traits such as genes that cause individuals to be prone to disease or have unfavourable mutations.

Student Response 2: Interbreeding can lead to harmful mutations.
For those (like the author) who didn't study biology: "Inbreeding leads to harmful mutations" is a lay understanding of genetics. To be marked correct, the student needs to identify the mechanism: that inbreeding can bring together recessive mutations from mother and father.
Having developed D.TRUMP to the level that it can spot that kind of misconception, the system provides another bit of help to the educator: it can identify groups of students who share a misconception. This could indicate whether the students arrived in a course with a clue deficit, or that the courseware isn't getting its message across.
The Trade-Off Every AI Company Will Face – Harvard Business Review
Posted: at 11:23 am
It doesn't take a tremendous amount of training to begin a job as a cashier at McDonald's. Even on their first day, most new cashiers are good enough. And they improve as they serve more customers. Although a new cashier may be slower and make more mistakes than their experienced peers, society generally accepts that they will learn from experience.
We don't often think of it, but the same is true of commercial airline pilots. We take comfort that airline transport pilot certification is regulated by the U.S. Department of Transportation's Federal Aviation Administration and requires minimum experience of 1,500 hours of flight time, 500 hours of cross-country flight time, 100 hours of night flight time, and 75 hours of instrument operations time. But we also know that pilots continue to improve from on-the-job experience.
On January 15, 2009, when US Airways Flight 1549 was struck by a flock of Canada geese, shutting down all engine power, Captain Chesley "Sully" Sullenberger miraculously landed his plane in the Hudson River, saving the lives of all 155 passengers. Most reporters attributed his performance to experience. He had recorded 19,663 total flight hours, including 4,765 flying an A320. Sully himself reflected: "One way of looking at this might be that for 42 years, I've been making small, regular deposits in this bank of experience, education, and training. And on January 15, the balance was sufficient so that I could make a very large withdrawal." Sully, and all his passengers, benefited from the thousands of people he'd flown before.
The difference between cashiers and pilots in what constitutes good enough is based on tolerance for error. Obviously, our tolerance is much lower for pilots. This is reflected in the amount of in-house training we require them to accumulate prior to serving their first customers, even though they continue to learn from on-the-job experience. We have different definitions of good enough when it comes to how much training humans require in different jobs.
The same is true of machines that learn.
Artificial intelligence (AI) applications are based on generating predictions. Unlike traditionally programmed computer algorithms, designed to take data and follow a specified path to produce an outcome, machine learning, the most common approach to AI these days, involves algorithms evolving through various learning processes. A machine is given data, including outcomes; it finds associations; and then, based on those associations, it takes new data it has never seen before and predicts an outcome.
This means that intelligent machines need to be trained, just as pilots and cashiers do. Companies design systems to train new employees until they are good enough and then deploy them into service, knowing that they will improve as they learn from experience doing their job. While this seems obvious, determining what constitutes good enough is an important decision. In the case of machine intelligence, it can be a major strategic decision regarding timing: when to shift from in-house training to on-the-job learning.
There is no ready-made answer as to what constitutes good enough for machine intelligence. Instead, there are trade-offs. Success with machine intelligence will require taking these trade-offs seriously and approaching them strategically.
The first question firms must ask is what tolerance they and their customers have for error. We have a high tolerance for error with some intelligent machines and a low tolerance for others. For example, Google's Inbox application reads your email, uses AI to predict how you will want to respond, and generates three short responses for the user to choose from. Many users report enjoying the application even though it has a 70% failure rate (i.e., the AI-generated response is only useful 30% of the time). The reason for this high tolerance for error is that the benefit of reduced composing and typing outweighs the cost of wasted screen real estate when the predicted short response is wrong.
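The arithmetic behind that tolerance is worth making explicit. With assumed numbers (the article gives none), a suggestion feature pays off whenever its success rate clears the benefit-to-cost break-even point:

```python
# Back-of-envelope tolerance for error: showing a suggestion is worth it when
# p * benefit > (1 - p) * cost, i.e. when p > cost / (benefit + cost).
def break_even_rate(benefit, cost):
    return cost / (benefit + cost)

# If a good suggestion saves 10 units of effort and a bad one wastes 1 unit
# of attention, a success rate of ~9% already clears the bar -- which is why
# a 30% hit rate can still feel genuinely useful.
print(break_even_rate(benefit=10, cost=1))  # 0.0909...
```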
In contrast, we have low tolerance for error in the realm of autonomous driving. The first generation of autonomous vehicles, largely pioneered by Google, was trained using specialist human drivers who took a limited set of vehicles and drove them hundreds of thousands of kilometers. It was like a parent taking a teenager on supervised driving experiences before letting them drive on their own.
The human specialist drivers provide a safe training environment, but they are also extremely limited. The machine only learns about a small number of situations. It may take many millions of miles in varying environments and situations before a system has learned how to deal with the rare incidents that are more likely to lead to accidents. For autonomous vehicles, real roads are nasty and unforgiving precisely because nasty or unforgiving human-caused situations can occur on them.
The second question to ask, then, is how important it is to capture user data in the wild. Understanding that training might take a prohibitively long time, Tesla rolled out autonomous vehicle capabilities to all its recent models. These capabilities included a set of sensors that collect environmental data as well as driving data that is uploaded to Tesla's machine learning servers. In a very short period of time, Tesla can obtain training data just by observing how the drivers of its cars drive. The more Tesla vehicles there are on the roads, the more Tesla's machines can learn.
However, in addition to passively collecting data as humans drive their Teslas, the company needs autonomous driving data to understand how its autonomous systems are operating. For that, it needs to have cars drive autonomously so that it can assess performance, but also assess when a human driver, required to be there and paying attention, chooses to intervene. Tesla's ultimate goal is not to produce a copilot, or a teenager who drives under supervision, but a fully autonomous vehicle. That requires getting to the point where real people feel comfortable in a self-driving car.
Herein lies a tricky trade-off. In order to get better, Tesla needs its machines to learn in real situations. But putting its current cars in real situations means giving customers a relatively young and inexperienced driver, although perhaps as good as or better than many young human drivers. Still, this is far riskier than beta testing, for example, whether Siri or Alexa understood what you said, or whether Google Inbox correctly predicts your response to an email. In the case of Siri, Alexa, or Google Inbox, it means a lower-quality user experience. In the case of autonomous vehicles, it means putting lives at risk.
As Backchannel documented in a recent article, that experience can be scary. Cars can exit freeways without notice, or put on the brakes when mistaking an underpass for an obstruction. Nervous drivers may opt not to use the autonomous features and, in the process, may hinder Tesla's ability to learn. Furthermore, even if the company can persuade some people to become beta testers, are those the people it wants? After all, a beta tester for autonomous driving may be someone with a taste for more risk than the average driver. In that case, who is the company training its machines to be like?
Machines learn faster with more data, and more data is generated when machines are deployed in the wild. However, bad things can happen in the wild and harm the company brand. Putting products in the wild earlier accelerates learning but risks harming the brand (and perhaps the customer!); putting products in the wild later slows learning but allows for more time to improve the product in-house and protect the brand (and, again, perhaps the customer).
For some products, like Google Inbox, the answer to the trade-off seems clear because the cost of poor performance is low and the benefits from learning from customer usage are high. It makes sense to deploy this type of product in the wild early. For other products, like cars, the answer is less clear. As more companies seek to take advantage of machine learning, this is a trade-off more and more will have to make.
It’s a riot: the stressful AI simulation built to understand your emotions – The Guardian (blog)
Posted: at 11:23 am
A protester hurls a tear gas canister fired by police in Ferguson, Missouri, on 13 August 2014. Photograph: AP
An immersive film project is attempting to understand how people react in stressful situations by using artificial intelligence (AI), film and gaming technologies to place participants inside a simulated riot and then detecting their emotions in real time.
Called Riot, the project is the result of a collaboration between award-winning multidisciplinary immersive filmmaker Karen Palmer and Professor Hongying Meng from Brunel University. The two have worked together previously on Syncself2, a dynamic interactive video installation.
Riot was inspired by global unrest, and specifically by Palmer's experience of watching live footage of the Ferguson protests in 2015. "I felt a big sense of frustration, anger and helplessness. I needed to create a piece of work that would encourage dialogue around these types of social issues. Riots all over the world now seem to be [the] last form of [community] expression," she said.
Whereas Syncself2 used an EEG headset to place the user in the action, with Riot Palmer wanted to achieve a more seamless interface. "Hongying and I discussed AI and facial recognition; the tech came from creating an experience which simulated a riot - it needed to be as though you were there."
Designed as an immersive social digital experience, the objective is to get through a simulated riot alive. This is achieved through interacting with a variety of characters who can help you reach home. The video narrative is controlled by the emotional state of the user, which is monitored through AI software in real time.
"Machine learning is the key technology for emotion detection systems. From the dataset collected from audiences, AI methods are used to learn from the data and build the computational model, which can be integrated into the interactive film system and detect the emotions in real time," explained Meng.
The programme in development at Brunel can read seven emotions, but not all are appropriate for the experience created by the Riot team. Currently, Riot's pilot interface can recognise three emotional states: fear, anger and calm.
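The article does not describe Riot's model, so the following is purely an illustrative sketch of the general shape of such a system: a classifier mapping per-frame facial features (for example, landmark geometry from a vision library) to one of the pilot's three states. Every number here is invented.

```python
# Hypothetical per-frame emotion classifier in the spirit of Riot's pilot.
from sklearn.neighbors import KNeighborsClassifier

STATES = ["calm", "fear", "anger"]
features = [[0.1, 0.2], [0.8, 0.9], [0.9, 0.1], [0.2, 0.1]]  # toy landmark stats
labels = [0, 1, 2, 0]

clf = KNeighborsClassifier(n_neighbors=1).fit(features, labels)
frame = [0.75, 0.85]                    # features from the current video frame
print(STATES[clf.predict([frame])[0]])  # state that steers the next film branch
```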
I tried it along with Dr Erinma Ochu, a lecturer in science communication and future media at the University of Salford, whose PhD was in applied neuroscience.
Riot is played out on a large screen, with 3D audio sound surrounding us as a camera watches our facial expressions and computes in real time how we are reacting. Based on this feedback, the algorithm determines how the story unfolds.
We see looters, anarchists and police playing their parts and interacting directly with us. What happens next is up to us: our reactions and responses determine the story, and as the screen is not enclosed in a headset, but open for others to see, it also creates a public narrative.
Ochu reacted with jumps and gasps to what was happening around her and ultimately didn't make it home. "It's interesting to try something you wouldn't do in real life so you can explore a part of your character that you might suppress if you were going to get arrested," she said.
As a scientist and storyteller she felt Riot was ahead of the curve: "This has leapfrogged virtual reality," she said.
According to the Riot team, virtual reality (VR) developers have struggled to create satisfying stories in an environment in which, unlike film, you can't control where the user looks or what route they take through the narrative.
In order to overcome these issues and create a coherent, convincing storyline, the team from Brunel re-trained their software, versions of facial recognition technology, to work for Riot. "[This] provides a perfect platform to show our research and development. Art makes our work easier to understand. We have been doing research in emotion detection from facial expression, voice, body gesture, EEG, etc. for many years," said Meng. He hopes the project's success will make people see the benefits of AI, leading to the development of smart homes, buildings and cities.
For now, the emotion detection tool being worked on at Brunel can be used in clinical settings to measure pain and emotional states such as depression in patients. Similar tech has already been used in a therapeutic setting: a study last year at the University of Oxford used VR to help those with persecutory delusions. Those who trialled real-life scenarios combined with cognitive therapy saw significant improvement in their symptoms.
But can Riot's current AI facial recognition tech work for everyone? People with Parkinson's, sight or hearing issues might need an EEG headset and other physical monitors to gain the same immersive experience, unless tech development rapidly catches up with Palmer's ultimate vision of a 360-degree screen, which would also allow a group of participants to play together.
Perhaps Riot and its tech could herald a new empathetic, responsible and responsive future for storytelling and gaming, in which the viewer or player is encouraged to bring about change both in the narrative and in themselves. After all, if you could truly see a story from another person's point of view, what might you learn about them and yourself? How might you carry those insights into the real world to make a difference?
The V&A will be exhibiting Riot as part of the Digital Design Weekend in September 2017. The project is currently shortlisted for the Sundance New Frontier Storytelling Lab.
AI will transform information security, but it won’t happen overnight – CSO Online
Posted: at 11:23 am
Although it dates as far back as the 1950s, Artificial Intelligence (AI) is the hottest thing in technology today.
An overarching term used to describe a set of technologies such as text-to-speech, natural language processing (NLP) and computer vision, AI essentially enables computers to do things normally done by people.
Machine learning, the most prominent subset of AI, is about recognizing patterns in data and computers learning from them like a human. These algorithms draw inferences without being explicitly programmed to do so. The idea is that the more data you collect, the smarter the machine becomes.
At the consumer level, AI use cases include chatbots, Amazon's Alexa and Apple's Siri, while enterprise efforts see AI software aim to cure diseases and optimize enterprise performance, such as improving customer experience or fraud detection.
There is plenty to back up the hype: a Narrative Science survey found that 38 percent of enterprises are already using AI, growing to 62 percent by 2018, with Forrester Research predicting a 300 percent year-on-year increase in AI investment this year. AI is clearly here to stay.
Unsurprisingly given the constant evolution of criminals and malware, InfoSec also wants a piece of the AI pie.
With its ability to learn patterns of behavior by sifting through huge datasets, AI could help CISOs by finding those "known unknown" security threats, automating SOC response and improving attack remediation. In short, with skilled personnel hard to come by, AI fills some (but not all) of the gap.
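One concrete form this "known unknown" hunting takes is unsupervised anomaly detection over activity logs. A minimal sketch, assuming scikit-learn and invented features; nothing here comes from the article:

```python
# Flagging unusual account activity with an Isolation Forest.
from sklearn.ensemble import IsolationForest

# Each row: [logins_per_hour, megabytes_out, distinct_hosts_contacted]
normal_activity = [[3, 12, 4], [4, 15, 5], [2, 9, 3], [5, 14, 6]]
model = IsolationForest(contamination=0.1, random_state=0).fit(normal_activity)

tonight = [[4, 13, 5], [40, 900, 60]]  # second row: possible exfiltration
print(model.predict(tonight))          # -1 marks an anomaly for analyst triage
```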
Experts have called for a smart, autonomous security system, and American cryptographer Bruce Schneier believes that AI could offer the answer.
"It is hyped, because security is nothing but hype, but it is good stuff," said the CTO of Resilient Systems.
"We're a long way off from AI making humans redundant in cybersecurity, but there's more interest in [using AI for] human augmentation, which is making people smarter. You still need people defending you. Good systems use people and technology together."
Martin Ford, futurist and author of Rise of the Robots, says both white and black hats are already leveraging these technologies, such as deep learning neural networks.
"It's already being used on both the black and white hat sides," Ford told CSO. "There is a concern that criminals are in some cases ahead of the game and routinely utilize bots and automated attacks. These will rapidly get more sophisticated."
"... AI will be increasingly critical in detecting threats and defending systems. Unfortunately, a lot of organizations still depend on a manual process - this will have to change if systems are going to remain secure in the future."
Some CISOs, though, are preparing to do just that.
"It is a game changer," Intertek CISO Dane Warren said. "Through enhanced automation, orchestration, robotics, and intelligent agents, the industry will see greater advancement in both offensive and defensive capabilities."
Warren adds that improvements could include responding quicker to security events, better data analysis and using statistical models to better predict or anticipate behaviors.
Andy Rose, CISO at NATS, also sees the benefits: "Security has always had a need for smart processes to apply themselves to vast amounts of disparate data to find trends and anomalies - whether that is identifying and stopping spam mail, or finding a data exfiltration channel."
"People struggle with the sheer volume of data, so AI is the perfect solution for accelerating and automating security issue detection."
Security providers have always tried to evolve with the ever-changing threat landscape and AI is no different.
However, with technology naturally outpacing vendor transformation, start-ups have quickly emerged with novel AI-infused solutions for improving SOC efficiency, quantifying risks and optimizing network traffic anomaly detection.
Relative newcomers Tanium, Cylance and, to a lesser extent, LogRhythm have jumped into this space, but it's start-ups like Darktrace, Harvest.AI, PatternEx (coming out of MIT) and StatusToday that have caught the eye of the industry. Another relative unknown, SparkCognition, unveiled what it called the first AI-powered cognitive AV system at Black Hat 2016.
The tech giants are now playing with AI in security too: Google is working on an AI-based system that replaces traditional CAPTCHA forms, and its researchers have taught AI to create its own encryption. IBM launched Watson for Cyber Security earlier this month, while in January Amazon acquired Harvest.AI, which uses algorithms to identify a business's important documents and IP, then applies user behavior analytics and data loss prevention techniques to protect them from attack.
Some describe these products as first-gen AI security solutions, primarily focused on sifting through data, hunting for threats, and facilitating human-led remediation. In the future, AI could automate 24x7 SOCs, enabling workers to focus on business continuity and critical support issues.
"I see AI initially as an intelligent assistant able to deal with many inputs and access expert-level analytics and processes," agrees Rose, adding that AI will support security professionals in higher-level analysis and decisions.
Ignacio Arnaldo is chief data scientist at PatternEx, which offers an AI detection system that automates tasks in SecOps, such as the ability to detect APTs from network, applications and endpoint logs. He says that AI offers CISOs a new level of automation.
"CISOs are well aware of the problems - they struggle to hire talent, and there are more devices and data that need to be analyzed. CISOs acknowledge the need for tools that will increase the efficiency of their SOCs. AI holds the promise, but CISOs have not yet seen an AI platform that clearly proves to increase human efficiency."
"More and more CISOs fully understand that the global skills shortage and the successful large-scale attacks against high-maturity organizations like Dropbox, NSA/CIA, and JPMorgan are all connected," says Darktrace CTO Dave Palmer, whose firm provides machine learning technology to thousands of companies across 60 countries worldwide.
"No matter how well funded a security team is, it can't buy its way to high security using traditional approaches that have been demonstrably failing and that don't stand a chance of working in the anticipated digital complexity of our economy in 10 years' time."
But for all of this, some think we're jumping the gun. AI, after all, seems a luxury item in an era in which many firms still don't do regular patch management.
At this year's RSA conference, crypto experts mulled how AI is applicable to security, with some questioning how to train the machine and what the human's role is. Machine reliability and oversight were also mentioned, while others suggested it's odd to see AI championed given that security is often felled by low-level basics.
"I completely agree," says Rose. "Security professionals need to continually reassess the basics - patching, culture, SDLP etc. - otherwise AI is just a solution that will tell you about the multitude of breaches you couldn't, and didn't, prevent."
Schneier sees it slightly differently. He believes security can be advanced and yet still fail at the basics, while he poignantly notes AI should only be for those who have got the security posture and processes in place, and are ready to leverage the machine data.
Ethics, he says, is only an issue for full automation, and he's unconcerned about such tools being utilized by black hats or surveillance agencies.
"I think this is all a huge threat," says Ford, disagreeing. "I would rank it as one of the top dangers associated with AI in the near to medium term. There is a lot of focus on 'super-intelligent machines taking over'... but this lies pretty far in the future. The main concern now is what bad people will do when they have access to AI."
Warren agrees there are obstacles for CISOs to overcome. "It is forward thinking, and many organizations still flounder with the basics."
He adds that with these AI benefits will come challenges, such as the costly rewriting of apps and the possibility of introducing new threats: "... Advancements in technology introduce new threat vectors."
"A balance is required, or the environment will advance to a point where the industry simply cannot keep pace."
AI and security are not necessarily a perfect match. As Vectra CISO Gunter Ollmann blogged recently, buzzwords have made it appear that security automation is the same as AI security, meaning there's a danger of CISOs buying solutions they don't need, while there are further concerns over AI ethics, quality control and management.
Arnaldo critically points out that AI security is no panacea either: "Some attacks are very difficult to catch: there is a wide range of attacks at a given organization, over various ranges of time, and across many different data sources."
"Second, the attacks are constantly changing... Therefore, the biggest challenge is training the AI."
If this points to some AI solutions being ill-equipped, Palmer adds further weight to the claim.
"Most of the machine learning inventions that have been touted aren't really doing any learning on the job within the customer's environment. Instead, they have models trained on malware samples in a vendor's cloud and are downloaded to customer businesses like anti-virus signatures. This isn't particularly progressive in terms of customer security and remains fundamentally backward-looking."
So, how soon can we see it in security?
"A way off," notes Rose. "Remember that the majority of IPS systems are still in IDS mode because firms lack the confidence to rely on intelligent systems to make automated choices and unsupervised changes to their core infrastructure. They are worried that, in acting without context, the control will damage the service - and that's a real threat."
But the need is imperative: "If we don't succeed in using AI to improve security, then we will have big problems, because the bad guys will definitely be using it," says Ford.
"I absolutely believe increased automation and ease of use are the only ways in which we are going to improve security, and AI will be a huge part of that," says Palmer.