The Prometheus League
Breaking News and Updates
Category Archives: Ai
AI bots will kill us all! Or at least may seriously inconvenience humans – The Register
Posted: July 18, 2017 at 4:11 am
Analysis: Elon Musk, the CEO of Tesla, SpaceX, and Neuralink (not to mention co-chairman of OpenAI and founder of The Boring Company), is once again warning that artificial intelligence threatens humanity.
In an interview at the National Governors Association 2017 Summer Meeting in Providence, Rhode Island on Saturday, Musk insisted that AI endangers human civilization and called for its regulation.
"I have exposure to the most cutting edge AI and I think people should be really concerned about it," he said. "I keep sounding the alarm bell, but until people see robots going down the street killing people, they don't know how to react, you know, 'cause it seems so ethereal."
Musk said AI represents a rare case where regulation should be proactive rather than reactive, "because I think by the time we're reactive in AI regulation, it's too late."
Arguing that we need to depart from the traditional method of regulation, in which rules follow disaster and public outcry, Musk insisted that dangers posed by clever code running amok are so great that we cannot wait.
"AI is a fundamental existential risk for human civilization and I don't think people fully appreciate that," Musk declared, even as he allowed that regulation can be "pretty irksome" and businesses would prefer not to be saddled with onerous rules.
Fear of apocalyptic AI is a longstanding theme for Musk. He rang the same alarm bell back in 2014 at the Massachusetts Institute of Technology's AeroAstro Centennial Symposium when he said, "I think we should be very careful about artificial intelligence. If I were to guess at what our biggest existential threat is, it's probably that. I'm increasingly inclined to think there should be some regulatory oversight, maybe at the national and international level, just to make sure we aren't doing something foolish."
Despite the potential for nuclear annihilation, flu pandemic, biowarfare, socio-economic upheaval, meteor strike, and climate catastrophe, Musk believes we need to focus more on overseeing algorithms.
Worries about Machiavellian machines, voiced by Musk and other technical luminaries like Stephen Hawking, have prompted conferences and calls to adopt protective principles.
Musk serves as co-chairman of OpenAI, a non-profit research company founded in late 2015, ostensibly to promote the development of AI that benefits humanity. The organization's initial blog post, penned by Greg Brockman and Ilya Sutskever, echoes Musk's framing of the situation: "It's hard to fathom how much human-level AI could benefit society, and it's equally hard to imagine how much it could damage society if built or used incorrectly."
And it's hard to understand what AI actually refers to.
Coach AI: Iverson doesn’t play in Philly Big3 homecoming – ABC News
Posted: July 17, 2017 at 4:11 am
Allen Iverson walked onto a familiar court to a rousing ovation and then took an unfamiliar spot on the sideline.
The Answer was Coach AI in his Big3 homecoming.
Iverson said a few hours before his team played against Dr. J's squad that his doctor advised him not to play Sunday night for unspecified reasons.
So, he assumed his role as coach of 3's Company in Ice Cube's 3-on-3 league. That didn't stop fans from chanting: "We want AI!"
"I'm glad I had a chance to come back home," Iverson told the crowd after introductions. "Ain't nothing like this relationship we have. I love you for supporting me throughout my career and still today you're still supporting me."
After he walked on the floor, Iverson cupped his ear the way he used to during his days leading the Philadelphia 76ers and implored the crowd to cheer even louder.
They made it sound like 2001 at the Wells Fargo Center.
Iverson hugged everyone in sight, smiled, waved, blew kisses and went to work as a coach. He stood in front of the bench, arms folded, interacted with officials and did his best Larry Brown impression.
The Hall of Famer hasn't played much so far. He had six points on 3-for-13 shooting in the first three games before sitting out this one.
Julius Erving, coach of the Tri-State team, embraced Iverson and whispered in his ear before addressing the crowd first.
"Big3 is a new concept but it's an old story," Erving said. "It's about playing ball the way we all learned how to play ball out in the playground, like the playgrounds all around Philadelphia."
Erving's team, led by Jermaine O'Neal and Bonzi Wells, won the game. Iverson didn't speak to reporters afterward.
Cube wrote on Twitter hours after the game that the only one feeling worse than him was Iverson.
"A.I. not playing was disappointing to everybody, including myself," the rapper-actor wrote. "Doctors told him not to get out of bed and he came anyway. Sad but true."
Follow Rob Maaddi on Twitter: https://twitter.com/APRobMaaddi
Elon Musk warns that AI could destroy us all, begs governors to take preemptive action – The Daily Dot
Posted: at 4:11 am
Billionaire tech entrepreneur Elon Musk has a cautionary warning for America's decision-makers: regulate and control artificial intelligence now, before it's too late.
Speaking at the National Governors Association Summer Meeting in Providence, Rhode Island this week, Musk spoke to a group of Democratic and Republican governors, urging them to take proactive action to prepare for the rise of AI. Specifically, he argued that the possible negative effects of AI on human society can't necessarily be legislated away after they've already begun. Instead, it requires preemptive regulations and restrictions for the safety of humankind.
"AI is a rare case where I think we need to be proactive in regulation, instead of reactive. Because I think by the time we are reactive in AI regulation, it's too late," Musk said, via CNET. "AI is a fundamental risk to the existence of human civilization, in a way that car accidents, airplane crashes, faulty drugs, or bad food were not."
A big part of Musk's concern is that AI systems could spark needless wars by way of fake news and information manipulation, a vision not that far from what fans of The Terminator film franchise have had bouncing through their heads for the last few decades.
But Musk fervently believes this is a real threat, not something isolated to the realms of science fiction. And while he believes that the general public doesn't have an adequate appreciation for the scale of the threat just yet, he thinks that will change in due time.
"Once there is awareness," Musk said, "people will be extremely afraid, as they should be."
This is far from the first time the Tesla and SpaceX CEO has made dire public warnings about the threats associated with artificial intelligence. In 2014, he cautioned about the perils of rapid AI advancement in a speech to students at MIT. Said Musk then:
"I think we should be very careful about artificial intelligence. If I were to guess at what our biggest existential threat is, it's probably that. So, we need to be very careful with artificial intelligence. I'm increasingly inclined to think there should be some regulatory oversight at the national and international level, just to make sure that we don't do something very foolish. With artificial intelligence, we are summoning the demon. You know, in all those stories where there's the guy with the pentagram and the holy water, and he's like, yeah, he's sure he can control the demon. Didn't work out."
He's been joined in these concerns by other prominent thinkers in the world of tech and science, namely Microsoft founder Bill Gates and world-renowned theoretical physicist Stephen Hawking.
What an artificial intelligence researcher fears about AI – CBS News – CBS News
Posted: July 15, 2017 at 11:14 pm
Arend Hintze is assistant professor of Integrative Biology & Computer Science and Engineering at Michigan State University.
As an artificial intelligence researcher, I often come across the idea that many people are afraid of what AI might bring. It's perhaps unsurprising, given both history and the entertainment industry, that we might be afraid of a cybernetic takeover that forces us to live locked away, "Matrix"-like, as some sort of human battery.
And yet it is hard for me to look up from the evolutionary computer models I use to develop AI, to think about how the innocent virtual creatures on my screen might become the monsters of the future. Might I become "the destroyer of worlds," as Oppenheimer lamented after spearheading the construction of the first nuclear bomb?
I would take the fame, I suppose, but perhaps the critics are right. Maybe I shouldn't avoid asking: As an AI expert, what do I fear about artificial intelligence?
The HAL 9000 computer, dreamed up by science fiction author Arthur C. Clarke and brought to life by movie director Stanley Kubrick in "2001: A Space Odyssey," is a good example of a system that fails because of unintended consequences. In many complex systems (the RMS Titanic, NASA's space shuttle, the Chernobyl nuclear power plant), engineers layer many different components together. The designers may have known well how each element worked individually, but didn't know enough about how they all worked together.
That resulted in systems that could never be completely understood, and could fail in unpredictable ways. In each disaster (sinking a ship, blowing up two shuttles and spreading radioactive contamination across Europe and Asia), a set of relatively small failures combined to create a catastrophe.
I can see how we could fall into the same trap in AI research. We look at the latest research from cognitive science, translate that into an algorithm and add it to an existing system. We try to engineer AI without understanding intelligence or cognition first.
(Video: Five years after beating humans on "Jeopardy!", IBM's Watson is becoming a tool for doctors treating cancer.)
Systems like IBM's Watson and Google's AlphaGo equip artificial neural networks with enormous computing power, and accomplish impressive feats. But if these machines make mistakes, they lose on "Jeopardy!" or don't defeat a Go master. These are not world-changing consequences; indeed, the worst that might happen to a regular person as a result is losing some money betting on their success.
But as AI designs get even more complex and computer processors even faster, their skills will improve. That will lead us to give them more responsibility, even as the risk of unintended consequences rises. We know that "to err is human," so it is likely impossible for us to create a truly safe system.
I'm not very concerned about unintended consequences in the types of AI I am developing, using an approach called neuroevolution. I create virtual environments and evolve digital creatures and their brains to solve increasingly complex tasks. The creatures' performance is evaluated; those that perform the best are selected to reproduce, making the next generation. Over many generations these machine-creatures evolve cognitive abilities.
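For readers curious what that evolve-evaluate-select loop looks like in practice, here is a minimal neuroevolution sketch in Python. It illustrates the general cycle Hintze describes, not his actual system: the genome size, population size, mutation rate, and toy fitness function are all invented for illustration.

```python
import random

GENOME_SIZE = 32      # number of weights in a creature's tiny "brain" (assumed)
POP_SIZE = 100        # creatures per generation (assumed)
MUTATION_RATE = 0.05  # chance each weight gets nudged (assumed)

def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_SIZE)]

def evaluate(genome):
    # Stand-in for running the creature in a virtual environment and
    # scoring how well it navigates, decides, or remembers.
    return -sum((w - 0.5) ** 2 for w in genome)

def mutate(genome):
    # Each weight has a small chance of being perturbed by Gaussian noise.
    return [w + random.gauss(0.0, 0.1) if random.random() < MUTATION_RATE else w
            for w in genome]

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(200):
    ranked = sorted(population, key=evaluate, reverse=True)
    parents = ranked[: POP_SIZE // 5]  # the best performers reproduce
    population = [mutate(random.choice(parents)) for _ in range(POP_SIZE)]

print("best fitness:", max(evaluate(g) for g in population))
```

Over the generations, errors that hurt performance are selected away, which is exactly the property Hintze argues makes unintended consequences discoverable in simulation.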
(Video: On 60 Minutes Overtime, Charlie Rose explores the labs at Carnegie Mellon on the cutting edge of AI, where robots are learning to go where humans can't.)
Right now we are taking baby steps to evolve machines that can do simple navigation tasks, make simple decisions, or remember a couple of bits. But soon we will evolve machines that can execute more complex tasks and have much better general intelligence. Ultimately we hope to create human-level intelligence.
Along the way, we will find and eliminate errors and problems through the process of evolution. With each generation, the machines get better at handling the errors that occurred in previous generations. That increases the chances that we'll find unintended consequences in simulation, which can be eliminated before they ever enter the real world.
Another possibility that's farther down the line is using evolution to influence the ethics of artificial intelligence systems. It's likely that human ethics and morals, such as trustworthiness and altruism, are a result of our evolution and factor in its continuation. We could set up our virtual environments to give evolutionary advantages to machines that demonstrate kindness, honesty and empathy. This might be a way to ensure that we develop more obedient servants or trustworthy companions and fewer ruthless killer robots.
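Mechanically, that kind of selection pressure could be as simple as adding a prosocial term to the fitness function in the sketch above. The article doesn't specify a mechanism, so the weighting and the cooperation measure below are hypothetical:

```python
def measure_cooperation(genome):
    # Hypothetical stub: a real system would observe how often the
    # creature shares resources or helps others in the simulation.
    return sum(genome) / len(genome)

def evaluate_with_ethics(genome, ethics_weight=0.5):
    # Task performance as before, plus a bonus for prosocial behavior,
    # giving an evolutionary advantage to kinder, more honest creatures.
    return evaluate(genome) + ethics_weight * measure_cooperation(genome)
```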
While neuroevolution might reduce the likelihood of unintended consequences, it doesn't prevent misuse. But that is a moral question, not a scientific one. As a scientist, I must follow my obligation to the truth, reporting what I find in my experiments, whether I like the results or not. My focus is not on determining whether I like or approve of something; it matters only that I can unveil it.
Being a scientist doesn't absolve me of my humanity, though. I must, at some level, reconnect with my hopes and fears. As a moral and political being, I have to consider the potential implications of my work and its potential effects on society.
As researchers, and as a society, we have not yet come up with a clear idea of what we want AI to do or become. In part, of course, this is because we don't yet know what it's capable of. But we do need to decide what the desired outcome of advanced AI is.
(Video: Business leaders weigh in on the possibility of artificial intelligence replacing jobs.)
One big area people are paying attention to is employment. Robots are already doing physical work like welding car parts together. One day soon they may also do cognitive tasks we once thought were uniquely human. Self-driving cars could replace taxi drivers; self-flying planes could replace pilots.
Instead of getting medical aid in an emergency room staffed by potentially overtired doctors, patients could get an examination and diagnosis from an expert system with instant access to all medical knowledge ever collected and get surgery performed by a tireless robot with a perfectly steady "hand." Legal advice could come from an all-knowing legal database; investment advice could come from a market-prediction system.
Perhaps one day, all human jobs will be done by machines. Even my own job could be done faster, by a large number of machines tirelessly researching how to make even smarter machines.
In our current society, automation pushes people out of jobs, making the people who own the machines richer and everyone else poorer. That is not a scientific issue; it is a political and socioeconomic problem that we as a society must solve. My research will not change that, though my political self together with the rest of humanity may be able to create circumstances in which AI becomes broadly beneficial instead of increasing the discrepancy between the one percent and the rest of us.
There is one last fear, embodied by HAL 9000, the Terminator and any number of other fictional superintelligences: If AI keeps improving until it surpasses human intelligence, will a superintelligence system (or more than one of them) find it no longer needs humans? How will we justify our existence in the face of a superintelligence that can do things humans could never do? Can we avoid being wiped off the face of the Earth by machines we helped create?
The key question in this scenario is: Why should a superintelligence keep us around?
I would argue that I am a good person who might have even helped to bring about the superintelligence itself. I would appeal to the compassion and empathy that the superintelligence has to keep me, a compassionate and empathetic person, alive. I would also argue that diversity has a value all in itself, and that the universe is so ridiculously large that humankind's existence in it probably doesn't matter at all.
But I do not speak for all humankind, and I find it hard to make a compelling argument for all of us. When I take a sharp look at us all together, there is a lot wrong: We hate each other. We wage war on each other. We do not distribute food, knowledge or medical aid equally. We pollute the planet. There are many good things in the world, but all the bad weakens our argument for being allowed to exist.
Fortunately, we need not justify our existence quite yet. We have some time: somewhere between 50 and 250 years, depending on how fast AI develops. As a species we can come together and come up with a good answer for why a superintelligence shouldn't just wipe us out. But that will be hard: saying we embrace diversity and actually doing it are two different things, as are saying we want to save the planet and successfully doing so.
We all, individually and as a society, need to prepare for that nightmare scenario, using the time we have left to demonstrate why our creations should let us continue to exist. Or we can decide to believe that it will never happen, and stop worrying altogether. But regardless of the physical threats superintelligences may present, they also pose a political and economic danger. If we don't find a way to distribute our wealth better, we will have fueled capitalism with artificial intelligence laborers serving only very few who possess all the means of production.
This article was originally published on The Conversation.
Elon Musk just told a group of America’s governors that we need to regulate AI before it’s too late – Recode
Posted: at 11:14 pm
Elon Musk doesn't scare easily: he wants to send people to Mars and believes that all cars will be driving themselves in the next ten years. He's excited about it!
But there is something that really scares Musk: artificial intelligence, and the idea of software and machines taking over their human creators.
He's been warning people about AI for years, and today called it "the biggest risk we face as a civilization" when he spoke at the National Governors Association Summer Meeting in Rhode Island.
Musk then called on the government to proactively regulate artificial intelligence before things advance too far.
"Until people see robots going down the street killing people, they don't know how to react because it seems so ethereal," he said. "AI is a rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it's too late."
"Normally the way regulations are set up is a whole bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry," he continued. "It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization. AI is a fundamental risk to the existence of human civilization."
Musk has been concerned about AI for years, and he's working on technology that would connect the human brain to the computer software meant to mimic it.
Musk has even said that his desire to colonize Mars is, in part, a backup plan for if AI takes over on Earth. But even though he's shared those concerns in the past, they hit their mark today in front of America's governors, with multiple governors asking Musk follow-up questions about how he would recommend regulating an industry that is so new and, at the moment, primarily posing hypothetical threats.
"The first order of business would be to try to learn as much as possible, to understand the nature of the issues," he said, citing the recent success of AI in beating humans at the game Go, which was once thought to be nearly impossible.
AI wasn't the only topic of conversation. A large portion of the conversation was about electric vehicles, which Musk's company, Tesla, is hoping to perfect.
Musk said that the biggest risk to autonomous cars is a fleet-wide hack of the software controlling them, and added that in 20 years, owning a car that doesn't drive itself will be the equivalent of someone today owning a horse.
"There will be people that will have non-autonomous cars, like people have horses," he said. "It just would be unusual to use that as a mode of transport."
Here's the full interview below. Fast-forward to the 43-minute mark for Musk's talk.
AI Is Getting Scary Good At Mimicking Us – Inc.com
Posted: at 11:14 pm
Artificial intelligence, more commonly referred to as AI, is changing the game for many industries. From healthcare to finance, AI is revolutionizing the way we go about our day-to-day lives.
However, it isn't just major industries that are influenced by the developments in artificial intelligence. With the direction things are moving, AI may also affect the everyday person more than we think.
While we struggle with an influx of fake news, AI could take that to an entirely different level. With the ability to make fake videos, audio messages and images, new types of fake content could make it even more difficult to tell what's real and what isn't.
Both business professionals and consumers need to be aware of how these new forms of AI could affect them. Let's take a look at the various ways this fake content could influence your life, both personally and professionally, and what you can do to protect yourself.
Identity fraud isn't anything new. When your information falls into the wrong hands, it can be incredibly damaging. Not only can criminals hack your bank accounts and open new lines of credit in your name, but they can also potentially tarnish your professional or personal image.
AI makes identity fraud even more of a problem by attempting to mimic your voice or image. Through mimicking your voice, the person behind the AI may try to convince your bank or other financial institutions you are on the phone. By pairing your information with your voice, the criminal could trick your bank into handing over sensitive data.
Because of these new developments in AI, there may be more requirements and hurdles you need to jump through to connect with your bank to prevent identity fraud. While this can be annoying to someone with the right to access certain accounts, additional protection may be necessary.
People in the public eye have always faced rumors about things they may or may not have done. Without proof, these accusations are simply a matter of he-said, she-said. While the rumors may spread, they eventually die out without adequate proof to back them up.
Unfortunately, this new AI can make it easier to fake images, videos and even audio messages. While applications like Photoshop have been around for years, they're becoming more advanced, making it even more difficult to tell what is real and what isn't. Video and audio, which used to be extremely difficult to edit, are also becoming easier to manipulate as editing tools advance.
For the business professional or other individual in the spotlight, this may make it harder to guard against public rumors. It may take an incredibly trained eye to determine whether or not something is real, which could damage your professional image or make it more challenging to defend yourself if a rumor begins to spread. It also means video and audio recordings cannot be entirely trusted.
Artificial intelligence is changing the way we do just about everything. As computers become smarter and more developed, it's getting easier to replicate everything from images and videos to products or even high-end goods.
With AI continuing to advance, you should understand what is editable and what is not. If a video or audio recording comes across your desk, you'll want to take it with a grain of salt as it may be doctored. Likewise, you'll want to be aware of anything regarding you or your company that could be a fraud.
Google is using AI to create stunning landscape photos using Street View imagery – The Verge
Posted: at 11:14 pm
Google's latest artificial intelligence experiment takes in Street View imagery from Google Maps and transforms it into professional-grade photography through post-processing, all without a human touch. Hui Fang, a software engineer on Google's Machine Perception team, says the project uses machine learning techniques to train a deep neural network to scan thousands of Street View images in California for shots with impressive landscape potential. The software then mimics the workflow of a professional photographer to turn that imagery into an aesthetically pleasing panorama.
Google is training AI systems to perform subjective tasks like photo editing
The research, posted to the pre-print server arXiv earlier this week, is a great example of how AI systems can be trained to perform tasks that aren't binary, with a right or wrong answer, but are more subjective, like those in the fields of art and photography. Doing this kind of aesthetic training with software can be labor-intensive and time-consuming, as it has traditionally required labeled data sets. That means human beings have to manually pick out which lighting effects or saturation filters, for example, result in a more aesthetically pleasing photograph.
Fang and his team used a different method. They were able to train the neural network quickly and efficiently to identify what most would consider superior photographic elements using what's known as a generative adversarial network. This is a relatively new and promising technique in AI research that pits two neural networks against one another and uses the results to improve the overall system.
In other words, Google had one AI photo editor attempt to fix professional shots that had been randomly tampered with by an automated system that changed lighting and applied filters. Another model then tried to distinguish between the edited shot and the original professional image. The end result is software that understands generalized qualities of good and bad photographs, which allows it to then be trained to edit raw images to improve them.
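The paper's code isn't reproduced here, but the adversarial setup the article describes can be sketched roughly as follows: an "editor" network learns to restore deliberately degraded photos, while a "critic" learns to tell restorations from untouched originals. This is a minimal sketch in PyTorch; the tiny fully connected models, image size, and training details are placeholder assumptions, not Google's architecture.

```python
import torch
import torch.nn as nn

IMG = 3 * 64 * 64  # flattened RGB image; placeholder size

# "Editor" restores degraded photos; "critic" scores real vs. restored.
editor = nn.Sequential(nn.Linear(IMG, 512), nn.ReLU(), nn.Linear(512, IMG))
critic = nn.Sequential(nn.Linear(IMG, 512), nn.ReLU(), nn.Linear(512, 1))

opt_e = torch.optim.Adam(editor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(originals, degraded):
    # 1) Train the critic: originals labeled "real" (1), restorations "fake" (0).
    restored = editor(degraded).detach()
    loss_c = (loss_fn(critic(originals), torch.ones(len(originals), 1)) +
              loss_fn(critic(restored), torch.zeros(len(restored), 1)))
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()

    # 2) Train the editor to fool the critic into scoring its output "real".
    restored = editor(degraded)
    loss_e = loss_fn(critic(restored), torch.ones(len(restored), 1))
    opt_e.zero_grad(); loss_e.backward(); opt_e.step()

# Example: one step on a random batch of 8 "photos" with simulated bad edits.
originals = torch.rand(8, IMG)
degraded = originals + 0.3 * torch.randn(8, IMG)
train_step(originals, degraded)
```

The key design point is the one the article highlights: because the tampering is automated, no human has to label which edits look good, yet the critic's feedback still teaches the editor generalized qualities of good photographs.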
To test whether its AI software was actually producing professional-grade images, Fang and his team used a Turing-test-like experiment. They asked professional photographers to grade the photos the network produced on a quality scale, while mixing in shots taken by humans. "Around two out of every five photos received a score on par with that of a semi-pro or pro," Fang says.
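The arithmetic behind that "two out of every five" claim is easy to reproduce. The scores below are invented for illustration, on an assumed four-level scale (1 = snapshot, 4 = professional); the study's actual data and scale aren't given in the article.

```python
# Hypothetical panel scores for ten AI-generated photos (1 = snapshot, 4 = pro).
scores = [1.8, 3.2, 2.4, 3.6, 1.5, 3.1, 2.2, 2.8, 2.9, 3.3]

SEMI_PRO = 3.0  # assumed cutoff for "semi-pro or better"
share = sum(s >= SEMI_PRO for s in scores) / len(scores)
print(f"{share:.0%} rated semi-pro or better")  # 40%, i.e. two in five
```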
"The Street View panoramas served as a testing bed for our project," Fang says. "Someday this technique might even help you to take better photos in the real world." The team compiled a gallery of photos its network created out of Street View images, and clicking on any one will pull up the section of Google Maps that it captures. Fang concludes with a neat thought experiment about capturing photos in the real world: "Would you make the same decision if you were there holding the camera at that moment?"
Testing Microsoft’s new AI for iPhone: Can this app really detect our ages and sense our moods? – GeekWire
Posted: July 14, 2017 at 5:14 am
Microsoft released a new app called Seeing AI for the iPhone this week. It's billed as a "talking camera for the blind," but it's also a showcase for the company's artificial intelligence technologies, bringing together several interesting features in one app.
Seeing AI can read short signs, scan barcodes, describe whats in a room, identify people, estimate their ages and genders, and guess their moods.
Pretty cool stuff, at least in theory. So how well does it work? We ran Seeing AI through its paces this week on our Geared Up podcast, and came away impressed with its capabilities.
Watch our hands-on preview of the Seeing AI app above, and listen to this week's full episode of Geared Up below, starting with a recap and review of Amazon Prime Day. Download the MP3 here, and follow Geared Up via Apple, RSS, YouTube, Facebook or Google Play Music.
Robots and AI are going to make social inequality even worse, says new report – The Verge
Posted: at 5:14 am
Most economists agree that advances in robotics and AI over the next few decades are likely to lead to significant job losses. But what's less often considered is how these changes could also impact social mobility. A new report from UK charity the Sutton Trust explains the danger, noting that unless governments take action, the next wave of automation will dramatically increase inequality within societies, further entrenching the divide between rich and poor.
There are a number of reasons for this, say the report's authors, including the ability of richer individuals to re-train for new jobs; the rising importance of soft skills like communication and confidence; and the reduction in the number of jobs used as stepping stones into professional industries.
For example, the demand for paralegals and similar professions is likely to be reduced over the coming years as artificial intelligence is trained to handle more administrative tasks. In the UK more than 350,000 paralegals, payroll managers, and bookkeepers could lose their jobs if automated systems can do the same work.
"Traditionally, jobs like these have been a vehicle for social mobility," Sutton Trust research manager Carl Cullinane tells The Verge. Cullinane says that for individuals who weren't able to attend university or get particular qualifications, semi-administrative jobs are often a way into professional industries. "But because they don't require more advanced skills they're likely to be vulnerable to automation," he says.
Similarly, as automation reduces the need for administrative skills, other attributes will become more sought after in the workplace. These include so-called soft skills like confidence, motivation, communication, and resilience. "It's long established that private schools put a lot of effort into making sure their pupils have those sorts of skills," says Cullinane. "And these will become even more important in a crowded labor market."
Re-training for new jobs will also become a crucial skill, and it's individuals from wealthier backgrounds who are more able to do so, says the report. This can already be seen in the disparity in post-graduate education, with individuals in the UK from working-class or poorer backgrounds far less likely to re-train after university.
The report, which was carried out by the Boston Consulting Group and published this Wednesday, looks specifically at the UK, where it says some 15 million jobs are at risk of automation. But the Sutton Trust says its findings are also relevant to other developed nations, particularly the US, where social mobility is a major problem.
Social mobility is already a big problem in America
One study in 2016 found that America has become significantly less conducive to social mobility over the past few decades. "It is increasingly the case that no matter what your educational background is, where you start has become increasingly important for where you end," one of the study's authors, Michael D. Carr, told The Atlantic last year. Another report found that around half of 30-year-olds in the US earn less than their parents did at the same age, compared to the 1970s, when almost 90 percent earned more.
It's important to note, though, that there is disagreement about how bad the impact of automation on the job market will be. Some reports have suggested that up to 50 percent of jobs in developed countries are at risk, while others point out that only specific tasks will be automated rather than whole professions. Economists also note that new categories of jobs are likely to be created, although exactly what, and how many, is impossible to accurately predict.
The Sutton Trust report also says that there is some reason to be optimistic about the coming wave of automation, particularly if governments can encourage people to train for STEM professions (those involving science, technology, engineering, and mathematics).
"From a social mobility perspective there are two important things about the STEM sector," says Cullinane of the UK job market. "Firstly, there doesn't seem to be a substantial gap in the income background of people taking STEM-related subjects, and secondly, there isn't a resulting pay gap for those who come from different backgrounds. If the STEM sector is going to be the main source of growth over the medium to long term, that's a real opportunity to leverage social mobility there."
Infosys eyes robotics, AI and driverless cars for next round of growth – Economic Times
Posted: at 5:14 am
NEW DELHI: Infosys CEO Vishal Sikka may have given a glimpse of his firm's future plans as it looks to score big on newer technologies to ramp up revenue.
Sikka arrived for the earnings briefing in a driverless car, completely developed by the firm's engineering services in Mysuru. "Who says we can't build transformative technologies," Sikka tweeted.
"The driverless car is the kind of technology we are strongly focussed on. If you go by our numbers, about 10% of our revenue has come from new technologies, services that did not exist 2 years ago. These are high-growth services and that's where our focus will be," Sikka said.
Sikka said the firm's attempt is to create a pool of thousands of engineers with capability to work on projects in artificial intelligence and tap business opportunities.
"Autonomous driving is something every automobile company will get into, and we are trying to build talent around this," Sikka said.