The Prometheus League
Breaking News and Updates
Category Archives: Artificial Intelligence
China Wants To Become Leader In Artificial Intelligence, Somehow – 24/7 Wall St.
Posted: July 21, 2017 at 12:16 pm
The Chinese government means to become a major leader in the world of artificial intelligence, via a massive investment in the sector. What the State Council, which disclosed the initiative, did not note is that it is up against U.S.-based tech giants like Amazon (NASDAQ: AMZN) and Microsoft (NASDAQ: MSFT). It also has to compete with R&D by the U.S. government and America's largest universities. The goal, based on those hurdles, is out of reach.
In an announcement, the People's Republic disclosed:
The State Council has issued a guideline on developing artificial intelligence (AI), setting a goal of becoming a global innovation center in this field by 2030. The total output value of artificial intelligence industries should surpass 1 trillion yuan ($147.80 billion). A mature theory and technology system should be formed. Developing AI is a complicated and systematic project according to the guideline. An open and coordinated AI innovation system should be constructed to develop not only the technology but also products and market. AI in China should be used to promote the country's technology, economy, social welfare, maintain national security, and contribute to the world. Breakthroughs should be made in basic theories of AI, such as big data intelligence, multimedia aware computing, human-machine hybrid intelligence, swarm intelligence and automated decision-making. Advanced theories which can potentially transform AI should also be looked at, including advanced machine learning, brain-like computing, and quantum intelligent computing. Trans-boundary research should be promoted to connect AI with other subjects, such as cognitive science, psychology, mathematics, and economics.
A look at Microsoft's recent earnings shows how advanced American companies are within the sector. For its fiscal fourth quarter, which ended June 30, Microsoft posted revenue of $23.3 billion, up from $20.6 billion in the same quarter a year ago. Net income rose to $6.5 billion from $3.1 billion. And much of the improvement was driven by revenue from cloud initiatives.
"Innovation across our cloud platforms drove strong results this quarter," said Satya Nadella, chief executive officer at Microsoft. "Customers are looking to Microsoft and our thriving partner ecosystem to accelerate their own digital transformations and to unlock new opportunity in this era of intelligent cloud and intelligent edge."
Granted, even if all Microsoft's revenue was from artificial intelligence and cloud operations, it would not come close to the Chinese goal. However, by most measures, Microsoft's revenue in the sector is well behind Amazon's, and close to sales at Alphabet (NASDAQ: GOOGL). A number of other U.S. companies also produce billions of dollars a year in cloud and artificial intelligence sales.
China has an ambitious goal, but it does not appear to be a realistic one.
By Douglas A. McIntyre
Read this article:
China Wants To Become Leader In Artificial Intelligence, Somehow - 24/7 Wall St.
UK government launches artificial intelligence inquiry – CNET
Posted: July 20, 2017 at 3:13 am
Facebook showed off some artificial intelligence at its F8 event.
The United Kingdom's government has some questions about artificial intelligence.
On Wednesday, the House of Lords announced a public call for experts to weigh in on issues surrounding AI, including its ethical, economic and social effects as the technology becomes more prevalent.
When you think about all the crazy things that AI can accomplish, like a sex robot with a "brain," yeah, we've got some questions too.
AI is already poised to take over jobs, as it has for an insurance company in Japan, but Britain's Parliament has concerns from all sides. Members of Parliament want to know who AI is helping the most, who it's hurting, what role the government should play, and how AI will look in the next 20 years.
"The Committee wants to use this inquiry to understand what opportunities may exist for society in the development and use of artificial intelligence, as well as what risks there might be," Lord Clement-Jones, chairman of the committee on AI, said in a statement.
Experts can submit their testimonies here. The deadline for entries is on Sept. 6.
Read more:
UK government launches artificial intelligence inquiry - CNET
Musk’s Warning Sparks Call For Regulating Artificial Intelligence – NPR
Posted: at 3:13 am
Artificial intelligence poses an existential risk to human civilization, Elon Musk (right) told the National Governors Association meeting Saturday in Providence, R.I. (Stephan Savoia/AP)
Elon Musk is warning that artificial intelligence is a "fundamental existential risk for human civilization," and Colorado Gov. John Hickenlooper is looking into how states can respond.
Musk, the Tesla and SpaceX CEO, made the remarks over the weekend at the National Governors Association meeting in Rhode Island. He has long warned of the threats he believes artificial intelligence will pose, from automation to apocalypse. Bill Gates, Stephen Hawking and others have also sounded warnings over AI.
"Of all the things that I heard over this weekend with the National Governors Association, this was the one that I've spent more time thinking about," says Hickenlooper, a Democrat.
Not everyone at the NGA meeting received Musk's comments as warmly as Hickenlooper. Republican Gov. Doug Ducey of Arizona told Musk: "As someone who's spent a lot of time in [my] administration trying to reduce and eliminate regulations, I was surprised by your suggestion to bring regulations before we know exactly what we're dealing with."
Colorado Gov. John Hickenlooper suggests that governors need to work together on possible solutions to problems like the potential threats posed by artificial intelligence. (Brennan Linsley/AP)
Other Silicon Valley thinkers are skeptical of Musk's doomsday prophesying. Yann LeCun, the head of AI at Facebook, told NPR's Aarti Shahani that humans are projecting when we predict Terminator-style robot takeovers. He says the "desire to dominate socially is not correlated with intelligence"; it's correlated with testosterone, "which AI systems won't have."
Hickenlooper spoke to NPR on Tuesday evening. Here are highlights from that interview.
On the mood in the room while Musk was speaking
You could have heard a pin drop. A couple of times he paused and it was totally silent. I felt like, I think a lot of us felt like, we were in the presence of Alexander Graham Bell or Thomas Alva Edison ... because he looks at things in such a different perspective.
On the threat that AI could pose
Right now we worry about cybersecurity and issues like that, but when you really have artificial intelligence at a great level, the weaponry and the ability to shut down whole parts of our cities, the ability to create such damage by turning off the electricity, or making sure there's no water ... everyone was spellbound. I mean, no one knew what to say.
On when government needs to step in
Usually what happens is something gets a little out of hand and then government begins to regulate. And [Musk] said, in this case, with artificial intelligence we need to get the regulations out well ahead of the problems appearing. Because it's going to happen so quickly that we need to have that anticipation and be working on it, because once you get to regulating something, everyone's got a self-interest, and it means taking away something from somebody who's already got it.
On how states can tackle such a big problem
Oftentimes, I think with the really difficult problems, and we're trying to do this with health care now, it is to look at getting a number of state governors, both Republicans and Democrats, to come together around a specific issue and what the possible solutions are and have the governors work through possible solutions, because so often we're the ones where the solution gets implemented.
Dave Blanchard is an editor with Morning Edition. You can follow him @blanchardd.
See more here:
Musk's Warning Sparks Call For Regulating Artificial Intelligence - NPR
Apple Just Got More Public About Its Artificial Intelligence Plans – Fortune
Posted: at 3:13 am
Apple is lifting the veil on some of its work in the red-hot field of artificial intelligence.
The consumer technology giant debuted a website Wednesday that highlights the company's various AI-related research projects. Named the Apple Machine Learning Journal, the site is pitched as a way for people to read about the work of the company's engineers on cutting-edge AI techniques like deep learning.
Ruslan Salakhutdinov, Apple's director of AI research, announced the debut of the website via Twitter. Apple hired Salakhutdinov, who is also a Carnegie Mellon University associate professor specializing in machine learning, in October, amid an increasing push by big tech companies like Google to hire AI experts from academia.
Apple currently lists only one research project on the new website, which appears to be a rewritten version of a research paper published last fall. Unlike the original research paper, the revised version now posted on the Apple Machine Learning Journal omits the names of the researchers who worked on it.
The website's first post shows how Apple is researching how to teach computers to recognize faces by training them on images of computer-generated faces rather than pictures of actual human faces. Apple, which has stricter user privacy rules than companies like Google and Facebook, could benefit from this research if it can create AI systems that do not require as much personal data for training.
With the new website, Apple joins other big tech companies like Google (GOOG), Facebook (FB), and Microsoft (MSFT) that maintain blogs highlighting how they are researching or using AI technologies in their products.
Besides functioning as a way to publicly demonstrate that these companies are using cutting-edge tech, these research blogs also function as recruiting tools for machine-learning specialists.
See original here:
Apple Just Got More Public About Its Artificial Intelligence Plans - Fortune
What artificial intelligence means for sustainability – GreenBiz
Posted: at 3:13 am
It's hard to open a newspaper these days without encountering an article on the arrival of artificial intelligence. Predictions about the potential of this new technology are everywhere.
Media hype aside, real evidence shows that artificial intelligence (AI) already drives a major shift in the global economy. You now use it in your day-to-day life, as you look to Netflix to recommend your next binge or ask Alexa to play music in your home. And the benefits of AI are driving the technologies into every corner of the global economy. Look, for example, at the number of times the largest U.S. companies mention artificial intelligence in their 10-K filings. (See chart below, which measures mentions of "artificial intelligence" and related words in 10-K filings of S&P companies, from 2011 to 2016.)
For all of the debate about the dawn of artificial intelligence, there is little talk about what AI means for sustainability.
Will AI mean a massive technological boost to sustainability priorities? Or will the rapid changes associated with AI give us a net negative sustainability outcome? By mining the narrative disclosures that companies make about their CSR activities, we can derive some insights into how AI is transforming corporate sustainability activity. Using keyword searches in ESG Trends, a dataset of corporate sustainability disclosures, we looked across thousands of CSR reports and CDP disclosures from large, global companies to see what, if anything, companies are disclosing about the impact of artificial intelligence. The analysis below, which measures mentions of AI in corporate sustainability reports and CDP filings, can help us start to answer the question: What does AI mean for sustainability?
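The keyword-mention methodology described here is straightforward to reproduce. Below is a minimal sketch of how such a count might work; the report texts and the keyword list are invented stand-ins, since the actual ESG Trends corpus is not reproduced here.

```python
import re
from collections import Counter

# Invented stand-ins for the disclosure corpus: in practice each entry
# would be the full text of a CSR report or CDP filing.
reports = {
    "Acme Utilities 2016 CSR": "We use artificial intelligence to cut emissions.",
    "Globex 2016 CDP filing": "Machine learning optimizes our cooling systems.",
    "Initech 2016 CSR": "No relevant disclosures this year.",
}

# Keywords analogous to "artificial intelligence and related words".
AI_TERMS = ["artificial intelligence", "machine learning",
            "deep learning", "neural network"]
pattern = re.compile("|".join(AI_TERMS), re.IGNORECASE)

mentions = Counter()
for name, text in reports.items():
    mentions[name] = len(pattern.findall(text))

for name, count in mentions.most_common():
    print(f"{name}: {count} AI-related mention(s)")
```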
What we see is that AI is already having an impact on corporate sustainability activity. Companies are already making use of AI to achieve step changes in, for example, efficiency and emissions reductions, and to innovate new products and services. These AI applications for sustainability are not widespread, and they are early stage, but the data suggests that AI can bring significant benefits for sustainability in the medium term. What we don't see, however, is much evidence that companies understand the numerous and serious risks that AI presents.
The vast majority of the mentions of artificial intelligence in CSR reports and CDP filings relate to how AI presents opportunities for companies. AI is helping the next generation of companies reduce their environmental and social impact by improving efficiency and developing new products.
We can look first at utility company Xcel Energy. When the company creates electricity from burning coal at its two plants in Texas, one major byproduct is a potent greenhouse gas called nitrous oxide. Nitrous oxide emissions contribute to climate change, as well as harming the ozone layer.
Recently, the company has received a little extra help in reducing its emissions from artificial intelligence. Xcel has equipped its smokestacks in Texas with neural networks, an advanced artificial intelligence that simulates a human brain. The neural network can quickly analyze the data that results from the complex dynamics of coal combustion. It then can make highly accurate recommendations about how to adjust the plant's operations to reduce nitrous oxide emissions and operate at peak efficiency. Neural networks have helped Xcel Energy and over a hundred other companies around the world reduce their nitrous oxide emissions. A report from the International Energy Agency estimated that artificial intelligence control systems such as Xcel Energy's neural networks could reduce nitrous oxide emissions by 20 percent.
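Xcel's actual control system is proprietary, but the pattern the article describes (fit a model to operating data, then search candidate settings for the lowest predicted emissions) can be sketched. Everything below, from the parameter names to the data, is invented for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Invented operating data: (air/fuel ratio, burner tilt) settings and the
# emissions measured under each. A real plant logs many more variables.
X = rng.uniform([0.8, -10.0], [1.2, 10.0], size=(500, 2))
y = (X[:, 0] - 1.05) ** 2 * 40 + (X[:, 1] / 10) ** 2 * 5 + rng.normal(0, 0.2, 500)

# Fit a small neural network as a surrogate model of the combustion process.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
model.fit(X, y)

# Search a grid of candidate settings and recommend the one with the
# lowest predicted emissions.
ratios = np.linspace(0.8, 1.2, 41)
tilts = np.linspace(-10.0, 10.0, 41)
grid = np.array([[r, t] for r in ratios for t in tilts])
best = grid[np.argmin(model.predict(grid))]
print(f"Recommended air/fuel ratio {best[0]:.2f}, burner tilt {best[1]:.1f} deg")
```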
Another example is Google. The search giant recently hit a wall in improving data center efficiency. The company had optimized its data center energy use to a point where engineers felt it could not be improved much more. Then one of its engineers had the idea of deploying a machine learning model, developed for another application, to assist in optimizing efficiency in its data centers.
Google deployed the artificial intelligence model to "learn" when and why certain processes occurred in the data center. Based on this data, Google's algorithms were able to identify options for significant additional savings. Google's application of AI has helped reduce the amount of energy used for cooling data centers by 40 percent, which is good for the company's bottom line and good for the planet.
Artificial intelligence is also enabling companies to develop new products and services that were unthinkable just a few years ago. In some of these cases, companies are deploying artificial intelligence directly to help them make progress on tough environmental and social challenges.
IBM, for example, is using its artificial intelligence expertise to improve weather forecasting and renewable energy predictions. The system, known as SMT, "uses machine learning, big data and analytics to continuously analyze, learn from and improve solar forecasts derived from a large number of weather models." Through the application of artificial intelligence and "cognitive computing," IBM can generate demand forecasts that are 30 percent more accurate. This type of forecasting can help utilities with large renewable installations better manage their energy load, maximize renewable energy production and reduce greenhouse gas emissions.
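The quoted description amounts to blending many imperfect forecasts into one better one. Here is a toy sketch of that idea with invented numbers; it has no connection to IBM's actual SMT implementation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Invented history: solar output forecasts from three weather models,
# plus the generation actually observed afterward.
n = 1000
truth = np.clip(rng.normal(50, 20, n), 0, None)      # MW actually produced
forecasts = np.column_stack([
    truth + rng.normal(0, 12, n),        # model A: unbiased but noisy
    0.8 * truth + rng.normal(0, 5, n),   # model B: biased low, less noise
    truth + 6 + rng.normal(0, 8, n),     # model C: consistent over-forecast
])

# Learn weights that blend the individual forecasts into a better one.
blender = LinearRegression().fit(forecasts, truth)

baseline = np.abs(forecasts[:, 0] - truth).mean()
blended = np.abs(blender.predict(forecasts) - truth).mean()
print(f"Mean error, single model: {baseline:.1f} MW; blended: {blended:.1f} MW")
```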
One of the best-known examples of artificial intelligence in action is in autonomous vehicles. Cars that drive themselves may offer a promising sustainability future: currently one-quarter of U.S. greenhouse gas emissions come from transportation. Machines will be more efficient at driving than humans. Engines in machine-driven cars can be smaller, using less gasoline. And autonomous vehicles can platoon together just inches from one another, improving efficiency and leaving more space on the road for cyclists, public transport or pedestrians. Google, Uber, Tesla, Ford, Nissan and other companies are working hard to develop self-driving cars.
It is not just tech companies that report sustainability-related opportunities from AI. Interserve, for example, a FTSE-listed construction company, builds and manages sensitive facilities, including schools, hospitals and clinical facilities, where operational safety is critical. The company uses real-time data to alert personnel when dangerous waterborne pathogens, such as Legionnaires' bacteria, develop. The company reported that it is exploring artificial intelligence to predict when these diseases will occur so it can fix issues before they develop, increasing safety and saving on maintenance costs.
Interserve's work, alongside that of Xcel Energy, Google, IBM and other companies, shows that AI has the potential to provide a major technological boost to help companies achieve sustainability goals.
However, AI applications for sustainability are in their infancy. Only a small percentage of the thousands of companies we analyzed mention artificial intelligence at all in their CSR disclosures. And as AI scales to create more sustainability opportunity, companies also will have to navigate the risks.
Judging from their official disclosures, companies are eager to embrace the opportunities presented by AI. They also appear remarkably unconcerned about the risks. In a review of more than 8,000 CSR reports and CDP disclosures over the last two years, we failed to find more than a handful of mentions of the risks that AI poses to companies.
One sustainability-related risk that AI poses is automated bias. Bias can happen when the machine learns to identify patterns in data and make recommendations based on, for example, race, gender or age. As AI algorithms do more analysis, companies must be diligent in ensuring that their algorithms analyze data and make predictions in a fair way.
For example, credit scoring companies such as TransUnion use artificial intelligence to analyze a variety of data points to determine creditworthiness. Undiagnosed bias in such algorithms could lead to poor credit scores for groups of people based in part on gender or race, which is expressly prohibited by law and could expose the company to legal claims. What is a company's policy toward algorithmic decisions? Are the company's algorithms certified by a third party to be bias-free? These are essential questions that companies should begin assessing and disclosing now.
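A first-pass audit for this kind of bias can be as simple as comparing outcome rates across groups. Below is a minimal sketch with invented decisions and a deliberately crude metric; real fairness audits use actual model outputs and much richer statistics.

```python
import numpy as np

# Invented example: a scoring model's approve/deny decisions alongside a
# protected attribute for each applicant.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0])
group     = np.array(["a", "a", "a", "a", "a", "a",
                      "b", "b", "b", "b", "b", "b"])

for g in np.unique(group):
    rate = decisions[group == g].mean()
    print(f"group {g}: approval rate {rate:.0%}")

# A large gap between groups (here 67% vs 33%) is a red flag that the
# algorithm may be encoding bias and warrants deeper investigation.
```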
Another risk from AI is that the sustainability benefits that companies tout, such as major efficiency breakthroughs and clean, self-driving cars, may not materialize, or may be offset by other consequences of AI.
For example, some studies suggest that the environmental benefits from self-driving cars may turn out to be mixed at best. Machines driving our cars may lead to people making more trips, which could lead to increases in emissions, not decreases.
Another major risk for the planet is that large-scale implementation of artificial intelligence may eat all of our jobs, leading to widespread unemployment. A recent report estimated that automation will replace 6 percent of U.S. jobs by 2021, with further job reductions coming in the medium term. A world without jobs presents a host of new, uncharted challenges for sustainability, few of which we can predict.
Artificial intelligence is already here. It will continue to gain in complexity and sophistication. It presents excellent opportunities for efficiencies and innovation, many of which were unthinkable just a few years ago.
Many of these innovations will allow us to make significant progress on the most difficult environmental and social problems facing humans. At the same time, these same efficiencies and innovations bring with them new risks, such as automated bias and large-scale job losses. More companies must quickly come to grips with both the sustainability opportunities and the risks that AI brings.
See the rest here:
What artificial intelligence means for sustainability - GreenBiz
Nvidia Faces Much Tougher Competition in Artificial Intelligence, but Will Still Be OK – TheStreet.com
Posted: at 3:13 am
Nvidia Corp. (NVDA) is set to face a much tougher competitive environment in the white-hot market for server co-processors used to power artificial intelligence projects, as the likes of Intel Corp. (INTC), AMD Inc. (AMD), Fujitsu and Alphabet Inc./Google (GOOGL) join the fray. But the ecosystem that the GPU giant has built in recent years, together with its big ongoing R&D investments, should allow it to remain a major player in this space.
It's a basic rule of economics that when a market sees a surge in demand that leads to a small number of suppliers amassing huge profits, more suppliers will enter in hopes of getting a chunk of those profits. That's increasingly the case for the server accelerator cards used for AI projects, as a surge in AI-related investments by enterprises and cloud giants contribute to soaring sales of Nvidia's Tesla server GPUs.
Thanks partly to soaring AI-related demand, Nvidia's Datacenter product segment saw revenue rise 186% annually in the company's April quarter to $409 million, after rising 205% in the January quarter. Growth like that doesn't go unnoticed. Over the last 12 months, several other chipmakers and one cloud giant have either launched competing chips or announced plans to do so.
To understand why some of these rival products could be competitive with Tesla GPUs on a raw price/performance basis, it's important to understand what made Nvidia's chips so popular for AI workloads in the first place. Whereas server CPUs, like their PC and mobile counterparts, feature a small number of relatively powerful CPU cores -- the most powerful chip in Intel's new Xeon Scalable server CPU line has 28 cores -- GPUs can feature thousands of smaller cores that work in parallel, and which have access to blazing-fast memory.
That gives GPUs a big edge for projects that involve a subset of AI known as deep learning. Deep learning involves training models that attempt to function much like the neurons in the human brain do in order to detect patterns in content such as voice, text and images, with the algorithms used by the models (like the human brain) getting better both at understanding these patterns as they take in more content and at applying what they've learned to future tasks. Once an algorithm has gotten good enough, it can be used against real-world content in an activity known as inference.
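The parallelism argument is easy to demonstrate even on a CPU. The sketch below times a neural-network-style matrix multiplication done row by row versus all at once; a GPU's thousands of cores extend the batched approach much further. The sizes here are arbitrary.

```python
import time
import numpy as np

# A neural-network layer is essentially one big matrix multiplication:
# every (input, output) pair is an independent multiply-accumulate, so
# the work parallelizes naturally.
x = np.random.rand(256, 1024).astype(np.float32)   # batch of inputs
w = np.random.rand(1024, 512).astype(np.float32)   # layer weights

t0 = time.perf_counter()
out_loop = np.empty((256, 512), dtype=np.float32)
for i in range(256):                 # serial: one input row at a time
    out_loop[i] = x[i] @ w
t1 = time.perf_counter()
out_vec = x @ w                      # parallel-friendly: whole batch at once
t2 = time.perf_counter()

assert np.allclose(out_loop, out_vec, rtol=1e-4)
print(f"row-by-row: {t1 - t0:.4f}s, batched matmul: {t2 - t1:.4f}s")
```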
Follow this link:
Nvidia Faces Much Tougher Competition in Artificial Intelligence, but Will Still Be OK - TheStreet.com
Could artificial intelligence disrupt the photography world? – TechRepublic
Posted: July 19, 2017 at 4:12 am
Scroll through some of the recent stories found on TechRepublic and you'll see the topic of artificial intelligence (AI) mentioned on several occasions. AI isn't something widely seen in action today, but the reality of its becoming more common is definitely on the lips and text editors of technologists. Can AI disrupt the world of photography? Will it eventually replace human input when it comes to processing photos? Anything is possible, but I truly doubt it.
In a recent blog post, a team at Google shared how its deep learning technology has been able to produce "professional quality" photo editing for a batch of landscape photos. In blind testing, pro photographers rated up to 40% of the images edited by AI as semi-pro or pro level quality. Quite frankly, some of the images published were quite nice, but is this enough to disrupt the world of photography? I don't think so. Disrupt the world of photography editing? Well, it could be useful, but not disruptive. Allow me to explain.
Let's think of a scenario that a photographer may face. First there's a scheduled photo shoot with a client. In general, the client will have ideas on what they're looking for in the session and the photographer works closely with the client to meet those needs. We'll just throw headshot sessions out the window and look more at product photography or photography based on a scene in our example. Now close your eyes, be the client, and think of an ad showing a boardroom setting. In any scenario, it's up to the client and photographer to determine the mood and message it wants presented in that boardroom photo shoot.
Is the message "Board meetings are serious and powerful"? Or is the message "Come together and collaborate"? Both messages can be answered from the same scene by making a few nuanced changes with lighting, the models' posture, facial expressions, and gestures, or even the props used within the scene. The client may not understand those concepts, but the photographer will. In this scenario, I can't say AI will aid in getting the client's message across. Right now, the AI used by Google isn't based on compositing or replacing props in a scene. A boardroom with a few bottles of water or cups of coffee does not give the same vibe as a boardroom with an open box of doughnuts and crumpled cans of energy drinks. AI isn't ready to replace the analytical skills a photographer brings to the set of a photo shoot.
In the editing process, the photographer and the AI share the same data. If a client were to upload an image into an AI system, it could easily take specified parameters to assist in the editing process. Keywords, and maybe even a brief description of what the client is looking for, are handy data. The AI could analyze the keywords against the uploaded image, proceed with editing to fit the client's needs, and display the result within minutes or even SECONDS as a preview. The client could then approve the image and download it for use.
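As a sketch of what the plumbing of such a system might look like: the function and keyword table below are hypothetical, and a real system would infer the edits with a trained model rather than a hand-written lookup.

```python
from PIL import Image, ImageEnhance

# Hypothetical keyword-to-adjustment table; stands in for a learned model.
KEYWORD_EDITS = {
    "warm":   lambda im: ImageEnhance.Color(im).enhance(1.2),
    "bright": lambda im: ImageEnhance.Brightness(im).enhance(1.15),
    "punchy": lambda im: ImageEnhance.Contrast(im).enhance(1.25),
}

def auto_edit(path, keywords):
    """Apply the edits suggested by the client's keywords; return a preview."""
    image = Image.open(path)
    for word in keywords:
        edit = KEYWORD_EDITS.get(word.lower())
        if edit:
            image = edit(image)
    return image

# Hypothetical usage:
# preview = auto_edit("boardroom.jpg", ["warm", "punchy"])
# preview.show()  # client approves, or requests changes
```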
But what if the client doesn't approve?
Speaking from experience, I've edited photos for clients who didn't always agree with my post processing, especially when dealing with humans in the images. "Can you make my neck look slimmer?" "Can you remove that small mole that's under my left eye?" Those are not outlandish requests and are pretty common because most people want aesthetically superior models in their photographs. On the other hand, some individuals have taken pride in or made a name for themselves around their imperfections. Think of the former NFL player Michael Strahan. Strahan has a gap between his two front teeth. With the gazillions of dollars he's earned as a professional football player, he could easily have gotten orthodontic care to correct the gap. He didn't. How will AI photo editing handle such situations? Sure, the machine can learn to touch up skin blemishes or imperfections, but to what extent? Will the AI understand the context of the edit or the subject matter better than a human?
When I hosted a Smartphone Photographers Community, we discussed how photos that tell a story are usually the photos that capture our emotions. It may not be the photo with the best exposure or color saturation, but when you see it, you stop to admire it. For example, one of the more iconic images of US history is the raising of the US flag at Iwo Jima. This image isn't technically sound. The exposure isn't quite right and the contrast could be increased. But at the end of the day, WHO CARES? It's an awesome photo capturing an emotional moment. Who's to say that running the image through post processing wouldn't have ruined it?
I think it would be tough for AI to know when and where to draw the line when it comes to post processing photos. Some photos need human intervention in the editing process to understand the mood and message the photo is supposed to convey, not just the adjusting of exposure or white balance. If a photo is just a run-of-the-mill landscape photograph, there just may be a place for AI photo editing. But even with that said, I'd much rather lean on the professional skills of landscape photographers, such as Trey Ratcliff or Thomas Heaton, who have a way of tugging at your emotions with their photography.
What are your thoughts about AI photo editing? Leave a comment below or tag me on Twitter with your thoughts.
Visit link:
Could artificial intelligence disrupt the photography world? - TechRepublic
Artificial Intelligence Experts Respond to Elon Musk’s Dire Warning for US Governors – Discover Magazine (blog)
Posted: at 4:12 am
If you hadn't heard, Elon Musk is worried about the machines.
Though that may seem a quixotic stance for the head of multiple tech companies to take, it seems that his proximity to the bleeding edge of technological development has given him the heebie-jeebies when it comes to artificial intelligence. He's shared his fears of AI running amok before, likening it to "summoning the demon," and Musk doubled down on his stance at a meeting of the National Governors Association this weekend, telling state leaders that AI poses an existential threat to humanity.
Amid a discussion of driverless vehicles and space exploration, Musk called for greater government regulations surrounding artificial intelligence research and implementation, stating:
"Until people see robots going down the street killing people, they don't know how to react because it seems so ethereal. AI is a rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it's too late," according to the MIT Tech Review.
It's far from delusional to voice such concerns, given that AI could one day reach the point where it becomes capable of improving upon itself, sparking a feedback loop of progress that takes it far beyond human capabilities. When we'll actually reach that point is anyone's guess, and we're not at all close at the moment, as today's footage of a security robot wandering blindly into a fountain makes clear.
While computers may be snapping up video game records and mastering poker, they cannot approximate anything like general intelligence: the broad reasoning skills that allow us to accomplish many variable tasks. This is why an AI that excels at a single task, like playing chess, fails miserably when asked to do something as simple as describe a chair.
To get some perspective on Musk's comments, Discover reached out to computer scientists and futurists working on the very kind of AI that the tech CEO warns about.
Elon Musk's obsession with AI as an existential threat for humanity is a distraction from the real concern about AI's impact on jobs and weapons systems. What the public needs is good information about the actual consequences of AI, both positive and negative. We have to distinguish between science and science fiction. In fictional accounts, AI is often cast as the bad guy, scheming to take over the world, but in reality AI is a tool, a technology, and one that has the potential to save many lives by improving transportation, medicine, and more. Instead of creating a new regulatory body, we need to better educate and inform people on what AI can and cannot do. We need research on how to build AI guardians: AI systems that monitor and analyze other AI systems to help ensure they obey our laws and values. The world needs AI for its benefits; AI needs regulation like the Pacific Ocean needs global warming.
Elon Musk's remarks are alarmist. I recently surveyed 300 leading AI researchers, and the majority of them think it will take at least 50 more years to get to machines as smart as humans. So this is not a problem that needs immediate attention.
And I'm not too worried about what happens when we get to super-intelligence, as there's a healthy research community working on ensuring that these machines won't pose an existential threat to humanity. I expect they'll have worked out precisely what safeguards are needed by then.
But Elon is right about one thing: We do need government to start regulating AI now. However, it is the stupid AI we have today that we need to start regulating. The biased algorithms. The arms race to develop killer robots, where stupid AI will be given the ability to make life or death decisions. The threat to our privacy as the tech companies get hold of all our personal and medical data. And the distortion of political debate that the internet is enabling.
The tech companies realize they have a problem, and they have made some efforts to avoid government regulation by beginning to self-regulate. But there are serious questions to be asked whether they can be left to do this themselves. We are witnessing an AI race between the big tech giants, investing billions of dollars in this winner-takes-all contest. Many other industries have seen government step in to prevent monopolies behaving poorly. I've said this in a talk recently, but I'll repeat it again: If some of the giants like Google and Facebook aren't broken up in twenty years' time, I'll be immensely worried for the future of our society.
There are no independent machine values; machine values are human values. If humanity is truly worried about the future impact of a technology, be it AI or energy or anything else, let's have all walks and voices of life be represented in developing and applying this technology. Every technologist has a role in making benevolent technology for bettering our society, no matter if it's Stanford, Google or Tesla. As an AI educator and technologist, my foremost hope is to see much more inclusion and diversity in both the development of AI as well as the dissemination of AI voices and opinions.
Artificial intelligence is already everywhere. Its ramifications rival those of the Internet, and actually reinforce them. AI is being embedded in almost every algorithm and system we're building now and in the future. There is an essential opportunity to prioritize ethical and responsible design for AI today. The greater immediate risk for AI and society, however, is the prioritization of exponential economic growth while ignoring environmental and societal issues.
In terms of whether Musk's warnings of existential threats regarding artificial super-intelligence merit immediate attention, we actually risk large-scale negative and unintended consequences because we're placing exponential growth and shareholder value above societal flourishing metrics as indicators of success for these amazing technologies.
To address these issues, every stakeholder creating AI must address issues of transparency, accountability and traceability in their work. They must ensure the safe and trusted access to and exchange of user data as encouraged by the GDPR (General Data Protection Regulation) in the EU. And they must prioritize human rights-centric well-being metrics like the UN Sustainable Development Goals as predetermined global metrics of success that can provably increase human prosperity.
The IEEE Global AI Ethics Initiative created Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems to pragmatically help any stakeholders creating these technologies to proactively deal with the general types of ethical issues Musk's concerns bring up. The group of over 250 global AI and ethics experts was also the inspiration behind the series of IEEE P7000 Standards (Model Process for Addressing Ethical Concerns During System Design) currently in progress, designed to create solutions to these issues in a global consensus-building process.
My biggest concern about AI is designing and proliferating the technology without prioritizing ethical and responsible design, or rushing to increase economic growth at a time when we so desperately need to focus on environmental and societal sustainability to avoid the existential risks we've already created without the help of AI. Humanity doesn't need to fear AI, as long as we act now to prioritize ethical and responsible design of it.
Elon Musk's concerns that AI will pose an existential threat to humanity are legitimate and should not be dismissed, but they concern developments that almost certainly lie in the relatively far future, probably at least 30 to 50 years from now, and perhaps much more.
Calls to immediately regulate or restrict AI development are misplaced for a number of reasons, perhaps most importantly because the U.S. is currently engaged in active competition with other countries, especially China. We cannot afford to fall behind in this critical race.
Additionally, worries about truly advanced AI taking over distract us from the much more immediate issues associated with progress in specialized artificial intelligence. These include the possibility of massive economic and social disruption as millions of jobs are eliminated, potential threats to privacy, the deployment of artificial intelligence in cybercrime and cyberwarfare, and the advent of truly autonomous military and security robots. None of these nearer-term developments relies on the development of the advanced super-intelligence that Musk worries about. They are a simple extrapolation of technology that already exists. Our immediate focus should be on addressing these far less speculative risks, which are highly likely to have a dramatic impact within the next two decades.
More here:
Artificial Intelligence Experts Respond to Elon Musk's Dire Warning for US Governors - Discover Magazine (blog)
Artificial intelligence is our chance to shoot for the moon – City A.M.
Posted: at 4:12 am
A common criticism of the recent election has been that the campaigns failed to address many of the big issues that will affect the futures of most people.
It's a fair point. I don't recall hearing one politician talk about the rapidly changing world of business and work, and what this country may look like in the future, or indeed what they intend to do about it.
As someone who works in the field of artificial intelligence (AI), you would expect me to pay close attention to political statements on such things. But my interest in what national leaders think is not purely a commercial one. I truly believe that AI can change our world for the better.
Read more: Meet the pair preparing London (and the world) for our AI future
However, despite its vast potential, I believe there is a key component that would allow AI to take off which is currently lacking: strategic ambition.
I have read frequently, including in these pages, that we are supposedly on the brink of a Fourth Industrial Revolution. But you would not know this from the way that many in government are speaking.
Two cross-party groups have recently been formed on the subject of AI, and that is encouraging. But given the silence from Whitehall and Downing Street, I am forced to wonder whether their work is being taken seriously.
The simple truth is that the UK will not be able to lead, and certainly not change the world, without a clear national strategy.
I ask you to consider the Space Race for a moment. This was a period of international technological innovation that spurred many of the things that we take for granted today, from powerful computers and commercial flights to satellite television and non-stick frying pans.
There was a clear objective: get into space and land on the moon. That entailed exploiting the latest technology to get there first. This competition stimulated the United States and Russia to strive for technological breakthroughs.
The result was that the Space Race opened a new era in technology, fuelling innovations that have improved the daily lives of millions of people. AI can do the same today.
It is time for the UK to shoot for the moon. That way it can and will be a leader in this new landscape. If it does not, it will merely be a guest at someone else's party.
The government urgently needs to start thinking about not just how to improve what we do already, but about what we can do differently. It needs to set out its strategic vision for a new society, taking into account automation, the gig economy, and the increasingly interconnected nature of work.
This isn't about how we confront the difficulties of developing AI; it's about how we use AI to solve the problems facing our country, supporting the economy and improving lives in the process. These are challenges that require a new type of intelligence.
What is intelligence? Intelligence is using knowledge effectively.
Through technology we now have a wealth of knowledge that we did not have before. We have data, vast amounts of it. But what we have often lacked so far is a way of making it useful.
AI is the key to making that knowledge useful.
Government departments sit on information that provides the clues to how society ticks. The data is there on our health, the benefits we draw upon, the types of transport we take, and the taxes we pay.
In isolation, this information is only useful to the department that holds it. But if aggregated and integrated in the right way, it can create intelligent solutions to society's problems.
A country that wants to lead in this area needs a software platform to bring the datasets together. It needs the right people, who know how to ask the data the right questions, and the right AI to provide them with answers.
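In code terms, the aggregation step is a join of departmental records on a shared identifier. Here is a deliberately tiny sketch with invented data; real government data integration would involve far stricter privacy and anonymization controls than shown.

```python
import pandas as pd

# Invented miniature datasets standing in for two departments' records,
# keyed by an anonymized citizen ID.
health = pd.DataFrame({
    "citizen_id": [1, 2, 3, 4],
    "gp_visits_per_year": [1, 9, 2, 7],
})
transport = pd.DataFrame({
    "citizen_id": [1, 2, 3, 4],
    "commute_minutes": [20, 75, 25, 80],
})

combined = health.merge(transport, on="citizen_id")

# A question the combined data can now answer that neither department
# could answer alone:
print(combined[["gp_visits_per_year", "commute_minutes"]].corr())
```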
Britain needs to be clear and united on what it wants AI to achieve so that it can make the most of the data it has at its disposal. This may sound obvious, but often governments create a policy for a perceived problem, then look for data to prove that policy will work.
In addition, the government needs to design a regulatory framework that sets out the rules of engagement for developing AI solutions that benefit society.
Ideally we want a framework that drives us all towards a common goal of making business and public services function better, and confronts the concerns about privacy and data security that prevent people from embracing AI's potential.
If we act fast, and all pull in the same direction, this is an area in which the UK can really shoot for the moon.
Read more: Artificial intelligence could boost the UK's household spending power
The rest is here:
Artificial intelligence is our chance to shoot for the moon - City A.M.
What an Artificial Intelligence Researcher Fears About AI – Government Technology
Posted: at 4:12 am
As an artificial intelligence researcher, I often come across the idea that many people are afraid of what AI might bring. It's perhaps unsurprising, given both history and the entertainment industry, that we might be afraid of a cybernetic takeover that forces us to live locked away, Matrix-like, as some sort of human battery.
And yet it is hard for me to look up from the evolutionary computer models I use to develop AI, to think about how the innocent virtual creatures on my screen might become the monsters of the future. Might I become "the destroyer of worlds," as Oppenheimer lamented after spearheading the construction of the first nuclear bomb?
I would take the fame, I suppose, but perhaps the critics are right. Maybe I shouldn't avoid asking: As an AI expert, what do I fear about artificial intelligence?
The HAL 9000 computer, dreamed up by science fiction author Arthur C. Clarke and brought to life by movie director Stanley Kubrick in 2001: A Space Odyssey, is a good example of a system that fails because of unintended consequences. In many complex systems (the RMS Titanic, NASA's space shuttle, the Chernobyl nuclear power plant), engineers layer many different components together. The designers may have known well how each element worked individually, but didn't know enough about how they all worked together.
That resulted in systems that could never be completely understood, and that could fail in unpredictable ways. In each disaster (sinking a ship, blowing up two shuttles and spreading radioactive contamination across Europe and Asia), a set of relatively small failures combined to create a catastrophe.
I can see how we could fall into the same trap in AI research. We look at the latest research from cognitive science, translate that into an algorithm and add it to an existing system. We try to engineer AI without understanding intelligence or cognition first.
Systems like IBM's Watson and Google's Alpha equip artificial neural networks with enormous computing power, and accomplish impressive feats. But if these machines make mistakes, they lose on Jeopardy! or don't defeat a Go master. These are not world-changing consequences; indeed, the worst that might happen to a regular person as a result is losing some money betting on their success.
But as AI designs get even more complex and computer processors even faster, their skills will improve. That will lead us to give them more responsibility, even as the risk of unintended consequences rises. We know that to err is human, so it is likely impossible for us to create a truly safe system.
I'm not very concerned about unintended consequences in the types of AI I am developing, using an approach called neuroevolution. I create virtual environments and evolve digital creatures and their brains to solve increasingly complex tasks. The creatures' performance is evaluated; those that perform the best are selected to reproduce, making the next generation. Over many generations these machine-creatures evolve cognitive abilities.
Right now we are taking baby steps to evolve machines that can do simple navigation tasks, make simple decisions, or remember a couple of bits. But soon we will evolve machines that can execute more complex tasks and have much better general intelligence. Ultimately we hope to create human-level intelligence.
Along the way, we will find and eliminate errors and problems through the process of evolution. With each generation, the machines get better at handling the errors that occurred in previous generations. That increases the chances that we'll find unintended consequences in simulation, which can be eliminated before they ever enter the real world.
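The evaluate-select-reproduce loop described above is simple to sketch. The toy task below (evolving a linear controller toward a known target mapping) stands in for the navigation and memory tasks; only the loop structure reflects the approach described, not the richness of real evolved digital brains.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented stand-in task: evolve weights so the controller's output
# matches a target mapping on sample inputs.
def fitness(weights, inputs, targets):
    predictions = inputs @ weights
    return -np.mean((predictions - targets) ** 2)  # higher is better

inputs = rng.normal(size=(64, 4))
true_w = np.array([1.5, -2.0, 0.5, 3.0])
targets = inputs @ true_w

population = rng.normal(size=(50, 4))              # generation zero
for generation in range(200):
    scores = np.array([fitness(w, inputs, targets) for w in population])
    elite = population[np.argsort(scores)][-10:]   # select the best performers
    # Reproduce with mutation: children are noisy copies of the elite.
    population = np.repeat(elite, 5, axis=0) + rng.normal(0, 0.1, size=(50, 4))

best = population[np.argmax([fitness(w, inputs, targets) for w in population])]
print("evolved weights:", np.round(best, 2), "target:", true_w)
```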
Another possibility that's farther down the line is using evolution to influence the ethics of artificial intelligence systems. It's likely that human ethics and morals, such as trustworthiness and altruism, are a result of our evolution and factor in its continuation. We could set up our virtual environments to give evolutionary advantages to machines that demonstrate kindness, honesty and empathy. This might be a way to ensure that we develop more obedient servants or trustworthy companions and fewer ruthless killer robots.
While neuroevolution might reduce the likelihood of unintended consequences, it doesn't prevent misuse. But that is a moral question, not a scientific one. As a scientist, I must follow my obligation to the truth, reporting what I find in my experiments, whether I like the results or not. My focus is not on determining whether I like or approve of something; it matters only that I can unveil it.
Being a scientist doesn't absolve me of my humanity, though. I must, at some level, reconnect with my hopes and fears. As a moral and political being, I have to consider the potential implications of my work and its potential effects on society.
As researchers, and as a society, we have not yet come up with a clear idea of what we want AI to do or become. In part, of course, this is because we don't yet know what it's capable of. But we do need to decide what the desired outcome of advanced AI is.
One big area people are paying attention to is employment. Robots are already doing physical work like welding car parts together. One day soon they may also do cognitive tasks we once thought were uniquely human. Self-driving cars could replace taxi drivers; self-flying planes could replace pilots.
Instead of getting medical aid in an emergency room staffed by potentially overtired doctors, patients could get an examination and diagnosis from an expert system with instant access to all medical knowledge ever collected and get surgery performed by a tireless robot with a perfectly steady hand. Legal advice could come from an all-knowing legal database; investment advice could come from a market-prediction system.
Perhaps one day, all human jobs will be done by machines. Even my own job could be done faster, by a large number of machines tirelessly researching how to make even smarter machines.
In our current society, automation pushes people out of jobs, making the people who own the machines richer and everyone else poorer. That is not a scientific issue; it is a political and socioeconomic problem that we as a society must solve. My research will not change that, though my political self, together with the rest of humanity, may be able to create circumstances in which AI becomes broadly beneficial instead of increasing the discrepancy between the one percent and the rest of us.
There is one last fear, embodied by HAL 9000, the Terminator and any number of other fictional superintelligences: If AI keeps improving until it surpasses human intelligence, will a superintelligence system (or more than one of them) find it no longer needs humans? How will we justify our existence in the face of a superintelligence that can do things humans could never do? Can we avoid being wiped off the face of the Earth by machines we helped create?
If this guy comes for you, how will you convince him to let you live? (tenaciousme, CC BY)
The key question in this scenario is: Why should a superintelligence keep us around?
I would argue that I am a good person who might have even helped to bring about the superintelligence itself. I would appeal to the compassion and empathy that the superintelligence has to keep me, a compassionate and empathetic person, alive. I would also argue that diversity has a value all in itself, and that the universe is so ridiculously large that humankind's existence in it probably doesn't matter at all.
But I do not speak for all humankind, and I find it hard to make a compelling argument for all of us. When I take a sharp look at us all together, there is a lot wrong: We hate each other. We wage war on each other. We do not distribute food, knowledge or medical aid equally. We pollute the planet. There are many good things in the world, but all the bad weakens our argument for being allowed to exist.
Fortunately, we need not justify our existence quite yet. We have some time: somewhere between 50 and 250 years, depending on how fast AI develops. As a species we can come together and come up with a good answer for why a superintelligence shouldn't just wipe us out. But that will be hard: Saying we embrace diversity and actually doing it are two different things, as are saying we want to save the planet and successfully doing so.
We all, individually and as a society, need to prepare for that nightmare scenario, using the time we have left to demonstrate why our creations should let us continue to exist. Or we can decide to believe that it will never happen, and stop worrying altogether. But regardless of the physical threats superintelligences may present, they also pose a political and economic danger. If we don't find a way to distribute our wealth better, we will have fueled capitalism with artificial intelligence laborers serving only the very few who possess all the means of production.
Arend Hintze, Assistant Professor of Integrative Biology & Computer Science and Engineering, Michigan State University
This article was originally published on The Conversation. Read the original article.
See the article here:
What an Artificial Intelligence Researcher Fears About AI - Government Technology