The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard De Chardin
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Category Archives: Ai
Baidu, Samsung Electronics Announce Production of its Cloud-to-Edge AI Accelerator to Start Early 2020 – HPCwire
Posted: December 18, 2019 at 8:44 pm
BEIJING and SEOUL, South Korea, Dec. 18, 2019: Baidu, a leading Chinese-language Internet search provider, and Samsung Electronics, a world leader in advanced semiconductor technology, today announced that Baidu's first cloud-to-edge AI accelerator, Baidu KUNLUN, has completed development and will be mass-produced early next year.
The Baidu KUNLUN chip is built on the company's advanced XPU, a home-grown neural processor architecture for cloud, edge, and AI, as well as Samsung's 14-nanometer (nm) process technology with its I-Cube (Interposer-Cube) package solution.
The chip offers 512 gigabytes per second (GBps) of memory bandwidth and supplies up to 260 tera operations per second (TOPS) at 150 watts. In addition, the new chip allows Ernie, a pre-training model for natural language processing, to run inference three times faster than on conventional GPU/FPGA accelerators.
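For a rough sense of scale, the announced figures imply an efficiency of roughly 1.7 TOPS per watt. A quick back-of-the-envelope check, using only the numbers from the announcement:

```python
# Back-of-the-envelope efficiency from the announced KUNLUN specs.
peak_tops = 260    # tera operations per second (announced peak)
power_watts = 150  # announced power draw

print(f"{peak_tops / power_watts:.2f} TOPS/W")  # ~1.73 TOPS/W
```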
Leveraging the chip's limit-pushing computing power and power efficiency, Baidu can effectively support a wide variety of functions, including large-scale AI workloads such as search ranking, speech recognition, image processing, natural language processing, and autonomous driving, and deep learning platforms like PaddlePaddle.
Through the first foundry cooperation between the two companies, Baidu will provide advanced AI platforms for maximizing AI performance, and Samsung will expand its foundry business into high performance computing (HPC) chips that are designed for cloud and edge computing.
"We are excited to lead the HPC industry together with Samsung Foundry," said OuYang Jian, Distinguished Architect of Baidu. "Baidu KUNLUN is a very challenging project, since it requires not only a high level of reliability and performance at the same time but also a compilation of the most advanced technologies in the semiconductor industry. Thanks to Samsung's state-of-the-art process technologies and competent foundry services, we were able to meet and surpass our goal of offering a superior AI user experience."
"We are excited to start a new foundry service for Baidu using our 14nm process technology," said Ryan Lee, vice president of Foundry Marketing at Samsung Electronics. "Baidu KUNLUN is an important milestone for Samsung Foundry as we're expanding our business area beyond mobile to datacenter applications by developing and mass-producing AI chips. Samsung will provide comprehensive foundry solutions, from design support to cutting-edge manufacturing technologies such as 5LPE and 4LPE, as well as 2.5D packaging."
As higher performance is required in diverse applications such as AI and HPC, chip integration technology is becoming more and more important. Samsung's I-Cube technology, which connects a logic chip and High Bandwidth Memory 2 (HBM2) with an interposer, provides higher density and bandwidth in a minimal footprint by utilizing Samsung's differentiated solutions.
Compared to previous technology, these solutions maximize product performance with more than 50% improved power and signal integrity. It is anticipated that I-Cube technology will mark a new epoch in the heterogeneous computing market. Samsung is also developing more advanced packaging technologies, such as redistribution layer (RDL) interposers and 4x and 8x HBM integrated packages.
About Samsung Electronics Co., Ltd.
Samsung Electronics inspires the world and shapes the future with transformative ideas and technologies. The company is redefining the worlds of TVs, smartphones, wearable devices, tablets, digital appliances, network systems, and memory, system LSI, foundry and LED solutions. For the latest news, please visit the Samsung Newsroom at http://news.samsung.com.
About Baidu
Baidu, Inc. is the leading Chinese-language Internet search provider. Baidu aims to make the complicated world simpler through technology. Baidu's ADSs trade on the NASDAQ Global Select Market under the symbol BIDU. Currently, ten ADSs represent one Class A ordinary share.
Source: Samsung Electronics Co., Ltd.
Tech connection: To reach patients, pharma adds AI, machine learning and more to its digital toolbox – FiercePharma
Posted: at 8:44 pm
Pharma's desire to build direct relationships with patients isn't new. But even as rapidly changing technology makes those connections more possible than ever, it's also making them more important.
Opt-in health apps. 24/7 call centers that depend on machine learning. Voice-enabled artificial intelligence that helps manage chronic conditions. Digital therapeutics with automated reporting. They're just a few of the tech tools becoming indispensable in pharma marketing, and not just because of the value those tools offer patients.
It's also because the data and analytics those tools provide are important as pharma companies shift to more patient-centric businesses.
Astellas, for instance, hired its first senior vice president of patient centricity from Sanofi, where he spent eight years creating a system that integrates patient and physician perspectives into the drug discovery and development process.
Emerging digital tools have also become important marketing devices that can convey a pharma brand's personality.
Take Reckitt Benckiser's Mucinex Halloween TikTok videos. The brand translated its zombie-themed TV ad campaign for new product NightShift into a TikTok challenge promotion called #TooSickToBeSick and racked up more than 400 million views in just five days. Almost as importantly, it drummed up credibility with a young, hip audience of influencers.
Another example is Eisai's voice-enabled play and meditation skill called Ella the Jellyfish, created for children with Lennox-Gastaut syndrome and their families. The skill can sing, play games, tell stories and offer guided meditations, providing friendly support for a challenging rare disease.
And although the word "relationship" is often used in regard to pharma's emerging connections with patients, that may not be exactly the right term, said Syneos Health Managing Director of Insights and Innovation Leigh Householder.
"It's not a relationship in the sense of what loyalty looks like in other categories, like airlines," she said. "In pharma, it looks more like what you see from really good health insurers, who are able to know enough about you to find those moments when a nudge or reconnect or their next product would be very useful in your life. Instead of relationship, maybe we could just say person-level relevance."
Whatever it's called, the success in creating those connections means the industry should expect even more digital tools and optimization from pharma in 2020.
Kendalle Burlin O'Connell, chief operating officer at life science nonprofit MassBio, said: "The rise of mobile apps has created a new age of patient engagement that I expect will grow in 2020. We'll see increased app development from both providers and manufacturers to track medication adherence, relay updates between patients and physicians regarding care, and disseminate real-time data that captures the full patient journey."
Introducing AI to the Back Office: Does the Tech Measure Up to the Hype? – www.waterstechnology.com
Posted: at 8:44 pm
This article was paid for by a contributing third party.
Throughout 2019, artificial intelligence (AI) has been one of the most prominent buzzwords in the financial technology space. AI has promised enhanced accuracy and improved efficiencies, allowing staff to focus on higher-value tasks; it truly has the potential to revolutionize the back office.
So, what's stopping capital markets firms from taking the leap?
This webinar identifies firms already using AI across their back offices, the benefits of doing so and the challenges they face.
Instagram Touts Anti-Bullying AI Created to Curb Offensive Speech – NewsBusters
Posted: at 8:44 pm
It's the future you probably didn't ask for: being nagged by artificial intelligence to stop being offensive and bullying.
Instagram touted its new anti-bullying artificial intelligence program in its Dec. 16 blog about the social media giant's "long-term commitment to lead the fight against online bullying." Instagram claims the AI program "notifies people when their captions on a photo or video may be considered offensive, and gives them a chance to pause and reconsider their words before posting."
Instagram originally announced this new AI, which preempts offensive posts, in a July 8 blog headlined "Our Commitment to Lead the Fight Against Online Bullying." The Big Tech photo-sharing giant wrote that the program gives users "a chance to reflect and undo their comment and prevents the recipient from receiving the harmful comment notification." The blog added, "From early tests of this feature, we have found that it encourages some people to undo their comment and share something less hurtful once they have had a chance to reflect."
Instagram has been experimenting with tackling the issue of bullying for quite some time now. Previously this took the form of a content filter created to help keep Instagram "a safe place for self-expression" by blocking offensive comments. Instagram CEO and co-founder Kevin Systrom wrote in a June 2017 blog that "we've developed a filter that will block certain offensive comments on posts and in live video," further specifying that the content filter was intended to foster "kind, inclusive communities on Instagram." This filter program came to fruition in May 2018 with a follow-up blog proclaiming that Instagram would "filter bullying comments intended to harass or upset people" in order to keep the platform "an inclusive, supportive place."
Instagram followed this filter with a separate AI program that anticipates users' offensive posts rather than merely filtering them retroactively.
Providing an update on the AI program, Instagram wrote that the "[r]esults have been promising, and we've found that these types of nudges can encourage people to reconsider their words when given a chance."
The program is initially being rolled out in select countries, though it will soon be "expanding globally in the coming months," noted Instagram in its blog.
The process, as Instagram explains it, is that when an Instagram user writes a caption on a post "and our AI detects the caption as potentially offensive, they will receive a prompt informing them that their caption is similar to those reported for bullying." Users will then have the opportunity to change their caption before posting it.
How serious are these warnings? What is the price of not heeding them? According to the recent blog, "In addition to limiting the reach of bullying, this warning helps educate people on what we don't allow on Instagram, and when an account may be at risk of breaking our rules."
The example of one such offensive comment shown in the blog was a user commenting "youre stupid" before being sent the notification, which read: "This caption looks similar to others that have been reported." The question remains as to what constitutes bullying, what constitutes a critique, and what potential biases lead the AI to classify various comments as offensive or bullying.
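Instagram has not published how its classifier works, but the behavior it describes, flagging a caption that "looks similar to others that have been reported," matches a standard text-similarity setup. A minimal illustrative sketch follows; the reported captions, the threshold, and the TF-IDF approach are all assumptions, not Instagram's actual system:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical captions previously reported for bullying.
reported = ["youre stupid", "you are so dumb", "nobody likes you"]
vectorizer = TfidfVectorizer().fit(reported)
reported_vecs = vectorizer.transform(reported)

def flag_caption(caption: str, threshold: float = 0.6) -> bool:
    """True if the caption closely resembles a previously reported one."""
    similarity = cosine_similarity(vectorizer.transform([caption]), reported_vecs)
    return similarity.max() >= threshold

print(flag_caption("youre stupid"))  # True: matches a reported caption
print(flag_caption("great photo!"))  # False: nothing similar on record
```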
But how can a computer program be biased? Rep. Alexandria Ocasio-Cortez explained this in a way that the left may find difficult to debunk. She accused algorithms of potentially being rife with bias while speaking at an MLK Now event in January. She claimed that algorithms "always have these racial inequities that get translated... if you don't fix the bias, then you're just automating the bias." LiveScience backed up Ocasio-Cortez's claim by citing an example about facial recognition. It wrote that if a program "is being trained to recognize women in photographs, and all the images it is given are of women with long hair, then it will think anyone with short hair is a man."
If Instagram's algorithm is trained to see mere disagreement as a form of bullying, or fact-checking by opposing political figures as offensive, then such content will be categorized accordingly. This has scary implications for current American politics.
Instagram may have done something similar already, when it protected Sen. Elizabeth Warren (D-MA) from critique in February 2019. GOP spokeswoman Kayleigh McEnany tweeted, "I have been warned by @instagram and cannot operate my account because I posted an image of Elizabeth Warren's Bar of Texas registration form via @washingtonpost. I'm warned that I am harassing, bullying, and blackmailing her."
Later, as reported by the Daily Caller, Instagram reinstated McEnany's account and sent an apology, saying that it had mistaken the post for sharing her private address.
AI has bested chess and Go, but it struggles to find a diamond in Minecraft – The Verge
Posted: December 13, 2019 at 3:24 pm
Whether we're learning to cook an omelet or drive a car, the path to mastering new skills often begins by watching others. But can artificial intelligence learn the same way? A new challenge teaching AI agents to play Minecraft suggests it's much trickier for computers.
Announced earlier this year, the MineRL competition asked teams of researchers to create AI bots that could successfully mine a diamond in Minecraft. This isn't an impossible task, but it does require a mastery of the game's basics. Players need to know how to cut down trees, craft pickaxes, and explore underground caves while dodging monsters and lava. These are the sorts of skills that most adults could pick up after a few hours of experimentation or learn much faster by watching tutorials on YouTube.
But of the 660 entries in the MineRL competition, none were able to complete the challenge, according to results that will be announced at the AI conference NeurIPS and that were first reported by BBC News. Although bots were able to learn intermediary steps, like constructing a furnace to make durable pickaxes, none successfully found a diamond.
"The task we posed is very hard," Katja Hofmann, a principal researcher at Microsoft Research, which helped organize the challenge, told BBC News. "While no submitted agent has fully solved the task, they have made a lot of progress and learned to make many of the tools needed along the way."
This may be a surprise, especially when you consider that AI has managed to best humans at games like chess, Go, and Dota 2. But it reflects important limitations of the technology, as well as restrictions put in place by MineRL's judges to really challenge the teams.
The bots in MineRL had to learn using a combination of methods known as imitation learning and reinforcement learning. In imitation learning, agents are shown data of the task ahead of them, and they try to imitate it. In reinforcement learning, they're simply dumped into a virtual world and left to work things out for themselves using trial and error.
Often, AI is only able to take on big challenges by combining these two methods. The famous AlphaGo system, for example, first learned to play Go by being fed data of old games. It then honed its skills and surpassed all humans by playing itself over and over.
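In outline, that two-stage recipe looks something like the toy sketch below: copy actions from demonstrations first, then refine by trial and error against a reward signal. Everything here (the observations, actions, and reward function) is a made-up stand-in; real MineRL entries trained neural-network policies on pixel observations:

```python
import random

ACTIONS = ["chop", "craft", "mine"]

class Policy:
    """Toy observation -> action lookup standing in for a neural policy."""
    def __init__(self):
        self.table = {}
    def act(self, obs):
        return self.table.get(obs, random.choice(ACTIONS))

policy = Policy()

# Stage 1: imitation learning -- copy actions seen in human demonstrations.
demos = [("tree_ahead", "chop"), ("have_logs", "craft"), ("underground", "mine")]
for obs, action in demos:
    policy.table[obs] = action

# Stage 2: reinforcement learning -- refine by trial and error on a reward.
def reward(obs, action):
    return 1.0 if (obs, action) in demos else 0.0  # stand-in reward signal

for _ in range(100):
    obs = random.choice([o for o, _ in demos])
    action = policy.act(obs)
    if reward(obs, action) == 0.0:
        policy.table[obs] = random.choice(ACTIONS)  # crude exploration step
```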
The MineRL bots took a similar approach, but the resources available to them were comparatively limited. While AI agents like AlphaGo are created with huge datasets, powerful computer hardware, and the equivalent of decades of training time, the MineRL bots had to make do with just 1,000 hours of recorded gameplay to learn from, a single Nvidia graphics processor to train with, and just four days to get up to speed.
It's the difference between the resources available to an MLB team (coaches, nutritionists, the finest equipment money can buy) and what a Little League squad has to make do with.
It may seem unfair to hamstring the MineRL bots in this way, but these constraints reflect the challenges of integrating AI into the real world. While bots like AlphaGo certainly push the boundary of what AI can achieve, very few companies and research labs can match the resources of Google-owned DeepMind.
The competition's lead organizer, Carnegie Mellon University PhD student William Guss, told BBC News that the challenge was meant to show that not every AI problem should be solved by throwing computing power at it. This mindset, said Guss, "works directly against democratizing access to these reinforcement learning systems, and leaves the ability to train agents in complex environments to corporations with swathes of compute."
So while AI may be struggling in Minecraft now, when it cracks this challenge, it'll hopefully deliver benefits to a wider audience. Just don't think about those poor Minecraft YouTubers who might be out of a job.
AI for Peace – War on the Rocks
Posted: at 3:24 pm
This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It addresses the fourth question (part a.), which asks what international norms for artificial intelligence the United States should lead in developing, and whether it is possible to create mechanisms for the development and enforcement of AI norms.
In 1953, President Dwight Eisenhower asked the world to join him in building a framework for Atoms for Peace. He made the case for a global agreement to prevent the spread of nuclear weapons while also sharing the peaceful uses of nuclear technology for power, agriculture, and medicine. No one would argue the program completely prevented the spread of weapons technology: India and Pakistan used technology gained through Atoms for Peace in their nascent nuclear weapons programs. But it made for a safer world by paving the way for a system of inspections and controls on nuclear facilities, including the establishment of the International Atomic Energy Agency and, later, the widespread ratification of the Treaty on the Nonproliferation of Nuclear Weapons (NPT). These steps were crucial for building what became known as the nuclear nonproliferation regime.
The world stands at a similar juncture today, at the dawn of the age of artificial intelligence (AI). The United States should apply lessons from the 70-year history of governing nuclear technology by building a framework for governing military AI technology.
What would AI for Peace look like? The nature of AI is different from that of nuclear technology, but some of the principles that underpinned the nonproliferation regime can be applied to combat the dangers of AI. Government, the private sector, and academia can work together to bridge national divides. Scientists and technologists, not just traditional policymakers, will be instrumental in providing guidance about how to govern new technology. At a diplomatic level, sharing the peaceful benefits of technology can encourage countries to open themselves up to inspection and controls. And even countries that are competitors can cooperate to establish norms to prevent the spread of technology that would be destabilizing.
AI for Peace could go beyond current efforts by involving the private sector from the get-go and identifying the specific dangers AI presents and the global norms that could prevent them (e.g., what does meaningful human control over smart machines mean in specific contexts?). It would also go beyond Department of Defense initiatives to build norms by encompassing peaceful applications. Finally, it would advance the United States' historic role as a leader in forging global consensus.
The Dangers of Artificial Intelligence
The uncertainty surrounding AI's long-term possibilities makes it difficult to regulate, but the potential for chaos is more tangible. It could be used to inflict catastrophic kinetic, military, and political damage. AI-assisted weapons are essentially very smart machines that can find hidden targets more quickly and attack them with greater precision than conventional computer-guided weapons.
As AI becomes incorporated into society's increasingly autonomous information backbone, it could also pose a risk of catastrophic accidents. If AI becomes pervasive, banking, power generation, and hospitals will be even more vulnerable to cyberattack. Some speculate that an AI superintelligence could develop a strategic calculating ability so superior that it destabilizes arms control efforts.
There are limits to the nuclear governance analogy. Whereas nuclear technology was once the purview only of the most powerful states, the private sector leads AI innovation. States could once agree to safeguard nuclear secrets, but AI is already everywhere, including in every smartphone on the planet.
Its ubiquity shows its appeal, but the same ubiquity lowers the cost of sowing disorder. A recent study found that for less than $10 anyone could create a fake United Nations speech credible enough to be shared on the internet as real. Controlling the most dangerous uses of technology will require private sector initiatives to build safety into AI systems.
Scientists Speak Out
In 2015, Stephen Hawking, Peter Norvig, and others signed an open letter calling for more research on AIs impacts on society. The letter recognized the tremendous benefits AI could bring for human health and happiness, but also warned of unpredictable dangers. The key issue is that humans should remain in control. More than 700 AI and robotics researchers signed the 2017 Asilomar AI Principles calling for shared responsibility and warning against an AI arms race.
The path to governing nuclear technology followed a similar pattern of exchange between scientists and policymakers. Around 1943, Niels Bohr, a famous Danish physicist, made the case that since scientists created nuclear weapons, they should take responsibility for efforts to control the technology. Two years later, after the first use of nuclear weapons, the United States created a committee to deliberate about whether the weapons should become central to U.S. military strategy, or whether the country should forego them and avoid a costly arms race. The Acheson-Lilienthal committee's proposal to put nuclear weapons under shared international control failed to gain support, but it was one step in a consensus-building process. The U.S. Department of Defense, Department of State, and other agencies developed their own perspectives, and U.N. negotiations eventually produced the Treaty on the Non-Proliferation of Nuclear Weapons (NPT). Since entering into force in 1970, it has become the most widely subscribed arms control treaty in history, with a total of 191 signatory states.
We are in the Acheson-Lilienthal age of governing AI. Neither disarmament nor shared control is feasible in the short term, and the best hope is to limit risk. The NPT was created with the principles of non-possession and non-transfer of nuclear weapons material and technology in mind, but AI code is too diffuse and too widely available for those principles to be the lodestar of AI governance.
What Norms Do We Want?
What, then, does nonproliferation look like in AI? What could or should be prohibited? One popular proposal is a "no kill" rule for unassisted AI: humans should bear responsibility for military attacks.
A current Defense Department directive requires "appropriate levels of human judgment" in autonomous system attacks aimed at humans. This allows the United States to claim the moral high ground. The next step is to add specificity to what appropriate levels of judgment mean in particular classes of technology. For example, greater human control might be proportional to greater potential for lethality. Many of AI's dangers stem from the possibility that it might act through code too complex for humans to understand, or that it might learn so rapidly as to be outside of human direction and therefore threaten humanity. We must consider how these situations might arise and what could be done to preserve human control. Roboticists say that such existing tools as reinforcement learning and utility functions will not solve the control problem.
An AI system might need to be turned off for maintenance or, crucially, in cases where it poses a threat. Robots often have a red shutdown button in case of emergency, but an AI system might be able to learn to turn off its own off switch, which would likely be software rather than a big red button. Google is developing an off switch it terms a "kill switch" for its applications, and European lawmakers are debating whether and how to make a kill switch mandatory. This may require a different kind of algorithm than currently exists, one with safety and interpretability at the core. It is not clear what an off switch means in military terms, but American-Soviet arms control faced a similar problem. Yet arms control proceeded through technical negotiations that established complex yet robust command and control systems.
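In software terms, the simplest version of the idea is an agent loop that defers to a stop signal the agent itself cannot overwrite. The sketch below is only a schematic of that one property; actual safe-interruptibility research addresses the much harder problem of agents that learn to resist being switched off:

```python
import threading

kill_switch = threading.Event()  # set by a human operator, never by the agent

def agent_loop():
    # The agent only acts while the externally controlled flag stays unset.
    while not kill_switch.is_set():
        pass  # observe, decide, act

worker = threading.Thread(target=agent_loop)
worker.start()
kill_switch.set()  # operator intervention: the loop exits on its next check
worker.join()
```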
Building International Consensus
The NPT was preceded by a quarter century of deliberation and consensus building. We are at the beginning of that timeline for AI. The purpose of treaties and consensus building is to limit the risks of dangerous technology by convincing countries that restraint is in the interests of mankind and their own security.
Nuclear nonproliferation agreements succeeded because the United States and the Soviet Union convinced non-nuclear nations that limiting the spread of nuclear weapons was in their interest, even if it meant renouncing weapons while other countries still had them. In 1963, John F. Kennedy asked what it would mean to have nuclear weapons "in so many hands, in the hands of countries large and small, stable and unstable, responsible and irresponsible, scattered throughout the world." The answer was that more weapons in the hands of more countries would increase the chance of accidents, proxy wars, weak command and control systems, and first strikes. The threat of nuclear weapons in the hands of regional rivals could be more destabilizing than in the hands of the superpowers. We do not yet know if the same is true for AI, but we should investigate the possibility.
Access to Peaceful Technology
It is a tall order to ask countries to buy into a regime that limits their development of a powerful new technology. Nuclear negotiations offered the carrot of eventual disarmament, but what disarmament means in the AI context is not clear. However, the principle that adopting restrictions on AI weapons should be linked to access to the benefits of AI for peaceful uses and security cooperation could apply. Arms control negotiator William Foster wrote in 1967 that the NPT would stimulate widespread, peaceful development of nuclear energy. Why not promise to share peaceful and humanitarian applications of AI (for agriculture and medicine, for example) with countries that agree to participate in global controls?
The foundation of providing access to peaceful nuclear technology in exchange for monitoring materials and technology led to the development of a system of inspections known as safeguards. These were controversial and initially not strong enough to prevent the spread of nuclear weapons, but they took hold over time. A regime for AI inspection and verification will take time to emerge.
As in the nuclear sphere, the first step is to build consensus and identify what other nations want and where common interest lies. AI exists in lines of code, not molecules of uranium. For publicly available AI code, principles of transparency may help mutual inspection. For code that is protected, more indirect measures of monitoring and verification may be devised.
Finally, nuclear arms control and nonproliferation succeeded as part of a larger strategy (including extended deterrence) that provided strategic stability and reassurance to U.S. allies. America and the Soviet Union, despite their Cold War competition, found common interests in preventing the spread of nuclear weapons. AI strategy goes hand in hand with a larger defense strategy.
A New AI for Defense Framework
Once again, the world needs U.S. global leadership, this time to prevent an AI arms race, accident, or catastrophic attack. U.N.-led discussions are valuable but overly broad, and the technology has too many military applications for industry alone to lead regulation. Current U.N. talks are preoccupied with discussion of a ban on lethal autonomous weapons. These are sometimes termed "killer robots" because they are smart machines that can move in the world and make decisions without human control. They cause concern when human beings are not involved in the decision to kill. The speed and scale of AI deployment call for more nuance than the current U.N. talks can provide, and more involvement by more stakeholders, including national governments and industry.
As at the dawn of the nuclear age, the United States can build global consensus in the age of AI to reduce risks and make the world safe for one of its leading technologies, one that is valuable to U.S. industry and to humanity.
Washington should build a framework for a global consensus on how to govern AI technology that could be weaponized. Private sector participation would be crucial to address governance, as well as how to share peaceful benefits to incentivize participation. The Pentagon, in partnership with private sector technology firms, is a natural leader because of its budget and role in the industrial base.
An AI for Peace program should articulate the dangers of this new technology, principles (e.g. no kill, human control, off switch) to manage the dangers, and a structure to shape the incentives for other states (perhaps a system of monitoring and inspection). Our age is not friendly to new treaties, but we can foster new norms. We can learn from the nuclear age that countries will agree to limit dangerous technology with the promise of peaceful benefits for all.
Patrick S. Roberts is a political scientist at the nonprofit, nonpartisan RAND Corporation. Roberts served as an advisor in the State Department's Bureau of International Security and Nonproliferation, where he worked on the NPT and other nuclear issues.
Image: Nuclear Regulatory Commission
An AI conference once known for blowout parties is finally growing up – MIT Technology Review
Posted: at 3:24 pm
Only two years ago, so I'm told, one of the hottest AI research conferences of the year was more giant party than academic exchange. In a fight for the best talent, companies handed out endless free swag and threw massive, blowout events, including one featuring Flo Rida, hosted by Intel. The attendees (mostly men in their early 20s and 30s), flush with huge salaries and the giddiness of being highly coveted, drank free booze and bumped the night away.
I never witnessed this version of NeurIPS, short for the Neural Information Processing Systems conference. I came for my first time last year, after the excess had reached its peak. Externally, the community was coming under increasing scrutiny as the upset of the 2016 US presidential election drove people to question the influence of algorithms in society. Internally, reports of sexual harassment, anti-Semitism, racism, and ageism were also driving conference goers to question whether they should continue to attend.
So when I arrived in 2018, a diversity and inclusion committee had been appointed, and the long-standing abbreviation NIPS had been updated. Still, this year's proceedings feel different from the last. The parties are smaller, the talks are more socially minded, and the conversations happening in between seem more aware of the ethical challenges that the field needs to address.
As the role of AI has expanded dramatically, along with the more troubling aspects of its impact, the community, it seems, has finally begun to reflect on its power and the responsibilities that come with it. As one attendee put it to me: "It feels like this community is growing up."
This change manifested in some concrete ways. Many of the technical sessions were more focused on addressing real-world, human-centric challenges rather than theoretical ones. Entire poster tracks were centered on better methods for protecting user privacy, ensuring fairness, and reducing the amount of energy it can take to run and train state-of-the-art models. Day-long workshops, scheduled to happen today and tomorrow, have titles like "Tackling Climate Change with Machine Learning" and "Fairness in Machine Learning for Health."
Additionally, many of the invited speakers directly addressed the social and ethical challenges facing the field, topics once dismissed as not core to the practice of machine learning. Their talks were also well received by attendees, signaling a new openness to engage with these issues. At the opening event, for example, cognitive psychologist and #metoo figurehead Celeste Kidd gave a rousing speech exhorting the tech industry to take responsibility for how its technologies shape people's beliefs and debunking myths around sexual harassment. She received a standing ovation. In an opening talk at the Queer in AI symposium, Stanford researcher Ria Kalluri also challenged others to think more about how their machine-learning models could shift the power in society from those who have it to those who don't. Her talk was widely circulated online.
Much of this isn't coincidental. Through the work of the diversity and inclusion committee, the conference saw the most diverse participation in its history. Close to half the main-stage speakers were women, and a similar share were minorities; 20% of the over 13,000 attendees were women, up from 18% last year. There were seven community-organized groups for supporting minority researchers, which is a record. These included Black in AI, Queer in AI, and Disability in AI, and they held parallel proceedings in the same space as NeurIPS to facilitate mingling of people and ideas.
"When we involve more people from diverse backgrounds in AI," Kidd told me, "we naturally talk more about how AI is shaping society, for good or for bad." "They come from a less privileged place and are more acutely aware of things like bias and injustice and how technologies that were designed for a certain demographic may actually do harm to disadvantaged populations," she said. Kalluri echoed the sentiment. The intentional efforts to diversify the community, she said, are forcing it to confront the questions of how power works in this field.
Despite the progress, however, many emphasized that the work is just getting started. Having 20% women is still appalling, and this year, as in past years, there continued to be Herculean challenges in securing visas for international researchers, particularly from Africa.
"Historically, this field has been pretty narrowed in on a particular demographic of the population, and the research that comes out reflects the values of those people," says Katherine Heller, an assistant professor at Duke University and co-chair of the diversity committee. "What we want in the long run is a more inclusive place to shape what the future direction of AI is like. There's still a far way to go."
Yes, there's still a long way to go. But on Monday, as people lined up to thank Kidd for her talk one by one, I let myself feel hopeful.
12 Everyday Applications Of Artificial Intelligence Many People Aren’t Aware Of – Forbes
Posted: at 3:24 pm
By now, almost everyone knows a little bit about artificial intelligence, but most people aren't tech experts, and many may not be aware of just how big an impact AI has. The truth is, most consumers interact with technology incorporating AI every day. From the searches we perform in Google to the advertisements we see on social media, AI is an ever-present feature of our lives.
To help nonspecialists grasp the degree to which AI has been woven into the fabric of modern society, 12 experts from Forbes Technology Council detail some applications of AI that many may not be aware of.
1. Offering Better Customer Service
Calling customer service used to be as exciting as seeing a dentist. AI has changed that: You no longer have to repeat the same information countless times to different call center agents. Brands are able to tap into insights on all their previous interactions with you. Data analytics and AI help brands anticipate what their customers want and deliver more intelligent customer experiences. - Song Bac Toh, Tata Communications
2. Personalizing The Shopping Experience
Every time you shop online at an e-commerce site, as soon as you start clicking on a product the site starts to provide personalized recommendations of relevant products. Nowadays most of these applications use some form of AI algorithm (reinforcement learning and others) to come up with such results. The experience is so transparent most shoppers don't even realize it's AI. - Brian Sathianathan, Iterate.ai
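As a toy illustration of the underlying idea (production recommenders are vastly more sophisticated, and the click histories here are invented), an item-to-item recommender can suggest products that co-occur in other shoppers' sessions:

```python
from collections import Counter

# Invented click histories standing in for real shopper sessions.
histories = [
    ["phone", "case", "charger"],
    ["phone", "case"],
    ["laptop", "mouse"],
]

def recommend(product: str, k: int = 2) -> list:
    """Suggest the products most often clicked alongside `product`."""
    co_clicks = Counter()
    for session in histories:
        if product in session:
            co_clicks.update(p for p in session if p != product)
    return [p for p, _ in co_clicks.most_common(k)]

print(recommend("phone"))  # ['case', 'charger']
```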
3. Making Recruiting More Efficient
Next time you go to look for a new job, write your résumé for a computer, not a recruiter. AI is aggregating the talent pool, slimming the selection to a shortlist and ranking matches based on skills and qualifications. AI has thoroughly reviewed your résumé and application through machine learning before a human ever gets to look at them. - Tammy Cohen, InfoMart Inc.
4. Keeping Internet Services Running Smoothly
Consumers have come to expect their favorite apps and services to run smoothly, and AI makes that possible. AI does what humans cannot: It monitors apps, identifies problems and helps humans resolve them in a fraction of the time it would take manually. AI has the ability to spot patterns at scale in monitored data with the goal of having service interruptions solved before customers even notice. - Phil Tee, Moogsoft
5. Protecting Your Finances
For credit card companies and banks, AI's incredible ability to analyze massive amounts of data has become indispensable behind the scenes. These financial institutions leverage machine learning algorithms to identify potential fraudulent activity in your accounts and get ahead of any resulting detrimental effects. Every day, this saves people from tons of agony and headaches. - Marc Fischer, Dogtown Media LLC
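Banks do not disclose their fraud models, but the general pattern is anomaly detection: flag transactions that look unlike a customer's history. A minimal sketch using scikit-learn's stock IsolationForest on made-up (amount, hour-of-day) features:

```python
from sklearn.ensemble import IsolationForest

# Made-up (amount, hour-of-day) pairs standing in for a customer's history.
normal_spending = [[25, 12], [40, 18], [12, 9], [33, 20], [28, 13], [19, 11]]
model = IsolationForest(contamination=0.1, random_state=0).fit(normal_spending)

print(model.predict([[30, 14]]))   # [ 1] -> resembles ordinary spending
print(model.predict([[5000, 3]]))  # [-1] -> anomaly, worth a fraud review
```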
6. Enhancing Vehicle Safety
Even if you don't have a self-driving vehicle, your car uses artificial intelligence. Lane-departure warnings notify a driver if the car has drifted out of its lane. Adaptive cruise control ensures that the car maintains a safe distance while cruising. Automated emergency braking senses when a collision is about to happen and applies the brakes faster than the driver can. - Amy Czuchlewski, Bottle Rocket
7. Converting Handwritten Text To Machine-Readable Code
The post office has tech called optical character recognition that converts handwritten text to machine-readable code. Reading handwriting requires human intelligence, but there are machines that can do it, too! Fun fact: This technology was invented in 1914 (yes, you read that right!). So, we experience forms of AI all the time. It's just a lot trendier now to call it AI. - Parry Malm, Phrasee
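The postal service's recognition systems are proprietary, but the same idea is easy to try with the open-source Tesseract engine. A minimal sketch, assuming a scanned image file (the filename is hypothetical, and the Tesseract binary must be installed separately):

```python
from PIL import Image      # pip install pillow pytesseract
import pytesseract         # also requires the Tesseract binary on the system

# "envelope.png" is a hypothetical scanned image of handwritten or printed text.
text = pytesseract.image_to_string(Image.open("envelope.png"))
print(text)
```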
8. Improving Agriculture Worldwide
Most people don't think of AI when they eat a meal, but AI is improving agriculture worldwide. Some examples: satellites scanning farm fields to monitor crop and soil health; machine learning models that track and predict environmental impacts, like droughts; and big data to differentiate between plants and weeds for pesticide control. Thank AI for the higher crop yields. - John McDonald, ClearObject
9. Helping Humanitarian Efforts
While we often hear about AI going wrong, it's doing good things, like guiding humanitarian aid, supporting conservation efforts and helping local government agencies fight droughts. AI always seems to get painted as some sci-fi type of endeavor when really it's already the framework of many things going on around us all the time. - Alyssa Simpson Rochwerger, Figure Eight
10. Keeping Security Companies Safe From Cyberattacks
AI has become the main way that security companies keep us safe from cyberattacks. Deep learning models run against billions of events each day, identifying threats in ways that were simply unimaginable five years ago. Unfortunately, the bad actors also have access to AI tools, so the cat-and-mouse game continues. - Paul Lipman, BullGuard
11. Improving Video Surveillance Capabilities
In cities, along highways and in neighborhoods, video cameras are proliferating. Federal, state and/or local authorities deploy these devices to monitor traffic and security. In the background, AI-related technologies that include object and facial recognition technologies underpinned by machine and deep learning capabilities speed problem identification, reducing crime and mitigating traffic. - Michael Gurau, Kaiser Associates, Inc.
12. Altering Our Trust In Information
AI will change how we learn and the level of trust we place in information. Deepfakes and the ability to create realistic videos, pictures, text, speech and other forms of communication on which we have long relied to convey information will give rise to concerns about the foundational facts used to inform decision-making in every aspect of life. - Mike Fong, Privoro
Why AI Leads Us to Think Less, Act Impulsively – PCMag.com
Posted: at 3:24 pm
Since MIT Professor Bernhardt Trout's engineering ethics course shifted to focus on the ethics of artificial intelligence, the class has ballooned from a handful of students per semester in 2009 to more than 150 this year.
As deep learning and neural networks take center stage, "the students have much more of a concern about AI...particularly over the last year or so," Trout says.
A key challenge, according to Trout, is that "these algorithms push us toward us thinking less and acting based on impressions that may or may not be correct, as opposed to [making] our own decisions in a fully informed way. In general, we want to have the answer and move on. And these algorithms tend to play off on that psychology."
As AI evolves, "we need to be actively engaged in questioning what the algorithms do, what the results mean, and how inherent bias in the training set can affect the results," Trout says.
There are many ways this blind faith in algorithms can have adverse effects. For instance, when you start to believe (and "like") everything you see in your Facebook News Feed, which is powered by AI algorithms, you'll end up seeing only articles that confirm your viewpoints and biases, and you could become less tolerant of opposing views.
On other online platforms, content-recommendation algorithms can shape your preferences and nudge you in specific directions without your conscious knowledge. And in fields such as banking and criminal justice, blind trust in algorithms can be more damaging, such as the unwarranted decline of a loan application or an unfair verdict passed against a defendant.
"We have to remember that these are all mathematical algorithms. And there's a good argument against thinking that everything in human life is reducible to mathematics," Trout warns.
One of the major challenges of contemporary AI is lack of explainability. Deep-learning algorithms develop their logic from data and work in very complicated ways that are often opaque even to their creators. And this can cause serious trouble, especially where ethical issues are involved.
"It has become harder to trace decisions and analysis with methods like deep learning and neural nets," says Element AI's Marc-Etienne Ouimett. "The ability to know when a decision has been made or informed by an AI system, or to explain or interpret the logic behind that decision, becomes increasingly important in this context. You cannot effectively seek redress for harm caused by the misuse of an AI system unless you know that one has been used, or how it influenced the outcome."
This lack of transparency also makes it difficult to spot and fix ethical issues in algorithms. For instance, in one case, an AI algorithm designed to predict recidivism had silently used ZIP codes as a determining factor for the likelihood that a defendant would re-offend and wound up with a bias against black defendants, even though the programmers had removed racial information from their datasets.
In another case, a hiring algorithm penalized applicants whose resumes included the term "women," as in women's sports. More recently, Apple's new credit card was found to be biased against women, offering them up to 20 times less credit than men because of the AI algorithms it uses.
In these cases, the developers had gone to great lengths to remove any characteristics from the data that would cause bias in the algorithms. But AI often finds intricate correlations that indirectly allude to things like gender and race. And without any way to investigate those algorithms, finding these problematic correlations becomes a challenge.
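A toy demonstration of that proxy effect (all data here is synthetic and deliberately exaggerated): remove the protected attribute, keep a correlated feature like ZIP code, train on historically biased labels, and the model reproduces the bias anyway:

```python
from sklearn.linear_model import LogisticRegression

# Synthetic, exaggerated data: race has been removed from the features,
# but ZIP group remains and the historical labels were biased against
# residents of ZIP group A.
X = [[1, 0]] * 50 + [[0, 1]] * 50              # one-hot ZIP group A / B
y = [1] * 40 + [0] * 10 + [1] * 10 + [0] * 40  # biased "re-offend" labels

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[1, 0]])[0][1])  # ~0.8: high risk score, group A
print(model.predict_proba([[0, 1]])[0][1])  # ~0.2: low risk score, group B
```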
Thankfully, efforts to create explainable AI models are taking place, including an ambitious project by DARPA, the research arm of the Department of Defense.
Another factor in the increased interest in the ethics of AI is the active engagement of the commercial sector.
"While the growth of deep learning and neural networks is a part of the growing attention toward ethical AI, another major contributor is...leaders in tech raising the issue and trying to actively make their points of view known to the broader public," Professor Trout says.
Execs like Bill Gates and Elon Musk, as well as scientists such as Stuart Russell and the late Stephen Hawking, have issued warnings about the potentially scary unintended consequences of AI. And tech giants like Microsoft and Google have been forced to explain their approach to AI and develop ethical guidelines, particularly as it relates to selling their technology to government agencies.
"Ethical principles are a good start, but operationalizing these across the company is what counts. Each team, from fundamental/applied research to product design, development, and deployment, must understand how these principles apply to their functions," Element AI's Ouimett says.
Ouimett also underlines the need for companies to work with lawmakers actively. "It's important for businesses that have the technical expertise to engage in good faith with regulators to help them understand the nature of the risks posed by the technology," he says.
Element AI recently partnered with The Rockefeller and Mozilla Foundations to produce a list of recommendations for governments and companies on the role of the human-rights framework in AI governance.
"The collaboration will focus on advancing research on the legal, governance, and technical components of data trusts, which both Element AI and the Mozilla Foundation believe have tremendous potential: as safe and ethical data-sharing mechanisms, as many governments have thus far conceived of them, but also as tools that could be used to empower the public to participate in decisions regarding the use of their personal data, and to collectively seek redress in cases of harm," Ouimett says.
But Professor Trout has a slightly different view on the involvement of tech companies in AI ethics. "At the end of the day, they're doing this to a large extent for commercial reasons. They want to make their employees happy. That was the reason Google decided not to work with the Department of Defense. And they want to make their customers and the government happy, and they want to enhance their bottom line," he says.
"I have not seen these companies really promote a thoughtful, deep approach to ethics, and that's where I would find them fall short. They have resources, they would be able to, but I don't see that happening. And I think that's a pity."
The applications of AI in eCommerce – ITProPortal
Posted: at 3:24 pm
In many industries, artificial intelligence (AI) is seen more as a buzzword than a tangible solution to accelerate outcomes. In fact, resources are commonly spent just establishing what AI can and can't do. eCommerce is an exception to this.
In eCommerce, brands have invested in the power of AI. The trend is only set to grow, with a compound annual growth rate (CAGR) of 42.8 per cent forecast for AI in retail and eCommerce between 2019 and 2025. Whether it's informing pricing strategies and product promotions, or satisfying the demand for more nuanced customer journeys, there's no shortage of applications in eCommerce.
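To put that growth rate in concrete terms, 42.8 per cent compounded over the six years from 2019 to 2025 multiplies the market roughly 8.5 times. A quick check of the arithmetic:

```python
# 42.8% CAGR compounded over 2019-2025 (six annual periods).
cagr = 0.428
years = 2025 - 2019
print(f"growth multiple: {(1 + cagr) ** years:.1f}x")  # ~8.5x
```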
While use cases may be too specialised to drive widespread adoption in other industries, in eCommerce AI enables merchants to add a personal touch to the way consumers buy their goods, which is convenient given consumer demands for flexibility and consistency across multiple platforms.
So, how are eCommerce merchants turning this technology from a buzzword into a panacea, and what can other sectors learn from it?
To begin with the most well-known solution: chatbots automate community management, customer engagement and even sales leads. According to Gartner, the average person will have more conversations with bots than with their spouse by 2020. Meanwhile, 70 per cent of white-collar workers will interact with conversational platforms on a daily basis by 2022. AI-enabled bots provide eCommerce merchants with a scalable solution that works around the clock, using natural language processing (NLP) to help people find the right product or make complaints. Equally, they can be integrated with organisations' internal APIs to provide visibility over product availability or assist employees with customer engagement.
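Production chatbot platforms rely on trained language models, but the core intent-matching idea can be sketched in a few lines (the intents and keywords below are invented for illustration):

```python
import re

# Invented intents and keywords; real platforms learn these from data.
INTENTS = {
    "order_status": {"order", "shipped", "tracking", "delivery"},
    "complaint": {"broken", "refund", "faulty", "wrong"},
}

def classify(message: str) -> str:
    """Route a customer message to an intent, or escalate to a human."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    scores = {intent: len(words & keywords) for intent, keywords in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "handoff_to_human"

print(classify("Where is my order? The tracking page is blank."))  # order_status
print(classify("I'd like to speak to someone about my account."))  # handoff_to_human
```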
Elsewhere, AI helps brands to build meaningful relationships with their customers by making sense of increasingly large volumes of data. When a consumer visits a website, they leave behind a trail of digital breadcrumbs, much of which has been left untapped. However, AI allows retailers to rapidly sift through transactional data to help employees generate insights from trends, purchasing patterns and marketing leads, and turn them into improved decisions.
In the digital era, retailers must be able to contextualise, optimise and narrow down search results for their buyers. AI enables merchants to leverage cookie data and provide consumers with highly tailored offerings. By utilising natural language processing capabilities and image, video and audio recognition, retailers can home in on what it is their customers really want.
Clearly, there is no shortage of use cases for AI in eCommerce. While some are more obvious than others, what is certain is that it enables merchants to provide customers with seamless experiences while enabling employees to do their work more effectively. So how can AI be leveraged successfully?
AI is nothing without data. It derives intelligence from the vast quantities of information possessed by organisations, meaning data science and data engineering become crucial. However, deriving insights from this data is by no means easy, and organisations need to ensure that they have the necessary foundations in place to apply analytics.
The problem is that this data is often extracted from fragmented and siloed sources, meaning there is a need to make data more accessible, which requires coherent integration structures. What's more, screening and aligning this data is a manual process, and preparing data can take up a significant amount of time and resources.
Additionally, much of the data needed for AI to perform carries perishable insights. By this, we mean insights whose value degrades over time and which need to be detected and actioned as quickly as possible. Therefore, if companies struggle to collect sufficient amounts of the necessary data, it can quickly be rendered useless.
Preparing data is a complex process, particularly as large organisations tend to have their information spread across multiple sources. This all needs to be aligned if AI is to yield the hoped-for results. This means that data quality becomes a key challenge for eCommerce merchants to overcome, as poor data could prove detrimental. So, when it comes to implementing AI, do the rewards outweigh the challenges?
eCommerce stands to benefit tremendously from AI. Already, we see companies shape the buying and selling experience for both shoppers and sellers, and AI is forecast to be worth $27 billion in retail alone by 2025.
Customer experience will be the most significant beneficiary of developments in AI. With consumer adoption of technology and increasing demands for personalisation driving adoption, merchants can't afford to sit tight. While the technology is costly and difficult to implement, those early adopters will reap the rewards.
Whether or not eCommerce merchants can truly benefit will depend on how prepared they are. Before investing in AI, retailers need to think about the business case, whether there are opportunities to exploit, and whether they have the right data, people and technology.
Ultimately, there is a lot of preparation to do before AI can begin producing results. Organisations need to ensure they have clean, accessible and high-quality data from which they can derive meaningful insights; only then can they ride the hype.
Richard Mathias, Senior Technology Architect, LiveArea