The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard de Chardin
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Category Archives: Artificial Intelligence
How machine learning influences your productivity – VentureBeat
Posted: May 7, 2017 at 11:55 pm
If there is one word that the enterprise wants to be associated with, it's "productive."
It is the metric that influences so many others by which business is measured: success, efficiency, profit. And recently, artificial intelligence (AI) has been touted as a new way to increase productivity by replacing expensive workers with tireless machines. One recent example that has garnered media attention is the first demonstration of an autonomous big rig, the use of which could replace millions of truck drivers.
But AI has been getting a lot of undeserved limelight, because long before machines replace us humans, they will be helping us make smart decisions so we can become more productive (autonomous machines be damned). This use of technology is called intelligence augmentation, and because of its imminent and extensive impact, it deserves a closer look.
For many in the enterprise, artificial intelligence (AI) vs. intelligence augmentation (IA) is a distinction without a difference. And certainly, that case can be made. In a Wall Street Journal op-ed, IBM President, Chairman and CEO Ginni Rometty points out that, whether you call them AI or IA, these cognitive systems are neither autonomous nor sentient, but they form a new kind of intelligence that has nothing artificial about it: they augment our capacity to understand what is happening in the complex world around us.
This is absolutely true. But there is still a distinction to be made when it comes to maximizing productivity in the modern, data-diverse workplace. Applying either of these technologies to the wrong task will be counterproductive, however advanced the application might be.
The intelligence provided by AI technology entails tapping into increasingly cheap computer processing power to evaluate alternative options more quickly than humans can. This is why AI-driven computers have been successful at playing chess, winning at Go, and even playing Jeopardy. Each of these tasks is characterized by the need to evaluate the best move from a finite set of options, however large that number of options might be. Evaluating many options and learning from past experience, using a technology called machine learning, is how artificial intelligence is able to pick the best outcome available.
But business decisions involve more than just evaluating many options. Business decisions involve ethics and intangibles, things that computers can't account for. That's where humans come in. And that is what is so compelling about IA. IA enables humans to direct computers to evaluate options and then offer suggestions about what to do next. It is this type of cooperation between man and machine that will take humanity to the next level of productivity.
One practical example of exploiting machine intelligence to augment humans in an everyday business scenario is to collect disparate information from a wide variety of apps, then employ intelligence augmentation technologies such as natural language processing and machine learning to automatically match related information. For example, first collecting information from Salesforce, Dropbox, email, Office 365, Workday, and many other apps, then putting together related information in a puzzle-like fashion across all the apps, so a human can see the information forest for the data trees. This is an incredibly taxing cognitive process for humans, but a straightforward one for intelligent machines. With all the related information presented in a coherent context, the human can then make intelligent decisions about what to do next.
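As an illustration of the matching step described above, here is a minimal sketch that uses TF-IDF vectors and cosine similarity to link related items pulled from different apps. The app names, snippets and threshold are invented for illustration; this is not harmon.ie's actual pipeline, just the general idea.

```python
# A minimal sketch of the cross-app matching idea described above: represent each
# item (CRM note, email subject, file name) as a TF-IDF vector and link the most
# similar pairs. The data and threshold below are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

items = [
    ("Salesforce", "Acme Corp renewal opportunity, Q3 contract draft"),
    ("Email",      "Re: Acme renewal - please review the attached contract draft"),
    ("Dropbox",    "Acme contract draft v3.docx"),
    ("Workday",    "Quarterly headcount report for engineering"),
]

texts = [text for _, text in items]
vectors = TfidfVectorizer().fit_transform(texts)
similarity = cosine_similarity(vectors)

# Link any two items whose similarity clears a (hand-picked) threshold.
THRESHOLD = 0.2
for i in range(len(items)):
    for j in range(i + 1, len(items)):
        if similarity[i, j] > THRESHOLD:
            print(f"Related: {items[i][0]} <-> {items[j][0]} (score {similarity[i, j]:.2f})")
```

A real system would add entity extraction, per-app connectors and learned weighting, but the core idea of surfacing related items in one context is the same.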
This doesn't mean that IA will supplant AI. Each use of intelligent technology has its place. AI examples such as using chatbots to replace human operators will become more commonplace. Today, you can order food from Taco Bell via Slack, or a pizza from Domino's using bots. These bots represent the types of tasks that AI can do more efficiently than a person, because the context is clearly defined and the degrees of decision freedom are extremely limited.
It's when the context becomes ambiguous, the decision criteria become fuzzy, and ethical considerations must be taken into account that AI falls short. It's here that intelligence augmentation can help people, by presenting information and options in a coherent manner and letting the human take it from there. This is how machine intelligence will truly help organizations and individuals become more productive in the near to mid term, so that is where enterprises should be focused.
While artificial intelligence can improve efficiency for focused tasks by replacing humans, it is the application of machine intelligence to augment humans where the real increase in business productivity will occur. Understanding the respective roles of AI and IA is the key to maximizing both in the enterprise.
David Lavenda is the cofounder and VP of Product Strategy at harmon.ie, a leading provider of user experience products.
Above: The Machine Intelligence Landscape, a graphic featuring 288 companies. This article is part of VentureBeat's Artificial Intelligence series; a high-resolution version of the landscape accompanies the original article.
Visit link:
How machine learning influences your productivity - VentureBeat
Posted in Artificial Intelligence
Comments Off on How machine learning influences your productivity – VentureBeat
Jeff Bezos explains Amazon’s artificial intelligence and machine … – GeekWire
Posted: at 11:55 pm
Amazon CEO Jeff Bezos appeared this week at the Internet Association's annual gala in Washington, D.C., taking part in a wide-ranging discussion about the online economy, media coverage of Amazon, the company's business principles, and even going off topic a bit to discuss his Blue Origin space venture.
But Bezos seemed especially energized when Internet Association CEO Michael Beckerman asked him about artificial intelligence and machine learning.
"It is a renaissance, it is a golden age," Bezos said. "We are solving problems with machine learning and artificial intelligence that were in the realm of science fiction for the last several decades. Natural language understanding, machine vision problems, it really is an amazing renaissance."
So how does Amazon see this playing out? Here's what Bezos said.
Machine learning and AI is a horizontal enabling layer. It will empower and improve every business, every government organization, every philanthropy. Basically, there's no institution in the world that cannot be improved with machine learning. At Amazon, some of the things we're doing are superficially obvious, and they're interesting, and they're cool. And you should pay attention. I'm thinking of things like Alexa and Echo, our voice assistant; I'm thinking about our autonomous Prime Air delivery drones. Those things use a tremendous amount of machine learning, machine vision systems, natural language understanding and a bunch of other techniques.
But those are kind of the showy ones. I would say, a lot of the value that we're getting from machine learning is actually happening beneath the surface. It is things like improved search results. Improved product recommendations for customers. Improved forecasting for inventory management. Literally hundreds of other things beneath the surface.
The most exciting thing that I think we're working on in machine learning is that we are determined, through Amazon Web Services, where we have all these customers who are corporations and software developers, to make these advanced techniques accessible to every organization, even if they don't have the current class of expertise that's required. Right now, deploying these techniques for your particular institution's problems is difficult. It takes a lot of expertise, and so you have to go compete for the very best PhDs in machine learning, and it's difficult for a lot of organizations to win those competitions. We're in a great position, because of the success of Amazon Web Services, to be able to put energy into making those techniques easy and accessible. And so we're determined to do that.
I think we can build a great business doing that, for ourselves, and it will be incredibly enabling for organizations that want to use these sophisticated technologies.
Amazon is one of several tech giants offering artificial intelligence services via the cloud, along with Microsoft Azure and Google Cloud, but AWS's position as the top public cloud vendor makes the company a force to be reckoned with in AI and ML.
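For developers, consuming these cloud AI services typically comes down to a few API calls. As one hedged example, the sketch below calls AWS Rekognition through the boto3 SDK to label an image; it assumes AWS credentials are already configured and that a local photo.jpg exists.

```python
# A minimal sketch of calling a cloud vision API (AWS Rekognition via boto3).
# Assumes AWS credentials are configured and that 'photo.jpg' exists locally.
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

with open("photo.jpg", "rb") as f:
    image_bytes = f.read()

# Ask the service for up to five labels describing the image.
response = client.detect_labels(Image={"Bytes": image_bytes}, MaxLabels=5)
for label in response["Labels"]:
    print(f"{label['Name']}: {label['Confidence']:.1f}%")
```

The equivalent calls on Azure or Google Cloud differ in SDK details but follow the same pattern: send data, get back model predictions, no in-house PhDs required.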
Watch the full video of Bezos' talk with the original GeekWire article.
Visit link:
Jeff Bezos explains Amazon's artificial intelligence and machine ... - GeekWire
Posted in Artificial Intelligence
Comments Off on Jeff Bezos explains Amazon’s artificial intelligence and machine … – GeekWire
This artificial intelligence platform’s picks could win you so much money at the Kentucky Derby – For The Win
Posted: May 6, 2017 at 3:38 am
You can pick your Kentucky Derby horses based on names or jockey uniform colors, but you might want to listen to this for some betting advice.
Last year, a startup called Unanimous A.I. used a platform to predict the exact top four finishers, the superfecta, at the 2016 Derby. And this year, Unanimous A.I.'s UNU platform has made its picks.
First, here's how it works, via Newsweek:
UNU uses a unique form of artificial intelligence called swarm intelligence that aims to amplify rather than replace human intelligence.
It works like this: a group of people log in to a UNU online forum through their smartphones or computers. At the start of each session, all participants are simultaneously presented with a question and a set of possible answers.
Each participant has control of a graphical magnet that they can move around the screen to drag a puck to the answer they think is correct. The puck can only fall on one answer, and the group has 60 seconds to collectively agree on a decision that best suits them all.
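To make the mechanism more concrete, here is a toy simulation of many agents pulling a shared puck toward competing answers. It is only an illustration of the swarm idea; the agent counts, weights, coordinates and update rule are invented and bear no relation to UNU's actual algorithm.

```python
# A toy simulation of the swarm idea described above: each agent pulls a shared
# puck toward its preferred answer, and the puck drifts toward the strongest
# collective pull. This is an illustration only, not the UNU algorithm.
import random

answers = {
    "Classic Empire":  (1, 0),
    "McCraken":        (0, 1),
    "Irish War Cry":   (-1, 0),
    "Always Dreaming": (0, -1),
}
# 50 simulated agents with made-up preference weights.
preferences = random.choices(list(answers), weights=[5, 3, 2, 2], k=50)

puck = [0.0, 0.0]
for _ in range(60):  # 60 "seconds" of deliberation
    pull_x = sum(answers[p][0] for p in preferences) / len(preferences)
    pull_y = sum(answers[p][1] for p in preferences) / len(preferences)
    puck[0] += 0.1 * pull_x
    puck[1] += 0.1 * pull_y

# The swarm's pick is the answer closest to where the puck ends up.
winner = min(answers, key=lambda a: (answers[a][0] - puck[0]) ** 2 + (answers[a][1] - puck[1]) ** 2)
print("Swarm pick:", winner)
```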
The result for this year, from some thoroughbred experts using the system?
1. Classic Empire
2. McCraken
3. Irish War Cry
4. Always Dreaming
A $20 bet on last year's A.I. picks would have won you $10,842, so you might want to listen to them again.
Visit link:
Posted in Artificial Intelligence
Comments Off on This artificial intelligence platform’s picks could win you so much money at the Kentucky Derby – For The Win
Watson won ‘Jeopardy,’ but IBM is not winning with artificial intelligence – MarketWatch
Posted: at 3:38 am
On a February evening in 2011, Watson, a supercomputer with artificial intelligence created by International Business Machines Corp., made history when it beat humans in a game of cognitive intelligence. And these weren't just any humans. These were longtime Jeopardy champions, deemed among the most intelligent human contestants ever to grace the game show stage.
The Watson win was a major victory for IBM at the time, underscoring its transition to a new-age technology company with artificially intelligent computers. It sent a clear message that IBM was no longer an aging legacy hardware company buckling under newer competition, but a bellwether of innovation yet to come in the realms of AI and big-data analytics.
Read also: Is hyperconvergence the next big thing in tech?
Today, technologies such as these underpin much of the technological development fueling the shift from mobile to the cloud. Evidence of Watson-like AI can be found in the digital assistants offered by companies like Apple Inc. and Amazon.com Inc., and in a wide range of cloud services.
IBM, however, has never managed to take advantage of the head start it seemed to have in AI. The company's multiyear turnaround continues to move at a snail's pace, which has been off-putting to some very large investors, and despite a large PR campaign around Watson, the company has never broken out the revenue it receives from the initiative.
On Friday, shares of IBM fell 3% to $154.45 after billionaire investor Warren Buffett announced that his company, Berkshire Hathaway Inc., sold about a third of its stake in IBM. It has unloaded 30 million shares so far in 2017, from the 81 million shares it held at the end of 2016.
"I don't value IBM the same way that I did six years ago when I started buying," Buffett said.
Buffett bought more than $10 billion in IBM shares the year Watson won Jeopardy, and increased his stake a few more times in the years that followed.
See also: 6 topics Warren Buffett can't avoid at Berkshire's annual meeting
The Buffett blow followed a downgrade of IBM's credit rating by Moody's on Wednesday, which pointed to IBM's high level of investments in recent years to support its transformation. The spending spree has negatively impacted Big Blue's profitability and cash flow for a longer-than-expected period of time, Moody's said.
IBM also announced a weak quarterly earnings report on April 18. IBM reported profit and revenue declines, even as its newer software-as-a-service business saw revenue gains of 60%. The company's revenue slipped to its lowest level in 15 years.
A few analysts came to IBM's defense the day after those earnings hit, encouraging investors willing to wait through the transition to buy on the low. Stifel analyst David Grossman said the selloff provided an opportunistic entry point, but admitted that the transition was testing his patience.
The average rating on the stock is the equivalent of a hold, while the average 12-month price target of $166.27 implies 7.5% upside from Friday afternoon trading prices. Shares of IBM have declined by 12% in the past three months, underperforming both the Dow Jones Industrial Average and the S&P 500; the Dow 30 has gained 4.5% over that time.
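As a rough check of the implied-upside figure, using the $154.45 close quoted earlier as an approximation of the Friday afternoon price:

```python
# Quick check of the implied upside quoted above, using the $154.45 figure
# mentioned earlier as an approximation of Friday afternoon trading.
price_target = 166.27
friday_price = 154.45
upside = (price_target / friday_price - 1) * 100
print(f"Implied upside: {upside:.1f}%")  # ~7.7%, close to the article's 7.5%
```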
The lengthy transition, juxtaposed against an influx of competition, has investors and analysts on edge. Morningstar analyst Andrew Lang said the jury is still out on whether Watson and other strategic initiatives benefit IBM's long-term competitive position. The competitive environment is fierce, he said, particularly with cloud computing permeating the IT landscape.
Don't miss: Watson moves to Silicon Valley as IBM looks to cash in on Jeopardy champ
Patrick Moorhead, principal analyst at Moor Insights & Strategy, pointed to Watson, PowerAI, SoftLayer, Bluemix and quantum computing as successes of the IBM transition, but said IBM seems to be having trouble attracting new customers, as evidenced by its historically low sales.
IBM has many solid elements to their new business, he said, but the company needs to move quickly to exploit those capabilities.
In a note to clients Friday, Bernstein Research analyst Toni Sacconaghi reiterated a market-perform rating and $150 target on the stock, but said he believes shares are too expensive given significant secular pressures and uncertainties surrounding several aspects of IBM's turnaround.
"It has been a rough few weeks for Big Blue," he said. "The key question for investors is whether they should look to build positions now? Our simple answer: No."
Key Words: Chess grandmaster Garry Kasparov talks about artificial intelligence, competing against IBM's Deep Blue and Putin's grasp on Kremlin power
Link:
Watson won 'Jeopardy,' but IBM is not winning with artificial intelligence - MarketWatch
Posted in Artificial Intelligence
Comments Off on Watson won ‘Jeopardy,’ but IBM is not winning with artificial intelligence – MarketWatch
Cancer cells detected more accurately in hospital with artificial … – Phys.Org
Posted: at 3:38 am
May 5, 2017. Image: Microscopic landscape of various types of cells, including tumour cells (in red). Credit: University of Warwick
Cancer cells are to be detected and classified more efficiently and accurately, using ground-breaking artificial intelligence thanks to a new collaboration between the University of Warwick, Intel Corporation, the Alan Turing Institute and University Hospitals Coventry & Warwickshire NHS Trust (UHCW).
Scientists at the University of Warwick's Tissue Image Analytics (TIA) Laboratory, led by Professor Nasir Rajpoot from the Department of Computer Science, are creating a large digital repository of a variety of tumour and immune cells found in thousands of human tissue samples, and are developing algorithms to recognize these cells automatically.
"We are very excited about working with Intel under the auspices of the strategic relationship between Intel and the Alan Turing Institute," said Professor Rajpoot, who is also an Honorary Scientist at University Hospitals Coventry & Warwickshire NHS Trust (UHCW).
"The collaboration will enable us to benefit from world-class computer science expertise at Intel with the aim of optimising our digital pathology image analysis software pipeline and deploying some of the latest cutting-edge technologies developed in our lab for computer-assisted diagnosis and grading of cancer."
The digital pathology imaging solution aims to enable pathologists to increase their accuracy and reliability in analysing cancerous tissue specimens over what can be achieved with existing methods.
"We have long known that important aspects of cellular pathology can be done faster with computers than by humans," said Professor David Snead, clinical lead for cellular pathology and director of the UHCW Centre of Excellence.
"With this collaboration, we finally see a pathway toward bringing this science into practice. The successful adoption of these tools will stimulate better organisation of services, gains in efficiency, and above all, better care for patients, especially those with cancer."
The initial work focuses on lung cancer. The University of Warwick and Intel are collaborating to improve a model for computers to recognize cellular distinctions associated with various grades and types of lung cancer by using artificial intelligence frameworks such as TensorFlow running on Intel Xeon processors.
UHCW is annotating the digital pathology images to help inform the model. The aim is to create a model that will eventually be useful in many types of cancercreating more objective results, lowering the risk of human errors, and aiding oncologists and patients in their selection of treatments.
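As a rough sketch of the kind of model being described, the snippet below trains a small convolutional network with a recent version of TensorFlow/Keras to classify tissue-image patches. The directory layout, patch size and class count are assumptions made for illustration; this is not the Warwick/Intel pipeline itself.

```python
# A minimal sketch of a patch classifier for digital pathology, built with a
# recent TensorFlow/Keras. Directory layout, patch size and class count are
# assumptions for illustration, not the Warwick/Intel model.
import tensorflow as tf

NUM_CLASSES = 4          # e.g. tumour, lymphocyte, stromal, normal (illustrative)
PATCH_SIZE = (64, 64)

# Expects subfolders of "tissue_patches/train", one per class label.
train_data = tf.keras.utils.image_dataset_from_directory(
    "tissue_patches/train", image_size=PATCH_SIZE, batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(*PATCH_SIZE, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_data, epochs=5)
```

In practice the annotated labels come from pathologists (as UHCW is providing here), and production models are far deeper and validated against clinical outcomes.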
The TIA lab at Warwick and the Pathology Department at the UHCW have established the UHCW Centre of Excellence for Digital Pathology and begun digitising their histopathology service.
This digital pathology imaging solution will be the next step in revolutionising traditional healthcare with computerised systems and could be placed in any pathology department, in any hospital.
The project has been launched in collaboration with Intel and the Alan Turing Institutethe latter being the UK's national centre for data science, founded in 2015 in a joint venture between the University of Warwick and other top UK universities.
"This project is an excellent example of data science's potential to underpin critical improvements in health and well-being, an area of great importance to the Alan Turing Institute," said Dr. Anthony Lee, the Strategic Programme Director at the Alan Turing Institute for the collaboration between the Institute and Intel.
Rick Cnossen, general manager of HIT-Imaging Analytics in Intel's Data Center Group, commented, "This project has massive potential benefit for cellular pathology, and Intel technologies are the foundation for enabling this transformation.
"We've seen what has happened over recent years with the digitisation of X-rays (PACS). The opportunity to transform the way pathology images are handled and analysed, building on experience with PACS and combining data with other sources, could be truly ground-breaking.
"This collaboration could not only improve service efficiency, but also open up new and exciting analytical techniques for more personalised precision care."
See original here:
Cancer cells detected more accurately in hospital with artificial ... - Phys.Org
Posted in Artificial Intelligence
Comments Off on Cancer cells detected more accurately in hospital with artificial … – Phys.Org
How to get Google’s artificial intelligence on the Raspberry Pi – InfoWorld
Posted: at 3:38 am
By Swapnil Bhartiya, Thought Leader, InfoWorld | May 5, 2017
Opinions expressed by ICN authors are their own.
I am a heavy user of Raspberry Pis. Every year I build a massive musical lighting setup for Christmas using a couple of Pis. During Halloween, I build Pi-powered talking skeletons, spooky pumpkins, and scary lights. Raspberry Pi powers the wireless controller for my open source 3D printer and my water fountain in the garden. I am using a Pi to run a retro gaming rig, and my next projects include a remote controlled car and possibly a drone.
You get the point: I am a heavy user of Raspberry Pi and IoT.
There is one thing that I miss in all of these projects: I wish I were able not only to control them with voice, but also to give them some intelligence so that they can make logical decisions. And when it comes to AI and machine learning, no other platform beats Google's machine learning and Google Assistant. Now I am closer to bringing those capabilities to my Pi devices.
Google has started a project called the AIY Project, do-it-yourself artificial intelligence for makers, to bring Google Assistant to Raspberry Pi-powered projects. Google says that along with everything the Google Assistant already does, you can add your own question-and-answer pairs. Google has teamed up with the Raspberry Pi Foundation to create a new hardware add-on for Raspberry Pi called the Voice Kit.
The Voice Kit is a fully open source reference project that includes a Voice Hardware Accessory on Top (HAT), which contains electronic components for audio capture and playback, connectors for the dual-mic daughter board and speaker, GPIO pins to connect low-voltage components like micro-servos and sensors, and an optional barrel connector for a dedicated power supply.
The kit is designed and tested with the Raspberry Pi 3 Model B. Just like Google Cardboard, Voice Kit comes with a neat cardboard case.
Those who are more ambitious can also run Android Things on the Voice Kit, turning it into a fully functional prototype to build their own commercial IoT products.
It's amazing to see that people can now take advantage of Google's massive machine learning capabilities in their own home-brew projects. I can't wait to get my hands on the kit so I can talk to my 3D printer and add smart features to my drone and RC cars.
It will be so incredible to say "printer, change filament," or "water the marigold pot" or "turn the Christmas lights on" and have these commands obeyed! I am overwhelmed with the possibilities because these are the devices that I built.
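A rough sketch of what such command handling might look like on a Raspberry Pi appears below. The recognize_speech() stub, GPIO pin number and printer call are all hypothetical placeholders for illustration, not the Voice Kit's actual SDK.

```python
# An illustrative sketch (not the Voice Kit's SDK) of mapping recognized voice
# commands to actions on a Raspberry Pi. The pin number and printer call are
# hypothetical; recognize_speech() stands in for a real speech interface.
import RPi.GPIO as GPIO

CHRISTMAS_LIGHTS_PIN = 17  # hypothetical wiring: relay driving the light controller

GPIO.setmode(GPIO.BCM)
GPIO.setup(CHRISTMAS_LIGHTS_PIN, GPIO.OUT)

def recognize_speech():
    # Stand-in for real speech recognition; here the command is simply typed.
    return input("Command: ")

def handle_command(text):
    text = text.lower()
    if "christmas lights on" in text:
        GPIO.output(CHRISTMAS_LIGHTS_PIN, GPIO.HIGH)
    elif "christmas lights off" in text:
        GPIO.output(CHRISTMAS_LIGHTS_PIN, GPIO.LOW)
    elif "change filament" in text:
        print("Pausing the 3D print for a filament change...")  # would call the printer's API
    else:
        print("Unrecognized command:", text)

while True:
    handle_command(recognize_speech())
```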
If you want the kit, Google is giving it away with the latest issue of MagPi magazine. If you don't want to subscribe to the magazine, you can sign up for the waiting list to get just the hardware unit from Google. Barnes & Noble is also selling the kit in its stores.
This is the first kit from Google, and the company is working on many more such kits. I think a real IoT revolution is ahead of us. I am going to build some neat projects; are you building something?
This article is published as part of the IDG Contributor Network.
Swapnil Bhartiya is a journalist and writer who has been covering Linux & Open Source for 10 years.
Read more:
How to get Google's artificial intelligence on the Raspberry Pi - InfoWorld
Posted in Artificial Intelligence
Comments Off on How to get Google’s artificial intelligence on the Raspberry Pi – InfoWorld
5 Ways Artificial Intelligence Is Already Changing Government – Government Technology
Posted: at 3:38 am
"We don't have enough people to keep up."
"We have to go through miles of case law on this one."
"The paperwork is killing our productivity."
"We don't know because we can't track events like that."
Spend enough time in or around government agencies, and these are the kinds of pressures you're likely to hear about. How can governments overcome challenges like these that are both detail-oriented and labor-intensive? Increasingly, they could be turning to artificial intelligence (AI).
You might think of AI as futuristic, but it's already having a profound impact on government. Cognitive technologies can't replace the complex strategic planning and management required of public administrators. But we're entering an era of automated intelligence -- the computerization of tasks previously thought to require human judgment.
Here, as explored in a new Deloitte study, are five ways AI can help government agencies cut costs, free workers for critical tasks, and deliver better, faster services.
1: Overcoming resource constraints: From Facebook posts to sensor readings, we generate far too much data for humans to make sense of without help. Cognitive technologies can help to sift that data. Electronic document discovery, for example, can locate 95 percent of relevant documents in the discovery phase of legal cases, compared to about 50 percent for humans, and in a fraction of the time. And then there's NASA's Volcano Sensorweb, a network of space, terrestrial and airborne sensors that can trigger closer observation by human experts who can pinpoint and record just-in-time imagery of volcanoes and other cryospheric events. This is a major promise of AI: humans and computers combining their strengths.
2: Dramatically cutting paperwork: By pointing the way to new opportunities for automation, AI can help to significantly reduce administrative tasks, maximizing time for mission-focused work. One Colorado survey, for example, found child-welfare caseworkers spending 37.5 percent of their time on documentation and administration, versus just 9 percent on actual contact with children and their families. And at the federal level, our research indicates that simply documenting and recording information consumes a half-billion staff hours each year. "Bots" can automate all kinds of activities like these, from invoice processing to filling in forms, from data entry to writing budget-reporting documents. By freeing up all that time, we can create a more effective government, empowering employees to do the work that really matters: serving citizens in need.
3: Reducing backlogs: Backlogs and long wait times can be hugely frustrating to both citizens and government employees. At the U.S. Patent and Trademark Office, the backlog of patent applications topped half a million in 2015. Cognitive technologies can sift through data backlogs and perform end-to-end business processes on a massive scale while leaving difficult cases to human experts.
4: Improving prediction: Machine learning and natural-language processing can reveal patterns, enabling better predictive capabilities. By trial and error, computers learn how to learn, mining information to discover patterns in data that can help predict future events. When your email program flags a message as spam, or your credit card company warns you of a potentially fraudulent use of your card, machine learning is probably involved. In government, the Army is developing wearable monitors that use a machine-learning algorithm to determine wound seriousness, helping medics prioritize treatment. Meanwhile, the Department of Energy's self-learning weather and renewable forecasting technology uses machine learning, sensor information, cloud-motion physics derived from sky cameras, and satellite observations to improve solar forecasting accuracy by 30 percent.
5: Answering citizen queries: Giving citizens quick answers to important questions improves service while reducing costs and backlog. "Chatbots" can handle tasks such as password resets (which one North Carolina agency's IT help desk found made up more than 80 percent of its tickets), freeing staff for more complex tasks. On the U.S. Army website, an interactive virtual assistant does the work of 55 recruiters: It answers questions, checks qualifications and refers prospective recruits to human recruiters. The system uses machine learning to improve recognition and helpful responses, with an accuracy rate of over 94 percent.
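The spam-flagging example in point 4 is a classic supervised-learning task. A minimal sketch with scikit-learn and toy data (real systems train on far more messages and richer features) might look like this:

```python
# A minimal sketch of the spam-flagging example above: a Naive Bayes classifier
# trained on a handful of toy messages. Real systems use far more data and features.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "Win a free prize now, click here",
    "Lowest price on cheap meds, limited offer",
    "Meeting moved to 3pm, see agenda attached",
    "Can you review the budget report before Friday?",
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)
model = MultinomialNB().fit(X, labels)

test = vectorizer.transform(["Click here for a free offer"])
print("spam" if model.predict(test)[0] == 1 else "not spam")
```

And the password-reset chatbots in point 5 can be approximated, at their simplest, by keyword-based intent routing. The intents and answers below are invented for illustration; no agency's actual system is shown.

```python
# A toy illustration of the "chatbot" idea above: route a citizen's question to a
# canned answer or to a human, based on simple keyword intents.
INTENTS = {
    "password reset": ["password", "reset", "locked out", "log in"],
    "office hours":   ["hours", "open", "close", "holiday"],
}

ANSWERS = {
    "password reset": "You can reset your password at the self-service portal.",
    "office hours":   "Our offices are open 8am-5pm, Monday through Friday.",
}

def answer(question):
    q = question.lower()
    for intent, keywords in INTENTS.items():
        if any(k in q for k in keywords):
            return ANSWERS[intent]
    return "Let me connect you with a staff member."

print(answer("I'm locked out of my account and need to reset my password"))
print(answer("How do I apply for a building permit?"))
```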
As these examples illustrate, cognitive technologies eventually will fundamentally change how government works, and the changes will likely come much sooner than many think. Some traditional models assume limits on the tasks that information technology can execute. Increasingly, however, such assumptions will no longer apply. As cognitive technologies advance in power, government agencies will need to bring more creativity to workforce planning and work design. The most forward-leaning jurisdictions will see cognitive technologies as an opportunity to reimagine the nature of government work itself -- to make the most of complementary human and machine skills.
This article was originally published on Governing.
Read the original here:
5 Ways Artificial Intelligence Is Already Changing Government - Government Technology
Posted in Artificial Intelligence
Comments Off on 5 Ways Artificial Intelligence Is Already Changing Government – Government Technology
AI everywhere – TechCrunch
Posted: at 3:38 am
I asked Huang to compare the GTC of eight years ago to the GTC of today, given how much of Nvidia's focus has changed.
"We invented a computing model called GPU-accelerated computing and we introduced it slightly over 10 years ago," Huang said, noting that while AI has only recently come to dominate tech news headlines, the company was working on the foundation long before that. "And so we started evangelizing all over the world. GTC is our developers conference for that. The first year, with just a few hundred people, we were mostly focused in two areas. They were focused in computer graphics, of course, and physics simulation, whether it's finite element analysis or fluid simulations or molecular dynamics. It's basically Newtonian physics."
A lot can change in a decade, however, and Huang points to a few things that have changed in the past 10 years that have shifted the landscape in which Nvidia operates.
"The first thing is that Moore's Law has really slowed," he said. "So as a result, GPU-accelerated computing gave us life after Moore's Law, and it extended the capability of computing so that these applications that desperately need more computing can continue to advance. Meanwhile, the reach of GPUs has gone far and wide, and it's much more than computer graphics today. We've reached out into all fields (computer graphics, of course, virtual reality, augmented reality) to all kinds of interesting and challenging physics simulations."
But it doesn't end there. Nvidia's tech now resides in many of the world's most powerful supercomputers, and the applications include fields that were once considered beyond the realm of modern computing capabilities. However, the train that Nvidia has been riding to great success recently, AI, was a later development still.
"AI is just the modern way of doing software."
"Almost every supercomputer in the world today has some form of acceleration, much of it from Nvidia," Huang told me. "And then there was quantum mechanics. The field of quantum chemistry is going quite well and there's a great deal of research in quantum chemistry, in quantum mechanics. And then several years ago, I would say about five years ago, we saw the emergence of a new field in computer science called deep learning. And deep learning, combined with the rich amount of data that's available and the processing capability, came together to become what people call the Big Bang of modern AI."
This was a landscape shift that moved Nvidia from the periphery. Now, Nvidia's graphics hardware occupies a more pivotal role, according to Huang, and the company's long list of high-profile partners, including Microsoft, Facebook and others, bears him out.
GPUs really have become the center of the AI universe, though some alternatives like FPGAs are starting to appear as well. At GTC, Nvidia has had many industry-leading partners onstage and off, and this year will be no exception: Microsoft, Facebook, Google and Amazon will all be present. It's also a hub for researchers, and representatives from the University of Toronto, Berkeley, Stanford, MIT, Tsinghua University, the Max Planck Institutes and many more will also be in attendance.
GTC, in other words, has evolved into arguably the biggest developer event focused on artificial intelligence in the world. Nowhere else can you find most of the major tech companies in the world, along with academic and research organizations under one roof. And Nvidia is also focusing on bringing a third group more into the mix: startups.
Nvidia has an accelerator program called Inception that Huang says is its AI platform for startups. About 2,000 startups participate, getting support from Nvidia in one form or another, including financing, platform access, exposure to experts and more.
Huang also notes that GTC is an event for different industry partners, including GlaxoSmithKline, Procter & Gamble and GE Healthcare. Some of these industry-side partners would previously have been out of place even at very general computing events. That's because, unlike with the onset of smartphones, AI isn't just changing how you present computing products to a user, but also what areas actually represent opportunities for computing innovation, according to Huang.
"AI is eating software," Huang continued. "The way to think about it is that AI is just the modern way of doing software. In the future, we're not going to see software that is not going to continue to learn over time, and be able to perceive and reason, and plan actions, and that continues to improve as we use it. These machine-learning approaches, these artificial intelligence-based approaches, will define how software is developed in the future. Just about every startup company does software these days, and even non-startup companies do their own software. Similarly, every startup in the future will have AI."
Nor will this be limited to cloud-based intelligence, resident in powerful, gigantic data centers. Huang notes that we're now able to apply computing to things where before it made no sense to do so, including to air conditioners and other relatively dumb objects.
"You've got cars, you've got drones, you've got microphones; in the future, almost every electronic device will have some form of deep learning inferencing within it. We call that AI at the edge," he said. "And eventually there'll be a trillion devices out there: vending machines, every microphone, every camera, every house will have deep learning capability. And some of it needs a lot of performance; some of it doesn't need a lot of performance. Some of it needs a lot of flexibility because it continues to evolve and get smarter. Some of it doesn't have to get smarter. And we'll have custom solutions for it all."
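A minimal sketch of what "AI at the edge" inference can look like in practice is shown below, using TensorFlow Lite to run a pre-trained model on-device. The model file name and the random stand-in input are assumptions for illustration; a float32 model is assumed.

```python
# A minimal sketch of on-device ("edge") inference with TensorFlow Lite.
# The model file and its expected float32 input are assumptions for illustration.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="small_classifier.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Fake sensor/camera input shaped to whatever the model expects.
input_shape = input_details[0]["shape"]
dummy_input = np.random.random_sample(input_shape).astype(np.float32)

interpreter.set_tensor(input_details[0]["index"], dummy_input)
interpreter.invoke()
prediction = interpreter.get_tensor(output_details[0]["index"])
print("Model output:", prediction)
```

The same pattern scales down to microcontrollers and up to drones and cars: the model is trained in the data center, then shipped to the device for local inference.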
See the original post here:
Posted in Artificial Intelligence
Comments Off on AI everywhere – TechCrunch
Kasparov on Putin and the Future of Artificial Intelligence – NBCNews.com
Posted: May 4, 2017 at 3:19 pm
When IBM supercomputer Deep Blue defeated chess champion Garry Kasparov in May 1997, it was considered a monumental achievement for artificial intelligence. But in his new book, Deep Thinking, Kasparov shows how this discovery only set up a number of new hurdles for the artificial intelligence world to overcome.
"We realized after 1997 that it's very much back to square one because beating the world champion and making it to the top of Mount Everest in chess - we just discovered at that point that there are many, many high peaks ahead of us," Kasparov told NBC News' Chuck Todd in the latest edition of 1947: The Meet the Press Podcast. "The question of artificial intelligence still remained unanswered."
Kasparov, who has been a vocal critic of Vladimir Putin's regime in Russia, notes that there is an intersection between his interests in computing and democracy. Kasparov said that technological development and Russia's alleged cyber attacks on the free world go hand in hand.
Although the rise in networked computing has given foreign adversaries a tool with which to attack governments and undermine democracies, Kasparov says the same tools can be used against autocratic regimes.
"Cybersecurity and social media become front lines of this new conflict," he said. "So we're trying to play by the rules and they use our own technology, technology invented in the free world, against the very foundation of the free world."
The threat of foreign government conflicts was central to the evolution of the early internet, but Kasparov noted that the founding fathers of computer science, great minds like Alan Turing and Claude Shannon, believed that chess would be the ultimate test for artificial intelligence because winning a game of chess requires intelligence. What they were unable to anticipate was the dramatic growth of "brute force" computing.
Today, "a free chess app on the latest mobile phone is stronger than Deep Blue," Kasparov said.
With the rapid rate at which technology is developing today, there may yet be a new Holy Grail for artificial intelligence. According to Kasparov, part of the problem for AI rests in the fact that many Americans view technology as competition. Automation has long been viewed as a menace to the working class, but he suggests looking at the development from another perspective.
"What about looking for a positive side?" Kasparov said. "Now we have new intelligent machines, and they will be taking over I would say more menial aspects of cognition. So maybe it will help us to elevate our lives toward curiosity, creativity, beauty, joy so there are other things that we can do if we move to the next level of the development of our civilization."
"Machines will be better at anything that we're doing now, but as long as we are capable of dreaming and creating new things - say moving to other planets or exploring oceans. There are many things that we can do where machines will need human qualities. The question is how we combine it, how we become proper operators of these massive brute force and also certain other new qualities," Kasparov said.
1947: The Meet the Press Podcast is available on iTunes, Google Play, or wherever you get your podcasts.
Excerpt from:
Kasparov on Putin and the Future of Artificial Intelligence - NBCNews.com
Posted in Artificial Intelligence
Comments Off on Kasparov on Putin and the Future of Artificial Intelligence – NBCNews.com
How Artificial Intelligence May Help Doctors Save Lives – Fortune
Posted: at 3:19 pm
Artificial intelligence has shown promise in helping doctors predict which patients may be susceptible to chronic diseases like Alzheimer's.
But despite the rapid advances, the healthcare industry is still in the early days of rolling out AI-powered treatments and drugs, Morten Sogaard, Pfizer's vice president and head of genome sciences and technologies, said at Fortune's Brainstorm Health conference in San Diego on Wednesday.
Pfizer has been using AI techniques like machine learning for years to sift through data, help research new drug compounds (essentially the combination of multiple drugs), and determine the best participants for clinical trials, he said. "In some cases, it is nothing new," Sogaard said about AI in healthcare.
What is new, however, is the rising flood of information like genomic data and sensor data from medical devices, he explained. This influx has made it more difficult to understand key connections that could help researchers discover new treatments.
Currently, Pfizer is using deep learning, which Google helped to popularize as a way to train computers to recognize cats in photos, to mine electronic health records and lab data. By doing so, Pfizer can better understand how ailments like autoimmune and fatty liver diseases progress, he explained.
Sogaard said that these deep learning techniques have shown promise in finding disease patterns across large groups of people, but the ultimate goal is to eventually help individual patients.
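As a toy illustration of that pattern-finding task, the sketch below fits a small neural network to synthetic "lab panel" features to predict disease progression. The features, labels and data are entirely invented; this is not Pfizer's model or data.

```python
# A toy sketch of pattern-finding across patient records: a small neural network
# over synthetic lab values predicting progression. Data and labels are invented.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
# Synthetic "lab panel" features for 500 patients (e.g. liver enzymes, BMI, age).
X = rng.normal(size=(500, 6)).astype(np.float32)
# Synthetic label: whether the condition progressed within a year.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(np.float32)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(6,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=10, verbose=0)
print("Training accuracy:", model.evaluate(X, y, verbose=0)[1])
```

Real pipelines work with de-identified records, far richer longitudinal features, and clinical validation before any result informs care for an individual patient.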
Pfizer has also partnered with IBM to use its Watson data-crunching technology in pharmaceutical research. But the company is also open to partnering with Google, Amazon, and other cloud-computing providers to incorporate their respective technologies.
Sogaard believes a handful of cloud computing providers will have AI technologies that drug companies could eventually use for research and development. "However, it will not all happen the day after tomorrow, of course," he said.
Federal regulations have not yet caught up to the rapid pace of innovation that could one day help predict and diagnose diseases using a combination of genomic, protein, and medical imaging data. But Sogaard is hopeful, and based on Pfizer's meetings with regulators, he believes the Food and Drug Administration is open-minded about AI-assisted medical treatment.
View original post here:
How Artificial Intelligence May Help Doctors Save Lives - Fortune
Posted in Artificial Intelligence
Comments Off on How Artificial Intelligence May Help Doctors Save Lives – Fortune