AI Warning: Compassionless world-changing A.I. already here - 'You WON'T see them coming' – Express.co.uk

Fear surrounding artificial intelligence has remained prevalent as society has witnessed the massive leaps the technology sector has made in recent years. Shadow Robot Company director Rich Walker explained that it is not evil A.I. people should necessarily be afraid of, but rather the companies it masquerades behind. During an interview with Express.co.uk, Mr Walker explained that advanced A.I. with nefarious intent toward mankind would not openly show itself.

He noted that companies which actively do harm to society and the people within it would be more appealing to an A.I. that had goals of destroying humanity.

He said: "There is the kind of standard fear of A.I. that comes from science fiction.

"Which is either the humanoid robot, like from The Terminator, that takes over and tries to destroy humanity.

"Or it is the cold compassionless machine that changes the world around it in its own image, and there is no space for humans in there.

"There is actually quite a good argument that there are cold compassionless machines that change the world around us in their own image.

"They are called corporations.

"We shouldn't necessarily worry about A.I. as something that will come along and change everything.

"We already have these organisations that will do that.

"They operate outside of national rules of law and societal codes of conduct.

"So, A.I. is not the bit that makes that happen; the bits that make that happen are already in place."

He later added: "I guess you could say that a company that has known for 30 years that climate change was inevitable, and has systematically defunded research into climate change and funded research that shows climate change isn't happening, is the kind of organisation I am thinking of.

"That is the kind of behaviour where you have to say: that is trying to destroy humanity.

"They would argue no, they are not trying to do that, but the fact would be that the effect of what you are doing is trying to destroy humanity.

"If you wanted to have an artificial intelligence that was a bad guy, a large corporation that profits from fossil fuels and systematically hid the information that fossil fuels were bad for the planet, that would be an A.I. bad guy in my book."

The Shadow Robot Company has directed its focus toward creating complex, dexterous robot hands that mimic human hands.

The robotics company uses its tactile Telerobot technology to demonstrate how A.I. programmes can be used alongside human interaction to create complex robotic relationships.

Artificial Intelligence Is Rushing Into Patient Care – And Could Raise Risks – Scientific American

Health products powered by artificial intelligence, or AI, are streaming into our lives, from virtual doctor apps to wearable sensors and drugstore chatbots.

IBM boasted that its AI could "outthink cancer." Others say computer systems that read X-rays will make radiologists obsolete.

"There's nothing that I've seen in my 30-plus years studying medicine that could be as impactful and transformative as AI," said Eric Topol, a cardiologist and executive vice president of Scripps Research in La Jolla, Calif. AI can help doctors interpret MRIs of the heart, CT scans of the head and photographs of the back of the eye, and could potentially take over many mundane medical chores, freeing doctors to spend more time talking to patients, Topol said.

Even the U.S. Food and Drug Administration, which has approved more than 40 AI products in the past five years, says the potential of digital health is "nothing short of revolutionary."

Yet many health industry experts fear AI-based products won't be able to match the hype. Many doctors and consumer advocates fear that the tech industry, which lives by the mantra "fail fast and fix it later," is putting patients at risk, and that regulators aren't doing enough to keep consumers safe.

Early experiments in AI provide reason for caution, said Mildred Cho, a professor of pediatrics at Stanford's Center for Biomedical Ethics.

Systems developed in one hospital often flop when deployed in a different facility, Cho said. Software used in the care of millions of Americans has been shown to discriminate against minorities. And AI systems sometimes learn to make predictions based on factors that have less to do with disease than the brand of MRI machine used, the time a blood test is taken or whether a patient was visited by a chaplain. In one case, AI software incorrectly concluded that people with pneumonia were less likely to die if they had asthma, an error that could have led doctors to deprive asthma patients of the extra care they need.
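The confounder failure Cho describes is easy to reproduce. The sketch below is a synthetic, hypothetical example (not any real clinical system): a simple classifier is trained on data where a "scanner brand" column happens to track the diagnosis at the training hospital, so the model leans on that shortcut and falls apart when the correlation disappears at a new site.

```python
# Hypothetical sketch of a model learning a confounder (e.g., MRI brand)
# instead of the disease signal itself. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

lab_value = rng.normal(0, 1, n)                  # weak true signal
disease = (lab_value + rng.normal(0, 2, n)) > 0  # noisy ground truth

# At the training hospital, sicker patients tend to be scanned on
# brand "1", so the scanner column leaks the label.
scanner = np.where(disease, rng.binomial(1, 0.9, n), rng.binomial(1, 0.1, n))

model = LogisticRegression().fit(np.column_stack([lab_value, scanner]), disease)
print("weights (lab value, scanner):", model.coef_[0])  # scanner dominates

# At a new hospital the brand is unrelated to disease, and accuracy drops.
scanner_new = rng.binomial(1, 0.5, n)
print("accuracy at new site:",
      model.score(np.column_stack([lab_value, scanner_new]), disease))
```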

"It's only a matter of time before something like this leads to a serious health problem," said Steven Nissen, chairman of cardiology at the Cleveland Clinic.

Medical AI, which pulled in $1.6 billion in venture capital funding in the third quarter alone, is "nearly at the peak of inflated expectations," concluded a July report from the research company Gartner. "As the reality gets tested, there will likely be a rough slide into the trough of disillusionment."

That reality check could come in the form of disappointing results when AI products are ushered into the real world. Even Topol, the author of "Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again," acknowledges that many AI products are little more than hot air. "It's a mixed bag," he said.

Experts such as Bob Kocher, a partner at the venture capital firm Venrock, are more blunt. Most AI products have little evidence to support them, Kocher said. Some risks won't become apparent until an AI system has been used by large numbers of patients. "We're going to keep discovering a whole bunch of risks and unintended consequences of using AI on medical data," Kocher said.

None of the AI products sold in the U.S. have been tested in randomized clinical trials, the strongest source of medical evidence, Topol said. The first and only randomized trial of an AI system, which found that colonoscopy with computer-aided diagnosis found more small polyps than standard colonoscopy, was published online in October.

Few tech startups publish their research in peer-reviewed journals, which allow other scientists to scrutinize their work, according to a January article in the European Journal of Clinical Investigation. Such "stealth research," described only in press releases or promotional events, often overstates a company's accomplishments.

And although software developers may boast about the accuracy of their AI devices, experts note that AI models are mostly tested on computers, not in hospitals or other medical facilities. Using unproven software may make patients into unwitting guinea pigs, said Ron Li, medical informatics director for AI clinical integration at Stanford Health Care.

AI systems that learn to recognize patterns in data are often described as "black boxes" because even their developers don't know how they have reached their conclusions. Given that AI is so new, and many of its risks unknown, the field needs careful oversight, said Pilar Ossorio, a professor of law and bioethics at the University of Wisconsin-Madison.

Yet the majority of AI devices don't require FDA approval.

"None of the companies that I have invested in are covered by the FDA regulations," Kocher said.

Legislation passed by Congress in 2016 and championed by the tech industry exempts many types of medical software from federal review, including certain fitness apps, electronic health records and tools that help doctors make medical decisions.

There's been little research on whether the 320,000 medical apps now in use actually improve health, according to a report on AI published Dec. 17 by the National Academy of Medicine.

"Almost none of the [AI] stuff marketed to patients really works," said Ezekiel Emanuel, professor of medical ethics and health policy in the Perelman School of Medicine at the University of Pennsylvania.

The FDA has long focused its attention on devices that pose the greatest threat to patients. And consumer advocates acknowledge that some devices, such as ones that help people count their daily steps, need less scrutiny than ones that diagnose or treat disease.

Some software developers don't bother to apply for FDA clearance or authorization, even when legally required, according to a 2018 study in Annals of Internal Medicine.

Industry analysts say that AI developers have little interest in conducting expensive and time-consuming trials. "It's not the main concern of these firms to submit themselves to rigorous evaluation that would be published in a peer-reviewed journal," said Joachim Roski, a principal at Booz Allen Hamilton, a technology consulting firm, and co-author of the National Academy's report. "That's not how the U.S. economy works."

But Oren Etzioni, chief executive officer at the Allen Institute for AI in Seattle, said AI developers have a financial incentive to make sure their medical products are safe.

"If failing fast means a whole bunch of people will die, I don't think we want to fail fast," Etzioni said. "Nobody is going to be happy, including investors, if people die or are severely hurt."

Relaxed AI Standards At The FDA

The FDA has come under fire in recent years for allowing the sale of dangerous medical devices, which have been linked by the International Consortium of Investigative Journalists to 80,000 deaths and 1.7 million injuries over the past decade.

Many of these devices were cleared for use through a controversial process called the 510(k) pathway, which allows companies to market moderate-risk products with no clinical testing as long as they're deemed similar to existing devices. In 2011, a committee of the National Academy of Medicine concluded the 510(k) process is so fundamentally flawed that the FDA should throw it out and start over.

Instead, the FDA is using the process to greenlight AI devices.

Of the 14 AI products authorized by the FDA in 2017 and 2018, 11 were cleared through the 510(k) process, according to a November article in JAMA. None of these appear to have had new clinical testing, the study said. The FDA cleared an AI device designed to help diagnose liver and lung cancer in 2018 based on its similarity to imaging software approved 20 years earlier. That software had itself been cleared because it was deemed substantially equivalent to products marketed before 1976.

AI products cleared by the FDA today are largely "locked," so that their calculations and results will not change after they enter the market, said Bakul Patel, director for digital health at the FDA's Center for Devices and Radiological Health. The FDA has not yet authorized "unlocked" AI devices, whose results could vary from month to month in ways that developers cannot predict.
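"Locked" here means the model's parameters are frozen at clearance, so a given input always yields the same output. One simple way a vendor or auditor might verify that a deployed model is still the cleared one is to fingerprint its weights file. The sketch below is a generic illustration, not an FDA-specified mechanism; the file name and recorded digest are invented placeholders.

```python
# Hypothetical sketch: checking that a "locked" model is byte-for-byte
# identical to the cleared version. File name and digest are invented.
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 digest of a model file's bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

CLEARED_DIGEST = "..."  # digest recorded at clearance time (placeholder)

if fingerprint("model_weights.bin") != CLEARED_DIGEST:
    raise RuntimeError("Deployed model differs from the cleared version")
```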

To deal with the flood of AI products, the FDA is testing a radically different approach to digital device regulation, focusing on evaluating companies, not products.

The FDA's pilot pre-certification program, launched in 2017, is designed to reduce the time and cost of market entry for software developers, imposing the "least burdensome" system possible. FDA officials say they want to keep pace with AI software developers, who update their products much more frequently than makers of traditional devices, such as X-ray machines.

Scott Gottlieb said in 2017, while he was FDA commissioner, that government regulators need to make sure their approach to innovative products "is efficient and that it fosters, not impedes, innovation."

Under the plan, the FDA would pre-certify companies that demonstrate "a culture of quality and organizational excellence," which would allow them to provide less upfront data about devices.

Pre-certified companies could then release devices with a streamlined review or no FDA review at all. Once products are on the market, companies will be responsible for monitoring their own products' safety and reporting back to the FDA. Nine companies have been selected for the pilot: Apple, FitBit, Samsung, Johnson & Johnson, Pear Therapeutics, Phosphorus, Roche, Tidepool and Verily Life Sciences.

High-risk products, such as software used in pacemakers, will still get a comprehensive FDA evaluation. "We definitely don't want patients to be hurt," said Patel, who noted that devices cleared through pre-certification can be recalled if needed. "There are a lot of guardrails still in place."

But research shows that even low- and moderate-risk devices have been recalled due to serious risks to patients, said Diana Zuckerman, president of the National Center for Health Research. "People could be harmed because something wasn't required to be proven accurate or safe before it is widely used."

Johnson & Johnson, for example, has recalled hip implants and surgical mesh.

In a series of letters to the FDA, the American Medical Association and others have questioned the wisdom of allowing companies to monitor their own performance and product safety.

"The honor system is not a regulatory regime," said Jesse Ehrenfeld, who chairs the physician group's board of trustees. In an October letter to the FDA, Sens. Elizabeth Warren (D-Mass.), Tina Smith (D-Minn.) and Patty Murray (D-Wash.) questioned the agency's ability to ensure company safety reports are "accurate, timely and based on all available information."

When Good Algorithms Go Bad

Some AI devices are more carefully tested than others.

An AI-powered screening tool for diabetic eye disease was studied in 900 patients at 10 primary care offices before being approved in 2018. The manufacturer, IDx Technologies, worked with the FDA for eight years to get the product right, said Michael Abramoff, the company's founder and executive chairman.

The test, sold as IDx-DR, screens patients for diabetic retinopathy, a leading cause of blindness, and refers high-risk patients to eye specialists, who make a definitive diagnosis.

IDx-DR is the first "autonomous" AI product, one that can make a screening decision without a doctor. The company is now installing it in primary care clinics and grocery stores, where it can be operated by employees with a high school diploma. Abramoff's company has taken the unusual step of buying liability insurance to cover any patient injuries.

Yet some AI-based innovations intended to improve care have had the opposite effect.

A Canadian company, for example, developed AI software to predict a person's risk of Alzheimer's based on their speech. Predictions were more accurate for some patients than others. Difficulty finding the right word may be due to unfamiliarity with English, rather than to cognitive impairment, said co-author Frank Rudzicz, an associate professor of computer science at the University of Toronto.

Doctors at New York's Mount Sinai Hospital hoped AI could help them use chest X-rays to predict which patients were at high risk of pneumonia. Although the system made accurate predictions from X-rays shot at Mount Sinai, the technology flopped when tested on images taken at other hospitals. Eventually, researchers realized the computer had merely learned to tell the difference between the hospital's portable chest X-rays, taken at a patient's bedside, and those taken in the radiology department. Doctors tend to use portable chest X-rays for patients too sick to leave their room, so it's not surprising that these patients had a greater risk of lung infection.

DeepMind, a company owned by Google, has created an AI-based mobile app that can predict which hospitalized patients will develop acute kidney failure up to 48 hours in advance. A blog post on the DeepMind website described the system, used at a London hospital, as a "game changer." But the AI system also produced two false alarms for every correct result, according to a July study in Nature. That may explain why patients' kidney function didn't improve, said Saurabh Jha, associate professor of radiology at the Hospital of the University of Pennsylvania. Any benefit from early detection of serious kidney problems may have been diluted by a high rate of overdiagnosis, in which the AI system flagged borderline kidney issues that didn't need treatment, Jha said. Google had no comment in response to Jha's conclusions.
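Jha's ratio, two false alarms per correct alert, pins the system's positive predictive value at one in three. A quick back-of-the-envelope, with illustrative counts rather than figures from the Nature study itself:

```python
# "Two false alarms for every correct result" implies a positive
# predictive value of 1/3. Counts below are illustrative only.
true_positives = 100
false_positives = 2 * true_positives

ppv = true_positives / (true_positives + false_positives)
print(f"positive predictive value: {ppv:.2f}")  # 0.33: 2 of 3 alerts are noise
```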

False positives can harm patients by prompting doctors to order unnecessary tests or withhold recommended treatments, Jha said. For example, a doctor worried about a patient's kidneys might stop prescribing ibuprofen, a generally safe pain reliever that poses a small risk to kidney function, in favor of an opioid, which carries a serious risk of addiction.

As these studies show, software with impressive results in a computer lab can founder when tested in real time, Stanford's Cho said. That's because diseases are more complex, and the health care system far more dysfunctional, than many computer scientists anticipate.

Many AI developers cull electronic health records because they hold huge amounts of detailed data, Cho said. But those developers often aren't aware that they're building atop a deeply broken system. Electronic health records were developed for billing, not patient care, and are filled with mistakes or missing data.

A KHN investigation published in March found sometimes life-threatening errors in patients' medication lists, lab tests and allergies.

In view of the risks involved, doctors need to step in to protect their patients' interests, said Vikas Saini, a cardiologist and president of the nonprofit Lown Institute, which advocates for wider access to health care.

While it is the job of entrepreneurs to think big and take risks, Saini said, it is the job of doctors to protect their patients.

Kaiser Health News (KHN) is a nonprofit news service covering health issues. It is an editorially independent program of the Kaiser Family Foundation that is not affiliated with Kaiser Permanente.

One key to artificial intelligence on the battlefield: trust – C4ISRNet

To understand how humans might better marshal autonomous forces during battle in the near future, it helps to first consider the nature of mission command in the past.

Derived from a Prussian school of battle, mission command is a form of decentralized command and control. Think about a commander who is given an objective and then trusted to meet that goal to the best of their ability and to do so without conferring with higher-ups before taking further action. It is a style of operating with its own advantages and hurdles, obstacles that map closely onto the autonomous battlefield.

"At one level, mission command really is a management of trust," said Ben Jensen, a professor of strategic studies at the Marine Corps University. Jensen spoke as part of a panel on multidomain operations at the Association of the United States Army AI and Autonomy symposium in November. "We're continually moving choice and agency from the individual because of optimized algorithms helping [decision-making]. Is this fundamentally irreconcilable with the concept of mission command?"

The problem for military leaders then is two-fold: can humans trust the information and advice they receive from artificial intelligence? And, related, can those humans also trust that any autonomous machines they are directing are pursuing objectives the same way people would?

To the first point, Robert Brown, director of the Pentagons multidomain task force, emphasized that using AI tools means trusting commanders to act on that information in a timely manner.

"Mission command is saying: you're going to provide your subordinates the depth, the best data you can get them, and you're going to need AI to get that quality data. But then that's balanced with their own ground and the art of what's happening," Brown said. "We have to be careful. You certainly can lose that speed and velocity of decision."

Before the tools ever get to the battlefield, before the algorithms are ever bent toward war, military leaders must ensure the tools as designed actually do what service members need.

"How do we create the right type of decision aids that still empower people to make the call, but give them the information content to move faster?" said Tony Frazier, an executive at Maxar Technologies.

An intelligence product, using AI to provide analysis and information to combatants, will have to fall in the sweet spot of offering actionable intelligence, without bogging the recipient down in details or leaving them uninformed.

"One thing that's remained consistent is folks will do one of three things with overwhelming information," Brown said. "They will wait for perfect information. They'll just wait, wait, wait; they'll never have perfect information, and adversaries [will have] done 10 other things, by the way. Or they'll be overwhelmed and disregard the information."

The third path users will take, Brown said, is the very task commanders want them to follow: find golden needles in eight stacks of information to help them make a decision in a timely manner.

Getting there, however, where information is empowering instead of paralyzing or disheartening, is the work of training. Adapting for the future means practicing in the future environment, and that means getting new practitioners familiar with the kinds of information they can expect on the battlefield.

"Our adversaries are going to bring a lot of dilemmas our way, and so our ability to comprehend those challenges, and then hopefully not just react but proactively do something to prevent those actions, is absolutely critical," said Brig. Gen. David Kumashiro, the director of Joint Force Integration for the Air Force.

When a battle has thousands of kill chains, and analysis that stretches over hundreds of hours, humans have a difficult time comprehending what is happening. In the future, it will be the job of artificial intelligence to filter these threats. Meanwhile, it will be the role of the human in the loop to take that filtered information and respond as best they can to the threats arrayed against them.

"What does it mean to articulate mission command in that environment, the understanding, the intent, and the trust?" said Kumashiro, referring to the fast pace of AI filtering. "When the highly contested environment disrupts those connections, when we are disconnected from the hive, those authorities need to be understood so that our war fighters at the farthest reaches of the tactical edge can still perform what they need to do."

Planning not just for how these AI tools work in ideal conditions, but for how they will hold up under the degradation of a modern battlefield, is essential for making technology an aid, not a hindrance, to the forces of the future.

"If the data goes away and you've still got the mission, you've got to attend to it," said Brown. "That's a huge factor as well for practice. If you're relying only on the data, you'll fail miserably in degraded mode."

China should step up regulation of artificial intelligence in finance, think tank says – Reuters

QINGDAO, China/BEIJING (Reuters) - China should introduce a regulatory framework for artificial intelligence in the finance industry, and enhance technology used by regulators to strengthen industry-wide supervision, policy advisers at a leading think tank said on Sunday.

FILE PHOTO: China Securities Regulatory Commission Chairman Xiao Gang addresses the Asian Financial Forum in Hong Kong January 19, 2015. REUTERS/Bobby Yip/File Photo

"We should not deify artificial intelligence, as it could go wrong just like any other technology," said the former chief of China's securities regulator, Xiao Gang, who is now a senior researcher at the China Finance 40 Forum.

"The point is how we make sure it is safe for use and include it with proper supervision," Xiao told a forum in Qingdao on China's east coast.

Technology to regulate "intelligent finance" - referring to banking, securities and other financial products that employ technology such as facial recognition and big-data analysis to improve sales and investment returns - has largely lagged its development, according to a report from the China Finance 40 Forum.

Evaluation of emerging technologies and industry-wide contingency plans should be fully considered, while authorities should draft laws and regulations on privacy protection and data security, the report showed.

Lessons should be learned from the boom and bust of the online peer-to-peer (P2P) lending sector where regulations were not introduced quickly enough, said economics professor Huang Yiping at the National School of Development of Peking University.

China's P2P industry was once widely seen as an important source of credit, but has lately been undermined by pyramid-scheme scandals and absent bosses, sparking public anger as well as a broader government crackdown.

"Changes have to be made among policy makers," said Zhang Chenghui, chief of the finance research bureau at the Development Research Institute of the State Council.

"We suggest regulation on intelligent finance be written into the 14th five-year plan of the country's development, and each financial regulator - including the central bank, banking and insurance regulators and the securities watchdog - should appoint its own chief technology officer to enhance supervision of the sector."

Zhang also suggested the government bring together the data platforms of each financial regulatory body to better monitor potential risk and act quickly as problems arise.

Reporting by Cheng Leng in Qingdao, China, and Ryan Woo in Beijing; Editing by Christopher Cushing

Can AI restore our humanity? – Gigabit Magazine – Technology News, Magazine and Website

Sudheesh Nair, CEO of ThoughtSpot, earnestly campaigns for artificial intelligence as a panacea for restoring our humanity - by making us able to do more work.

Whether AI is helping a commuter navigate through a city or supporting a doctor's medical diagnosis, it relieves humans from mind-numbing, repetitive and error-prone tasks. This scares some business leaders, who worry AI could make people lazy, feckless and over-dependent. The more utopian minded - me included - see AI improving society and business while individuals get to enjoy happier, more fulfilling lives.

Fortunately, this need not launch yet another polarised debate. The more we apply AI to real world problems, the more glaringly clear it becomes that machine and human intelligence must work together to produce the right outcomes. Humans teach AI to understand context and patterns so that algorithms produce fair, ethical decisions. Equally, AI's blind rationality helps humans overcome destructive failings like confirmation bias.

Crucially, as humans and machines are increasingly able to converse through friendlier interfaces, decision-making improves and consumers are better served. Through this process, AI is already ending what I call the tyranny of averages - where people with similar preferences, habits, or even medical symptoms, get lumped into broad categories and receive identical service or treatment.

Fewer hours, higher productivity

In business, AI is taking over mundane tasks like expense reporting and timesheets, along with complex data analysis. This means people can devote time to charity work, spend time with their kids, exercise more or just kick back. In their jobs, they get to do all those human things that often wind up on the back burner, like mentoring others and celebrating success. For this reason alone, I see AI as an undeniable force for good.

One strong indicator that AI's benefits are kicking in is that some companies are successfully moving to a four-day workweek. Companies like the American productivity software firm Basecamp and New Zealand's Perpetual Guardian are recent poster children for working shorter hours while raising productivity. This has profound implications for countries like Japan, whose economy is among the least productive despite its people notoriously working the longest hours.

However, AI is about more than having to work fewer hours. Having to multitask less means less stress over the possibility of dropping the ball. Workers can focus more on tasks that contribute positively and visibly to their company's success. That's why more employers are starting to place greater value on business outcomes and less on presenteeism.

AI and transparency go hand in hand

But we mustn't get complacent or apply AI uniformly. Even though many studies say that AI will create many more jobs than it replaces, we have to manage its impact differently depending on the type of work it affects. Manual labourers like factory workers, farmers and truck drivers understandably fear the march of technology. In mass-market industries, technology has often (but not always) completely replaced the clearly defined tasks that these workers carry out repeatedly during their shifts. Employers and governments must work together to communicate honestly to workers about the trajectory of threatened jobs and help them to adapt and develop new skills for the future.

Overcoming the tyranny of averages in service

An area where we risk automating inappropriately includes entry- and mid-level customer service professions like call centre workers, bank managers, and social care providers. Most will agree that automating some formerly personal transactions, like withdrawing cash, turned out pretty well. However, higher-involvement decisions like buying home insurance or selecting the best credit card usually benefit from having a sympathetic human guide the customer through to the right decision.

Surprisingly, AI may be able to help re-humanise customer service in these areas threatened by over- or inappropriate automation. Figuring out the right product or service to offer someone with complex needs at the right time, price and place is notoriously hard. Whether it's to give a medical diagnosis or recommend pet insurance, AI can give service workers the data they need to provide highly personalised information and expert advice.

There are no simple formulae to apply to the labour market as technology advances and affects all of our lives. While it's becoming clear that AI's benefits to knowledge workers are almost universally positive, others must get the support to adapt and reskill so they are not left behind.

For consumers, however, AI means being freed from the tyranny of averages that makes so many transactions, particularly with large, faceless organisations, so soul-destroying. For this and the other reasons I mentioned, I truly believe AI will indeed help restore our humanity.

In the 2020s, human-level A.I. will arrive, and finally ace the Turing test – Inverse

The past decade has seen the rise of remarkably human personal assistants, increasing automation in transportation and industrial environments, and even the alleged passing of Alan Turing's famous robot consciousness test. Such innovations have taken artificial intelligence out of labs and into our hands.

A.I. programs have become painters, drivers, doctors' assistants, and even friends. But with these new benefits have also come increasing dangers. This ending decade saw the first, and likely not the last, death caused by a self-driving car.

This is #20 on Inverse's 20 predictions for the 2020s.

And as we head toward another decade of machine learning and robotics research, questions surrounding the moral programming of A.I. and the limits of its autonomy will no longer be just thought experiments but time-sensitive problems.

One such area to keep an eye on going forward into a new decade will be partially defined by this question: what kind of legal status will A.I. be granted as its capabilities and intelligence continue to scale closer to those of humans? This is a conversation the archipelago nation of Malta started in 2018, when its leaders proposed that it should prepare to grant or deny citizenship to A.I.s just as it would humans.

The logic behind this is that A.I.s of the future could have just as much agency, and potential to cause disruption, as any other non-robotic being. Francois Piccione, policy advisor for the Maltese government, told Inverse in 2019 that not taking such measures would be irresponsible.

"Artificial Intelligence is being seen in many quarters as the most transformative technology since the invention of electricity," said Piccione. "To realize that such a revolution is taking place and not do one's best to prepare for it would be irresponsible."

While the 2020s might not see fully fledged citizenship for A.I.s, Inverse predicts that there will be increasing legal scrutiny in the coming years over who is legally responsible for the actions of A.I., whether it be the owners or the companies designing them. Instead of citizenship or visas for A.I., this could lead to further restrictions on the humans who travel with them and the ways in which A.I. can be used in different settings.

Another critical point of increasing scrutiny in the coming years will be how to ensure A.I. programmers continue to think critically about the algorithms they design.

This past decade saw racism and death result from poorly designed algorithms and even poorer introspection. Inverse predicts that as A.I. continues to scale, labs will increasingly call upon outside experts, such as ethicists and moral psychologists, to make sure these human-like machines are not doomed to repeat our same dehumanizing mistakes.

As 2019 draws to a close, Inverse is looking to the future. These are our 20 predictions for science and technology for the 2020s. Some are terrifying, some are fascinating, and others we can barely wait for. This has been #20.

The Crazy Government Research Projects You Might’ve Missed in 2019 – Nextgov

If you imagine the U.S. research community as a family party, the Defense Advanced Research Projects Agency is your crazy uncle ranting at the end of the table, and the government's other ARPA organizations are the in-laws who are buying into his theories.

DARPA and its counterparts, the Intelligence Advanced Research Projects Activity and the Advanced Research Projects Agency-Energy, are responsible for conducting some of the most innovative and bizarre projects in the government's $140 billion research portfolio. DARPA's past research has laid the groundwork for the internet, GPS and other technologies we take for granted today, and though the other organizations are relatively new, they're similarly charged with pushing today's tech to new heights.

That means the futuristic-sounding projects the agencies are working on today could give us a sneak peek of where the tech industry is headed in the years ahead.

And based on the organizations' 2019 research efforts, the future looks pretty wild.

DARPA Pushes the Limits of AI

Last year, DARPA announced it would invest some $2 billion in bringing about the so-called third wave of artificial intelligence, systems capable of reasoning and human-like communication. And those efforts are already well underway.

In March, the agency started exploring ways to improve how AI systems like Siri and Alexa teach themselves language. Instead of crunching gargantuan datasets to learn the ins and outs of a language, researchers essentially want the tech to teach itself by observing the world, just like human babies do. Through the program, AI systems would learn to associate visual cues (photos, videos and live demonstrations) with audible sounds. Ultimately, the goal is to build tech that actually understands the meaning of what it's saying.

DARPA also wants AI tools to assess their own expertise and let their operators know when they don't know something. The Competency-Aware Machine Learning program, launched in February, looks to enable AI systems to model their own behavior, evaluate past mistakes and apply that information to future decisions. If the tech thinks its results could be inaccurate, it would let users know. Such self-awareness will be critical as the military leans on AI systems for increasingly consequential tasks.
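A common way to approximate this kind of self-assessment is to have a model emit a confidence score with each prediction and defer to a human below some threshold. The sketch below uses ensemble class probabilities as the confidence signal; it is a generic abstention pattern, not DARPA's actual CAML design, and the threshold is arbitrary.

```python
# Generic confidence-aware prediction via ensemble class probabilities.
# An illustrative abstention pattern, not DARPA's actual CAML design.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

probs = clf.predict_proba(X_test[:10])  # class probabilities per sample
confidence = probs.max(axis=1)          # rough "how sure am I" score

THRESHOLD = 0.8                         # arbitrary abstention cutoff
for i, c in enumerate(confidence):
    if c < THRESHOLD:
        print(f"sample {i}: confidence {c:.2f} -> defer to a human operator")
    else:
        print(f"sample {i}: confidence {c:.2f} -> predict class {probs[i].argmax()}")
```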

One of the biggest barriers to building AI is the amount of computing power required to run it, but DARPA is looking to the insect world to lower that barrier to entry. Through the MicroBRAIN program, the agency is examining the brains of very small flying insects for inspiration for more energy-efficient AI designs.

Beyond improving the tech itself, DARPA is also looking to AI to tackle some of the most pressing problems facing the government today. The agency is funding research to teach computers to automatically detect errors in deepfakes and other manipulated media. Officials are also investing in AI that could help design more secure weapons systems, vehicles and other network-connected platforms.

Outside of artificial intelligence, DARPA is also working to develop a wide range of other capabilities that sound like they came straight from a sci-fi movie, including but not limited to satellite-repair robots, automated underground mapping technologies and computers powered by biological processes.

IARPA Wants Eyes in the Sky

Today, the intelligence community consumes an immeasurable amount of information, so much that it's virtually impossible for analysts to make sense of it in any reasonable amount of time. In this world of data abundance, intelligence officials see AI as a way to stay one step ahead of adversaries, and the tech is a major priority for their bleeding-edge research shop.

AI has numerous applications across the national security world, and in 2019, improving surveillance was a major goal.

In April, the Intelligence Advanced Research Projects Activity announced it was pursuing AI that could stitch together and analyze satellite images and footage collected from planes, drones and other aircraft. The program, called Space-based Machine Automated Recognition Technique, essentially looks to use AI to monitor all human activity around the globe in real-time.

The tech would automatically detect and monitor major construction projects and other anthropogenic activity around the planet, merging data from multiple sources and keeping tabs on how sites change over time. Though their scopes somewhat differ, SMART harkens back to the Air Force's controversial Project Maven program, which sought to use artificial intelligence to automatically analyze video footage collected by drones.

IARPA is also looking to use artificial intelligence to better monitor human activity closer to the ground. In May, the agency started recruiting teams to help train algorithms to follow people as they move through video surveillance networks. According to the solicitation, the AI would piece together footage picked up by security cameras scattered around a particular space, letting agencies track individuals' movements in crowded areas.

Combine this capability with long-range biometric identification systems (a technology IARPA also began exploring in 2019) and you could have machines naming people and tracking their movements without spy agencies needing to lift a finger.

The Funding Fight at ARPA-E

The Energy Department's bleeding-edge research office, ARPA-E, is also supporting a wide array of efforts to advance the nation's energy technologies. This year, the organization launched programs to improve carbon-capture systems, reduce the cost of nuclear energy and increase the efficiency of the power grid, among other things.

But despite those efforts, the Trump administration has repeatedly tried to shut down the office.

In its budget request for fiscal 2020, the White House proposed reducing ARPA-E's funding by 178%, giving the agency a final budget of negative $287 million. The administration similarly defunded the office in its 2019 budget request.

While it's unclear exactly how much funding ARPA-E will receive next year, it's safe to say its budget will go up. The Senate opted to increase the agency's funding by $62 million in its 2020 appropriations, and the House version of the legislation included a $59 million increase. In October, the House Science, Space and Technology Committee advanced a bill that would provide the agency with nearly $2.9 billion over the course of five years, though the bill has yet to receive a full vote in the chamber.

16 Artificial Intelligence Pros and Cons – Vittana.org

Artificial intelligence, or AI, is a computer system which learns from the experiences it encounters. It can adjust on its own to new inputs, allowing it to perform tasks in a way that is similar to what a human would do. How we have defined AI over the years has changed, as have the tasks we've had these machines complete.

As a term, artificial intelligence was defined in 1956. With increasing levels of data being processed, improved storage capabilities, and the development of advanced algorithms, AI can now mimic human reasoning. AI personal assistants, like Siri or Alexa, have been around for military purposes since 2003.

With these artificial intelligence pros and cons, it is important to think of this technology as a decision support system. It is not the type of AI from science-fiction stories which attempts to rule the world by dominating the human race.

1. Artificial intelligence completes routine tasks with ease. Many of the tasks that we complete every day are repetitive. That repetition helps us to get into a routine and positive workflow. It also takes up a lot of our time. With AI, the repetitive tasks can be automated, finely tuning the equipment to work for extended time periods to complete the work. That allows human workers to focus on the more creative elements of their job responsibilities.

2. Artificial intelligence can work indefinitely. Human workers are typically good for 8-10 hours of production every day. Artificial intelligence can continue operating for an indefinite time period. As long as there is a power resource available to it, and the equipment is properly cared for, AI machines do not experience the same dips in productivity that human workers experience when they get tired at the end of the day.

3. Artificial intelligence makes fewer errors. AI is important within certain fields and industries where accuracy or precision is the top priority. When there are no margins for error, these machines are able to break down complicated math constructs into practical actions faster, and with more accuracy, than human workers.

4. Artificial intelligence helps us to explore. There are many places in our universe where it would be unsafe, if not impossible, for humans to go. AI makes it possible for us to learn more about these places, which furthers our species' knowledge database. We can explore the deepest parts of the ocean because of AI. We can journey to inhospitable planets because of AI. We can even find new resources to consume because of this technology.

5. Artificial intelligence can be used by anyone. There are multiple ways that the average person can embrace the benefits of AI every day. With smart homes powered by AI, thermostat and energy regulation helps to cut the monthly utility bill. Augmented reality allows consumers to picture items in their own home without purchasing them first. When it is correctly applied, our perception of reality is enhanced, which creates a positive personal experience.

6. Artificial intelligence makes us more productive. AI creates a new standard for productivity. It will also make each one of us more productive as well. If you are texting someone or using word processing software to write a report and a misspelled word is automatically corrected, then you've just experienced a time benefit because of AI. An artificial intelligence can sift through petabytes of information, which is something the human brain is just not designed to do.

7. Artificial intelligence could make us healthier. Every industry benefits from the presence and use of AI. We can use AI to establish healthier eating habits or to get more exercise. It can be used to diagnose certain diseases or recommend a treatment plan for something already diagnosed. In the future, AI might even assist physicians who are conducting a surgical procedure.

8. Artificial intelligence extends the human experience. With an AI helping each of us, we have the power to do more, be more, and explore more than ever before. In some ways, this evolutionary process could be our destiny. Some believe that computers and humanity are not separate, but instead a single, cognitive unit that already works together for the betterment of all. Through AI, people who are blind can now see. Those who are deaf can now hear. We become better because we have a greater capacity to do things.

1. Artificial intelligence comes with a steep price tag. A new artificial intelligence is costly to build. Although the price is coming down, individual developments can still be as high as $300,000 for a basic AI. For small businesses operating on tight margins or low initial capital, it may be difficult to find the cash necessary to take advantage of the benefits which AI can bring. For larger companies, the cost of AI may be much higher, depending upon the scope of the project.

2. Artificial intelligence will reduce employment opportunities. There will be jobs gained because of AI. There will also be jobs lost because of it. Any job which features repetitive tasks as part of its duties is at risk of being replaced by an artificial intelligence in the future. In 2017, Gartner predicted that 500,000 net jobs would be created because of AI. On the other end of the spectrum, up to 900,000 jobs could be lost because of it. Those figures are for jobs only within the United States.

3. Artificial intelligence will be tasked with its own decisions. One of the greatest threats we face with AI is its decision-making mechanism. An AI is only as intelligent and insightful as the individuals responsible for its initial programming. That means there could be a certain bias found within its mechanisms when it is time to make an important decision. In 2014, an active shooter situation caused people to call Uber to escape the area. Instead of recognizing the dangerous situation, the algorithm Uber used saw a spike in demand, so it decided to increase prices.

4. Artificial intelligence lacks creativity. We can program robots to perform creative tasks. Where we stall out in the evolution of AI is creating an intelligence which can be originally creative on its own. Our current AI matches the creativity of its creator. Because there is a lack of creativity, there tends to be a lack of empathy as well. That means the decision of an AI is based on what the best possible analytical solution happens to be, which may not always be the correct decision to make.

5. Artificial intelligence can lack improvement. An artificial intelligence may be able to change how it reacts in certain situations, much like a child stops touching a hot stove after being burned by it. What it does not do is alter its perceptions, responses, or reactions when there is a changing environment. There is an inability to distinguish specific bits of information observed beyond the data generated by that direct observation.

6. Artificial intelligence can be inaccurate. Machine translations have become an important tool in our quest to communicate with one another universally. The only problem with these translations is that they must be reviewed by humans, because the words, not the intent of the words, are what machines translate. Without a review by a trained human translator, the information received from a machine translation may be inaccurate or insensitive, creating more problems instead of fewer with our overall communication.

7. Artificial intelligence changes the power structure of societies. Because AI offers the potential to change industries and the way we live in numerous ways, societies experience a power shift when it becomes the dominant force. Those who can create or control this technology are the ones who will be able to steer society toward their personal vision of how people should be. It also removes the humanity from certain decisions, like the idea of having autonomous AI responsible for warfare without humans actually initiating the act of violence.

8. Artificial intelligence treats humanity as a commodity. When we look at the possible outcomes of AI on today's world, the debate is often about how many people benefit compared to how many people will not. The danger here is that people are treated as a commodity. Businesses are already doing this, looking at the commodity of automation through AI as a better investment than the commodity of human workers. If we begin to perceive ourselves as a commodity only, then AI will too, and the outcome of that decision could be unpredictable.

These artificial intelligence pros and cons show us that our world can benefit from its presence in a variety of ways. There are also many potential dangers which come with this technology. Jobs may be created, but jobs will be lost. Lives could be saved, but lives could also be lost. That is why the technologies behind AI must be made available to everyone. If only a few hold the power of AI, then the world could become a very different place in a short period of time.

Top 45 Artificial Intelligence ETFs – ETFdb.com

This is a list of all Artificial Intelligence ETFs traded in the USA which are currently tagged by ETF Database. Please note that the list may not contain newly issued ETFs. If you're looking for a more simplified way to browse and compare ETFs, you may want to visit our ETFdb Categories, which categorize every ETF in a single "best fit" category.

This page includes historical return information for all Artificial Intelligence ETFs listed on U.S. exchanges that are currently tracked by ETF Database.

The table below includes fund flow data for all U.S. listed Artificial Intelligence ETFs. Total fund flow is the capital inflow into an ETF minus the capital outflow from the ETF for a particular time period.

Fund Flows in millions of U.S. Dollars.

The following table includes expense data and other descriptive information for all Artificial Intelligence ETFs listed on U.S. exchanges that are currently tracked by ETF Database. In addition to expense ratio and issuer information, this table displays platforms that offer commission-free trading for certain ETFs.

Clicking on any of the links in the table below will provide additional descriptive and quantitative information on Artificial Intelligence ETFs.

The following table includes ESG Scores and other descriptive information for all Artificial Intelligence ETFs listed on U.S. exchanges that are currently tracked by ETF Database. Easily browse and evaluate ETFs by visiting our Responsible Investing themes section and find ETFs that map to various environmental, social and governance themes.

This page includes historical dividend information for all Artificial Intelligence ETFs listed on U.S. exchanges that are currently tracked by ETF Database. Note that certain ETFs may not make dividend payments, and as such some of the information below may not be meaningful.

The table below includes basic holdings data for all U.S. listed Artificial Intelligence ETFs that are currently tagged by ETF Database. The table below includes the number of holdings for each ETF and the percentage of assets that the top ten assets make up, if applicable. For more detailed holdings information for any ETF, click on the link in the right column.

The following table includes certain tax information for all Artificial Intelligence ETFs listed on U.S. exchanges that are currently tracked by ETF Database, including applicable short-term and long-term capital gains rates and the tax form on which gains or losses in each ETF will be reported.

This page contains certain technical information for all Artificial Intelligence ETFs that are listed on U.S. exchanges and tracked by ETF Database. Note that the table below only includes limited technical indicators; click on the "View" link in the far right column for each ETF to see an expanded display of the product's technicals.

This page provides links to various analyses for all Artificial Intelligence ETFs that are listed on U.S. exchanges and tracked by ETF Database. The links in the table below will guide you to various analytical resources for the relevant ETF, including an X-ray of holdings, official fund fact sheet, or objective analyst report.

This page provides ETFdb Ratings for all Artificial Intelligence ETFs that are listed on U.S. exchanges and tracked by ETF Database. The ETFdb Ratings are transparent, quant-based evaluations of ETFs relative to other products in the same ETFdb.com Category. As such, it should be noted that this page may include ETFs from multiple ETFdb.com Categories.

Artificial intelligence jobs on the rise, along with everything else AI – ZDNet

AI jobs are on the upswing, as are the capabilities of AI systems. The speed of deployments has also increased exponentially. It's now possible to train an image-processing algorithm in about a minute -- something that took hours just a couple of years ago.

These are among the key metrics of AI tracked in the latest release of the AI Index, an annual data update from Stanford University's Human-Centered Artificial Intelligence Institute, published in partnership with McKinsey Global Institute. The index tracks AI growth across a range of metrics, from papers published to patents granted to employment numbers.

Here are some key measures extracted from the 290-page index:

AI conference attendance: One important metric is conference attendance, for starters. That's way up. Attendance at AI conferences continues to increase significantly. In 2019, the largest, NeurIPS, expects 13,500 attendees, up 41% over 2018 and over 800% relative to 2012. Even conferences such as AAAI and CVPR are seeing annual attendance growth around 30%.

AI jobs: Another key metric is the number of AI-related jobs opening up. This is also on the upswing, the index shows. Looking at Indeed postings between 2015 and October 2019, the share of AI jobs in the US has increased five-fold since 2010, rising from 0.26% of total jobs posted to 1.32% in October 2019. While this is still a small fraction of total jobs, it's worth mentioning that these are only technology-related positions working directly in AI development, and there are likely an increasingly large share of jobs being enhanced or re-ordered by AI.

Among AI technology positions, the leading category is job postings mentioning "machine learning" (58% of AI jobs), followed by artificial intelligence (24%), deep learning (9%), and natural language processing (8%). Deep learning is the fastest-growing job category, growing 12-fold between 2015 and 2018; artificial intelligence grew five-fold, machine learning four-fold, and natural language processing two-fold.

Compute capacity: Moore's Law has gone into hyperdrive, the AI Index shows, with substantial progress in ramping up the computing capacity required to run AI. Prior to 2012, AI results closely tracked Moore's Law, with compute doubling every two years. Post-2012, compute has been doubling every 3.4 months -- a mind-boggling net increase of 300,000x. By contrast, the typical two-year doubling period that characterized Moore's Law previously would only yield a 7x increase, the index's authors point out.
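The two growth rates compound very differently. A quick check of the arithmetic behind those figures (the roughly five-year span is our assumption about the index's window, since the report compounds from 2012 onward):

```python
# Compounding a 3.4-month doubling time versus a Moore's-law two-year
# doubling. The ~5-year span is an assumption about the index's window.
import math

def growth(years: float, doubling_months: float) -> float:
    """Total growth factor after `years` at the given doubling time."""
    return 2 ** (years * 12 / doubling_months)

years = 5.2
print(f"3.4-month doubling: {growth(years, 3.4):,.0f}x")  # ~300,000x
print(f"two-year doubling:  {growth(years, 24):.1f}x")    # ~6x, near the cited 7x

# Equivalently, 300,000x requires about 18 doublings:
print(f"doublings needed: {math.log2(300_000):.1f}")
```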

Training time: The amount of time it takes to train AI algorithms has shrunk dramatically -- training can now happen in almost 1/180th of the time it took just two years ago for a large image classification system on a cloud infrastructure. Two years ago, it took three hours to train such a system; by July 2019, that time had shrunk to 88 seconds.

Commercial machine translation: One indicator of where AI hits the ground running is machine translation -- for example, English to Chinese. The number of commercially available systems with pre-trained models and public APIs has grown rapidly, the index notes, from eight in 2017 to over 24 in 2019. Increasingly, machine-translation systems provide a full range of customization options: pre-trained generic models, automatic domain adaptation to build models and better engines with their own data, and custom terminology support.

Computer vision: Another benchmark is accuracy of image recognition. The index tracked reporting through ImageNet, a public dataset of more than 14 million images created to address the issue of scarcity of training data in the field of computer vision. In the latest reporting, the accuracy of image recognition by systems has reached about 85%, up from about 62% in 2013.
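For reference, ImageNet figures like these are top-k accuracy: a prediction counts as correct if the true label is among the model's k highest-scoring classes. A minimal sketch of the metric itself, run on synthetic random scores (so the numbers land near chance, k/1000):

```python
# Computing top-1 / top-5 accuracy, the standard ImageNet metrics.
# Scores here are random, so results sit near chance (k/1000).
import numpy as np

def top_k_accuracy(scores: np.ndarray, labels: np.ndarray, k: int) -> float:
    """Fraction of samples whose true label is among the k highest scores."""
    top_k = np.argsort(scores, axis=1)[:, -k:]
    return float(np.mean([labels[i] in top_k[i] for i in range(len(labels))]))

rng = np.random.default_rng(0)
scores = rng.normal(size=(1000, 1000))  # 1000 samples x 1000 classes
labels = rng.integers(0, 1000, size=1000)
print("top-1:", top_k_accuracy(scores, labels, 1))
print("top-5:", top_k_accuracy(scores, labels, 5))
```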

Natural language processing: AI systems keep getting smarter, to the point that they are surpassing low-level human responsiveness in natural language processing. As a result, there are also stronger standards for benchmarking AI implementations. GLUE, the General Language Understanding Evaluation benchmark, was only released in May 2018, intended to measure AI performance on text-processing capabilities. The threshold of non-expert human performance was crossed by submitted systems in June 2019, the index notes. In fact, the performance of AI systems has been so dramatic that industry leaders had to release a higher-level benchmark, SuperGLUE, "so they could test performance after some systems surpassed human performance on GLUE."
