No Real Value: Former Bitcoin Core Developer Peter Todd Asserts Ripple's XRP Doesn't Need To Exist – ZyCrypto

Ripple has been one company working hard to revolutionize global funds transfer. The company has also been making efforts to boost XRP adoption, and it has already won over a good number of clients and partners willing to use XRP in cross-border payments. However, there have been objections from various people who think XRP isn't really what it's expected to be. One such person is Peter Todd, a former Bitcoin Core developer and cryptography consultant.

In a recent post on Twitter, Todd shared views from Larry Cermak, another commentator who thinks XRP doesn't hold much value for investors.

According to Cermak, Ripple will be the only beneficiary if its project ever works and its technology gets adopted. As such, it's the people who have accumulated XRP who will be left holding massive bags.

Cermak went on to point out that Ripple's continued sale of XRP has so far earned it around $1.2 billion. Ripple uses the proceeds of these sales to fund its projects as well as to support strategic startups and partners like MoneyGram.

While sharing Cermak's remarks about Ripple and XRP, Peter Todd opined that, just like most ICOs, XRP doesn't really need to exist, claiming that the token itself doesn't offer buyers any real rights of ownership.

However, the cryptography consultant conceded that Ripple's technology could be useful as a fault-tolerant database, though it doesn't need XRP to work. As with any other crypto, early adopters gain when the token's value increases with rising demand.

That said, it's still not clear whether XRP's value will be influenced by the impending bull run that's expected to boost Bitcoin's market as the top coin prepares for its next block reward halving, slated for May 2020. If the rally spills over into the rest of the market, XRP could gain in the process.



Artificial Intelligence Is Rushing Into Patient Care – And Could Raise Risks – Scientific American

Health products powered by artificial intelligence, or AI, are streaming into our lives, from virtual doctor apps to wearable sensors and drugstore chatbots.

IBM boasted that its AI could "outthink cancer." Others say computer systems that read X-rays will make radiologists obsolete.

"There's nothing that I've seen in my 30-plus years studying medicine that could be as impactful and transformative as AI," said Eric Topol, a cardiologist and executive vice president of Scripps Research in La Jolla, Calif. AI can help doctors interpret MRIs of the heart, CT scans of the head and photographs of the back of the eye, and could potentially take over many mundane medical chores, freeing doctors to spend more time talking to patients, Topol said.

Even the U.S. Food and Drug Administration, which has approved more than 40 AI products in the past five years, says "the potential of digital health is nothing short of revolutionary."

Yet many health industry experts fear AI-based products won't be able to match the hype. Many doctors and consumer advocates fear that the tech industry, which lives by the mantra "fail fast and fix it later," is putting patients at risk, and that regulators aren't doing enough to keep consumers safe.

Early experiments in AI provide reason for caution, said Mildred Cho, a professor of pediatrics at Stanford's Center for Biomedical Ethics.

Systems developed in one hospital often flop when deployed in a different facility, Cho said. Software used in the care of millions of Americans has been shown to discriminate against minorities. And AI systems sometimes learn to make predictions based on factors that have less to do with disease than with the brand of MRI machine used, the time a blood test is taken or whether a patient was visited by a chaplain. In one case, AI software incorrectly concluded that people with pneumonia were less likely to die if they had asthma, an error that could have led doctors to deprive asthma patients of the extra care they need.

"It's only a matter of time before something like this leads to a serious health problem," said Steven Nissen, chairman of cardiology at the Cleveland Clinic.

Medical AI, which pulled in $1.6 billion in venture capital funding in the third quarter alone, is "nearly at the peak of inflated expectations," concluded a July report from the research company Gartner. "As the reality gets tested, there will likely be a rough slide into the trough of disillusionment."

That reality check could come in the form of disappointing results when AI products are ushered into the real world. Even Topol, the author of "Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again," acknowledges that many AI products are little more than hot air. "It's a mixed bag," he said.

Experts such as Bob Kocher, a partner at the venture capital firm Venrock, are more blunt. "Most AI products have little evidence to support them," Kocher said. Some risks won't become apparent until an AI system has been used by large numbers of patients. "We're going to keep discovering a whole bunch of risks and unintended consequences of using AI on medical data," Kocher said.

None of the AI products sold in the U.S. have been tested in randomized clinical trials, the strongest source of medical evidence, Topol said. The first and only randomized trial of an AI system, which found that colonoscopy with computer-aided diagnosis found more small polyps than standard colonoscopy, was published online in October.

Few tech startups publish their research in peer-reviewed journals, which allow other scientists to scrutinize their work, according to a January article in the European Journal of Clinical Investigation. Such "stealth research," described only in press releases or promotional events, often overstates a company's accomplishments.

And although software developers may boast about the accuracy of their AI devices, experts note that AI models are mostly tested on computers, not in hospitals or other medical facilities. Using unproven software may make patients into unwitting guinea pigs, said Ron Li, medical informatics director for AI clinical integration at Stanford Health Care.

AI systems that learn to recognize patterns in data are often described as "black boxes" because even their developers don't know how they have reached their conclusions. Given that AI is so new, and many of its risks unknown, the field needs careful oversight, said Pilar Ossorio, a professor of law and bioethics at the University of Wisconsin-Madison.

Yet the majority of AI devices don't require FDA approval.

"None of the companies that I have invested in are covered by the FDA regulations," Kocher said.

Legislation passed by Congress in 2016 and championed by the tech industry exempts many types of medical software from federal review, including certain fitness apps, electronic health records and tools that help doctors make medical decisions.

There's been little research on whether the 320,000 medical apps now in use actually improve health, according to a report on AI published Dec. 17 by the National Academy of Medicine.

"Almost none of the [AI] stuff marketed to patients really works," said Ezekiel Emanuel, professor of medical ethics and health policy in the Perelman School of Medicine at the University of Pennsylvania.

The FDA has long focused its attention on devices that pose the greatest threat to patients. And consumer advocates acknowledge that some devices such as ones that help people count their daily steps need less scrutiny than ones that diagnose or treat disease.

Some software developers dont bother to apply for FDA clearance or authorization, even when legally required, according to a 2018 study in Annals of Internal Medicine.

Industry analysts say that AI developers have little interest in conducting expensive and time-consuming trials. "It's not the main concern of these firms to submit themselves to rigorous evaluation that would be published in a peer-reviewed journal," said Joachim Roski, a principal at Booz Allen Hamilton, a technology consulting firm, and co-author of the National Academy's report. "That's not how the U.S. economy works."

But Oren Etzioni, chief executive officer at the Allen Institute for AI in Seattle, said AI developers have a financial incentive to make sure their medical products are safe.

"If failing fast means a whole bunch of people will die, I don't think we want to fail fast," Etzioni said. "Nobody is going to be happy, including investors, if people die or are severely hurt."

Relaxed AI Standards At The FDA

The FDA has come under fire in recent years for allowing the sale of dangerous medical devices, which have been linked by the International Consortium of Investigative Journalists to 80,000 deaths and 1.7 million injuries over the past decade.

Many of these devices were cleared for use through a controversial process called the 510(k) pathway, which allows companies to market moderate-risk products with no clinical testing as long as they're deemed similar to existing devices. In 2011, a committee of the National Academy of Medicine concluded the 510(k) process is so fundamentally flawed that the FDA should throw it out and start over.

Instead, the FDA is using the process to greenlight AI devices.

Of the 14 AI products authorized by the FDA in 2017 and 2018, 11 were cleared through the 510(k) process, according to a November article in JAMA. None of these appear to have had new clinical testing, the study said. The FDA cleared an AI device designed to help diagnose liver and lung cancer in 2018 based on its similarity to imaging software approved 20 years earlier. That software had itself been cleared because it was deemed "substantially equivalent" to products marketed before 1976.

AI products cleared by the FDA today are largely "locked," so that their calculations and results will not change after they enter the market, said Bakul Patel, director for digital health at the FDA's Center for Devices and Radiological Health. The FDA has not yet authorized "unlocked" AI devices, whose results could vary from month to month in ways that developers cannot predict.

To deal with the flood of AI products, the FDA is testing a radically different approach to digital device regulation, focusing on evaluating companies, not products.

The FDA's pilot pre-certification program, launched in 2017, is designed to reduce the time and cost of market entry for software developers, imposing the "least burdensome" system possible. FDA officials say they want to keep pace with AI software developers, who update their products much more frequently than makers of traditional devices, such as X-ray machines.

Scott Gottlieb said in 2017, while he was FDA commissioner, that government regulators need to make sure their approach to innovative products "is efficient and that it fosters, not impedes, innovation."

Under the plan, the FDA would pre-certify companies that demonstrate "a culture of quality and organizational excellence," which would allow them to provide less upfront data about devices.

Pre-certified companies could then release devices with a streamlined review or no FDA review at all. Once products are on the market, companies will be responsible for monitoring their own products' safety and reporting back to the FDA. Nine companies have been selected for the pilot: Apple, FitBit, Samsung, Johnson & Johnson, Pear Therapeutics, Phosphorus, Roche, Tidepool and Verily Life Sciences.

High-risk products, such as software used in pacemakers, will still get a comprehensive FDA evaluation. "We definitely don't want patients to be hurt," said Patel, who noted that devices cleared through pre-certification can be recalled if needed. "There are a lot of guardrails still in place."

But research shows that even low- and moderate-risk devices have been recalled due to serious risks to patients, said Diana Zuckerman, president of the National Center for Health Research. "People could be harmed because something wasn't required to be proven accurate or safe before it is widely used."

Johnson & Johnson, for example, has recalled hip implants and surgical mesh.

In a series of letters to the FDA, the American Medical Association and others have questioned the wisdom of allowing companies to monitor their own performance and product safety.

"The honor system is not a regulatory regime," said Jesse Ehrenfeld, who chairs the physician group's board of trustees.

In an October letter to the FDA, Sens. Elizabeth Warren (D-Mass.), Tina Smith (D-Minn.) and Patty Murray (D-Wash.) questioned the agency's ability to ensure company safety reports are "accurate, timely and based on all available information."

When Good Algorithms Go Bad

Some AI devices are more carefully tested than others.

An AI-powered screening tool for diabetic eye disease was studied in 900 patients at 10 primary care offices before being approved in 2018. The manufacturer, IDx Technologies, worked with the FDA for eight years to get the product right, said Michael Abramoff, the company's founder and executive chairman.

The test, sold as IDx-DR, screens patients for diabetic retinopathy, a leading cause of blindness, and refers high-risk patients to eye specialists, who make a definitive diagnosis.

IDx-DR is the first "autonomous" AI product, one that can make a screening decision without a doctor. The company is now installing it in primary care clinics and grocery stores, where it can be operated by employees with a high school diploma. Abramoff's company has taken the unusual step of buying liability insurance to cover any patient injuries.

Yet some AI-based innovations intended to improve care have had the opposite effect.

A Canadian company, for example, developed AI software to predict a person's risk of Alzheimer's based on their speech. Predictions were more accurate for some patients than others. Difficulty finding the right word may be due to unfamiliarity with English, rather than to cognitive impairment, said co-author Frank Rudzicz, an associate professor of computer science at the University of Toronto.

Doctors at New York's Mount Sinai Hospital hoped AI could help them use chest X-rays to predict which patients were at high risk of pneumonia. Although the system made accurate predictions from X-rays shot at Mount Sinai, the technology flopped when tested on images taken at other hospitals. Eventually, researchers realized the computer had merely learned to tell the difference between that hospital's portable chest X-rays, taken at a patient's bedside, and those taken in the radiology department. Doctors tend to use portable chest X-rays for patients too sick to leave their room, so it's not surprising that these patients had a greater risk of lung infection.

DeepMind, a company owned by Google, has created an AI-based mobile app that can predict which hospitalized patients will develop acute kidney failure up to 48 hours in advance. A blog post on the DeepMind website described the system, used at a London hospital, as a "game changer." But the AI system also produced two false alarms for every correct result, according to a July study in Nature. That may explain why patients' kidney function didn't improve, said Saurabh Jha, associate professor of radiology at the Hospital of the University of Pennsylvania. Any benefit from early detection of serious kidney problems may have been diluted by a high rate of overdiagnosis, in which the AI system flagged borderline kidney issues that didn't need treatment, Jha said. Google had no comment in response to Jha's conclusions.

False positives can harm patients by prompting doctors to order unnecessary tests or withhold recommended treatments, Jha said. For example, a doctor worried about a patient's kidneys might stop prescribing ibuprofen, a generally safe pain reliever that poses a small risk to kidney function, in favor of an opioid, which carries a serious risk of addiction.

As these studies show, software with impressive results in a computer lab can founder when tested in real time, Stanford's Cho said. That's because diseases are more complex, and the health care system far more dysfunctional, than many computer scientists anticipate.

Many AI developers cull electronic health records because they hold huge amounts of detailed data, Cho said. But those developers often aren't aware that they're building atop a deeply broken system. Electronic health records were developed for billing, not patient care, and are filled with mistakes or missing data.

A KHN investigation published in March found sometimes life-threatening errors in patients' medication lists, lab tests and allergies.

In view of the risks involved, doctors need to step in to protect their patients' interests, said Vikas Saini, a cardiologist and president of the nonprofit Lown Institute, which advocates for wider access to health care.

While it is the job of entrepreneurs to "think big and take risks," Saini said, it is the job of doctors to protect their patients.

Kaiser Health News (KHN) is a nonprofit news service covering health issues. It is an editorially independent program of the Kaiser Family Foundation that is not affiliated with Kaiser Permanente.


One key to artificial intelligence on the battlefield: trust – C4ISRNet

To understand how humans might better marshal autonomous forces during battle in the near future, it helps to first consider the nature of mission command in the past.

Derived from a Prussian school of battle, mission command is a form of decentralized command and control. Think about a commander who is given an objective and then trusted to meet that goal to the best of their ability and to do so without conferring with higher-ups before taking further action. It is a style of operating with its own advantages and hurdles, obstacles that map closely onto the autonomous battlefield.

"At one level, mission command really is a management of trust," said Ben Jensen, a professor of strategic studies at the Marine Corps University. Jensen spoke as part of a panel on multidomain operations at the Association of the United States Army AI and Autonomy symposium in November. "We're continually moving choice and agency from the individual because of optimized algorithms helping [decision-making]. Is this fundamentally irreconcilable with the concept of mission command?"

The problem for military leaders, then, is twofold: can humans trust the information and advice they receive from artificial intelligence? And, relatedly, can those humans also trust that any autonomous machines they are directing are pursuing objectives the same way people would?

To the first point, Robert Brown, director of the Pentagon's multidomain task force, emphasized that using AI tools means trusting commanders to act on that information in a timely manner.

"Mission command is saying: you're going to provide your subordinates the best data you can get them, and you're going to need AI to get that quality data. But then that's balanced with their own ground and the art of what's happening," Brown said. "We have to be careful. You certainly can lose that speed and velocity of decision."

Before the tools ever get to the battlefield, before the algorithms are ever bent toward war, military leaders must ensure the tools as designed actually do what service members need.

"How do we create the right type of decision aids that still empower people to make the call, but give them the information content to move faster?" said Tony Frazier, an executive at Maxar Technologies.


An intelligence product, using AI to provide analysis and information to combatants, will have to fall in the sweet spot of offering actionable intelligence, without bogging the recipient down in details or leaving them uninformed.

"One thing that's remained consistent is folks will do one of three things with overwhelming information," Brown said. "They will wait for perfect information. They'll just wait, wait, wait; they'll never have perfect information and adversaries [will have] done 10 other things, by the way. Or they'll be overwhelmed and disregard the information."

The third path users will take, Brown said, is the very task commanders want them to follow: find "golden needles in eight stacks of information" to help them make a decision in a timely manner.

Getting there, however, where information is empowering instead of paralyzing or disheartening, is the work of training. Adapting for the future means practicing in the future environment, and that means getting new practitioners familiar with the kinds of information they can expect on the battlefield.

"Our adversaries are going to bring a lot of dilemmas our way, and so our ability to comprehend those challenges and then hopefully not just react but proactively do something to prevent those actions is absolutely critical," said Brig. Gen. David Kumashiro, the director of Joint Force Integration for the Air Force.

When a battle has thousands of kill chains, and analysis that stretches over hundreds of hours, humans have a difficult time comprehending what is happening. In the future, it will be the job of artificial intelligence to filter these threats. Meanwhile, it will be the role of the humans in the loop to take that filtered information and respond as best they can to the threats arrayed against them.

"What does it mean to articulate mission command in that environment: the understanding, the intent, and the trust?" said Kumashiro, referring to the fast pace of AI filtering. "When the highly contested environment disrupts those connections, when we are disconnected from the hive, those authorities need to be understood so that our war fighters at the farthest reaches of the tactical edge can still perform what they need to do."

Planning not just for how these AI tools work in ideal conditions, but for how they will hold up under the degradation of a modern battlefield, is essential for making technology an aid, and not a hindrance, to the forces of the future.

"If the data goes away and you've still got the mission, you've got to attend to it," said Brown. "That's a huge factor as well for practice. If you're relying only on the data, you'll fail miserably in degraded mode."


China should step up regulation of artificial intelligence in finance, think tank says – Reuters

QINGDAO, China/BEIJING (Reuters) - China should introduce a regulatory framework for artificial intelligence in the finance industry, and enhance technology used by regulators to strengthen industry-wide supervision, policy advisers at a leading think tank said on Sunday.


"We should not deify artificial intelligence as it could go wrong just like any other technology," said the former chief of China's securities regulator, Xiao Gang, who is now a senior researcher at the China Finance 40 Forum.

"The point is how we make sure it is safe for use and include it with proper supervision," Xiao told a forum in Qingdao on China's east coast.

Technology to regulate "intelligent finance" - referring to banking, securities and other financial products that employ technology such as facial recognition and big-data analysis to improve sales and investment returns - has largely lagged behind development, according to a report from the China Finance 40 Forum.

Evaluation of emerging technologies and industry-wide contingency plans should be fully considered, while authorities should draft laws and regulations on privacy protection and data security, the report showed.

Lessons should be learned from the boom and bust of the online peer-to-peer (P2P) lending sector where regulations were not introduced quickly enough, said economics professor Huang Yiping at the National School of Development of Peking University.

China's P2P industry was once widely seen as an important source of credit, but has lately been undermined by pyramid-scheme scandals and absent bosses, sparking public anger as well as a broader government crackdown.

"Changes have to be made among policy makers," said Zhang Chenghui, chief of the finance research bureau at the Development Research Institute of the State Council.

"We suggest regulation on intelligent finance be written into the 14th five-year plan of the country's development, and each financial regulator - including the central bank, banking and insurance regulators and the securities watchdog - should appoint its own chief technology officer to enhance supervision of the sector."

Zhang also suggested the government brings together the data platforms of each financial regulatory body to better monitor potential risk and act quickly as problems arise.

Reporting by Cheng Leng in Qingdao, China, and Ryan Woo in Beijing; Editing by Christopher Cushing


In the 2020s, human-level A.I. will arrive, and finally ace the Turing test – Inverse

The past decade has seen the rise of remarkably human personal assistants, increasing automation in transportation and industrial environments, and even the alleged passing of Alan Turing's famous robot consciousness test. Such innovations have taken artificial intelligence out of labs and into our hands.

A.I. programs have become painters, drivers, doctors' assistants, and even friends. But with these new benefits have also come increasing dangers. The decade now ending saw the first, and likely not the last, death caused by a self-driving car.

This is #20 on Inverse's 20 predictions for the 2020s.

And as we head toward another decade of machine learning and robotics research, questions surrounding the moral programming of A.I. and the limits of its autonomy will no longer be just thought experiments but time-sensitive problems.

One such area to keep an eye on going into the new decade will be partially defined by this question: what kind of legal status will A.I. be granted as its capabilities and intelligence scale closer to those of humans? This is a conversation the archipelago nation of Malta started in 2018, when its leaders proposed that it should prepare to grant or deny citizenship to A.I.s just as it would humans.

The logic is that the A.I.s of the future could have just as much agency and potential to cause disruption as any non-robotic being. Francois Piccione, policy advisor for the Maltese government, told Inverse in 2019 that not taking such measures would be irresponsible.

"Artificial Intelligence is being seen in many quarters as the most transformative technology since the invention of electricity," said Piccione. "To realize that such a revolution is taking place and not do one's best to prepare for it would be irresponsible."

While the 2020s might not see fully fledged citizenship for A.I.s, Inverse predicts that there will be increasing legal scrutiny in coming years over who is responsible for the actions of A.I., whether it be their owners or the companies designing them. Instead of citizenship or visas for A.I., this could lead to further restrictions on the humans who travel with them and the ways in which A.I. can be used in different settings.

Another critical point of increasing scrutiny in the coming years will be how to ensure A.I. programmers continue to think critically about the algorithms they design.

This past decade saw racism and death result from poorly designed algorithms and even poorer introspection. Inverse predicts that as A.I. continues to scale, labs will increasingly call upon outside experts, such as ethicists and moral psychologists, to make sure these human-like machines are not doomed to repeat our same dehumanizing mistakes.

As 2019 draws to a close, Inverse is looking to the future. These are our 20 predictions for science and technology for the 2020s. Some are terrifying, some are fascinating, and others we can barely wait for. This has been #20. Read a related story here.


Can AI restore our humanity? – Gigabit Magazine – Technology News, Magazine and Website

Sudheesh Nair, CEO of ThoughtSpot earnestly campaigns for artificial intelligence as a panacea for restoring our humanity - by making us able to do more work.

Whether AI is helping a commuter navigate through a city or supporting a doctor's medical diagnosis, it relieves humans from mind-numbing, repetitive and error-prone tasks. This scares some business leaders, who worry AI could make people lazy, feckless and over-dependent. The more utopian-minded, me included, see AI improving society and business while individuals get to enjoy happier, more fulfilling lives.

Fortunately, this need not launch yet another polarised debate. The more we apply AI to real world problems, the more glaringly clear it becomes that machine and human intelligence must work together to produce the right outcomes. Humans teach AI to understand context and patterns so that algorithms produce fair, ethical decisions. Equally, AI's blind rationality helps humans overcome destructive failings like confirmation bias.

Crucially, as humans and machines are increasingly able to converse through friendlier interfaces, decision-making improves and consumers are better served. Through this process, AI is already ending what I call the tyranny of averages - where people with similar preferences, habits, or even medical symptoms, get lumped into broad categories and receive identical service or treatment.

Fewer hours, higher productivity

In business, AI is taking over mundane tasks like expense reporting and timesheets, along with complex data analysis. This means people can devote time to charity work, spend time with their kids, exercise more or just kick back. In their jobs, they get to do all those human things that often wind up on the back burner, like mentoring others and celebrating success. For this reason alone, I see AI as an undeniable force for good.

One strong indicator that AI's benefits are kicking in is that some companies are successfully moving to a four-day workweek. Companies like the American productivity software firm Basecamp and New Zealand's Perpetual Guardian are recent poster children for working shorter hours while raising productivity. This has profound implications for countries like Japan, whose economy is among the least productive despite its people notoriously working the longest hours.

However, AI is about more than working fewer hours. Having to multitask less means less stress over the possibility of dropping the ball. Workers can focus more on tasks that contribute positively and visibly to their company's success. That's why more employers are starting to place greater value on business outcomes and less on presenteeism.

AI and transparency go hand in hand

But we mustn't get complacent or apply AI uniformly. Even though many studies say that AI will create many more jobs than it replaces, we have to manage its impact differently depending on the type of work it affects. Manual labourers like factory workers, farmers and truck drivers understandably fear the march of technology. In mass-market industries, technology has often (but not always) completely replaced the clearly defined tasks that these workers carry out repeatedly during their shifts. Employers and governments must work together to communicate honestly to workers about the trajectory of threatened jobs and help them adapt and develop new skills for the future.

Overcoming the tyranny of averages in service

One area where we risk automating inappropriately is entry- and mid-level customer service work: call centre workers, bank managers, and social care providers. Most will agree that automating some formerly personal transactions, like withdrawing cash, turned out pretty well. However, higher-involvement decisions like buying home insurance or selecting the best credit card usually benefit from a sympathetic human guiding the customer to the right decision.

Surprisingly, AI may be able to help re-humanise customer service in these areas threatened by over-automation or inappropriate automation. Figuring out the right product or service to offer someone with complex needs, at the right time, price and place, is notoriously hard. Whether it's to give a medical diagnosis or recommend pet insurance, AI can give service workers the data they need to provide highly personalised information and expert advice.

There are no simple formulae to apply to the labour market as technology advances and affects all of our lives. While it's becoming clear that AI's benefits to knowledge workers are almost universally positive, others must get the support to adapt and reskill so they are not left behind.

For consumers, however, AI means being freed from the tyranny of averages that makes so many transactions, particularly with large, faceless organisations, so soul-destroying. For this and the other reasons I mentioned, I truly believe AI will indeed help restore our humanity.


The Crazy Government Research Projects You Might’ve Missed in 2019 – Nextgov

If you imagine the U.S. research community as a family party, the Defense Advanced Research Projects Agency is your crazy uncle ranting at the end of the table and the government's other ARPA organizations are the in-laws who are buying into his theories.

DARPA and its counterparts, the Intelligence Advanced Research Projects Activity and the Advanced Research Projects Agency-Energy, are responsible for conducting some of the most innovative and bizarre projects in the government's $140 billion research portfolio. DARPA's past research laid the groundwork for the internet, GPS and other technologies we take for granted today, and though the other organizations are relatively new, they're similarly charged with pushing today's tech to new heights.

That means the futuristic-sounding projects the agencies are working on today could give us a sneak peek of where the tech industry is headed in the years ahead.

And based on the organizations' 2019 research efforts, the future looks pretty wild.

DARPA Pushes the Limits of AI

Last year, DARPA announced it would invest some $2 billion in bringing about the so-called third wave of artificial intelligence: systems capable of reasoning and human-like communication. And those efforts are already well underway.

In March, the agency started exploring ways to improve how AI systems like Siri and Alexa teach themselves language. Instead of crunching gargantuan datasets to learn the ins and outs of a language, researchers essentially want the tech to teach itself by observing the world, just like human babies do. Through the program, AI systems would learn to associate visual cues (photos, videos and live demonstrations) with audible sounds. Ultimately, the goal is to build tech that actually understands the meaning of what it's saying.

DARPA also wants AI tools to assess their own expertise and let their operators know when they don't know something. The Competency-Aware Machine Learning program, launched in February, looks to enable AI systems to model their own behavior, evaluate past mistakes and apply that information to future decisions. If the tech thinks its results could be inaccurate, it would let users know. Such self-awareness will be critical as the military leans on AI systems for increasingly consequential tasks.
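The article doesn't include code, but the abstain-when-unsure idea behind competency-aware systems can be sketched in a few lines. Everything here is illustrative: the toy nearest-centroid model, the inverse-distance confidence score, and the 0.75 cutoff are assumptions, not anything DARPA has specified.

```python
# Minimal sketch of "competency-aware" prediction: a model reports its own
# confidence and abstains when it falls below a threshold, instead of guessing.
import math

class CompetencyAwareClassifier:
    def __init__(self, centroids, threshold=0.75):
        self.centroids = centroids   # label -> (x, y) centroid
        self.threshold = threshold   # minimum confidence required to answer

    def predict(self, point):
        # Confidence: inverse-distance weight of the nearest centroid,
        # normalized over all centroids.
        weights = {
            label: 1.0 / (1e-9 + math.dist(point, c))
            for label, c in self.centroids.items()
        }
        total = sum(weights.values())
        label, best = max(weights.items(), key=lambda kv: kv[1])
        confidence = best / total
        if confidence < self.threshold:
            return ("abstain", confidence)   # "I don't know" instead of a guess
        return (label, confidence)

clf = CompetencyAwareClassifier({"a": (0.0, 0.0), "b": (10.0, 0.0)})
print(clf.predict((0.5, 0.0)))   # near centroid "a": answers confidently
print(clf.predict((5.0, 0.0)))   # equidistant from both: abstains
```

The same wrap-a-model-with-a-confidence-gate pattern applies to far larger models; the hard research problem is making the confidence estimate trustworthy, not the gating itself.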

One of the biggest barriers to building AI systems is the amount of computing power required to run them, but DARPA is looking to the insect world to lower that barrier to entry. Through the MicroBRAIN program, the agency is examining the brains of very small flying insects for inspiration for more energy-efficient AI designs.

Beyond improving the tech itself, DARPA is also looking to AI to tackle some of the most pressing problems facing the government today. The agency is funding research to teach computers to automatically detect errors in deepfakes and other manipulated media. Officials are also investing in AI that could help design more secure weapons systems, vehicles and other network-connected platforms.

Outside of artificial intelligence, DARPA is also working to develop a wide range of other capabilities that sound like they came straight from a sci-fi movie, including but not limited to satellite-repair robots, automated underground mapping technologies and computers powered by biological processes.

IARPA Wants Eyes in the Sky

Today, the intelligence community consumes an immeasurable amount of information, so much that it's virtually impossible for analysts to make sense of it in any reasonable amount of time. In this world of data abundance, intelligence officials see AI as a way to stay one step ahead of adversaries, and the tech is a major priority for the community's bleeding-edge research shop.

AI has numerous applications across the national security world, and in 2019, improving surveillance was a major goal.

In April, the Intelligence Advanced Research Projects Activity announced it was pursuing AI that could stitch together and analyze satellite images and footage collected from planes, drones and other aircraft. The program, called Space-based Machine Automated Recognition Technique, essentially looks to use AI to monitor all human activity around the globe in real-time.

The tech would automatically detect and monitor major construction projects and other anthropogenic activity around the planet, merging data from multiple sources and keeping tabs on how sites change over time. Though their scopes somewhat differ, SMART harkens back to the Air Force's controversial Project Maven program, which sought to use artificial intelligence to automatically analyze video footage collected by drones.

IARPA is also looking to use artificial intelligence to better monitor human activity closer to the ground. In May, the agency started recruiting teams to help train algorithms to follow people as they move through video surveillance networks. According to the solicitation, the AI would piece together footage picked up by security cameras scattered around a particular space, letting agencies track individuals' movements through crowded areas.

Combine this capability with long-range biometric identification systems (a technology IARPA also began exploring in 2019) and you could have machines naming people and tracking their movements without spy agencies needing to lift a finger.

The Funding Fight at ARPA-E

The Energy Department's bleeding-edge research office, ARPA-E, is also supporting a wide array of efforts to advance the nation's energy technologies. This year, the organization launched programs to improve carbon-capture systems, reduce the cost of nuclear energy and increase the efficiency of the power grid, among other things.

But despite those efforts, the Trump administration has repeatedly tried to shut down the office.

In its budget request for fiscal 2020, the White House proposed reducing ARPA-E's funding by 178%, giving the agency a final budget of negative $287 million. The administration similarly defunded the office in its 2019 budget request.

While it's unclear exactly how much funding ARPA-E will receive next year, it's safe to say its budget will go up. The Senate opted to increase the agency's funding by $62 million in its 2020 appropriations, and the House version of the legislation included a $59 million increase. In October, the House Science, Space and Technology Committee advanced a bill that would provide the agency with nearly $2.9 billion over the course of five years, though the bill has yet to receive a full vote in the chamber.


gInk is an on-screen annotation software for Windows – Ghacks Technology News

On-screen annotation software is useful in a number of situations, including during presentations or demonstrations. The main idea behind the open source application gInk is to provide Windows users with an easy-to-use yet powerful program for making on-screen annotations.

Windows users may download the latest version of the program from the project's GitHub website. Those interested in the source code will find it hosted there as well.

All it takes is to download the latest version of the software, extract the archive it comes in, and run the executable from the destination directory.

The on-screen annotation software sits idly in the background on start. You may launch it with a left-click on the system tray icon or with the global hotkey Ctrl-Alt-G. The toolbar is displayed at the bottom and most on-screen activity is blocked at the same time.

Use hotkeys, the mouse or touch-input to select one of the available tools to start using it. Several pencils are provided to draw on the screen; there is also an eraser, an undo function, and a trashbin to destroy everything that has been annotated up to that point. The arrow icon does not paint arrows on the screen but is used to activate mouse functionality (to activate links or buttons). A click on the camera icon creates a snapshot of the screen.

The application supports mouse, pen, and touch input. Pen users may notice that it can distinguish between different pen pressures. Another useful feature is that gInk supports multi-display setups as well.

The application's options provide additional settings. You may select the drawing tools you want displayed when you invoke the toolbar. All but the pen-width panel are displayed by default, and all but the pencil selection options may be removed from the toolbar.

Other options provided include the ability to drag the toolbar around on the screen, to define up to ten pens each with its distinct color, alpha and width, and to set up or edit hotkeys (for each of the pens and tools).

Tip: check out ScreenMarker, which provides similar functionality.

gInk is a well-designed screen annotation tool for Windows. It is portable and open source, and supports most tools and features one would expect from a program of its kind. I'd like to see options to place ready-made elements and text on the screen. While you can draw these with the pens, built-in support would make things easier.

Now You: have you used screen annotation programs in the past?


Code Analysis and Happy Holidays – Enterprise License Optimization Blog

December 19, 2019 Kendra Morton

It's been a great year at Flexera, and I'm hoping my readers, too, prospered and experienced their own versions of success in 2019. I've enjoyed the time I've spent on my blog, delivering my views to all valued members of the open source community. Software Composition Analysis (SCA) is thriving; yes at Flexera, but also as a technology that is impacting companies across the globe and how they manage open source software, provide transparency across teams, and enable more innovation because license, IP and security risk protocols are in place. There's peace of mind.

And that's what everyone wants as 2019 rolls into a new year.

2020 is bound to be another year that brings unprecedented stories and trends related to code analysis. Trends like:

I'm looking forward to 2020 while stopping to reflect on the past year. It's my greatest pleasure to wish you happy holidays and a successful 2020.

I'd like to hear from you.

What's your trend meter say about open source technologies in 2020?

What are you the most excited about?

Tags: Open Source Compliance, Open Source Security, Open Source Software (OSS)


Big Data Predictions: What 2020 Will Bring – Datanami

With just over a week left on the 2019 calendar, it's now time for predictions. We'll run several stories featuring the 2020 predictions of industry experts and observers in the field. It all starts today with what is arguably the most critical aspect of the big data question: the data itself.

There's no denying that Hadoop had a rough year in 2019. But is it completely dead? Haoyuan "HY" Li, the founder and CTO of Alluxio, says that Hadoop storage, in the form of the Hadoop Distributed File System (HDFS), is dead, but Hadoop compute, in the form of Apache Spark, lives strong.

There is a lot of talk about Hadoop being dead, Li says. But the Hadoop ecosystem has rising stars. Compute frameworks like Spark and Presto extract more value from data and have been adopted into the broader compute ecosystem. Hadoop storage (HDFS) is dead because of its complexity and cost, and because compute fundamentally cannot scale elastically if it stays tied to HDFS. For real-time insights, users need immediate and elastic compute capacity that's available in the cloud. Data in HDFS will move to the most optimal and cost-efficient system, be it cloud storage or on-prem object storage. HDFS will die but Hadoop compute will live on and live strong.

As HDFS data lake deployments slow, Cloudian is ready to swoop in and capture the data into its object store, says Jon Toor, CMO of Cloudian.

In 2020, we will see a growing number of organizations capitalizing on object storage to create structured/tagged data from unstructured data, allowing metadata to be used to make sense of the tsunami of data generated by AI and ML workloads, Toor writes.
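The tagging idea is straightforward to sketch. The tiny in-memory "store" below is a stand-in for a real object store; the class, tag names, and query style are all hypothetical, not any vendor's API.

```python
# Sketch of metadata tagging: an object store keeps opaque blobs, but metadata
# attached at write time makes otherwise-unstructured data searchable.
class ObjectStore:
    def __init__(self):
        self._objects = {}   # key -> (blob, metadata dict)

    def put(self, key, blob, **metadata):
        self._objects[key] = (blob, metadata)

    def find(self, **criteria):
        # Return keys whose metadata matches every criterion.
        return [
            key for key, (_, meta) in self._objects.items()
            if all(meta.get(k) == v for k, v in criteria.items())
        ]

store = ObjectStore()
store.put("img-001.jpg", b"...", source="ml-pipeline", label="cat", reviewed=True)
store.put("img-002.jpg", b"...", source="ml-pipeline", label="dog", reviewed=False)
print(store.find(source="ml-pipeline", reviewed=True))   # ['img-001.jpg']
```

Real object stores expose the same shape of interface (user-defined metadata on `put`, tag-based search), just at petabyte scale and with indexing behind the query.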

The end of one thing, like Hadoop, will give rise to the beginning of another, according to ThoughtSpot CEO Sudheesh Nair.

Over the last 10 years or so, we've seen the rise, plateau, and the beginning of the end for Hadoop, Nair says. This isn't because Big Data is dead. It's exactly the opposite. Every organization in the world is becoming a Big Data company. It's a requirement to operate in today's business landscape. Data has become so voluminous, and the need for agility with this data so great, that organizations are either building their own data lakes or warehouses, or going directly to the cloud. As that trend accelerates in 2020, we'll see Hadoop continue to decline.

When data gets big enough, it exerts a gravitational-like force, which makes it difficult to move, while also serving to attract even more data. Understanding data gravity will help organizations overcome barriers to digital transformation, says Chris Sharp, CTO of Digital Realty.

Data is being generated at a rate that many enterprises can't keep up with, Sharp says. Adding to this complexity, enterprises are dealing with data, both useful and not useful, from multiple locations that is hard to move and utilize effectively. This presents enterprises with a data gravity problem that will prevent digital transformation initiatives from moving forward. In 2020, we'll see enterprises tackle data gravity by bringing their applications closer to data sources rather than transporting resources to a central location. By localizing data traffic, analytics and management, enterprises will more effectively control their data and scale digital business.

All things being equal, it's better to have more data than less. But companies can move the needle just by using available technology to make better use of the data they already have, argues Beaumont Vance, the director of AI, data science, and emerging technology at TD Ameritrade.

As companies are creating new data pools and discovering better techniques to understand findings, we will see the true value of AI delivered like never before, Vance says. At this point, companies are using less than 20% of all internal data, but through new AI capabilities, the remaining 80% of untapped data will be usable and easier to understand. Previously unanswerable questions will yield obvious findings that help drive massive change across industries and societies.

Big data is tough to manage. What if you could do AI with small data? You can, according to Arka Dhar, the CEO of Zinier.

Going forward, we'll no longer require massive big data sets to train AI algorithms, Dhar says. In the past, data scientists have always needed large amounts of data to perform accurate inferences with AI models. Advances in AI are allowing us to achieve similar results with far less data.

How you store your data dictates what you can do with it. You can do more with data stored in memory than on disk, and in 2020, we'll see organizations storing more data on memory-based systems, says Abe Kleinfeld, the CEO of GridGain.

In 2020, the adoption of in-memory technologies will continue to soar as digital transformation drives companies toward real-time data analysis and decision-making at massive scale, Kleinfeld says. Let's say you're collecting real-time data from sensors on a fleet of airplanes to monitor performance, and you want to develop a predictive maintenance capability for individual engines. Now you must compare anomalous readings in the real-time data stream with the historical data for a particular engine stored in the data lake. Currently, the only cost-effective way to do this is with an in-memory data integration hub, based on an in-memory computing platform like Apache Ignite that integrates Apache Spark, Apache Kafka, and data lake stores like Hadoop. 2020 promises to be a pivotal year in the adoption of in-memory computing as data integration hubs continue to expand in enterprises.
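Stripped of the platform specifics, the engine-monitoring example boils down to keeping per-engine baselines in memory so each streaming reading can be checked without a round trip to the data lake. This toy sketch assumes made-up readings and a conventional three-sigma cutoff; a real hub like Apache Ignite distributes this state across a cluster rather than one process.

```python
# Toy in-memory baseline check for the airplane-engine example: historical
# stats are precomputed and cached, so anomaly checks on streaming readings
# are just arithmetic against in-memory state.
from statistics import mean, stdev

class EngineMonitor:
    def __init__(self, history):
        # Precompute each engine's baseline once; keep it in memory.
        self.baselines = {
            engine: (mean(temps), stdev(temps))
            for engine, temps in history.items()
        }

    def is_anomalous(self, engine, reading, z_cutoff=3.0):
        mu, sigma = self.baselines[engine]
        return abs(reading - mu) > z_cutoff * sigma

# Hypothetical historical temperature readings for one engine.
history = {"engine-7": [601, 598, 603, 600, 599, 602]}
monitor = EngineMonitor(history)
print(monitor.is_anomalous("engine-7", 601))   # within the baseline: False
print(monitor.is_anomalous("engine-7", 680))   # far outside it: True
```

The expensive part (scanning the lake to build `history`) happens once; the hot path touches only memory, which is the whole argument for the integration-hub pattern.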

Big data can make your wildest business dreams come true. Or it can turn into a total nightmare. The choice is yours, say Eric Raab and Kabir Choudry, vice presidents at Information Builders.

Those that have invested in the solutions to manage, analyze, and properly action their data will have a clearer view of their business and the path to success than has ever been available to them, Raab and Choudry write. Those that have not will be left with a mountain of information that they cannot truly understand or responsibly act upon, leaving them to make ill-informed decisions or deal with data paralysis.

Let's face it: managing big data is hard. That doesn't change in 2020, which will bring a renewed focus on data orchestration, data discovery, data preparation, and model management, says Todd Wright, head of data management and data privacy solutions at SAS.

According to the World Economic Forum, it is predicted that by 2020 the amount of data we produce will reach a staggering 44 zettabytes, Wright says. The promise of big data never came from simply having more data from more sources, but from being able to develop analytical models to gain better insights on that data. With all the work being done to advance analytics, AI and ML, it is all for naught if organizations do not have a data management program in place that can access, integrate, cleanse and govern all this data.

Organizations are filling up NVMe drives as fast as they can to help accelerate the storage and analysis of data, particularly involving IoT. But doing this alone is not enough to ensure success, says Nader Salessi, the CEO and founder of NGD Systems.

NVMe has provided a measure of relief and proven to remove existing storage protocol bottlenecks for platforms churning out terabytes and petabytes of data on a regular basis, Salessi writes. Even though NVMe is substantially faster, it is not fast enough by itself when petabytes of data are required to be analyzed and processed in real time. This is where computational storage comes in and solves the problem of data management and movement.

Data integration has never been easy. With the ongoing data explosion and expansion of AI and ML use cases, it gets even harder. One architectural concept showing promise is the data fabric, according to the folks at Denodo.

Through real-time access to fresh data from structured, semi-structured and unstructured data sets, data fabric will enable organizations to focus more on ML and AI in the coming year, the Denodo company says. With the advancement in smart technologies and IoT devices, a dynamic data fabric provides quick, secure and reliable access to vast data through a logical data warehouse architecture, thus facilitating AI-driven technologies and revolutionizing businesses.

Seeing how disparate data sets are connected using semantic AI and enterprise knowledge graphs (EKGs) provides another approach to tackling the data silo problem, says Saurav Chakravorty, the principal data scientist at Brillio.

An organization's valuable information and knowledge is often spread across multiple documents and data silos, creating big headaches for a business, Chakravorty says. EKGs will allow organizations to do away with semantic incoherency in a fragmented knowledge landscape. Semantic AI and EKGs complement each other and can bring great value overall to enterprise investments in data lakes and big data.

2020 holds the potential to be a breakout year for storage-class memory, argues Charles Fan, the CEO and co-founder of MemVerge.

With increasing demand from data center applications, paired with the increased speed of processing, there will be a huge push toward a memory-centric data center, Fan says. Computing innovations are happening at a rapid pace, with more and more compute tech, from x86 to GPUs to ARM. This will continue to open up new topologies between CPU and memory units. While architecture currently tends to be more disaggregated between the computing layer and the storage layer, I believe we are headed toward a memory-centric data center very soon.

We are rapidly moving toward a converged storage and processing architecture for edge deployments, says Bob Moul, CEO of machine data intelligence platform Circonus.

Gartner predicts there will be approximately 20 billion IoT-connected devices by 2020, Moul says. As IoT networks swell and become more advanced, the resources and tools that manage them must do the same. Companies will need to adopt scalable storage solutions to accommodate the explosion of data that promises to outpace current technology's ability to contain, process and provide valuable insights.

Dark data will finally see the light of day in 2020, according to Rob Perry, the vice president of product marketing at ASG Technologies.

Every organization has islands of data, collected but no longer (or perhaps never) used for business purposes, Perry says. While the cost of storing data has decreased dramatically, the risk premium of storing it has increased dramatically. This dark data could contain personal information that must be disclosed and protected. It could include information subject to Data Subject Access Requests and possibly required deletion, but if you don't know it's there, you can't meet the requirements of the law. Still, this data could also hold the insight that opens up new opportunities and drives business growth. Keeping it in the dark increases risk and possibly masks opportunity. Organizations will put a new focus on shining a light on their dark data.
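A first pass at shining that light is often just a scan of stored text for things that look like personal information. The sketch below uses two deliberately simplified regex patterns (email and US phone number) and a made-up archive; a real PII detector needs many more patterns plus validation and review.

```python
# Illustrative "dark data" sweep: flag stored documents containing patterns
# that resemble personal information, so they can be reviewed for
# GDPR-style access or deletion requests. Patterns are simplified examples.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scan_for_pii(documents):
    # documents: name -> text; returns name -> list of pattern names found
    findings = {}
    for name, text in documents.items():
        hits = [kind for kind, pat in PII_PATTERNS.items() if pat.search(text)]
        if hits:
            findings[name] = hits
    return findings

archive = {
    "old_export.csv": "jane.doe@example.com, renewal 2014",
    "notes.txt": "call 555-867-5309 about the invoice",
    "readme.md": "nothing sensitive here",
}
print(scan_for_pii(archive))
# {'old_export.csv': ['email'], 'notes.txt': ['us_phone']}
```

Even this crude sweep turns "islands of data" into a reviewable inventory, which is the prerequisite for both the compliance risk and the opportunity Perry describes.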

Open source databases will have a good year in 2020, predicts Karthik Ranganathan, founder and CTO at Yugabyte.

Open source databases, which claimed zero percent of the market ten years ago, now make up more than 7%, Ranganathan says. It's clear that the market is shifting, and in 2020 there will be an increase in commitment to true open source. This goes against the recent trend of database and data infrastructure companies abandoning open source licenses for some or all of their core projects. However, as technology rapidly advances, it will be in the best interest of database providers to switch to a 100% open source model, since freemium models take a significantly longer time for the software to mature to the level of a true open source offering.

However, 2019 saw a pullback from pure open source business models at companies like Confluent, Redis, and MongoDB. Instead of open source software, the market will be responsive to open services, says Dhruba Borthakur, the co-founder and CTO of Rockset.

Since the public cloud has completely changed the way software is delivered and monetized, I predict that the time for open sourcing new, disruptive data technologies will be over as of 2020, Borthakur says. Existing open-source software will continue to run its course, but there is no incentive for builders or users to choose open source over open services for new data offerings. Ironically, it was ease of adoption that drove the open-source wave, and it is ease of adoption of open services that will precipitate the demise of open source, particularly in areas like data management. Just as the last decade was the era of open-source infrastructure, the next decade belongs to open services in the cloud.

