AI data-monopoly risks to be probed by UK parliamentarians – TechCrunch

Posted: July 20, 2017 at 3:13 am

The UK's upper house of parliament is asking for contributions to an inquiry into the socioeconomic and ethical impacts of artificial intelligence technology.

The House of Lords committee will consider a range of questions as part of the inquiry.

The committee says it is looking for pragmatic solutions to the issues and questions raised by the development and use of artificial intelligence, both now and in the future.

Commenting in a statement, Lord Clement-Jones, chairman of the Select Committee on Artificial Intelligence, said: "This inquiry comes at a time when artificial intelligence is increasingly seizing the attention of industry, policymakers and the general public. The Committee wants to use this inquiry to understand what opportunities exist for society in the development and use of artificial intelligence, as well as what risks there might be.

"We are looking to be pragmatic in our approach, and want to make sure our recommendations to government and others will be practical and sensible. There are significant questions to address relevant to both the present and the future, and we want to help inform the answers to them. To do this, we need the help of the widest range of people and organisations.

"If you are interested in artificial intelligence and any of its aspects, we want to hear from you. If you are interested in public policy, we want to hear from you. If you are interested in any of the issues raised by our call for evidence, we want to hear from you," he added.

The committee's call for evidence can be found here. Written submissions can be made via a webform on the committee's webpage.

The deadline for submissions to the inquiry is September 6, 2017.

Concern over the societal impacts of AI has been rising up the political agenda in recent times, with another committee of UK MPs warning last fall that the government needs to take proactive steps to minimise bias being accidentally built into AI systems, and to ensure transparency so that autonomous decisions can be audited and systems vetted to confirm AI tech is operating as intended and that unwanted, or unpredictable, behaviours are not produced.

Another issue that we've flagged here on TechCrunch is the risk of valuable publicly funded data-sets effectively being asset-stripped by tech giants hungry for data to feed and foster commercial AI models.

Since 2015, for example, Google-owned DeepMind has been forging a series of data-sharing partnerships with National Health Service Trusts in the UK, which have provided it with access to millions of citizens' medical information. Some of these partnerships explicitly involve AI; in other cases it has started by building clinical task management apps, yet applying AI to the same health data-sets is a stated, near-term ambition.

It also recently emerged that DeepMind is not charging NHS Trusts for the app development and research work it's doing with them; rather, its price appears to be access to what are clearly highly sensitive (and publicly funded) data-sets.

This is concerning, as there are clearly only a handful of companies with deep enough pockets to effectively buy access to highly sensitive, publicly funded data-sets, i.e. by offering five years of free work in exchange for access, then using that data to develop a new generation of AI-powered products. A small startup cannot hope to compete on the same terms as the Alphabet-Google behemoth.

The risk of data-based monopolies and winner-takes-all economics from big tech's big data push to garner AI advantage should be loud and clear. As should the pressing need for public debate on how best to regulate this emerging sector, so that future wealth, and any benefits derived from the power of AI technologies, can be widely distributed, rather than simply locking in platform power.

In another twist pertaining to DeepMind Health's activity in the UK, the country's data protection watchdog ruled earlier this month that the company's first data-sharing arrangement with an NHS Trust broke UK privacy law. Patients' consent had been neither sought nor obtained for the sharing of some 1.6 million medical records for the purpose of co-developing a clinical task management app to provide alerts on the risk of a patient developing a kidney condition.

The Royal Free NHS Trust now has three months to change how it works with DeepMind and bring the arrangement into compliance with UK data protection law.

In that instance the app in question does not involve DeepMind applying any AI. However, in January 2016 the company and the same Trust agreed on wider ambitions to apply AI to medical data-sets within five years. So the NHS app development freebies that DeepMind Health is engaged in now are clearly paving the way for a broader AI push down the line.

Commenting on the Lords' inquiry, Sam Smith, coordinator of the health data privacy group medConfidential, an early critic of how DeepMind was being handed NHS patient data, told us: "This inquiry is important, especially given the unlawful behaviour we've seen from DeepMind's misuse of NHS data. AI is slightly different, but the rules still apply, and this expert scrutiny in the public domain will move the debate forward."
