Beyond privacy: there are wider issues at stake over Big Tech in medicine – Open Democracy

Posted: February 7, 2022 at 7:14 am

Big Tech's role in facilitating a host of digital harms in recent years has become painfully clear: political polarisation, consumer manipulation, discrimination-by-algorithm, worker insecurity, to name just a few.

The European Union has taken on a leading role in safeguarding citizens from these harms, first and foremost with the implementation of privacy standards and data protection law. Indeed, privacy has taken on a dominant position in the marketplace of public values in the digital age.

There are good reasons for this. Privacy is undoubtedly a core value of democratic societies, to be championed and cherished. It is the breathing room we need to engage in the process of self-development. But at this stage in our digital evolution, our heightened sensitivity to privacy, our fixation on data protection, may act as an obstacle to a digital Europe fit for all.

In fact, our focus on privacy may be unwittingly enabling, rather than hindering, the continued expansion of Big Tech into new sectors. This is what we've found in an ongoing European Commission-funded research project I am leading at Radboud University in the Netherlands, which is investigating the risks raised by the increased involvement of Big Tech in health and medicine.

Since at least 2014, all major tech corporations, including Alphabet (Google's parent company), Apple, Microsoft, Amazon, Facebook and IBM, have moved into health and medicine. They have done so either by setting up partnerships with public research institutions on medical projects or by developing health-related applications themselves, including software for carrying out remote clinical studies, wearables for medical research, devices for home medical surveillance, artificial intelligence (AI) systems for diagnostics and prediction, and funding schemes for biomedical research.

These companies were also quick to offer digital support in the fight against the COVID-19 pandemic, most famously with the Google-Apple application programming interface (API) for digital contact tracing, on which most contact tracing apps in Europe, as well as in other parts of the world, run.

To anyone who had been paying attention to this 'Googlisation' of health, privacy and data protection issues were, from the get-go, potential red flags. Indeed, it is not difficult to conjure up horror scenarios of these companies getting hold of our personal health data and feeding it into a metaverse of increasingly precise data profiles that could be used to target, surveil and manipulate us.

In response, data protection watchdogs were unleashed (with more or less success), and digital security experts began designing state-of-the-art privacy-by-design data exchange techniques specifically for collaborations between Big Tech and hospitals.

These efforts may be good deterrents. To date, there have been relatively few privacy scandals around Big Tech's involvement in health and medicine. And privacy now seems to be a shared concern of companies too, regardless of what their motives might be.

Google and Apple's COVID-19 digital contact tracing protocol, for example, conforms to the stringent privacy-protecting criteria defined by leading privacy and security experts. First and foremost amongst these, it only works with decentralised data storage systems that keep our data on our individual phones rather than sending them to a central repository, thereby pre-empting any surveillance creep. It is precisely for this reason that many privacy experts, the European Data Protection Supervisor and most European states applauded and adopted the initiative when it was launched in April 2020.
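To make concrete how this decentralised design pre-empts surveillance, here is a minimal sketch in Python. It is loosely modelled on the Google-Apple approach but heavily simplified: the real protocol derives identifiers with HKDF and AES over 10-minute windows, and the function names below are invented for illustration.

```python
# Minimal sketch of decentralised exposure notification (simplified;
# the real Google-Apple protocol uses HKDF/AES and rolling windows).
import hashlib
import hmac
import os

def daily_key() -> bytes:
    # Each phone draws a fresh random key per day. It never leaves the
    # device unless the user tests positive and consents to publishing it.
    return os.urandom(16)

def rolling_ids(day_key: bytes, intervals: int = 144) -> list:
    # Short-lived identifiers broadcast over Bluetooth; without the day
    # key, observers cannot link them to each other or to a person.
    return [hmac.new(day_key, i.to_bytes(2, "big"), hashlib.sha256).digest()[:16]
            for i in range(intervals)]

# Alice's phone broadcasts its own IDs and records IDs it hears nearby.
bob_key = daily_key()
heard_by_alice = set(rolling_ids(bob_key)[:3])  # Alice was briefly near Bob

# Bob tests positive and uploads only his day keys to a public list;
# the central server never learns who met whom.
published_keys = [bob_key]

# Matching happens locally, on Alice's phone.
exposed = any(rid in heard_by_alice
              for key in published_keys
              for rid in rolling_ids(key))
print("possible exposure" if exposed else "no exposure detected")
```

The essential property is visible in the last step: the public list contains only the keys of consenting positive users, and the question "was I exposed?" is answered on each phone, so no central party ever reconstructs the contact graph.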

But privacy is only the tip of the iceberg when it comes to the risks that the increased involvement of Big Tech in health and medicine raises. And the near-exclusive focus on privacy has prevented much broader questions from being asked.

For example, will these companies become the new gatekeepers of the very valuable datasets they are helping to compile? These datasets may be made open access in the future, but there is no guarantee of that.

Another crucial question is what role these companies will play in posing research questions and setting research agendas in health and medicine. Sergey Brin, Google's co-founder, has been open about the fact that a rare form of Parkinson's disease runs in his family and that this is why Google has invested in Parkinson's research. In the future, will Silicon Valley tycoons get a say in which global diseases receive attention and which won't?

And what kind of clash of expertise will we witness between medical experts and these tech companies, and whose expertise will prevail? The Google-Apple contact tracing API is a case in point. Some public health experts were unhappy with the API: decentralised storage is good for privacy, but not for the type of oversight of infections you want in a pandemic.

Ultimately, we need to ask how the health and medical sector will be reshaped by the growing presence of these actors. But also, and perhaps most importantly, as these companies move into virtually all sectors of society, from education and city planning to news provision, transport and even space exploration, we need to ask how society, as an aggregation of sectors, is being transformed.

These are questions about health and medicine as a public good, about who is involved in the development and distribution of this public good, and about new configurations of power in a society that is increasingly digitised. These questions remain even if privacy and data protection are properly addressed.

This has to do with the fact that Big Tech's business models are evolving. In health and medicine at least, the initiatives and partnerships that companies are launching are not about capturing as much data as possible and using it to target us with advertisements. To be successful, many of these initiatives do not require using data in ways that are privacy-unfriendly. Actually, some of them do not require the use of data at all.

Take Apple's ResearchKit software, for example, which allows researchers to use iPhones as a means of collecting data for clinical studies. Apple does not need to see, control, analyse or in any way handle the data collected in order for ResearchKit and the iPhone to become a new tool for remote clinical studies. No privacy issues at stake here.
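ResearchKit itself is an open-source framework for iPhone apps, so the point here is architectural rather than tied to any particular API. A rough sketch of the data flow, in Python and with a hypothetical hospital endpoint, might look like this:

```python
# Illustrative sketch only: ResearchKit is a Swift framework; this Python
# mock shows the architecture, not the API. Endpoint and field names are
# hypothetical.
import json
import urllib.request

# The data sink belongs to the research institution, not the platform vendor.
STUDY_ENDPOINT = "https://parkinson-study.example-hospital.org/responses"

def submit_survey(participant_id: str, answers: dict) -> None:
    # Send a participant's answers from the phone straight to the
    # researchers' server; the software vendor never handles the data.
    payload = json.dumps({"participant": participant_id,
                          "answers": answers}).encode("utf-8")
    request = urllib.request.Request(
        STUDY_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(request)

# Example: a daily symptom check-in collected on the participant's device.
submit_survey("participant-0042", {"tremor": "mild", "took_medication": True})
```

The vendor supplies the study software and the phone, yet sits entirely outside the data path.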

Similarly, Verily (Alphabet's life-sciences company) is involved in Parkinson's research that seeks to develop new digital biomarkers for the disease. These efforts will be successful if the biomarkers are accurate, in which case they may become integral to future Parkinson's research. There is no need to share data with third parties to achieve this, and so here, too, privacy is a non-issue.
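To give a sense of what a digital biomarker can be: Parkinsonian rest tremor typically falls in the 4-6 Hz band, so one candidate marker is the dominant frequency of a wrist-accelerometer trace. The toy sketch below (not Verily's method; synthetic data and a 50 Hz sampling rate are assumed) shows the idea:

```python
# Toy "digital biomarker": dominant tremor frequency from a simulated
# wrist-accelerometer trace. Illustrative only.
import numpy as np

FS = 50  # assumed sampling rate, Hz

def dominant_frequency(trace: np.ndarray, fs: int = FS) -> float:
    # Frequency with the most spectral power, after removing the DC offset.
    spectrum = np.abs(np.fft.rfft(trace - trace.mean()))
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fs)
    return float(freqs[np.argmax(spectrum)])

# Simulate 10 seconds of a 5 Hz tremor buried in sensor noise.
t = np.arange(0, 10, 1.0 / FS)
trace = np.sin(2 * np.pi * 5.0 * t) + 0.3 * np.random.randn(len(t))

print(f"dominant frequency: {dominant_frequency(trace):.1f} Hz")  # ~5 Hz
```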

In both these examples, what the companies seek is to become indispensable players in the future of biomedicine, and to accomplish this they do not need to breach any privacy or data protection rules.

In this context, conceptual approaches that focus almost exclusively on data privacy, such as surveillance capitalism and data colonialism, are insufficient, if not counterproductive, as are regulatory frameworks such as the EU's GDPR and privacy-by-design techniques. They risk only scratching the surface and, worse, facilitating the entrance of Big Tech into ever more sectors of society via privacy-friendly solutions. The over-emphasis on privacy in our discussions and regulation of Big Tech draws our critical attention away from bigger questions about agenda setting, infrastructural power and new dependencies on a handful of companies across sectors of society.

What we need are approaches that reach beyond privacy risks and data protection. To do this, drawing on political theorist Michael Walzer's seminal work Spheres of Justice, my group at Radboud University has developed a conceptual framework that understands Big Tech's push into ever more sectors as 'sphere transgressions'.

Walzer argues that a just society is one where advantages or inequalities that exist in one sphere, such as having more money (market sphere), should not be translated into advantages in other spheres, such as access to better education (sphere of education).

Yet, what we are witnessing with the growing influence of Big Tech across sectors of society is the conversion of advantages these companies have acquired in the sphere of digital production (expertise in the development of digital infrastructure) into advantages in all spheres that undergo digitalisation, be this health, education, public administration, transportation and so on. Such sphere transgressions are illegitimate insofar as these companies do not have domain expertise proportional to their new level of influence in these different spheres, and because they are not accountable in the way that public sector actors are.

This framework moves beyond a narrow focus on privacy to identify the risks of sphere transgressions on two levels. First, at the sectoral level, it enables us to ask how Big Tech is contributing to a reshaping of individual sectors. How do the values, norms and expertise that these companies bring with them, which typically promote technological efficiency and standardisation, crowd out the traditional values and expertise that underpin a sector such as health or education: values such as care and access based on need, or public health expertise that prefers centralised over decentralised oversight in a pandemic?

Second, it enables us to ask how Big Tech is reshaping society, as an aggregation of spheres. In what new ways are we becoming dependent on tech companies for the provision of public goods, and what kind of decision-making power does this confer on them across society?

At Radboud, we have translated this framework into an open-data digital tool, Sphere Transgressions Watch, which allows users to track the presence of Big Tech in different sectors over time and to contribute data on instances of sphere transgressions themselves. Our hope is that this tool will raise awareness about this phenomenon among policymakers, the media and civil society, and that it will be used by other scholars to ask their own research questions. We hope it will contribute to developing more robust governance frameworks for the digital age that move beyond a narrow focus on privacy and data protection.
