Police and governments may increasingly adopt surveillance technologies in response to coronavirus fears – The Conversation CA

Posted: March 24, 2020 at 6:14 am

The COVID-19 pandemic has dominated headlines, with public fears mounting and governments around the world scrambling to find ways to control the spread of the virus. Many governments have declared national emergencies, with the Canadian federal government also considering this as an option.

In the midst of these fears, tech companies in the United States have reportedly been in talks with the U.S. government and other agencies to use their data gathering and data location tools to track virus transmission trends. This includes the controversial facial-recognition startup, Clearview AI.

The use of Clearview AI by Canadian police agencies has sparked much media coverage focusing on the privacy implications of the application. It matches uploaded photographs of individuals with billions of other photos scraped mainly from social media and stored on Amazon servers. The technology infringes privacy laws by exploiting the biometric data of ordinary individuals without their consent.

As reports first began to emerge on the use of Clearview AI by American police, it became evident that Canadian police were also testing and using the technology; not all were initially forthcoming about their use.

One of the common themes among police agencies in Canada was that Clearview AI was used by individual officers within different investigative units. What became apparent was the lack of knowledge of Clearview AI's use by senior management in some police departments. It was only after Clearview AI's client list was leaked that police agencies began conducting internal investigations to determine which of their officers had used the application, in what units, and how many times.

The use of such technologies reveals a larger issue that goes beyond privacy, namely how surveillance technologies are deployed by police agencies in the first place.

This is not the first time that police agencies in Canada have adopted a technology in such an ad hoc manner. Long before Clearview AI, senior management within local police agencies in Canada began to adopt social media as an investigative tool after individual officers noticed its utility for investigations and intelligence gathering. Research has also shown police officers in Canada using new methods of investigation, including monitoring social media, initially unbeknownst to senior management.

Police agencies' newfound interest in social media fostered a new market for tech companies selling social media monitoring or listening technologies. These use natural language processing to identify and monitor keywords not protected by privacy settings on social platforms, with some companies providing trials to police.

Both tech companies and police argue that social media accounts not set to private can be considered publicly available information, or open source, and available for anyone to see.

Tor Ekeland, the lawyer for Clearview AI, has used this very same logic, arguing that the photographs of faces that the application gathers and stores are publicly available information.

Police forces are increasingly relying on AI for data collection and analysis. Open source information from social media is playing a role in shaping the development of new AI tools. Photographs may not be the only type of social media data subject to Clearview AI's algorithmic analysis: voice recognition is also apparently under development.

Efforts are now being made by federal and provincial privacy regulators to build a framework for the use and regulation of biometric data, including facial recognition software used by organizations such as the police, though what this framework will look like will not be known until the regulators complete their investigations.

In response to COVID-19, governments in Canada keep reminding the public that these are extraordinary times that require extraordinary measures. The effects of COVID-19 extend beyond its health impact. Tech companies can use the fear arising from the crisis to spread more surveillance technologies, offering them to governments as solutions to control the spread of the virus.

For example, facial recognition like Clearview AI could be used to identify anyone who's been in contact with an infected person, similar to how tech companies responded in the U.S. after 9/11 with the passing of the Patriot Act, which paved the way to mass surveillance.

Governments and agencies, including law enforcement, need to practice extreme caution and openness if measures involve surveillance technologies. There is a potential that they may become features of everyday life long after the virus has gone, opening up new areas of use (or abuse), a phenomenon known as surveillance creep.

Surveillance technologies can come at a cost not only to privacy, but to other political rights and freedoms: their use can cost innocent people the right to live their lives free of surveillance. Marginalized communities are even more vulnerable given their history of being over-policed.

The revelations about the use of Clearview AI by police in Canada show how little oversight there is of how surveillance technologies are adopted and used, and for what purpose. This calls for more understanding of the use of such technologies by police and for proper internal and external mechanisms of accountability. Fears from COVID-19 shouldn't lead to any knee-jerk reactions that will affect our democracy once the pandemic is over.
