Clearer rules needed for facial recognition technology

February 28th, 2020 by Michael Hackl

A version of this article was first published on rabble.ca

In a previous column, I wrote about the dangers that some police technology poses for civil liberties. In that column, I addressed police use of a computer program that claims to identify geographic areas more likely to experience crimes, so that police resources can be directed to those areas. Now we have another example of how law enforcement can use technology in a way that seriously threatens our civil liberties: Toronto police Chief Mark Saunders recently admitted that some officers in the Toronto Police Service have, since at least October 2019, been using a piece of facial recognition software called Clearview AI (named for the company that developed it).

Clearview AI has apparently mined the internet for billions of photos of people, largely from social media sites and the open web. Other companies providing facial recognition technology to police, by contrast, rely upon government sources such as mugshots and driver's licence photos.

Our privacy legislation

In theory, you should not have to worry about social media posts being used by police to identify you or others. After all, the federal government, as well as all of Canada's provinces and territories, have privacy laws and regulators tasked with enforcing them. However, those laws may not be well suited to protect you from companies using your social media posts in this manner.

If we take, for example, the Personal Information Protection and Electronic Documents Act (PIPEDA), which is the federal law governing businesses’ duty to protect personal information they collect, there are a number of issues with images posted online that weaken the protections that the law provides for your personal information.

In general, PIPEDA provides that businesses in Canada cannot collect, use or disclose any personal information about you without your consent, subject to certain exceptions. Personal information is defined as “information about an identifiable individual,” so photos of you should be protected from unwanted use or distribution, especially since the whole point of photos used in facial recognition software is to identify the individual. So why wouldn’t PIPEDA prohibit a company like Clearview AI from taking your photos off of the internet and using them in its program?

First, PIPEDA may not even apply to Clearview AI, for the simple fact that it is not a Canadian company. Even when it provides services to Canadian police organizations, if it is doing so via the internet, it may be difficult to pinpoint a physical location where the services are supplied. Further, PIPEDA also may not apply to whatever social media platform the photos were posted to in the first place.

Second, when you join a social media site, you are required to accept its privacy policy. Depending on what that policy says, you may be consenting to the use and distribution of any photos you post simply by creating an account. But even if the social media site in question has a privacy policy that lets you restrict the use and distribution of your posted content, that may not be enough to protect the personal information contained in the photos you post. After all, one of the main points of posting to a social media account is to share one's messages and images over the internet. By doing so, you are arguably providing your implied consent to the sharing of your images.

Sharing posts and the resulting loss of control over their use

On top of that, your posts may be liked, shared, reposted, copied or forwarded in ways you may not have intended, and by posting them in the first place, you are arguably consenting to that redistribution. Each time a post is redistributed, you lose some degree of control over how the information in it is used and who sees it. This is, arguably, the nature of media on the internet, and that fact may undercut an argument that you consented to the use of your personal information only for certain limited purposes. By willingly posting your photos in a medium in which you know third parties can access and redistribute them beyond your control, you have arguably given implied consent to your photos being used in ways you cannot control and did not intend.

Not everyone shares this view. Former Ontario privacy commissioner Ann Cavoukian has rejected the idea that by posting images online, people are implicitly consenting to the use of those images for purposes other than what they specifically intended.

Whether or not posting an image online means you have implicitly consented to the world seeing and sharing it, often in a manner you cannot control, the reality is that companies such as Clearview AI are using images from social media for purposes their posters probably did not intend, and some police are willing to take advantage of those companies' services.

As recently as January, the Toronto Police Service denied using Clearview AI. Saunders has since admitted that some officers had been using it since last October without his knowledge, and that he ordered them to stop. But how can we know for sure whether the technology is still in use?

This is an area where it is not enough that current legislation “might” prohibit police from using services like Clearview AI. Clear rules need to be set for the police to follow and a workable method of enforcing those rules needs to be put in place.

Filed in: Civil Rights
