Stevens and Solomun: Facial recognition technology speeds ahead as Canada's privacy law lags behind


Recently, a joint investigation by four privacy commissioners in Canada determined that controversial software company Clearview AI had engaged in illegal mass surveillance. The company was found to have scraped three billion images from the web and social media, including photos of children, without consent. Its clients, which have included several police departments in Canada (Ottawa police officers also tested it), can use this massive database to match images of people’s faces at incredible speed and scale through the company’s automated facial recognition software.

Although Clearview AI has stopped offering its software to Canadian clients, the company has refused to comply with the investigation’s two other recommendations: to delete previously collected data and to stop collecting images of Canadian residents.

These recommendations come on the heels of mounting evidence of clear harms from automated facial recognition technology, which can often amplify existing inequalities and exacerbate discrimination against marginalized groups. Research has found repeated instances of bias across gender and race, resulting in significantly higher rates of inaccuracy for the very groups who stand to be most affected by its use in law enforcement, particularly people of colour. The dangers are so grave that some private companies have instituted temporary moratoriums on their own facial recognition technology and several cities worldwide have banned its use.

In Canada, it is currently possible to collect and share facial images for identification purposes without consent, and without adequate legal procedures, including the right to challenge decisions made with this technology. This poses a tremendous risk of mistaken identifications and wrongful arrests resulting from the use of facial recognition systems.

Despite these harms, Canadian privacy law, which is meant to guard against this very kind of mass surveillance, currently lacks both real enforcement power and adequate safeguards to protect facial information. Unlike other kinds of biometric information, such as DNA or fingerprint data, facial information in Canada is not covered by clear guidelines and consent mechanisms governing how it can or should be used, even though it is highly sensitive and vulnerable to misuse.

But it’s not just private companies that Canadians need to be worried about.

Federal institutions are also using facial recognition software while Canadians are kept in the dark about how, when and for what purposes it is deployed. It was only through news reporting that the public learned of the RCMP’s and the military’s use of automated facial recognition on the population.

The Privacy Act, Canada’s privacy legislation for the federal government, does not explicitly include facial and biometric information as subsets of personal information worthy of special protection. The Act consequently fails to provide adequate safeguards against the significant risks associated with the collection, use and disclosure of some of our most sensitive personal information: our faces.