June 10 (Reuters) - The Canadian federal police force broke the law when it used controversial facial recognition software, the country's top privacy regulator found in a report released on Thursday.
The Royal Canadian Mounted Police (RCMP) initially denied using Clearview AI, a U.S.-based facial recognition tool that cross-references photos against a database of images posted to social media. In February 2020, the agency acknowledged it had been using the software for several months.
Clearview AI became the subject of privacy investigations in countries around the world after revelations that it scraped data from sites such as Facebook and Instagram to build a database of billions of faces.
The RCMP continued using the software until Clearview AI was barred from operating in Canada in July 2020.
In a statement, the RCMP said it accepted the Office of the Privacy Commissioner's (OPC) findings, and the two organizations had strengthened their relationship.
OPC said the onus was on the RCMP to ensure the tools it used were lawful. "A government institution cannot collect personal information from a third party agent if that third party agent collected the information unlawfully," Commissioner Daniel Therrien said in a statement.
The RCMP ultimately agreed to implement the OPC's recommendations, including creating an oversight function, after initially disputing that it was responsible for ensuring the services it used complied with the law.
In a press conference, Therrien said his biggest concern was that the RCMP could not explain the purpose of the majority of searches made in Clearview AI's database.
Just 6% of searches were related to child exploitation victim identification, which the RCMP said was the main reason it used Clearview AI. Another 85% of searches could not be explained, the OPC's report found.
The RCMP said the discrepancy was due to differences in how it tracked searches versus how Clearview AI did so.