Canada crawling toward AI regulatory regime
But experts say reform is urgent
Last week, privacy watchdogs revealed that five million images of shoppers’ faces were collected without their consent at a dozen of Canada’s most popular malls. Real estate company Cadillac Fairview embedded cameras equipped with facial-recognition technology, which draws on machine-learning algorithms, in digital information kiosks to discern customers’ ages and genders, according to an investigation by the federal, Alberta and B.C. privacy commissioners.
But the commissioners had no authority to levy fines against the firm, or against any company that mishandles Canadians’ personal information, an “incredible shortcoming of Canadian law that should really change,” B.C. information and privacy commissioner Michael McEvoy said in an email.
Legal void around algorithmic technology
The revelation shines a light on the legal void around algorithmic technology. Despite its status as an artificial-intelligence hub, Canada has yet to develop a regulatory regime to deal with problems of privacy, discrimination and accountability to which AI systems are prone, prompting renewed calls for regulation from experts and businesses.
“We are now being required to expect systematic monitoring and surveillance in the way that we walk down the road, drive in our cars, chat with our friends online in small social-media bubbles. And it changes the way that public life occurs, to subject that free activity to systematic monitoring,” said Kate Robertson, a Toronto-based criminal and constitutional lawyer.
Law enforcement and Clearview AI
At least 10 Canadian police agencies, including the RCMP and Calgary and Toronto police services, have used Clearview AI, a facial-recognition company that has scraped more than three billion images from the Internet for use in law enforcement investigations, according to a report co-written by Robertson.
Other Ontario police forces also may be “unlawfully intercepting” private conversations in online chat rooms via “algorithmic social-media surveillance technology,” according to the September report from the University of Toronto’s Citizen Lab and International Human Rights Program. Clearview AI said in July it would no longer provide facial-recognition services in Canada, but many companies offer similar services.
“We have seen the lack of clear limits and focused regulation leaving an overly broad level of discretion in both the public and police sectors that is a call to action for governments across the country,” Robertson said in a phone interview.
To trust AI is to have clear regulations
Canada needs to roll out concrete rules that balance privacy and innovation, said Carolina Bessega, co-founder and chief scientific officer of Montreal startup Stradigi AI. Public trust in artificial intelligence becomes increasingly crucial as machine-learning companies move from the conceptual to the commercial stage, she said. “And the best way to trust AI is to have clear regulations.” The regulatory vacuum also discourages businesses from deploying AI, holding back innovation and efficiency, particularly in hospitals and clinics, where the implications can be life or death.
“I strongly believe AI can be extremely helpful in the diagnosis and treatment of different diseases. But when human wellbeing can be affected, it is important to have the right regulations in place to ensure we do AI in a responsible way,” Bessega said.
Promote responsible innovation
Innovation Minister Navdeep Bains told The Canadian Press that an update to 20-year-old privacy legislation is due “in the coming weeks” to address gaps in personal-data protection, but declined to nail down a firm timeline. The would-be law should “empower Canadians to have better accountability and to promote responsible innovation,” he said. Bains pointed to the European Union’s 2018 General Data Protection Regulation as a model that hands citizens more control over their privacy and digital information through “clear enforcement mechanisms.”
The absence of an AI legal framework has implications for Canadians in areas ranging from law enforcement to immigration. So-called predictive policing, automated decision-making based on data that forecasts where a crime will occur or who will commit it, has had a disproportionate impact on racialized communities, said Robertson of Citizen Lab.
Target crime hot spots
Examples include a now-abandoned Chicago police initiative where the majority of people on a list of potential perpetrators were Black men who had no arrests or shooting incidents to their names, as well as a scuttled Los Angeles police strategy that saw officers targeting possible crime hot spots based on information gleaned from utility bills, foreclosure records and social-service files.
Since 2015, police departments in Vancouver, Edmonton, Saskatoon and London, Ont., have implemented or piloted predictive policing. The federal immigration and refugee system also relies on algorithm-driven decisions to help determine factors such as whether a marriage is genuine or whether someone should be designated as a “risk,” according to another Citizen Lab study, which found the practice threatens to violate human-rights law.
International agreement on AI deployment
AI testing and deployment in Canada’s military prompted Canadian deep-learning pioneers Geoffrey Hinton and Yoshua Bengio to warn about the dangers of robotic weapons and outsourcing lethal decisions to machines, and to call for an international agreement on their deployment. The federal government launched the Advisory Council on Artificial Intelligence in May 2019, and Canada was among the first states to develop an official AI research plan, unveiling a $125-million strategy in 2017. But the focus of both is largely scientific and commercial.
The advisory council, which includes a working group that aims to “foster trust” in the technology, has yet to produce a public report. In June, Canada, France and 13 other countries launched an international AI partnership to guide policy development with an eye to human rights, with its working group on responsible AI co-chaired by Bengio (who also co-chairs the advisory council). But drafting laws is beyond its mandate.
“Canada’s approach to AI appears to be focused on funding research as opposed to developing regulations and governance structures,” according to a U.S. Library of Congress report from January 2019.
Until legislation arrives in Parliament, experts say the observation still holds true.