Federal use of A.I. in visa applications could breach human rights, report says

Impacts of automated decision-making involving immigration applications and how errors and assumptions could lead to “life-and-death ramifications”

OTTAWA — A new report is warning about the federal government’s interest in using artificial intelligence to screen and process immigrant files, saying it could create discrimination, as well as privacy and human rights breaches.

The research, conducted by the University of Toronto’s Citizen Lab, outlines the impacts of automated decision-making involving immigration applications and how errors and assumptions within the technology could lead to “life-and-death ramifications” for immigrants and refugees.

The report’s authors issue seven recommendations calling for greater transparency, public reporting and oversight of the government’s use of artificial intelligence and predictive analytics to automate certain activities involving immigrant and visitor applications.

“We know that the government is experimenting with the use of these technologies … but it’s clear that without appropriate safeguards and oversight mechanisms, using A.I. in immigration and refugee determinations is very risky because the impact on people’s lives is quite real,” said Petra Molnar, one of the authors of the report.

“A.I. is not neutral. It’s kind of like a recipe and if your recipe is biased, the decision that the algorithm will make is also biased and difficult to challenge.”

Earlier this year, federal officials launched two pilot projects to have an A.I. system sort through temporary resident visa applications from China and India. Mathieu Genest, a spokesman for Immigration Minister Ahmed Hussen, says the analytics program helps officers triage online visa applications to “process routine cases more efficiently.”

He says the technology is being used exclusively as a “sorting mechanism” to help immigration officers deal with an ever-growing number of visitor visas from these countries by quickly identifying standard applications and flagging more complex files for review.

Immigration officers always make final decisions about whether to deny a visa, Genest says.

But this isn’t the only dive into artificial intelligence being spearheaded by the Immigration Department.

In April, the department started gauging interest from the private sector in developing other pilot projects involving A.I., or “machine learning,” for certain areas of immigration law, including in humanitarian and compassionate applications, as well as pre-removal risk assessments.

These two refugee streams of Canada’s immigration system are often used as a last resort by vulnerable people fleeing violence and war to remain in Canada, the Citizen Lab report notes.

“Because immigration law is discretionary, this group is really the last group that should be subject to technological experiments without oversight,” Molnar says.

She notes that A.I. has a “problematic track record” when it comes to gender and race, specifically in predictive policing that has seen certain groups over-policed.

“What we are worried about is these types of biases are going to be imported into this high risk laboratory of immigration decision-making.”

The government says officials are only interested in developing or acquiring a tool to help Immigration and Justice Department officials manage litigation and develop legal advice in immigration law.

“The intent is to support decision makers in their work and not replace them,” Genest said.

“We are monitoring and assessing the results and success of these pilots before we launch or consider expanding them to other countries and lines of business.”

In April, Treasury Board released a white paper on “responsible artificial intelligence in the government of Canada,” and is currently consulting with stakeholders to develop a draft directive on the use of automated decision-making technologies within government.

Molnar says she hopes officials will consider the Citizen Lab’s research and recommendations, including their call for an independent, arm’s-length oversight body to monitor and review the use of A.I. decision-making systems.

“We are beyond the conversation of whether or not A.I. is being used. The question is, if A.I. is here to stay we want to make sure it is done right.”

The Canadian Press
