The Hill Times
Saturday, August 2, 2025
Canada’s Politics and Government News Source Since 1989

From chatbots to translation: how the public service is using AI

Long before Prime Minister Mark Carney made sweeping election promises of harnessing artificial intelligence to boost productivity in the federal bureaucracy, AI programs were hard at work in the public service. 

Whether it’s tax questions, addressing Phoenix pay issues, translation services, or drafting documents, it seems there’s an AI bot for that.

In fact, the annual report about the federal public service itself was developed with the help of generative artificial intelligence. The July 16 Privy Council Office report for Carney (Nepean, Ont.) includes the disclaimer that the AI section was written using “a generative AI tool approved for use by public servants.” The tool helped “with plain language” and to brainstorm while developing the narrative, but the generated content was reviewed and edited by humans.

Artificial intelligence is technology that enables computers to simulate human processes like logic, problem solving, pattern recognition, and more. Generative AI is based on machine learning models that create original content by mimicking the human-generated information that it’s trained on. 

The government’s AI strategy for 2025 to 2027 has been in place since the spring, and numerous applications of the technology have been deployed throughout the public sector. The strategy requires public disclosure of which technologies are used and the tasks they perform, as both public and private sector surveys show mistrust about the use of artificial intelligence remains high.

The Canada Revenue Agency (CRA), Public Services and Procurement Canada (PSPC), and Shared Services Canada are among the departments making prominent use of these programs in their daily work.

Translation tool misses ‘critical nuances,’ warns union

Last month, PSPC’s Translation Bureau launched an AI translation tool prototype called PSPC Translate, spokesperson Jullian Paquin confirmed to The Hill Times.

The prototype was designed after a “noticeable decline” in translation request volumes, which suggested employees were using external translation tools like Google Translate and DeepL, an AI translation tool, Paquin said.

“To address this, the Translation Bureau sought to provide a secure, high-quality alternative that reflects Canadian linguistic standards and ensures data confidentiality.”

The tool is not intended to replace human translators, but to support the day-to-day communication needs of employees, and to reduce reliance on “unvetted external platforms,” Paquin said.

No jobs are being eliminated as a result of the prototype’s launch, and it’s designed to support Translation Bureau staff by handling routine internal content, Paquin said, allowing translators to focus on more complex work.

“Their expertise remains also essential to ensuring quality, refining the tool, and updating the lexicon to reflect evolving language use and organizational needs,” Paquin said.

Nathan Prier, president of the Canadian Association of Professional Employees, one of the public service’s largest unions, criticized the tool as undermining Canadian bilingualism, as “critical nuances are lost” when translations are performed by machines instead of humans.

CAPE president Nathan Prier criticized PSPC’s AI-powered translation bot, saying ‘critical nuances’ are lost when translation isn’t performed by humans. The Hill Times photograph by Andrew Meade

“Current translation technologies are far from being able to independently translate complex legal or technical texts,” he said in an emailed statement. “Errors in critical translations can create huge problems and cost more to resolve.”

Federal translators are “very high-calibre and remain more nuanced and sophisticated than current AI products,” he continued. “Their work can and is supported by emerging technologies but replacing them with AI in a rush is simply a direct attack on the standards Canadians should expect.

“We need to recruit more qualified people, give them the right tools, and consult them to find out what they need to do a quality job that puts both official languages on an equal footing and ensures professional and first-rate translations.”

Tax chat, document drafting, and more

As previously reported by The Hill Times, PSPC has also developed a bot to clear up the backlog of complaints related to the Phoenix pay system.

Speaking to reporters on June 23, PSPC associate deputy minister Alex Benay said an AI virtual agent, or “Ava,” has been trained by compensation advisers for the past year, and is now tasked with clearing 160,000 backlogged cases of issues with the beleaguered Phoenix pay system, under the supervision of humans.

Benay said complex cases will be managed by humans, while “the lighter-touch stuff will be first passed by machines, reviewed by humans.”

Alex Benay, associate deputy minister of PSPC, says an AI virtual agent dubbed ‘Ava’ has been trained by compensation advisers for the past year. The Hill Times photograph by Andrew Meade

In March, the CRA launched its beta chatbot, billing it as available 24/7 to answer questions about charities, personal income tax, and information related to accessing a CRA account. But it’s not without limits.

“Do not share any personal details like your name, address, social insurance number (SIN), date of birth, or financial information while using the chatbot,” reads CRA’s fact sheet about the product. “The GenAI chatbot beta is not designed to securely handle personal or sensitive information.”

The disclaimer continues that the information provided by the chatbot beta is based on Canada.ca webpages, and should not be taken as professional, legal, or medical advice. It further warns information from the bot “may not be fully accurate,” and ends with the advice to “use this information at your own risk.”

CRA spokesperson Sylvie Branch said the department first began experimenting with AI in 2023, and the new chatbot was developed in-house. It helps reduce pressure on contact centres, but isn’t intended to replace human agents.

“Instead, it enables taxpayers to find answers to basic questions independently, allowing contact centre staff to focus on more complex inquiries that require human expertise.”

The CRA chatbot reflects the Treasury Board Secretariat’s commitment to responsible, ethical, and effective AI adoption in the public sector, Branch said, and “closely aligns” with the TBS’s AI strategy and its guiding principles, including being human-centred, collaborative, ready, and responsible.

Treasury Board President Shafqat Ali. The Treasury Board Secretariat’s AI strategy and its guiding principles include being human-centred, collaborative, ready and responsible. The Hill Times photograph by Andrew Meade

Shared Services Canada has an internal chatbot for its employees. Dubbed CANChat, the bot can assist with document drafting, content editing, summarizing information, research assistance, meeting preparation, and more.

“SSC is conducting a pilot to evaluate CANChat’s value and potential for productivity,” the department’s webpage says. “The pilot aims to gather insights on integrating generative AI into daily workflows. This ensures employees have access to modern digital tools to improve their work experience and collaboration.”

Making bureaucrats more efficient

Sean O’Reilly, president of the Professional Institute of the Public Service of Canada (PIPSC), says artificial intelligence is here to stay, and should be used as a tool to augment the work public servants do, allowing them to take on new and ambitious projects.

“It should be used to make the work we do better and more efficient, I don’t think AI should be replacing public servants,” he said. “I don’t think any Canadian wants to get answers from a computer. It needs a human touch.”

While the public sector is bracing for impact after Finance Minister François-Philippe Champagne (Saint-Maurice–Champlain, Que.) called for a spending review and for departments to cut their budgets by 15 per cent over the next three years, there is a concern that AI will be used to replace public servants, O’Reilly said. 

But, he added, as it stands, public servants are using AI to do their jobs better.

“It’s removing that mundane work so they can actually use their skill that they train for to do the work, and they can do better work because of it,” O’Reilly said.

As AI usage expands across the public service, those programs will need to be maintained, and will need someone to “actually care for them and nurture them,” O’Reilly said, to make sure data models are working and biases are removed. Some PIPSC members are already working in that capacity across various departments, he added.

“You’re going to need a human for that,” he said. “Some researchers, their day to day job is AI.”

O’Reilly said unions have been pushing for an AI register covering all applications of the technology, not just within government, as well as a regulatory body for the technology.

The Privy Council Office established an AI Secretariat in 2024 “to improve co-ordination and delivery of the government’s significant AI agenda,” including in the public service to “drive service improvements and operational efficiency across government.”

Better tracking of AI programs needed

Joanna Redden is an associate professor in Western University’s faculty of information and media studies. She compiled a database of more than 300 applications of artificial intelligence in the federal government from 2013 to the present, many of them pilot projects.

The database was developed via information from documents tabled in Parliament, access-to-information requests, and news reporting. The database says just four per cent of the tools have publicly available government assessments on their impacts on the protected characteristics of individuals.

In an interview, Redden said the public isn’t well-informed about how widely AI is being used in the public service, and more work needs to be done to make that information clear, noting the public service’s AI strategy for 2025-27 mentions plans to develop an AI register for the government.

“I think the public doesn’t know enough about where AI is happening. But not only that, I think even if you know AI is being used, you don’t necessarily know how you’ve been impacted,” she said. “That’s a huge problem because in the research I’ve been part of, these systems are often not as accurate as they’re expected to be, not efficient, they get things wrong.”

What’s more, she added, public trust in AI is also low, as people using generative AI tools have experienced firsthand their failures and shortcomings.

Aside from more transparency, more conversations are needed about the pitfalls of AI, Redden said, including risks to people and communities, as well as environmental harms.

AI systems can be deployed in ways that can “disproportionately negatively affect” people with low incomes, or people who face discrimination based on race, ethnicity, or religion, she said, in some cases leading to wrongful arrests and detainment, biases in hiring, and more. 

Academics have developed an AI incident database to track some of the harms related to AI.

“The broad trend we see, and what I’m concerned about, is there’s not enough attention being paid to the ways in which AI is already leading to harm, and the kind of work that we should be doing, urgently, to prevent that harm from happening in the future,” Redden said. “There’s a lack of oversight in the first place to ensure that AI harms don’t occur.”

Redden said Bill C-27, which contained the Artificial Intelligence and Data Act, was the government’s first crack at legislating and regulating AI, but it died on the Order Paper when Parliament was prorogued in January.

“We’d like to see that energy around regulating AI picked back up again,” she said.

with files from Irem Koca

mglass@hilltimes.com

The Hill Times