Canadian companies’ AI policies aim to balance risks and benefits

“It would be a mistake not to harness the power of this technology. It offers enormous opportunities in terms of productivity and functionality,” says the founder of an artificial intelligence management software company

When talent search platform Plum noticed ChatGPT making waves in the tech world and beyond, it decided to go straight to the source to figure out how employees could and couldn’t use generative AI chatbots.

ChatGPT, which can turn simple text instructions into poems, essays, emails and more, produced a draft document last summer that took the Kitchener, Ont.-based company about 70 per cent of the way to its final policy.

“There was nothing wrong with it, there was nothing crazy with it,” recalls Plum CEO Caitlin MacGregor. “But there was an opportunity to get a little more specific or tailor it more to our business.”

Plum’s final policy – a four-page document based on the ChatGPT draft and on advice gathered from other startups last summer – directs employees not to feed customer or proprietary information into AI systems, to check everything the technology spits out for accuracy and to attribute any AI-generated content.

That makes Plum one of several Canadian organizations codifying their positions on artificial intelligence as people increasingly rely on the technology to boost their productivity at work.

Many were encouraged to develop policies by the federal government, which last fall published a set of AI guidelines for the public sector. Since then, dozens of startups and larger organizations have adapted those guidelines to their own needs or drafted their own versions.

These companies say their goal is not to limit the use of generative AI, but to ensure employees feel empowered enough to use it responsibly.

“It would be a mistake not to harness the power of this technology. It offers enormous opportunities in terms of productivity and functionality,” said Niraj Bhargava, founder of an Ottawa-based artificial intelligence management software company.

“But on the other hand, if you use it without guardrails in place, there are many dangers. There is the existential risk to our planet, but there are also practical risks related to bias, fairness or privacy issues.”

Striking a balance between the two is key, but Bhargava said there is no one-size-fits-all approach that will work for every organization.

If you’re a hospital, you may have a very different answer to the question of what’s acceptable than a private-sector technology company, he said.

Even so, some principles appear in most guidelines.

One is not to feed customer or proprietary data into AI tools, because companies cannot guarantee that such information will remain private. It could even be used to train the models underlying those AI systems.

Another is to treat everything AI spits out as potentially false.

AI systems are still far from foolproof. Tech startup Vectara estimates that AI chatbots invent information at least three per cent of the time, and in some cases as much as 27 per cent of the time.

In February, a B.C. lawyer had to admit in court that she had cited two family-law cases fabricated by ChatGPT.

A California researcher similarly uncovered accuracy issues in April 2023, when a chatbot asked to compile a list of lawyers facing sexual harassment allegations misspelled one scholar’s name and cited a non-existent Washington Post article.

Organizations creating AI policies also often raise issues of transparency.

“If you wouldn’t pass off something someone else wrote as your own work, why would you pass off something ChatGPT wrote as your own work?” asked Elissa Strome, executive director of the pan-Canadian artificial intelligence strategy at the Canadian Institute for Advanced Research (CIFAR).

Many argue that people should be informed when AI is used to analyze data, write text, or create images, video or audio, but other cases are less clear-cut.

“We can use ChatGPT 17 times a day, but do we have to write an email disclosing it every time? Probably not if you’re asking about an itinerary and whether to go by plane or by car, something like that,” Bhargava said.

“There are many innocent cases where I don’t think I need to disclose that I used ChatGPT.”

It’s unclear how many companies have reviewed all the ways workers can use AI and communicated what is and isn’t acceptable.

An April 2023 survey of 4,515 Canadians by consulting firm KPMG found that 70 per cent of those who use generative AI said their employer had a policy on the technology.

However, an October 2023 study by software company Salesforce and YouGov found that 41 per cent of the 1,020 Canadians surveyed said their company had no policy on the use of generative AI at work. About 13 per cent had only “loosely defined” guidelines.

At Sun Life Financial Inc., employees are prohibited from using third-party AI tools at work because the company cannot guarantee that customer, financial or health information will remain private within those systems.

However, the insurer allows employees to use internal versions of Anthropic’s AI chatbot Claude and GitHub Copilot, an AI-powered programming assistant, because the company was able to ensure both adhere to its data privacy policies, said Laura Money, its chief information officer.

So far, she has seen employees use these tools to write code and create notes and scripts for videos.

To get more people experimenting, the insurer is encouraging employees to sign up for a free online course run by CIFAR that teaches the principles of artificial intelligence and its effects.

Money said of the move: “You want your employees to become familiar with these technologies because they can increase productivity, improve the quality of their work life and make work a little more enjoyable.”

About 400 employees have signed up since the course was offered to them a few weeks ago.

Even with the course on offer, Sun Life knows its approach to the technology will have to evolve because artificial intelligence is developing so quickly.

Plum and CIFAR, for example, introduced their policies before generative AI tools that go beyond text to create images, audio or video became widely available.

“It wasn’t the same level of image generation as we have today,” MacGregor said of the summer of 2023, when Plum launched its AI policy during a hackathon that asked employees to write poems about the business using ChatGPT or to experiment with how it could solve some of the company’s problems.

“An annual review is definitely needed.”

Bhargava agrees, but said many organizations still have catching up to do because they have no policy at all.

“It’s time to do it,” he said.

“If the genie is out of the bottle, we can’t think, ‘Maybe we’ll do it next year.’”

This report by The Canadian Press was first published May 6.