At this point, most of us have heard of ChatGPT – the artificial intelligence program that is increasing in popularity and quickly making its way into workplaces, whether welcomed or not.

Many employers are wondering what this technology means for their companies. After all, ChatGPT’s recent success has highlighted the broad use of artificial intelligence, including the potential to enhance workers’ productivity or replace some workers altogether. As ChatGPT becomes more sophisticated, its applications will continue to broaden.

That being said, employers should be mindful of the legal considerations developing alongside this technology. The implementation of artificial intelligence, and specifically ChatGPT, may trigger numerous issues relating to employment, labour, human rights, privacy, and ethics.

First of all, what exactly is ChatGPT?

ChatGPT is a chatbot powered by artificial intelligence. It is available to the public and can be used in many ways. Anyone can type a request into the ChatGPT website and receive an answer. Requests range from writing an email on a particular subject to summarizing data, explaining information, or solving a specific problem. It’s not surprising that employees may turn to ChatGPT for help with time-intensive aspects of their work.

What should employers be aware of?

1. Inaccurate Information

ChatGPT is still developing and only has access to a limited scope of information. There is no guarantee that ChatGPT-generated responses will be accurate. Sometimes the risk of inaccuracy is small (for instance, when writing a generic email). However, if employees rely on ChatGPT to provide information to customers, draft key contracts, or carry out research on behalf of the business, serious issues can easily arise.

2. Privacy and Confidentiality

Canadian employers are subject to a variety of privacy and confidentiality obligations under the law. Complying with these obligations may become challenging when relying on ChatGPT, given the privacy risks it poses.

ChatGPT works by generating responses to the prompts users type in. Those prompts can contain almost anything, and they may be retained and used to train future versions of the technology. In other words, if employees input confidential, personal, or sensitive information into ChatGPT, that information may not be secure and could potentially appear in future outputs, including those provided to the public.

3. Ethics

Ethical obligations will always trump convenience.

Generally, an employer hires an employee for their skills and experience and pays them to complete daily work tasks. An employee’s use of ChatGPT may erode trust between employer and employee, especially if the employer has not been informed of it.

Similarly, representing work completed by ChatGPT as a company’s own work product to clients is concerning. The client is paying for the company’s expertise, not the use of a chatbot. If a client discovers that a company is passing off a chatbot’s work as its own services without disclosure, this could lead to client dissatisfaction and even a complete breakdown of the client relationship.

Takeaway for Employers

ChatGPT and other artificial intelligence programs are still in their early stages, and their role in the workplace remains unclear. However, as ChatGPT grows in popularity, employers need to stay informed and consider the impact of such tools on their businesses, both the benefits and the risks. While artificial intelligence can certainly be beneficial in the workplace, it also creates significant legal risks.

Employers may consider getting ahead of the game by creating a workplace policy on artificial intelligence that outlines expectations for employees and defines how these tools should be used, if at all.

If you have questions on your rights as an employer regarding employee use of ChatGPT in this developing area, be sure to reach out to us. Our Advisors are happy to assist!