"

AI and the Importance of Ethics in the Workplace

Ashley McDonald

Image Description: Comic showing a paralegal using AI to draft a letter, verifying accuracy and case law, confirming details with a lawyer, and getting approval for the final version. Image Credit: Stasiewich, A. (2025). Generated with ChatGPT and edited in Canva.

Learning Objectives

Upon successful completion of this chapter, learners will be able to:

  1. Recognize what generative AI is and how it is used in legal practice.
  2. Identify common risks associated with using generative AI.
  3. Review steps to assess the accuracy and validity of AI-generated legal content.
  4. Outline key ethical responsibilities of legal professionals when using AI.

Introduction

This chapter will focus on the uses of AI in the workplace and the importance of ethics when using AI tools in the legal field. AI is a new and evolving technology, and it is important to know what generative AI is, what its potential uses are in the law office, and what the Law Society rules are when it comes to using AI for tasks. It is also important to understand the ethics involved in using generative AI and the consequences it can have when misused.

What is Generative AI?

Generative Artificial Intelligence, or Generative AI, is defined by the Law Society of Alberta (2024) as a “type of artificial intelligence that can create new content, such as text, images, or audio, based on some input or prompt” (p. 1). Simply type in precisely what you would like the AI to do, and it will create that content within minutes of receiving your instructions. One way to think of how generative AI technology works is in three stages: the input, the analysis, and the output (Queen’s University, 2025).

This technology can drastically improve efficiency, completing in minutes a task that might otherwise have taken hours or even days, depending on its complexity. AI can be used for many things in the law office, including drafting documents or correspondence and researching a legal issue. The Law Society of Alberta suggests that generative AI can be used to proofread documents, summarize complex documents, and generate ideas for writing a document.

Although AI can improve the overall efficiency of work, it also has drawbacks and risks that must be considered. The University of Alberta (2025) lists some of the issues to consider when using generative AI, such as:

  • Hallucinations: This term describes when AI produces information that does not exist, such as false cases that appear to relate to the information it was given. It is important to fact-check the cases the AI produces to ensure that they are real and not hallucinated.
  • Errors: When using AI, it is important to check for errors, as it can produce inaccurate or biased information depending on the information it was given.
  • Tasks: Although generative AI is seen as a tool that can do virtually anything, it does have its limits. There may be some tasks that the AI cannot complete, depending on how advanced the program you are using is.

It is important to keep these points in mind as you explore the uses of AI and how it can benefit you and the work you conduct. Generative AI can simplify tasks and increase efficiency. However, it is also important to double-check the information it produces, as there are risks and limitations to what it can do.


How To Use Generative AI 

Using Prompts with Generative AI

AI is a new and evolving technology that is constantly changing as it advances. It may seem daunting to figure out how to use AI in a law office; however, the University of Alberta (2025) has created a quick guide that can assist in writing the right prompts to generate the information needed.

The University of Alberta (2025) recommends using the 5P method, in which each of the 5Ps represents a category to address when writing a prompt for generative AI. The 5Ps are as follows:

  • Prime – give the AI the background and context it needs for the task.
  • Prompt – state clearly and specifically what you want the AI to do.
  • Persona – tell the AI what role or perspective to adopt.
  • Product – describe the output you want, including its format and length.
  • Polish – review the result and refine your prompt until the output meets your needs.

Using the 5P method suggested by the University of Alberta can drastically improve the results you get from AI, whatever type of information you need to obtain. For more information on using AI in legal research, see the University of Alberta’s guide.
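
For example, a prompt built on the 5P structure might look something like this (an illustrative sketch only; the scenario and details are invented):

“You are an experienced legal assistant at an Alberta law firm (Persona). I am preparing a demand letter for a landlord client whose tenant is three months behind on rent (Prime). Draft a polite but firm letter demanding payment within 14 days (Prompt). Provide it as plain text, no longer than one page, with placeholders for the parties’ names and the amounts owing (Product).” After reviewing the draft, you would refine the prompt and ask for revisions until the tone and content are right (Polish).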

Assessing Generative AI Information

After obtaining information from the AI, it is important to learn how to assess it properly to validate its accuracy. As discussed previously, AI has limitations and can produce hallucinations, errors in data, and potentially biased information. So, although it can be a helpful tool when conducting tasks, it is important to know how to assess the accuracy and validity of the information the AI is giving you. Queen’s University (2025) suggests five steps for evaluating AI-generated content. The steps are as follows:

Assess System Limitations:

This step analyzes how the AI system works, which in turn helps us understand how it generates the data it gives us. One way to look at this is by using the 3-layer model (Queen’s University, 2025). The three layers are the input, analysis, and output layers. The input layer consists of the data on which the system operates; consider how current that data is and whether it contains any human bias. The second layer is the analysis layer, where the system interprets the data it is given; again, consider whether there is any bias and whether the system tells you how it analyzes the data. The last layer is the output layer, the result the system gives you after you enter your initial data. Consider once more whether there is any bias in the output, whether anything is missing from the results, and whether you can train the AI to help it learn how to perform tasks.
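
Applied to a legal research query, for instance: the input layer would be your question together with the sources the tool was trained on or can access; the analysis layer is how the tool matches your question against patterns in that data; and the output layer is the summary it returns, which may reflect gaps or bias in the underlying data, such as missing recent decisions.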

Overall, assessing the AI system itself is important to ensure that it will give you accurate, current, and unbiased data. The 3-layer model suggested by Queen’s University is an excellent way to assess the system you are using.

Verify Information:

As mentioned previously, AI tools can generate hallucinations, which are inaccurate information or cases that do not exist (Queen’s University, 2025). It is important to double-check the sources and information the AI gives you to verify that the cases and information exist. One way to do this is to find the information in a legal database such as CanLII or Westlaw, review the source to ensure the reference is accurate, and then note the source to refer to in your reference guide.
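
For example (using an invented citation): suppose the AI cites Smith v Jones, 2023 ABKB 101, for a point about limitation periods. Search CanLII or Westlaw for the neutral citation 2023 ABKB 101. If no decision appears, or the decision that appears involves different parties or a different issue, treat the citation as a likely hallucination and discard it. If the case does exist, read it to confirm that it actually supports the point before noting it as a source.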

Compare with Other Sources:

Although generative AI is helpful, it is important to conduct separate research to find your own information. You should not rely solely on AI content, as sources behind a paywall, such as those found in commercial legal databases (aside from CanLII), will not be available to a free AI platform. With this step, be sure to search legal databases such as CanLII and Westlaw to find additional information that can aid your legal research.

Update for Currency:

Do not assume that the information given to you by generative AI is current (Queen’s University, 2025). Not all systems rely on the most current information, so it is important to do additional research to confirm that the data is up to date and to check whether a newer law applies instead. Some AI platforms may show how current their information is, but this is not always the case, so additional research may be necessary.

Take Steps to Address Bias:

The last step that Queen’s University (2025) suggests is to check whether any bias is evident in the data you are given, both in your initial prompt and in the output, as bias can affect the outcome of the results.

 

These steps suggested by Queen’s University can help in the process of using AI tools. AI can help streamline legal research, but it is important to ensure that the information you are given is valid, accurate, and free from error and bias; following the above steps will aid in that process. AI is an effective tool, but it should not be the only source of information used when conducting research. It is still important to fact-check the information you are given and to do your own additional research.


AI Databases 

Many different generative AI platforms can be used, and each has advantages and disadvantages depending on the task you need completed. Before using a platform for a task, ensure that it is reliable and can complete the task accurately. In addition, it is important to note that some platforms require a subscription. If your law firm has a subscription to a generative AI platform, taking advantage of this tool is beneficial; if it does not, numerous platforms are free for public use. Below is a list of different platforms that can be used.

Free Generative AI Platforms:

  • ChatGPT
  • Scribe – free version, with paid versions that offer more features
  • Microsoft Copilot – free version, with paid versions available through a Microsoft 365 subscription
  • Canadian Legal Information Institute (CanLII)

Subscription Required Platforms:

  • Lexis+ AI (LexisNexis)
  • Alexi
  • CoCounsel
  • Westlaw Edge (AI-Enhanced)

Inequality in Access to AI Platforms 

Although there are many AI platforms to choose from, there is much discussion about inequality in access to them. Legal-specific AI services come at a cost, and smaller firms often cannot afford these extra expenses (James, 2025). Whichever platform a firm uses, whether free or subscription-based, the service should be researched and tested by the firm to ensure that it produces accurate information. Free platforms such as ChatGPT and Grok may not be as accurate as platforms that require a fee, nor provide as comprehensive and intelligent a response to a query (James, 2025). This inevitably creates a divide not only between “Big Law” firms and smaller boutique operations, but also for self-represented individuals who cannot afford a lawyer and are seeking alternative means of legal advice. Although outside the scope of this chapter, Rachel Paterson wrote a compelling article in 2024 discussing the digital divide that the rise of AI tools in the legal profession creates for self-represented litigants. Paterson states that the rise in AI “is causing the digital divide to widen at an unprecedented rate, which has concerning implications for people who self-represent, and access to justice as a whole.”


AI and the Importance of Ethics 

Ethics in law refers to a “code of conduct and professional responsibilities that govern lawyers’ behaviour” (Pathfinder Editorial, 2024). Legal ethics is a crucial aspect of the Canadian justice system and the guiding force that all legal professionals must follow. It ensures that lawyers prioritize their clients’ interests, uphold justice fairly and impartially, establish trust through client confidentiality, act with diligence and competence, and promote the public’s trust in the Canadian justice system (Pathfinder Editorial, 2024). The Law Society of Alberta has a Code of Conduct to which all lawyers practising in Alberta must adhere; this Code enforces the above-mentioned points that legal professionals must follow. More information is available in the Law Society of Alberta’s Code of Conduct.

Knowing the importance of ethics in the legal field and the standards that legal professionals hold helps us understand how ethics relates to AI use in the legal field. There have been cases where the improper use of AI has led to errors during the court process that could have been avoided if the AI tool had been used correctly.

In one Ontario case, a judge discovered that a submission filed with the court had been written with AI (Draaisma, 2025). The lawyer relied on court materials produced by AI that cited fictitious cases and irrelevant cases that did not relate to the criminal law in question (Draaisma, 2025). The judge ordered the lawyer to prepare and resubmit new submissions, pinpoint the information being referred to in the case citations, and check the citations to ensure they link to CanLII (Draaisma, 2025). Another case, Ko v Li, 2025 ONSC 2766, also dealt with the repercussions of improperly using AI. The lawyer in this case submitted a factum with hyperlinks to CanLII that were inaccurate, did not link to relevant cases, and displayed error messages. In addition, when asked to provide opposing counsel with citations and copies of the cases referenced in the factum, the lawyer could not provide the documentation. A third case, Halton (Regional Municipality) v Rewa et al., 2025 ONSC 4503, involved a party submitting a factum that relied on fictitious cases. The party admitted to using AI to generate the factum, which had produced hallucinated cases, explaining that they had been unable to hire counsel to assist in the matter. The Justice stated that all parties, whether self-represented or not, must verify the cases they rely on before submitting them to the court. The hearing was adjourned with costs against the party, who was given additional time to amend their submissions.

In the first two cases noted above, the lawyers were not seriously punished for their negligence in failing to fact-check their research, but the party in the third case, who was self-represented, was ordered to pay costs (a surprising outcome, given that, to date, many of the lawyers misusing AI have not received the same). The Code of Conduct guides lawyers’ behaviour to ensure that justice is upheld and that they prioritize their clients’ interests (Pathfinder Editorial, 2024). The fear is that if a lawyer does not check the work that AI has generated and it contains serious errors or even hallucinated cases, this incorrect information could cause a miscarriage of justice (Draaisma, 2025). This could be detrimental to the client, who has entrusted their lawyer to represent them effectively, potentially paying thousands of dollars for accurate representation in their case. A lawyer who fails to uphold that duty and presents inaccurate information to the judge may not only cause a miscarriage of justice if the information is relied on, but also negatively impact their client’s life.

As stated in Ko v Li, lawyers have duties to prepare and review material competently, not to fabricate or miscite cases, to read cases before submitting them to the court, to review information produced by artificial intelligence, and not to mislead the court. Given the standards lawyers are held to, it is surprising that neither of these lawyers was sanctioned for their negligence while a self-represented party was. This begins a conversation about how such situations should be handled, and raises questions not only of ethics, but also of competence, supervision of staff, and the proper administration of justice. Furthermore, should judges be responsible for fact-checking and verifying all written materials submitted by lawyers, including checking for AI-generated content? Is this a reasonable use of public resources and an effective way to manage workload in the legal system? How AI use should be managed, and what consequences should exist for unethical use, is still being worked out by the courts. It will be interesting to see how this evolves over time, especially as AI tools become more advanced and capable of completing complex tasks with increasing speed.


AI Policies in the Workplace

With the use of AI rapidly increasing, it is important to ensure it is used responsibly in professional settings such as the legal field. One way to do this is to develop an AI policy within the law firm so that all employees know how to use AI appropriately (The Law Society of Alberta, 2024). If your firm does not already have a policy in place, the Law Society of Alberta has a reference guide on what information should be included. Within its guide, the Law Society of Alberta (2024) suggests including:

  • Examples of Permitted Use – Explain what an employee can use generative AI for, and give examples of what this may look like.
  • Examples of Prohibited Use – Explain what an employee should not use the generative AI tool for.
  • Liability – Explain what happens if an employee uses AI inappropriately.
  • Disclosure – Be open with clients when generative AI is used in their case.
  • Monitoring – Include a clause stating that the firm can monitor how AI is used.
  • Confidentiality and Privacy – Include a clause stating the importance of the confidentiality and privacy of client information, ensuring that sensitive information is not breached.

These are just a few examples of clauses that can be included in a law firm’s policy on the use of AI. Each firm will have its own policies that its employees must follow. What should remain the same, however, is that AI is used responsibly and the client’s interests are kept at the forefront, so that clients can rely on the services they are paying for. The Law Society of Alberta offers several suggestions for implementing an AI policy in a law firm.
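
For illustration, a permitted-use and confidentiality clause in such a policy might read something like the following (a hypothetical example drafted for this chapter, not language prescribed by the Law Society of Alberta):

“Staff may use the firm’s approved generative AI tools to prepare first drafts of routine correspondence, summarize publicly available documents, and brainstorm research terms. All AI-generated output must be verified for accuracy by the responsible staff member, and any AI-assisted work product must be approved by the supervising lawyer before it leaves the firm. Client names and other confidential information must never be entered into an AI tool that the firm has not approved.”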


Summary

AI is a rapidly evolving technology whose use is increasing. When used effectively, it can drastically improve work output, but it must be used appropriately to ensure that accurate information is produced. The client’s interests should be the number one priority, so that clients receive the quality service they are paying for; they are entrusting the law firm to help them during a time of need. Generative AI can be a great tool to improve the work done on a client’s case, but it must be used appropriately to protect the client’s interests.


Reflection Questions

  1. Can you think of a situation in which relying too heavily on generative AI might create risk for a legal professional or their client? What would be the consequences?
  2. After reading about the cases where lawyers submitted hallucinated case law, how do you think legal regulators should respond to such incidents?
  3. How can a law firm balance the efficiency offered by AI with its obligation to act ethically and competently for clients?
  4. As a paralegal or legal assistant, how might your use of generative AI tools, both responsibly and irresponsibly, impact the lawyer you work with? Consider how your actions could influence the lawyer’s reputation, ethical obligations, or the outcome of a client’s matter.

Glossary Terms

  1. Generative AI – A type of artificial intelligence that creates new content such as text, images, or audio in response to a prompt.
  2. Hallucination – The phenomenon where AI generates fabricated or inaccurate information, such as fictitious case law.
  3. 5P Method – A structured approach to prompt engineering developed by the University of Alberta, consisting of Prime, Prompt, Persona, Product, and Polish.
  4. Legal Ethics – A legal professional’s duty to act competently, ethically, and in the client’s best interest.
  5. Bias in AI – Systematic error or skewed results generated by AI due to the nature of the data it was trained on.
  6. AI Policy – A workplace policy that outlines permitted and prohibited uses of AI, along with clauses on privacy, liability, and disclosure.

Short Answer Questions

  1. What is one major benefit of using generative AI in a law office?
  2. What are two common risks of relying on generative AI for legal research?
  3. What does the ‘Prime’ step in the 5P method involve?
  4. Why is it important to verify AI-generated information using legal databases like CanLII or Westlaw?

References

 


License


Legal Research for Alberta Legal Professionals Copyright © 2025 by MacEwan University Library is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.