{"id":233,"date":"2025-08-10T21:55:33","date_gmt":"2025-08-11T01:55:33","guid":{"rendered":"https:\/\/openbooks.macewan.ca\/legalresearchforparalegalsandlegalassistants\/?post_type=chapter&#038;p=233"},"modified":"2026-02-26T18:16:58","modified_gmt":"2026-02-26T23:16:58","slug":"ai-and-the-importance-of-ethics-in-the-workplace","status":"publish","type":"chapter","link":"https:\/\/openbooks.macewan.ca\/legalresearchforparalegalsandlegalassistants\/chapter\/ai-and-the-importance-of-ethics-in-the-workplace\/","title":{"raw":"AI and the Importance of Ethics in the Workplace","rendered":"AI and the Importance of Ethics in the Workplace"},"content":{"raw":"<div class=\"mceTemp\"><\/div>\r\n[caption id=\"attachment_311\" align=\"aligncenter\" width=\"1024\"]<img class=\"wp-image-311 size-large\" src=\"https:\/\/openbooks.macewan.ca\/legalresearchforparalegalsandlegalassistants\/wp-content\/uploads\/sites\/31\/2025\/08\/test1-1024x378.png\" alt=\"Comic showing a paralegal using AI to draft a letter, verifying accuracy and case law, confirming details with a lawyer, and getting approval for the final version.\" width=\"1024\" height=\"378\" \/> Figure 5.1 An AI-generated image of a paralegal using artificial intelligence ethically. [Image description \u2013 <a href=\"https:\/\/openbooks.macewan.ca\/legalresearchforparalegalsandlegalassistants\/back-matter\/appendix-a-image-descriptions\/#fig5.1\">See Appendix A Figure 5.1<\/a>]. Source: Adapted by Stasiewich, A. from OpenAI. (2025). <em>ChatGPT-5<\/em> [Image generator]. 
Retrieved August 13, 2025, from <a href=\"https:\/\/chat.openai.com\">https:\/\/chat.openai.com<\/a>[\/caption]\r\n<div class=\"textbox textbox--learning-objectives\"><header class=\"textbox__header\">\r\n<p class=\"textbox__title\"><a id=\"retfig5.1\"><\/a>Learning Objectives<\/p>\r\n\r\n<\/header>\r\n<div class=\"textbox__content\">\r\n\r\nUpon successful completion of this chapter, learners will be able to do the following:\r\n<ul>\r\n \t<li style=\"font-weight: 400\">Recognize what generative AI is and how it is used in legal practice.<\/li>\r\n \t<li style=\"font-weight: 400\">Identify common risks associated with using generative AI.<\/li>\r\n \t<li style=\"font-weight: 400\">Review steps to assess the accuracy and validity of AI-generated legal content.<\/li>\r\n \t<li style=\"font-weight: 400\">Outline the key ethical responsibilities of legal professionals when using AI.<\/li>\r\n<\/ul>\r\n<\/div>\r\n<\/div>\r\n<h2>Introduction<\/h2>\r\nThis chapter will focus on the uses of AI in the workplace and the importance of ethics when using AI tools in the legal field. AI is a new and evolving technology, and it is important to know what generative AI is, what its potential uses are in the law office, and what the law society rules are when it comes to using AI for tasks. It is also important to understand the ethics involved in using generative AI and the consequences it can have when misused.\r\n<h2>What is Generative AI?<\/h2>\r\nGenerative artificial intelligence, or [pb_glossary id=\"315\"]Generative AI[\/pb_glossary], is defined by the Law Society of Alberta (2024) as being a \u201ctype of artificial intelligence that can create new content, such as text, images, or audio, based on some input or prompt\u201d (p. 1). Simply type in precisely what you would like the AI tool to do, and it will create that content within minutes of receiving your instructions. 
One way to think of how generative AI technology works is that there are three stages: the input, analysis, and output (Queen\u2019s University, 2025).\r\n\r\nThis technology can drastically improve the overall efficiency of a task, as it can take minutes to complete a task that might otherwise have taken hours or even days, depending on the complexity. AI can be used for many things in the law office, including drafting documents or correspondence and researching a legal issue. The Law Society of Alberta (2024) suggests that generative AI can be used to proofread documents, summarize complex documents, and generate ideas for writing a document.\r\n\r\nAlthough AI can improve the overall efficiency of work, it also has drawbacks and risks that must be considered. The University of Alberta (2025) lists some of the issues to consider when using generative AI, such as:\r\n<ul>\r\n \t<li style=\"font-weight: 400\"><strong>[pb_glossary id=\"321\"]Hallucinations[\/pb_glossary]<\/strong>: This term describes when AI produces information that does not exist. For example, the AI tool may create false cases that appear to relate to the information it was given. It is important to fact-check the cases that the AI tool produces to ensure that they are real and not AI hallucinations.<\/li>\r\n \t<li style=\"font-weight: 400\"><strong>Errors<\/strong>: When using AI, it is important to check for errors, as it can produce inaccurate or biased information depending on the information it was given.<\/li>\r\n \t<li style=\"font-weight: 400\"><strong>Tasks<\/strong>: Although generative AI is seen as a tool that can do virtually anything, it does have its limits. There may be some tasks that AI cannot complete, depending on the advancement of the program you are using.<\/li>\r\n<\/ul>\r\nIt is important to keep these points in mind as you explore the uses of AI and how it can benefit you and the work you conduct. Generative AI can simplify tasks and increase efficiency. 
However, it is also important to double-check the information it produces, as there are risks and limitations to what it can do.\r\n\r\n<hr \/>\r\n\r\n<h2>How To Use Generative AI<\/h2>\r\n<h3>Using Prompts with Generative AI<\/h3>\r\nAI is a new technology that is constantly evolving as it advances. It may seem daunting to figure out how to use AI in a law office; however, the University of Alberta (2025) has created a quick guide that can assist in crafting the right prompts to generate the information needed.\r\n\r\nThe University of Alberta (2025) recommends using the [pb_glossary id=\"323\"]5P method[\/pb_glossary]. The 5Ps each represent a category that can be used when writing a prompt for the generative AI to produce information. The 5Ps are as follows:\r\n\r\n&nbsp;\r\n\r\n<details><summary><strong>Prime<\/strong><\/summary>The first \u201cP\u201d in the method stands for \u201cPrime.\u201d This is best used when giving generative AI a task by providing specific information such as facts, jurisdiction, and a time frame. This can best be used to find cases or legislation related to your fact scenario. See the following examples:\r\n<ol>\r\n \t<li>The client was involved in an armed robbery of a bank in Edmonton\u2026<\/li>\r\n \t<li>The <em>Residential Tenancies Act<\/em> in Alberta and how it relates to tenants wanting to end their lease early due to\u2026<\/li>\r\n<\/ol>\r\n&nbsp;\r\n\r\n<\/details><details><summary><strong>Prompt<\/strong><\/summary>The second \u201cP\u201d stands for \u201cPrompt\u201d and is best used when you want to give the AI tool specific instructions on a task. 
This is useful when you want to do one of the following:\r\n<ol>\r\n \t<li>Compare and contrast different legislation.<\/li>\r\n \t<li>Summarize case decisions.<\/li>\r\n \t<li>Draft legal documents, such as a memorandum.<\/li>\r\n<\/ol>\r\n&nbsp;\r\n\r\n<\/details><details><summary><strong>Persona<\/strong><\/summary>The third \u201cP\u201d stands for \u201cPersona\u201d and should be used when you want the AI tool to take on a role in generating information from the prompt that you give it. Examples of this include:\r\n<ol>\r\n \t<li>Write from the perspective of opposing counsel in a matter involving\u2026<\/li>\r\n \t<li>Generate an email to opposing counsel in a formal tone requesting\u2026<\/li>\r\n<\/ol>\r\n&nbsp;\r\n\r\n<\/details><details><summary><strong>Product<\/strong><\/summary>The fourth \u201cP\u201d stands for \u201cProduct\u201d and should be used when you want a specific output for the AI tool to produce. See the following examples:\r\n<ol>\r\n \t<li>A letter to a client explaining\u2026<\/li>\r\n \t<li>A 1,000-word summary\u2026<\/li>\r\n<\/ol>\r\n&nbsp;\r\n\r\n<\/details><details><summary><strong>Polish<\/strong><\/summary>The last \u201cP\u201d stands for \u201cPolish\u201d and is used when you want to refine the information that the AI tool has given you. This can be useful in almost all of the prompts you give the AI tool, as it narrows down the results to produce the desired end product. Examples of this include:\r\n<ol>\r\n \t<li>Contrast this information with\u2026<\/li>\r\n \t<li>This only includes information relevant in Ontario\u2019s jurisdiction. Is there relevant information in Alberta\u2019s jurisdiction?\u2026<\/li>\r\n<\/ol>\r\n<\/details>Using the 5P method suggested by the University of Alberta can drastically improve how you use AI, depending on what type of information you need to obtain. 
For more information on using AI in legal research, follow the University of Alberta\u2019s <a href=\"https:\/\/guides.library.ualberta.ca\/generative_ai_legal_research\/recommendations\" target=\"_blank\" rel=\"noopener\"><em>Generative AI for Legal Research<\/em><\/a> guide.\r\n<h3>Assessing Generative AI Information<\/h3>\r\nAfter obtaining the information provided by the AI tool, it is important to learn how to assess that information and properly validate its accuracy. As discussed previously, AI has limitations and can produce hallucinations, errors in data, and potentially biased information. So, although it can be a helpful tool when conducting tasks, it is important to know how to assess the accuracy and validity of the information AI is giving you. Queen\u2019s University (2025) suggests five steps for assessing AI-generated content. The steps are as follows:\r\n<h4>Assess System Limitations<\/h4>\r\nThis step analyzes how the AI system works, which in turn helps us understand how it generates the data it gives us. One way to look at this is by using the three-layer model (Queen\u2019s University, 2025). The three layers comprise the input, analysis, and output layers. The input layer consists of the data on which the system operates. Consider how current the system\u2019s data is and whether there is any human bias in the data. The second layer is the analysis layer, where the system interprets the data given. Again, be sure to consider if there is any bias in the data and whether the system tells you how it analyzes the data. The last layer is the output layer, which is the result that the system gives you after inputting the initial data. 
Consider again if there is any [pb_glossary id=\"325\"]bias in AI[\/pb_glossary] data, if anything is missing from the results, and if you can train the AI tool to help it learn how to perform tasks.\r\n\r\nOverall, assessing the AI system itself is important to ensure that it will give you accurate, current, and non-biased data. The three-layer model suggested by Queen\u2019s University is an excellent way to assess the system you are using.\r\n<h4>Verify Information<\/h4>\r\nAs mentioned previously, AI can generate hallucinations, which are inaccurate information or cases that do not exist (Queen\u2019s University, 2025). It is important to double-check the sources and information the AI tool gives you to verify that the cases and information exist. One way to do this is to find the information in a legal database such as CanLII or Westlaw, review the source to ensure the reference is accurate, and then note the source to refer to in your reference guide.\r\n<h4>Compare with Other Sources<\/h4>\r\nAlthough generative AI is helpful, it is important to conduct separate research to find your own information; you should not rely solely on the AI content, as sources behind a paywall, such as those found in legal databases (aside from CanLII), may not be accessible to the AI tool. With this step, be sure to search legal databases such as CanLII and Westlaw to find additional information that can help verify your legal research.\r\n<h4>Update for Currency<\/h4>\r\nDo not assume that the information given to you by the generative AI is current (Queen\u2019s University, 2025). Not all systems will rely on the most current information, so it is important to do additional research to confirm that the data is current and to check whether there is a new law or information that should be referenced instead. 
Some AI platforms may show how current the information is, but this is not always the case, so more research may be necessary.\r\n<h4>Take Steps to Address Bias<\/h4>\r\nThe last step that Queen\u2019s University (2025) suggests is to check whether any bias is evident in the data you are given. It is important to check for bias in your initial prompt as well as within the output\u2014both can affect the outcome of the results.\r\n\r\nThese steps suggested by Queen\u2019s University can help in the process of using AI tools. AI can help streamline legal research, but it is important to ensure the information you are given is valid, accurate, and free from error and bias. Following the above steps will aid in that process to ensure the information provided by AI is accurate. AI is an effective tool, but it should not be the only source of information used when conducting research. It is still important to fact-check the information you are given and to do your own additional research.\r\n\r\n<hr \/>\r\n\r\n<h2>AI Databases<\/h2>\r\nMany different generative AI platforms can be used, and each has advantages and disadvantages depending on the task you need completed. Before using a platform to conduct your task, ensure that it is reliable and can accurately complete the task you need. In addition, some platforms require a subscription. If your law firm has a subscription to a generative AI platform, it would be beneficial to take advantage of this tool. However, if your law firm has no subscription, there are numerous platforms available free for public use. 
Below is a list of different platforms that can be used.\r\n<table class=\"grid\" style=\"border-collapse: collapse;width: 100%;height: 75px\" border=\"0\" cellpadding=\"10\"><caption>Table 5.1 Generative AI platforms<\/caption>\r\n<thead>\r\n<tr class=\"shaded\" style=\"height: 15px\">\r\n<td style=\"width: 50%;height: 15px\"><strong>Free Platforms\u00a0<\/strong><\/td>\r\n<td style=\"width: 50%;height: 15px\"><strong>Subscription Required Platforms<\/strong><\/td>\r\n<\/tr>\r\n<\/thead>\r\n<tbody>\r\n<tr style=\"height: 15px\">\r\n<td style=\"width: 50%;height: 15px\"><a href=\"https:\/\/chatgpt.com\/\">ChatGPT<\/a><\/td>\r\n<td style=\"width: 50%;height: 15px\"><a href=\"https:\/\/www.lexisnexis.com\/en-us\/products\/lexis-plus-ai.page\" target=\"_blank\" rel=\"noopener\">Lexis+ AI | Legal Research Platform + AI Assistant | LexisNexis<\/a><\/td>\r\n<\/tr>\r\n<tr style=\"height: 15px\">\r\n<td style=\"width: 50%;height: 15px\"><a href=\"https:\/\/scribehow.com\/\">Scribe <\/a>\u00a0- Free version, but also paid versions with more features.<\/td>\r\n<td style=\"width: 50%;height: 15px\"><a href=\"https:\/\/www.alexi.com\/\" target=\"_blank\" rel=\"noopener\">Alexi\u00a0<\/a><\/td>\r\n<\/tr>\r\n<tr style=\"height: 15px\">\r\n<td style=\"width: 50%;height: 15px\"><a href=\"https:\/\/copilot.microsoft.com\/\">Microsoft CoPilot<\/a> - Free version, but also paid versions (Microsoft 365 subscription)<\/td>\r\n<td style=\"width: 50%;height: 15px\"><a href=\"https:\/\/www.thomsonreuters.com\/en\/cocounsel\" target=\"_blank\" rel=\"noopener\">CoCounsel<\/a><\/td>\r\n<\/tr>\r\n<tr style=\"height: 15px\">\r\n<td style=\"width: 50%;height: 15px\"><a href=\"https:\/\/www.canlii.org\/?origLang=en\">Canadian Legal Information Institute | CanLII<\/a><\/td>\r\n<td style=\"width: 50%;height: 15px\"><a href=\"https:\/\/legal.thomsonreuters.com\/en\/products\/westlaw-edge\" target=\"_blank\" rel=\"noopener\">Westlaw Edge 
(AI-Enhanced)<\/a><\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<div align=\"left\">\r\n\r\n<hr \/>\r\n\r\n<h2>Inequality in Access to AI Platforms<\/h2>\r\nAlthough there are many AI platforms to choose from, there is much discussion about inequality in access to them. Accessing legal-specific AI services comes at a cost, and smaller firms most often cannot afford these extra expenses (James, 2025). Whichever platform a firm uses, whether free or subscription-based, the service should be researched and tested by the firm to ensure that it produces accurate information. Although there are free platforms, such as ChatGPT and Grok, these tools may not be as accurate or as comprehensive in response to your query as some of the platforms that require a fee (James, 2025). This inevitably creates a divide not only between \u201cBig Law\u201d firms and smaller boutique operations but also for self-represented individuals who cannot afford a lawyer and are seeking alternative means for legal advice.\r\n\r\nAlthough outside the scope of this chapter, <a href=\"https:\/\/representingyourselfcanada.com\/a-digital-wolf-in-sheeps-clothing-how-artificial-intelligence-is-set-to-worsen-the-access-to-justice-crisis\/\" target=\"_blank\" rel=\"noopener\">Rachel Paterson (2024) wrote a compelling article<\/a> discussing the digital divide that the rise of AI tools in the legal profession creates for self-represented litigants. Paterson (2024) states that the rise in AI \u201cis causing the digital divide to widen at an unprecedented rate, which has concerning implications for people who self-represent, and access to justice as a whole.\u201d\r\n\r\n<hr \/>\r\n\r\n<h2>AI and the Importance of Ethics<\/h2>\r\nEthics in law refers to a \u201ccode of conduct and professional responsibilities that govern lawyers\u2019 behaviour\u201d (Pathfinder Editorial, 2024). 
[pb_glossary id=\"324\"]Legal ethics[\/pb_glossary] is the most crucial aspect of the Canadian justice system; it is the guiding force that all legal professionals should follow. Legal ethics ensures that lawyers prioritize their clients\u2019 interests, uphold justice fairly and impartially, establish trust through client confidentiality, act with diligence and competence, and promote the public\u2019s trust in the Canadian justice system (Pathfinder Editorial, 2024). The Law Society of Alberta has a Code of Conduct to which all lawyers practicing in Alberta must adhere. <a href=\"https:\/\/www.lawsociety.ab.ca\/regulation\/act-code-and-rules\/\" target=\"_blank\" rel=\"noopener\">The Law Society of Alberta\u2019s Code of Conduct<\/a> reinforces the above-mentioned points that legal professionals must follow to uphold ethics in the legal field.\r\n\r\nKnowing the importance of ethics and the standards that legal professionals must uphold helps us understand how ethics relates to AI use in the legal field. There have been cases where the improper use of AI has led to errors during the court process that could have been avoided if the AI tool had been used correctly.\r\n\r\nIn one Ontario case, a judge discovered that a submission filed with the court had been written with AI (Draaisma, 2025). The lawyer relied on court materials produced by AI and cited fictitious and irrelevant cases that did not relate to the criminal law in question (Draaisma, 2025). The judge ordered the lawyer to prepare new submissions and resubmit them to the court, and instructed the lawyer to pinpoint the information referred to in each case citation and to verify the citations, ensuring they link to CanLII (Draaisma, 2025). 
Another case, <a href=\"https:\/\/www.canlii.org\/en\/on\/onsc\/doc\/2025\/2025onsc2766\/2025onsc2766.html\" target=\"_blank\" rel=\"noopener\"><span style=\"text-decoration: underline\"><em>Ko v Li,<\/em> <\/span>2025 ONSC 2766<\/a>, also dealt with the repercussions of improperly using AI. The lawyer in this case submitted a factum with hyperlinks to CanLII that were inaccurate, did not link to relevant cases, and displayed error messages. In addition, when asked to provide opposing counsel with citations and copies of the cases referenced in the factum, the lawyer could not provide the documentation. A third case, <a href=\"https:\/\/www.canlii.org\/en\/on\/onsc\/doc\/2025\/2025onsc4503\/2025onsc4503.html?resultId=65c98a40d50347ce8285998577c10b52&amp;searchId=2025-08-06T18:49:46:094\/462e95cde31941fdb46b0199b373ef42&amp;searchUrlHash=AAAAAQAraGFsdG9uIChyZWdpb25hbCBtdW5pY2lwYWxpdHkpIHYgcmV3YSBldCBhbAAAAAAB\" target=\"_blank\" rel=\"noopener\"><em>Halton (Regional Municipality) v Rewa et al<\/em>., 2025 ONSC 4503<\/a>, involved a party submitting a factum that relied on fictitious cases. It was suggested that the party used AI to generate the factum and that the AI had produced hallucinated cases; the party admitted to using AI because they were unable to hire counsel to assist in the matter. The Justice stated that all parties, whether self-represented or not, must verify the cases they are relying upon before submitting them to the court. The hearing was adjourned with costs against the party, giving them additional time to amend their submissions.\r\n\r\nIn the first two cases noted above, the lawyers were not seriously punished for their negligence in not fact-checking their research, but the party in the third case, who was self-represented, was ordered to pay costs (a surprising outcome, given that, to date, many of the lawyers misusing AI have not received the same). 
The Code of Conduct guides lawyers\u2019 behaviour to ensure that justice is upheld and that they prioritize their clients\u2019 interests (Pathfinder Editorial, 2024). The fear is that if a lawyer does not check the work that AI has generated and it contains serious errors or even hallucinated cases, this incorrect information could cause a miscarriage of justice (Draaisma, 2025). This could be detrimental to the client, who entrusted their lawyer to represent them effectively on their legal issues and who may be paying thousands of dollars for that representation. A lawyer who does not uphold their duty to their client and presents inaccurate information to the judge could not only create a miscarriage of justice but also negatively impact their client\u2019s life. As stated in the case of <em>Ko v Li<\/em>, lawyers must prepare and review material competently, must not fabricate or miscite cases, must read cases before submitting them to the court, must review information produced by artificial intelligence, and must not mislead the court. Given that lawyers are held to these standards, it is surprising that neither of these lawyers was sanctioned for their negligence, but the self-represented party was.\r\n\r\nThis begins a conversation about how these types of situations should be handled and raises questions not only about ethics but also about competence, the supervision of staff, and the proper administration of justice. Furthermore, should judges be responsible for fact-checking and verifying all written materials submitted by lawyers, including checking for AI-generated content? Is this a reasonable use of public resources and an effective way to manage workload in the legal system? The answer to how AI use should be managed, and what consequences should exist for unethical use, is still being worked out by the courts. 
It will be interesting to see how these situations evolve over time, especially as AI tools become more advanced and capable of completing complex tasks with increasing speed.\r\n\r\n<\/div>\r\n\r\n<hr \/>\r\n\r\n<h2>AI Policies in the Workplace<\/h2>\r\nAs AI becomes more prevalent in the workplace, it must be used responsibly in a professional legal setting. One way to ensure that all employees are using AI responsibly is to develop an [pb_glossary id=\"326\"]AI policy[\/pb_glossary] within the law firm so that everyone knows how to use AI appropriately (Law Society of Alberta, 2024). If your firm does not already have a policy in place, the <a href=\"https:\/\/documents.lawsociety.ab.ca\/wp-content\/uploads\/2024\/09\/23082737\/How-to-Use-Gen-AI-in-Your-Legal-Practice.pdf\" target=\"_blank\" rel=\"noopener\">Law Society of Alberta has a reference guide<\/a> on what information should be included in an AI policy. The Law Society of Alberta\u2019s (2024) guide suggests including the following:\r\n<ul>\r\n \t<li style=\"font-weight: 400\">Examples of Permitted Use \u2013 Explain what an employee can use generative AI for and give examples of what this may look like.<\/li>\r\n \t<li style=\"font-weight: 400\">Examples of Prohibited Use \u2013 Explain what an employee should not use the generative AI tool for.<\/li>\r\n \t<li style=\"font-weight: 400\">Liability \u2013 Explain what happens if an employee uses AI inappropriately.<\/li>\r\n \t<li style=\"font-weight: 400\">Disclosure \u2013 Be open with clients when generative AI is used in their case.<\/li>\r\n \t<li style=\"font-weight: 400\">Monitoring \u2013 Have a clause stating that the firm can monitor how AI is used.<\/li>\r\n \t<li style=\"font-weight: 400\">Confidentiality and Privacy \u2013 Have a clause that states the importance of confidentiality and privacy of client information. 
Ensure that sensitive information does not get breached.<\/li>\r\n<\/ul>\r\nThese are just a few examples of clauses that can be implemented in a law firm\u2019s policy on the use of AI. Each firm will have its own policies that its employees must follow. What should remain the same, however, is that the AI tool is used responsibly, the client\u2019s interests are kept at the forefront, and clients can rely on the services they are paying for.\r\n\r\n<hr \/>\r\n\r\n<h2>Summary<\/h2>\r\nAI is a rapidly evolving technology that is increasing in use. When used effectively, it can drastically improve work output, but it must be used appropriately to ensure that the information produced is accurate. Clients\u2019 interests should always be the number one priority; they are entrusting the law firm to help them with their legal matters during a time of need, and they should receive the quality service they are paying for. Generative AI can be a great tool for improving the work done on a client\u2019s case, but it must be used appropriately to protect the client\u2019s interests.\r\n\r\n<hr \/>\r\n\r\n<h2>Reflection Questions<\/h2>\r\n<ol>\r\n \t<li style=\"font-weight: 400\">Can you think of a situation in which relying too heavily on generative AI might create risk for a legal professional or their client? What would be the consequences?<\/li>\r\n \t<li style=\"font-weight: 400\">After reading about the cases where lawyers submitted hallucinated case law, how do you think legal regulators should respond to such incidents?<\/li>\r\n \t<li style=\"font-weight: 400\">How can a law firm balance the efficiency offered by AI with its obligation to act ethically and competently for clients?<\/li>\r\n \t<li style=\"font-weight: 400\">As a paralegal or legal assistant, how might your use of generative AI tools, both responsibly and irresponsibly, impact the lawyer you work with? 
Consider how your actions could influence the lawyer\u2019s reputation, ethical obligations, or the outcome of a client\u2019s matter.<\/li>\r\n<\/ol>\r\n\r\n<hr \/>\r\n\r\n<h2>Short-Answer Questions<\/h2>\r\n<ol>\r\n \t<li style=\"font-weight: 400\">What is one major benefit of using generative AI in a law office?<\/li>\r\n \t<li style=\"font-weight: 400\">What are two common risks of relying on generative AI for legal research?<\/li>\r\n \t<li style=\"font-weight: 400\">What does the \u201cPrime\u201d step in the 5P method involve?<\/li>\r\n \t<li style=\"font-weight: 400\">Why is it important to verify AI-generated information using legal databases like CanLII or Westlaw?<\/li>\r\n<\/ol>\r\n<details><summary><strong>Model Answers<\/strong><\/summary>\r\n<ol>\r\n \t<li><strong>What is one major benefit of using generative AI in a law office?<\/strong>\r\n<ul>\r\n \t<li><em>It can improve efficiency by completing tasks such as drafting documents or summarizing legal information much faster than manual methods.<\/em><\/li>\r\n<\/ul>\r\n<\/li>\r\n \t<li><strong>What are two common risks of relying on generative AI for legal research?<\/strong>\r\n<ul>\r\n \t<li><em>Hallucinations (false information) and errors or biases in the output.<\/em><\/li>\r\n<\/ul>\r\n<\/li>\r\n \t<li><strong>What does the 'Prime' step in the 5P method involve?<\/strong>\r\n<ul>\r\n \t<li><em>Providing specific facts, jurisdiction, and time frames to help the AI tool generate relevant results.<\/em><\/li>\r\n<\/ul>\r\n<\/li>\r\n \t<li><strong>Why is it important to verify AI-generated information using legal databases like CanLII or Westlaw?<\/strong>\r\n<ul>\r\n \t<li><em>Because AI may fabricate or misquote cases, and legal databases provide accurate, verifiable sources.<\/em><\/li>\r\n<\/ul>\r\n<\/li>\r\n<\/ol>\r\n<\/details>\r\n\r\n<hr \/>\r\n\r\n<details><summary><strong>References<\/strong><\/summary>\r\n<p class=\"hanging-indent\">Canadian Professional Path. (2024, February 14). 
<em>Legal ethics: The pillars of Canadian law<\/em>.\u00a0<a href=\"https:\/\/canadianprofessionpath.com\/legal-ethics\/\" target=\"_blank\" rel=\"noopener\">https:\/\/canadianprofessionpath.com\/legal-ethics\/<\/a><\/p>\r\nDraaisma, M. (2025, June 3). <em>An Ontario judge tossed a court filing seemingly written with A.I. Experts say it\u2019s a growing problem<\/em>. CBC. <a href=\"https:\/\/www.cbc.ca\/news\/canada\/toronto\/artificial-intelligence-legal-research-problems-1.7550358\" target=\"_blank\" rel=\"noopener\">https:\/\/www.cbc.ca\/news\/canada\/toronto\/artificial-intelligence-legal-research-problems-1.7550358<\/a>\r\n<p class=\"hanging-indent\">Halton (Regional Municipality) v Rewa et al., 2025 ONSC 4503. <a href=\"https:\/\/canlii.ca\/t\/kdn3w\" target=\"_blank\" rel=\"noopener\">https:\/\/canlii.ca\/t\/kdn3w<\/a><\/p>\r\n<p class=\"hanging-indent\">James, H. (2025, May 2). <em>Access and equity in legal AI: What about the rest?<\/em> 9twelve Legal Research + Consulting. <a href=\"https:\/\/www.9twelve.ca\/blog\/blog-post-title-two-3k9bx\" target=\"_blank\" rel=\"noopener\">https:\/\/www.9twelve.ca\/blog\/blog-post-title-two-3k9bx<\/a><\/p>\r\n<p class=\"hanging-indent\"><em>Ko v Li<\/em>, 2025 ONSC 2766. <a href=\"https:\/\/canlii.ca\/t\/kbzwn\" target=\"_blank\" rel=\"noopener\">https:\/\/canlii.ca\/t\/kbzwn<\/a><\/p>\r\n<p class=\"hanging-indent\">Law Society of Alberta. (2024, August 30). <em>How to use generative AI in your legal practice: A guide for lawyers and staff<\/em>. Retrieved June 26, 2025, from <a href=\"https:\/\/documents.lawsociety.ab.ca\/wp-content\/uploads\/2024\/09\/23082737\/How-to-Use-Gen-AI-in-Your-Legal-Practice.pdf\" target=\"_blank\" rel=\"noopener\">https:\/\/documents.lawsociety.ab.ca\/wp-content\/uploads\/2024\/09\/23082737\/How-to-Use-Gen-AI-in-Your-Legal-Practice.pdf<\/a><\/p>\r\n<p class=\"hanging-indent\">Paterson, R. (2024, June 5). 
<em>A digital wolf in sheep\u2019s clothing: How artificial intelligence is set to worsen the access to justice crisis<\/em>. National Self-Represented Litigants Project. <a href=\"https:\/\/representingyourselfcanada.com\/a-digital-wolf-in-sheeps-clothing-how-artificial-intelligence-is-set-to-worsen-the-access-to-justice-crisis\/\" target=\"_blank\" rel=\"noopener\">https:\/\/representingyourselfcanada.com\/a-digital-wolf-in-sheeps-clothing-how-artificial-intelligence-is-set-to-worsen-the-access-to-justice-crisis\/<\/a><\/p>\r\n<p class=\"hanging-indent\">Queen\u2019s University. (2025, July 11). <em>Critically assessing AI-generated content<\/em>. <a href=\"https:\/\/guides.library.queensu.ca\/legal-research-manual\/critically-assessing-generative-artificial-intelligence\" target=\"_blank\" rel=\"noopener\">https:\/\/guides.library.queensu.ca\/legal-research-manual\/critically-assessing-generative-artificial-intelligence<\/a><\/p>\r\n<p class=\"hanging-indent\">Queen\u2019s University. (2025, July 11). <em>How does GenAI work?<\/em> <a href=\"https:\/\/guides.library.queensu.ca\/legal-research-manual\/introduction-generative-artificial-intelligence\" target=\"_blank\" rel=\"noopener\">https:\/\/guides.library.queensu.ca\/legal-research-manual\/introduction-generative-artificial-intelligence<\/a><\/p>\r\n<p class=\"hanging-indent\">University of Alberta. (2025, February 11). <em>Generative AI for legal research<\/em>. 
<a href=\"https:\/\/guides.library.ualberta.ca\/generative_ai_legal_research\" target=\"_blank\" rel=\"noopener\">https:\/\/guides.library.ualberta.ca\/generative_ai_legal_research<\/a><\/p>\r\n\r\n<\/details>","rendered":"<div class=\"mceTemp\"><\/div>\n<figure id=\"attachment_311\" aria-describedby=\"caption-attachment-311\" style=\"width: 1024px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-311 size-large\" src=\"https:\/\/openbooks.macewan.ca\/legalresearchforparalegalsandlegalassistants\/wp-content\/uploads\/sites\/31\/2025\/08\/test1-1024x378.png\" alt=\"Comic showing a paralegal using AI to draft a letter, verifying accuracy and case law, confirming details with a lawyer, and getting approval for the final version.\" width=\"1024\" height=\"378\" srcset=\"https:\/\/openbooks.macewan.ca\/legalresearchforparalegalsandlegalassistants\/wp-content\/uploads\/sites\/31\/2025\/08\/test1-1024x378.png 1024w, https:\/\/openbooks.macewan.ca\/legalresearchforparalegalsandlegalassistants\/wp-content\/uploads\/sites\/31\/2025\/08\/test1-300x111.png 300w, https:\/\/openbooks.macewan.ca\/legalresearchforparalegalsandlegalassistants\/wp-content\/uploads\/sites\/31\/2025\/08\/test1-768x284.png 768w, https:\/\/openbooks.macewan.ca\/legalresearchforparalegalsandlegalassistants\/wp-content\/uploads\/sites\/31\/2025\/08\/test1-65x24.png 65w, https:\/\/openbooks.macewan.ca\/legalresearchforparalegalsandlegalassistants\/wp-content\/uploads\/sites\/31\/2025\/08\/test1-225x83.png 225w, https:\/\/openbooks.macewan.ca\/legalresearchforparalegalsandlegalassistants\/wp-content\/uploads\/sites\/31\/2025\/08\/test1-350x129.png 350w, https:\/\/openbooks.macewan.ca\/legalresearchforparalegalsandlegalassistants\/wp-content\/uploads\/sites\/31\/2025\/08\/test1.png 1277w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption id=\"caption-attachment-311\" class=\"wp-caption-text\">Figure 5.1 An AI-generated image of a paralegal using 
artificial intelligence ethically. [Image description \u2013 <a href=\"https:\/\/openbooks.macewan.ca\/legalresearchforparalegalsandlegalassistants\/back-matter\/appendix-a-image-descriptions\/#fig5.1\">See Appendix A Figure 5.1<\/a>]. Source: Adapted by Stasiewich, A. from OpenAI. (2025). <em>ChatGPT-5<\/em> [Image generator]. Retrieved August 13, 2025, from <a href=\"https:\/\/chat.openai.com\">https:\/\/chat.openai.com<\/a><\/figcaption><\/figure>\n<div class=\"textbox textbox--learning-objectives\">\n<header class=\"textbox__header\">\n<p class=\"textbox__title\"><a id=\"retfig5.1\"><\/a>Learning Objectives<\/p>\n<\/header>\n<div class=\"textbox__content\">\n<p>Upon successful completion of this chapter, learners will be able to do the following:<\/p>\n<ul>\n<li style=\"font-weight: 400\">Recognize what generative AI is and how it is used in legal practice.<\/li>\n<li style=\"font-weight: 400\">Identify common risks associated with using generative AI.<\/li>\n<li style=\"font-weight: 400\">Review steps to assess the accuracy and validity of AI-generated legal content.<\/li>\n<li style=\"font-weight: 400\">Outline the key ethical responsibilities of legal professionals when using AI.<\/li>\n<\/ul>\n<\/div>\n<\/div>\n<h2>Introduction<\/h2>\n<p>This chapter will focus on the uses of AI in the workplace and the importance of ethics when using AI tools in the legal field. AI is a new and evolving technology, and it is important to know what generative AI is, what its potential uses are in the law office, and what the law society rules are when it comes to using AI for tasks. 
It is also important to understand the ethics involved in using generative AI and the consequences it can have when misused.<\/p>\n<h2>What is Generative AI?<\/h2>\n<p>Generative artificial intelligence, or <a class=\"glossary-term\" aria-haspopup=\"dialog\" aria-describedby=\"definition\" href=\"#term_233_315\">Generative AI<\/a>, is defined by the Law Society of Alberta (2024) as a \u201ctype of artificial intelligence that can create new content, such as text, images, or audio, based on some input or prompt\u201d (p. 1). Simply type in precisely what you would like the AI tool to do, and it will create that content within minutes of receiving your instructions. One way to think of how generative AI technology works is that there are three stages: the input, analysis, and output (Queen\u2019s University, 2025).<\/p>\n<p>This technology can drastically improve efficiency: a task that might otherwise take hours or even days, depending on its complexity, can be completed in minutes. AI can be used for many things in the law office, including drafting documents or correspondence and researching a legal issue. The Law Society of Alberta (2024) suggests that generative AI can be used to proofread documents, summarize complex documents, and generate ideas for writing a document.<\/p>\n<p>Although AI can improve the overall efficiency of work, it also has drawbacks and risks that must be considered. The University of Alberta (2025) lists some of the issues to consider when using generative AI, including the following:<\/p>\n<ul>\n<li style=\"font-weight: 400\"><strong><a class=\"glossary-term\" aria-haspopup=\"dialog\" aria-describedby=\"definition\" href=\"#term_233_321\">Hallucinations<\/a><\/strong>: This term describes when AI produces information that does not exist, such as false cases fabricated to fit the information it was given. 
It is important to fact-check the cases that the AI tool produces to ensure that they are real and not AI hallucinations.<\/li>\n<li style=\"font-weight: 400\"><strong>Errors<\/strong>: When using AI, it is important to check for errors, as it can produce inaccurate or biased information depending on the information it was given.<\/li>\n<li style=\"font-weight: 400\"><strong>Tasks<\/strong>: Although generative AI is seen as a tool that can do virtually anything, it does have its limits. There may be some tasks that AI cannot complete, depending on the advancement of the program you are using.<\/li>\n<\/ul>\n<p>It is important to keep these points in mind as you explore the uses of AI and how it can benefit you and the work you conduct. Generative AI can simplify tasks and increase efficiency. However, it is also important to double-check the information it produces, as there are risks and limitations to what it can do.<\/p>\n<hr \/>\n<h2>How to Use Generative AI<\/h2>\n<h3>Using Prompts with Generative AI<\/h3>\n<p>AI is a new and evolving technology that is constantly changing as it advances. It may seem daunting to figure out how to use AI in a law office; however, the University of Alberta (2025) has created a quick guide to assist in crafting the right prompts to generate the information needed.<\/p>\n<p>The University of Alberta (2025) recommends using the <a class=\"glossary-term\" aria-haspopup=\"dialog\" aria-describedby=\"definition\" href=\"#term_233_323\">5P method<\/a>. The 5Ps each represent a category that can be used when writing a prompt for the generative AI to produce information. The 5Ps are as follows:<\/p>\n<p>&nbsp;<\/p>\n<details>\n<summary><strong>Prime<\/strong><\/summary>\n<p>The first \u201cP\u201d in the method stands for \u201cPrime.\u201d This is best used when giving generative AI a task by providing specific information such as facts, jurisdiction, and a time frame. 
This can best be used to find cases or legislation related to your fact scenario. See the following examples:<\/p>\n<ol>\n<li>The client was involved in an armed robbery of a bank in Edmonton\u2026<\/li>\n<li>The <em>Residential Tenancies Act<\/em> in Alberta and how it relates to tenants wanting to end their lease early due to\u2026<\/li>\n<\/ol>\n<p>&nbsp;<\/p>\n<\/details>\n<details>\n<summary><strong>Prompt<\/strong><\/summary>\n<p>The second \u201cP\u201d stands for \u201cPrompt\u201d and is best used when you want to give the AI tool specific instructions on a task. This is useful for tasks such as the following:<\/p>\n<ol>\n<li>Compare and contrast different legislation.<\/li>\n<li>Summarize case decisions.<\/li>\n<li>Draft legal documents, such as a memorandum.<\/li>\n<\/ol>\n<p>&nbsp;<\/p>\n<\/details>\n<details>\n<summary><strong>Persona<\/strong><\/summary>\n<p>The third \u201cP\u201d stands for \u201cPersona\u201d and should be used when you want the AI tool to take on a role in generating information from the prompt that you give it. Examples of this include:<\/p>\n<ol>\n<li>Write from the perspective of opposing counsel in a matter involving\u2026<\/li>\n<li>Generate an email to opposing counsel in a formal tone requesting\u2026<\/li>\n<\/ol>\n<p>&nbsp;<\/p>\n<\/details>\n<details>\n<summary><strong>Product<\/strong><\/summary>\n<p>The fourth \u201cP\u201d stands for \u201cProduct\u201d and should be used when you want a specific output from the AI tool. See the following examples:<\/p>\n<ol>\n<li>A letter to a client explaining\u2026<\/li>\n<li>A 1,000-word summary\u2026<\/li>\n<\/ol>\n<p>&nbsp;<\/p>\n<\/details>\n<details>\n<summary><strong>Polish<\/strong><\/summary>\n<p>The last \u201cP\u201d stands for \u201cPolish\u201d and is used when you want to refine the information that the AI tool has given you. 
This can be useful in almost all of the prompts you give the AI tool, as it narrows down the results to produce the desired end product. Examples of this include:<\/p>\n<ol>\n<li>Contrast this information with\u2026<\/li>\n<li>This only includes information relevant in Ontario\u2019s jurisdiction. Is there relevant information in Alberta\u2019s jurisdiction?\u2026<\/li>\n<\/ol>\n<\/details>\n<p>Using the 5P method suggested by the University of Alberta can significantly improve how you use AI, whatever type of information you need to obtain. For more information on using AI in legal research, follow the University of Alberta\u2019s <a href=\"https:\/\/guides.library.ualberta.ca\/generative_ai_legal_research\/recommendations\" target=\"_blank\" rel=\"noopener\"><em>Generative AI for Legal Research<\/em><\/a> guide.<\/p>\n<h3>Assessing Generative AI Information<\/h3>\n<p>After obtaining the information provided by the AI tool, it is important to learn how to assess it and properly validate its accuracy. As discussed previously, AI has limitations and can produce hallucinations, errors in data, and potentially biased information. So, although it can be a helpful tool, it is important to know how to assess the accuracy and validity of the information AI is giving you. Queen\u2019s University (2025) suggests five steps for assessing AI-generated content. The steps are as follows:<\/p>\n<h4>Assess System Limitations<\/h4>\n<p>This step analyzes how the AI system works, which in turn helps us understand how it generates the data it gives us. One way to look at this is by using the three-layer model (Queen&#8217;s University, 2025). The three layers are the input, analysis, and output layers. The input layer consists of the data on which the system operates. Consider how current the data the system uses is and whether there is any human bias in the data. 
The second layer is the analysis layer, where the system interprets the data given. Again, be sure to consider whether there is any bias in the data and whether the system tells you how it analyzes the data. The last layer is the output layer, which is the result that the system gives you after you input the initial data. Consider again whether there is any <a class=\"glossary-term\" aria-haspopup=\"dialog\" aria-describedby=\"definition\" href=\"#term_233_325\">bias in AI<\/a> data, whether anything is missing from the results, and whether you can train the AI tool to help it learn how to perform tasks.<\/p>\n<p>Overall, assessing the AI system itself is important to ensure that it will give you accurate, current, and non-biased data. The three-layer model suggested by Queen\u2019s University is an excellent way to assess the system you are using.<\/p>\n<h4>Verify Information<\/h4>\n<p>As mentioned previously, AI can generate hallucinations, which are inaccurate information or cases that do not exist (Queen\u2019s University, 2025). It is important to double-check the sources and information the AI tool gives you to verify that the cases and information exist. One way to do this is to find the information in a legal database such as CanLII or Westlaw, review the source to ensure the reference is accurate, and then note the source to refer to in your reference guide.<\/p>\n<h4>Compare with Other Sources<\/h4>\n<p>Although generative AI is helpful, it is important to conduct separate research to find your own information. You should not rely solely on AI-generated content, as sources behind a paywall, such as those found in legal databases (aside from CanLII), will generally not be accessible to a free AI platform. 
With this step, be sure to research legal databases such as CanLII and Westlaw to find additional information that can support or verify your legal research.<\/p>\n<h4>Update for Currency<\/h4>\n<p>Do not assume that the information given to you by the generative AI is current (Queen\u2019s University, 2025). Not all systems rely on the most current information, so it is important to do additional research to confirm that the data is current and to check whether there is a newer law or more recent information that should be referenced instead. Some AI platforms may show how current the information is, but this is not always the case, so more research may be necessary.<\/p>\n<h4>Take Steps to Address Bias<\/h4>\n<p>The last step that Queen\u2019s University (2025) suggests is to check whether any bias is evident in the data you are given. It is important to check for bias in your initial prompt as well as within the output, as both can affect the results.<\/p>\n<p>These steps suggested by Queen\u2019s University can help in the process of using AI tools. AI can help streamline legal research, but it is important to ensure the information you are given is valid, accurate, and free from error and bias; following the above steps will aid in that process. AI is an effective tool, but it should not be the only source of information used when conducting research. It is still important to fact-check the information you are given and to do your own additional research.<\/p>\n<hr \/>\n<h2>AI Platforms<\/h2>\n<p>Many different generative AI platforms can be used, and each has advantages and disadvantages depending on the task you need completed. Before using a platform to conduct your task, ensure that it is reliable and can accurately complete the task you need. In addition, some platforms require a subscription. 
If your law firm subscribes to a generative AI platform, it would be beneficial to take advantage of that tool. If it does not, there are numerous platforms available free for public use. Below is a list of different platforms that can be used.<\/p>\n<table class=\"grid\" style=\"border-collapse: collapse;width: 100%;height: 75px\" cellpadding=\"10\">\n<caption>Table 5.1 Generative AI platforms<\/caption>\n<thead>\n<tr class=\"shaded\" style=\"height: 15px\">\n<td style=\"width: 50%;height: 15px\"><strong>Free Platforms<\/strong><\/td>\n<td style=\"width: 50%;height: 15px\"><strong>Subscription Required Platforms<\/strong><\/td>\n<\/tr>\n<\/thead>\n<tbody>\n<tr style=\"height: 15px\">\n<td style=\"width: 50%;height: 15px\"><a href=\"https:\/\/chatgpt.com\/\">ChatGPT<\/a><\/td>\n<td style=\"width: 50%;height: 15px\"><a href=\"https:\/\/www.lexisnexis.com\/en-us\/products\/lexis-plus-ai.page\" target=\"_blank\" rel=\"noopener\">Lexis+ AI | Legal Research Platform + AI Assistant | LexisNexis<\/a><\/td>\n<\/tr>\n<tr style=\"height: 15px\">\n<td style=\"width: 50%;height: 15px\"><a href=\"https:\/\/scribehow.com\/\">Scribe<\/a> &#8211; Free version, but also paid versions with more features.<\/td>\n<td style=\"width: 50%;height: 15px\"><a href=\"https:\/\/www.alexi.com\/\" target=\"_blank\" rel=\"noopener\">Alexi<\/a><\/td>\n<\/tr>\n<tr style=\"height: 15px\">\n<td style=\"width: 50%;height: 15px\"><a href=\"https:\/\/copilot.microsoft.com\/\">Microsoft CoPilot<\/a> &#8211; Free version, but also paid versions (Microsoft 365 subscription)<\/td>\n<td style=\"width: 50%;height: 15px\"><a href=\"https:\/\/www.thomsonreuters.com\/en\/cocounsel\" target=\"_blank\" rel=\"noopener\">CoCounsel<\/a><\/td>\n<\/tr>\n<tr style=\"height: 15px\">\n<td style=\"width: 50%;height: 15px\"><a href=\"https:\/\/www.canlii.org\/?origLang=en\">Canadian Legal Information Institute | CanLII<\/a><\/td>\n<td style=\"width: 50%;height: 
15px\"><a href=\"https:\/\/legal.thomsonreuters.com\/en\/products\/westlaw-edge\" target=\"_blank\" rel=\"noopener\">Westlaw Edge (AI-Enhanced)<\/a><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<div style=\"text-align: left;\">\n<hr \/>\n<h2>Inequality in Access to AI Platforms<\/h2>\n<p>Although there are many AI platforms to choose from, there is much discussion about inequality in access to them. Accessing legal-specific AI services comes at a cost, and smaller firms often cannot afford these extra expenses (James, 2025). Whichever platform a firm uses, whether it is free or requires a subscription, the service should be researched and tested by the firm to ensure that it produces accurate information. Although there are free platforms, such as ChatGPT and Grok, these tools may not be as accurate or as comprehensive in response to your query as some of the platforms that require a fee (James, 2025). This inevitably creates a divide not only between \u201cBig Law\u201d firms and smaller boutique operations but also for self-represented individuals who cannot afford a lawyer and are seeking alternative means for legal advice.<\/p>\n<p>Although outside the scope of this chapter, <a href=\"https:\/\/representingyourselfcanada.com\/a-digital-wolf-in-sheeps-clothing-how-artificial-intelligence-is-set-to-worsen-the-access-to-justice-crisis\/\" target=\"_blank\" rel=\"noopener\">Rachel Paterson (2024) wrote a compelling article<\/a> discussing the digital divide that the rise of AI tools in the legal profession creates for self-represented litigants. 
Paterson (2024) states that the rise in AI \u201cis causing the digital divide to widen at an unprecedented rate, which has concerning implications for people who self-represent, and access to justice as a whole.\u201d<\/p>\n<hr \/>\n<h2>AI and the Importance of Ethics<\/h2>\n<p>Ethics in law refers to a \u201ccode of conduct and professional responsibilities that govern lawyers\u2019 behaviour\u201d (Pathfinder Editorial, 2024). <a class=\"glossary-term\" aria-haspopup=\"dialog\" aria-describedby=\"definition\" href=\"#term_233_324\">Legal ethics <\/a>is a crucial pillar of the Canadian justice system and the guiding force that all legal professionals should follow. Legal ethics ensures that lawyers prioritize their clients\u2019 interests, uphold justice fairly and impartially, build trust through client confidentiality, act with diligence and competence, and promote public trust in the Canadian justice system (Pathfinder Editorial, 2024). The Law Society of Alberta has a Code of Conduct to which all lawyers practicing in Alberta must adhere. <a href=\"https:\/\/www.lawsociety.ab.ca\/regulation\/act-code-and-rules\/\" target=\"_blank\" rel=\"noopener\">The Law Society of Alberta\u2019s Code of Conduct<\/a> reinforces the points above that legal professionals must follow to uphold ethics in the legal field.<\/p>\n<p>Knowing the importance of ethics and the standards to which legal professionals are held helps us understand how ethics relates to AI use in the legal field. There have been cases where the improper use of AI has led to errors during the court process that could have been avoided if the AI tool had been used correctly.<\/p>\n<p>In one Ontario case, a judge discovered that a submission filed with the court had been written with AI (Draaisma, 2025). 
The lawyer relied on court materials produced by AI that cited fictitious cases and irrelevant cases unrelated to the criminal law in question (Draaisma, 2025). The judge ordered the lawyer to prepare new submissions and resubmit them to the court, instructing the lawyer to pinpoint the information referred to in each case citation and to check the citations, ensuring each has a link to CanLII (Draaisma, 2025). Another case, <a href=\"https:\/\/www.canlii.org\/en\/on\/onsc\/doc\/2025\/2025onsc2766\/2025onsc2766.html\" target=\"_blank\" rel=\"noopener\"><span style=\"text-decoration: underline\"><em>Ko v Li<\/em><\/span>, 2025 ONSC 2766<\/a>, also dealt with the repercussions of improperly using AI. The lawyer in this case submitted a factum with hyperlinks to CanLII that were inaccurate, did not link to relevant cases, or displayed error messages. In addition, when asked to provide opposing counsel with citations and copies of the cases referenced in the factum, the lawyer could not provide the documentation. A third case, <a href=\"https:\/\/www.canlii.org\/en\/on\/onsc\/doc\/2025\/2025onsc4503\/2025onsc4503.html?resultId=65c98a40d50347ce8285998577c10b52&amp;searchId=2025-08-06T18:49:46:094\/462e95cde31941fdb46b0199b373ef42&amp;searchUrlHash=AAAAAQAraGFsdG9uIChyZWdpb25hbCBtdW5pY2lwYWxpdHkpIHYgcmV3YSBldCBhbAAAAAAB\" target=\"_blank\" rel=\"noopener\"><em>Halton (Regional Municipality) v Rewa et al<\/em>., 2025 ONSC 4503<\/a>, involved a party submitting a factum that relied on fictitious cases. It was suggested that the party had used AI to generate the factum and that the tool had produced hallucinated cases; the party admitted to using AI, explaining that they had been unable to hire counsel to assist in the matter. The Justice stated that all parties, whether self-represented or not, must verify the cases they are relying upon before submitting them to the court. 
The hearing was adjourned with costs against the party, giving them additional time to amend their submissions.<\/p>\n<p>In the first two cases noted above, the lawyers were not seriously punished for their negligence in failing to fact-check their research, but the party in the third case, who was self-represented, was ordered to pay costs (a surprising outcome, given that, to date, many of the lawyers misusing AI have not received the same). The Code of Conduct guides lawyers\u2019 behaviour to ensure that justice is upheld and that they prioritize their clients\u2019 interests (Pathfinder Editorial, 2024). The fear is that if a lawyer does not check the work that AI has generated and it contains serious errors or even hallucinated cases, this incorrect information could cause a miscarriage of justice (Draaisma, 2025). This could be detrimental to the client, who entrusted their lawyer to represent them effectively on their legal issues and may have paid thousands of dollars for accurate representation. A lawyer who fails to uphold their duty to their client and presents inaccurate information to the judge could not only create a miscarriage of justice but also negatively impact their client\u2019s life. As stated in <em>Ko v Li<\/em>, lawyers must prepare and review materials competently, must not fabricate or miscite cases, must read cases before submitting them to the court, must review information produced by artificial intelligence, and must not mislead the court. Given that lawyers are held to these standards, it is surprising that neither of these lawyers was sanctioned for their negligence, but the self-represented party was.<\/p>\n<p>This begins a conversation about how these types of situations should be handled and raises questions not only about ethics but also about competence, supervision of staff, and the proper administration of justice. 
Furthermore, should judges be responsible for fact-checking and verifying all written materials submitted by lawyers, including checking for AI-generated content? Is this a reasonable use of public resources and an effective way to manage workload in the legal system? The answer to how AI use should be managed, and what consequences should exist for unethical use, is still being worked out by the courts. It will be interesting to see how these situations evolve over time, especially as AI tools become more advanced and capable of completing complex tasks with increasing speed.<\/p>\n<\/div>\n<hr \/>\n<h2>AI Policies in the Workplace<\/h2>\n<p>As AI becomes more prevalent in the workplace, it must be used responsibly in a professional legal setting. One way to ensure that all employees within a work setting are using AI responsibly is to develop an <a class=\"glossary-term\" aria-haspopup=\"dialog\" aria-describedby=\"definition\" href=\"#term_233_326\">AI policy<\/a> within the law firm so that everyone knows how to use AI appropriately (Law Society of Alberta, 2024). If your firm does not already have a policy in place, the <a href=\"https:\/\/documents.lawsociety.ab.ca\/wp-content\/uploads\/2024\/09\/23082737\/How-to-Use-Gen-AI-in-Your-Legal-Practice.pdf\" target=\"_blank\" rel=\"noopener\">Law Society of Alberta has a reference guide<\/a> on what information should be included in an AI policy. 
The Law Society of Alberta\u2019s (2024) guide suggests including the following:<\/p>\n<ul>\n<li style=\"font-weight: 400\">Examples of Permitted Use \u2013 Explain what an employee can use generative AI for and give examples of what this may look like.<\/li>\n<li style=\"font-weight: 400\">Examples of Prohibited Use \u2013 Explain what an employee should not use the generative AI tool for.<\/li>\n<li style=\"font-weight: 400\">Liability \u2013 Explain what happens if an employee uses AI inappropriately.<\/li>\n<li style=\"font-weight: 400\">Disclosure \u2013 Be open with clients when generative AI is used in their case.<\/li>\n<li style=\"font-weight: 400\">Monitoring \u2013 Have a clause stating that the firm can monitor how AI is used.<\/li>\n<li style=\"font-weight: 400\">Confidentiality and Privacy \u2013 Have a clause that states the importance of confidentiality and privacy of client information and ensures that sensitive information is not breached.<\/li>\n<\/ul>\n<p>These are just a few examples of clauses that can be implemented into a law firm&#8217;s policy on the use of AI. Each firm will have its own policies that its employees must follow. What should remain constant, however, is that the AI tool is used responsibly, clients\u2019 interests are kept at the forefront, and clients can rely on the services they are paying for.<\/p>\n<hr \/>\n<h2>Summary<\/h2>\n<p>AI is a rapidly evolving technology that is increasing in use. When used effectively, it can drastically improve work output, but it must be used appropriately to ensure that the information produced is accurate. Clients\u2019 interests should be the number one priority: they are entrusting the law firm to help them with their legal matters during a time of need, and they should receive the quality service they are paying for. 
Generative AI can be a great tool for improving work on a client\u2019s case, but it must be used appropriately to protect the client\u2019s interests.<\/p>\n<hr \/>\n<h2>Reflection Questions<\/h2>\n<ol>\n<li style=\"font-weight: 400\">Can you think of a situation in which relying too heavily on generative AI might create risk for a legal professional or their client? What would be the consequences?<\/li>\n<li style=\"font-weight: 400\">After reading about the cases where lawyers submitted hallucinated case law, how do you think legal regulators should respond to such incidents?<\/li>\n<li style=\"font-weight: 400\">How can a law firm balance the efficiency offered by AI with its obligation to act ethically and competently for clients?<\/li>\n<li style=\"font-weight: 400\">As a paralegal or legal assistant, how might your use of generative AI tools, both responsibly and irresponsibly, impact the lawyer you work with? Consider how your actions could influence the lawyer\u2019s reputation, ethical obligations, or the outcome of a client\u2019s matter.<\/li>\n<\/ol>\n<hr \/>\n<h2>Short-Answer Questions<\/h2>\n<ol>\n<li style=\"font-weight: 400\">What is one major benefit of using generative AI in a law office?<\/li>\n<li style=\"font-weight: 400\">What are two common risks of relying on generative AI for legal research?<\/li>\n<li style=\"font-weight: 400\">What does the \u201cPrime\u201d step in the 5P method involve?<\/li>\n<li style=\"font-weight: 400\">Why is it important to verify AI-generated information using legal databases like CanLII or Westlaw?<\/li>\n<\/ol>\n<details>\n<summary><strong>Model Answers<\/strong><\/summary>\n<ol>\n<li><strong>What is one major benefit of using generative AI in a law office?<\/strong>\n<ul>\n<li><em>It can improve efficiency by completing tasks such as drafting documents or summarizing legal information much faster than manual methods.<\/em><\/li>\n<\/ul>\n<\/li>\n<li><strong>What are two common risks of relying on 
generative AI for legal research?<\/strong>\n<ul>\n<li><em>Hallucinations (false information) and errors or biases in the output.<\/em><\/li>\n<\/ul>\n<\/li>\n<li><strong>What does the &#8216;Prime&#8217; step in the 5P method involve?<\/strong>\n<ul>\n<li><em>Providing specific facts, jurisdiction, and time frames to help the AI tool generate relevant results.<\/em><\/li>\n<\/ul>\n<\/li>\n<li><strong>Why is it important to verify AI-generated information using legal databases like CanLII or Westlaw?<\/strong>\n<ul>\n<li><em>Because AI may fabricate or misquote cases, and legal databases provide accurate, verifiable sources.<\/em><\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<\/details>\n<hr \/>\n<details>\n<summary><strong>References<\/strong><\/summary>\n<p class=\"hanging-indent\">Canadian Professional Path. (2024, February 14). <em>Legal ethics: The pillars of Canadian law<\/em>.\u00a0<a href=\"https:\/\/canadianprofessionpath.com\/legal-ethics\/\" target=\"_blank\" rel=\"noopener\">https:\/\/canadianprofessionpath.com\/legal-ethics\/<\/a><\/p>\n<p>Draaisma, M. (2025, June 3). <em>An Ontario judge tossed a court filing seemingly written with A.I. Experts say it\u2019s a growing problem<\/em>. CBC. <a href=\"https:\/\/www.cbc.ca\/news\/canada\/toronto\/artificial-intelligence-legal-research-problems-1.7550358\" target=\"_blank\" rel=\"noopener\">https:\/\/www.cbc.ca\/news\/canada\/toronto\/artificial-intelligence-legal-research-problems-1.7550358<\/a><\/p>\n<p class=\"hanging-indent\">Halton (Regional Municipality) v Rewa et al., 2025 ONSC 4503. <a href=\"https:\/\/canlii.ca\/t\/kdn3w\" target=\"_blank\" rel=\"noopener\">https:\/\/canlii.ca\/t\/kdn3w<\/a><\/p>\n<p class=\"hanging-indent\">James, H. (2025, May 2). <em>Access and equity in legal AI: What about the rest?<\/em> 9twelve Legal Research + Consulting. 
<a href=\"https:\/\/www.9twelve.ca\/blog\/blog-post-title-two-3k9bx\" target=\"_blank\" rel=\"noopener\">https:\/\/www.9twelve.ca\/blog\/blog-post-title-two-3k9bx<\/a><\/p>\n<p class=\"hanging-indent\"><em>Ko v Li<\/em>, 2025 ONSC 2766. <a href=\"https:\/\/canlii.ca\/t\/kbzwn\" target=\"_blank\" rel=\"noopener\">https:\/\/canlii.ca\/t\/kbzwn<\/a><\/p>\n<p class=\"hanging-indent\">Law Society of Alberta. (2024, August 30). <em>How to use generative AI in your legal practice: A guide for lawyers and staff<\/em>. Retrieved June 26, 2025, from <a href=\"https:\/\/documents.lawsociety.ab.ca\/wp-content\/uploads\/2024\/09\/23082737\/How-to-Use-Gen-AI-in-Your-Legal-Practice.pdf\" target=\"_blank\" rel=\"noopener\">https:\/\/documents.lawsociety.ab.ca\/wp-content\/uploads\/2024\/09\/23082737\/How-to-Use-Gen-AI-in-Your-Legal-Practice.pdf<\/a><\/p>\n<p class=\"hanging-indent\">Paterson, R. (2024, June 5). <em>A digital wolf in sheep\u2019s clothing: How artificial intelligence is set to worsen the access to justice crisis<\/em>. National Self-Represented Litigants Project. <a href=\"https:\/\/representingyourselfcanada.com\/a-digital-wolf-in-sheeps-clothing-how-artificial-intelligence-is-set-to-worsen-the-access-to-justice-crisis\/\" target=\"_blank\" rel=\"noopener\">https:\/\/representingyourselfcanada.com\/a-digital-wolf-in-sheeps-clothing-how-artificial-intelligence-is-set-to-worsen-the-access-to-justice-crisis\/<\/a><\/p>\n<p class=\"hanging-indent\">Queen\u2019s University. (2025, July 11). <em>Critically assessing AI-generated content<\/em>. <a href=\"https:\/\/guides.library.queensu.ca\/legal-research-manual\/critically-assessing-generative-artificial-intelligence\" target=\"_blank\" rel=\"noopener\">https:\/\/guides.library.queensu.ca\/legal-research-manual\/critically-assessing-generative-artificial-intelligence<\/a><\/p>\n<p class=\"hanging-indent\">Queen\u2019s University. (2025, July 11). 
<em>How does GenAI work?<\/em> <a href=\"https:\/\/guides.library.queensu.ca\/legal-research-manual\/introduction-generative-artificial-intelligence\" target=\"_blank\" rel=\"noopener\">https:\/\/guides.library.queensu.ca\/legal-research-manual\/introduction-generative-artificial-intelligence<\/a><\/p>\n<p class=\"hanging-indent\">University of Alberta. (2025, February 11). <em>Generative AI for legal research<\/em>. <a href=\"https:\/\/guides.library.ualberta.ca\/generative_ai_legal_research\" target=\"_blank\" rel=\"noopener\">https:\/\/guides.library.ualberta.ca\/generative_ai_legal_research<\/a><\/p>\n<\/details>\n<div class=\"glossary\"><span class=\"screen-reader-text\" id=\"definition\">definition<\/span><template id=\"term_233_315\"><div class=\"glossary__definition\" role=\"dialog\" data-id=\"term_233_315\"><div tabindex=\"-1\"><p>A type of artificial intelligence that creates new content such as text, images, or audio in response to a prompt.<\/p>\n<\/div><button><span aria-hidden=\"true\">&times;<\/span><span class=\"screen-reader-text\">Close definition<\/span><\/button><\/div><\/template><template id=\"term_233_321\"><div class=\"glossary__definition\" role=\"dialog\" data-id=\"term_233_321\"><div tabindex=\"-1\"><p>The phenomenon where AI generates fabricated or inaccurate information, such as fictitious case law.<\/p>\n<\/div><button><span aria-hidden=\"true\">&times;<\/span><span class=\"screen-reader-text\">Close definition<\/span><\/button><\/div><\/template><template id=\"term_233_323\"><div class=\"glossary__definition\" role=\"dialog\" data-id=\"term_233_323\"><div tabindex=\"-1\"><p>A structured approach to prompt engineering developed by the University of Alberta, consisting of Prime, Prompt, Persona, Product, and Polish.<\/p>\n<\/div><button><span aria-hidden=\"true\">&times;<\/span><span class=\"screen-reader-text\">Close definition<\/span><\/button><\/div><\/template><template id=\"term_233_325\"><div class=\"glossary__definition\" role=\"dialog\" 
data-id=\"term_233_325\"><div tabindex=\"-1\"><p>Systematic error or skewed results generated by AI due to the nature of the data it was trained on.<\/p>\n<\/div><button><span aria-hidden=\"true\">&times;<\/span><span class=\"screen-reader-text\">Close definition<\/span><\/button><\/div><\/template><template id=\"term_233_324\"><div class=\"glossary__definition\" role=\"dialog\" data-id=\"term_233_324\"><div tabindex=\"-1\"><p>A legal professional\u2019s duty to act competently, ethically, and in the client\u2019s best interest.<\/p>\n<\/div><button><span aria-hidden=\"true\">&times;<\/span><span class=\"screen-reader-text\">Close definition<\/span><\/button><\/div><\/template><template id=\"term_233_326\"><div class=\"glossary__definition\" role=\"dialog\" data-id=\"term_233_326\"><div tabindex=\"-1\"><p>A workplace policy that outlines permitted and prohibited uses of AI, along with clauses on privacy, liability, and disclosure.<\/p>\n<\/div><button><span aria-hidden=\"true\">&times;<\/span><span class=\"screen-reader-text\">Close definition<\/span><\/button><\/div><\/template><\/div>","protected":false},"author":92,"menu_order":5,"template":"","meta":{"pb_show_title":"on","pb_short_title":"AI and the Importance of Ethics in the 
Workplace","pb_subtitle":"","pb_authors":["ashley-mcdonald"],"pb_section_license":""},"chapter-type":[],"contributor":[74],"license":[],"class_list":["post-233","chapter","type-chapter","status-publish","hentry","contributor-ashley-mcdonald"],"part":3,"_links":{"self":[{"href":"https:\/\/openbooks.macewan.ca\/legalresearchforparalegalsandlegalassistants\/wp-json\/pressbooks\/v2\/chapters\/233","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/openbooks.macewan.ca\/legalresearchforparalegalsandlegalassistants\/wp-json\/pressbooks\/v2\/chapters"}],"about":[{"href":"https:\/\/openbooks.macewan.ca\/legalresearchforparalegalsandlegalassistants\/wp-json\/wp\/v2\/types\/chapter"}],"author":[{"embeddable":true,"href":"https:\/\/openbooks.macewan.ca\/legalresearchforparalegalsandlegalassistants\/wp-json\/wp\/v2\/users\/92"}],"version-history":[{"count":54,"href":"https:\/\/openbooks.macewan.ca\/legalresearchforparalegalsandlegalassistants\/wp-json\/pressbooks\/v2\/chapters\/233\/revisions"}],"predecessor-version":[{"id":1012,"href":"https:\/\/openbooks.macewan.ca\/legalresearchforparalegalsandlegalassistants\/wp-json\/pressbooks\/v2\/chapters\/233\/revisions\/1012"}],"part":[{"href":"https:\/\/openbooks.macewan.ca\/legalresearchforparalegalsandlegalassistants\/wp-json\/pressbooks\/v2\/parts\/3"}],"metadata":[{"href":"https:\/\/openbooks.macewan.ca\/legalresearchforparalegalsandlegalassistants\/wp-json\/pressbooks\/v2\/chapters\/233\/metadata\/"}],"wp:attachment":[{"href":"https:\/\/openbooks.macewan.ca\/legalresearchforparalegalsandlegalassistants\/wp-json\/wp\/v2\/media?parent=233"}],"wp:term":[{"taxonomy":"chapter-type","embeddable":true,"href":"https:\/\/openbooks.macewan.ca\/legalresearchforparalegalsandlegalassistants\/wp-json\/pressbooks\/v2\/chapter-type?post=233"},{"taxonomy":"contributor","embeddable":true,"href":"https:\/\/openbooks.macewan.ca\/legalresearchforparalegalsandlegalassistants\/wp-json\/wp\/v2\/contributor?post=233"},{"taxonomy":"licens
e","embeddable":true,"href":"https:\/\/openbooks.macewan.ca\/legalresearchforparalegalsandlegalassistants\/wp-json\/wp\/v2\/license?post=233"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}