AI (artificial intelligence) is inescapable these days, including in the legal profession. Like it or not, lawyers need to learn about AI and its various uses, advantages, and drawbacks, both for their own practices and so they can advise clients knowledgeably.
The ABA recently (July 29, 2024) released an ethics opinion, Formal Opinion 512, which discusses lawyers' use of generative AI tools. Generative AI tools create new content, including text, images, audio, and video, in response to a "prompt" from the user, drawing on the data the AI has been trained on.
By now, most of us are familiar with the cases where lawyers got in hot water for citing non-existent cases in court papers that were drafted using AI tools. The opinion addresses the obligation to refrain from making false statements or misrepresentations to the court (noting that some courts now require lawyers to disclose their use of gen AI tools), but also looks at lawyers' use of generative AI tools in the context of several other ethical obligations, including:
- competence (ABA Model Rule 1.1)
- confidentiality (ABA Model Rule 1.6)
- communication (ABA Model Rule 1.4)
- reasonableness of fees (ABA Model Rule 1.5).
Lawyers Have a Duty to Understand Capabilities and Limitations of Gen AI
As to competence, the opinion notes,
"To competently use a GAI tool in a client representation, lawyers need not become GAI experts. Rather, lawyers must have a reasonable understanding of the capabilities and limitations of the specific GAI technology that the lawyer might use. This means that lawyers should either acquire a reasonable understanding of the benefits and risks of the GAI tools that they employ in their practices or draw on the expertise of others who can provide guidance about the relevant GAI tool’s capabilities and limitations."
Recognizing that technology, and AI in particular, will continue to change, the opinion also makes a point of saying that a lawyer's obligation to maintain competence and a reasonable understanding of the technology is an ongoing one.
Part of competence is also recognizing the risks of using generative AI technology, including the risk of "hallucinations" or of poor outputs based on inaccurate, incomplete, or otherwise substandard data sets that the AI tool has been trained on. As a result of these risks, lawyers are required to independently review and verify any output generated by these tools, but the level of verification may depend on the tool, the lawyer’s experience with that tool, and the specific task being performed:
“…a lawyer’s use of a GAI tool designed specifically for the practice of law or to perform a discrete legal task, such as generating ideas, may require less independent verification or review, particularly where a lawyer’s prior experience with the GAI tool provides a reasonable basis for relying on its results.”
Lawyers Must Safeguard Client Confidentiality When Using Gen AI
The duty of confidentiality is one of the main cornerstones of legal representation. In addition to Model Rule 1.6, covering the duty owed to current clients, Model Rules 1.9(c) and 1.18(b) require lawyers to protect the confidential information received from former and prospective clients as well.
When using generative AI tools, lawyers must consider whether the information they provide to that AI tool may be disclosed or accessed by third parties or by anyone who may not properly safeguard that information. This can occur when the client information that is input into the tool is used to “train” the AI and is later used to generate an output by the tool in another matter.
According to the opinion, this is a danger not only when using a generative AI tool that is also used by others outside the firm, but also within a firm, where some individuals may be prohibited from accessing specific client information, or where confidential information from one client may be used to generate an output for a client in another matter, inadvertently revealing client information.
Lawyers May Have a Duty to Disclose Use of Gen AI to Clients
As a result of the potential risks outlined above, the opinion states that lawyers must obtain the client's informed consent before using such a generative AI tool on that client's behalf. That informed consent must inform the client of the potential risks, the kinds of information that might be disclosed, and the potential benefits of using the AI tool.
Lawyers are generally not obligated to disclose the use of gen AI tools where the lawyer will not be inputting client information into the AI tool, although there are circumstances under which disclosure may still be required. For example, if the client asks whether these tools are being used, or expressly requires disclosure of their use, or if the gen AI tool's output will influence a significant decision in the representation, the lawyer must disclose that the tool is being used.
Billing for Tasks Using Gen AI
Model Rule 1.5 mandates that a lawyer’s fees must be “reasonable,” and sets forth a number of criteria for determining reasonableness. When lawyers bill by the hour, they are required to bill only for the time actually spent on the client’s matter.
Gen AI tools can analyze large amounts of information much more quickly than human beings can, and so can save lawyers significant time. Lawyers who use gen AI tools and bill by the hour may only bill for the time it actually took to complete the task, even if the same task would have generated more billable time had it been performed without the tool. Of course, by completing the work faster, the lawyer can use the time saved to work on other matters for different clients or perform other tasks for the same client, so any "loss" of billable time can be made up.
According to the opinion, even where a lawyer is not billing by the hour, the lawyer may be required to bill less for tasks completed using gen AI tools: "For example, if using a GAI tool enables a lawyer to complete tasks much more quickly than without the tool, it may be unreasonable under Rule 1.5 for the lawyer to charge the same flat fee when using the GAI tool as when not using it. A fee charged for which little or no work was performed is an unreasonable fee." (citing Att'y Grievance Comm'n v. Monfried, 794 A.2d 92, 103, a 2002 case from Maryland where a lawyer was found to have violated Rule 1.5 by charging a $1,000 flat fee when the lawyer "did little or no work").
The opinion goes on to caution lawyers that they may not charge clients for time spent learning about gen AI tools, or how to use them, if the tool is one they will use regularly for their clients. If, however, the client requests the use of a specific gen AI tool, "it may be appropriate for the lawyer to bill the client to gain the knowledge to use the tool effectively." In that case, the lawyer should come to an agreement with the client about how the client will be charged for gaining that knowledge and memorialize that agreement in writing.
Finally, the opinion also addresses lawyers' ethical duty to supervise those who work for them, whether inside or outside the firm, a duty that extends to those individuals' use of gen AI tools.
You can read the full opinion here.