Bradford Hise, General Counsel at Hanson Bridgett, on How Attorneys Can Use Generative AI Effectively and Ethically

Since OpenAI first released ChatGPT to the public about a year and a half ago, legal community stakeholders have been thinking about what generative AI means for the practice of law – and in particular about how to use the new technology ethically. In California, both the State Bar and the Legislature are putting forward potential approaches: the State Bar with its initial guidance on lawyers’ use of generative AI, and the Legislature with the proposed bill AB 2811. 

The ethics surrounding attorneys’ use of generative AI has been top of mind for Bradford Hise ever since large language models first became widely available in 2022. As general counsel and partner at San Francisco-based Hanson Bridgett, Hise is responsible for developing ethical policies and procedures at the firm. Although generative AI is a novel technology, Hise says, the ethical framework attorneys can use to approach it remains relatively consistent.

CEB spoke with Hise about the ethical landscape of using AI in the practice of law, and how attorneys can use the new technology effectively while continuing to fulfill their professional responsibilities.

Start by telling me a little about your practice as Hanson Bridgett’s general counsel. 

My job – a law firm’s general counsel – elicits quizzical looks even from other lawyers. I do everything for this business that the general counsel of any other business does. The difference is that because the practice of law is so heavily regulated, and because we have so many ethical obligations, I have the extra layer of ensuring that our lawyers practice law in an ethical way. So I have overall responsibility for our law firm’s risk management program. A lot of our risk management involves designing and implementing policies and procedures to help our lawyers ensure that their practices comply with the Rules of Professional Conduct and with California law on a lawyer’s ethical responsibilities.

From your perspective in that role, what did you make of the Mata v. Avianca case, and what it means for attorneys’ ethical responsibilities when using generative AI?

That case is the most prominent example in the mainstream press of the pitfalls of lawyers using generative AI in their practices. It highlights some of the ethical obligations that lawyers have – if you use generative AI, you have an obligation to make sure that the output is actually accurate and correct. That involves proofreading it, and it involves looking at the cases it cites to make sure they actually exist. It points toward the core obligation when using generative AI: making sure the output from the tool is legally correct.

Does the use of AI in the practice of law create any ethical implications for attorneys’ billing practices?

That issue isn’t quite at the forefront yet. But as AI tools become more commonplace and we all become more comfortable using them, it’s something we’re going to have to deal with. If I enter a query into a generative AI tool, and it takes me five minutes to think about how I want to structure my query and input it, I can probably bill a client for that time. But I can’t bill my client for the time I sit in front of my computer waiting for the tool to spit out a response, because I’m not doing the work. I don’t think that’s a reasonable thing for a lawyer to bill. I can certainly bill the client for the time it takes me to review the output and determine whether or not it’s accurate. What it comes down to is this: if a tool makes the practice of law more efficient, we can still bill the client for the time it takes us to perform a task. But I don’t think we can bill the client for the time it would have taken us without that tool. That’s not a reasonable cost to pass on to the client.

The flip side of that is, at some point in the future, certain types of generative AI tools will become so commonplace in the practice of law that it will be expected for lawyers to use those tools. When we get to that point, it will become challenging for lawyers to bill clients for doing things the old-fashioned way when there’s a faster, more efficient tool that can get those same things done. 

How will AB 2811 and the State Bar’s guidance on AI use affect attorneys’ ethical obligations when using generative AI?

From my perspective, AB 2811 points to the need for lawyers to evaluate their use of generative AI through the lens of their existing ethical responsibilities. That’s the approach that the State Bar’s Committee on Professional Responsibility and Conduct took, and that the State Bar’s Board of Trustees adopted. The guidelines crystallized what I and a lot of people in my position were already thinking. Even though this is a new technology, we can evaluate it through the lens of our existing professional obligations, and that provides pretty decent guidance on how a lawyer can or should use this novel technology in the practice of law. 

Even though the rules have been around for a long time, they’re broadly applicable to a lot of situations. You always have a duty of competence. In this particular instance, competence means that you understand the tool you’re using and what it can accomplish, and that you understand whether or not it is appropriate to use in a particular representation. You have a duty of confidentiality, which means, in the context of a generative AI tool, you need to be sure that if you use that tool, you’re not disclosing client confidential information, and that third parties won’t have access to the information you enter into the tool. There’s a duty to communicate with your clients, meaning you could discuss with your client whether a particular AI tool is the appropriate one to use. If we look at the use of AI through our existing ethical obligations, I actually think the rules we have now provide a lot of guidance.

What are some of the questions you’ve been fielding from attorneys at your firm on using generative AI ethically?

When ChatGPT was first publicly released, we – like a lot of law firms – thought that we had to have a policy on this immediately. I was in Chicago in meetings with GCs of other law firms, and one of my friends and I were joking, “Oh, we should see what ChatGPT says about writing a policy on how lawyers should use it in the practice of law.” So I queried ChatGPT, and it spit out a not terrible policy on how lawyers could use ChatGPT in their practices. I brought that back to the firm and played around with the language a little bit – it wasn’t my voice, obviously, and it got a few things wrong. After I proofread it and made sure it was okay, I sent it to my managing partner and CIO and said, “I would like to circulate this at the firm to provide some guardrails for how lawyers may use ChatGPT in their practices.” And they both said that was fine. So I circulated that to my firm.

Since then, questions have come up, and we’ve changed our approach in certain ways. We realized that strictly saying you can’t use it was never going to work – there’s no way to police it, and it’s just unrealistic. The questions have changed over the course of a year from “Can I do this at all?” to “How can I do this safely?” So I have conversations with lawyers now about the issues they should think about before they decide to use a generative AI tool or product in their practice. Internally, we have an AI task force, made up of lawyers and staff from across the firm, that is looking at AI from a bunch of different perspectives. One of the ways we’re looking at it is, “How can we incorporate different AI products to make us more efficient, better lawyers who are more responsive to our clients?” But we’re still very much in an exploratory phase.

To me the most important thing for lawyers to understand is that this is all moving really quickly, and I for one don’t think that the profession should react rashly. Our existing Rules of Professional Conduct give us really good guardrails for using generative AI responsibly. Just because things are changing quickly doesn’t mean our rules or regulations around the practice of law need to change quickly. We just need to be careful and diligent in how we analyze issues and use the existing rules. 

This interview was originally published on April 3 by Katherine Proctor for CEB’s DailyNews.

For More Information, Please Contact:

Bradford Hise
General Counsel
Partner
San Francisco, CA