Legal professionals have had some embarrassing run-ins with technology. There was the attorney who gained Internet fame after logging into a Zoom hearing and being unable to remove a kitten filter. More recently, lawyers who used ChatGPT to write a brief were fined for submitting fake case law, a cautionary tale for the profession.
These incidents may give weight to the arguments of legal traditionalists who’ve long resisted technological change in their field. The truth, however, is that modern legal professionals and firms can’t afford to avoid AI. Yes, ill-conceived uses of these tools warrant caution. Still, trying to practice law today without any AI would be like throwing out the baby with the bathwater while simultaneously struggling to shove the cat back in the bag. AI can make legal professionals far more efficient, and now that it’s here, attorneys will be using it. Those who use it effectively will have a significant advantage in today’s legal world.
So how should you be using AI in your practice, and where should you avoid it — at least for now?
Using AI for research is tempting, but risky
Lawyers spend about one fifth of their time performing research, on top of the research their paralegals and law clerks perform on their behalf, making it one of the most time-consuming tasks in legal practice. Most legal professionals start with a search engine like Google, while paid legal research services like Lexis and Westlaw still play a significant role. One reason attorneys use those services is to make sure they’re citing good case law, meaning case law that courts haven’t overturned.
The attorneys who used ChatGPT to write a brief didn’t just fail to perform proper research. They also neglected to check whether the cases they cited were still good law. If they had performed that check, they’d have realized that many of the cases weren’t cases at all, and that the AI had “hallucinated” the sources it used to back its legal position.
That case and the news surrounding ChatGPT have led some judges to issue standing orders prohibiting the use of generative AI or requiring specific disclosures about how attorneys use it. It’s clear that legal professionals shouldn’t use ChatGPT without caution or oversight, but some are still finding it useful for research. After all, a Google search isn’t the most reliable way to find case law either, yet it still saves researchers time. Generative AI can serve a similar purpose, breaking down complex legal concepts into plain language and offering quick insight into the issue at hand.
Additionally, these tools can be helpful for summarizing long documents. Pasting a long statute or case into the tool and asking it to pull highlights, point to specific places where a topic is discussed or provide a general summary could save significant time.
Navigating AI security challenges and sensitive information
Putting a statute into ChatGPT and asking for a summary is potentially a good use of AI, and it might be tempting to use the tool to summarize case notes or evidence as well. After all, many cases require legal teams to comb through thousands of pages of medical records or other information, and being able to find specific mentions in a matter of seconds would be useful. However, records and notes differ from statutes and case law, which are publicly available information.
Generative AI works by collecting data and using it to “learn” and improve over time. The information people put in can influence what others get out. Large companies have already run into trouble here: Samsung employees leaked confidential source code by putting it into ChatGPT. For lawyers, attorney-client privilege and laws like HIPAA mean that putting confidential or protected information into these tools could constitute a serious violation.
In time, this will likely change. Generative AI products designed for the legal industry, with strong cybersecurity, will serve these purposes and others. After all, lawyers already communicate via email, save documents in the cloud and use plenty of digital tools. For now, though, it’s important to remember that not every useful tool meets the security standards that legal use cases demand.
Transforming legal transcription with the power of AI
Legal professionals use transcription to capture what witnesses say in depositions, preserve testimonies from hearings, convert digital evidence into text and more. The traditional way to capture a transcript is by hiring a stenographer to transcribe in real time. Today, more law offices are turning to digital court reporters for these same purposes. Digital court reporters use automatic speech recognition (ASR) technology to convert audio to text.
When it comes to legal transcription, a human transcriptionist still needs to review the results. While the AI that powers ASR continues to improve, having someone proof these transcripts for accuracy is a must. Names, niche terminology and speakers with accents can all introduce errors that must be fixed for the transcript to serve its official purpose.
Verbit uses AI to support digital court reporters, making legal transcription easier and more efficient. With Verbit, legal professionals can receive transcripts quickly, with 24-hour rough drafts and accurate industry-standard final drafts. As a company dedicated to serving the legal industry, Verbit’s leaders understand the need for first-class security and accuracy. Reach out to Verbit today to learn more about how we’re responsibly using AI to support legal professionals with effective legal transcription tools they can rely on.