By Nicolas Garon
There is a growing debate among lawyers on the use of ChatGPT in legal practice.
ChatGPT was released during my first year of law school, and I quickly began exploring its features. As an avid writer, I was amazed when I used it to transform nearly ten rough article drafts into cohesive pieces in a single night.
Over the following weeks, I tested its capabilities extensively and even shared my findings in my law school’s technology law elective, which focused heavily on ChatGPT. Since its release, I’ve used ChatGPT frequently—almost daily.
While initially impressed, I soon noticed its limitations. ChatGPT often generated generic, awkward language, struggled with concise legal and governmental writing (“govspeak”), hallucinated cases, reached sloppy conclusions, and had no access to databases like Lexis or Westlaw, or to current events.
It was also easily detectable. Long story short, ChatGPT is a poor lawyer—it’s a superficial researcher and writer, incapable of producing work that meets professional legal standards.
I noticed others in the legal field were using ChatGPT, often with concerning results. People I hired occasionally relied on it, producing subpar work that jeopardized their credibility. It felt insulting—like they assumed I couldn’t tell it was AI-generated junk.
I later saw a post on Reddit from an attorney who confronted several law clerks for using obvious ChatGPT responses. All denied it. This resonated with me, so I made my own post urging people not to use ChatGPT in legal practice irresponsibly, especially in legal writing.
A cleaned-up version of my post:
“I recently saw a post about a lawyer who discovered several clerks submitting work with obvious ChatGPT-generated responses. When confronted, many denied using AI. Please, don’t do this.
This year, I hired several foreign lawyers for memos on various research projects. Unfortunately, every single one relied on ChatGPT. It was incredibly frustrating.
To address this, I subscribed to a service where licensed attorneys answer legal questions for about $50 a month. Yet, even there, around 4–6 out of 10 responses were blatantly AI-generated. When I called some out, they denied it.
I turned to Reddit to discuss this issue with freelancers. Some argued that using AI is inevitable and acceptable if done “correctly.” Others claimed it’s difficult to identify ChatGPT responses, or that their country’s English education resembles AI-like writing. I don’t buy most of these excuses.
I’ve used AI almost daily since its release and find it incredible for certain tasks. But trust me: it is not a good lawyer. While it can answer basic criminal or constitutional law questions with decent accuracy, it fails on niche legal issues. It fabricates cases, provides incorrect information, formats poorly with strange indents and bullet points, and leaves behind tell-tale asterisks. Its language often sounds robotic and leans on the same stock phrases. Even Lexis AI will tailor its explanations to match your expectations, even when those explanations are wrong.
If you’re submitting AI-generated briefs, you need to rethink your approach. Avoid using AI as a shortcut. Instead, let it assist in specific ways—summarizing cases you’ve already found, refining grammar, or checking for typos—without altering substance. You are the decision-maker; AI should only polish your thoroughly reviewed work. When used improperly, you’re approving subpar output, which damages your credibility and wastes client resources.
AI misuse is easy to detect, even without specialized tools. Use it wisely. And don’t complain about being underpaid. If you accepted the job, regardless of the fee, you owe your client quality work.”
That post went viral, garnering over a quarter million views, 800 upvotes, and nearly 200 comments from lawyers and law students. The comments were fascinating. Here are some highlights:
- ChatGPT should be treated as a research tool, used cautiously, and scrutinized even more carefully than Google results.
- Confidentiality is a major concern. Using ChatGPT for legal work risks breaching client confidentiality, which can ruin a multimillion-dollar case.
- AI detectors are unreliable, and innocent people are being wrongly flagged as using AI. However, blatant misuse of AI remains obvious.
- AI can speed up workflows when used properly—for example, summarizing cases or creating outlines—but it cannot replace meticulous research and editing.
Some commenters pointed out that generative AI could save time and money when used responsibly, but they also acknowledged its risks: inaccuracies, ethical concerns, and the loss of trust.
For instance, ChatGPT often fabricates cases, formats documents poorly, and overuses awkward or unnatural language. Lawyers need to ensure that AI serves as a supplement, not a shortcut, to their own thorough analysis. Misuse of AI reflects laziness and undermines credibility.
Final Thoughts on ChatGPT and Lawyers
Ultimately, my experience has taught me that while ChatGPT is an incredible tool for certain applications, it is no substitute for the rigor and integrity required in legal practice. AI is a double-edged sword—use it wisely or risk harming your professional reputation.