UChicago Law Course Prepares Students for Lawyering in the Age of AI
What does it mean to be a lawyer in a world increasingly dominated by AI? A UChicago Law course called Generative AI in Legal Practice takes a deep and critical dive into this question.
The course both explores how generative AI is changing the legal landscape and equips students with practical skills to navigate those changes. From learning how to use generative AI tools responsibly to considering the broader implications of AI’s impact on the profession, students in the course gain an in-depth understanding of the tech’s powers and limitations in legal contexts.
Offered each Winter Quarter, the course was among the first AI-focused electives the Law School introduced in 2024, when it began weaving AI into its curriculum.
“Law students today will never practice law in a world without generative AI,” said Lecturer in Law Ed Walters, ’96, who teaches the course. “Many of the skills that they need will be the same, but there will be a few new requirements, and I want to help them get ready.”
Walters is in a unique position to undertake this task. With more than two decades in the legal-tech industry, he’s had a backstage pass to the revolution of generative AI in the legal space. He is the cofounder and former CEO of the pioneering AI-powered legal research software company Fastcase, which merged in 2023 with vLex. Today, Walters is vice president of innovation and strategy at the legal AI tech company Clio, which recently acquired vLex.
Part of the class involves using different AI tools and comparing them to traditional research methods. Students may expect the lesson to be that generative AI is simply better; the real conclusion, Walters said, is that generative AI is sometimes better at some things, but not at everything.
“The most important takeaway is that those who use their own expertise in law with generative AI will get the best results of all,” he explained. “It’s that expertise that allows you to critically analyze a generative AI output and that helps you determine what to do next. That skill—the ability to analyze output, push for a better answer, and understand the nuance that AI does not—is what we are training students to develop.”
A Hands-On Experience
The course has three main objectives for students: to gain a tacit understanding of how the technology works, to practice using generative AI tools to accomplish legal tasks, and to explore how AI will change the business of law.
Before all of that, Walters grounds students in an important first lesson on the responsible use of AI, using the New York case Mata v. Avianca, Inc., in which the plaintiff’s lawyers submitted fake (hallucinated) precedents generated by ChatGPT and were sanctioned by the court.
“A big concern with this technology is that the outputs have a very authoritative voice,” said Walters. “Students need to know they are always responsible for the outputs generative AI tools produce.”
After learning foundationally what the technology can and can’t do, students spend time using different generative AI tools—both legal-specific and general—to accomplish various legal tasks. The in-class exercises range from using the tools to tackle transactional work, such as evaluating contracts and trying to come up with better drafts, to preparing for litigation, such as analyzing a complaint, to conducting legal research. As part of the experience, students are given special access to Vincent AI, an AI legal tool produced by Clio.
At the end of each assignment the class comes together to compare and critically assess the different outputs.
Walters stressed that it’s important for students to see how the general large language models (LLMs) and the legal-specific AI tools differ. “I also teach them to ‘mind the gap,’” he said. “Students need to add their own substantive judgment on top of what the tool produces, particularly in scenarios where outputs look complete but are not. I want them to have good mechanisms for coping with that illusion of completeness.”
The opportunity to use the tools and analyze their differences was the favorite part of the class for Vivian Hereens, ’27. “I found it especially valuable to compare results through class discussions, as it highlighted how outcomes varied depending on the tool and the approach used,” she said.
Going into the course, Hereens was skeptical of generative AI in legal contexts. “When I had the opportunity to use AI tools in some of my previous law school courses, I wasn’t particularly impressed. I had encountered hallucinated cases and found that verifying outputs sometimes took longer than conducting traditional legal research.”
By the end of the course, Hereens’s perspective had shifted. The practical exercises helped her arrive at a more nuanced understanding and a more optimistic view of AI’s role in legal practice, she said.
“The course made it clear that using AI effectively in practice requires more than just familiarity with the tools—it requires judgment,” she said. “Because these tools can be highly effective in some contexts and unreliable in others, lawyers must be able to critically evaluate and supervise their use. Developing that kind of discernment is essential to using AI responsibly and effectively.”
Walters noted this as a critical takeaway of the course.
“Clients can reach the same average answer from generative AI very quickly without the help of law firms,” he said. “The real value of lawyers is the intellectual work we put on top of what generative AI produces. That’s what clients are going to pay for in the future.”
Beyond just learning how to use the tools, students also examine the ethics and professional responsibilities that come with using them.
This provided a particularly eye-opening moment for Hereens during one of the class discussions. “We explored how existing rules apply to the use of generative AI tools,” she said. “In the current legal landscape, many lawyers are not expected or required to use these tools, and in some cases may even be discouraged from doing so. At the same time, however, there are growing pressures from clients and the market to adopt AI where it can improve efficiency and reduce costs.”
Walters argues that the existing model rules of professional responsibility for lawyers already cover the use of generative AI without any modification.
“Rules like the duty of competence to understand the tools and how they work, the duty of diligence, the duty of candor to the tribunal, the duty of supervision. ... All of these things map pretty well onto generative AI,” he said.
Larger Theoretical Questions
Yiyi Niu, ’26, came into the class curious about how AI would change the legal industry on a macro level, beyond just everyday practice. She believed that AI would change legal work in a meaningful way, but she was trying to understand the nature of that change.
Exploring this larger theoretical question of how AI will impact the business of law—the billable hour model, staffing structures, how legal services are delivered—is the capstone of the course.
Walters is confident that AI will change the industry and wants students to think ahead about these changes. “If using AI means 10 hours of work gets done in two minutes, should law firms change the value of that time?” he asked rhetorically.
While Niu had already been contemplating these questions, Hereens found the theoretical component of the class surprising. “We were encouraged to reflect on this regularly, and it led to very intriguing ideas and energizing dialogue that always had me thinking long after class,” she said.
Niu agreed. “People brought different assumptions about law, technology, business, and professional responsibility, which made the exchanges dynamic,” she said. “By the end, I saw much more clearly that the real significance of AI in law is not just whether it can draft faster or summarize more efficiently, but how it will alter professional habits, client expectations, firm economics, and even the social role of lawyers. The class did not make me simply more enthusiastic or more skeptical. Instead, it made me more analytical.”
Preparing for the Future
With all the changes AI will bring, Walters maintains that there has never been a better time to be a law student. He sees this as an opportune moment to influence how AI will shape legal practice and whether “we will be a Blockbuster or a Netflix.”
“There is a misconception that AI will take all our jobs,” he said, “but I think of legal work as a chain—and I see AI taking several links of that chain. The final links, however, always have to be human judgment and discernment.”
As exciting and transformative as the technology is, Walters also hopes that students will maintain a healthy level of skepticism when using generative AI tools in their future careers as lawyers. Fittingly, this skepticism, he said, is one of the virtues of a University of Chicago law student.
“It’s an important lesson about skepticism and judgment to be able to say, when using these tools, that just because something sounds complete or authoritative doesn’t necessarily mean that it is,” he said.