- The popular new chatbot has stoked cheating concerns among academics
- But legal technology proponents say law schools should teach students how to use such tools
(Reuters) - The artificial intelligence program ChatGPT came up short last month on the multiple choice portion of the bar exam.
The free chatbot from OpenAI performed better than predicted, however, earning passing scores on evidence and torts. The academics behind the experiment expect it will pass the attorney licensing test someday.
Law professors are among those both alarmed and delighted by ChatGPT since its November release. The program generates sophisticated, human-like responses based on requests from users and mountains of data, including from legal texts.
Daniel Linna, director of law and technology initiatives at Northwestern University Pritzker School of Law, said most law professors thinking about language-based AI are concerned with students passing off work generated by the chatbot as their own. But others see AI as a tool for legal education, and warn that without it law students may be unprepared for legal careers in which technology will play ever larger roles.
Jake Heller, chief executive officer of legal tech company Casetext, said law schools should encourage students to use ChatGPT and similar tools as a starting point for documents and a way to generate ideas.
“It’s no different than turning to a friend in the law library late at night and saying, ‘Hey, I’m struggling with this idea,’” Heller said. “It’s like using a calculator in math.”
Andrew Perlman, dean of Suffolk University Law School, said he would like to see first-year legal research and writing classes cover the use of tools like ChatGPT, just as they teach students to conduct research on Westlaw and LexisNexis.
“We’re at a very interesting inflection point,” Perlman said. “It would not surprise me if professionals of the future will be expected to make queries to chatbots and other tools to at least get an initial draft of a document.”
ChatGPT is not yet sophisticated enough to earn a law student an “A” without additional work, said Northwestern's Linna. There are also law-focused AI tools that do a better job on specific tasks, he added.
In their Dec. 31 paper on GPT-3.5's performance on the bar exam, Chicago-Kent College of Law professor Daniel Martin Katz and Michigan State University College of Law adjunct Michael Bommarito found that the program answered questions on the Multistate Bar Exam correctly half the time, compared with 68% for human test takers.
Those limitations are not enough to soothe many skeptics. Among them is South Texas College of Law Houston law professor Josh Blackman, who urged professors to rethink take-home exams in a recent post on the Volokh Conspiracy blog.
“This technology should strike fear in all academics,” he wrote, noting that ChatGPT produces original text that cannot be identified by existing plagiarism detection software.
Heller predicted that law schools will soon begin to amend their codes of conduct and professors will need to clarify that simply turning in a paper produced by a chatbot is akin to plagiarism.
Law professors may begin to ask students to disclose what specific technology tools they used, Perlman added.
“Given how rapidly the technology seems to be progressing, these are conversations that are going to have to happen sooner rather than later,” he said.