Meet Professor Byte: My Experience Using AI as a Study Partner During Law School Finals

[Photo: Sofia DiPadova headshot next to the USF School of Law building]

When an exam is 100% of a grade, it’s natural for law students to want to maximize practice attempts. However, not every professor provides exam banks that make this possible. In an attempt to get ahead of the curve, I decided to train ChatGPT as my personal tutor. What emerged was Professor Byte, an AI that generated over 60 practice exams and changed how I prepared for finals.

The AI Tutoring Landscape in Legal Education

As generative AI evolves rapidly, law schools are struggling to establish clear policies and integrate these tools into curricula. Many professors, students, and practitioners lack deep understanding of how AI functions, how it applies to legal practice, and how to use it safely. This knowledge gap contributes to the poor regulation of AI tools in legal education.

ChatGPT dominates usage among law students. It’s a predictive text model that learns to forecast the next word in a sentence from vast training data, which enables coherent, contextually relevant responses across diverse topics. For legal education, this capacity offers unprecedented opportunities for students to engage with complex questions and receive immediate feedback.

However, significant limitations remain. Recent research shows AI models hallucinate 58% to 88% of the time when answering direct legal questions. When Choi et al. tested ChatGPT on actual law school exams in their study ChatGPT Goes to Law School, the AI consistently scored in the bottom percentiles, struggling with the nuanced analysis and rule application that law exams are designed to test.

Yet students are already using these tools. The question for law schools now isn’t whether to allow AI in legal education, but how to guide its effective use while preparing students for an increasingly AI-integrated profession. 

Building Professor Byte

Despite documented limitations, I saw potential in addressing law school’s core challenge: the need for extensive practice with limited materials. My approach was systematic. I began by uploading 200 pages of study materials to ChatGPT: detailed outlines for all five first-year courses, case briefings, class notes with professor-specific emphases, and statutory materials. Everything uploaded was created without AI assistance. When I asked what it wanted to be called, it chose “Professor Byte.”

One month before finals, I created a structured study schedule. My goal was daily AI interaction to simulate and refine exam performance. The results exceeded expectations: Professor Byte generated 10-15 practice exams per course, totaling 63 complete examinations over four weeks. This represented a dramatic increase in available practice material. While classmates worked with 2-3 released past exams per course, I had personalized, unlimited practice opportunities. Each session let me test my legal reasoning, refine analytical approaches, and receive immediate feedback.

The key breakthrough was learning sophisticated prompting techniques. Generic requests often failed. Success required structured templates with four layers:

    1. Instruction layer: clear role definition and task parameters
    2. Context layer: specific topic focus and constraints
    3. Requirements layer: detailed format specifications and evaluation criteria
    4. Style layer: appropriate tone and complexity level

My final prompt template looked like this:

###Instruction###

You are a law professor tasked with creating a 1-hour examination on civil procedure for a first-year law student. Your task is to generate an exam based strictly on the provided reference materials (uploaded documents and course readings), and not on general legal knowledge.

###Context###

The exam must focus on the following specific topics in civil procedure:

    1. Subject Matter Jurisdiction
    2. Personal Jurisdiction
    3. Pleadings and Motions

###Requirements###

– The exam should include a mix of:

    • 3 multiple-choice questions
    • 2 short-answer questions
    • 1 essay question

– Clearly separate the question types using section headers.

– All questions must be answerable using the provided material and reflect real-world comprehension and application of U.S. civil procedure rules.

– Provide an **answer key** at the end for all multiple-choice and short-answer questions.

– Do **not** provide the answer to the essay question.

– Ensure the exam length is appropriate for a 1-hour session.

– Ensure that your answer is unbiased and does not rely on stereotypes.

###Style###

Write in a professional, academic tone. Use plain legal language appropriate for law students. Avoid overly technical jargon.

This systematic approach produced practice materials that closely reflected my professors’ teaching styles and exam formats.
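For anyone generating many such exams, the four-layer template lends itself to being assembled programmatically rather than retyped each time. The sketch below is purely illustrative: the function and parameter names are my own invention, not part of any ChatGPT or OpenAI interface, and the resulting string would simply be pasted or sent as the prompt.

```python
def build_exam_prompt(instruction, topics, requirements, style):
    """Assemble a four-layer exam prompt (instruction, context,
    requirements, style) using ###-delimited section headers."""
    # Context layer: numbered list of topics, matching the template format
    context = "The exam must focus on the following specific topics:\n"
    context += "\n".join(f"    {i}. {t}" for i, t in enumerate(topics, 1))
    sections = [
        ("Instruction", instruction),
        ("Context", context),
        ("Requirements", "\n".join(f"- {r}" for r in requirements)),
        ("Style", style),
    ]
    return "\n\n".join(f"###{name}###\n\n{body}" for name, body in sections)

prompt = build_exam_prompt(
    instruction=("You are a law professor creating a 1-hour civil procedure "
                 "exam for a first-year student, based strictly on the "
                 "provided reference materials."),
    topics=["Subject Matter Jurisdiction", "Personal Jurisdiction",
            "Pleadings and Motions"],
    requirements=["3 multiple-choice questions", "2 short-answer questions",
                  "1 essay question",
                  "Answer key for everything except the essay"],
    style="Professional, academic tone; plain legal language; no jargon.",
)
print(prompt.split("\n")[0])  # → ###Instruction###
```

Keeping each layer as a separate argument made it easy to swap one course's topics for another's while holding the instruction and style layers constant.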

Critical Limitations and Failures

Ultimately, Professor Byte, while enthusiastic and impressively responsive, had several limitations. Even with carefully tailored prompts, its answers often lacked depth or contained inaccuracies that undermined its reliability as a standalone tutor. This shortcoming aligns with the findings of several studies, including ChatGPT Goes to Law School, which evaluated ChatGPT’s performance on law school exams. 

One of the most consistent failures identified in ChatGPT Goes to Law School was the model’s inability to apply legal rules to facts with sufficient detail. While ChatGPT frequently stated legal principles correctly, it often failed to connect those rules meaningfully to the facts in exam hypotheticals, leading the researchers to conclude that it was generally bad at focusing on what mattered.

Professor Byte also exhibited overconfidence, a documented problem in AI systems. It generated responses with apparent certainty even when dealing with contested legal areas or making arguable analytical choices. This poor calibration meant I had to maintain constant skepticism about outputs.

Most importantly, Professor Byte’s effectiveness depended on my existing knowledge. To benefit from AI-generated practice, I needed sufficient understanding to spot errors and evaluate reasoning quality. The AI worked best after I’d mastered foundational content and was ready for repetitive testing.

What This Means for Law Schools

This experience revealed the fundamental tensions in how legal education is approaching AI integration. The technology offers genuine benefits: unprecedented practice volume, personalized feedback, and 24/7 availability. But it requires constant, sophisticated oversight and critical evaluation skills.

Most law schools currently take one of two approaches: blanket prohibition or cautious experimentation. Both miss the mark. Prohibition ignores the reality that students are already using these tools and will continue doing so regardless of institutional policies. It also fails to prepare graduates for legal practice in an AI-integrated profession.

Cautious experimentation, while better, often lacks the systematic framework needed to maximize benefits while minimizing risks. Schools need comprehensive AI literacy curricula that teach students to recognize limitations, validate outputs, and use AI as a supplement to rather than replacement for critical thinking. 

The disconnect between academic caution and professional reality is stark. While law schools debate basic AI policies, major firms are integrating sophisticated AI platforms into daily practice. Allen & Overy deployed Harvey AI across 3,500 lawyers. Other firms are following suit. This gap risks producing graduates unprepared for contemporary legal practice.

Law schools should consider several specific reforms:

Structured Integration Over Prohibition: Rather than banning AI use, create controlled environments where students learn to work with these tools effectively while understanding their limitations.

Transparency Requirements: Following UCLA Law School’s model, require students to disclose AI usage and document specific prompts used. This promotes accountability while generating useful data about effective practices.

AI Literacy as Core Curriculum: Develop mandatory courses covering AI capabilities, limitations, and best practices in legal contexts. This should include hands-on training in prompt engineering, output validation, and ethical considerations.

Faculty Development: Invest in training professors to understand AI capabilities and integrate these tools meaningfully into course design and assessment. 

Law schools that begin thoughtful AI integration now, with appropriate safeguards and realistic assessment of current limitations, will better prepare students for this evolving landscape. Those that continue prohibiting or ignoring these tools risk producing graduates unprepared for contemporary legal practice. The future of legal education lies not in choosing between human and artificial intelligence, but in developing the judgment to combine both effectively.