Prompt Engineering for Lawyers: Getting Better Results from Legal AI Tools
A practitioner's guide to writing effective prompts for legal research, drafting, and analysis
You have access to a legal AI tool. You have typed questions into it. Sometimes the output is useful; often it is generic, incomplete, or subtly wrong in ways that make it more dangerous than helpful. The gap between a mediocre AI result and a genuinely useful one almost always comes down to the prompt. Prompt engineering is not a marketing buzzword. It is the skill of translating your legal judgment, context, and analytical framework into instructions that an AI system can execute with precision. For lawyers, this means adapting techniques you already use — structuring arguments, defining scope, specifying standards — to a new medium. This guide provides concrete, practice-area-specific techniques for getting better results from any legal AI tool, whether you are using Harvey, CoCounsel, Claude, or a general-purpose model.
Why Generic Prompts Produce Generic Results
Most lawyers approach AI tools the way they would approach a Google search: type a question, hope for a good answer. This fundamentally misunderstands how large language models work. An LLM does not search a database of answers. It generates a response based on patterns in its training data, guided entirely by the instructions you provide. When those instructions are vague, the model fills gaps with the most statistically probable content — which tends to be broad, surface-level, and jurisdiction-agnostic. The quality of your output is bounded by the quality of your input.
A useful rule of thumb: if your prompt would be ambiguous to a competent but uninformed new associate, it will be ambiguous to the AI.
The IRAC Framework Adapted for AI Prompting
Lawyers already have a structured analytical method: IRAC (Issue, Rule, Application, Conclusion). This framework adapts naturally to prompt construction because it forces you to provide exactly the type of structured context that AI tools need to produce useful legal analysis. Think of IRAC not just as an output format, but as an input framework — a way to organize the information you feed to the model.
Issue: Define the Legal Question Precisely
State the specific legal question you need answered. Avoid compound questions. Instead of "Is this contract enforceable?", ask "Under New York law, is a non-compete clause in an employment agreement enforceable where the restricted period is three years and the geographic scope is the entire United States?" The more precisely you define the issue, the more targeted the analysis.
Rule: Specify the Governing Law and Standards
Tell the model which jurisdiction, statute, regulation, or legal standard applies. Include relevant time frames. Example: "Apply the three-part reasonableness test from BDO Seidman, LLP v. Hirshberg, 93 N.Y.2d 382 (1999), and note any subsequent developments in New York appellate courts through 2025." Do not assume the model knows which legal standard governs your question.
Application: Provide the Relevant Facts
Supply the facts that the model should apply to the legal framework. Be specific about which facts are established and which are disputed. Example: "The employee is a senior vice president in the financial services industry, had access to proprietary client lists and trading algorithms, and signed the agreement at the time of a promotion with a $50,000 signing bonus." Omit privileged information but provide enough factual context for meaningful analysis.
Conclusion: Specify the Desired Output Format
Tell the model exactly what you want: a memo, a risk assessment with percentage likelihood, a comparison table, bullet points for a partner call, or draft language for a brief. Specify length, tone, and whether you want the model to take a position or present balanced analysis. Example: "Provide your analysis in a two-page memo format suitable for a mid-level associate to review, with a clear recommendation and identification of the strongest counterargument."
Research presented at EMNLP 2023 found that LLMs perform significantly better on legal scenarios when prompted with IRAC-structured inputs than with open-ended queries.
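The four IRAC components map directly onto a reusable prompt scaffold. A minimal Python sketch, using the bracketed-label convention shown in this guide (the helper name is hypothetical):

```python
def build_irac_prompt(issue: str, rule: str, application: str, conclusion: str) -> str:
    """Assemble an IRAC-structured prompt from its four components.

    Hypothetical helper: any legal AI tool accepts the resulting
    string as ordinary prompt text.
    """
    sections = [
        ("Issue", issue),
        ("Rule", rule),
        ("Application", application),
        ("Conclusion", conclusion),
    ]
    return "\n".join(f"[{label}] {text.strip()}" for label, text in sections)

prompt = build_irac_prompt(
    issue="Under California law, is a post-employment non-compete clause "
          "enforceable against a departing software engineer?",
    rule="Apply Cal. Bus. & Prof. Code Section 16600 and Edwards v. "
         "Arthur Andersen LLP, 44 Cal.4th 937 (2008).",
    application="Two-year non-compete, 50-mile radius, signed as a condition "
                "of employment; no trade-secret access.",
    conclusion="Provide a two-paragraph analysis with a clear "
               "enforceability determination.",
)
```

Keeping the four fields separate forces you to notice when one is empty, which is usually the sign of a prompt that will produce a generic answer.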
IRAC Prompt in Action
- Generic prompt: "Can our client enforce the non-compete?"
- IRAC-structured prompt: "[Issue] Under California law, is a post-employment non-compete clause enforceable against a departing software engineer? [Rule] Apply California Business and Professions Code Section 16600 and the California Supreme Court's holding in Edwards v. Arthur Andersen LLP, 44 Cal.4th 937 (2008), including any impact from AB 1076 (2024). [Application] Our client's former employee signed a two-year non-compete with a 50-mile radius restriction as a condition of employment. The employee had no access to trade secrets but did have client relationship knowledge. [Conclusion] Provide a two-paragraph analysis with a clear enforceability determination and one recommended alternative protective measure."

The IRAC-structured prompt will produce a focused, jurisdiction-specific analysis. The generic prompt will produce a 50-state survey you did not need.
Role-Based Prompting: Setting the AI's Expertise
One of the most effective prompting techniques for legal work is role assignment — telling the AI what kind of legal professional it should emulate. This is not anthropomorphizing the model. It is a practical mechanism for activating domain-specific language patterns, analytical frameworks, and depth of analysis that the model learned during training. A prompt that begins with a role instruction produces meaningfully different output than one without it.
The Specialist Role
Assign a specific practice area and seniority level to match the analytical depth you need.
Example: You are a senior associate at an AmLaw 50 firm specializing in ERISA litigation. You have 8 years of experience defending employers in benefits disputes. Analyze the following plan amendment for compliance risks under ERISA Section 204(h) notice requirements.
Why it excels: The model adjusts its analytical framework, terminology, and depth of coverage. A "senior ERISA associate" will flag anti-cutback rule issues that a generic legal analysis would miss entirely.
The Adversary Role
Use role-based prompting to stress-test your own arguments by asking the AI to argue the opposing side.
Example: You are plaintiff's counsel in a securities fraud class action. Draft the three strongest arguments that the defendant's forward-looking statements were not protected by the PSLRA safe harbor, based on the following quarterly earnings call transcript.
Why it excels: Forcing the model into an adversarial posture surfaces weaknesses in your position that you might overlook when analyzing from your own side.
The Judicial Role
Ask the AI to evaluate arguments from a judge's perspective for a more balanced, outcome-oriented analysis.
Example: You are a federal district judge in the Southern District of New York evaluating a motion to dismiss under Rule 12(b)(6). The plaintiff has alleged breach of fiduciary duty against corporate directors. Based on the complaint allegations below, identify which claims survive the motion and which do not, applying Iqbal/Twombly plausibility standards.
Why it excels: The judicial perspective produces a more objective evaluation focused on legal standards rather than advocacy, which is useful for assessing the strength of your position before filing.
The Client-Facing Role
Adjust the AI's communication style for different audiences — a general counsel expects different language than a non-lawyer business executive.
Example: You are an outside counsel preparing a risk summary for a non-lawyer CFO. Explain the litigation exposure from the following employment dispute in plain business language. Avoid legal jargon. Quantify the financial exposure in ranges. Keep it under 500 words.
Why it excels: Role-based audience targeting produces output that actually serves your communication goal, eliminating the need to substantially rewrite AI-generated content for the intended reader.
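Mechanically, all four role patterns reduce to prefixing the task with a role sentence. A minimal sketch (the helper name is hypothetical; any of the roles above can be dropped in):

```python
def with_role(role: str, task: str) -> str:
    """Prefix a task with a role instruction (hypothetical helper)."""
    return f"You are {role.strip()}.\n\n{task.strip()}"

prompt = with_role(
    "a senior associate at an AmLaw 50 firm specializing in ERISA litigation",
    "Analyze the following plan amendment for compliance risks under "
    "ERISA Section 204(h) notice requirements.",
)
```

Storing roles separately from tasks lets you reuse the same task prompt with specialist, adversary, judicial, and client-facing personas and compare the outputs.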
Jurisdiction-Specific Instructions: The Detail That Changes Everything
Jurisdiction is the single most important contextual detail in most legal prompts, and the one most frequently omitted. A contract enforceability analysis under New York law and California law can reach opposite conclusions on the same facts. A motion to dismiss standard in the Third Circuit differs from the Ninth Circuit. When you fail to specify jurisdiction, the model either defaults to a majority-rule overview — useful for a law school exam, useless for client advice — or silently applies the wrong jurisdiction's law.
The 2025 prompt engineering guide for legal professionals, published jointly by the Singapore Academy of Law and Microsoft, identifies jurisdiction specificity as one of the highest-impact improvements lawyers can make to their AI prompts.
Iterative Refinement: The Multi-Turn Approach to Legal Drafting
Treating AI as a one-shot query tool wastes most of its capability. The most effective legal AI users work iteratively — building context over multiple exchanges, refining output progressively, and using each response to inform the next prompt. This mirrors how you would work with a junior associate: you give initial instructions, review the first draft, provide feedback, and refine. The same workflow applies to AI, and it produces dramatically better results.
Set Context and Constraints First
Begin with a framing prompt that establishes the matter, jurisdiction, key facts, and your role — before asking for any output. Example: "I am drafting an asset purchase agreement for a mid-market acquisition in the healthcare industry. The buyer is a private equity-backed platform company. The target is a physician practice group with 12 locations in Florida. Key issues include healthcare regulatory compliance, non-compete enforceability for physician sellers, and Stark Law/Anti-Kickback considerations. I will be asking you several questions about specific provisions."
Request a Structured First Draft
Ask for an initial output with explicit format instructions. Do not ask for a perfect draft — ask for a working draft you can react to. Example: "Draft an indemnification provision that provides the buyer with specific indemnification for pre-closing regulatory violations, including a 24-month survival period, a basket of $250,000, and a cap of 15% of the purchase price. Include a carve-out for fraud with no cap or survival limitation."
Critique and Redirect
Review the output and provide targeted feedback rather than starting over. Be specific about what to keep, what to change, and what is missing. Example: "The indemnification structure is correct, but the language is too broad on 'regulatory violations.' Narrow the definition to include only violations of the Stark Law (42 U.S.C. 1395nn), the Anti-Kickback Statute (42 U.S.C. 1320a-7b), and applicable Florida healthcare licensing statutes. Also add a mechanism for the seller to participate in the defense of any regulatory claim."
Pressure-Test the Output
Use follow-up prompts to stress-test the draft from different angles. Example: "Now review this indemnification provision as if you were seller's counsel. Identify the three provisions most unfavorable to the seller and suggest alternative language that balances buyer protection with seller exposure limitations."
Request Final Polish with Specific Standards
End with a refinement pass that specifies your exact formatting and style requirements. Example: "Finalize this provision using the defined terms established in the rest of the agreement (capitalize Buyer, Seller, Closing Date, Purchase Price). Format using section numbering consistent with ABA Model Asset Purchase Agreement conventions. Flag any internal cross-reference placeholders with [Section __]."
Practice-Area Prompt Examples
Effective prompting differs by practice area because each area has distinct analytical frameworks, output requirements, and precision demands. Below are concrete prompt examples organized by practice area, demonstrating how to apply the techniques discussed above in real workflows.
Litigation: Motion Practice
Litigation prompts should specify the procedural posture, standard of review, and court conventions. Always include the relevant procedural rules.
Example: You are a litigation associate preparing a motion for summary judgment in the U.S. District Court for the District of New Jersey. The claim is breach of a commercial lease. Draft the argument section addressing the undisputed material facts showing that the defendant-tenant failed to maintain the premises in accordance with Section 7.2 of the lease, which required 'commercially reasonable maintenance consistent with Class A office standards.' The landlord has declarations from two property inspectors and photographic evidence. Apply the summary judgment standard under Fed. R. Civ. P. 56 and Third Circuit precedent. Structure the argument using point headings. Cite to real, verifiable cases only — if you are not confident a citation is accurate, note it as [citation needed] rather than fabricating one.
Why it excels: This prompt specifies court, procedural rule, standard, factual basis, output structure, and — critically — instructs the model to flag uncertain citations rather than hallucinate them.
Transactional: M&A Due Diligence
Transactional prompts should define the deal structure, applicable frameworks, and the specific risk categories you are screening for.
Example: Review the attached services agreement and identify all provisions that would be affected by a change of control of the service provider. For each provision, state: (1) the section number, (2) the specific trigger language, (3) the consequence (termination right, consent requirement, or automatic assignment), and (4) whether the provision distinguishes between direct and indirect changes of control. Present findings in a table format. Flag any provisions where the change-of-control definition is ambiguous or could be interpreted to exclude a stock purchase transaction.
Why it excels: The prompt defines a specific analytical framework (four-part extraction per provision), requests structured output (table format), and asks for judgment calls on ambiguity — moving beyond simple extraction to analysis.
Regulatory: Compliance Analysis
Regulatory prompts must specify the applicable regulatory framework, the entity type being analyzed, and the compliance standard — not just the topic area.
Example: Our client is a fintech company that offers a buy-now-pay-later product to consumers in California, Texas, and New York. Analyze whether this product constitutes a 'loan' or 'extension of credit' under each state's consumer lending statutes, and identify the licensing requirements that would apply in each jurisdiction. For each state, cite the governing statute and any relevant regulatory guidance or no-action letters issued through 2025. Present the analysis in a jurisdiction-by-jurisdiction format with a summary comparison table.
Why it excels: Multi-jurisdictional regulatory analysis is one of AI's highest-value use cases. This prompt structures the comparison to produce work product that an associate could verify and refine rather than rebuild from scratch.
Intellectual Property: Patent Landscape
IP prompts benefit from highly technical specificity — the more precise your description of the technology, the better the analysis.
Example: You are a patent attorney evaluating freedom-to-operate risk for a new drug delivery system that uses lipid nanoparticles (LNPs) to encapsulate mRNA therapeutics for intramuscular injection. The key novel feature is a proprietary ionizable lipid with a pKa between 6.2 and 6.5. Identify the categories of patents that would be most relevant to a freedom-to-operate analysis, including: (1) LNP composition patents, (2) ionizable lipid structure patents, (3) mRNA encapsulation method patents, and (4) therapeutic application patents. For each category, describe the types of claims that could present blocking risk and the analytical framework for evaluating design-around potential.
Why it excels: Patent analysis requires technical precision that generic prompts cannot deliver. The prompt provides specific technical parameters (pKa range, delivery mechanism) that focus the analysis on commercially relevant patent categories.
Employment: Policy Drafting
Employment law prompts should specify the employer type, jurisdiction(s), workforce characteristics, and the specific policy objective.
Example: Draft a return-to-office policy for a technology company with 500 employees across California, New York, and Texas offices. The policy should require three days per week in-office beginning Q2 2026. Address: (1) ADA reasonable accommodation procedures for employees who request continued remote work, (2) state-specific requirements for expense reimbursement if the hybrid arrangement changes, (3) wage-and-hour implications for non-exempt employees who work from home on remote days, and (4) NLRA considerations for any collective response to the policy. Flag any provisions where state law differs materially across the three jurisdictions.
Why it excels: Employment policy drafting requires multi-jurisdictional awareness that the AI can provide across several legal frameworks simultaneously — something that would require consulting multiple subject-matter specialists in a traditional workflow.
Common Prompting Mistakes and How to Fix Them
After working with hundreds of legal professionals adopting AI tools, certain patterns of ineffective prompting appear repeatedly. Recognizing these mistakes is often more valuable than learning new techniques, because each one has a straightforward fix.
Asking compound questions
Prompts that bundle three or four distinct questions into a single query produce muddled answers where the model inadequately addresses each sub-question. Break compound questions into sequential, focused prompts — each building on the last.
Fix: One question per prompt. Use iterative refinement to build complexity over multiple turns.
Assuming the AI knows your case
Referring to "the contract" or "our client's situation" without providing facts forces the model to either guess or produce generic analysis. The AI has no memory of your matter unless you provide it in the current session.
Fix: Provide relevant facts in every prompt, or use the session's initial prompt to establish a persistent factual context.
Accepting the first output
Using the first response without iteration leaves significant quality on the table. First drafts from AI are like first drafts from a junior associate — they need feedback and revision. The model's second and third attempts are almost always better.
Fix: Always do at least one refinement pass. Tell the model specifically what to improve rather than regenerating from scratch.
Failing to specify output format
Leaving format unspecified produces inconsistent structure — sometimes bullet points, sometimes flowing paragraphs, sometimes a mix. The output may be analytically sound but require significant reformatting for your work product.
Fix: Specify format explicitly: "Provide this as a numbered list," "Use section headings and sub-headings," "Present in a table with columns for [X], [Y], and [Z]."
Including privileged or confidential details
Copying entire client communications or privileged strategy memos into prompts risks confidentiality, even with enterprise-grade tools. The ethical obligation exists regardless of the vendor's data handling promises.
Fix: Anonymize facts before prompting. Replace client names, specific dates, and identifying details with generic placeholders. Ask the same legal question using sanitized hypotheticals.
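A simple substitution pass can enforce the anonymization step before any text leaves your machine. This sketch assumes you supply the mapping yourself; it is a convenience, not a substitute for reviewing what you paste into a prompt:

```python
import re

def sanitize(text: str, replacements: dict[str, str]) -> str:
    """Replace client-identifying strings with generic placeholders.

    The lawyer supplies the mapping; this is a plain, case-insensitive
    string substitution and catches only the identifiers you list.
    """
    for real, placeholder in replacements.items():
        text = re.sub(re.escape(real), placeholder, text, flags=re.IGNORECASE)
    return text

facts = "Acme Corp terminated Jane Doe on March 3, 2025."
clean = sanitize(facts, {
    "Acme Corp": "[EMPLOYER]",
    "Jane Doe": "[EMPLOYEE]",
    "March 3, 2025": "[TERMINATION DATE]",
})
# clean == "[EMPLOYER] terminated [EMPLOYEE] on [TERMINATION DATE]."
```

Keeping the mapping lets you reverse the substitution when you bring the model's output back into your work product.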
Not specifying citation expectations
Without explicit instruction about citations, models will freely generate plausible-looking but fabricated case citations. This is the single highest-risk behavior in legal AI use and the source of the most widely publicized sanctions cases.
Fix: Add: "Cite only cases you are confident are real. If you are uncertain about a citation, mark it as [VERIFY] and explain the proposition it should support so I can find the correct authority."
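Once the model follows the [VERIFY] convention, flagged propositions can be collected mechanically for your verification checklist. A minimal sketch (the helper name is hypothetical):

```python
def unverified_citations(output: str) -> list[str]:
    """Collect every line of model output carrying a [VERIFY] flag,
    so each flagged proposition lands on a checklist for manual
    verification against a real research database."""
    return [line.strip() for line in output.splitlines() if "[VERIFY]" in line]

memo = (
    "The claim likely fails under the cited authority [VERIFY].\n"
    "Rule 56 requires no genuine dispute of material fact.\n"
    "A second supporting case may exist on this point [VERIFY]."
)
flagged = unverified_citations(memo)  # two flagged lines
```

An empty result does not mean every citation is real; it only means the model raised no flags, so spot-checking remains necessary.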
Prompt Templates for Common Legal Tasks
The following templates provide reusable starting points for high-frequency legal tasks. Customize the bracketed sections for your specific matter. These templates incorporate the techniques discussed throughout this guide — role assignment, jurisdiction specification, output formatting, and citation handling.
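If you keep these templates in a snippet library, a small helper can substitute the bracketed sections and fail loudly when one is left unfilled, since an unfilled placeholder usually means a missing jurisdiction or standard. A sketch, assuming the bracket convention used in the templates below:

```python
import re

PLACEHOLDER = re.compile(r"\[([^\[\]]+)\]")

def fill_template(template: str, values: dict[str, str]) -> str:
    """Substitute bracketed sections in a prompt template and raise
    if any section is left unfilled (hypothetical helper)."""
    def swap(match):
        key = match.group(1)
        if key not in values:
            raise KeyError(f"template section not filled: [{key}]")
        return values[key]
    return PLACEHOLDER.sub(swap, template)

filled = fill_template(
    "You are a [practice area] attorney licensed in [state]. "
    "Research the following question: [specific legal question]",
    {
        "practice area": "litigation",
        "state": "California",
        "specific legal question": "Is a five-year customer "
                                   "non-solicit enforceable?",
    },
)
```

Note that literal markers such as [VERIFY] would also be treated as placeholders by this pattern, so supply them as values or add them after filling.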
Case Law Research Memo
For researching a specific legal question and producing a structured analysis with citations.
Example: You are a [practice area] attorney licensed in [state]. Research the following question: [specific legal question]. Apply [governing statute or common law standard] as interpreted by [specific court or jurisdiction]. Provide your analysis in a memo format with the following sections: (1) Question Presented, (2) Brief Answer, (3) Discussion with relevant authorities, (4) Conclusion. Cite only authorities you are confident exist. Mark uncertain citations as [VERIFY]. Limit to [X] pages.
Why it excels: This template forces jurisdiction specificity, output structure, and citation discipline into every research query.
Contract Clause Comparison
For comparing contract language against your firm's standard positions or market terms.
Example: Compare the following [clause type] from the counterparty's draft against market-standard language for [deal type] transactions. For each material deviation, provide: (1) the specific language that deviates, (2) the risk to [our client's position], (3) suggested alternative language, and (4) your assessment of whether this is a point worth negotiating or an acceptable market variation. Present as a table. Counterparty language: [paste clause].
Why it excels: Structured comparison with risk assessment and negotiation prioritization moves the output from observation to actionable advice.
Deposition Preparation Outline
For generating an organized outline of topics and questions for deposition preparation.
Example: Prepare a deposition outline for [witness name/role] in [case type] litigation. The key issues in this case are: [list issues]. This witness is expected to have knowledge of: [list knowledge areas]. Organize the outline into topic areas, with 5-7 questions per topic, progressing from foundational/establishing questions to substantive questions targeting [specific facts or admissions we need]. Include suggested document references where the witness should be confronted with prior statements or documentary evidence. Note any areas where the witness may invoke privilege.
Why it excels: Deposition outlines benefit from AI's ability to systematically organize topics and generate comprehensive question sets that an attorney can then refine based on case strategy.
Client Advisory Letter
For drafting client-facing communications that explain legal issues in accessible language.
Example: Draft a client advisory letter regarding [legal development — new regulation, court decision, enforcement action]. The audience is [client type — GC, business executive, board of directors]. Explain: (1) what happened, (2) why it matters to the client's business, (3) specific action items the client should take within [timeframe], and (4) what we recommend as next steps. Use plain language — avoid legal jargon or define it when unavoidable. Tone should be [advisory/urgent/informational]. Keep under [X] words.
Why it excels: Client communications fail when they read like legal memos. This template forces audience-appropriate language and an action-oriented structure.
Regulatory Filing Review
For reviewing a draft filing or application against specific regulatory requirements.
Example: Review the following draft [filing type] for compliance with [specific regulation or regulatory body requirements]. Check for: (1) all required disclosures under [cite specific rule sections], (2) consistency of information across sections, (3) any statements that could be interpreted as misleading under [applicable standard], and (4) completeness of exhibits and attachments required under [rule]. Flag each issue with a severity rating: Critical (must fix before filing), Important (should fix), or Advisory (recommended improvement). Present as a numbered list organized by severity.
Chain-of-Thought Prompting for Complex Legal Analysis
Chain-of-thought (CoT) prompting instructs the model to reason through a problem step by step rather than jumping to a conclusion. This technique is particularly valuable for legal analysis because legal reasoning is inherently sequential — you identify the legal standard, apply facts to each element, evaluate counterarguments, and reach a conclusion. Research consistently shows that CoT prompting improves accuracy on complex reasoning tasks, and legal analysis is one of the most demanding reasoning domains. The technique is simple: explicitly instruct the model to show its work.
Models with extended thinking capabilities — such as Claude's thinking mode or OpenAI's o1 — inherently apply chain-of-thought reasoning internally, but explicitly requesting step-by-step analysis in your prompt still improves output structure and transparency.
Chain-of-Thought Example: Privilege Analysis
- Prompt: "Analyze whether the following communication is protected by attorney-client privilege. Think through this step by step."
- Step 1 — Identify the communication: "The email was sent from the VP of Compliance to in-house counsel, copying two business executives, attaching a draft internal investigation report."
- Step 2 — Apply the elements: Instruct the model to evaluate: (a) Was legal advice sought or provided? (b) Was the communication made in confidence? (c) Does the presence of non-lawyer recipients on the CC line waive the privilege? (d) Does the work-product doctrine provide independent protection for the investigation report?
- Step 3 — Evaluate complications: "Now consider: Does the crime-fraud exception potentially apply if the investigation uncovered potential violations? How does Upjohn apply to the scope of privilege for communications with employees during the investigation?"

This layered approach produces analysis that tracks the actual reasoning a court would apply, rather than a conclusory statement about whether privilege exists.
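Layered prompts like the privilege example can be generated from a question plus an ordered list of reasoning steps. A minimal sketch (hypothetical helper):

```python
def chain_of_thought_prompt(question: str, steps: list[str]) -> str:
    """Wrap a legal question with explicit numbered reasoning steps,
    instructing the model to work element by element before concluding."""
    numbered = "\n".join(f"Step {i}: {step}" for i, step in enumerate(steps, 1))
    return (
        f"{question}\n\n"
        f"Think through this step by step:\n{numbered}\n\n"
        "State your conclusion only after completing every step."
    )

prompt = chain_of_thought_prompt(
    "Is the following communication protected by attorney-client privilege?",
    [
        "Identify the sender, recipients, and subject of the communication.",
        "Apply each privilege element: legal advice, confidence, waiver.",
        "Evaluate complications: crime-fraud exception, work-product overlap.",
    ],
)
```

Ordering the steps yourself keeps the legal framework under your control; the model supplies the application, not the structure.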
Ethics, Confidentiality, and Professional Guardrails
Effective prompting is not just about better outputs — it also requires awareness of your professional obligations when using AI tools. ABA Formal Opinion 512 (2024) established the first comprehensive ethics framework for lawyers using generative AI, addressing competence, confidentiality, communication, candor, supervision, and fees. Your prompting practices must operate within these boundaries.
When Better Prompting Is Not the Answer
Not every AI shortcoming is a prompting problem. Recognizing the distinction between prompt limitations and tool limitations saves time and prevents over-reliance on output that no amount of prompt refinement can fix. There are categories of tasks where current AI tools have fundamental constraints that better prompting cannot overcome.
Factual accuracy of citations
No prompting technique can guarantee that an AI-generated case citation is real. Models generate text based on probability, not by looking up a database. Even explicit instructions to "only cite real cases" reduce but do not eliminate hallucination. For citation-critical work, use tools with integrated legal research databases (Harvey with LexisNexis, CoCounsel with Westlaw) or independently verify every citation.
Recommendation: Use grounded research tools for citation-dependent work. Use general-purpose models for analysis and drafting where you supply the citations.
Recent legal developments
Model training data has a cutoff date. Events, cases, and statutory changes after that date are invisible to the model. Prompting the model to analyze a statute enacted after its training cutoff will produce either a hallucinated analysis or an outdated one. No prompt can inject knowledge the model does not have.
Recommendation: For questions involving recent developments, provide the relevant text directly in the prompt or use tools with web search and retrieval-augmented generation (RAG) capabilities.
Mathematical and financial calculations
LLMs are language models, not calculators. They can structure a damages analysis framework but should not be trusted to perform the arithmetic. Complex present-value calculations, tax computations, and financial modeling require specialized tools.
Recommendation: Use AI to structure the analytical framework and identify relevant variables, then perform calculations in a spreadsheet or financial modeling tool.
Document review at scale
Prompting a general-purpose AI to review 10,000 documents is impractical due to context window limitations. Even models with large context windows cannot maintain accuracy and consistency across massive document sets. Purpose-built document review platforms (Relativity, Harvey Vault) are designed for this scale.
Recommendation: Use specialized document review platforms for large-scale review. Use general AI tools for analyzing individual documents or small sets where the full text fits within the context window.
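A rough pre-check can tell you whether a document even fits a model's context window before you attempt single-prompt review. The 4-characters-per-token ratio is a common rule of thumb for English text, not an exact tokenizer, and the 200,000-token default is an assumption; substitute your tool's documented limit:

```python
def fits_in_context(document_text: str,
                    context_tokens: int = 200_000,
                    chars_per_token: int = 4) -> bool:
    """Rough estimate of whether a document fits the model's context
    window. Both defaults are assumptions: ~4 chars/token is a rule
    of thumb for English, and 200K should be replaced with your
    tool's documented limit."""
    estimated_tokens = len(document_text) / chars_per_token
    # Leave 20% headroom for instructions and the model's reply.
    return estimated_tokens < context_tokens * 0.8
```

Anything that fails this check belongs in a purpose-built review platform or a chunked retrieval workflow rather than a single prompt.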
Privileged strategy decisions
AI can analyze legal issues and present options, but it cannot make judgment calls that depend on client relationships, business context, risk tolerance, or litigation strategy. These are inherently human decisions that require professional judgment AI does not possess.
recommendation: Use AI to generate options, analyze scenarios, and pressure-test reasoning — but keep strategic decision-making with the humans who understand the full context.
Key Takeaways
1. Generic prompts produce generic results — the quality of AI output is directly proportional to the specificity and structure of your input.
2. Apply the IRAC framework to prompt construction: specify the Issue, governing Rule, relevant facts for Application, and desired output format for Conclusion.
3. Role-based prompting (assigning the AI a specific legal persona and seniority level) meaningfully improves the depth, tone, and analytical framework of responses.
4. Jurisdiction is the single most important contextual detail most lawyers omit — always specify state, court level, and governing statutes.
5. Work iteratively: set context first, request a structured draft, provide targeted feedback, pressure-test from the opposing perspective, then polish.
6. Always instruct the model to flag uncertain citations as [VERIFY] rather than fabricating them — hallucinated case citations remain the highest-risk AI failure mode for lawyers.
7. Chain-of-thought prompting (asking the model to reason step by step through each element) improves accuracy on complex multi-factor legal analyses.
8. Recognize when better prompting is not the answer: citation verification, recent legal developments, mathematical calculations, and strategic judgment calls require tools or skills that prompting alone cannot provide.
References
- [1] American Bar Association, "ABA Formal Opinion 512: Generative Artificial Intelligence Tools," Standing Committee on Ethics and Professional Responsibility, July 29, 2024.
- [2] Singapore Academy of Law & Microsoft, "Prompt Engineering for Lawyers: Leveraging Generative AI in the Legal Profession," SAL, 2025.
- [3] Artificial Lawyer, "Prompt Engineering Is The New Drafting," April 30, 2025.
- [4] Anthropic, "Prompt Engineering Overview," Claude API Documentation, 2025.
- [5] Yu, F. et al., "Can ChatGPT Perform Reasoning Using the IRAC Method in Analyzing Legal Scenarios Like a Lawyer?" Findings of EMNLP 2023, arXiv:2310.14880.
- [6] U.S. Legal Support, "Legal AI Prompting Best Practices," 2025.