
The Legal AI Security Checklist: What to Ask Before Your Firm Adopts Any AI Tool

A Practitioner's Guide to Evaluating AI Vendors for Confidentiality, Compliance, and Risk

Last updated: February 11, 2026

Every AI vendor will tell you their platform is secure. Most will point to certifications, encryption standards, and compliance badges. But for law firms, the security question is fundamentally different from what a typical SaaS buyer faces. Your firm holds attorney-client privileged communications, work product, trade secrets, and confidential client data that carries ethical obligations no other industry shares. A security failure is not just a breach notification — it is a potential waiver of privilege, a disciplinary complaint, and a malpractice claim. This guide provides a structured framework for evaluating any AI tool your firm considers adopting, with specific questions to ask vendors, red flags to watch for, and the regulatory context that makes legal AI security uniquely demanding.

Why Legal AI Security Is Different

Law firms are not ordinary businesses, and the data they handle is not ordinary data. Understanding why legal AI security demands a higher standard than general enterprise security is the first step toward proper evaluation.

Attorney-Client Privilege at Stake: Privilege can be waived through disclosure to third parties. When client data enters an AI system, the firm must confirm that the vendor relationship preserves privilege. If the vendor's employees can access your prompts, or if data is used for model training visible to other users, the privilege analysis becomes complicated. Courts have not yet drawn clear lines here, which makes caution the only defensible approach.
Ethical Duties Beyond Contract Law: ABA Model Rule 1.6 requires lawyers to make reasonable efforts to prevent unauthorized disclosure of client information. This is not a contractual obligation you can negotiate away — it is a professional duty enforced by state bars. A vendor's standard terms of service do not satisfy this obligation; the firm must independently assess whether the vendor's practices meet the standard of reasonable care.
Regulatory Multiplicity: Law firms often handle data subject to multiple regulatory frameworks simultaneously: HIPAA for healthcare clients, GDPR for European matters, state privacy laws, SEC regulations for securities work, and sector-specific rules. An AI tool must accommodate the most restrictive applicable standard, not just the vendor's home jurisdiction.
Adversarial Exposure: Litigation data is inherently adversarial. Opposing counsel, regulators, and courts may scrutinize how AI tools processed case data. Discovery requests may target AI interactions. The firm must be able to explain and defend its data handling practices in a courtroom context — a requirement that most enterprise buyers never face.

The Core Question Every Firm Must Answer

When an attorney pastes a confidential client document into an AI tool, where does that data go? This single question branches into dozens of sub-questions about storage, processing, retention, training, logging, and third-party access.

If you cannot answer this question with specificity for every AI tool your firm uses, you have a security gap. The answer should be documented, reviewed by your security team, and updated whenever the vendor changes its terms or architecture.

79% of legal professionals now use AI tools in their practice, but only 10% of firms have formal AI use policies. That gap between adoption and governance is where security incidents happen.

Data Handling: Where Does Your Data Actually Go?

The most important security assessment you can perform is tracing the complete lifecycle of data through a vendor's system. Vendors often provide high-level assurances about encryption and privacy, but the details matter enormously. Ask for architecture documentation and do not accept marketing language as a substitute for technical specifics.

Is data encrypted in transit and at rest? What encryption standards are used?

TLS 1.2+ for transit and AES-256 for storage are table stakes. Ask specifically whether data is encrypted at the application layer or only at the infrastructure layer. Application-layer encryption means the vendor's own engineers cannot read your data without additional access controls. Infrastructure-layer encryption only protects against physical theft of hardware.
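
To make the distinction concrete, the sketch below shows application-layer encryption in miniature: the document is encrypted with AES-256-GCM before it leaves the application, so the storage layer and its operators only ever see ciphertext. This is an illustration using Python's cryptography library under simplified assumptions, not any vendor's actual implementation; in practice the key would live in an HSM or KMS with audited access, which is where the real protection comes from.

```python
# Minimal sketch of application-layer encryption (AES-256-GCM).
# Illustrative only: real systems keep the key in an HSM/KMS,
# never alongside the data it protects.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # held by the application, not the storage layer

def encrypt_document(plaintext: bytes, matter_id: str) -> tuple[bytes, bytes]:
    """Encrypt before the data crosses the application boundary."""
    nonce = os.urandom(12)            # unique per message, never reused with the same key
    aad = matter_id.encode()          # binds the ciphertext to its matter
    return nonce, AESGCM(key).encrypt(nonce, plaintext, aad)

def decrypt_document(nonce: bytes, ciphertext: bytes, matter_id: str) -> bytes:
    return AESGCM(key).decrypt(nonce, ciphertext, matter_id.encode())

nonce, ct = encrypt_document(b"privileged memo text", matter_id="2026-0042")
assert decrypt_document(nonce, ct, "2026-0042") == b"privileged memo text"
```

With infrastructure-layer encryption only, the equivalent of `key` sits with the cloud provider, and anyone with database access reads plaintext; that is the gap the question above is probing.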

Where are prompts and responses stored, and for how long?

Many AI platforms log every interaction for debugging, quality assurance, or abuse detection. Ask whether logs include the full text of prompts and responses. Request the retention schedule in writing. Some vendors retain interaction logs for 30 days; others retain them indefinitely. For privileged communications, shorter retention with verifiable deletion is strongly preferable.
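
Where a vendor provides exported interaction logs or a deletion API, the retention schedule becomes something you can verify rather than take on faith. Below is a minimal sketch of such a check, assuming a 30-day contractual window and timestamped log exports (both assumptions, not any vendor's actual interface):

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)  # assumed contractual retention window

def overdue_entries(log_timestamps: list[datetime]) -> list[datetime]:
    """Flag interaction logs the vendor should already have deleted."""
    now = datetime.now(timezone.utc)
    return [ts for ts in log_timestamps if now - ts > RETENTION]

# Any non-empty result is evidence the vendor is not honoring its schedule.
stale = overdue_entries([datetime(2025, 11, 1, tzinfo=timezone.utc)])
```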

Does the vendor use subprocessors, and do they have access to your data?

Most AI vendors rely on cloud infrastructure providers (AWS, Azure, Google Cloud) and may use additional subprocessors for specific functions. Request a complete subprocessor list and understand what data each subprocessor can access. Your vendor contract is only as strong as the weakest link in their supply chain.

Can you get a data processing agreement (DPA) that specifies data handling obligations?

A DPA should specify data categories processed, processing purposes, retention periods, deletion procedures, subprocessor notification requirements, and breach notification timelines. If a vendor will not execute a DPA, that is a significant red flag for any firm handling confidential client data.

What happens to your data if you terminate the relationship?

Request contractual guarantees for data return and certified deletion upon termination. Ask how long deletion takes and whether backups are included. Some vendors retain backup copies for 90 days or more after account termination — during which your client data remains in their systems.

The Model Training Question

Whether an AI vendor trains its models on your data is perhaps the most consequential security question for law firms. If your client's confidential merger details, litigation strategy, or privileged communications become training data, that information could theoretically surface in responses to other users. Even if the probability is low, the ethical and liability exposure is unacceptable.

Does the vendor use customer inputs to train, fine-tune, or improve its AI models?

This is a binary question that demands a binary answer. 'We may use anonymized data to improve our services' is not acceptable — anonymization of legal text is unreliable, and even anonymized case details can be identifying. The answer must be an unambiguous no, backed by contractual language. Check the vendor's general terms of service separately from any enterprise agreement, as they may differ.

Does the vendor distinguish between model training and other data uses like safety monitoring?

Some vendors do not train on customer data but do use it for abuse detection, content filtering, or safety evaluations that may involve human review. Understand exactly what human review means: who reviews data, under what circumstances, and what confidentiality obligations those reviewers have. Safety monitoring that involves human review of privileged content raises the same concerns as model training.

Are training opt-out policies contractual commitments or just current practices?

A vendor's privacy page may say they do not train on enterprise customer data, but if this is not in your contract, it can change with a terms-of-service update. Ensure the no-training commitment is in a signed agreement with breach remedies, not just a blog post or FAQ.

What happens to data in the model's context window? Is it persisted beyond the session?

Understand the difference between data used in a conversation context (which should be ephemeral) and data stored in the vendor's systems. Some platforms offer persistent memory features that retain information across sessions. If your attorneys use such features with client data, that data is stored in the vendor's systems in a form designed to be recalled — which has very different risk implications than ephemeral processing.

Some vendors have been caught quietly updating terms of service to grant themselves broader rights to customer data for AI training. Review your vendor agreements at least annually, and set up alerts for terms-of-service changes.

The Certification Landscape: What SOC 2, ISO 27001, and HIPAA Actually Mean

Security certifications are necessary but not sufficient. Understanding what each certification covers — and what it does not — prevents false confidence. A vendor displaying a SOC 2 badge has demonstrated something meaningful, but it does not mean your specific concerns about privilege and confidentiality have been addressed.

SOC 2 Type II

Verifies that a vendor's controls are designed effectively AND have operated effectively over a period (typically 6-12 months). Covers five Trust Services Criteria: security, availability, processing integrity, confidentiality, and privacy. Type II is significantly more meaningful than Type I, which only confirms controls exist at a point in time.

Limitation: SOC 2 is a US-focused attestation, not a certification. The scope is defined by the vendor — a narrow scope can exclude the systems that actually process your data. Always request the full SOC 2 report (not just the summary letter) and verify that the systems described match the ones handling your firm's data.

ISO 27001

An international standard for information security management systems (ISMS). Requires a comprehensive risk assessment and documented controls mapped to the 93 controls in Annex A of the 2022 revision. More prescriptive than SOC 2 about what must be addressed. Preferred by international firms and required by many European clients.

Limitation: ISO 27001 certifies the management system, not the specific security of a product. A vendor can be ISO 27001 certified while the specific product you use operates outside the certified scope. Ask whether the certification scope includes the specific services and data centers that will handle your data.

HIPAA Compliance

Required if the AI tool will process protected health information (PHI) — common for firms with healthcare clients, personal injury practices, or benefits work. The vendor must sign a Business Associate Agreement (BAA) and meet specific safeguards for PHI.

Limitation: There is no official HIPAA certification. Vendors claiming to be 'HIPAA certified' are using marketing language. What matters is whether they will sign a BAA and whether their controls satisfy the Security Rule's technical safeguards. Ask for their most recent HIPAA risk assessment.

FedRAMP Authorization

Required for AI tools used in matters involving federal government data. Demonstrates rigorous security evaluation by a Third-Party Assessment Organization (3PAO). Relevant for firms with government contracts or regulatory practice areas.

Limitation: FedRAMP authorization is expensive and time-consuming, so many legal AI startups do not have it. If your firm handles federal government work, this gap may limit which AI tools you can use for those matters.

If a vendor is already SOC 2 compliant, they have completed approximately 60-70% of the work needed for ISO 27001 certification. Firms with international clients should consider requiring both.

Data Residency and Jurisdiction

Where your data is physically stored and processed determines which laws govern it. For firms with multinational practices or clients subject to data localization requirements, this is not an academic question — it has direct compliance implications.

In which countries or regions is data stored and processed?

AI workloads often run in specific cloud regions based on GPU availability, not customer preference. A vendor may store your data in the US but process it through GPU clusters in another region. Ask about both storage and processing locations. For GDPR-subject data, processing outside the EEA requires specific legal mechanisms like Standard Contractual Clauses.

Can the firm select or restrict data residency to specific regions?

Some enterprise AI vendors offer data residency controls that let you specify where your data is stored and processed. This is particularly important for firms handling EU client matters, Canadian privacy-regulated data, or work subject to data localization laws in jurisdictions like China, Russia, or India.

How does the vendor handle cross-border data transfers?

After the Schrems II decision invalidated the EU-US Privacy Shield, cross-border transfers require alternative mechanisms. The EU-US Data Privacy Framework (DPF) now provides one path, but the vendor must be certified. Standard Contractual Clauses remain the most common mechanism. Ask whether the vendor has conducted a Transfer Impact Assessment for relevant jurisdictions.

Is the vendor subject to laws that could compel data disclosure?

The US CLOUD Act allows US authorities to compel disclosure of data stored abroad by US-headquartered companies. If your firm handles matters adverse to US government interests, this may be relevant. Similarly, the Chinese Cybersecurity Law and National Security Law impose broad disclosure obligations on companies operating in China.

Authentication, Access Controls, and Audit Logging

Security architecture at the firm level is just as important as the vendor's infrastructure. A secure AI platform deployed with weak access controls is a liability. Evaluate both what the vendor provides and what your firm must configure.

Single Sign-On (SSO) and Multi-Factor Authentication: The AI platform should integrate with your firm's identity provider (Okta, Azure AD, etc.) for SSO. MFA should be required, not optional. Consumer-grade AI tools that only offer username/password authentication are inappropriate for law firm use. Ask whether the vendor supports SAML 2.0, OpenID Connect, or both.
Role-Based Access Controls (RBAC): Different users should have different access levels. Partners may need access to all practice areas; associates may need restrictions by matter or client. Paralegals and staff may need different capabilities than attorneys. The platform should support granular permissions that mirror your firm's existing access control policies.
Matter-Level Data Segregation: Data from one client matter must never be accessible to users working on other matters, especially when the firm has ethical walls or conflict screens in place. Ask whether the vendor's architecture supports logical or physical data segregation and how it enforces ethical walls.
Comprehensive Audit Logging: Every AI interaction should be logged with user identity, timestamp, input summary, and output summary. Logs must be tamper-resistant and exportable. These logs serve dual purposes: security monitoring and potential evidence in malpractice defense. If the vendor cannot produce a complete audit trail of who accessed what and when, that is a dealbreaker.
Data Loss Prevention (DLP) Integration: The platform should integrate with your firm's DLP tools to prevent sensitive data categories (Social Security numbers, financial account numbers, personally identifiable information) from being inadvertently submitted to the AI system. This is a defense-in-depth measure that catches human error before it becomes a security incident.
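
The DLP layer in particular is easy to prototype and worth demanding from vendors. The sketch below shows the core idea of a pre-submission filter that blocks prompts containing obvious sensitive patterns; the patterns and category names are illustrative stand-ins, and production DLP products use validated detectors (checksums, context analysis) rather than bare regular expressions.

```python
import re

# Illustrative patterns only; real DLP uses validated detectors
# (e.g., Luhn checks for card numbers) and context-aware matching.
BLOCKED_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the sensitive-data categories detected in a prompt."""
    return [name for name, rx in BLOCKED_PATTERNS.items() if rx.search(prompt)]

violations = screen_prompt("Client SSN is 123-45-6789; please summarize.")
if violations:
    raise PermissionError(f"Blocked by DLP policy: {', '.join(violations)}")
```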

Bar Association Ethics Guidance on AI

The ethics landscape for legal AI has evolved rapidly. ABA Formal Opinion 512, issued in July 2024, provides the national framework, while individual state bars have issued their own guidance that may impose additional requirements. Firms must track both the national standard and the rules in every jurisdiction where they practice.

ABA Formal Opinion 512 (July 2024): The ABA's first formal opinion on generative AI addresses six ethical areas: competence (Model Rule 1.1), confidentiality (Model Rule 1.6), communication with clients (Model Rule 1.4), candor toward the tribunal (Model Rules 3.1 and 3.3), supervisory responsibilities (Model Rules 5.1 and 5.3), and reasonable fees. Lawyers must understand the technology they use, protect client information when using AI tools, and cannot charge clients for time spent learning a technology for general use.
Confidentiality Is the Central Obligation: Opinion 512 emphasizes that attorneys must make reasonable efforts to prevent unauthorized disclosure of client information when using AI tools. This means evaluating the vendor's data practices before use, not after. Simply clicking 'I agree' on a vendor's terms of service does not satisfy the duty of competence or confidentiality.
State Bar Variations: Florida Bar Opinion 24-1 (January 2024), the New York State Bar Association AI Task Force report (April 2024), North Carolina 2024 Formal Ethics Opinion 1, Texas Opinion No. 705 (February 2025), and the Oregon State Bar Formal Opinion 2025-205 all address AI use. While consistent in broad principles, state opinions vary in specificity. Some require client disclosure of AI use; others address billing practices in detail.
Supervisory Duties Extend to AI: Partners and supervising attorneys have ethical obligations to ensure that attorneys and staff under their supervision use AI tools competently. Under Model Rules 5.1 and 5.3, a managing partner who allows firm-wide use of an AI tool without adequate training, policies, or oversight may face personal disciplinary exposure.

The ethics landscape is evolving rapidly. Track new opinions through the ABA Center for Professional Responsibility and your state bar's ethics committee. As of early 2026, over 35 states have issued some form of AI guidance for lawyers.

Insurance and Malpractice Implications

The intersection of AI use and legal malpractice insurance is a developing area that every firm must address proactively. AI adoption has surged from 22% to 80% of firms in just two years, but insurance products and coverage terms have not kept pace. Firms that assume their existing malpractice policy covers AI-related claims may be dangerously wrong.

Coverage Gaps Are Real: Lawyers' professional liability (LPL) policies typically do not explicitly exclude AI-related claims, but coverage depends on whether AI-assisted work meets the policy's definition of 'professional services.' If a lawyer relies on an AI tool without exercising independent professional judgment, an insurer could argue that no professional service was actually provided — and no professional service means no coverage.
AI Exclusions Are Emerging: Some insurers have begun adding AI-related exclusions or limitations to professional liability policies. These exclusions may target losses arising from AI-generated content, automated decision-making, or reliance on AI without human verification. Review your policy's exclusions section carefully and ask your broker whether AI exclusions have been added or are being considered.
The Sanctions Risk: Courts have sanctioned attorneys for submitting AI-generated briefs containing fabricated citations. If an insurer determines that the sanctioned conduct was not a 'professional service' or resulted from a failure to exercise reasonable care, the claim may not be covered. The Mata v. Avianca line of cases has put insurers on notice about this specific risk.
Emerging AI-Specific Coverage: Some insurers, including Munich Re, now offer AI-specific coverage that addresses financial losses from AI errors and third-party liabilities. As AI adoption becomes standard, expect AI-specific endorsements to become available for legal malpractice policies. Proactive firms should discuss AI coverage with their brokers now rather than after an incident.

Nearly half (45%) of law firms plan to upgrade their insurance coverage in response to AI adoption, up from just 14% the prior year. Contact your insurance broker to review your current policy's treatment of AI-related risks.

The Structured Security Checklist: Questions for Every Vendor

Use this checklist during vendor evaluations. These questions are designed to surface the specific risks that matter for law firms. A vendor that cannot answer these questions clearly — or that resists answering them — is not ready for law firm deployment.

1. Do you train your AI models on customer data, including prompts, uploaded documents, or generated outputs?

The only acceptable answer is an unqualified 'no' backed by contractual language. 'Not currently' or 'only in anonymized form' are not sufficient. Obtain a written commitment in the master services agreement, not just a marketing FAQ.

2. Can you provide your current SOC 2 Type II report covering the systems that will handle our data?

Request the full report, not a summary letter. Verify the scope includes the specific product and infrastructure your firm will use. Check the testing period — a report from 18 months ago may not reflect current controls. If the vendor only has SOC 2 Type I, ask when Type II will be available.

3. Where is our data stored and processed, and can we restrict it to specific regions?

Get specific data center locations, not just cloud provider names. Confirm that both storage and GPU processing stay within your required jurisdictions. If the vendor cannot offer data residency guarantees, assess whether this is acceptable given your client base and regulatory obligations.

4. What is your data retention policy, and can we configure shorter retention windows?

Understand what is retained (prompts, outputs, metadata, logs), for how long, and whether retention is configurable per customer. Shorter retention reduces risk. Ask whether backups follow the same retention schedule or persist longer.

5. Do you support SSO via SAML 2.0 or OpenID Connect, and is MFA enforced?

SSO integration with your identity provider is essential for centralized access management. MFA must be required, not optional. Ask about support for conditional access policies, device trust, and session management.

6. What audit logging is available, and can we export logs to our SIEM?

Logs should capture user identity, timestamp, action type, and data accessed. Real-time log streaming to your Security Information and Event Management (SIEM) system enables proactive threat detection. If the vendor only provides basic usage dashboards without exportable audit logs, their platform is not enterprise-ready.
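
One common way to meet the tamper-resistance requirement is hash-chaining: each log entry's hash covers the previous entry, so any after-the-fact edit or deletion breaks the chain. The sketch below illustrates the scheme; the field names are generic placeholders, not any particular SIEM's event format.

```python
import hashlib
import json
from datetime import datetime, timezone

GENESIS = "0" * 64

def append_event(log: list[dict], user: str, action: str, resource: str) -> None:
    """Append an audit event whose hash covers the previous entry's hash."""
    event = {
        "user": user,
        "action": action,
        "resource": resource,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": log[-1]["hash"] if log else GENESIS,
    }
    event["hash"] = hashlib.sha256(json.dumps(event, sort_keys=True).encode()).hexdigest()
    log.append(event)

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash; a single edited or removed entry fails verification."""
    prev = GENESIS
    for event in log:
        body = {k: v for k, v in event.items() if k != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if event["prev_hash"] != prev or event["hash"] != expected:
            return False
        prev = event["hash"]
    return True

trail: list[dict] = []
append_event(trail, "jsmith", "prompt_submitted", "matter/2026-0042")
assert verify_chain(trail)
```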

7. Who at your organization can access our data, and under what circumstances?

Understand the vendor's internal access controls. Support engineers, safety reviewers, and developers may have varying levels of access. Ask about background checks, confidentiality agreements, and access approval processes. Zero-trust architecture where no employee can access customer data without specific authorization and logging is the gold standard.

8. What is your breach notification timeline, and what information will you provide?

Your firm's ethical obligations may require faster notification than the vendor's standard timeline. A 72-hour notification window (matching GDPR requirements) is a reasonable expectation. The notification should include what data was affected, when the breach occurred, and what remediation steps are being taken.

9. Will you sign a Data Processing Agreement (DPA) and, if applicable, a Business Associate Agreement (BAA)?

A DPA should specify processing purposes, data categories, subprocessor obligations, and deletion requirements. A BAA is required if any protected health information will be processed. Vendors that refuse to sign these agreements are not suitable for law firm use.

10. What is your incident response plan, and when was it last tested?

Ask for a summary of the vendor's incident response procedures and the date of their last tabletop exercise or simulation. A plan that has never been tested is not a plan — it is a document. Look for vendors that conduct annual or more frequent incident response exercises.

Red Flags That Should Stop an Evaluation

Some vendor behaviors should immediately halt your evaluation process. These are not minor concerns that can be negotiated away — they represent fundamental misalignments with the security requirements of legal practice.

1. No enterprise agreement available — only consumer-grade terms of service

If the vendor does not offer enterprise agreements with negotiable terms, their platform was not built for professional services. Consumer terms typically include broad data usage rights that are incompatible with attorney confidentiality obligations.

2. Data is used for model training with opt-out rather than opt-in

An opt-out model means your data is being used for training by default. Even if you opt out, data submitted before opting out may have already been incorporated. For legal data, the only acceptable model is opt-in, with no training as the default.

3. No SOC 2 report and no credible timeline for obtaining one

SOC 2 Type II is the minimum security assurance threshold for any tool handling confidential client data. A startup that is 'working toward' SOC 2 without a specific auditor, timeline, and scope is not ready for law firm deployment.

4. Vendor cannot identify where data is stored and processed

If the vendor cannot tell you which cloud provider, which regions, and which data centers handle your data, they either do not know or do not want you to know. Neither is acceptable.

5. No multi-factor authentication or SSO support

In 2026, a platform that relies on username and password alone is not taking security seriously. SSO and MFA are fundamental controls, not premium features.

6. Resistance to signing a Data Processing Agreement

A vendor that refuses to commit contractually to specific data handling obligations is telling you that their current practices may not withstand scrutiny. This is a dealbreaker, full stop.

7. No penetration testing or third-party security audits

Vendors should conduct annual penetration testing by independent third parties and be willing to share summary findings. Internal-only security testing is insufficient — it is the security equivalent of grading your own homework.

8. Vague or missing breach notification commitments

If the vendor's contract does not specify a breach notification timeline or the information that will be provided, you may not learn about a breach until it is too late to fulfill your own ethical and regulatory obligations.

Building Your Firm's AI Security Framework

Vendor evaluation is necessary but not sufficient. Your firm needs an internal framework that governs how AI tools are selected, deployed, monitored, and retired. This framework should be a living document, reviewed and updated at least annually.

1. Establish an AI Governance Committee

Include representatives from firm leadership, IT/security, risk management, ethics/compliance, and practicing attorneys from multiple practice groups. This committee should own the AI policy, approve new tools, and monitor ongoing compliance. Without executive sponsorship, AI governance becomes an unfunded mandate.

2. Create a Firm-Wide AI Acceptable Use Policy

Define which AI tools are approved for use, what data categories can be submitted to each tool, prohibited uses (such as submitting privileged communications to consumer-grade AI), and consequences for policy violations. Distribute the policy to all attorneys and staff, and require annual acknowledgment.

3. Implement a Vendor Assessment Process

Use the checklist in this guide as the foundation for a standardized vendor security assessment. Every AI tool — including free trials and personal subscriptions — must go through this process before any client data touches the platform. Shadow AI adoption is the number one risk most firms face.

4. Deploy Technical Controls

Implement network-level controls to block unapproved AI platforms. Deploy DLP tools to prevent sensitive data from being pasted into consumer AI chatbots. Configure approved tools with the most restrictive settings available, then loosen only where justified by business need.
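
One way to make those restrictive defaults enforceable rather than aspirational is policy-as-code: express the acceptable-use matrix (which approved tools may receive which data categories) in a machine-checkable form that gateways and integrations consult before any data leaves the firm. Below is a minimal deny-by-default sketch, with placeholder tool names and categories:

```python
# Hypothetical acceptable-use matrix; tool names and data categories
# are placeholders for whatever your firm's AI policy defines.
APPROVED_USE: dict[str, set[str]] = {
    "research-assistant": {"public", "internal"},
    "contract-review": {"public", "internal", "client-confidential"},
}

def is_permitted(tool: str, data_category: str) -> bool:
    """Deny by default: unapproved tools and unlisted categories are blocked."""
    return data_category in APPROVED_USE.get(tool, set())

assert not is_permitted("consumer-chatbot", "client-confidential")  # tool not approved
assert is_permitted("contract-review", "client-confidential")
```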

5. Train Attorneys and Staff

Security training for AI tools should go beyond generic cybersecurity awareness. Cover the specific risks of AI (data leakage through prompts, hallucination verification, privilege implications), the firm's approved tools and policies, and practical examples of proper and improper use.

6. Monitor, Audit, and Iterate

Review audit logs regularly for unusual access patterns. Conduct periodic reassessments of approved vendors as their products and policies evolve. Update the AI policy as new ethics opinions, regulations, and security standards emerge. The firms that treat AI security as a one-time checkbox will be the ones that face incidents.

The Bottom Line

AI adoption in legal practice is accelerating, and the security risks are real but manageable. The firms that will emerge strongest are those that adopt AI deliberately — with clear policies, rigorous vendor assessment, and ongoing governance.

Security is not a reason to avoid AI; it is a reason to adopt AI responsibly. Every day your firm delays structured AI governance, individual attorneys are making their own decisions about which tools to use and what data to share. A formal framework replaces that chaos with defensible process.

Start with the checklist in this guide. Run your current AI tools through it. You may find that some pass easily, some need renegotiated terms, and some should be replaced. That clarity is the goal.

Key Takeaways

1. Legal AI security is fundamentally different from general enterprise security because of attorney-client privilege, ethical duties under Model Rule 1.6, and the adversarial nature of litigation data.
2. The single most important vendor question is whether customer data is used for model training — the answer must be an unqualified 'no' backed by contractual language, not just a marketing FAQ.
3. SOC 2 Type II is the minimum assurance threshold, but always verify that the report's scope covers the specific systems handling your firm's data.
4. ABA Formal Opinion 512 establishes six ethical obligations for lawyers using AI: competence, confidentiality, client communication, candor to tribunals, supervisory duties, and reasonable fees.
5. Malpractice insurance coverage for AI-related claims is uncertain — some insurers are adding AI exclusions, and reliance on AI without professional judgment may void coverage entirely.
6. Shadow AI (unapproved tools used by individual attorneys) is the largest security risk most firms face; a firm-wide AI acceptable use policy is essential.
7. Red flags that should stop a vendor evaluation: no enterprise agreement, opt-out model training, no SOC 2 report, no MFA/SSO, and resistance to signing a Data Processing Agreement.
8. Firms should establish an AI governance committee, create acceptable use policies, standardize vendor assessments, and conduct AI-specific security training for all attorneys and staff.

References

1. American Bar Association, "Formal Opinion 512: Generative Artificial Intelligence Tools," ABA Standing Committee on Ethics and Professional Responsibility, July 29, 2024.
2. American Bar Association, "How to Protect Your Law Firm's Data in the Era of GenAI," ABA Business Law Today, December 2024.
3. BDO, "SOC 2 Reports and ISO 27001 Certification for Law Firms: Why Now?" BDO Insights.
4. American Bar Association, "Does Your Professional Liability Insurance Cover AI Mistakes? Don't Be So Sure," ABA Journal, 2025.
5. Justia, "AI and Attorney Ethics Rules: 50-State Survey," Lawyers and the Legal Process Center.
6. Greenberg Traurig LLP, "Navigating Confidentiality Risks in Third-Party AI Tools," GT Insights, July 2025.