Amicore

Build vs. Buy: When Law Firms Should Build Custom AI Workflows Instead of Licensing Platforms

A decision framework for IT directors and managing partners evaluating custom AI development against licensed legal AI platforms

Last updated: February 11, 2026

The legal AI market has split into two distinct paths. On one side, enterprise platforms like Harvey, CoCounsel, and Lexis+ AI offer turnkey solutions with per-seat licensing. On the other, foundation model APIs from Anthropic and OpenAI have made it feasible for firms to build custom AI workflows at a fraction of the per-seat cost. The right choice depends on your firm's size, technical capacity, data sensitivity requirements, and how central AI is to your competitive strategy. This guide provides a concrete framework for making that decision — with real cost models, architectural considerations, and honest assessments of what building actually requires.

The Decision Framework: When to Build, When to Buy

The build-vs-buy decision is not binary, and the right answer depends on several intersecting factors. Most firms that get this wrong either overestimate their technical capacity to build or underestimate the ongoing maintenance cost of custom solutions. The following framework identifies the conditions that favor each approach.

Build When Workflows Are Firm-Specific: If your competitive advantage depends on proprietary processes — a specific approach to due diligence, a playbook for regulatory filings, or a firm-specific way of analyzing contracts — custom-built tools can encode that judgment directly. Off-the-shelf platforms impose generic workflows that may not match how your best lawyers actually work.
Buy When Speed to Deployment Matters: Licensed platforms can be operational in weeks. Custom builds typically require 3-6 months for a production-grade tool, longer if you need to hire engineers. If your managing partners want AI capabilities by next quarter, buying is the faster path.
Build When Data Must Stay On-Premise: Firms handling highly sensitive matters — national security, trade secrets, ongoing litigation with significant exposure — may need AI that runs entirely within their own infrastructure. API-based builds offer more control over data flow than hosted platforms, and self-hosted open-source models provide an air-gapped option.
Buy When You Need Integrated Legal Content: Harvey's LexisNexis partnership and CoCounsel's Westlaw integration provide access to authoritative legal databases that no custom build can replicate. If citation-grounded legal research is a primary use case, the content partnerships alone may justify the licensing cost.
Build When Per-Seat Costs Exceed API Costs: At enterprise scale, per-seat licensing can cost $2,700-$36,000 per attorney per year. A custom API-based solution serving the same number of users may cost a fraction of that in token consumption, though you must add engineering and infrastructure costs to the comparison.
Buy When You Lack Engineering Talent: Building is not just about the initial development. Production AI systems require ongoing prompt engineering, monitoring, security patching, and model migration when providers release new versions. Without at least one dedicated engineer, custom builds become technical debt within months.

What 'Building' Actually Means in Practice

When law firm IT directors hear 'build custom AI,' many picture a team of machine learning engineers training models from scratch. That is almost never what building means in 2026. Modern custom legal AI is built on top of foundation models via APIs, combined with retrieval systems that ground the AI in your firm's documents and knowledge.

API Integration: The simplest build. You call Claude or GPT via their API, wrap it in a firm-branded interface, and add system prompts tuned for legal tasks. This can be a web app, a Slack bot, or a Microsoft Teams integration. Development time: 2-4 weeks for a basic tool, 2-3 months for a polished product.
RAG Pipelines (Retrieval-Augmented Generation): The most common architecture for serious legal AI. Your firm's documents — briefs, contracts, memos, playbooks — are chunked, embedded into vectors, and stored in a database. When a user asks a question, the system retrieves relevant document chunks and feeds them to the LLM alongside the query. This grounds responses in your actual work product rather than the model's general training data.
Prompt Libraries and Playbooks: Structured prompt templates for recurring tasks: contract review checklists, regulatory compliance analysis, deposition summary formats. These encode senior attorney judgment into reusable, consistent workflows without requiring any infrastructure beyond an API key. The pattern applies even inside licensed platforms: Harvey customers have built over 15,000 custom workflows using this approach.
Fine-Tuning (Rare for Legal): Training a model's weights on your firm's data. This is expensive, requires ML expertise, and is rarely necessary for legal applications. RAG achieves similar grounding benefits without the cost and complexity of fine-tuning. Fine-tuning makes sense only when you need the model to adopt a specific writing style or reasoning pattern that prompt engineering cannot achieve.
Agentic Workflows: Multi-step AI systems that break complex tasks into subtasks — for example, an agent that reads a contract, identifies non-standard clauses, retrieves your firm's preferred language for each, drafts redlines, and generates a summary memo. These are the most powerful but also the most complex to build and maintain reliably.
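The agentic pattern above can be made concrete with a short orchestration sketch. The LLM call is stubbed and the clause playbook is hypothetical; in a real build each step would call a model API, but the step decomposition and audit trail are the point:

```python
# Sketch of the agentic contract-review flow described above, with the LLM
# stubbed out. In a real build each step would call a model API; the step
# structure and audit trail are what make the workflow maintainable.

def stub_llm(prompt: str) -> str:
    # Placeholder for a real model call (e.g., an Anthropic/OpenAI API).
    return f"[model output for: {prompt[:40]}...]"

PLAYBOOK = {  # hypothetical firm-preferred language, keyed by clause type
    "indemnification": "Indemnity capped at fees paid in the prior 12 months.",
}

def find_nonstandard_clauses(contract: str) -> list[str]:
    # Real version: LLM classification per clause; here, a keyword screen.
    return [c for c in PLAYBOOK if c in contract.lower()]

def review_contract(contract: str, llm=stub_llm) -> dict:
    steps = []  # audit trail: record every step for the compliance file
    flagged = find_nonstandard_clauses(contract)
    steps.append(("identify", flagged))
    redlines = {}
    for clause in flagged:
        preferred = PLAYBOOK[clause]
        redlines[clause] = llm(f"Redline this {clause} clause toward: {preferred}")
        steps.append(("redline", clause))
    memo = llm(f"Summarize {len(redlines)} proposed redlines")
    steps.append(("memo", memo))
    return {"flagged": flagged, "redlines": redlines, "memo": memo, "steps": steps}

result = review_contract("This Indemnification clause is uncapped...")
print(result["flagged"])  # → ['indemnification']
```

Each step is a separately testable unit, which is what makes reliability achievable: you can evaluate clause identification, redlining, and summarization independently before chaining them.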

Cost Comparison: API vs. Per-Seat Licensing

The most concrete way to evaluate build vs. buy is to model the actual costs. Below is a comparison using current pricing as of early 2026. API costs assume Claude Sonnet 4.5 ($3/$15 per million input/output tokens) or GPT-4o ($5/$15 per million tokens). Platform costs use publicly reported estimates for Harvey and CoCounsel.

5-attorney firm
  Custom build: $50-200/month in API tokens, plus ~$500/month amortized dev cost
  Harvey (est.): $165-1,250/month ($400-3,000/seat/year)
  CoCounsel (est.): $1,125/month ($225/user/month)
  Verdict: Build likely wins on cost; buy wins on time-to-value
25-attorney firm
  Custom build: $250-1,000/month in tokens, plus $2,000-4,000/month for a part-time engineer
  Harvey (est.): $830-6,250/month
  CoCounsel (est.): $5,625/month
  Verdict: Hybrid approach often optimal: buy a platform, build niche tools
100-attorney firm
  Custom build: $1,000-5,000/month in tokens, plus $8,000-15,000/month for 1-2 engineers
  Harvey (est.): $3,330-25,000/month
  CoCounsel (est.): $22,500/month
  Verdict: Build becomes competitive if you have or can hire engineering talent
500-attorney firm (BigLaw)
  Custom build: $5,000-25,000/month in tokens, plus $25,000-50,000/month for a small eng team
  Harvey (est.): $16,660-125,000/month
  CoCounsel (est.): $112,500/month
  Verdict: Custom builds can save $500K-1M+ annually at this scale

API token estimates assume moderate usage of 50-100 queries per attorney per month with average context windows. Actual costs vary significantly based on document sizes, query complexity, and whether you use prompt caching (which can reduce costs by up to 90% for repeated contexts). Platform pricing estimates are based on industry reporting and may not reflect negotiated enterprise rates.
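One way to sanity-check the comparison above is a small cost model. The token volumes, blended rate, and engineering figures below are illustrative assumptions drawn from this guide's estimates, not vendor quotes:

```python
# Back-of-the-envelope build-vs-buy model using this guide's estimates.
# All dollar figures are illustrative assumptions, not vendor pricing.

def monthly_build_cost(attorneys, queries_per_attorney=75,
                       tokens_per_query=20_000, blended_rate_per_mtok=6.0,
                       engineering_monthly=0):
    """Estimate monthly cost of a custom API-based tool.

    blended_rate_per_mtok assumes a mix of input ($3/MTok) and output
    ($15/MTok) tokens at Claude Sonnet 4.5 list prices, skewed toward input.
    """
    tokens = attorneys * queries_per_attorney * tokens_per_query
    api_cost = tokens / 1_000_000 * blended_rate_per_mtok
    return api_cost + engineering_monthly

def monthly_license_cost(attorneys, per_seat_per_year):
    """Per-seat platform licensing, annual rate divided by 12."""
    return attorneys * per_seat_per_year / 12

# 100-attorney firm: build with one engineer vs. a mid-range platform tier.
build = monthly_build_cost(100, engineering_monthly=12_000)
buy = monthly_license_cost(100, per_seat_per_year=2_700)
print(f"build ~ ${build:,.0f}/mo vs buy ~ ${buy:,.0f}/mo")
# → build ~ $12,900/mo vs buy ~ $22,500/mo
```

Note how engineering salary, not tokens, dominates the build side; halving token costs barely moves the total, while adding a second engineer changes the answer entirely.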

The Hidden Costs of Building

  • Token costs are the smallest line item. The real costs of building are engineering salaries, infrastructure (vector databases, hosting, monitoring), and ongoing maintenance. A production RAG pipeline requires someone who can manage embeddings, tune retrieval, handle edge cases, and migrate when model providers change their APIs.
  • Model migrations are inevitable. When Anthropic or OpenAI releases a new model, your prompts may need adjustment. What worked perfectly with Claude Sonnet 4 may behave differently on Sonnet 4.5. Budget for quarterly prompt tuning cycles.
  • Security and compliance are your responsibility. A licensed platform handles SOC 2 compliance, data encryption, access logging, and audit trails. When you build, every one of those requirements falls on your team. For a regulated industry like law, this is not optional.
  • Training and change management still apply. Custom tools need documentation, user training, and internal champions. The technology is the easy part — getting 100 attorneys to actually use the tool is the hard part, whether you build or buy.

Data Sensitivity and Control

Attorney-client privilege and ethical obligations make data handling the single most important factor in the build-vs-buy analysis for many firms. Understanding exactly where your data goes — and where it does not — is essential regardless of which path you choose.

API Data Policies (Build Path): Both Anthropic and OpenAI state that API data is not used for model training by default. Anthropic's API terms explicitly exclude customer data from training. OpenAI's API has a similar policy, though their consumer products (ChatGPT) do use conversations for training unless opted out. For maximum control, enterprise API agreements add contractual guarantees, and Azure-hosted OpenAI endpoints provide additional isolation.
Platform Data Policies (Buy Path): Enterprise legal AI platforms like Harvey emphasize that client data remains isolated — no cross-contamination between firms. Harvey dedicates over 10% of its organization to security and has passed the compliance reviews of 50+ AmLaw 100 firms. However, data still leaves your network and transits through the vendor's infrastructure.
Self-Hosted Models (Maximum Control): For the most sensitive work, open-source models like Llama, Mistral, or DeepSeek can run entirely on your own servers. Performance lags behind frontier APIs for complex legal reasoning, but for targeted tasks like document classification or clause extraction, self-hosted models can be sufficient — and your data never leaves your network.
The Ethical Obligation: ABA Model Rule 1.6 requires lawyers to make reasonable efforts to prevent unauthorized disclosure of client information. ABA Formal Opinion 512 specifically addresses generative AI, requiring lawyers to understand how AI tools process client data. Whether you build or buy, you need clear documentation of data flows for your ethics compliance file.
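For the self-hosted path above, a minimal sketch shows how a targeted task like document classification can run without data leaving the network. It assumes an Ollama server on localhost; the endpoint, model name, and prompt are illustrative assumptions:

```python
# Sketch of the self-hosted path: calling a local model server so client
# documents never leave the firm's network. Assumes a local Ollama instance;
# the model name and classification prompt are illustrative.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(text: str, model: str = "llama3.1") -> dict:
    return {
        "model": model,
        "prompt": ("Classify this document as contract, brief, memo, or "
                   "correspondence. Reply with one word.\n\n" + text[:4000]),
        "stream": False,  # return one JSON object instead of a token stream
    }

def classify_document(text: str) -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(text)).encode(),
        headers={"content-type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"].strip()
```

Because the request targets localhost, the data-flow documentation for the ethics compliance file is one sentence: nothing transits outside the firm's infrastructure.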

The Integration Challenge

Legal AI does not operate in isolation. Its value multiplies when connected to your document management system, billing platform, case management software, and email. Integration is often the deciding factor in build-vs-buy because it determines whether AI becomes part of daily workflows or remains a standalone novelty.

Document Management (iManage, NetDocuments)

Licensed platforms increasingly ship with DMS connectors — Harvey and CoCounsel both offer integrations with major DMS platforms. Custom builds require writing API integrations to your DMS, which can be straightforward (iManage Cloud has a REST API) or challenging (on-premise DMS installations may lack modern API access).

Microsoft 365 (Word, Outlook, Teams)

Harvey offers Word and Outlook add-ins. CoCounsel integrates via the Westlaw ecosystem. Custom builds can use the Microsoft Graph API and Office Add-in framework, but building a polished Word sidebar experience takes meaningful engineering effort — expect 2-4 months of dedicated development.

Practice Management and Billing

Clio's Vincent AI demonstrates what integrated practice management AI looks like — AI embedded in matter management, time tracking, and client intake. Custom builds rarely attempt billing integration because the data formats are complex and error-prone. If practice management integration is critical, buying is usually the right path.

Email and Communication

Summarizing email threads, drafting responses, and flagging action items are high-value AI applications. Both paths can work — Microsoft Copilot handles general email AI, while custom builds using the Gmail or Outlook API can be tuned for legal-specific patterns like deadline detection and privilege review flags.

The Hybrid Approach: Buy the Platform, Build the Edge

In practice, the most successful firms are not choosing between building and buying — they are doing both. The hybrid model uses a licensed platform as the foundation for broad capabilities (research, general drafting, document review) while building custom tools for firm-specific workflows that no platform addresses well.

Platform for Research, Custom Tool for Deal Analysis

A mid-size M&A practice licenses CoCounsel for Westlaw-grounded legal research but builds a custom RAG pipeline over their proprietary deal database to analyze comparable transaction terms. The platform handles general research; the custom tool handles what makes the firm's analysis distinctive.

Why it excels: Legal research benefits from authoritative content partnerships that no custom build can replicate. But deal comparables are proprietary — the firm's unique dataset is its competitive edge, and a custom tool over that data creates irreplaceable value.

Platform for Junior Associates, Custom for Senior Practice

A large litigation firm deploys Harvey firm-wide for general drafting and research tasks, while the IP practice group builds a custom patent analysis workflow using Claude's API, tuned to their specific claim construction methodology and trained on their portfolio of prior analyses.

Why it excels: Harvey provides broad coverage that benefits everyone. The custom patent tool encodes years of partner-level judgment into a repeatable process that junior associates can execute at senior-level quality.

Platform Vendor, Custom Intake Automation

A plaintiff's firm uses a commercial platform for case research and drafting but builds a custom intake system that uses an LLM to analyze incoming case inquiries, score case viability based on the firm's historical win/settlement data, and route qualified leads to the right attorney.

Why it excels: No commercial platform knows your firm's case selection criteria. Custom intake automation directly increases revenue by improving lead qualification and reducing the time partners spend evaluating cases that do not meet the firm's thresholds.
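A sketch of that intake triage, with the LLM's feature extraction stubbed out and purely illustrative scoring weights (not any real firm's criteria):

```python
# Minimal sketch of LLM-assisted intake triage. In practice the feature
# flags would be extracted by an LLM reading the inquiry; here they are
# passed in directly, and the weights are illustrative assumptions.

def score_inquiry(features: dict) -> float:
    """Score case viability 0-100 from LLM-extracted boolean features."""
    weights = {"clear_liability": 40, "documented_damages": 30,
               "within_limitations": 20, "in_jurisdiction": 10}
    return sum(w for k, w in weights.items() if features.get(k))

def route(features: dict, threshold: float = 60) -> str:
    if not features.get("within_limitations"):
        return "decline"  # hard rule: time-barred claims never advance
    score = score_inquiry(features)
    return "attorney_review" if score >= threshold else "paralegal_screen"

lead = {"clear_liability": True, "documented_damages": True,
        "within_limitations": True, "in_jurisdiction": False}
print(score_inquiry(lead), route(lead))  # → 90 attorney_review
```

The design choice worth noting: hard disqualifiers are deterministic rules, not model judgments, so the LLM informs the score but cannot override firm policy.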

Build Complexity vs. Vendor Lock-In

Both paths carry long-term risks. Building creates technical debt and dependency on specific engineering talent. Buying creates vendor lock-in and dependency on the vendor's roadmap, pricing decisions, and continued viability. The question is which set of risks you are better equipped to manage.

Switching Cost
  Build: Moderate. Custom tools built on standard APIs (Claude, GPT) can switch providers; prompt libraries may need adjustment, but the core architecture is portable.
  Buy: High. Workflow templates, training data, and organizational muscle memory are tied to the platform; switching vendors means retraining your entire firm.
Price Escalation
  Build: Low. API pricing has consistently dropped; Claude and GPT token costs have fallen 60-80% since 2023, and competition between providers keeps prices in check.
  Buy: High. Enterprise platforms have raised prices as they add content partnerships; Harvey's premium tier with LexisNexis content reportedly runs ~$3,000/seat versus ~$400-600 for basic access.
Feature Roadmap Control
  Build: Full control. You build exactly what your firm needs, when it needs it, but you also bear the full cost of every feature.
  Buy: No control. You get what the vendor ships; feature requests may take months or years, and the vendor's roadmap serves their entire customer base, not your specific needs.
Talent Dependency
  Build: High. If your lead engineer leaves, your AI tools may stagnate; knowledge is concentrated in a small team, and legal AI engineers are in high demand.
  Buy: Low. The vendor manages their own team, and platform knowledge is broadly distributed among your attorneys rather than concentrated in one technical person.
Security Responsibility
  Build: Full. You own compliance, audit, encryption, access controls, and incident response, which is substantial for a regulated industry.
  Buy: Shared. The vendor handles infrastructure security and typically provides SOC 2, ISO 27001, or equivalent certifications; you still manage user access and data governance.

Case Scenarios by Firm Size

The right approach varies dramatically by firm size, not because of cost alone but because of available talent, risk tolerance, and the complexity of workflows that AI needs to support.

Solo and Small Firms (1-10 Attorneys)

At this scale, building means using no-code or low-code tools — custom GPTs, Claude Projects, or simple API wrappers. There is no engineering team and no IT department. The 'builder' is typically a tech-savvy partner or office manager. Investment in a full platform rarely makes economic sense when a $20/month Claude Pro subscription covers most needs.

Example: A solo practitioner creates a Claude Project loaded with their standard engagement letters, fee agreements, and local court rules. Cost: $20/month. Time to build: one afternoon. The tool drafts client-specific engagement letters in the firm's voice and format.

Why it excels: Per-seat platform licenses ($225-400/month per user) are disproportionately expensive for small firms. API-based approaches cost $20-200/month total and can handle the lower query volumes that small firms generate.

Mid-Size Firms (25-75 Attorneys)

This is the most complex decision point. The firm is large enough that per-seat licensing adds up to real money ($67,500-270,000/year for 25 seats) but may not have dedicated engineering staff. The hybrid approach works well here: license a platform for broad capability, build targeted tools for 2-3 practice-specific workflows.

Example: A 40-attorney insurance defense firm licenses CoCounsel for research ($108,000/year) and builds a custom demand letter analysis tool using Claude's API that scores incoming demands against their historical settlement database. The custom tool costs $3,000/month including a part-time contractor. Total spend: $144,000/year versus $180,000+ for a second platform license.

Why it excels: Mid-size firms get the reliability and research grounding of a licensed platform while investing in custom tools that encode their specific competitive advantages — workflows that no vendor would build for a firm their size.

Large Firms and BigLaw (100-500+ Attorneys)

At BigLaw scale, the economics flip. Per-seat licensing at $400-3,000/seat for 500 attorneys costs $200,000 to $1.5 million per year. A dedicated AI engineering team (3-5 people) costs $500,000-800,000/year but can build and maintain multiple custom tools serving the entire firm. Many AmLaw 100 firms now have innovation teams doing exactly this — building on APIs while maintaining platform licenses for research grounding.

Example: A 300-attorney firm employs a 4-person AI engineering team that has built custom tools for contract review, due diligence, and deposition summary. API costs run $10,000/month. Engineering team costs $55,000/month. Total: $65,000/month versus $100,000-750,000/month in platform licenses for equivalent coverage across all attorneys.

Why it excels: Scale makes engineering costs efficient. The same 4-person team that builds tools for 100 attorneys can support 500 with minimal additional cost. Meanwhile, per-seat licensing scales linearly — twice the attorneys means twice the licensing fee.

Skills and Resources Required for Building

The most common reason custom AI projects fail in law firms is not technology — it is underestimating the human resources required to build, deploy, and maintain production-quality tools. Here is an honest assessment of what different build approaches require.


Tier 1: Prompt Libraries and Custom GPTs (No Coding Required)

Any attorney can build structured prompt templates using Claude Projects, custom GPTs, or similar tools. This requires no technical skills beyond clear writing and willingness to iterate. Time investment: 2-10 hours per workflow. Ongoing maintenance: minimal — update prompts when model behavior changes.


Tier 2: Simple API Integration (Basic Development Skills)

A Python or JavaScript developer can build a web interface that calls the Claude or GPT API with custom system prompts and document upload. Skills needed: basic web development, API integration, and prompt engineering. Can be done by a tech-savvy paralegal who has completed an online coding course, or a part-time contractor at $75-150/hour.
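A Tier 2 build really can be this thin. The sketch below wraps the Anthropic Messages API using only the standard library; the system prompt is a placeholder, and the model identifier is an assumption to verify against current documentation:

```python
# Skeleton of a Tier 2 build: a thin wrapper around the Anthropic Messages
# API using only the Python standard library. The system prompt is a
# placeholder a real firm would replace with its own instructions.
import json
import os
import urllib.request

API_URL = "https://api.anthropic.com/v1/messages"
SYSTEM_PROMPT = ("You are a drafting assistant for [Firm]. Follow the firm's "
                 "style guide and flag anything requiring attorney review.")

def build_payload(user_text: str, model: str = "claude-sonnet-4-5") -> dict:
    return {
        "model": model,
        "max_tokens": 1024,
        "system": SYSTEM_PROMPT,
        "messages": [{"role": "user", "content": user_text}],
    }

def ask(user_text: str) -> str:
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(user_text)).encode(),
        headers={
            "x-api-key": os.environ["ANTHROPIC_API_KEY"],
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["content"][0]["text"]
```

Wrapping this in a web form or Teams bot is where the remaining development time goes; the AI integration itself is a few dozen lines.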


Tier 3: RAG Pipeline (Intermediate Engineering)

A production RAG system requires a vector database (Pinecone, Weaviate, or pgvector), a document processing pipeline (chunking strategy, metadata extraction), and retrieval logic. Skills needed: Python, embedding models, vector databases, and document parsing. Typically requires a mid-level software engineer with 2-3 months of development time.


Tier 4: Agentic Workflows (Senior Engineering)

Multi-step agent systems that chain reasoning, tool use, and document analysis require careful error handling, evaluation frameworks, and monitoring. Skills needed: senior-level Python development, familiarity with agent frameworks (LangChain, LlamaIndex, or custom), and experience with production ML systems. Expect 2-4 senior engineers and 4-6 months for a reliable system.


Tier 5: Self-Hosted Models (ML Engineering + Infrastructure)

Running open-source models on your own hardware requires GPU infrastructure, model optimization, and ML operations expertise. This is rarely justified for law firms unless data sensitivity is extreme. Skills needed: ML engineering, DevOps/MLOps, GPU cluster management. Budget: $50,000+ in hardware or cloud GPU costs, plus a dedicated ML engineer.

Most law firms building custom AI tools operate at Tier 2 or Tier 3. The National Law Review reports that functional legal AI tools can be built in under an hour using conversational 'vibe coding' approaches at Tier 1, and many firms are finding that simple, well-prompted API integrations deliver 80% of the value at 20% of the cost of a full platform.

RAG Architecture for Legal Applications

Retrieval-Augmented Generation has become the dominant architecture for custom legal AI tools because it directly addresses the two biggest risks in legal AI: hallucination and staleness. A RAG system retrieves relevant documents from your firm's own knowledge base before generating a response, grounding the output in real, verifiable sources rather than the model's parametric memory.

Why RAG Suits Legal Work: The legal field is uniquely positioned for RAG because of the availability of high-quality structured databases — statutes, case law, regulations, and firm work product. A Harvard JOLT analysis found that RAG-powered legal tools produced statistically significant gains in four of six tested tasks, with hallucination rates reduced to levels comparable to work completed without AI assistance.
Document Chunking Strategy: How you split documents into retrievable segments materially affects quality. Legal documents have natural structure — sections, clauses, paragraphs — that should guide chunking. Splitting a contract by clause preserves meaning better than arbitrary 500-token windows. Overlapping chunks (where each chunk includes the end of the previous one) help preserve context at boundaries.
Embedding Model Selection: Your choice of embedding model determines how well the system matches user queries to relevant document chunks. General-purpose models (OpenAI text-embedding-3-large, Cohere embed-v3) work reasonably well for legal text. Domain-specific legal embeddings can improve retrieval precision but require more setup and may not be worth the added complexity for most firms.
Vector Database Options: For most law firm builds, a managed vector database like Pinecone or Weaviate is the pragmatic choice — no infrastructure management required. PostgreSQL with the pgvector extension is a good option for firms that prefer to keep everything in a single database. For maximum data control, Chroma or Qdrant can run on your own servers.
Hybrid Retrieval: The best legal RAG systems combine vector similarity search (semantic meaning) with keyword search (exact term matching). Legal queries often require both — a user searching for 'force majeure clauses in Delaware limited partnerships' needs semantic understanding of the concept and precise term matching for jurisdictional specificity.
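The chunking strategy described above can be sketched in a few lines: split on blank-line clause boundaries, pack units up to a size limit, and carry a tail of each chunk forward as overlap. The size thresholds are illustrative:

```python
# Sketch of clause-aware chunking: split on paragraph/clause boundaries
# rather than fixed token windows, and carry a tail of the previous chunk
# forward so context survives at boundaries. Sizes are illustrative.

def chunk_document(text: str, max_chars: int = 1200, overlap_chars: int = 150):
    # Legal documents usually separate clauses with blank lines; split there
    # first, then pack whole units into chunks up to max_chars.
    units = [u.strip() for u in text.split("\n\n") if u.strip()]
    chunks, current = [], ""
    for unit in units:
        if current and len(current) + len(unit) > max_chars:
            chunks.append(current)
            current = current[-overlap_chars:]  # overlap: tail of prior chunk
        current = (current + "\n\n" + unit).strip()
    if current:
        chunks.append(current)
    return chunks

doc = ("1. Definitions. ..." + "\n\n" + "2. Indemnification. " + "x" * 1200
       + "\n\n" + "3. Force Majeure. ...")
chunks = chunk_document(doc)
print(len(chunks))  # → 3
```

A production version would chunk by token count rather than characters and attach metadata (document ID, clause number, matter) to each chunk for filtering at retrieval time.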

Decision Checklist: 10 Questions Before You Commit

Before you commit to building, buying, or a hybrid approach, work through these questions with your leadership team. The answers will point you toward the right strategy for your firm's specific situation.

Do you have — or can you hire — at least one engineer who will own this long-term?

If the answer is no, start with buying. A custom build without dedicated engineering talent becomes abandoned software within 6-12 months. Part-time contractors can handle initial development, but someone on staff needs to own maintenance.

Is citation-grounded legal research a primary use case?

If yes, lean toward buying. Harvey's LexisNexis partnership and CoCounsel's Westlaw integration provide authoritative research grounding that no custom RAG pipeline over your own documents can match. You cannot build a Westlaw competitor.

How many attorneys will use AI tools regularly?

At fewer than 20 regular users, per-seat platform costs are manageable. At 50+ users, the annual licensing cost may exceed what it would cost to build and maintain a custom solution. Model your actual expected usage, not your total headcount.

What is your most valuable proprietary workflow?

Identify the 2-3 workflows where your firm's approach is genuinely different from competitors. These are candidates for custom builds. Everything else — general research, first-draft memos, email summarization — is better served by platforms that have already solved these problems.

Can your data leave your network?

If you handle matters where even the existence of an engagement is privileged, API calls to external providers may be unacceptable. Evaluate whether enterprise API agreements (with their no-training guarantees) satisfy your obligations, or whether you need self-hosted models.

What is your timeline?

If you need capabilities in 30 days, buy. If you can invest 3-6 months, building becomes viable. The firms that regret their choice most often are those that started building under unrealistic timeline pressure and ended up with a half-finished tool that no one uses.

Have you calculated total cost of ownership, not just licensing or API fees?

For buying: add training, change management, and integration costs. For building: add engineering salaries, infrastructure, security compliance, and ongoing maintenance. The platform license or API bill is often less than half the true cost.

How important is vendor support and accountability?

When a custom tool breaks at 2 AM before a filing deadline, your engineer gets the call. When a platform breaks, the vendor's support team gets the call. For firms without 24/7 IT coverage, vendor accountability has tangible value.

Is AI central to your competitive strategy or a utility?

If AI-powered analysis is a client-facing differentiator — something you sell as a service advantage — building custom tools makes strategic sense. If AI is a back-office efficiency tool, the build-vs-buy decision is purely economic.

What is your exit strategy if the approach fails?

If a custom build fails, you have spent engineering hours and can pivot to a platform. If a platform fails (vendor shuts down, prices triple, security incident), you are starting from zero. Consider the reversibility of each option. Multi-year platform contracts with high exit costs deserve extra scrutiny.

The Bottom Line

  • There is no universally right answer. The firms getting the most value from legal AI in 2026 are not the ones that made the 'correct' build-or-buy choice — they are the ones that made a deliberate choice based on their actual capabilities, budget, and strategic priorities.
  • Start with the workflow, not the technology. Identify your highest-value AI use cases first. Then evaluate whether a platform or custom build serves those specific workflows better. Technology selection follows strategy, not the other way around.
  • Most firms will do both. The hybrid model — licensing a platform for broad capability while building custom tools for 2-3 firm-specific workflows — is emerging as the dominant strategy across firm sizes. The question is not build OR buy. It is what to build and what to buy.
  • Revisit annually. API costs are dropping, platform capabilities are expanding, and new options emerge quarterly. The right answer in February 2026 may not be the right answer in February 2027. Build flexibility into your contracts and your architecture.

Key Takeaways

  1. The build-vs-buy decision hinges on five factors: firm size, engineering capacity, data sensitivity, workflow specificity, and whether citation-grounded legal research is a primary use case.
  2. API token costs have dropped 60-80% since 2023 — Claude Sonnet 4.5 runs at $3/$15 per million tokens, making custom builds economically viable even for mid-size firms.
  3. At 100+ attorney firms, custom API-based solutions can save $500K-1M+ annually compared to per-seat platform licensing, but only if you invest in dedicated engineering talent.
  4. RAG (Retrieval-Augmented Generation) is the dominant architecture for custom legal AI, grounding responses in your firm's actual documents and reducing hallucination to levels comparable to non-AI work.
  5. The hybrid approach — licensing a platform for research and general tasks while building custom tools for firm-specific workflows — is emerging as the dominant strategy across firm sizes.
  6. Most custom legal AI tools operate at Tier 2-3 complexity: API wrappers with structured prompts or RAG pipelines over firm documents, not self-hosted models or ground-up ML training.
  7. Vendor lock-in risk is real: enterprise platforms have raised prices as they add content partnerships, and switching costs include retraining your entire firm. Custom builds on standard APIs are more portable.
  8. The most common failure mode is underestimating maintenance — a custom build without a dedicated engineer becomes abandoned software within 6-12 months.

References

  1. National Law Review, "Why Smart Lawyers Are Building AI Tools Instead of Buying Them." [Online]. Available: https://natlawreview.com/article/why-smart-lawyers-are-building-ai-tools-instead-buying-them
  2. Johnston, P., "Retrieval-Augmented Generation (RAG): Towards a Promising LLM Architecture for Legal Work," Harvard Journal of Law & Technology, Apr. 2025. [Online]. Available: https://jolt.law.harvard.edu/digest/retrieval-augmented-generation-rag-towards-a-promising-llm-architecture-for-legal-work
  3. Anthropic, "Pricing — Claude API Documentation." [Online]. Available: https://docs.anthropic.com/en/docs/about-claude/pricing
  4. American Bar Association, "The Legal Industry Report 2025," Law Technology Today, 2025. [Online]. Available: https://www.americanbar.org/groups/law_practice/resources/law-technology-today/2025/the-legal-industry-report-2025/
  5. Legal IT Insider, "A&O Shearman partners with Harvey to launch practice-based workflow tools," Apr. 2025. [Online]. Available: https://legaltechnology.com/2025/04/07/ao-shearman-partners-with-harvey-to-launch-practice-based-workflow-tools/
  6. OpenAI, "API Pricing." [Online]. Available: https://openai.com/api/pricing/