AI for Attorneys & Law Firms

AI Policy for Law Firms: A Practical Build

How to build an AI policy at a law firm. Roles, rules, training, supervision — operator-grade framework under ABA Model Rules.

Every law firm using AI tools needs a written AI policy. Bar associations are increasingly asking for one. Malpractice insurers are starting to require one. Examiners and ethics counsel expect it. Without one, you're exposed even if your AI tools are configured perfectly.

This is the operator-grade build — lean enough to use, rigorous enough to defend.

What an AI policy must cover

Six components:

  • Scope and tool inventory — What AI is used at the firm
  • Roles and accountability — Who owns AI decisions
  • Use-case policies — What AI can and can't do
  • Data handling rules — Confidentiality protections
  • Training and supervision — Competence and supervisory obligations
  • Audit and review process — How AI use is supervised
Each can be a paragraph or a page. Firm scale dictates depth.

Component 1: Scope and tool inventory

A current inventory of:

  • Tool name and vendor
  • Approved use cases
  • Data the tool processes
  • Compliance posture (SOC 2, encryption, retention)
  • Approval status (Tier 1: approved; Tier 2: conditional; Tier 3: prohibited)
  • Tool owner
Updated quarterly. Most firms discover during this exercise that they have AI tools in use that leadership doesn't know about.
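
The inventory lends itself to a simple structured record that the AI Operator can keep current. A minimal sketch in Python; the tool names, field layout, and the `needs_review` check are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    APPROVED = 1      # Tier 1: approved
    CONDITIONAL = 2   # Tier 2: conditional
    PROHIBITED = 3    # Tier 3: prohibited

@dataclass
class AITool:
    name: str
    vendor: str
    approved_uses: list
    data_processed: list
    compliance: list   # e.g. ["SOC 2", "encryption", "retention policy"]
    tier: Tier
    owner: str

# Example entries (hypothetical)
inventory = [
    AITool("Copilot", "Microsoft", ["email drafting"], ["internal docs"],
           ["SOC 2", "encryption"], Tier.APPROVED, "IT lead"),
    AITool("FreeChatbot", "Example Co", [], ["unknown"], [],
           Tier.PROHIBITED, "none"),
]

# Quarterly check: flag any tool lacking SOC 2 that isn't already prohibited
needs_review = [t.name for t in inventory
                if "SOC 2" not in t.compliance and t.tier != Tier.PROHIBITED]
```

A spreadsheet works just as well; the point is that every tool carries all six fields and the quarterly review is a mechanical pass over the list.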

Component 2: Roles and accountability

Three roles at minimum:

  • AI Sponsor — Principal-level owner (typically managing partner or executive committee designee)
  • AI Operator — Operations leader who runs AI tools and workflow
  • AI Ethics Reviewer — Ethics counsel or designated partner who reviews AI use under Model Rules
At firms under 25 attorneys, one person may hold two roles. At larger firms, these are distinct.

Component 3: Use-case policies

The three-tier framework most firms adopt:

Tier 1 — Permitted without specific approval:

  • Internal AI use (drafting memos, summarizing internal docs)
  • AI-assisted research that doesn't produce client deliverables
  • Personal productivity (Microsoft Copilot for email drafting)

Tier 2 — Permitted with documented review:

  • AI-generated client communications (review and supervise)
  • AI-drafted briefs, motions, memos (verify and supervise)
  • AI contract review and drafting (verify and supervise)
  • AI marketing materials (advertising rules apply)

Tier 3 — Prohibited:

  • AI rendering legal advice to clients directly
  • AI signing or filing court documents without attorney supervision
  • Consumer-grade AI tools processing client confidences
  • AI tools without SOC 2 or equivalent for client data
Each firm tunes the tiers to its practice and risk tolerance.
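
The tier framework above reduces to a lookup table plus an escalation default. A sketch under assumed category names (the use-case labels and assignments are examples only; each firm writes its own):

```python
# Illustrative three-tier lookup. Categories and tier assignments are
# examples, not a recommended taxonomy.
TIER_RULES = {
    "internal_memo_draft": 1,
    "internal_research": 1,
    "client_communication": 2,
    "brief_or_motion_draft": 2,
    "contract_review": 2,
    "marketing_materials": 2,
    "direct_legal_advice": 3,
    "unsupervised_court_filing": 3,
}

def check_use(use_case: str) -> str:
    tier = TIER_RULES.get(use_case)
    if tier is None:
        # Anything not on the list goes to the AI Ethics Reviewer
        return "escalate"
    return {1: "permitted",
            2: "documented review required",
            3: "prohibited"}[tier]
```

The default matters most: an unlisted use case escalates rather than silently passing, which is what makes the tiers enforceable.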

Component 4: Data handling

Four rules:

  • Client data goes only to approved tools. Approved tools have proper data handling for confidentiality (Rule 1.6).
  • PII redaction before AI processing where the tool doesn't redact natively.
  • Retention follows firm books-and-records policy, not vendor defaults.
  • Cross-border data transfers require explicit approval.
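
Where a tool doesn't redact natively, firms typically put a redaction step in front of it. A deliberately naive sketch to show the shape of that step; the patterns are illustrative, and real redaction needs a vetted tool, not a handful of regexes:

```python
import re

# Illustrative patterns only. Production redaction requires a vetted
# tool; regexes miss names, addresses, and context-dependent PII.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace each matched pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

The policy rule is the important part: redaction happens before the text reaches the tool, every time, with no per-matter exceptions.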

Component 5: Training and supervision

Annual training requirements:

  • All attorneys: AI competence training (Rule 1.1) — 60-90 minutes
  • All staff using AI tools: tool-specific training
  • New hires: AI policy review within 30 days
  • Supervising attorneys: supervisory obligations under AI workflows (Rule 5.1/5.3)
  • Ethics counsel: regulatory developments and ABA updates
Documentation: training materials, attendance, acknowledgments, knowledge checks.

Component 6: Audit and review

Four cadences:

  • Weekly: Compliance lead samples 5-10 AI-generated work products
  • Monthly: AI Operator reviews tool usage and exceptions
  • Quarterly: Full inventory review, policy updates if needed
  • Annually: Full AI policy review, training refresh
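
The weekly sample is the easiest cadence to automate. A sketch of the selection step, assuming work products are tracked in a list (the function name and bounds mirror the 5-10 figure above but are otherwise an assumption):

```python
import random

def weekly_sample(work_products: list, k_min: int = 5, k_max: int = 10) -> list:
    """Pick 5-10 AI-generated work products for the compliance
    lead's weekly review, or everything if fewer exist."""
    k = min(len(work_products), random.randint(k_min, k_max))
    return random.sample(work_products, k)
```

Random selection matters: sampling only the work of attorneys who volunteer, or only routine matters, defeats the supervisory purpose.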

What examiners and ethics counsel look for

Common questions when AI policy is reviewed:

  • Do you have a written AI policy?
  • What AI tools are in use?
  • How are attorneys trained on AI?
  • How is AI use supervised?
  • How is client confidentiality protected?
  • Have you addressed Rule 1.1 competence?
  • Have you addressed Rule 1.6 confidentiality?
  • Have you addressed Rule 5.1/5.3 supervision?
  • Have you addressed Rule 7.1 marketing communications?
  • What documentation supports your AI use?
If the policy answers these questions cleanly and the audit trail backs the answers, you're in good shape. If not, you have work to do.

ABA Formal Opinion 512 mapping

The policy should explicitly reference and address ABA Formal Opinion 512 (2024) requirements:

  • Competence in AI tools (Rule 1.1)
  • Confidentiality protection (Rule 1.6)
  • Supervisory obligations (Rule 5.1/5.3)
  • Honest billing for AI-assisted work
  • Candor to tribunal (verification of AI output)
  • Communications and marketing rules (Rule 7.1)
Each gets a section in the policy.

State-specific addenda

Different states have published or are publishing AI guidance. The policy should reference applicable state rules in addition to ABA Model Rules:

  • California Bar AI guidance
  • New York State Bar Association AI Task Force opinion
  • Florida Bar AI opinion
  • Texas Bar guidance
  • Illinois ARDC guidance
  • Washington DC Bar opinion
Maintain a current list with summaries.

Training curriculum

The standard six-module training:

  • What AI tools the firm uses — inventory awareness
  • What AI can and can't do — use-case tiers
  • Client data handling — confidentiality protections
  • Supervision and review obligations — Rule 5.1/5.3 application
  • What to do when uncertain — escalation path
  • Recent regulatory developments — annual update
Total: 90 minutes per year per attorney. Documented with acknowledgment.

Engagement letter language

Many firms now include AI disclosure in engagement letters:

"In providing legal services, our firm may use AI tools to assist with research, drafting, document review, and related tasks. All AI-assisted work is reviewed and verified by attorneys representing you, and client confidentiality is maintained through tools that protect privileged information."

Tune to firm style. The disclosure manages client expectations.

Insurance interaction

Legal malpractice insurers increasingly:

  • Ask about AI policy as part of underwriting
  • Offer modest premium reductions for firms with documented AI policies and training
  • Add exclusions for AI-related errors without verification
  • Update coverage as AI use evolves
Maintain AI policy documentation as part of insurance compliance.

What can go wrong without a policy

Three patterns we see at firms without a structured AI policy:

  • Junior attorneys using consumer-tier AI with client data. Confidentiality breach with no supervisory framework.
  • Unverified AI citations filed in court. Mata v. Avianca-style outcomes.
  • AI marketing materials with unsupported claims. Bar discipline exposure.
Each is preventable with structured policy and training.

What we recommend

For firms deploying AI:

  • 4-8 page written AI policy
  • Quarterly tool inventory
  • Annual attorney training (90 min minimum)
  • Engagement letter AI language
  • Documented supervision and review process
  • Quarterly compliance review
  • Annual policy refresh
Total upfront work: 20-40 hours of leadership time. Annual maintenance: 5-10 hours.

Bottom line

AI policy at a law firm isn't a binder. It's a small set of explicit decisions about what AI does, who oversees it, how data is handled, and how supervision works. Build it once at the right level of rigor, update it quarterly, and it will protect the firm.

The firms that get this right today will have meaningful regulatory and malpractice defensibility in three to five years. The firms that punt will retrofit policy under exam or insurance pressure.

The choice is when to do the work, not whether.

Frequently asked questions

Do law firms need a written AI policy?

Yes. Bar associations increasingly ask for one, ABA Formal Opinion 512 (2024) establishes the framework, malpractice insurers are starting to require one, and clients are asking. A 4-8 page policy is the practical minimum for any firm using AI tools.

Who should own AI policy at a law firm?

Three roles: AI Sponsor (principal-level owner, typically managing partner or designee), AI Operator (runs AI day-to-day), AI Ethics Reviewer (ethics counsel or designated partner reviews under Model Rules). At smaller firms, one person can hold two roles.

What's the minimum AI training for attorneys?

Annual training covering Model Rule 1.1 competence in AI tools used at the firm, plus tool-specific training for staff. 60-90 minutes minimum per attorney per year, documented with acknowledgment. New hires within 30 days of joining.

What AI tools should be prohibited at law firms?

Consumer-grade AI tools (free tiers of ChatGPT, Claude, and similar) processing client data — they lack proper confidentiality handling under Rule 1.6. AI rendering legal advice directly to clients. AI signing or filing court documents without attorney supervision.

How often should AI policy be reviewed?

Quarterly tool inventory review, annual full policy refresh, plus updates when new regulations or tools emerge. AI evolves quickly; the policy must keep pace. Plan for active maintenance, not one-time creation.

Need help implementing this?

//prometheus does onsite AI consulting and implementation in Milwaukee. We set it up, train your team, and make sure it works.

let's talk