The EU AI Act is the first comprehensive AI regulation in the world. It entered into force on August 1, 2024, with obligations rolling out in phases through 2027. For professional services firms, the consequences are both direct (your own AI tools are now regulated) and indirect (your clients will need help navigating these rules).

This guide covers what law firms, consulting firms, and accountancies need to know. No legal theory. Just practical requirements, deadlines, and steps.

Key numbers:

- 73% of EU law firms use AI tools but lack formal usage policies (CCBE Survey, 2025).
- August 2025: GPAI obligations now in force, covering tools such as ChatGPT, Claude, and Copilot (EU AI Act Art. 51-56).
- EUR 35 million: maximum fine for prohibited AI practices, or 7% of global turnover (EU AI Act Art. 99).

1. What the EU AI Act requires

The EU AI Act classifies AI systems into four risk tiers. Each tier carries different obligations. The regulation applies to anyone who develops or deploys AI systems within the EU, regardless of where the AI provider is based.

For professional services firms, the most relevant role is deployer: an entity that uses an AI system under its authority. If your firm uses Harvey, Luminance, Kira, CoCounsel, or even ChatGPT for client work, you are a deployer.

The four risk levels:

- Unacceptable risk (prohibited): practices banned outright, such as social scoring and workplace emotion recognition.
- High risk: systems in sensitive areas listed in Annex III (employment, credit, justice, and others); extensive compliance obligations apply.
- Limited risk: systems subject to transparency obligations, such as chatbots and AI-generated content.
- Minimal risk: everything else; voluntary codes of conduct apply.

2. Key deadlines for professional services

August 1, 2024
EU AI Act enters into force
The regulation officially published. 24-month transition period begins for most provisions.
February 2, 2025
Prohibited practices enforceable
Banned AI systems must be discontinued. Covers social scoring, manipulative AI, and most emotion recognition in workplaces. If you use sentiment analysis tools in HR, review immediately.
August 2, 2025
GPAI model obligations apply
Providers of general-purpose AI models (OpenAI, Anthropic, Google, Microsoft) must comply with transparency and documentation requirements. Deployers must ensure their usage of these tools meets transparency rules.
August 2, 2026
Full enforcement begins
All provisions enforceable, including high-risk AI system requirements. Conformity assessments, risk management systems, and quality management must be in place.
August 2, 2027
Existing high-risk AI systems must comply
AI systems already on the market or in service before August 2026 have an additional year to achieve full compliance.

Why this matters now: GPAI obligations (August 2025) are already in force. If your firm uses ChatGPT, Claude, Copilot, or any other general-purpose AI tool, you should already have transparency measures in place. The August 2026 deadline for high-risk systems is 5 months away.
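The phased timeline above can be expressed as a simple lookup that reports which obligations already apply on a given date (dates are from the Act; this is an illustrative sketch, not legal advice):

```python
from datetime import date

# Key application dates of the EU AI Act (Regulation (EU) 2024/1689)
MILESTONES = [
    (date(2024, 8, 1), "Act enters into force"),
    (date(2025, 2, 2), "Prohibited practices enforceable"),
    (date(2025, 8, 2), "GPAI model obligations apply"),
    (date(2026, 8, 2), "High-risk AI system requirements enforceable"),
    (date(2027, 8, 2), "Pre-existing high-risk systems must comply"),
]

def obligations_in_force(today: date) -> list[str]:
    """Return the milestones that already apply on the given date."""
    return [label for when, label in MILESTONES if when <= today]

# As of early 2026, the first three milestones are already binding:
for label in obligations_in_force(date(2026, 3, 1)):
    print(label)
```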

3. Risk classification for tools you already use

The risk level depends on the use case, not the tool itself. The same AI system can be minimal risk in one context and high risk in another. Here is how common professional services AI tools map to the EU AI Act risk tiers:

- AI document review (Harvey, Luminance, Kira): Limited. AI-assisted search and analysis; a human makes the final decisions. Transparency obligation: clients should know AI was used.
- AI contract drafting/generation: Limited. AI generates content that a human reviews before delivery. Transparency obligation applies.
- AI for HR decisions (hiring, promotion, termination): High. Annex III explicitly lists AI in employment, worker management, and access to self-employment. Full high-risk compliance required.
- AI credit scoring / financial assessment: High. Listed in Annex III. AI evaluating creditworthiness or insurance risk requires a conformity assessment.
- Chatbots for client intake: Limited. Must disclose that the user is interacting with an AI system. Transparency only.
- AI legal research (Westlaw AI, CoCounsel): Minimal. Internal tool assisting professionals; no direct client impact. Voluntary codes apply.
- AI tax filing / automated accounting: Limited. Produces outputs that a human reviews. If the AI makes autonomous decisions affecting tax obligations, this could escalate to high risk.
- Emotion recognition in interviews: Prohibited. Emotion recognition in workplace and education contexts is banned (with narrow law enforcement exceptions), enforceable since February 2025.
- AI-powered due diligence: Limited. AI analyses documents; a human makes the decisions. Transparency obligation. May escalate if the AI autonomously flags or rejects deals.
- Predictive analytics for case outcomes: High (potential). If used in the administration of justice or to influence judicial outcomes, this falls under Annex III. Context-dependent.
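Because classification keys on the use case rather than the product, a firm's register can encode this mapping directly. A minimal sketch (the use-case labels and the fallback string are illustrative assumptions, not terms from the Act):

```python
# Risk tier by use case, mirroring the mapping above (illustrative subset)
RISK_BY_USE_CASE = {
    "document review": "limited",
    "contract drafting": "limited",
    "hr decisions": "high",            # Annex III: employment
    "credit scoring": "high",          # Annex III: creditworthiness
    "client intake chatbot": "limited",
    "legal research": "minimal",
    "workplace emotion recognition": "prohibited",
}

def classify(use_case: str) -> str:
    """Look up the risk tier; unknown use cases need a human assessment."""
    return RISK_BY_USE_CASE.get(use_case.lower(), "unclassified - assess manually")

# The same underlying model lands in different tiers depending on use:
print(classify("legal research"))  # minimal
print(classify("HR decisions"))    # high
```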

The critical distinction: Most AI tools used in professional services today fall in the limited risk category, which means transparency is the primary obligation. But two areas frequently trigger high risk: HR decisions and financial assessments. If your firm uses AI in either area, start compliance work now.

4. Your obligations as a deployer

Even when you are not developing AI (just using tools built by others), the EU AI Act assigns specific obligations to deployers. The scope depends on the risk level.

For all risk levels:

- AI literacy (Art. 4): staff using AI systems must have a sufficient level of AI literacy, which in practice means documented training.
- Transparency (Art. 50): inform people when they are interacting with an AI system, and label AI-generated content where required.

For high-risk AI systems (Art. 26):

- Use the system in accordance with the provider's instructions for use.
- Assign human oversight to people with the necessary competence, training, and authority.
- Monitor the system's operation and suspend use if you identify a serious risk.
- Retain automatically generated logs for at least six months.
- Inform workers and their representatives before deploying high-risk AI in the workplace.
- Report serious incidents to the provider and the relevant authority.

5. General-purpose AI: ChatGPT, Copilot, Claude

General-purpose AI (GPAI) models get their own regulatory track (Chapter V, Articles 51-56). Since August 2025, GPAI providers must:

- Maintain technical documentation of the model, including its training and testing processes.
- Provide information and documentation to downstream providers building on the model.
- Put in place a policy to comply with EU copyright law, including text-and-data-mining opt-outs.
- Publish a sufficiently detailed summary of the content used to train the model.

As a deployer of GPAI tools, your obligations are different from the provider's. You do not need to document the model's architecture. But you do need to:

- Disclose to clients when AI-generated content forms part of a deliverable.
- Maintain an internal usage policy covering approved tools and permitted use cases.
- Ensure staff using the tools have adequate AI literacy (Art. 4).
- Verify outputs before they leave the firm; you remain responsible for the work product.

Practical implication: If you are using ChatGPT Enterprise, Microsoft Copilot, Claude, or similar tools, you should already have a written AI usage policy, a training program, and disclosure language in your client engagement letters. This is not merely best practice: it has been a legal requirement since August 2025.

6. Sector-specific guidance

Law firms

Legal services sit at a unique intersection: you are both subject to the regulation (your own AI tools) and positioned to advise clients on it. Key points:

- Professional responsibility is unchanged: AI-assisted work product must be verified by a lawyer before it reaches a client or a court.
- Client confidentiality and legal professional privilege extend to AI prompts and outputs; check where your tool providers process and store data.
- Disclosing AI use in engagement letters supports the Act's transparency obligations and is increasingly expected by clients and courts.

Consulting and advisory firms

Management consultancies face dual exposure: internal AI tools and client-facing AI recommendations both fall under the regulation. Note in particular that a firm which substantially modifies an AI system, or delivers one to a client under its own brand, can take on provider obligations (Art. 25) on top of its deployer obligations.

Accounting and audit firms

The intersection of AI and financial reporting creates specific compliance considerations: AI-assisted audit and tax work generally sits in the limited risk tier so long as a qualified professional reviews the output, while AI that evaluates creditworthiness or insurance risk crosses into Annex III high-risk territory.

7. 10-step compliance checklist

Use this checklist to assess your firm's readiness. Each step maps to specific articles in the regulation.

  1. Inventory all AI systems in use
     List every AI tool your firm uses, from enterprise platforms (Harvey, Luminance) to individual tools (ChatGPT accounts, Copilot, Grammarly). Include shadow IT. You cannot comply with what you do not know about.
  2. Classify each system by risk level
     Map each tool to its use case and determine the risk tier. Use the table above as a starting point. Remember: risk level is determined by use case, not by the tool itself.
  3. Check for prohibited uses
     Confirm no AI system is used for social scoring, workplace emotion recognition, or other banned practices. This has been enforceable since February 2025. If found, discontinue immediately.
  4. Draft or update your AI usage policy
     Document approved tools, permitted use cases, required safeguards, and prohibited uses. This is your foundational compliance document. It should be reviewed quarterly.
  5. Implement transparency measures
     Update client engagement letters, website terms, and internal communications to disclose AI use where required. For chatbots, add clear "AI-powered" labels. For AI-assisted deliverables, include disclosure language.
  6. Establish human oversight protocols
     For each AI system, define who reviews its outputs, how they review them, and what authority they have to override. Document this. For high-risk systems, assign named individuals with appropriate qualifications.
  7. Conduct AI literacy training
     Article 4 requires that all staff using AI systems have sufficient understanding. This means structured training, not just a memo. Cover tool capabilities, limitations, and the firm's usage policies.
  8. Review data processing and privacy
     AI tools processing personal data must comply with both the AI Act and GDPR. Review your DPAs, update privacy impact assessments, and ensure data minimization principles apply to AI inputs.
  9. Verify provider compliance
     Request compliance documentation from your AI tool providers. For GPAI models, confirm the provider has published required technical documentation. Include compliance obligations in vendor contracts.
  10. Set up monitoring and incident reporting
     Establish processes to detect AI system malfunctions, biased outputs, or security incidents. For high-risk systems, you must report serious incidents to both the provider and the relevant national authority.
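Steps 1, 2, and 6 come together in the inventory itself. A minimal register entry might look like the following (the field names and example values are assumptions for illustration, not prescribed by the Act):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One row in the firm's AI inventory (checklist steps 1, 2, and 6)."""
    tool: str
    use_case: str
    risk_tier: str        # prohibited / high / limited / minimal
    oversight_owner: str  # named reviewer of outputs (step 6)
    last_reviewed: date
    disclosures: list[str] = field(default_factory=list)  # step 5

inventory = [
    AISystemRecord(
        tool="ChatGPT Enterprise",
        use_case="drafting assistance",
        risk_tier="limited",
        oversight_owner="Knowledge Management Lead",
        last_reviewed=date(2026, 3, 1),
        disclosures=["engagement letter clause"],
    ),
]

# Step 3: any prohibited use must be flagged and discontinued immediately
flagged = [r.tool for r in inventory if r.risk_tier == "prohibited"]
print(flagged)  # []
```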

8. Penalties and enforcement

The EU AI Act introduces a tiered penalty structure:

- Prohibited AI practices (Art. 5): EUR 35 million or 7% of global annual turnover
- High-risk AI system obligations: EUR 15 million or 3% of global annual turnover
- Providing incorrect information to authorities: EUR 7.5 million or 1% of global annual turnover
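For undertakings, each cap is the higher of the fixed amount and the turnover percentage; for SMEs, the lower of the two applies. A quick sketch of the arithmetic (integer EUR amounts; the function name is my own):

```python
def fine_cap(fixed_cap_eur: int, turnover_pct: int,
             annual_turnover_eur: int, sme: bool = False) -> int:
    """EU AI Act Art. 99: undertakings face the higher of the two caps;
    SMEs and startups face the lower of the two."""
    pct_cap = annual_turnover_eur * turnover_pct // 100
    return min(fixed_cap_eur, pct_cap) if sme else max(fixed_cap_eur, pct_cap)

# Prohibited-practice violation, firm with EUR 1bn global turnover:
print(fine_cap(35_000_000, 7, 1_000_000_000))  # 70000000
```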

For SMEs and startups, proportional caps apply. But for most professional services firms, the reputational damage of non-compliance may outweigh the financial penalties. A law firm found to be using AI irresponsibly risks losing client trust and facing professional disciplinary proceedings.

Enforcement is handled by national authorities. The European AI Office coordinates cross-border cases and oversees GPAI model compliance directly. Each member state was required to designate a national supervisory authority by August 2025.

9. Five common misconceptions

Misconception 1: "We only use ChatGPT, so we are not regulated."

Incorrect. Using ChatGPT makes you a deployer of a GPAI system. Transparency obligations apply. You must inform clients when AI-generated content is included in deliverables, and you must maintain a usage policy.

Misconception 2: "The AI Act is about AI developers, not users."

The regulation creates obligations for both providers (developers) and deployers (users). Deployer obligations are lighter but still legally binding, particularly for transparency, human oversight, and high-risk system monitoring.

Misconception 3: "Our AI vendor handles compliance."

Vendors handle provider obligations (model documentation, technical standards). Deployer obligations (transparency, human oversight, training, incident reporting) are yours. Vendor compliance does not substitute for your own.

Misconception 4: "Full enforcement is not until 2026."

Partially correct. Prohibited practices have been enforceable since February 2025. GPAI obligations since August 2025. High-risk system requirements from August 2026. If you use GPAI tools today, you should already be compliant with the transparency and AI literacy provisions.

Misconception 5: "This only applies to firms in the EU."

The AI Act has extraterritorial scope. If your AI system's output is used within the EU, the regulation applies, regardless of where your firm is headquartered. This mirrors the GDPR's extraterritorial reach.

10. What to do this quarter

With GPAI obligations already in force and high-risk deadlines approaching in August 2026, here is what to prioritize in Q2 2026:

  1. Complete your AI inventory (1 week). If you do not have one, this is the single highest priority. You cannot manage risk you have not mapped.
  2. Publish your AI usage policy (2 weeks). Even a basic policy puts you ahead of most firms. It signals to regulators, clients, and staff that you take AI governance seriously.
  3. Update client engagement letters (1 week). Add disclosure language about AI use. Your competitors will do this. Being late looks worse than being early.
  4. Schedule AI literacy training (ongoing). Start with a 2-hour session covering what tools are approved, what is prohibited, and how outputs should be reviewed. Repeat quarterly.
  5. Identify high-risk AI systems (2 weeks). If any exist (most commonly in HR), begin the conformity assessment process. You have until August 2026 for new systems, August 2027 for existing ones.

The firms that move first will define the standard. In every regulatory cycle, early movers set the bar for what "reasonable compliance" looks like. Late movers inherit the standard without having shaped it. For professional services firms, where trust is the product, being seen as a leader in responsible AI use creates a competitive advantage that is difficult to replicate.
