The EU AI Act is the first comprehensive AI regulation in the world. It entered into force on August 1, 2024, with obligations rolling out in phases through 2027. For professional services firms, the consequences are both direct (your own AI tools are now regulated) and indirect (your clients will need help navigating these rules).
This guide covers what law firms, consulting firms, and accountancies need to know. No legal theory. Just practical requirements, deadlines, and steps.
- What the EU AI Act requires
- Key deadlines for professional services
- Risk classification for tools you already use
- Your obligations as a deployer
- General-purpose AI: ChatGPT, Copilot, Claude
- Sector-specific guidance
- 10-step compliance checklist
- Penalties and enforcement
- 5 common misconceptions
- What to do this quarter
1. What the EU AI Act requires
The EU AI Act classifies AI systems into four risk tiers. Each tier carries different obligations. The regulation applies to anyone who develops or deploys AI systems within the EU, regardless of where the AI provider is based.
For professional services firms, the most relevant role is deployer: an entity that uses an AI system under its authority. If your firm uses Harvey, Luminance, Kira, CoCounsel, or even ChatGPT for client work, you are a deployer.
The four risk levels:
- Unacceptable risk: Banned outright. Social scoring, emotion recognition in workplaces (with narrow exceptions), manipulative techniques. Enforceable since February 2025.
- High risk: Permitted but heavily regulated. Must meet conformity assessment, documentation, human oversight, and transparency requirements. Full enforcement from August 2026.
- Limited risk: Transparency obligations only. Users must be informed when they interact with AI or when content is AI-generated.
- Minimal risk: No specific obligations. Voluntary codes of conduct encouraged.
2. Key deadlines for professional services
The phased rollout gives professional services firms a concrete timeline:
- August 2024: the Act enters into force.
- February 2025: prohibitions on unacceptable-risk practices and the AI literacy duty (Art. 4) apply.
- August 2025: GPAI obligations apply; member states designate their national supervisory authorities.
- August 2026: high-risk system requirements become fully enforceable for new systems.
- August 2027: extended transition ends for certain existing high-risk systems.
Why this matters now: GPAI obligations (August 2025) are already in force. If your firm uses ChatGPT, Claude, Copilot, or any other general-purpose AI tool, you should already have transparency measures in place. The August 2026 deadline for high-risk systems is 5 months away.
3. Risk classification for tools you already use
The risk level depends on the use case, not the tool itself. The same AI system can be minimal risk in one context and high risk in another. Here is how common professional services AI tools map to the EU AI Act risk tiers:
| AI Tool / Use Case | Risk Level | Why |
|---|---|---|
| AI document review (Harvey, Luminance, Kira) | Limited | AI-assisted search and analysis. Human makes final decisions. Transparency obligation: clients should know AI was used. |
| AI contract drafting/generation | Limited | AI generates content that a human reviews before delivery. Transparency obligation applies. |
| AI for HR decisions (hiring, promotion, termination) | High | Annex III explicitly lists AI in employment, worker management, and access to self-employment. Full high-risk compliance required. |
| AI credit scoring / financial assessment | High | Listed in Annex III. AI evaluating creditworthiness or insurance risk requires conformity assessment. |
| Chatbots for client intake | Limited | Must disclose that the user is interacting with an AI system. Transparency only. |
| AI legal research (Westlaw AI, CoCounsel) | Minimal | Internal tool assisting professionals. No direct client impact. Voluntary codes apply. |
| AI tax filing / automated accounting | Limited | Produces outputs that a human reviews. If AI makes autonomous decisions affecting tax obligations, could escalate to high risk. |
| Emotion recognition in interviews | Prohibited | Emotion recognition in workplace/education contexts is banned (with narrow law enforcement exceptions). Since February 2025. |
| AI-powered due diligence | Limited | AI analyses documents, human makes decisions. Transparency obligation. May escalate if AI autonomously flags/rejects deals. |
| Predictive analytics for case outcomes | High (potential) | If used in administration of justice or to influence judicial outcomes, this falls under Annex III. Context-dependent. |
The critical distinction: Most AI tools used in professional services today fall in the limited risk category, which means transparency is the primary obligation. But two areas frequently trigger high risk: HR decisions and financial assessments. If your firm uses AI in either area, start compliance work now.
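Because the tier turns on the use case, many firms encode this logic in their internal AI register. Here is a minimal, illustrative Python sketch; the use-case keys and tier assignments are hypothetical and simply mirror the table above, not any official taxonomy.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical mapping of use cases (not tools) to tiers, mirroring the
# table above. A real register should also cite the Annex III category
# or Art. 50 obligation that justifies each entry.
USE_CASE_TIERS = {
    "document_review": RiskTier.LIMITED,
    "contract_drafting": RiskTier.LIMITED,
    "hr_hiring": RiskTier.HIGH,                      # Annex III: employment
    "credit_scoring": RiskTier.HIGH,                 # Annex III: creditworthiness
    "client_intake_chatbot": RiskTier.LIMITED,
    "legal_research": RiskTier.MINIMAL,
    "emotion_recognition_interview": RiskTier.PROHIBITED,
}

def classify(tool: str, use_case: str) -> RiskTier:
    """The tool name is recorded for the inventory only; the tier follows
    the use case. Unknown use cases default to HIGH so they get reviewed
    rather than silently waved through."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

# The same tool lands in different tiers depending on context:
assert classify("GenericAI", "contract_drafting") is RiskTier.LIMITED
assert classify("GenericAI", "hr_hiring") is RiskTier.HIGH
```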
4. Your obligations as a deployer
Even when you are not developing AI (just using tools built by others), the EU AI Act assigns specific obligations to deployers. The scope depends on the risk level.
For all risk levels
- Transparency to natural persons: people must be informed when they are interacting with an AI system and when content they receive is AI-generated (Art. 50).
- AI literacy: Ensure your staff have sufficient understanding of the AI systems they use. This is not a vague aspiration. It is a legal requirement (Art. 4).
For high-risk AI systems
- Human oversight: Assign competent, trained individuals to oversee the AI system's operation (Art. 26(2), applying the oversight measures designed under Art. 14).
- Input data quality: To the extent you control the input data, ensure it is relevant and sufficiently representative for the system's intended purpose (Art. 26(4)).
- Record keeping: Keep the logs the AI system generates, where they are under your control, for at least six months (Art. 26(6)).
- Fundamental rights impact assessment: For public bodies and certain private entities, a FRIA is required before deploying high-risk AI (Art. 27).
- Monitoring: Monitor the AI system's operation on the basis of the instructions for use, and report any serious incidents to the provider and the relevant market surveillance authority (Art. 26(5)).
- Inform workers: Staff working with or affected by high-risk AI must be informed (Art. 26(7)).
5. General-purpose AI: ChatGPT, Copilot, Claude
General-purpose AI (GPAI) models get their own regulatory track (Chapter V of the Act). Since August 2025, GPAI providers must:
- Draw up technical documentation and publish a sufficiently detailed public summary of the training content
- Implement a copyright compliance policy (including respect for text-and-data-mining opt-outs)
- Identify and mitigate systemic risks (for models with systemic risk designation)
As a deployer of GPAI tools, your obligations are different from the provider's. You do not need to document the model's architecture. But you do need to:
- Inform clients and counterparties when AI-generated content is used in deliverables.
- Maintain internal usage policies documenting which tools are approved, for what purposes, and with what safeguards.
- Train your team on responsible use, limitations, and when human review is mandatory.
- Verify the provider's compliance. If your GPAI provider cannot demonstrate compliance, you share the liability risk.
Practical implication: If you are using ChatGPT Enterprise, Microsoft Copilot, Claude, or similar tools, you should already have a written AI usage policy, a training program, and disclosure language in your client engagement letters. This is not merely best practice; it has been a legal requirement since August 2025.
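As a sketch of what "approved tools, purposes, and safeguards" can look like once written down, here is a hypothetical policy record with a pre-use check. The tool name, fields, and disclosure wording are all invented for illustration; your written policy remains the authoritative source.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ApprovedTool:
    name: str
    permitted_purposes: tuple[str, ...]
    human_review_required: bool   # firm-level safeguard, not a statutory term
    client_disclosure: str        # language to attach to deliverables

# Hypothetical register; keep the real one in step with your written policy.
POLICY = {
    "gpai_assistant": ApprovedTool(
        name="gpai_assistant",
        permitted_purposes=("drafting", "summarisation"),
        human_review_required=True,
        client_disclosure=(
            "Parts of this deliverable were prepared with AI assistance "
            "and reviewed by a qualified professional."
        ),
    ),
}

def check_use(tool: str, purpose: str) -> str:
    """Return the required disclosure text if the use is permitted;
    refuse anything the policy does not explicitly allow."""
    entry = POLICY.get(tool)
    if entry is None or purpose not in entry.permitted_purposes:
        raise PermissionError(f"{tool!r} is not approved for {purpose!r}")
    return entry.client_disclosure
```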
6. Sector-specific guidance
Law firms
Legal services sit at a unique intersection. You are both subject to the regulation (your own AI tools) and positioned to advise clients on it. Key points:
- Client privilege implications: Using cloud-based AI tools means client data leaves your systems. Update your privacy policies and data processing agreements.
- Professional conduct rules: Bar associations across Europe are issuing supplementary guidance. The CCBE published its position paper in 2025. Check your national bar association for jurisdiction-specific rules.
- Malpractice risk: AI-generated legal analysis that a lawyer fails to verify creates liability. The EU AI Act's transparency requirements add a regulatory layer on top of existing professional obligations.
- New service opportunity: AI Act compliance advisory, AI contract review, FRIA preparation, and AI governance consulting are emerging practice areas.
Consulting and advisory firms
Management consultancies face dual exposure. Internal AI tools and client-facing AI recommendations both fall under the regulation.
- AI strategy work: If you advise clients on AI implementation, those recommendations must account for EU AI Act compliance. Recommending a high-risk AI deployment without flagging regulatory obligations could create professional liability.
- Internal analytics: Consulting firms increasingly use AI for market analysis, benchmarking, and client presentations. These tools typically fall under minimal or limited risk, but usage policies are still required.
- HR applications: If your firm uses AI for talent assessment, staffing, or performance management, this is high-risk territory. Many large consultancies are early adopters of AI HR tools.
Accounting and audit firms
The intersection of AI and financial reporting creates specific compliance considerations.
- AI in audit procedures: Using AI to analyze financial statements, detect anomalies, or assess going-concern risk falls under limited risk (transparency) when the auditor retains decision-making authority.
- Automated tax compliance: If AI autonomously files returns or makes tax elections, this could escalate to high risk. Maintain human-in-the-loop for all fiduciary decisions.
- Client advisory: When advising clients on AI-driven financial planning or risk management tools, ensure the advice accounts for the regulatory framework.
- KSeF and e-invoicing: In Poland, the mandatory KSeF system (effective February 2026) creates opportunities for AI-driven compliance tools, but these must also comply with the AI Act.
7. 10-step compliance checklist
Use this checklist to assess your firm's readiness. Each step maps to specific articles in the regulation.
1. Inventory all AI systems in use. List every AI tool your firm uses, from enterprise platforms (Harvey, Luminance) to individual tools (ChatGPT accounts, Copilot, Grammarly). Include shadow IT. You cannot comply with what you do not know about. (A minimal inventory sketch follows this list.)
2. Classify each system by risk level. Map each tool to its use case and determine the risk tier. Use the table above as a starting point. Remember: risk level is determined by use case, not by the tool itself.
3. Check for prohibited uses. Confirm no AI system is used for social scoring, workplace emotion recognition, or other banned practices. These prohibitions have been enforceable since February 2025. If you find one, discontinue it immediately.
4. Draft or update your AI usage policy. Document approved tools, permitted use cases, required safeguards, and prohibited uses. This is your foundational compliance document; review it quarterly.
5. Implement transparency measures. Update client engagement letters, website terms, and internal communications to disclose AI use where required. For chatbots, add clear "AI-powered" labels. For AI-assisted deliverables, include disclosure language.
6. Establish human oversight protocols. For each AI system, define who reviews its outputs, how they review them, and what authority they have to override. Document this. For high-risk systems, assign named individuals with appropriate qualifications.
7. Conduct AI literacy training. Article 4 requires that all staff using AI systems have a sufficient understanding of them. This means structured training, not just a memo. Cover tool capabilities, limitations, and the firm's usage policies.
8. Review data processing and privacy. AI tools processing personal data must comply with both the AI Act and the GDPR. Review your DPAs, update privacy impact assessments, and ensure data minimization principles apply to AI inputs.
9. Verify provider compliance. Request compliance documentation from your AI tool providers. For GPAI models, confirm the provider has published the required technical documentation. Include compliance obligations in vendor contracts.
10. Set up monitoring and incident reporting. Establish processes to detect AI system malfunctions, biased outputs, and security incidents. For high-risk systems, report serious incidents to both the provider and the relevant national authority.
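To make steps 1, 6, and 10 concrete, here is a minimal, illustrative Python sketch of an inventory record. Every name and field in it is hypothetical; treat it as a shape to adapt, not a prescribed format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AISystemRecord:
    tool: str                 # step 1: what is actually in use
    use_case: str             # step 2 input: risk follows the use case
    risk_tier: str            # "prohibited" | "high" | "limited" | "minimal"
    overseer: str             # step 6: named individual who reviews outputs
    incidents: list[str] = field(default_factory=list)

    def log_incident(self, description: str) -> None:
        # Step 10: keep an internal trail. For high-risk systems, a serious
        # incident must also be reported to the provider and the authority.
        stamp = datetime.now(timezone.utc).isoformat()
        self.incidents.append(f"{stamp} {description}")

# Usage: one record per tool/use-case pair in the firm's register.
record = AISystemRecord("DocReviewBot", "document_review", "limited", "J. Kowalski")
record.log_incident("hallucinated citation caught during partner review")
```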
8. Penalties and enforcement
The EU AI Act introduces a tiered penalty structure:
| Violation | Maximum Fine (whichever is higher) |
|---|---|
| Prohibited AI practices (Art. 5) | EUR 35 million or 7% of global annual turnover |
| High-risk AI system obligations | EUR 15 million or 3% of global annual turnover |
| Providing incorrect information to authorities | EUR 7.5 million or 1% of global annual turnover |
For SMEs and startups, proportional caps apply (the lower of the two amounts). But for most professional services firms, the reputational damage of non-compliance may outweigh the financial penalties. A law firm found to be using AI irresponsibly risks losing client trust and facing professional disciplinary proceedings.
Enforcement is handled by national authorities, while the European AI Office coordinates cross-border cases and oversees GPAI model compliance directly. Each member state was required to designate its national supervisory authority by August 2025.
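As the table header notes, the caps work as "whichever is higher" of the fixed amount and the turnover percentage (Art. 99), so for large organisations the percentage dominates. A quick illustration:

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """AI Act fine ceiling: the higher of the fixed cap and the given
    percentage of worldwide annual turnover (Art. 99)."""
    return max(fixed_cap_eur, pct * turnover_eur)

# A group with EUR 1bn turnover facing a prohibited-practice violation:
print(max_fine(1_000_000_000, 35_000_000, 0.07))  # 70000000.0 -> EUR 70m
```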
9. Five common misconceptions
Misconception 1: "We only use ChatGPT, so we are not regulated."
Incorrect. Using ChatGPT makes you a deployer of a GPAI system. Transparency obligations apply. You must inform clients when AI-generated content is included in deliverables, and you must maintain a usage policy.
Misconception 2: "The AI Act is about AI developers, not users."
The regulation creates obligations for both providers (developers) and deployers (users). Deployer obligations are lighter but still legally binding, particularly for transparency, human oversight, and high-risk system monitoring.
Misconception 3: "Our AI vendor handles compliance."
Vendors handle provider obligations (model documentation, technical standards). Deployer obligations (transparency, human oversight, training, incident reporting) are yours. Vendor compliance does not substitute for your own.
Misconception 4: "Full enforcement is not until 2026."
Partially correct. Prohibited practices have been enforceable since February 2025. GPAI obligations since August 2025. High-risk system requirements from August 2026. If you use GPAI tools today, you should already be compliant with the transparency and AI literacy provisions.
Misconception 5: "This only applies to firms in the EU."
The AI Act has extraterritorial scope. If your AI system's output is used within the EU, the regulation applies, regardless of where your firm is headquartered. This mirrors the GDPR's extraterritorial reach.
10. What to do this quarter
With GPAI obligations already in force and high-risk deadlines approaching in August 2026, here is what to prioritize in Q2 2026:
- Complete your AI inventory (1 week). If you do not have one, this is the single highest priority. You cannot manage risk you have not mapped.
- Publish your AI usage policy (2 weeks). Even a basic policy puts you ahead of most firms. It signals to regulators, clients, and staff that you take AI governance seriously.
- Update client engagement letters (1 week). Add disclosure language about AI use. Your competitors will do this. Being late looks worse than being early.
- Schedule AI literacy training (ongoing). Start with a 2-hour session covering what tools are approved, what is prohibited, and how outputs should be reviewed. Repeat quarterly.
- Identify high-risk AI systems (2 weeks). If any exist (most commonly in HR), begin the conformity assessment process. You have until August 2026 for new systems, August 2027 for existing ones.
The firms that move first will define the standard. In every regulatory cycle, early movers set the bar for what "reasonable compliance" looks like. Late movers inherit the standard without having shaped it. For professional services firms, where trust is the product, being seen as a leader in responsible AI use creates a competitive advantage that is difficult to replicate.
Assess your firm's AI readiness
Take our 3-minute assessment to identify compliance gaps and get a personalized action plan.