- Why Lawyers Are Right to Be Cautious (And Why They Should Still Use It)
- The Core Risks: What Can Actually Go Wrong
- How to Configure ChatGPT Safely: Tiers, API, and Training Opt-Out
- What You CAN Safely Use ChatGPT For
- What You Should Never Put Into ChatGPT
- Alternatives to ChatGPT for Sensitive Legal Work
- Setting Up an AI Usage Policy for Your Firm
- European Lawyers: GDPR, BRAO, SRA, and CNB
- The 80/20 Workflow: Maximum Benefit, Zero Data Risk
- Common Mistakes Lawyers Make with ChatGPT
Why Lawyers Are Right to Be Cautious (And Why They Should Still Use It)
The legal profession has a complicated relationship with ChatGPT. On one hand, it is genuinely useful. Lawyers who use it well report saving hours each week on research summaries, first drafts, and internal communications. On the other hand, law firms handle some of the most sensitive personal and commercial data in existence. The consequences of a breach are not just a fine. They can end client relationships, trigger bar association investigations, and expose the firm to professional liability claims.
Both instincts are correct. The caution is warranted. So is the interest.
The mistake is treating this as binary: either avoid ChatGPT entirely, or use it without thinking. Neither extreme serves your clients or your practice. The right approach is knowing exactly what you can and cannot put into which version of the tool, and building that knowledge into firm policy before someone does something they cannot undo.
The headline stat: A 2025 survey by Thomson Reuters found that 74% of lawyers believe generative AI tools like ChatGPT will significantly change legal work within two years. The same survey found that 52% have already used AI tools for at least one work task. The question is not whether to engage. It is how to do it safely.
This guide does not lecture you about theory. It tells you what to configure, what to type, what to never type, and which tools to use instead when the task is too sensitive for a general-purpose AI.
The Core Risks: What Can Actually Go Wrong
Before the configuration section, you need to understand the real risks. Not the vague "AI is dangerous" headline risks, but the specific failure modes that affect law practices.
Client Data Appearing in Training Sets
OpenAI's default settings for the free and Plus tiers include using conversation data to improve their models. This means that if you paste a client's name, case details, or contractual terms into a standard ChatGPT conversation, there is a non-zero chance that information could surface in responses to other users. This is not hypothetical. In 2023, a Samsung engineer inadvertently uploaded proprietary code to ChatGPT, and the incident became public. Law firms face the same risk with client-privileged information.
The good news: this risk is fully configurable. The bad news: most lawyers do not know how to configure it, and default settings are not safe for legal work.
Hallucinations in Legal Contexts
ChatGPT invents things. Not occasionally. Regularly, confidently, and convincingly. The famous 2023 case of Mata v. Avianca made international headlines when a New York attorney submitted a brief with six completely fabricated case citations, all generated by ChatGPT. The court sanctioned the attorneys. The firm faced a public relations disaster.
This is not an edge case. In legal contexts, hallucinations tend to occur precisely where they are most dangerous: citations, statute numbers, procedural deadlines, and specific contract terms. ChatGPT produces text that looks authoritative even when the underlying facts are invented.
Never treat ChatGPT output as a primary source for legal citations, statutory references, or case law. Use it to find leads, then verify every reference in Westlaw, LexisNexis, or the relevant jurisdiction's official database before anything goes near a court filing or client advice.
Confidentiality and Professional Privilege
In most jurisdictions, communications between lawyers and clients are protected by legal professional privilege or attorney-client privilege. This protection can be waived if the communication is disclosed to a third party. The legal status of information processed by a commercial AI platform is still developing, but the prudent position is that uploading privileged communications to a general-purpose AI platform risks waiving privilege. Several bar associations have issued formal guidance on exactly this point.
GDPR and Data Protection Obligations
For European lawyers, there is a fourth risk that their US counterparts do not face in the same way: data protection law. Uploading personal data about clients to a commercial platform without a lawful basis, appropriate safeguards, and a valid Data Processing Agreement (DPA) violates the GDPR. Law firms are data controllers. They cannot outsource that responsibility by pointing to OpenAI's terms of service.
How to Configure ChatGPT Safely: Tiers, API, and Training Opt-Out
ChatGPT is not one thing. There are multiple tiers with significantly different data handling terms. Choosing the right one is not a technical decision. It is a compliance decision.
The Tiers and What They Mean for Legal Work
| Tier | Price | Training on Your Data? | Safe for Legal Work? |
|---|---|---|---|
| Free | $0 | Yes, by default | No. Never use for client work. |
| Plus | $20/month per user | Yes, unless you manually opt out per account | Only if you have opted out of training. Still no DPA. |
| Team | $30/month per user (min 2 users) | No. Training is off by default. | Better. No training on your data. Still no DPA for EU firms. |
| Enterprise | Custom pricing (typically $60+/user/month) | No. Contractual guarantee. | Yes, with proper DPA in place. Best option for EU firms. |
| API (direct) | Pay-per-use (~$0.01-0.06 per 1K tokens) | No. API data is not used for training. | Yes, but requires technical setup and your own DPA arrangement. |
How to Opt Out of Training on ChatGPT Plus
If your firm is on Plus accounts, opt out immediately. The steps are: Settings (bottom-left icon) > Data controls > toggle off "Improve the model for everyone." This prevents your conversations from being used in future training. Note: this does not retroactively remove data already submitted.
This setting applies per account. If multiple lawyers in your firm use Plus, each person must complete this step individually. It cannot be set firm-wide on Plus. That is one reason to upgrade to Team or Enterprise if you have more than two or three users.
ChatGPT Team: The Practical Sweet Spot for Most Firms
For small to mid-size law firms, ChatGPT Team at $30 per user per month offers the right balance of cost and protection. Training is off by default. You get a workspace admin console. You can manage which lawyers have access. Conversations are isolated from public model training.
The limitation for European firms: OpenAI does not currently offer a GDPR-compliant DPA for Team customers. For EU firms with strict data protection obligations, Enterprise is the appropriate tier.
ChatGPT Enterprise: What You Actually Get
Enterprise adds: SOC 2 Type II compliance, a GDPR-ready Data Processing Agreement, no training on your data (contractual), admin controls for the entire organization, dedicated storage, higher rate limits, and extended context windows. For law firms processing client data through ChatGPT, Enterprise is the only tier that provides the contractual protections a GDPR compliance audit will require.
Enterprise pricing is not published. Expect to negotiate. For a firm of 10 to 50 lawyers, budgets of $600 to $3,000 per month are realistic depending on usage volume and negotiated terms.
The API Option
Accessing GPT-4o or o1 directly through OpenAI's API gives you the same models without the chat interface, and OpenAI does not train on API inputs by default. This is technically the most flexible option and works well for firms that want to build custom workflows, integrate with their document management system, or use custom system prompts to enforce firm-specific rules. It requires developer resources to set up properly. For smaller firms without technical staff, Team or Enterprise is more practical.
Quick-start recommendation: If you want to start safely this week, upgrade to ChatGPT Team, confirm in the admin settings that training on your data is off, brief your lawyers on what cannot go in (see below), and proceed. Upgrade to Enterprise when you are ready to process genuinely sensitive material or when a GDPR audit requires it.
What You CAN Safely Use ChatGPT For
The key principle: anything you would write on a whiteboard in a public hallway is safe for ChatGPT. Anything that would be marked "confidential" on a document envelope is not. With that framework in mind, here are the most valuable use cases for legal practices.
Research Summaries and Legal Background
Ask ChatGPT to explain a legal concept, summarize the legal landscape around a topic, or give you a plain-English overview of how a particular regulatory regime works. This is genuinely excellent. You can ask it to explain the difference between strict liability and negligence, summarize the EU AI Act's risk categories, or outline the general framework of GDPR lawful bases for processing. Use the output as a starting point for your own research. Always verify specifics against primary sources.
Drafting Templates and Standard Clauses
Generic templates are safe territory. "Draft a standard NDA with mutual confidentiality obligations under English law" is fine. "Draft an NDA for my client Acme Corp covering their merger talks with Globex" is not. The distinction is client identification and matter-specific detail. Remove those, and most drafting tasks become safe.
Marketing Copy and Business Development
Website copy, LinkedIn posts, newsletter drafts, pitch deck language, award submissions, proposals for new clients (without existing client references), practice area descriptions. All of this is safe, and it is where many lawyers find the most immediate time savings.
Internal Memos and Briefings
A memo explaining a new regulation to the firm's commercial team, a training document on updated GDPR guidance, a briefing on EU AI Act compliance requirements. Firm-internal communications that do not reference specific clients or matters are generally safe.
Interview Prep and Training
Preparing for client presentations, practicing how to explain complex legal issues in plain language, generating practice questions for associate training programs. These use cases have no data risk and consistently save time.
Administrative Drafting
Engagement letter templates (without client names filled in), general terms and conditions, firm policies, HR documents for the firm itself, billing guideline documents. Standard operational work that any firm needs but no one enjoys writing.
Translation Assistance
Getting a rough translation or checking the meaning of a clause in a foreign-language contract before commissioning a certified translation. Useful for understanding. Not a substitute for certified legal translation on executed documents.
What You Should Never Put Into ChatGPT
This section should be printed and posted near every computer in your firm that accesses ChatGPT. Not because lawyers are careless, but because the temptation to paste a document directly into a chat window is natural, and the habit needs to be broken before it forms.
Never put these into ChatGPT (free, Plus, or Team tiers without Enterprise DPA):
- Client names, company names, or any identifier that connects a person to a legal matter
- Case details, litigation strategy, settlement positions
- Confidential contracts with counterparty names or commercially sensitive terms
- Communications protected by attorney-client privilege or legal professional privilege
- Personal data as defined by GDPR: names, addresses, identification numbers, financial data, health data
- Due diligence materials referencing a specific transaction
- Witness statements, deposition transcripts, court orders from active matters
- Internal firm communications about billing disputes, malpractice concerns, or partner compensation
- Any document marked "confidential," "privileged," or "without prejudice"
The practical workaround: anonymize and abstract before you paste. Instead of "My client, TechCorp Ltd, is in a dispute with their distributor in Germany regarding exclusivity terms," write "A technology company has a dispute with a European distributor over exclusivity terms in their agreement." You lose nothing analytically, and you have not disclosed client-identifying information.
This takes three minutes of habit to build. It is worth building it explicitly through training rather than assuming lawyers will develop it on their own under time pressure.
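The anonymize-and-abstract habit can also be partly automated. Below is a minimal sketch of a pre-paste redaction step: known client terms and obvious personal-data patterns are replaced with placeholders before text leaves the firm. The client names, patterns, and placeholder labels are hypothetical examples; a real list would be maintained centrally, and no regex filter replaces a lawyer's own review before pasting.

```python
# Minimal redaction sketch to run before text goes into any general-purpose
# AI tool. Client names and patterns here are hypothetical examples only;
# automated redaction supplements, but never replaces, human review.
import re

# Terms the firm knows must never leave its systems (hypothetical).
CLIENT_TERMS = {
    "TechCorp Ltd": "[CLIENT]",
    "Globex": "[COUNTERPARTY]",
}

# Generic patterns for data that is personal regardless of the matter.
GENERIC_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),   # email addresses
    (re.compile(r"\b\d{2}[./-]\d{2}[./-]\d{4}\b"), "[DATE]"),  # dd.mm.yyyy dates
]

def anonymize(text: str) -> str:
    """Replace known client terms, then generic personal-data patterns."""
    for term, placeholder in CLIENT_TERMS.items():
        text = text.replace(term, placeholder)
    for pattern, placeholder in GENERIC_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

safe = anonymize("TechCorp Ltd disputes exclusivity; contact j.doe@techcorp.com")
print(safe)  # -> "[CLIENT] disputes exclusivity; contact [EMAIL]"
```

Even a crude filter like this catches the most common slip, a pasted name or email, while the lawyer stays responsible for abstracting matter-specific facts.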
Alternatives to ChatGPT for Sensitive Legal Work
If the task requires working with actual client documents, case-specific details, or privileged materials, the answer is not to find a clever way to use ChatGPT anyway. The answer is to use tools built for exactly this purpose, with the contractual and technical protections legal work demands.
Luminance
Built specifically for legal document analysis. Luminance processes contracts within your firm's environment, does not train on your data, offers EU-hosted deployments, and has a GDPR-compliant DPA. It reads documents with an understanding of legal language trained on millions of legal documents, not general internet text. Pricing starts around $1,000 to $5,000 per month depending on modules. Strong in due diligence, contract review, and M&A document processing. This is the tool for high-volume document analysis where you need to work with actual client contracts.
CoCounsel (Thomson Reuters)
Now integrated into Westlaw, CoCounsel provides AI-assisted legal research, deposition preparation, and contract analysis within the Thomson Reuters data protection framework. For firms already on Westlaw, it adds approximately $100 per user per month. Data stays within the Thomson Reuters secure environment. Does not use your queries for external model training. Strong for research-heavy practices doing litigation, regulatory, or compliance work.
Harvey AI
Built on GPT-4 with legal-specific fine-tuning. Harvey is designed for law firms and legal departments, with enterprise-grade data protection and confidentiality guarantees. Currently available primarily to AmLaw 100 and larger firms, with broader availability expanding. Pricing is not public but reported in the $150 to $300 per user per month range. Particularly strong for drafting, due diligence analysis, and regulatory research. Watch for availability if you are a mid-size firm: rollout is ongoing.
Spellbook (Rally)
Focuses on contract drafting and review. Integrates directly into Microsoft Word, which is where most lawyers already work. AI capabilities operate within your firm's Microsoft environment if deployed via Azure. Pricing around $500 per user per month. Popular with transactional practices because it fits into the existing workflow rather than requiring a separate platform. Uses OpenAI models with contractual data protection provisions appropriate for legal work.
Legartis
European-built, GDPR-native contract analysis platform. Data is processed within EU infrastructure. Strong for standard commercial contracts: NDAs, supply agreements, service contracts. Particularly relevant for German and Swiss firms (Legartis is Swiss-based) where local regulatory nuances matter. Pricing varies by firm size and contract volume. Worth evaluating if your firm handles high volumes of commercial contract review and data sovereignty is a primary concern.
Microsoft Copilot for Microsoft 365
If your firm uses Microsoft 365, Copilot processes data within your existing Microsoft tenant. Your firm's data does not leave your Microsoft environment. It does not go to OpenAI's servers in the standard sense. Microsoft holds the DPA. For European firms, Microsoft offers EU Data Boundary commitments. At roughly $30 per user per month added to an existing M365 subscription, this is often the lowest-friction path to AI assistance for firms already in the Microsoft ecosystem. It handles drafting, email, meeting summaries, and document review within Word, Outlook, and Teams. Limitations: it is less specialized for legal work than purpose-built tools.
Decision framework: For research summaries, marketing, and template drafting: ChatGPT Team is sufficient. For contract analysis with real client documents: Luminance or Spellbook. For research on active matters: CoCounsel. For end-to-end AI assistance within your existing Microsoft environment: Copilot. For firms without any specialized tools yet: start with CoCounsel if on Westlaw, or ChatGPT Team for non-sensitive tasks, and add specialized tools as workflows mature.
Setting Up an AI Usage Policy for Your Firm
If more than one person in your firm uses AI, you need a policy. Not because lawyers cannot be trusted, but because the rules are not intuitive, the technology is new, and one incident can create significant liability. A policy also protects the lawyers themselves by giving them clear guidance when they are under deadline pressure.
What a Firm AI Usage Policy Should Cover
Keep it short enough to actually be read; a 20-page policy will not be. Here is a practical outline:
1. Approved tools and tiers. List the specific tools and subscription tiers the firm has approved. "ChatGPT" is not specific enough. "ChatGPT Team (firm workspace)" is. Include Microsoft Copilot if deployed, any legal-specific tools, and the API if relevant.
2. Data classification rules. Create a simple two-category system: information that can go into AI tools, and information that cannot. Client-identifying information, privileged communications, and confidential documents are in the second category. Non-client-specific drafting tasks, research background questions, and marketing copy are in the first. Give lawyers two or three examples of each.
3. Verification requirements. All AI-generated legal citations, case references, statutory provisions, and procedural rules must be independently verified before use in any client-facing work or court submission. This is not optional. Put it in the policy.
4. Disclosure obligations. Address whether and when the firm discloses AI use to clients. Many jurisdictions do not yet require disclosure for AI-assisted drafting, but some bar associations are moving in that direction. Check your local bar guidance and have a policy position before you need one.
5. Personal device and personal account prohibition. Lawyers must not use personal ChatGPT accounts (especially free or Plus tiers without training opt-out) for work-related tasks. This is where most ad hoc data leaks happen.
6. Review before submission. AI drafts must always be reviewed by a qualified lawyer before any client delivery or court submission. This is already a professional obligation in most jurisdictions. Putting it in the AI policy reinforces it in context.
7. Incident reporting. If a lawyer believes client data may have been submitted to an AI tool inappropriately, they must report it immediately. Define who they report to and what happens next.
European Lawyers: GDPR, BRAO, SRA, and CNB
European lawyers face a regulatory layer that significantly changes the analysis. General-purpose AI tools are not GDPR-neutral. Processing personal data through them requires a legal basis, appropriate safeguards, and in most cases a DPA with the AI provider.
GDPR Obligations When Using LLMs
If you input personal data into ChatGPT, you are a data controller using a data processor. Under GDPR Article 28, you need a written contract with your processor that covers what data is processed, for what purpose, with what security measures, and what happens to data after the engagement ends. OpenAI's standard Terms of Service for free and Plus users do not constitute a GDPR-compliant DPA. ChatGPT Enterprise does provide a DPA. ChatGPT Team is improving here but has not reached full GDPR compliance for EU data controllers.
The practical implication: on Team tier, do not input personal data. Use ChatGPT Team for tasks that work with anonymized or entirely generic content. For tasks that require personal data, you need Enterprise or a purpose-built legal tool with existing GDPR compliance baked in.
There is also the question of transfers outside the EU. OpenAI is a US company. Data processed through ChatGPT may transit or rest on US infrastructure. This raises standard-contractual-clause questions that your DPA needs to address. Enterprise customers can review data processing details in their agreements.
Germany: BRAO and Professional Rules
German lawyers (Rechtsanwälte) are bound by the Bundesrechtsanwaltsordnung (BRAO) and the professional rules set by the regional bars (Rechtsanwaltskammern). The duty of confidentiality under Section 43a BRAO is strict. Uploading client information to external AI platforms that lack adequate data protection agreements is a potential violation.
The Bundesrechtsanwaltskammer (BRAK) has issued guidance noting that AI use in law practice is permitted but that lawyers remain responsible for ensuring confidentiality. The guidance specifically calls attention to the risk of confidential information being processed by third-party AI providers without adequate safeguards. German firms should not use ChatGPT free or Plus for client work. Enterprise with a DPA, or specialized legal AI with EU data hosting, is the appropriate path.
UK: Solicitors Regulation Authority (SRA)
The SRA has taken a pragmatic approach. Their 2025 guidance on AI confirms that AI tools can be used in legal practice, provided solicitors comply with their existing professional obligations: competence, confidentiality, and supervisory responsibility. The SRA does not ban ChatGPT. It requires that solicitors understand the tools they use, verify AI output, maintain client confidentiality, and not allow AI use to create conflicts of interest.
Practically, the SRA guidance means: lawyers must understand what happens to data they submit to AI tools, must verify AI-generated legal content, and must maintain meaningful supervision over AI-assisted work product. The SRA has also indicated it will treat AI-related professional failures the same as other failures: negligence is negligence, regardless of whether a human or an AI made the error.
France: Conseil National des Barreaux (CNB)
The CNB issued a detailed report on AI in legal practice in 2024. The report confirms that AI use is compatible with professional obligations but emphasizes that the duty of professional secrecy (secret professionnel) under French law is absolute and applies to AI-processed data. French lawyers must ensure that any AI platform they use processes data in a manner consistent with Article 66-5 of the Law of 31 December 1971. The CNB recommends using AI platforms that offer EU data residency and that contractually commit to not retaining or using firm data for training. This means ChatGPT Enterprise or legal-specific tools with appropriate data residency, not free or Plus tiers.
The short version for European lawyers: Free tier is never acceptable for any work-related task. Plus requires manual training opt-out and still lacks a GDPR DPA. Team is acceptable for non-personal, non-confidential tasks. Enterprise is required if you want to work with personal data or client information. For active client matters, use a purpose-built legal AI platform. This is not conservative advice. It is what the bar associations are saying.
The 80/20 Workflow: Maximum Benefit, Zero Data Risk
Here is a concrete workflow that gives most of the productivity benefit while eliminating data risk. Implement this Monday and you will see results by Thursday.
Morning: Research Prep
Before diving into a research task, use ChatGPT to orient yourself. Ask it to explain the legal framework in plain language, identify the key questions a court would ask, and suggest the most important concepts to research. Do not mention the client or the matter. Use this as a thought partner to structure your research before you open Westlaw or LexisNexis. Time saved: 20 to 40 minutes per research task.
Drafting: Start Generic, Then Customize
Use ChatGPT to draft the generic shell of any document. A generic NDA. A generic employment agreement. A generic board resolution. Then take that shell into your firm's document management system and add client-specific details without ever returning to ChatGPT. You get the structure and language from AI. The client-specific details stay inside your secure environment. Time saved: 30 to 60 minutes per document type.
Correspondence: The Tone Adjustment Step
Write your first draft yourself, or use a generic template. Then ask ChatGPT to adjust the tone, tighten the language, or translate it into plain English for a non-lawyer audience. Paste in your own text (no client names) and ask for editorial help. The output needs review, but this step is consistently faster than starting from scratch. Time saved: 15 to 30 minutes per significant letter.
Client Education Materials
Guides explaining GDPR obligations to clients, explainers on the EU AI Act for in-house counsel, summaries of how a particular legal process works. These materials are generic by design. ChatGPT is excellent at producing clear first drafts. Your value-add is legal accuracy and firm voice. Time saved: 1 to 3 hours per document.
Meeting Prep
Before a client meeting, use ChatGPT to brief yourself on an area of law you are less familiar with. Ask it to anticipate likely questions. Ask it to generate a list of issues a client in a particular situation typically raises. No client data goes in. You come to the meeting more prepared. Time saved: 30 to 60 minutes per complex meeting.
Verification Step (Always)
Before any ChatGPT output reaches a client or a court document, verify every legal reference. Not some. Every one. This takes five minutes and has saved multiple lawyers from the Mata v. Avianca situation. Build it into the workflow as a non-negotiable step, not an optional quality check.
Common Mistakes Lawyers Make with ChatGPT
These are the patterns that come up most consistently. Awareness is most of the solution.
Using Personal Accounts for Work Tasks
Associates and junior lawyers often have personal ChatGPT accounts and use them for work tasks because it is faster than waiting for firm IT to set up approved access. The firm has no visibility, no controls, and no DPA coverage. Fix this by providing approved firm access quickly so there is no temptation to route around it.
Trusting Citations Without Verification
ChatGPT produces citations that look real. The case name is plausible. The citation format is correct. The quoted holding sounds reasonable. None of it may be real. Treat every case citation, statute reference, and regulation number as unverified until you have checked it in a primary source. This is not optional extra diligence. It is the minimum standard of care when using generative AI for legal research support.
Treating AI Output as Final
AI drafts are starting points. Not final deliverables. A well-crafted AI draft still needs a qualified lawyer to review it for accuracy, tone, jurisdiction-specific nuance, and the specific circumstances of the client. The time saving comes from not starting with a blank page, not from skipping review.
Not Anonymizing Before Pasting
This is the most common mistake. A lawyer has a contract dispute involving a specific client and counterparty. They want help thinking through the arguments. They paste the contract directly into ChatGPT, names included. The anonymization step takes two minutes. Most lawyers skip it because they are in a hurry. Build the habit explicitly. Teach it in your next team meeting.
Using the Wrong Tier
Lawyers fall back on free accounts, personal Plus accounts, or ChatGPT.com without checking the training settings. The tiering section above gives you the clear breakdown. The fix is providing firm access through the correct tier so there is no reason to route around it.
Ignoring Bar Association Guidance
Every major European bar association has now published guidance on AI use. Several have updated it in 2024 and 2025 as the tools have evolved. Lawyers who have not read their bar association's current AI guidance are practicing blind. These documents are not long. They are worth reading before your next AI-related decision.
Not Having a Policy Until After an Incident
Firms almost always create AI policies in response to a problem. A junior associate used the free tier. A client asked whether their data was shared with AI. A partner saw something in the news. The policy conversation should happen before any of those events. It takes two hours to draft a reasonable policy. It takes much longer to manage the aftermath of an incident.
Need Help Setting Up a Safe AI Workflow?
We help European law firms configure AI tools correctly, draft usage policies, and build workflows that deliver real time savings without touching client data. Our AI readiness assessment identifies exactly where your firm can start safely.
Take the Free Assessment · Email Irene directly
Sources and References
- Thomson Reuters. "2025 Generative AI in Legal Survey." thomsonreuters.com
- Mata v. Avianca, Inc. S.D.N.Y. 2023. Case No. 1:22-cv-01461. Opinion sanctioning counsel (Levidow, Levidow & Oberman, P.C.) for AI-fabricated citations.
- OpenAI. "Enterprise Privacy and Security." Data processing terms and training opt-out documentation. openai.com/enterprise-privacy
- OpenAI. "ChatGPT Team." Product page and data handling terms. openai.com/chatgpt/team
- Bundesrechtsanwaltskammer (BRAK). "Hinweise zur Nutzung von KI in der Anwaltschaft." 2024. brak.de
- Solicitors Regulation Authority (SRA). "Generative AI: Guidance for Solicitors." 2025. sra.org.uk
- Conseil National des Barreaux (CNB). "Intelligence artificielle et profession d'avocat." 2024. cnb.avocat.fr
- European Data Protection Board. "Opinion 28/2024 on certain data protection aspects related to the processing of personal data in the context of AI models." 2024. edpb.europa.eu
- Luminance. "Legal AI Platform." Data protection and EU hosting documentation. luminance.com
- Thomson Reuters CoCounsel. Product documentation and data security overview. legal.thomsonreuters.com
- Spellbook (Rally). "Enterprise Security and Privacy." spellbook.legal
- Legartis. "Contract AI for European Law Firms." legartis.ai
- Microsoft. "EU Data Boundary for the Microsoft Cloud." microsoft.com
- GDPR, Article 28. Processor obligations. Official text at gdpr.eu