The EU AI Act Just Changed the Rules for Your Productivity Tools


In August 2024, the EU AI Act—the world's first comprehensive artificial intelligence regulation—entered into force. In February 2025, its bans on prohibited AI practices became enforceable. Throughout 2025 and 2026, additional requirements phase in for different risk categories.

For users of AI-powered productivity tools, this matters. The tools you use for email, task management, writing assistance, and knowledge management now operate under new legal frameworks. Vendors must comply with specific requirements. Users gain new rights. Non-compliance creates consequences.

This guide explains the EU AI Act's structure, how it classifies productivity tools, what requirements apply, and how to evaluate tools in this new regulatory environment. It's written for UK readers, who face an interesting position: post-Brexit, the UK is not directly bound by EU law, but any tool serving EU customers must comply—which means most tools you use will operate under EU AI Act frameworks regardless.

The EU AI Act: Structure and Approach

The EU AI Act regulates AI based on risk. Higher-risk AI systems face stricter requirements; lower-risk systems face lighter-touch obligations or no specific regulation at all.

Risk Categories

Unacceptable Risk (Banned): AI systems that pose unacceptable risks are prohibited outright. These include social scoring systems by governments, real-time biometric identification in public spaces (with limited exceptions), manipulation techniques exploiting vulnerabilities, and emotion recognition in workplaces and educational settings.

None of these prohibitions directly affect productivity tools—they target surveillance and manipulation applications.

High Risk (Strict Requirements): AI systems affecting fundamental rights or safety fall into the high-risk category. Examples include AI for hiring decisions, credit scoring, education assessment, law enforcement, and critical infrastructure management.

Some productivity-adjacent tools may be high-risk: AI hiring platforms, employee monitoring systems with decision-making authority, and workplace assessment tools making consequential decisions about workers.

Limited Risk (Transparency Requirements): AI systems that interact with humans or generate content must disclose their AI nature. This is the category most productivity tools fall into.

Minimal Risk (Unrestricted): AI systems posing minimal risk have no specific requirements beyond existing law. Spam filters, basic recommendation systems, and simple automation fall here.

What "Limited Risk" Means for Productivity Tools

Most AI-powered productivity tools—task managers with AI prioritisation, writing assistants, email clients with AI triage, note-taking apps with AI features—are classified as limited risk.

Limited risk primarily means transparency requirements:

Users must know when they're interacting with AI. If a chatbot responds to your query, you must be informed it's AI, not human.

AI-generated content must be identifiable. If AI writes your email draft, systems must enable identification of that content as AI-generated.

Deepfakes and synthetic content have specific labelling requirements.

The practical impact: AI features in productivity tools must be clearly identified as AI. Users must be able to understand when AI is influencing their experience.
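To make this concrete, here is a minimal sketch of how a tool might attach provenance metadata to generated content so the interface can disclose AI involvement. The type and field names are hypothetical, invented for illustration rather than taken from the Act or any specific product.

```typescript
// Hypothetical content model: every block of text carries provenance
// metadata so the UI can disclose when AI contributed to it.
type ContentOrigin = "human" | "ai_generated" | "ai_assisted";

interface ContentBlock {
  text: string;
  origin: ContentOrigin;
  model?: string;       // model that produced the draft, if AI was involved
  generatedAt?: string; // ISO 8601 timestamp of generation
}

// The interface can render a disclosure wherever AI contributed.
function disclosureLabel(block: ContentBlock): string | null {
  switch (block.origin) {
    case "ai_generated":
      return "Drafted by AI";
    case "ai_assisted":
      return "Edited with AI assistance";
    default:
      return null; // purely human content needs no label
  }
}
```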

What Changed in February 2025

February 2025 marked the enforcement of prohibited practices and the Act's general provisions. For productivity tools, key changes include:

Transparency Obligations Active

AI-powered features must now be disclosed. This affects:

AI writing assistants: Tools like Grammarly, Jasper, or built-in AI writing in Notion must identify AI involvement in content generation.

AI email clients: Superhuman's AI features, Gmail's Smart Compose, and similar features must indicate when AI is generating or modifying content.

AI task managers: Tools that use AI for prioritisation, scheduling, or recommendations must disclose the AI's role.

AI meeting tools: Transcription, summarisation, and analysis features must be identified as AI-generated.

Users must have the ability to consent to or decline AI processing, as sketched after the list below:

Opt-out availability: Users should be able to disable AI features and use tools without AI if preferred.

Processing transparency: Users should understand what data the AI processes and why.

Human oversight option: For consequential decisions, human review must be available.
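As a rough illustration of what that control might look like, the sketch below models per-user AI preferences with a master opt-out and a human-review flag. The settings object and its fields are assumptions for the example, not any real product's configuration.

```typescript
// Hypothetical per-user AI settings: every AI feature can be declined,
// and consequential outputs can be routed through human review.
interface AiPreferences {
  aiFeaturesEnabled: boolean;   // master opt-out for all AI processing
  smartPrioritisation: boolean; // AI task prioritisation on/off
  draftSuggestions: boolean;    // AI writing suggestions on/off
  requireHumanReview: boolean;  // hold consequential AI outputs for review
}

const defaults: AiPreferences = {
  aiFeaturesEnabled: true,
  smartPrioritisation: true,
  draftSuggestions: true,
  requireHumanReview: false,
};

// Respect the master switch before running any individual AI feature.
function canRunAiFeature(prefs: AiPreferences, feature: keyof AiPreferences): boolean {
  return prefs.aiFeaturesEnabled && Boolean(prefs[feature]);
}
```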

Vendor Compliance Documentation

Vendors must maintain documentation of their AI systems:

System description: What the AI does, how it works, what data it uses.

Risk assessment: Evaluation of potential harms and mitigations.

Testing documentation: Evidence of pre-deployment testing for bias, accuracy, and safety.

For users, this means vendors should be able to provide compliance documentation upon request.
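One way a vendor (or a customer doing due diligence) might capture that documentation is as a structured record per AI feature. The schema below is a suggestion only; the Act does not prescribe this format, and the example values are invented.

```typescript
// Hypothetical documentation record for a single AI feature or system.
interface AiSystemRecord {
  systemName: string;
  description: string;  // what the AI does and how it works
  dataUsed: string[];   // categories of data the system processes
  riskAssessment: {
    identifiedHarms: string[];
    mitigations: string[];
  };
  testing: {
    biasTested: boolean;
    accuracyMetrics: Record<string, number>;
    lastTested: string; // ISO 8601 date
  };
}

// Purely illustrative values for a task-prioritisation feature.
const example: AiSystemRecord = {
  systemName: "Smart task prioritisation",
  description: "Ranks open tasks using deadlines, estimated effort and past completion behaviour.",
  dataUsed: ["task metadata", "completion history"],
  riskAssessment: {
    identifiedHarms: ["over-reliance on automated ranking"],
    mitigations: ["manual reordering always available"],
  },
  testing: {
    biasTested: true,
    accuracyMetrics: { rankingAgreement: 0.86 },
    lastTested: "2025-01-15",
  },
};
```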

The 2025-2026 Compliance Timeline

Requirements phase in progressively:

August 2024: Act enters into force.

February 2025: Prohibited practices enforceable; AI literacy obligations for deployers.

August 2025: Governance provisions and obligations for general-purpose AI models apply; codes of practice finalised.

August 2026: Most remaining requirements apply, including the full regime for high-risk AI systems.

August 2027: Extended deadline for certain embedded AI systems.

The staggered timeline means requirements continue expanding through 2027. Tools compliant in early 2025 may face additional requirements as deadlines arrive.

How the AI Act Classifies Common Productivity Tools

Understanding how specific tool categories are classified guides evaluation:

Task and Project Management

Tools: Todoist, Asana, Monday.com, Chaos, Motion

Classification: Limited risk (transparency requirements)

AI features affected: AI prioritisation, smart scheduling, predictive features, natural language task creation.

Requirements: Disclose when AI is making suggestions or decisions. Enable users to override AI recommendations. Provide option to disable AI features.

Writing and Documentation

Tools: Notion AI, Grammarly, Jasper, ChatGPT, Microsoft Copilot

Classification: Limited risk (transparency requirements)

AI features affected: AI drafting, editing suggestions, summarisation, translation.

Requirements: Identify AI-generated content. Enable users to distinguish human-written from AI-written content. Disclose AI involvement in editing process.

Email and Communication

Tools: Superhuman, Gmail, Outlook, Slack with AI features

Classification: Limited risk (transparency requirements)

AI features affected: Smart compose, email summarisation, priority inbox, suggested replies.

Requirements: Identify AI-generated suggestions. Disclose AI role in message triage. Enable users to disable AI triage.

Meeting and Collaboration

Tools: Otter.ai, Zoom with AI features, Microsoft Teams Copilot, Fireflies

Classification: Limited risk (transparency requirements)

AI features affected: Transcription, meeting summarisation, action item extraction, speaker identification.

Requirements: Disclose AI transcription is occurring. Provide access to AI-generated summaries. Enable verification/correction of AI outputs.

Knowledge Management and Search

Tools: Notion AI Q&A, Glean, Guru, enterprise search with AI

Classification: Limited risk (transparency requirements)

AI features affected: AI-powered search, question answering, content recommendation.

Requirements: Indicate when AI is generating answers versus retrieving existing content. Provide source attribution. Enable users to access underlying documents.

What About High-Risk Classification?

Some productivity-adjacent tools may face high-risk classification:

HR and Hiring Tools

AI systems used in recruitment, candidate assessment, or hiring decisions are high-risk under the Act. This includes:

CV screening tools: AI that filters or ranks candidates.

Interview analysis: AI that evaluates candidate responses.

Assessment platforms: AI that scores tests or evaluations.

Employee monitoring: AI making decisions about employee performance, promotion, or termination.

High-risk requirements include: mandatory conformity assessments, registration in EU database, extensive documentation, human oversight requirements, and quality management systems.

If your organisation uses AI hiring tools, compliance requirements are substantially higher.

Employee Monitoring Systems

AI systems that monitor workers and make or influence decisions about their work may be high-risk, depending on their functionality:

Productivity monitoring: If the AI merely tracks metrics, likely limited risk. If it makes consequential decisions (flagging employees for action, influencing reviews), potentially high-risk.

Behaviour analysis: AI analysing employee behaviour patterns for decision-making moves toward high-risk.

The key distinction: AI that monitors versus AI that decides. Monitoring alone is lower risk; decision-making authority increases risk classification.
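That distinction can be expressed as a rough decision rule. The sketch below is an interpretation for illustration only, not legal advice or text from the Act; the interface and function names are invented, and real classification depends on the Act's Annex III categories and legal review.

```typescript
// Rough heuristic reflecting the monitoring-versus-deciding distinction.
interface WorkplaceAiSystem {
  tracksMetrics: boolean;               // observes activity or productivity data
  influencesDecisions: boolean;         // feeds into reviews, promotion or discipline
  makesConsequentialDecisions: boolean; // flags workers or decides outcomes itself
}

type RiskEstimate = "minimal" | "limited" | "potentially_high";

function estimateRisk(system: WorkplaceAiSystem): RiskEstimate {
  if (system.makesConsequentialDecisions || system.influencesDecisions) {
    return "potentially_high"; // decision-making authority pushes towards high risk
  }
  if (system.tracksMetrics) {
    return "limited"; // monitoring alone sits lower on the risk scale
  }
  return "minimal";
}
```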

GDPR Comparison: What Overlaps, What's New

UK readers are familiar with GDPR requirements. The AI Act interacts with but differs from GDPR:

What Overlaps

Data processing requirements: Both require lawful bases for processing personal data. AI processing personal data must comply with GDPR regardless of AI Act requirements.

Transparency: Both require informing users about automated processing. GDPR's Article 22 restrictions on automated decision-making interact with AI Act requirements.

Rights of access: Both provide rights to access information about processing.

What's New with AI Act

System-level requirements: GDPR focuses on data processing; AI Act focuses on the system itself—its design, testing, and deployment.

Risk-based classification: GDPR applies uniformly to personal data processing; AI Act imposes different requirements based on risk classification.

Technical requirements: AI Act mandates specific technical measures (robustness, accuracy, human oversight) that GDPR doesn't require.

Market access: AI Act creates barriers to EU market access for non-compliant systems, beyond GDPR's data-specific requirements.

Compliance Checklist for Tool Selection

When evaluating AI-powered productivity tools, this checklist guides compliance-aware selection; a simple way to record the answers follows the checklist:

Transparency

Does the tool clearly indicate when AI is involved? Can users distinguish AI-generated content from human content? Is AI processing disclosed in terms of service and UI? Are AI features labelled or identified in the interface?

User Control

Can AI features be disabled by users who prefer not to use them? Can users override AI recommendations? Is there meaningful human oversight for consequential outputs? Do users control what data the AI processes?

Documentation

Does the vendor provide AI system documentation? Is there a description of what the AI does and how it works? Has the vendor conducted bias and fairness testing? Is compliance with EU AI Act claimed and documented?

Data Handling

What data does the AI process? Where is data stored and processed (geographic location)? Is data used to train AI models (and can users opt out)? How long is data retained?

Vendor Compliance

Has the vendor publicly addressed EU AI Act compliance? Is there a designated compliance contact? Does the vendor operate in the EU and thus have direct obligations? For non-EU vendors, how do they ensure EU compliance?
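For teams that track tool evaluations systematically, the checklist can be captured as data and the open questions counted. The structure below is only a sketch; the field names paraphrase the questions above and are not drawn from any standard.

```typescript
// Hypothetical evaluation record: one boolean per checklist question,
// grouped by the five areas above.
interface ToolComplianceCheck {
  transparency: {
    aiInvolvementIndicated: boolean;
    aiContentDistinguishable: boolean;
    disclosedInTermsAndUi: boolean;
  };
  userControl: {
    aiFeaturesCanBeDisabled: boolean;
    recommendationsOverridable: boolean;
    humanOversightAvailable: boolean;
  };
  documentation: {
    systemDocumentationProvided: boolean;
    biasTestingConducted: boolean;
    complianceClaimed: boolean;
  };
  dataHandling: {
    processingLocationsKnown: boolean;
    trainingOptOutAvailable: boolean;
    retentionPolicyClear: boolean;
  };
  vendorCompliance: {
    publicComplianceStatement: boolean;
    complianceContactDesignated: boolean;
  };
}

// Count "no" answers to flag tools needing follow-up questions to the vendor.
function openQuestions(check: ToolComplianceCheck): number {
  return Object.values(check)
    .flatMap((group) => Object.values(group))
    .filter((answer) => answer === false).length;
}
```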

UK-Specific Considerations

The UK, post-Brexit, is not directly bound by the EU AI Act. However, several factors make understanding it relevant for UK users:

Tools Serving EU Customers Comply Anyway

Any productivity tool with EU customers must comply with the AI Act for those customers. Practically, most global tools implement compliance universally rather than creating EU-specific versions. UK users benefit from compliance designed for EU requirements.

UK AI Regulation Is Coming

The UK government has signalled a "pro-innovation" approach to AI regulation, initially relying on existing regulators rather than new comprehensive legislation. However, alignment with EU frameworks is likely over time to facilitate data flows and market access.

The Online Safety Act includes some AI-relevant provisions. Future UK AI legislation will likely reference or resemble EU approaches.

Cross-Border Considerations

UK organisations with EU subsidiaries, customers, or partners face compliance obligations through those relationships. Using AI tools that are non-compliant in the EU creates risk for cross-border operations.

Adequacy Decisions

UK-EU data transfers currently operate under adequacy decisions. If UK AI regulation diverges significantly from EU standards, adequacy could be affected. Using EU-compliant tools provides some protection against regulatory divergence risks.

Vendor Compliance Landscape

How are major productivity vendors responding?

Microsoft

Microsoft has been proactive, publishing AI principles and compliance documentation. Microsoft 365 Copilot includes transparency features identifying AI content. Microsoft's enterprise focus means compliance is well-resourced.

Google

Google Workspace AI features include disclosure of AI involvement. Google's AI principles address some regulatory concerns. Compliance documentation is available for enterprise customers.

Notion

Notion AI's terms address AI-generated content. Features are clearly labelled as AI-powered. European data residency options address some location concerns.

Smaller Vendors

Smaller AI productivity vendors have varied responses. Some have published EU AI Act compliance statements; others haven't addressed it publicly. When evaluating smaller tools, explicit inquiry about compliance is warranted.

Open-Source and Self-Hosted

Self-hosted or open-source AI tools create interesting compliance questions. The EU AI Act places obligations on "deployers" as well as "providers." Organisations deploying open-source AI may bear compliance obligations themselves.

Enforcement and Penalties

Understanding enforcement motivates compliance attention:

Enforcement Bodies

Each EU member state designates national competent authorities for AI Act enforcement. The European AI Office coordinates EU-level oversight.

Penalties

Maximum penalties are significant. In each case the cap is the fixed amount or the percentage of turnover, whichever is higher:

Prohibited practices: Up to €35 million or 7% of global annual turnover.

High-risk system violations: Up to €15 million or 3% of global turnover.

Other violations: Up to €7.5 million or 1.5% of global turnover.

The turnover percentages mean large companies face substantial absolute penalties.
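As a worked example of the "whichever is higher" rule, take a hypothetical company with €10 billion in global annual turnover; for prohibited practices the cap becomes 7% of turnover rather than the €35 million floor. The company figures are invented for illustration.

```typescript
// Cap is the higher of the fixed amount and the turnover percentage.
function penaltyCap(fixedCapEur: number, turnoverShare: number, annualTurnoverEur: number): number {
  return Math.max(fixedCapEur, turnoverShare * annualTurnoverEur);
}

// Prohibited practices: €35 million or 7% of global annual turnover.
const cap = penaltyCap(35_000_000, 0.07, 10_000_000_000);
console.log(cap); // 700000000 -> €700 million for a €10bn-turnover company
```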

Enforcement Priorities

Early enforcement will likely focus on: prohibited practices (clearest violations), high-risk systems (highest stakes), and prominent violations with public visibility.

Limited-risk transparency violations for productivity tools are lower enforcement priority initially—but this could change as enforcement matures.

Key Takeaways

The EU AI Act classifies AI systems by risk: unacceptable (banned), high-risk (strict requirements), limited risk (transparency requirements), and minimal risk (unrestricted).

Most AI productivity tools fall into the limited-risk category, requiring transparency about AI involvement, user ability to identify AI features, and options to decline AI processing.

High-risk classification affects HR, hiring, and consequential employee monitoring tools—requiring extensive compliance measures including conformity assessments and database registration.

February 2025 marked enforcement of prohibited practices and transparency requirements. Additional requirements phase in through 2027.

Compliance checklist for tool selection: transparency about AI involvement, user control over AI features, documentation availability, clear data handling, and vendor compliance posture.

UK users aren't directly bound but benefit from compliance because most global tools comply with EU requirements universally.

Vendor compliance varies: major vendors (Microsoft, Google, Notion) are generally prepared; smaller vendors require explicit inquiry.

Penalties are significant: up to 7% of global turnover for prohibited practices, creating real incentive for vendor compliance.

When selecting AI productivity tools, compliance should be an evaluation criterion alongside features and pricing. The regulatory landscape will continue evolving, and choosing compliant tools now reduces future disruption.

Chaos approaches AI with transparency built in—AI features are clearly identified, users can understand and override AI suggestions, and the system is designed for human oversight rather than autonomous decision-making. This aligns with both the letter of EU AI Act requirements and the spirit of responsible AI deployment.
