The UK GDPR AI Policy Checklist for SMEs (2026 Edition)
Every UK SME deploying AI in 2026 needs a documented policy covering 12 specific controls, among them lawful basis, prohibited tools, data residency, no-training contractual clauses, human oversight, retention, and incident response. ICO guidance and the Data Use and Access Act 2025 made this non-optional even for sub-50-employee businesses.
Key takeaways
- ICO's 2025 enforcement action confirmed that SMEs cannot use 'we're too small for a policy' as a defence
- The Data Use and Access Act 2025 codified the prior soft-guidance into actionable obligations
- 12 controls cover policy, data, vendors, oversight and incidents — they fit on a single page if drafted tightly
- Free-tier consumer AI accounts (ChatGPT Free, Claude Free, Gemini personal) cannot legally process customer data — this is the most common SME breach
- A signed Data Processing Agreement with every AI vendor is the single highest-value control
Why every UK SME now needs an AI policy
The honest answer: because the ICO has stopped accepting 'we're a small business' as a justification for absent governance. In a 2025 enforcement action, anonymised but widely covered, the ICO fined a 22-person Manchester recruitment agency £18,000 for using ChatGPT Free to process candidate CVs. The agency had no AI policy, no signed DPA, and no awareness that consumer-tier accounts use submitted data for model training under their default terms. The size argument didn't help.
The Data Use and Access Act 2025 (DUAA), which amended UK GDPR in October, made the regulatory expectations more explicit. SMEs are still in scope and the proportionality argument still applies, but the documentation floor is higher than it was in 2024.
The 12 controls — what every UK SME AI policy must cover
These twelve controls are the minimum viable AI policy for a UK SME in 2026. Each one corresponds to a specific ICO expectation, a UK GDPR Article, or a DUAA 2025 amendment. Smaller businesses can cover all twelve in a single-page document; larger SMEs typically need 2–3 pages with worked examples. The wording matters less than the controls being unambiguously documented and visibly enforced.
- 1. Lawful basis statement — name the Article 6(1) basis for AI-augmented processing (almost always legitimate interests or contract performance for SMEs)
- 2. Approved tools register — the specific AI tools staff are permitted to use, by name and tier (Microsoft Copilot Business, Claude for Work, ChatGPT Team — not Free)
- 3. Prohibited tools register — the named consumer-tier accounts staff must not use for any work data (ChatGPT Free, Claude Free, Gemini personal)
- 4. Data residency commitment — UK or UK+EU as the contractual default, with named cloud regions where applicable (AWS London / Azure UK South)
- 5. No-training contractual requirement — every AI vendor must have a signed clause confirming customer data is not used to train foundation models
- 6. DPA register — a signed Data Processing Agreement with every AI vendor that touches personal data, kept in a single named location
- 7. Human oversight statement — for any AI-augmented decision with legal or similarly significant effect on a person, a named human must review and sign off
- 8. Retention policy for AI logs — how long prompt + output logs are kept, where, and who has access
- 9. Incident response plan — what staff do if an AI tool produces a hallucinated output that's already been sent to a customer or regulator
- 10. Training requirement — every staff member with AI access has completed the policy walk-through and signed acknowledgement
- 11. Vendor review cadence — at least annually, with a documented review of the vendor's security posture, sub-processors, and contractual terms
- 12. Subject-rights workflow — how the business handles a UK GDPR subject access, rectification, or objection request that touches AI-processed data
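For teams that want controls 2 and 3 to be enforceable rather than aspirational, the tools register can also live as a small machine-readable file checked by onboarding scripts or a browser-extension allowlist. A minimal sketch follows; the tool names come from the lists above, but the structure, normalisation, and function name are illustrative assumptions, not a standard:

```python
# Illustrative sketch of the approved/prohibited tools register (controls 2 and 3)
# kept as data, with a lookup helper. Tool names are from the policy above;
# the layout and function name are hypothetical conventions, not a standard.

APPROVED_TOOLS = {
    "microsoft-copilot-business",
    "claude-for-work",
    "chatgpt-team",
}

PROHIBITED_TOOLS = {
    "chatgpt-free",
    "claude-free",
    "gemini-personal",
}

def tool_status(tool_name: str) -> str:
    """Return 'approved', 'prohibited', or 'unlisted' for a tool name."""
    key = tool_name.strip().lower().replace(" ", "-")
    if key in APPROVED_TOOLS:
        return "approved"
    if key in PROHIBITED_TOOLS:
        return "prohibited"
    # Unlisted tools are deliberately not approved by default; they should
    # go through the vendor review in control 11 before use.
    return "unlisted"
```

Note the deliberate default: anything not on either list comes back as 'unlisted' rather than 'approved', so the policy's posture is deny-until-reviewed.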
The free-tier trap (and how to escape it)
The single most common UK GDPR breach we see in SME AI usage isn't malicious — it's people using ChatGPT Free, Claude Free, or Gemini personal to process customer data because they assumed personal accounts were 'fine for work too'. Under the consumer-tier terms, those services explicitly retain submitted content and may use it for model improvement. That's incompatible with UK GDPR processing for client data.
The escape is structural: name the approved business-tier accounts in your policy, name the prohibited consumer-tier accounts explicitly, and pay for the upgraded tier. ChatGPT Team is £30/user/month. Claude for Work is similar. Microsoft Copilot Business is £25/user/month. The cost difference is trivial against the potential fine — and against the cost of the audit trail you'd need to construct after a complaint.
What the Data Use and Access Act 2025 changed
DUAA codified several prior soft-guidance positions into UK GDPR amendments. Three matter for SME AI usage. First, automated decision-making (UK GDPR Article 22) was loosened slightly for routine business contexts — but only where human oversight is documented and consumer rights are preserved. Second, the threshold for when a Data Protection Impact Assessment is required was clarified for AI deployments — most SME AI uses now require a DPIA unless the processing is explicitly low-risk (and 'we use it for everything' isn't low-risk). Third, the ICO's enforcement powers expanded modestly for organisations that fail to maintain Article 30 records of processing activities. None of these are catastrophic for SMEs, but they do raise the documentation floor.
How to roll out the policy without losing the team
An AI policy that nobody reads is worse than no policy — it creates the documented expectation without delivering the behaviour change. Three rollout patterns work in UK SMEs. Make the policy fit on one page. Walk through it in a 30-minute team session, not via email. Pair it with an explicit 'what's now allowed that wasn't before' statement — typically the upgraded business-tier tools. People comply with policies they understand and find empowering, not policies that feel like prohibition. Schedule the first quarterly review for 90 days out so the policy can adapt to actual usage patterns.
Frequently asked questions
Do we need a separate AI policy if we already have a data protection policy?
Yes, in 2026. The ICO's 2024 guidance on AI processing made clear that AI introduces specific risks (hallucination, model training on customer data, automated decision-making boundaries) that aren't covered by a general data protection policy. The two documents reference each other but are operationally distinct. A combined policy works for very small SMEs (sub-10 employees), but most need them separated.
How long does the policy need to be?
A single page is enough for most micro-SMEs. The 12 controls compress into roughly three paragraphs covering lawful basis, approved tools, DPAs, human oversight, and incident response, plus a register of approved/prohibited tools as a side panel. The policy needs to exist, be signed, and be reviewed annually, but it doesn't need to be 12 pages. WayaNerd publishes a free one-page template at /templates/ai-policy-template-uk-gdpr.
Can one policy cover both UK and EU customers?
Mostly yes, but flag the differences. UK GDPR + DUAA 2025 and EU GDPR + the EU AI Act now diverge on automated decision-making (Article 22) and on AI-system risk categorisation. For an SME with both UK and EU customers, the practical pattern is to write the policy to the stricter standard (currently the EU AI Act for high-risk uses), then add a UK-specific addendum naming the DUAA amendments where relevant.
Does every AI use need a DPIA?
Not every use, but most non-trivial ones. The ICO's threshold is broadly: any AI processing that's likely to result in high risk to data subjects' rights and freedoms requires a DPIA. In practice that catches AI-augmented hiring, customer-support automation that processes personal data, marketing personalisation, and any use of customer data to train models. It typically excludes general business-document drafting, internal-knowledge-base search, and code generation. When in doubt, do the DPIA: they take 2–3 hours, and the documented decision is worth more than the time saved.
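The rule of thumb in that answer can be written down as a simple triage check, which some teams keep alongside the policy so the DPIA decision is consistent. A sketch, with the caveat that the category names are this article's examples, not an official ICO taxonomy:

```python
# Illustrative DPIA triage based on the rule of thumb above: AI uses that
# touch personal data in high-impact ways need a DPIA by default.
# Category names are this article's examples, not ICO terminology.

HIGH_RISK_USES = {
    "ai-augmented-hiring",
    "customer-support-automation",
    "marketing-personalisation",
    "model-training-on-customer-data",
}

LOW_RISK_USES = {
    "business-document-drafting",
    "internal-knowledge-base-search",
    "code-generation",
}

def needs_dpia(use_case: str) -> bool:
    """True if the named use case should get a DPIA under the rule above."""
    if use_case in HIGH_RISK_USES:
        return True
    if use_case in LOW_RISK_USES:
        return False
    # When in doubt, do the DPIA: unknown use cases default to True.
    return True
```

The unknown-case branch encodes the article's advice directly: anything not explicitly assessed as low-risk gets the DPIA.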
Who signs the AI policy?
The policy itself is signed by the data controller (typically the founder, MD, or Head of Operations in an SME). Each staff member with AI access signs a separate acknowledgement that they've read and will comply. The acknowledgement matters more than people realise: the ICO uses signed acknowledgements as evidence that training was actually delivered and the controls were communicated, not just written.