The articles that matter most
Six GDPR articles every organization using AI should understand. Flip the cards to see how each article applies to AI workflows, and which Sanitica mode addresses it.
Data Minimization
Article 5(1)(c) requires that personal data be "adequate, relevant and limited to what is necessary." When AI tools process entire documents, they access far more data than the task requires. A summary request doesn't need the candidate's national ID.
Lawful Basis for Processing
Under Article 6, every data processing activity needs a legal basis. Emailing a contract internally has one (the employment relationship). Sending the same contract to ChatGPT for review is a new processing activity, one the data subject was never informed about.
Right to Erasure
Article 17 gives data subjects the right to request deletion of their personal data. But once that data has been sent to an AI provider's servers, deletion becomes nearly impossible: it may live on in training sets, logs, or backups across multiple systems.
Data Protection by Design
GDPR explicitly names pseudonymization as a recommended data protection measure. Article 25 requires organizations to implement appropriate technical measures “by design and by default,” not as an afterthought.
Security of Processing
Article 32 requires "appropriate technical and organisational measures," expressly including "the pseudonymisation and encryption of personal data." For AI workflows, encryption alone isn't enough: data must be decrypted for processing, and at that point it is fully exposed.
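The pseudonymization that Articles 25 and 32 recommend can happen before any text leaves the organization. The sketch below is purely illustrative (it is not Sanitica's actual implementation, and the regex patterns are simplified examples): obvious identifiers are swapped for tokens, and the token-to-value mapping stays local so responses can be re-identified afterward.

```python
import re

# Illustrative patterns only; real identifier detection is far broader.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "NATIONAL_ID": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # example format
}

def pseudonymize(text: str) -> tuple[str, dict[str, str]]:
    """Replace matched identifiers with tokens; return sanitized text
    and the token->value mapping, which never leaves the organization."""
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        for value in pattern.findall(text):
            if value in mapping.values():
                token = next(t for t, v in mapping.items() if v == value)
            else:
                token = f"<{label}_{len(mapping) + 1}>"
                mapping[token] = value
            text = text.replace(value, token)
    return text, mapping

def reidentify(text: str, mapping: dict[str, str]) -> str:
    """Restore original values from the locally held mapping."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text
```

Only the sanitized text (e.g. `"Contact <EMAIL_1>, ID <NATIONAL_ID_2>"`) would be sent to the AI provider; the mapping needed to reverse it stays behind.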
Administrative Fines
Under Article 83, fines can reach €20 million or 4% of global annual turnover, whichever is higher. But fines aren't the real risk. Reputational damage, loss of customer trust, and mandatory breach disclosure often cause more lasting harm.
Ready to protect your data?
Take our quiz to assess your Shadow AI risk, or sign up for early access to Sanitica.