The Network and Information Security Directive 2 (NIS2) is the EU’s most ambitious cybersecurity legislation to date. It entered into force on 16 January 2023, and EU member states were required to transpose it into national law by 17 October 2024. If your organization operates in Europe and uses AI tools, NIS2 has direct implications for how you handle data.
Who Does NIS2 Apply To?
NIS2 dramatically expands the scope of the original NIS directive. It now covers 18 sectors, including:
- Energy (electricity, oil, gas, hydrogen, district heating)
- Transport (air, rail, water, road)
- Healthcare and pharmaceuticals
- Water supply and waste management
- Digital infrastructure and ICT services
- Manufacturing (medical devices, electronics, machinery, vehicles)
- Food production and distribution
- Postal and courier services
- Public administration
- Space
The threshold is any medium-sized organization or larger (50+ employees or €10M+ annual turnover). But member states can also designate smaller entities as essential or important if they provide critical services. This is not just for big companies.
What Does NIS2 Require?
NIS2 mandates a risk-based approach to cybersecurity. Key obligations include:
- Risk analysis and security policies — documented and regularly updated
- Incident handling — detection, response, and reporting within 24 hours (early warning) and 72 hours (full notification)
- Supply chain security — assessing risks from third-party providers, including AI service providers
- Encryption and access controls — appropriate to the risk level
- Business continuity — backup management, disaster recovery, crisis management
- Management accountability — board-level responsibility for cybersecurity, with personal liability for non-compliance
Where AI Creates NIS2 Risk
This is where most organizations have a blind spot. NIS2 explicitly covers supply chain security and data handling practices. When your employees upload documents to AI tools — whether external services like ChatGPT or even internal AI systems — several NIS2 obligations are triggered:
1. Third-Party Risk (Article 21(2)(d))
Every AI tool that processes your data is a supplier in your supply chain. NIS2 requires you to assess and manage these risks. If an employee uploads a document to an external AI service, you have introduced an unassessed third-party risk into your supply chain. In many organizations this happens dozens of times a day, without any oversight.
2. Data Integrity and Confidentiality
NIS2 requires appropriate measures to protect data integrity and confidentiality. When a document containing employee social security numbers, client contracts, or infrastructure specifications enters an AI system, you lose control over its confidentiality: the data may be stored, used for training, or surfaced in responses to other users.
3. Incident Reporting Obligations
If personal or sensitive data is exposed through an AI tool, this could constitute a security incident under NIS2. The 24-hour early warning and 72-hour full notification requirements are unforgiving. Organizations that discover a data exposure through AI weeks or months later face both the original incident and a reporting failure.
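To make those reporting windows concrete, here is a minimal sketch (the detection timestamp is illustrative) that computes the early-warning and full-notification cut-offs from the moment an incident is detected:

```python
from datetime import datetime, timedelta, timezone

# NIS2 reporting clock: early warning within 24 hours,
# full incident notification within 72 hours of becoming aware.
EARLY_WARNING = timedelta(hours=24)
FULL_NOTIFICATION = timedelta(hours=72)

def reporting_deadlines(detected_at: datetime) -> dict[str, datetime]:
    """Return the two NIS2 reporting cut-offs for a given detection time."""
    return {
        "early_warning": detected_at + EARLY_WARNING,
        "full_notification": detected_at + FULL_NOTIFICATION,
    }

# Illustrative detection time: an exposure discovered at 09:00 UTC.
detected = datetime(2025, 3, 10, 9, 0, tzinfo=timezone.utc)
for name, deadline in reporting_deadlines(detected).items():
    print(f"{name}: {deadline.isoformat()}")
```

The point of writing this down as code is that the clock starts at awareness, not at the incident itself, so the detection timestamp must be captured the moment an exposure is found.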
4. Management Liability
NIS2 introduces personal liability for management. Board members and senior executives can be held personally accountable for cybersecurity failures. If your organization has no controls over how AI tools process sensitive data, management is exposed to personal liability.
NIS2 Penalties
The penalties under NIS2 are significant:
- Essential entities: up to €10 million or 2% of global annual turnover, whichever is higher
- Important entities: up to €7 million or 1.4% of global annual turnover, whichever is higher
- Management sanctions: temporary bans from exercising managerial functions
These are on top of any GDPR fines (up to €20M or 4% of turnover) that may apply when personal data is involved.
The Overlap: NIS2 + GDPR + AI Act
Organizations using AI now face a triple regulatory framework:
- GDPR — governs personal data protection (data minimization, purpose limitation, right to erasure)
- NIS2 — governs cybersecurity practices (supply chain security, incident handling, management accountability)
- EU AI Act — governs AI system deployment (risk classification, transparency, human oversight)
A single incident — an employee uploading a client contract to ChatGPT — can trigger violations under all three frameworks simultaneously. The regulatory surface area is larger than most organizations realize.
What to Do Now
If your organization falls under NIS2 (and the expanded scope means it very likely does), here are concrete steps:
- Audit your AI exposure — map which AI tools your employees are using and what data they process
- Classify your data — identify which documents contain personal data, trade secrets, or infrastructure information
- Implement automated controls — manual review does not scale; you need systematic protection
- Establish an audit trail — demonstrate to regulators exactly what data was protected and how
- Brief your board — management liability means this is a board-level issue, not just an IT concern
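As an illustration of the classification and audit-trail steps, here is a minimal sketch (all patterns, names, and the sample text are hypothetical, and real deployments need locale-specific rules) that flags documents containing common personal-data markers before they reach an AI tool and emits a JSON audit-log line:

```python
import json
import re
from datetime import datetime, timezone

# Hypothetical detection patterns; a production system would need
# country-specific identifiers and human review of the rules.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def classify(text: str) -> list[str]:
    """Return the names of all personal-data patterns found in the text."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

def audit_entry(doc_name: str, findings: list[str]) -> str:
    """Build one JSON audit-log line: what was checked, when, and the outcome."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "document": doc_name,
        "findings": findings,
        "decision": "block" if findings else "allow",
    })

# Illustrative document content.
sample = "Contact Anna at anna.schmidt@example.com, IBAN DE89370400440532013000."
findings = classify(sample)
print(audit_entry("client_contract.txt", findings))
```

The `decision` field is what matters for NIS2: a log of every check, including the ones that passed, is the kind of evidence regulators expect when you claim due diligence.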
Where Sanitica Fits
Sanitica addresses the core NIS2 data handling challenge: it automatically sanitizes documents before they reach any AI system, removing personal data, metadata, and hidden information at the binary level. Every action is logged, providing the audit trail that NIS2 compliance demands. Your employees keep using AI productively. Your supply chain risk from AI tools drops to near zero. And your management team has the evidence they need to demonstrate due diligence.