William Tulaba Natick AI NIST CSF 2.0 Securing AI

Blog Series: Securing AI with the NIST Cybersecurity Framework 2.0 – Part 3: AI Data Security (PR.DS)

Part 3: AI Data Security & Data Security Posture Management (PR.DS)


Data Is the Foundation of AI

Artificial intelligence systems depend heavily on data. Training datasets, model outputs, and user prompts may contain sensitive business information.

The Data Security category (PR.DS) within the Protect function of NIST CSF 2.0 emphasizes safeguarding data from unauthorized access and misuse.

Protecting AI Training Data

Organizations should implement controls that protect data used to train AI models, including:

  • Encryption of sensitive datasets

  • Access control policies for training environments

  • Data classification and labeling

  • Monitoring of data pipelines

Compromised training data can lead to model poisoning, in which attackers manipulate training inputs to alter an AI model's behavior.
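One practical control against tampering is verifying dataset integrity before training. The sketch below is a minimal illustration, not a full defense: it assumes a hypothetical manifest of known-good SHA-256 digests recorded when each dataset was approved, and rejects any dataset whose digest no longer matches.

```python
import hashlib


def digest(data: bytes) -> str:
    """Return the SHA-256 hex digest of a dataset's raw bytes."""
    return hashlib.sha256(data).hexdigest()


# Hypothetical manifest of approved dataset digests, recorded when
# the datasets were first cleared for use in training.
APPROVED_DIGESTS = {
    "train.csv": digest(b"id,label\n1,benign\n2,benign\n"),
}


def verify_dataset(name: str, data: bytes) -> bool:
    """Reject training data whose digest no longer matches the manifest."""
    expected = APPROVED_DIGESTS.get(name)
    return expected is not None and digest(data) == expected


# The untampered dataset passes; a single altered row fails the check.
assert verify_dataset("train.csv", b"id,label\n1,benign\n2,benign\n")
assert not verify_dataset("train.csv", b"id,label\n1,benign\n2,malicious\n")
```

A check like this catches silent modification of stored datasets, but it does not detect poisoned records that were present before the manifest was created, which is why it complements rather than replaces the pipeline monitoring listed above.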

Data Security Posture Management (DSPM)

As AI systems grow more complex, many organizations are adopting DSPM platforms to gain visibility into sensitive data across environments.

DSPM capabilities help organizations:

  • Discover sensitive data across cloud and AI platforms

  • Monitor how data is used within AI systems

  • Identify unauthorized data exposure risks

  • Enforce governance policies
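The discovery capability can be illustrated with a toy classifier. This is a hedged sketch only: real DSPM platforms use far richer detection than the two hypothetical regex patterns assumed here, but the shape of the task, scanning text and labeling the sensitive data types it contains, is the same.

```python
import re

# Hypothetical detectors for two common sensitive-data patterns;
# production DSPM tooling uses many more classifiers than this.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def classify(text: str) -> set:
    """Return the set of sensitive-data labels found in a blob of text."""
    return {label for label, rx in PATTERNS.items() if rx.search(text)}


record = "Contact alice@example.com, SSN 123-45-6789"
assert classify(record) == {"email", "ssn"}
assert classify("no sensitive data here") == set()
```

In an AI context, the same kind of scan can be applied to training corpora and user prompts so that sensitive records are flagged before they reach a model.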

Securing the data that powers AI is essential to maintaining trustworthy and reliable AI systems.