William Tulaba Natick AI - Artificial Intelligence

Blog Series: Cybersecurity Fundamentals in the Age of AI – Part 3

When Good Security Fails at Machine Speed

In cybersecurity, most organizations don’t fail because they lack controls. They fail because those controls were never designed to operate under extreme scale, speed, and repetition. In a human-driven environment, “good enough” security can often hold. In an AI-driven environment, it breaks. The Hidden Assumption Behind “Good Security” […]


Blog Series: Cybersecurity Fundamentals in the Age of AI – Part 2

Humans vs. Machines — A Fundamental Shift in Risk

In cybersecurity, we’ve always designed controls around one central reality: people make mistakes. Security awareness programs, phishing simulations, access reviews, and approval workflows all exist because human behavior is inherently inconsistent. People hesitate. They question. They make judgment calls. Sometimes they get it wrong, but just […]


Blog Series: Cybersecurity Fundamentals in the Age of AI

Part 1: The Illusion That AI Changes Everything

Artificial Intelligence is dominating every technology conversation right now. Organizations are racing to adopt it. Vendors are embedding it into their platforms. Security teams are being asked, often urgently, how to secure it. And in the middle of all of this, a common narrative has emerged: “AI […]


Blog Series: Securing AI with the NIST Cybersecurity Framework 2.0 – Part 5: AI Incident Response (RS.MI)

Part 5: AI Incident Response (RS.MI)

Preparing for AI Security Incidents

AI introduces new types of cybersecurity incidents that traditional response plans may not fully address. Examples include:

- Exposure of confidential data through generative AI outputs
- Manipulation of AI models affecting automated decisions
- Compromised training data altering system behavior
- Abuse of AI tools for internal […]


Blog Series: Securing AI with the NIST Cybersecurity Framework 2.0 – Part 4: AI Threat Detection (DE.CM)

Part 4: AI Threat Detection (DE.CM)

Monitoring AI Systems and AI-Enabled Attacks

Artificial intelligence is enabling new forms of cyberattacks, including:

- AI-generated phishing campaigns
- Deepfake impersonation attempts
- Automated vulnerability discovery
- Manipulation of AI models through crafted inputs

The Detect function of NIST CSF 2.0 focuses on identifying cybersecurity events quickly through continuous monitoring. Monitoring AI […]


Blog Series: Securing AI with the NIST Cybersecurity Framework 2.0 – Part 3: AI Data Security (PR.DS)

Part 3: AI Data Security & Data Security Posture Management (PR.DS)

Data Is the Foundation of AI

Artificial intelligence systems depend heavily on data. Training datasets, model outputs, and user prompts may contain sensitive business information. The Protect function of NIST CSF 2.0 emphasizes safeguarding data from unauthorized access and misuse. Protecting AI Training Data […]


Blog Series: Securing AI with the NIST Cybersecurity Framework 2.0 – Part 2: AI Asset Management (ID.AM)

Part 2: AI Asset Management (ID.AM)

Understanding the AI Attack Surface

The first step in securing AI systems is knowing where they exist. Many organizations have limited visibility into the AI technologies being used across the enterprise. Employees may adopt generative AI tools independently, creating what is often referred to as “Shadow AI.” The Identify […]


GV.OC-02: Internal and external stakeholders are understood, and their needs and expectations regarding cybersecurity risk management are understood and considered

In cybersecurity, success isn’t measured solely by technical safeguards; it also depends on how well those controls reflect the expectations of the people who depend on your organization. Whether it’s customers expecting privacy, regulators demanding compliance, or employees relying on system reliability, these expectations form a key part of risk management. GV.OC-02, a subcategory within the […]


GV.OC-01: The organizational mission is understood and informs cybersecurity risk management

Let the Mission Lead: Connecting Purpose to Cybersecurity

In NIST CSF 2.0, the Govern (GV) Function brings cybersecurity into the boardroom. At the heart of this function lies GV.OC-01, a deceptively simple idea with powerful implications: “The organizational mission is understood and informs cybersecurity risk management.” This subcategory challenges organizations to go beyond tech […]
