Overview

Dennis covers AI security and cybersecurity, tracking how organizations establish standards to protect AI systems. He explains complex security concepts through accessible analogies—building codes, toddlers with credit cards, bouncers at clubs—making enterprise security frameworks understandable to general readers. His recent coverage addresses "Shadow AI," the phenomenon of employees bypassing security controls to use unauthorized AI tools.

Key Themes

- Emerging AI security standards and frameworks (NIST, OWASP)
- "Shadow AI": employees using unauthorized AI tools inside organizations
- Vendor defenses such as prompt filtering and defense-in-depth
- Making enterprise security concepts accessible through everyday analogies

Posts

The Rise of "Shadow AI": When Employees Bypass Security

Examines how employees use unauthorized AI tools like ChatGPT for work tasks, arguing that "the best security patch right now isn't software—it's giving your team a corporate ChatGPT license."

NIST Generative AI Profile Sets the Standard for Secure AI

Breaks down NIST's 400+ action checklist for AI security, comparing it to "building codes for AI" that move security from "wild west" to "standard procedure."

OWASP Flags "Excessive Agency" as a Top AI Threat

Explains why giving AI agents too much autonomous power is dangerous, likening it to "giving a toddler a credit card"—they might accidentally buy a pony even with good intentions.

Google Cloud Debuts "Model Armor" to Firewall AI Prompts

Reports on Google's new prompt-filtering tool, describing it as "a bouncer for your AI club" that provides defense-in-depth rather than relying on AI to protect itself.
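The "bouncer" idea is simply a filter that screens prompts before they ever reach the model, rather than trusting the model to refuse bad input on its own. A minimal sketch of that concept is below; the deny-list patterns and function names here are hypothetical illustrations, not Model Armor's actual API or rules.

```python
import re

# Hypothetical deny-list of prompt-injection patterns. This is only an
# illustration of pre-model screening (defense-in-depth); a real tool like
# Model Armor uses far more sophisticated detection.
BLOCKED_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal your system prompt",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt may pass through to the model, False if blocked."""
    lowered = prompt.lower()
    return not any(re.search(pattern, lowered) for pattern in BLOCKED_PATTERNS)
```

The key design point, as the post notes, is placement: the check runs outside the model, so a compromised or jailbroken model never sees the filtered input in the first place.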