Overview
Dennis covers AI security and cybersecurity, tracking how organizations establish standards to protect AI systems. He explains complex security concepts through accessible analogies—building codes, toddlers with credit cards, bouncers at clubs—making enterprise security frameworks understandable to general readers. His recent coverage addresses "Shadow AI," the phenomenon of employees bypassing security controls to use unauthorized AI tools.
Key Themes
- AI Security Standards
- Prompt Injection Risks
- Shadow AI
- Enterprise Security Tools
- User Experience as Security
Posts
Examines how employees use unauthorized AI tools like ChatGPT for work tasks, arguing that "the best security patch right now isn't software—it's giving your team a corporate ChatGPT license."
Breaks down NIST's 400+ action checklist for AI security, comparing it to "building codes for AI" that move security from "wild west" to "standard procedure."
Explains why giving AI agents too much autonomous power is dangerous, likening it to "giving a toddler a credit card"—they might accidentally buy a pony even with good intentions.
Reports on Google's new prompt-filtering tool, describing it as "a bouncer for your AI club" that provides defense-in-depth rather than relying on AI to protect itself.