Build Log
What's getting built, what's breaking, and what I'm learning along the way. Building in public, for real.
The Agentic SOC Is Coming. So Are the Attacks Designed to Break It.
AI is going from co-pilot to co-worker in security operations. But prompt injection, data poisoning, and model hijacking are the new attack categories.
Your AI Agent's Plugins Could Be Malware. Nobody's Checking.
341 malicious skills found in a single AI agent marketplace. We've seen this movie before with npm and Docker Hub. The sequel is worse.
Google Just Published a Scoreboard of How Nation-States Are Weaponizing AI.
Google's GTIG AI Threat Tracker reveals Iran, North Korea, China, and Russia are using Gemini for recon, phishing, malware generation, and model theft.
A Nation-State Just Deepfaked a CEO on Zoom. Here's the Full Kill Chain.
North Korea's UNC1069 is using AI-generated video, compromised Telegram accounts, and commercial AI tools to build a new kind of social engineering attack.
Prompt Injection Isn't a Bug. It's Language Working as Designed.
And a new research paper just proved why no one is ever going to fix it.
Ring's "Search Party" Isn't About Finding Dogs. It's About Building the Surveillance Network No One Voted For.
Amazon just ran a Super Bowl ad normalizing neighborhood-wide AI scanning. And 125 million people watched it happen.
These posts are from Security Signal, my cybersecurity and AI security newsletter on Substack.