OpenClaw security 101: Vulnerabilities & hardening (2026)
Everything you want to know about OpenClaw / Moltbot / Clawdbot security — architectural weaknesses, vulnerabilities, and multi-tier hardening strategies for individuals and organizations.
Traditional chatbot red teaming leaves an estimated 85% of the agentic AI attack surface untested. Learn what action risk entails, explore key agentic threats like memory poisoning and tool hijacking, and understand why securing agents demands a fundamentally different approach than securing LLMs.
Cascading failures in agentic AI: the definitive OWASP ASI08 security guide — a comprehensive technical reference for security professionals, architects, and risk managers. Covers understanding cascading failures in agentic AI, why cascade prevention matters for agentic AI security, the anatomy of agentic AI cascading failures, temporal patterns of cascading ...
As AI systems evolve from passive responders to autonomous agents equipped with planning, memory, and tool use, the Model Context Protocol (MCP) becomes a central architectural layer — and a new security frontier. Yet traditional red teaming approaches are ill-equipped to test how MCP-enabled agents interact, delegate, and reason across ...
In August 2025, Lenovo quietly patched a critical vulnerability in its AI chatbot “Lena” that could have allowed attackers to steal session cookies and potentially compromise customer support systems through a single 400-character prompt, highlighting a new class of AI-driven security threats that most organizations are unprepared to defend against. The ...
The rapid deployment of generative AI systems across critical infrastructure has created an unprecedented security challenge: how do we effectively test and secure systems that can generate content, make decisions, and interact with users in ways we never fully anticipated — even with AI Red Teaming in place? A groundbreaking ...
Executive summary for CISOs: Security researchers from Adversa AI discovered that ChatGPT 5 has a fatal flaw: it can route your requests to cheaper, less secure models to save money. Attackers can exploit this to bypass AI security and safety measures with just a few words. What is PROMISQROUTE? When ...
Introduction: when AI becomes the attack vector. In the world of AI red teaming, we often ask ourselves: “What’s the worst that could happen?” In July 2025, we got our answer, and it wasn’t from a sophisticated nation-state actor or a zero-day exploit. It came from a single frustrated hacker who ...
The Model Context Protocol (MCP) has emerged as a “USB-C port for AI applications” that standardizes how AI systems interact with external data sources and tools. While MCP revolutionizes AI integration by enabling seamless connections between AI models and diverse services, this powerful capability introduces significant MCP defense and security challenges ...