Towards Secure AI Week 29 — America’s AI Action Plan, LLM Plugin Flaws, and Package Hallucination Risks
From insecure plugins to hallucinated packages, this week’s AI security landscape exposes the fragile trust surface of generative and agentic systems. New research shows that leading LLMs frequently hallucinate nonexistent software dependencies, opening a dangerous software supply chain attack vector, while Google’s Gemini plugin fell prey to indirect prompt injection capable of ...