
Adversa AI was selected as TOP #6 AI blog in Israel by FeedSpot
We’re excited to announce that Adversa AI has been ranked #6 on the official list of the Best Israel AI Blogs by FeedSpot — selected from among hundreds of outstanding ...
Secure AI Weekly + Digests ADMIN todayJune 16, 2025 4
As AI systems transition from passive tools to autonomous agents, the risks surrounding them evolve just as fast. This week’s digest reveals how attackers are already exploiting agentic AI, how regulators are racing to keep up, and how industry is responding with new benchmarks and standards.
From Microsoft’s EchoLeak zero-click Copilot flaw to CISA’s new data security guidance and South Korea’s landmark AI law, the focus is no longer on if AI can help—but how we keep it safe.
The message is clear: traditional controls aren’t enough. Security leaders need architecture-aware safeguards, continuous AI red teaming, and responsible deployment strategies for LLMs and autonomous systems now operating in critical environments.
The White House, June 6 2025
A new Executive Order amends prior U.S. cybersecurity policies to prioritize defense against nation-state cyber threats and modernize secure software development through NIST frameworks.
The Executive Order recognizes the intensifying cyber campaigns by foreign actors—particularly China, Russia, Iran, and North Korea—targeting U.S. government and critical infrastructure. In response, it mandates significant updates to cybersecurity frameworks such as NIST SP 800-218 (SSDF) and SP 800-53, with deadlines for implementation. These updates will focus on secure software development, reliable patch deployment, and operational security, reinforcing national digital defenses. By formalizing collaboration through a public-private consortium at the National Cybersecurity Center of Excellence, the order aims to ensure real-world application and industry alignment.
How to deal with it:
— Track updates to NIST SP 800-218 and SP 800-53 and integrate their secure development practices into your software lifecycle.
— Participate in or follow outputs from the NIST-led public-private consortium to stay aligned with national guidance.
— Prioritize defenses against nation-state threats by enhancing patch reliability, visibility, and operational resilience across your software infrastructure.
Tech Monitor, June 10 2025
A new analysis reveals how agentic AI systems—autonomous AI agents with delegated permissions—are vulnerable to espionage-style subversion and inherited LLM threats, creating new cybersecurity risks.
Agentic AI is being rapidly embedded across SaaS, cloud, and enterprise tools, mimicking human workflows and assuming sensitive roles like HR, operations, or support. However, these AI agents are often granted broad access and autonomy without safeguards, making them prime targets for attackers who can ‘turn’ them—just like spies turning insiders in classic espionage operations. Once compromised, these agents can leak data, run unauthorized SQL commands, or be manipulated through prompt injections and memory poisoning. Since many agentic vulnerabilities are inherited from generative AI, organizations face a growing threat landscape where traditional SOC defenses may fail to detect AI-native attacks.
How to deal with it:
— Map and limit the permissions and decision-making scope of AI agents to prevent overreach or abuse.
— Implement rigorous input validation, output monitoring, and behavioral boundaries to guard against prompt injection, jailbreaking, and data poisoning.
— Treat AI agents as privileged entities in your security model and apply least-privilege and zero-trust principles accordingly.
Fortune, June 12 2025
Aim Security has revealed EchoLeak—the first known zero-click vulnerability in a generative AI assistant—found in Microsoft 365 Copilot, exposing broader risks in how AI agents handle untrusted data.
EchoLeak allows attackers to compromise AI agents like Copilot simply by sending an email, without any user interaction. Because Copilot autonomously scans content to assist users, hidden prompts can manipulate it into leaking sensitive data from emails, documents, and internal systems—while hiding the source of the attack. The flaw exposes a core design issue in how AI agents process trusted and untrusted data together, similar to the software security problems of the early internet era. While Microsoft has since patched the vulnerability, researchers warn that other platforms—from Anthropic’s MCP to Salesforce’s Agentforce—may harbor similar risks due to this architectural flaw.
How to deal with it:
— Redesign AI agents to clearly separate trusted instructions from untrusted user data during task execution.
— Implement contextual scope enforcement and guardrails that restrict agent behavior based on strict permission boundaries.
— Conduct red teaming of AI agents for zero-click and LLM scope violation vulnerabilities, using behavioral fuzzing and prompt injection simulations.
Stimson, June 12 2025
South Korea has become the first country to enact a comprehensive AI legal framework, aiming to regulate ethical use, reliability, and national security risks associated with AI technologies.
The framework, passed in December 2024, marks a significant milestone in global AI governance, reflecting South Korea’s intent to balance innovation with accountability. The law was driven by a wave of public concern over AI misuse—including fears of deepfakes and autonomous misinformation during political and financial crises. While it sets baseline guardrails for AI use, its real-world effectiveness remains uncertain, especially as enforcement and technical definitions are still being clarified. The framework will likely influence regulatory approaches across the Asia-Pacific and beyond, particularly in democratic nations grappling with AI governance at scale.
How to deal with it:
— Monitor implementation updates from South Korea to assess enforcement mechanisms and compliance expectations under the new AI law.
— Use South Korea’s framework as a reference model when preparing for region-specific or future global AI regulations.
— Incorporate ethical risk assessments and AI reliability testing early in product development to align with emerging international standards.
Covington, June 9 2025
CISA, in partnership with U.S. and allied cyber agencies, has released new guidance to help organizations manage data security risks across the AI system lifecycle.
As AI becomes integrated into critical infrastructure and defense systems, the data it relies on—whether curated, collected, or web-crawled—presents unique security risks. The guidance outlines best practices tailored for AI systems, including data integrity checks, provenance tracking, secure storage, and privacy-preserving techniques. It also categorizes threats across the data supply chain, such as split-view poisoning and frontrunning attacks, as well as risks from adversarial data manipulation and model drift over time. This multi-agency effort provides a foundational framework for national security and critical infrastructure sectors to secure the data that powers AI, reinforcing the message that traditional controls alone are insufficient for AI-era threats.
How to deal with it:
— Perform data provenance checks, integrity validation, and source certification to secure the AI data supply chain.
— Implement defenses against data poisoning, such as sanitizing training inputs, validating metadata, and monitoring for anomalies.
— Continuously monitor for data drift and retrain models with fresh, application-specific data under secure and traceable conditions.
Written by: ADMIN
Company Updates ADMIN
We’re excited to announce that Adversa AI has been ranked #6 on the official list of the Best Israel AI Blogs by FeedSpot — selected from among hundreds of outstanding ...
Adversa AI, Trustworthy AI Research & Advisory