Article + GenAI Security ADMIN todayJuly 10, 2025 1050
On 30 June 2025, security researchers Ian Carroll and Sam Curry opened McDonald’s recruiting site, clicked a tiny “Paradox team members” link, typed the universal joke password 123456, and found themselves inside the admin console of McHire—the AI-driven chatbot platform that screens applicants for about 90% of McDonald’s 40,000+ restaurants worldwide.
Within minutes they were paging through sequential IDs and could pull any candidate’s full record—names, e-mails, phone numbers, even authentication tokens—more than 64 million of them.
This wasn’t an exotic LLM jailbreak. It was a reminder that the AI supply-chain is built on the same old web stacks, and that skipping the basics can put tens of millions of people at risk. Here’s the five-minute brief every CISO will want to forward to HR tonight.
Security researchers Ian Carroll and Sam Curry spent 30 minutes poking at the public McHire site. When prompt-injection tests were uneventful they tried the staff login link labelled “Paradox team members”… and the default credentials 123456 : 123456 opened the admin console.
Inside, an Insecure Direct Object Reference (IDOR) let them iterate applicant IDs downward from 64,185,742 and view any other candidate’s full record—no rate-limit, no MFA, no audit alert.
| Stakeholder | Role | Quote / Position |
| Paradox.ai (vendor) | Built & hosts the Olivia chatbot and McHire SaaS |
“We own this… we’re launching a bug-bounty program.” — Stephanie King, CLO Wired |
| McDonald’s Corp. | Data controller; 90% franchise adoption |
“Unacceptable vulnerability from a third-party provider… remediated the same day we were notified.” Wired |
| Ian Carroll & Sam Curry | Independent security researchers |
Disclosed on 30 Jun 2025; accessed seven records for proof-of-concept. ian carroll |
| Date (2025) | Event |
| 30 Jun 17:46 ET | Researchers e-mail Paradox & McD with full vuln details |
| 30 Jun 18:24 | McDonald’s acknowledges, requests PoC |
| 30 Jun 19:31 | Default credentials disabled; backend no longer accessible |
| 01 Jul 22:18 | Paradox confirms both issues fixed, begins internal review |
| 09 Jul | WIRED and The Verge break the story |
| 10 Jul | Mainstream/infosec press amplifies (Computing, India Today, CyberSecurityNews, etc.) |
The vulnerable infrastructure was hosted on Paradox.ai’s US cloud tenant; admin accounts traced to a test “restaurant” whose listed employees were Paradox developers in Vietnam. Impacted applicants span global markets because large franchisees pipe every job ad through McHire.
| Layer | Weakness | Exploit Path |
| Authentication | Hard-coded test account, no MFA, password = 123456 |
Straight credential stuffing; 100% success on first try |
| Authorization / Objects | IDOR on /api/lead/cem-xhr | Incrementing/decrementing lead_id returned any candidate record |
| Environment hygiene | Dormant test tenant left active since 2019 |
Provided elevated backend scope once breached |
| Monitoring & Response | No rate-limit / anomaly alerting | Researchers could enumerate records unnoticed |
Lack of Defense in Depth:
Third-Party Security Gaps:
Model Security Issues:
Source index
McHire’s face-plant is the poster-child for why AI security is not a one-and-done checkbox. Yesterday’s “simple” bugs—hard-coded passwords, IDORs, stale test tenants—slipped straight through change windows because nobody kept attacking the system after go-live. Meanwhile, tomorrow’s threats—prompt-injection, jailbreak chaining, model-poisoning—will evolve every time the vendor fine-tunes the bot or swaps an LLM.
The fix is a single, rolling discipline: Continuous AI Red Teaming that covers both sides of the stack.
| Surface | Classical tests you still need | AI-specific tests you must add |
| Auth & Session | Credential-stuffing, MFA bypass, IDOR, rate limits |
Context-hijack via auth-header injection that reroutes LLM calls |
| Data layer | SQLi/NoSQLi, stale test data, supply-chain SBOM gaps |
Poisoned training snippets, data exfiltration via “helpful” responses |
| Business logic | Race conditions, TOCTOU, mis-scoped RBAC |
Jailbreak chains, prompt injections, hallucination-driven fraud |
| User comms | Phishing, typo-squatting, weak DMARC |
Social-engineered prompts that trick the bot into sending PII |
Operational cadence
Bottom line: AI magnifies old sins and invents new ones. If you’re deploying chatbots that touch customer or employee data, budget for a red-team subscription the same way you budget for cloud spend. The bot that hires your crew today could bankrupt you tomorrow—and the only way to know before the headlines is to attack it first.
For more expert breakdowns, visit our Trusted AI Blog or follow us on LinkedIn to stay up to date with the latest in AI security. Be the first to learn about emerging risks, tools, and defense strategies.
Stay up to date with what is happening! Plus, get a first look at news, noteworthy research, and the worst attacks on AI—delivered right to your inbox.
Written by: ADMIN
Agentic AI Security Digest ADMIN
Explore the Top Agentic AI Resources to stay informed about the most pressing risks and defenses in the field. As autonomous agents gain new capabilities—reasoning, memory, tool use—they also introduce ...
(c) Adversa AI, 2026. Continuous red teaming of AI systems, trustworthy AI research & advisory
Privacy, cookies & security compliance · Security & trust center