All Major Gen-AI Models Vulnerable to ‘Policy Puppetry’ Prompt Injection Attack

A new attack technique named Policy Puppetry can break the protections of major gen-AI models to produce harmful outputs.

#securityweek #EN #2025 #technique #Gen-AI #Models #Policy-Puppetry #AI #vulnerability

securityweek.com · Apr 25, 2025