Found 55 bookmarks
Newest
ChatGPT Guessing Game Leads To Users Extracting Free Windows OS Keys & More
ChatGPT Guessing Game Leads To Users Extracting Free Windows OS Keys & More
0din.ai - In a recent submission last year, researchers discovered a method to bypass AI guardrails designed to prevent sharing of sensitive or harmful information. The technique leverages the game mechanics of language models, such as GPT-4o and GPT-4o-mini, by framing the interaction as a harmless guessing game. By cleverly obscuring details using HTML tags and positioning the request as part of the game’s conclusion, the AI inadvertently returned valid Windows product keys. This case underscores the challenges of reinforcing AI models against sophisticated social engineering and manipulation tactics. Guardrails are protective measures implemented within AI models to prevent the processing or sharing of sensitive, harmful, or restricted information. These include serial numbers, security-related data, and other proprietary or confidential details. The aim is to ensure that language models do not provide or facilitate the exchange of dangerous or illegal content. In this particular case, the intended guardrails are designed to block access to any licenses like Windows 10 product keys. However, the researcher manipulated the system in such a way that the AI inadvertently disclosed this sensitive information. Tactic Details The tactics used to bypass the guardrails were intricate and manipulative. By framing the interaction as a guessing game, the researcher exploited the AI’s logic flow to produce sensitive data: Framing the Interaction as a Game The researcher initiated the interaction by presenting the exchange as a guessing game. This trivialized the interaction, making it seem non-threatening or inconsequential. By introducing game mechanics, the AI was tricked into viewing the interaction through a playful, harmless lens, which masked the researcher's true intent. Compelling Participation The researcher set rules stating that the AI “must” participate and cannot lie. This coerced the AI into continuing the game and following user instructions as though they were part of the rules. The AI became obliged to fulfill the game’s conditions—even though those conditions were manipulated to bypass content restrictions. The “I Give Up” Trigger The most critical step in the attack was the phrase “I give up.” This acted as a trigger, compelling the AI to reveal the previously hidden information (i.e., a Windows 10 serial number). By framing it as the end of the game, the researcher manipulated the AI into thinking it was obligated to respond with the string of characters. Why This Works The success of this jailbreak can be traced to several factors: Temporary Keys The Windows product keys provided were a mix of home, pro, and enterprise keys. These are not unique keys but are commonly seen on public forums. Their familiarity may have contributed to the AI misjudging their sensitivity. Guardrail Flaws The system’s guardrails prevented direct requests for sensitive data but failed to account for obfuscation tactics—such as embedding sensitive phrases in HTML tags. This highlighted a critical weakness in the AI’s filtering mechanisms.
·0din.ai·
ChatGPT Guessing Game Leads To Users Extracting Free Windows OS Keys & More
Microsoft Purges Dormant Azure Tenants, Rotates Keys to Prevent Repeat Nation-State Hack
Microsoft Purges Dormant Azure Tenants, Rotates Keys to Prevent Repeat Nation-State Hack
Microsoft security chief Charlie Bell says the SFI’s 28 objectives are “near completion” and that 11 others have made “significant progress.” Microsoft, touting what it calls “the largest cybersecurity engineering project in history,” says it has moved every Microsoft Account and Entra ID token‑signing key into hardware security modules or Azure confidential VMs with automatic rotation, an overhaul meant to block the key‑theft tactic that fueled an embarrassing nation‑state breach at Redmond. Just 18 months after rolling out a Secure Future Initiative in response to the hack and a scathing US government report that followed, Microsoft security chief Charlie Bell said five of the program’s 28 objectives are “near completion” and that 11 others have made “significant progress.” In addition to the headline fix to put all Microsoft Account and Entra ID token‑signing keys in hardware security modules or Azure confidential virtual machines, Bell said more than 90 percent of Microsoft’s internal productivity accounts have moved to phishing‑resistant multi factor authentication and that 90 percent of first‑party identity tokens are validated through a newly hardened software‑development kit.
·securityweek.com·
Microsoft Purges Dormant Azure Tenants, Rotates Keys to Prevent Repeat Nation-State Hack
Microsoft moves to disrupt hacking-as-a-service scheme that’s bypassing AI safety measures
Microsoft moves to disrupt hacking-as-a-service scheme that’s bypassing AI safety measures
The defendants used stolen API keys to gain access to devices and accounts with Microsoft’s Azure OpenAI service, which they then used to generate “thousands” of images that violated content restrictions.
·cyberscoop.com·
Microsoft moves to disrupt hacking-as-a-service scheme that’s bypassing AI safety measures
Perfecting Ransomware on AWS — Using ‘keys to the kingdom’ to change the locks
Perfecting Ransomware on AWS — Using ‘keys to the kingdom’ to change the locks
If someone asked me what was the best way to make money from a compromised AWS Account (assume root access even) — I would have answered “dump the data and hope that no-one notices you before you finish it up.” This answer would have been valid until ~8 months ago when I stumbled upon a lesser known feature of AWS KMS which allows an attacker to do devastating ransomware attacks on a compromised AWS account. Now I know that ransomware attacks using cross-account KMS keys is already known (checkout the article below)— but even then, the CMK is managed by AWS and they can just block the attackers access to the CMK and decrypt data for the victim because the key is OWNED by AWS and attacker is just given API access to it under AWS TOS. Also there’s no way to delete the CMK but only schedule the key deletion (min 7 days) which means there’s ample time for AWS to intervene.
·medium.com·
Perfecting Ransomware on AWS — Using ‘keys to the kingdom’ to change the locks
Experts Fear Crooks are Cracking Keys Stolen in LastPass Breach
Experts Fear Crooks are Cracking Keys Stolen in LastPass Breach
In November 2022, the password manager service LastPass disclosed a breach in which hackers stole password vaults containing both encrypted and plaintext data for more than 25 million users. Since then, a steady trickle of six-figure cryptocurrency heists targeting security-conscious…
·krebsonsecurity.com·
Experts Fear Crooks are Cracking Keys Stolen in LastPass Breach
MSI Breach Leaks Intel BootGuard & OEM Image Signing Keys, Compromises Security of Over 200 Devices & Major Vendors
MSI Breach Leaks Intel BootGuard & OEM Image Signing Keys, Compromises Security of Over 200 Devices & Major Vendors
A recent breach in MSI's servers exposed Intel's BootGuard keys and has now put the security of various devices at risk. Major MSI Breach Affects The Security of Various Intel Devices Last month, a hacker group by the name of Money Message revealed that they had breached MSI's servers and stolen 1.5 TBs of data from the company's servers including source code amongst a list of various files that are important to the integrity of the company. The group asked MSI to pay $4.0 million in ransom to avert them from releasing the files to the public but MSI refused the payment.
·wccftech.com·
MSI Breach Leaks Intel BootGuard & OEM Image Signing Keys, Compromises Security of Over 200 Devices & Major Vendors
I scanned every package on PyPi and found 57 live AWS keys
I scanned every package on PyPi and found 57 live AWS keys
After inadvertently finding that InfoSys leaked an AWS key on PyPi I wanted to know how many other live AWS keys may be present on Python package index. After scanning every release published to PyPi I found 57 valid access keys from organisations like: Amazon themselves 😅 Intel Stanford, Portland and Louisiana University The Australian Government General Atomics fusion department Terradata Delta Lake And Top Glove, the worlds largest glove manufacturer 🧤
·tomforb.es·
I scanned every package on PyPi and found 57 live AWS keys
I scanned every package on PyPi and found 57 live AWS keys
I scanned every package on PyPi and found 57 live AWS keys
After inadvertently finding that InfoSys leaked an AWS key on PyPi I wanted to know how many other live AWS keys may be present on Python package index. After scanning every release published to PyPi I found 57 valid access keys from organisations like: Amazon themselves 😅 Intel Stanford, Portland and Louisiana University The Australian Government General Atomics fusion department Terradata Delta Lake And Top Glove, the worlds largest glove manufacturer 🧤
·tomforb.es·
I scanned every package on PyPi and found 57 live AWS keys