Getting a taste of your own medicine: Threat actor MUT-1244 targets offensive actors, leaking hundreds of thousands of credentials | Datadog Security Labs
  • In this post, we describe our in-depth investigation into a threat actor to which we have assigned the identifier MUT-1244. MUT-1244 uses two initial access vectors to compromise their victims, both leveraging the same second-stage payload: a phishing campaign targeting thousands of academic researchers, and a large number of trojanized GitHub repositories, such as proof-of-concept code for exploiting known CVEs. Over 390,000 credentials, believed to be for WordPress accounts, have been exfiltrated to the threat actor through the malicious code in the trojanized "yawpp" GitHub project, which masquerades as a WordPress credentials checker. Hundreds of MUT-1244 victims have been compromised, and more continue to be. Victims are believed to be offensive actors, including pentesters and security researchers as well as malicious threat actors, and had sensitive data such as SSH private keys and AWS access keys exfiltrated. We assess that MUT-1244 overlaps with a campaign tracked in previous research on the malicious npm package 0xengine/xmlrpc and the malicious GitHub repository hpc20235/yawpp.
securitylabs.datadoghq.com
Project Naptime: Evaluating Offensive Security Capabilities of Large Language Models
At Project Zero, we constantly seek to expand the scope and effectiveness of our vulnerability research. Though much of our work still relies on traditional methods like manual source code audits and reverse engineering, we're always looking for new approaches. As the code comprehension and general reasoning ability of Large Language Models (LLMs) has improved, we have been exploring how these models can reproduce the systematic approach of a human security researcher when identifying and demonstrating security vulnerabilities. We hope that in the future, this can close some of the blind spots of current automated vulnerability discovery approaches, and enable automated detection of "unfuzzable" vulnerabilities.
googleprojectzero.blogspot.com