In this paper, we first present a taxonomy of GenAI misuse tactics, informed by existing academic literature and a qualitative analysis of 200 media reports of misuse and demonstrations of abuse of GenAI systems published between January 2023 and March 2024. Based on this analysis, we then illuminate key and novel patterns in GenAI misuse during this time period (see Section 4: Findings), including potential motivations, strategies, and how attackers leverage and abuse system capabilities across modalities (e.g. image, text, audio, video) in an uncontrolled environment.
We find that:
1. Manipulation of human likeness and falsification of evidence underlie the most prevalent tactics in real-world cases of misuse. Most of these were deployed with a discernible intent to influence public opinion, enable scams or fraudulent activities, or generate profit.
2. The majority of reported cases of misuse do not involve technologically sophisticated uses of, or attacks on, GenAI systems. Instead, they predominantly exploit easily accessible GenAI capabilities that require minimal technical expertise.
3. The increased sophistication, availability, and accessibility of GenAI tools seemingly introduce new, lower-level forms of misuse that are neither overtly malicious nor in explicit violation of these tools’ terms of service, but that still carry concerning ethical ramifications. These include the emergence of new forms of communication for political outreach, self-promotion, and advocacy that blur the lines between authenticity and deception (see Section 5: Discussion).