Microsoft moves to disrupt hacking-as-a-service scheme that’s bypassing AI safety measures
The defendants used stolen API keys to gain access to devices and accounts with Microsoft’s Azure OpenAI service, which they then used to generate “thousands” of images that violated content restrictions.
Cybercriminals impersonate OpenAI in large-scale phishing attack
Since the launch of ChatGPT, OpenAI has sparked significant interest among both businesses and cybercriminals. While companies are increasingly concerned about whether their existing cybersecurity measures can adequately defend against threats created with generative AI tools, attackers are finding new ways to exploit them. From crafting convincing phishing campaigns to deploying advanced credential harvesting and malware delivery methods, cybercriminals are using AI to target end users and capitalize on potential vulnerabilities. Barracuda threat researchers recently uncovered a large-scale OpenAI impersonation campaign targeting businesses worldwide. The attackers relied on a well-known tactic: they impersonated OpenAI in an urgent message requesting updated payment information to process a monthly subscription.
We banned accounts linked to an Iranian influence operation using ChatGPT to generate content focused on multiple topics, including the U.S. presidential campaign. We have seen no indication that this content reached a meaningful audience.
A Hacker Stole OpenAI Secrets, Raising Fears That China Could, Too
Early last year, a hacker gained access to the internal messaging systems of OpenAI, the maker of ChatGPT, and stole details about the design of the company’s A.I. technologies. The hacker lifted details from discussions in an online forum where employees talked about OpenAI’s latest technologies, according to two people familiar with the incident, but did not get into the systems where the company houses and builds its artificial intelligence.
Security researchers created an AI worm in a test environment that can automatically spread between generative AI agents—potentially stealing data and sending spam emails along the way.
Disrupting malicious uses of AI by state-affiliated threat actors
We terminated accounts associated with state-affiliated threat actors. Our findings show our models offer only limited, incremental capabilities for malicious cybersecurity tasks.
The EU Just Passed Sweeping New Rules to Regulate AI
The European Union agreed on terms of the AI Act, a major new set of rules that will govern the building and use of AI and have major implications for Google, OpenAI, and others racing to develop AI systems.
OpenAI managed to appease Italian data authorities and lift the country’s effective ban on ChatGPT last week, but its fight against European regulators is far from over.
At the end of November 2022, OpenAI released ChatGPT, the new interface for its Large Language Model (LLM), which instantly created a flurry of interest in AI and its possible uses. However, ChatGPT has also complicated the modern cyber threat landscape, as it quickly became apparent that code generation can help less-skilled threat actors effortlessly launch cyberattacks. In Check Point Research’s (CPR) previous blog, we described how ChatGPT successfully conducted a full infection flow, from creating a convincing spear-phishing email to running a reverse shell capable of accepting commands in English. The question at hand is whether this is just a hypothetical threat or whether threat actors are already using OpenAI technologies for malicious purposes. CPR’s analysis of several major underground hacking communities shows the first instances of cybercriminals using OpenAI to develop malicious tools. As we suspected, some of the cases clearly showed that many of the cybercriminals using OpenAI have no development skills at all. Although the tools presented in this report are fairly basic, it is only a matter of time before more sophisticated threat actors refine the way they use AI-based tools for malicious ends.
Armed With ChatGPT, Cybercriminals Build Malware And Plot Fake Girl Bots
Users of underground forums have started sharing malware coded with OpenAI’s viral sensation, and dating scammers are planning to create convincing fake girls with the tool. Cyber prognosticators predict more malicious use of ChatGPT is to come.