Microsoft moves to disrupt hacking-as-a-service scheme that’s bypassing AI safety measures
The defendants used stolen API keys to gain access to devices and accounts with Microsoft’s Azure OpenAI service, which they then used to generate “thousands” of images that violated content restrictions.
·cyberscoop.com·
Cybercriminals impersonate OpenAI in large-scale phishing attack
Since the launch of ChatGPT, OpenAI has sparked significant interest among both businesses and cybercriminals. While companies are increasingly concerned about whether their existing cybersecurity measures can adequately defend against threats created with generative AI tools, attackers are finding new ways to exploit them. From crafting convincing phishing campaigns to deploying advanced credential harvesting and malware delivery methods, cybercriminals are using AI to target end users and capitalize on potential vulnerabilities. Barracuda threat researchers recently uncovered a large-scale OpenAI impersonation campaign targeting businesses worldwide. Attackers used a well-known tactic: they impersonated OpenAI with an urgent message requesting updated payment information to process a monthly subscription.
·blog.barracuda.com·
A Hacker Stole OpenAI Secrets, Raising Fears That China Could, Too
Early last year, a hacker gained access to the internal messaging systems of OpenAI, the maker of ChatGPT, and stole details about the design of the company’s A.I. technologies. The hacker lifted details from discussions in an online forum where employees talked about OpenAI’s latest technologies, according to two people familiar with the incident, but did not get into the systems where the company houses and builds its artificial intelligence.
·nytimes.com·
OPWNAI : Cybercriminals Starting to Use ChatGPT
At the end of November 2022, OpenAI released ChatGPT, the new interface for its Large Language Model (LLM), which instantly created a flurry of interest in AI and its possible uses. However, ChatGPT has also added some spice to the modern cyber threat landscape, as it quickly became apparent that code generation can help less-skilled threat actors effortlessly launch cyberattacks. In Check Point Research’s (CPR) previous blog, we described how ChatGPT successfully conducted a full infection flow, from creating a convincing spear-phishing email to running a reverse shell capable of accepting commands in English. The question at hand is whether this is just a hypothetical threat or whether there are already threat actors using OpenAI technologies for malicious purposes. CPR’s analysis of several major underground hacking communities shows that there are already first instances of cybercriminals using OpenAI to develop malicious tools. As we suspected, some of the cases clearly showed that many cybercriminals using OpenAI have no development skills at all. Although the tools presented in this report are pretty basic, it is only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools for malicious ends.
·research.checkpoint.com·