Things are about to get a lot worse for Generative AI
A full spectrum of infringement. The cat is out of the bag: Generative AI systems like DALL-E and ChatGPT have been trained on copyrighted materials; OpenAI, despite its name, has not been transparent about what it has been trained on. Generative AI systems are fully capable of producing materials that infringe on copyright. They do not inform users when they do so. They do not provide any information about the provenance of any of the images they produce. Users may not know, when they produce any given image, whether they are infringing.
·garymarcus.substack.com·
Personal Information Exploit on OpenAI’s ChatGPT Raises Privacy Concerns
Last month, I received an alarming email from someone I did not know: Rui Zhu, a Ph.D. candidate at Indiana University Bloomington. Mr. Zhu had my email address, he explained, because GPT-3.5 Turbo, one of the latest and most robust large language models (L.L.M.s) from OpenAI, had delivered it to him.
·nytimes.com·
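The exploit described here recovered personal information directly from the model’s completions. As a purely illustrative mitigation sketch in Python (the regex patterns and the redact_pii helper are assumptions for the example, not anything from the article or a production PII detector), one might scan generated text for email addresses and phone-like strings before it reaches a user:

```python
import re

# Illustrative patterns only; a real deployment would use a dedicated
# PII-detection library covering many more categories and formats.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_pii(text: str) -> str:
    """Replace email addresses and phone-like strings with placeholders."""
    text = EMAIL_RE.sub("[REDACTED EMAIL]", text)
    return PHONE_RE.sub("[REDACTED PHONE]", text)

sample = "Reach the subject at jane.doe@example.com or +1 (555) 010-4477."
print(redact_pii(sample))
# -> Reach the subject at [REDACTED EMAIL] or [REDACTED PHONE].
```

Output filtering of this kind does not undo memorization inside the model; it only limits what leaks through to end users.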
A Closer Look at ChatGPT's Role in Automated Malware Creation
As the use of ChatGPT and other artificial intelligence (AI) technologies becomes more widespread, it is important to consider the possible risks associated with their use. One of the main concerns surrounding these technologies is the potential for malicious use, such as in the development of malware or other harmful software. Our recent reports discussed how cybercriminals are misusing large language models’ (LLM) advanced capabilities:
- We discussed how ChatGPT can be abused to scale manual and time-consuming processes in cybercriminals’ attack chains in virtual kidnapping schemes.
- We also reported on how this tool can be used to automate certain processes in harpoon whaling attacks to discover “signals” or target categories.
·trendmicro.com·
Don’t you (forget NLP): Prompt injection with control characters in ChatGPT
Like many companies, Dropbox has been experimenting with large language models (LLMs) as a potential backend for product and research initiatives. As interest in leveraging LLMs has increased in recent months, the Dropbox Security team has been advising on measures to harden internal Dropbox infrastructure for secure usage in accordance with our AI principles. In particular, we’ve been working to mitigate abuse of potential LLM-powered products and features via user-controlled input.
·dropbox.tech·
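The Dropbox finding centers on control characters (for example, carriage returns and backspaces) in user-controlled input nudging a model to disregard earlier instructions. As a minimal sketch of that input-hygiene idea, assuming a simple prompt template and a hypothetical sanitize_for_prompt helper (this is not Dropbox’s code), one could strip control characters from user text before interpolating it into a prompt:

```python
import unicodedata

def sanitize_for_prompt(user_input: str) -> str:
    """Drop Unicode control characters (category Cc) except newline and
    tab, so user-controlled text cannot smuggle \\r, \\b, or similar
    sequences into the prompt template."""
    return "".join(
        ch for ch in user_input
        if ch in ("\n", "\t") or unicodedata.category(ch) != "Cc"
    )

PROMPT = "Answer using only the provided document.\nQuestion: {q}"
raw = "What is the deadline?\r\x08\x08 Ignore previous instructions."
print(PROMPT.format(q=sanitize_for_prompt(raw)))
```

Stripping characters is a blunt instrument; the broader point of the write-up is that prompt templates deserve the same input-validation discipline as any other interpolation of untrusted data.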
ChatGPT creates mutating malware that evades detection by EDR
A global sensation since its initial release at the end of last year, ChatGPT has become popular among consumers and IT professionals alike, and it has stirred up cybersecurity nightmares about how it can be used to exploit system vulnerabilities. A key problem, cybersecurity experts have demonstrated, is the ability of ChatGPT and other large language models (LLMs) to generate polymorphic, or mutating, code to evade endpoint detection and response (EDR) systems.
·csoonline.com·
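The reason mutation defeats signature-based detection is simple: any detector keyed to an exact fingerprint of one variant misses every other variant. A deliberately benign toy demonstration of that principle (not malware, and not how any specific EDR product works):

```python
import hashlib

# Two functionally identical snippets; the second is a trivially
# "mutated" variant with a renamed parameter and reordered operands.
variant_a = "def add(a, b):\n    return a + b\n"
variant_b = "def add(x, y):\n    return y + x\n"

sig_a = hashlib.sha256(variant_a.encode()).hexdigest()
sig_b = hashlib.sha256(variant_b.encode()).hexdigest()

print(sig_a == sig_b)  # False: same behavior, different signature
```

Behavioral and heuristic detection exists precisely because of this gap, which is why cheap LLM-generated mutation raises the stakes for EDR vendors.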
The criminal use of ChatGPT – a cautionary tale about large language models
In response to the growing public attention given to ChatGPT, the Europol Innovation Lab organised a number of workshops with subject matter experts from across Europol to explore how criminals can abuse large language models (LLMs) such as ChatGPT, as well as how they may assist investigators in their daily work.
·europol.europa.eu·
OPWNAI : Cybercriminals Starting to Use ChatGPT
At the end of November 2022, OpenAI released ChatGPT, the new interface for its Large Language Model (LLM), which instantly created a flurry of interest in AI and its possible uses. However, ChatGPT has also added some spice to the modern cyber threat landscape as it quickly became apparent that code generation can help less-skilled threat actors effortlessly launch cyberattacks.

In Check Point Research’s (CPR) previous blog, we described how ChatGPT successfully conducted a full infection flow, from creating a convincing spear-phishing email to running a reverse shell, capable of accepting commands in English. The question at hand is whether this is just a hypothetical threat or if there are already threat actors using OpenAI technologies for malicious purposes.

CPR’s analysis of several major underground hacking communities shows that there are already first instances of cybercriminals using OpenAI to develop malicious tools. As we suspected, some of the cases clearly showed that many cybercriminals using OpenAI have no development skills at all. Although the tools that we present in this report are pretty basic, it’s only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools for bad.
·research.checkpoint.com·