Found 50 bookmarks
Custom sorting
Researchers found multiple flaws in ChatGPT plugins
Researchers found multiple flaws in ChatGPT plugins
Researchers from Salt Security discovered three types of vulnerabilities in ChatGPT plugins that can be could have led to data exposure and account takeovers. ChatGPT plugins are additional tools or extensions that can be integrated with ChatGPT to extend its functionalities or enhance specific aspects of the user experience. These plugins may include new natural language processing features, search capabilities, integrations with other services or platforms, text analysis tools, and more. Essentially, plugins allow users to customize and tailor the ChatGPT experience to their specific needs.
·securityaffairs.com·
Researchers found multiple flaws in ChatGPT plugins
Things are about to get a lot worse for Generative AI
Things are about to get a lot worse for Generative AI
A full of spectrum of infringment The cat is out of the bag: Generative AI systems like DALL-E and ChatGPT have been trained on copyrighted materials; OpenAI, despite its name, has not been transparent about what it has been trained on. Generative AI systems are fully capable of producing materials that infringe on copyright. They do not inform users when they do so. They do not provide any information about the provenance of any of the images they produce. Users may not know when they produce any given image whether they are infringing.
·garymarcus.substack.com·
Things are about to get a lot worse for Generative AI
Personal Information Exploit on OpenAI’s ChatGPT Raise Privacy Concerns
Personal Information Exploit on OpenAI’s ChatGPT Raise Privacy Concerns
Last month, I received an alarming email from someone I did not know: Rui Zhu, a Ph.D. candidate at Indiana University Bloomington. Mr. Zhu had my email address, he explained, because GPT-3.5 Turbo, one of the latest and most robust large language models (L.L.M.) from OpenAI, had delivered it to him.
·nytimes.com·
Personal Information Exploit on OpenAI’s ChatGPT Raise Privacy Concerns
AI Act, come funziona lo stop al riconoscimento biometrico della prima legge europea sull'intelligenza artificiale | Wired Italia
AI Act, come funziona lo stop al riconoscimento biometrico della prima legge europea sull'intelligenza artificiale | Wired Italia
Sono previste tre eccezioni per le forze dell'ordine, con una lista di 16 crimini per le cui indagini può essere ammesso. Serve un'autorizzazione dall'autorità giudiziaria, ma si può partire senza e richiederla in 24 ore
·wired.it·
AI Act, come funziona lo stop al riconoscimento biometrico della prima legge europea sull'intelligenza artificiale | Wired Italia
Using AI to Automatically Jailbreak GPT-4 and Other LLMs in Under a Minute
Using AI to Automatically Jailbreak GPT-4 and Other LLMs in Under a Minute
It’s been one year since the launch of ChatGPT, and since that time, the market has seen astonishing advancement of large language models (LLMs). Despite the pace of development continuing to outpace model security, enterprises are beginning to deploy LLM-powered applications. Many rely on guardrails implemented by model developers to prevent LLMs from responding to sensitive prompts. However, even with the considerable time and effort spent by the likes of OpenAI, Google, and Meta, these guardrails are not resilient enough to protect enterprises and their users today. Concerns surrounding model risk, biases, and potential adversarial exploits have come to the forefront.
·robustintelligence.com·
Using AI to Automatically Jailbreak GPT-4 and Other LLMs in Under a Minute
A Closer Look at ChatGPT's Role in Automated Malware Creation
A Closer Look at ChatGPT's Role in Automated Malware Creation
As the use of ChatGPT and other artificial intelligence (AI) technologies becomes more widespread, it is important to consider the possible risks associated with their use. One of the main concerns surrounding these technologies is the potential for malicious use, such as in the development of malware or other harmful software. Our recent reports discussed how cybercriminals are misusing the large language model’s (LLM) advanced capabilities: We discussed how ChatGPT can be abused to scale manual and time-consuming processes in cybercriminals’ attack chains in virtual kidnapping schemes. We also reported on how this tool can be used to automate certain processes in harpoon whaling attacks to discover “signals” or target categories.
·trendmicro.com·
A Closer Look at ChatGPT's Role in Automated Malware Creation
Don’t you (forget NLP): Prompt injection with control characters in ChatGPT
Don’t you (forget NLP): Prompt injection with control characters in ChatGPT
Like many companies, Dropbox has been experimenting with large language models (LLMs) as a potential backend for product and research initiatives. As interest in leveraging LLMs has increased in recent months, the Dropbox Security team has been advising on measures to harden internal Dropbox infrastructure for secure usage in accordance with our AI principles. In particular, we’ve been working to mitigate abuse of potential LLM-powered products and features via user-controlled input.
·dropbox.tech·
Don’t you (forget NLP): Prompt injection with control characters in ChatGPT
ChatGPT creates mutating malware that evades detection by EDR
ChatGPT creates mutating malware that evades detection by EDR
A global sensation since its initial release at the end of last year, ChatGPT's popularity among consumers and IT professionals alike has stirred up cybersecurity nightmares about how it can be used to exploit system vulnerabilities. A key problem, cybersecurity experts have demonstrated, is the ability of ChatGPT and other large language models (LLMs) to generate polymorphic, or mutating, code to evade endpoint detection and response (EDR) systems.
·csoonline.com·
ChatGPT creates mutating malware that evades detection by EDR