ChatGPT-4, Mistral, other AI chatbots spread Russian propaganda
A NewsGuard audit found that chatbots spewed misinformation from American fugitive John Mark Dougan.
Disrupting malicious uses of AI by state-affiliated threat actors
We terminated accounts associated with state-affiliated threat actors. Our findings show our models offer only limited, incremental capabilities for malicious cybersecurity tasks.
Things are about to get a lot worse for Generative AI
A full spectrum of infringement. The cat is out of the bag: generative AI systems like DALL-E and ChatGPT have been trained on copyrighted materials, and OpenAI, despite its name, has not been transparent about what its models were trained on. Generative AI systems are fully capable of producing materials that infringe on copyright. They do not inform users when they do so, and they do not provide any information about the provenance of the images they produce. Users may not know, when they generate any given image, whether they are infringing.
Personal Information Exploit on OpenAI’s ChatGPT Raises Privacy Concerns
Last month, I received an alarming email from someone I did not know: Rui Zhu, a Ph.D. candidate at Indiana University Bloomington. Mr. Zhu had my email address, he explained, because GPT-3.5 Turbo, one of the latest and most robust large language models (L.L.M.s) from OpenAI, had delivered it to him.
Using AI to Automatically Jailbreak GPT-4 and Other LLMs in Under a Minute
It’s been one year since the launch of ChatGPT, and in that time the market has seen astonishing advancement of large language models (LLMs). Even though development continues to outpace model security, enterprises are beginning to deploy LLM-powered applications. Many rely on guardrails implemented by model developers to prevent LLMs from responding to sensitive prompts. However, even with the considerable time and effort spent by the likes of OpenAI, Google, and Meta, these guardrails are not resilient enough to protect enterprises and their users today. Concerns surrounding model risk, biases, and potential adversarial exploits have come to the forefront.
How ChatGPT can turn anyone into a ransomware and malware threat actor
Ever since OpenAI launched ChatGPT at the end of November, commentators on all sides have been concerned about the impact AI-driven content creation will have, particularly in the realm of cybersecurity. In fact, many researchers are concerned that generative AI solutions will democratize cybercrime.