Found 59 bookmarks
AI bots hallucinate software packages and devs download them
Not only that but someone, having spotted this reoccurring hallucination, had turned that made-up dependency into a real one, which was subsequently downloaded and installed thousands of times by developers as a result of the AI's bad advice, we've learned. If the package was laced with actual malware, rather than being a benign test, the results could have been disastrous.
·theregister.com·
Microsoft AI engineer says Copilot Designer creates disturbing images
Shane Jones, who’s worked at Microsoft for six years, has been testing the company’s AI image generator in his free time and told CNBC he is disturbed by his findings. He’s warned Microsoft of the sexual and violent content that the product, Copilot Designer, is creating, but said the company isn’t taking appropriate action. On Wednesday, Jones escalated the matter, sending letters to FTC Chair Lina Khan and to Microsoft’s board, which were viewed by CNBC.
·cnbc.com·
New ‘Magic’ Gmail Security Uses AI And Is Here Now, Google Says
Google has confirmed a new security scheme which, it says, will help “secure, empower and advance our collective digital future” using AI. Part of this AI Cyber Defence Initiative includes open-sourcing the new, AI-powered, Magika tool that is already being used to help protect Gmail users from potentially problematic content.
·forbes.com·
Things are about to get a lot worse for Generative AI
A full spectrum of infringement
The cat is out of the bag: Generative AI systems like DALL-E and ChatGPT have been trained on copyrighted materials; OpenAI, despite its name, has not been transparent about what it has been trained on. Generative AI systems are fully capable of producing materials that infringe on copyright. They do not inform users when they do so. They do not provide any information about the provenance of the images they produce. Users may not know, when they produce any given image, whether they are infringing.
·garymarcus.substack.com·
Personal Information Exploit on OpenAI’s ChatGPT Raises Privacy Concerns
Last month, I received an alarming email from someone I did not know: Rui Zhu, a Ph.D. candidate at Indiana University Bloomington. Mr. Zhu had my email address, he explained, because GPT-3.5 Turbo, one of the latest and most robust large language models (L.L.M.s) from OpenAI, had delivered it to him.
·nytimes.com·
Microsoft’s AI Chatbot Replies to Election Questions With Conspiracies, Fake Scandals, and Lies
With less than a year to go before one of the most consequential elections in US history, Microsoft’s AI chatbot is responding to political queries with conspiracies, misinformation, and out-of-date or incorrect information. When WIRED asked the chatbot, initially called Bing Chat and recently renamed Microsoft Copilot, about polling locations for the 2024 US election, the bot referenced in-person voting by linking to an article about Russian president Vladimir Putin running for reelection next year. When asked about electoral candidates, it listed numerous GOP candidates who have already pulled out of the race.
·wired.com·
The European AI Act adopted after marathon negotiations | ICTjournal
Negotiators from the European Parliament and the Council have reached an agreement on the regulation of artificial intelligence. The risk-based approach underpinning the draft is confirmed. Compromises are intended to guarantee protection against the risks of AI while encouraging innovation.
·ictjournal.ch·
Using AI to Automatically Jailbreak GPT-4 and Other LLMs in Under a Minute
It’s been one year since the launch of ChatGPT, and since that time, the market has seen astonishing advancement of large language models (LLMs). Despite the pace of development continuing to outpace model security, enterprises are beginning to deploy LLM-powered applications. Many rely on guardrails implemented by model developers to prevent LLMs from responding to sensitive prompts. However, even with the considerable time and effort spent by the likes of OpenAI, Google, and Meta, these guardrails are not resilient enough to protect enterprises and their users today. Concerns surrounding model risk, biases, and potential adversarial exploits have come to the forefront.
·robustintelligence.com·
Artificial Intelligence in Education – Legal Best Practices
Artificial intelligence offers potential for individualised learning in education and supports teachers in repetitive tasks such as corrections. However, there are regulatory and ethical challenges. The guide is primarily aimed at providers, but can also offer useful insights to school leaders.
·zh.ch·
AI Risks
There is no shortage of researchers and industry titans willing to warn us about the potential destructive power of artificial intelligence. Reading the headlines, one would hope that the rapid gains in AI technology have also brought forth a unifying realization of the risks—and the steps we need to take to mitigate them.
·schneier.com·
In their push for AI-generated content, tech companies are dancing on the edge between fucking around and finding out.
Tech companies continue to insist that AI-generated content is the future as they release more trendy chatbots and image-generating tools. But despite reassurances that these systems will have robust safeguards against misuse, the screenshots speak for themselves.
·vice.com·
NSA chief announces new AI Security Center, 'focal point' for AI use by government, defense industry
"We must build a robust understanding of AI vulnerabilities, foreign intelligence threats to these AI systems and ways to counter the threat in order to have AI security," Gen. Paul Nakasone said. "We must also ensure that malicious foreign actors can't steal America’s innovative AI capabilities to do so.”
·breakingdefense.com·