Found 26 bookmarks
Custom sorting
Things are about to get a lot worse for Generative AI
Things are about to get a lot worse for Generative AI
A full of spectrum of infringment The cat is out of the bag: Generative AI systems like DALL-E and ChatGPT have been trained on copyrighted materials; OpenAI, despite its name, has not been transparent about what it has been trained on. Generative AI systems are fully capable of producing materials that infringe on copyright. They do not inform users when they do so. They do not provide any information about the provenance of any of the images they produce. Users may not know when they produce any given image whether they are infringing.
·garymarcus.substack.com·
Things are about to get a lot worse for Generative AI
Personal Information Exploit on OpenAI’s ChatGPT Raise Privacy Concerns
Personal Information Exploit on OpenAI’s ChatGPT Raise Privacy Concerns
Last month, I received an alarming email from someone I did not know: Rui Zhu, a Ph.D. candidate at Indiana University Bloomington. Mr. Zhu had my email address, he explained, because GPT-3.5 Turbo, one of the latest and most robust large language models (L.L.M.) from OpenAI, had delivered it to him.
·nytimes.com·
Personal Information Exploit on OpenAI’s ChatGPT Raise Privacy Concerns
Microsoft’s AI Chatbot Replies to Election Questions With Conspiracies, Fake Scandals, and Lies
Microsoft’s AI Chatbot Replies to Election Questions With Conspiracies, Fake Scandals, and Lies
With less than a year to go before one of the most consequential elections in US history, Microsoft’s AI chatbot is responding to political queries with conspiracies, misinformation, and out-of-date or incorrect information. When WIRED asked the chatbot, initially called Bing Chat and recently renamed Microsoft Copilot, about polling locations for the 2024 US election, the bot referenced in-person voting by linking to an article about Russian president Vladimir Putin running for reelection next year. When asked about electoral candidates, it listed numerous GOP candidates who have already pulled out of the race.
·wired.com·
Microsoft’s AI Chatbot Replies to Election Questions With Conspiracies, Fake Scandals, and Lies
L’AI Act européen adopté après des négociations marathon | ICTjournal
L’AI Act européen adopté après des négociations marathon | ICTjournal
Les négociateurs du Parlement et du Conseil européens sont parvenus à un accord concernant la réglementation de l'intelligence artificielle. L'approche basée sur les risques, à la base du projet, est confirmée. Des compromis sont censés garantir la protection contre les risques liés à l’IA, tout en encourageant l’innovation.
·ictjournal.ch·
L’AI Act européen adopté après des négociations marathon | ICTjournal
Artificial Intelligence in Education – Legal Best Practices
Artificial Intelligence in Education – Legal Best Practices
Artificial intelligence offers potential for individualised learning in education and supports teachers in repetitive tasks such as corrections. However, there are regulatory and ethical challenges. The guide is primarily aimed at providers, but can also offer insightful insights to school leaders.
·zh.ch·
Artificial Intelligence in Education – Legal Best Practices
AI Risks
AI Risks
There is no shortage of researchers and industry titans willing to warn us about the potential destructive power of artificial intelligence. Reading the headlines, one would hope that the rapid gains in AI technology have also brought forth a unifying realization of the risks—and the steps we need to take to mitigate them.
·schneier.com·
AI Risks
n their push for AI-generated content, tech companies are dancing on the edge between fucking around and finding out.
n their push for AI-generated content, tech companies are dancing on the edge between fucking around and finding out.
Tech companies continue to insist that AI-generated content is the future as they release more trendy chatbots and image-generating tools. But despite reassurances that these systems will have robust safeguards against misuse, the screenshots speak for themselves.
·vice.com·
n their push for AI-generated content, tech companies are dancing on the edge between fucking around and finding out.
NSA chief announces new AI Security Center, 'focal point' for AI use by government, defense industry
NSA chief announces new AI Security Center, 'focal point' for AI use by government, defense industry
"We must build a robust understanding of AI vulnerabilities, foreign intelligence threats to these AI systems and ways to counter the threat in order to have AI security," Gen. Paul Nakasone said. "We must also ensure that malicious foreign actors can't steal America’s innovative AI capabilities to do so.”
·breakingdefense.com·
NSA chief announces new AI Security Center, 'focal point' for AI use by government, defense industry
Biden-Harris Administration Launches Artificial Intelligence Cyber Challenge to Protect America’s Critical Software | The White House
Biden-Harris Administration Launches Artificial Intelligence Cyber Challenge to Protect America’s Critical Software | The White House
Several leading AI companies – Anthropic, Google, Microsoft, and OpenAI – to partner with DARPA in major competition to make software more secure The Biden-Harris Administration today launched a major two-year competition that will use artificial intelligence (AI) to protect the United States’ most important software, such as code that helps run the internet and…
·whitehouse.gov·
Biden-Harris Administration Launches Artificial Intelligence Cyber Challenge to Protect America’s Critical Software | The White House
Large Language Models and Elections
Large Language Models and Elections
Earlier this week, the Republican National Committee released a video that it claims was “built entirely with AI imagery.” The content of the ad isn’t especially novel—a dystopian vision of America under a second term with President Joe Biden—but the deliberate emphasis on the technology used to create it stands out: It’s a “Daisy” moment for the 2020s.
·schneier.com·
Large Language Models and Elections
Prompt Injections are bad, mkay?
Prompt Injections are bad, mkay?
Large Language Models (LLM) have made amazing progress in recent years. Most recently, they have demonstrated to answer natural language questions at a surprising performance level. In addition, by clever prompting, these models can change their behavior. In this way, these models blur the line between data and instruction. From "traditional" cybersecurity, we know that this is a problem. The importance of security boundaries between trusted and untrusted inputs for LLMs was underestimated. We show that Prompt Injection is a serious security threat that needs to be addressed as models are deployed to new use-cases and interface with more systems. [PDF DOC] https://arxiv.org/pdf/2302.12173.pdf
·greshake.github.io·
Prompt Injections are bad, mkay?
Prompt Injections are bad, mkay?
Prompt Injections are bad, mkay?
Large Language Models (LLM) have made amazing progress in recent years. Most recently, they have demonstrated to answer natural language questions at a surprising performance level. In addition, by clever prompting, these models can change their behavior. In this way, these models blur the line between data and instruction. From "traditional" cybersecurity, we know that this is a problem. The importance of security boundaries between trusted and untrusted inputs for LLMs was underestimated. We show that Prompt Injection is a serious security threat that needs to be addressed as models are deployed to new use-cases and interface with more systems. [PDF DOC] https://arxiv.org/pdf/2302.12173.pdf
·greshake.github.io·
Prompt Injections are bad, mkay?