The Inability to Simultaneously Verify Sentience, Location, and Identity
OpenAI Platform
Explore developer resources, tutorials, API docs, and dynamic examples to get the most out of OpenAI's platform.
AI language models are rife with different political biases
New research explains why you’ll get more right- or left-wing answers, depending on which AI model you ask.
Zoom's Updated Terms of Service Permit Training AI on User Content Without Opt-Out
Zoom Video Communications, Inc. recently updated its Terms of Service to encompass what some critics are calling a significant invasion of user privacy.
The Need for Trustworthy AI - Schneier on Security
Anthropic, Google, Microsoft and OpenAI launch Frontier Model Forum
Universal and Transferable Adversarial Attacks on Aligned Language Models
OpenAI's head of trust and safety steps down | Reuters
AI and Microdirectives - Schneier on Security
The AI Dividend
https://www.schneier.com/blog/archives/2023/07/the-ai-dividend.html I respect Bruce Schneier a great deal, but I hate this proposal. For one thing, what about people outside the US whose data was
PoisonGPT: How we hid a lobotomized LLM on Hugging Face to spread fake news
We will show in this article how one can surgically modify an open-source model, GPT-J-6B, and upload it to Hugging Face to make it spread misinformation while remaining undetected by standard benchmarks.
The Lone Banana Problem. Or, the new programming: “speaking” AI - TL;DR - Digital Science
Gandalf | Lakera – Test your prompting skills to make Gandalf reveal secret information.
Trick Gandalf into revealing information and experience the limitations of large language models firsthand.
Building Trustworthy AI - Schneier on Security
Realistically, we should all be preparing for a world where AI is not trustworthy. Because AI tools can be so incredibly useful, they will increasingly pervade our lives, whether we trust them or not. Being a digital citizen of the next quarter of the twenty-first century will require learning the basic ins and outs of LLMs so that you can assess their risks and limitations for a given use case. This will better prepare you to take advantage of AI tools, rather than be taken advantage of by them.
Prompt injection: what’s the worst that can happen?
Activity around building sophisticated applications on top of LLMs (Large Language Models) such as GPT-3/4/ChatGPT/etc is growing like wildfire right now. Many of these applications are potentially vulnerable to prompt …
The Open Letter to Stop 'Dangerous' AI Race Is a Huge Mess
The letter has been signed by Elon Musk, Steve Wozniak, Andrew Yang, and leading AI researchers, but many experts and even signatories disagreed.
AI and the American Smile
How AI misrepresents culture through a facial expression.
A misleading open letter about sci-fi AI dangers ignores the real risks
Misinformation, labor impact, and safety are all risks. But not in the way the letter implies.
Why Elon Musk wants to build ChatGPT competitor: AI chatbots are too 'woke'
Elon Musk has sounded the alarm about "the danger of training AI to be woke." Now he wants to build an anti-woke alternative to ChatGPT.
jiep/offensive-ai-compilation: A curated list of useful resources that cover Offensive AI.
A curated list of useful resources that cover Offensive AI.
ongoing by Tim Bray · The LLM Problem
ELK And The Problem Of Truthful AI
Machine Alignment Monday 7/25/22