
Digital Ethics
Privacy First: A Better Way to Address Online Harms
Contents: Executive Summary · Breaking it Down: What Does Comprehensive Data Privacy Legislation Look Like? · Sketching the Landscape: What Real Privacy Protections Might Accomplish · Protecting Children’s Mental Health · Supporting Journalism · Protecting Access to Healthcare · Fostering Digital Justice ...
Repair Ship Bound for Cut Cables Off Africa’s West Coast as Internet Interrupted
Fiber-optic cables damaged by a rockfall in an undersea canyon, which has slowed internet connections in parts of Africa, should be repaired next month by a specialized vessel, according to telecommunications companies.
BadLlama: cheaply removing safety fine-tuning from Llama 2-Chat 13B
Llama 2-Chat is a collection of large language models that Meta developed and released to the public. While Meta fine-tuned Llama 2-Chat to refuse to output harmful content, we hypothesize that public access to model weights enables bad actors to cheaply circumvent Llama 2-Chat's safeguards and weaponize Llama 2's capabilities for malicious purposes. We demonstrate that it is possible to effectively undo the safety fine-tuning from Llama 2-Chat 13B with less than $200, while retaining its general capabilities. Our results show that safety fine-tuning is ineffective at preventing misuse when model weights are released publicly. Given that future models will likely have a much greater ability to cause harm at scale, it is essential that AI developers address threats from fine-tuning when considering whether to publicly release their model weights.