Found 3549 bookmarks
The US military’s privacy problem in three charts
Researchers were able to purchase all kinds of information about military members, from their net worth to their homeowner status. That’s a national security disaster.
·www-technologyreview-com.cdn.ampproject.org·
Privacy First: A Better Way to Address Online Harms
Contents: Executive Summary; Breaking it Down: What Does Comprehensive Data Privacy Legislation Look Like?; Sketching the Landscape: What Real Privacy Protections Might Accomplish; Protecting Children’s Mental Health; Supporting Journalism; Protecting Access to Healthcare; Fostering Digital Justice...
·eff.org·
Repair Ship Bound for Cut Cables Off Africa’s West Coast as Internet Interrupted
Fiber-optic cables that were damaged by a rockfall in an undersea canyon, resulting in slow internet connections in some parts of Africa, should be repaired next month by a specialized vessel, according to telecommunication companies.
·bloomberg.com·
AI and the Rise of Mediocrity
'AI thrives when our need for originality is low and our demand for mediocrity is high,' writes Ray Nayler.
·time.com·
Meet the Lawyer Leading the Human Resistance Against AI
Matthew Butterick is leading a wave of lawsuits against major AI firms, from OpenAI to Meta. Win or lose, his work will shape the future of human creativity.
·wired.com·
BadLlama: cheaply removing safety fine-tuning from Llama 2-Chat 13B
Llama 2-Chat is a collection of large language models that Meta developed and released to the public. While Meta fine-tuned Llama 2-Chat to refuse to output harmful content, we hypothesize that public access to model weights enables bad actors to cheaply circumvent Llama 2-Chat's safeguards and weaponize Llama 2's capabilities for malicious purposes. We demonstrate that it is possible to effectively undo the safety fine-tuning from Llama 2-Chat 13B with less than $200, while retaining its general capabilities. Our results demonstrate that safety fine-tuning is ineffective at preventing misuse when model weights are released publicly. Given that future models will likely have much greater ability to cause harm at scale, it is essential that AI developers address threats from fine-tuning when considering whether to publicly release their model weights.
·arxiv.org·
Research Paper | The New Developer
Based on the latest research from the Developer Success Lab, this white paper shares a human-centered, evidence-based framework to help developers thrive during the transition to AI-assisted coding.
·pluralsight.com·
Abhishek on Twitter / X
🚨 There is an urgent need for a legal and regulatory framework to deal with deepfake in India. You might have seen this viral video of actress Rashmika Mandanna on Instagram. But wait, this is a deepfake video of Zara Patel. This thread contains the actual video. (1/3) pic.twitter.com/SidP1Xa4sT — Abhishek (@AbhishekSay) November 5, 2023
·twitter.com·
OpenAI announces leadership transition
Chief technology officer Mira Murati appointed interim CEO to lead OpenAI; Sam Altman departs the company. Search process underway to identify permanent successor.
·openai.com·
Underage Workers Are Training AI
Companies that provide Big Tech with AI data-labeling services are inadvertently hiring young teens to work on their platforms, often exposing them to traumatic content.
·wired.co.uk·