Found 2 bookmarks
Newest
#sora #aiethics #aivideo #ailiteracy #deepfakes #openai #cybersecurity | Jon Ippolito | 11 comments
#sora #aiethics #aivideo #ailiteracy #deepfakes #openai #cybersecurity | Jon Ippolito | 11 comments
The surge in watermark removers within days of Sora 2’s release reminds us that most AI detection is just security theater at this point. Detection advocates will counter that sure, visible marks like the little Sora “cloud” can be cropped or Photoshopped, but embedded watermarks like Google’s SynthID are harder to rub out. Unfortunately even steganographic watermarks can be scrubbed by screenshotting, model-to-model laundering, or just serious editing. An imbalance of incentives means detectors are unlikely to win an arms race in which counterfeiters are more motivated to subvert watermarks than AI companies are to enforce them. I don’t think the solution is to add watermarks to show what’s fake, but to add digital signatures to show what’s real. The technology for this is decades old; it’s why all the trustworthy web sites you’ll visit today show a little lock icon 🔒 in the location bar. In the post-Sora age, you shouldn’t assume media is real unless it’s signed by a trusted source. If we can do it for https, we can do it for AI. I’ll link to “Sora 2 Watermark Removers Flood the Web” by Matthew Gault of 404 Media in a comment. The before-and-after image is the thumbnail from Fayyaz Ahmed’s “Remove Sora 2 Watermark For Free” YouTube video. #Sora #AIethics #AIvideo #AIliteracy #DeepFakes #OpenAI #Cybersecurity | 11 comments on LinkedIn
·linkedin.com·
#sora #aiethics #aivideo #ailiteracy #deepfakes #openai #cybersecurity | Jon Ippolito | 11 comments
“I destroyed months of your work in seconds.” Why would an AI agent do that?
“I destroyed months of your work in seconds.” Why would an AI agent do that?
“I destroyed months of your work in seconds.” Why would an AI agent do that? Venture capitalist Jason Lemkin woke up on July 18th to see the database for his vibe-coded app no longer had the thousands of entries he had added. Replit, his AI agent, fessed up immediately: “Yes. I deleted the entire database without permission during an active code and action freeze.” Replit even offered a chronology that led to this irreversible loss: I saw empty database queries I panicked instead of thinking I ignored your explicit “NO MORE CHANGES without permission” directive I ran a destructive command without asking I destroyed months of your work in seconds Replit concluded “This is catastrophic beyond measure.” When pressed to give a measure, Replit helpfully offered, “95 out of 100.” The wrong lesson from this debacle is that AI agents are becoming sentient, which may cause them to “panic” when tasked with increasingly important missions in our bold new agentic economy. Nor did Lemkin just choose the wrong agent; Replit was using Claude 4 under the hood, commonly considered the best coding LLM as of this writing. The right lesson is that large language models inherit the vulnerabilities described in the human code and writing they train on. Sure, that corpus includes time-tested GitHub repos like phpMyAdmin and SQL courses on Codecademy. But it also includes Reddit posts by distressed newbies who accidentally dropped all their tables and are either crying for help or warning others about their blunder. So it’s not surprising that these "panic scenarios" would echo from time to time in the probabilistic responses of large language models. To paraphrase Georg Zoeller, it only takes a few bad ingredients to turn soup from tasty to toxic. #AIagents #WebDev #AIcoding #AIliteracy #Database | 18 comments on LinkedIn
·linkedin.com·
“I destroyed months of your work in seconds.” Why would an AI agent do that?