#video #ailiteracies #jonippolito #openai
#sora #aiethics #aivideo #ailiteracy #deepfakes #openai #cybersecurity | Jon Ippolito | 11 comments
The surge in watermark removers within days of Sora 2’s release reminds us that most AI detection is just security theater at this point.

Detection advocates will counter that, sure, visible marks like the little Sora “cloud” can be cropped or Photoshopped out, but embedded watermarks like Google’s SynthID are harder to rub out. Unfortunately, even steganographic watermarks can be scrubbed by screenshotting, by laundering the output through another model, or just by serious editing. An imbalance of incentives means detectors are unlikely to win an arms race in which counterfeiters are more motivated to subvert watermarks than AI companies are to enforce them.

I don’t think the solution is to add watermarks to show what’s fake, but to add digital signatures to show what’s real. The technology for this is decades old; it’s why all the trustworthy websites you’ll visit today show a little lock icon 🔒 in the location bar. In the post-Sora age, you shouldn’t assume media is real unless it’s signed by a trusted source. If we can do it for https, we can do it for AI.

I’ll link to “Sora 2 Watermark Removers Flood the Web” by Matthew Gault of 404 Media in a comment. The before-and-after image is the thumbnail from Fayyaz Ahmed’s “Remove Sora 2 Watermark For Free” YouTube video.

#Sora #AIethics #AIvideo #AIliteracy #DeepFakes #OpenAI #Cybersecurity | 11 comments on LinkedIn