Critical Thinking with AI Mode #44: Florida coastline
A prediction about Florida's coastline is wrongly portrayed by people who engage in climate change denial -- but AI Mode does mess up the summary in a way that r...
The surge in watermark removers within days of Sora 2’s release reminds us that most AI detection is just security theater at this point.
Detection advocates will counter that, sure, visible marks like the little Sora "cloud" can be cropped or Photoshopped out, but embedded watermarks like Google's SynthID are harder to rub out. Unfortunately, even steganographic watermarks can be scrubbed by screenshotting, model-to-model laundering, or just heavy editing.
An imbalance of incentives means detectors are unlikely to win an arms race in which counterfeiters are more motivated to subvert watermarks than AI companies are to enforce them.
I don’t think the solution is to add watermarks to show what’s fake, but to add digital signatures to show what’s real. The technology for this is decades old; it’s why all the trustworthy web sites you’ll visit today show a little lock icon 🔒 in the location bar.
In the post-Sora age, you shouldn’t assume media is real unless it’s signed by a trusted source. If we can do it for https, we can do it for AI.
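To make the sign-what's-real idea concrete, here is a minimal, illustrative sketch of the sign/verify workflow using a Lamport one-time signature built only from hashing (Python standard library). This is a teaching toy, not a production scheme -- real provenance systems such as C2PA use standard algorithms like Ed25519 or ECDSA -- but the flow is the same: a trusted publisher signs the media bytes with a private key, and anyone can check the signature against the publisher's public key. Any edit to the media breaks the signature.

```python
import hashlib
import secrets

def keygen():
    # Private key: 256 pairs of random 32-byte secrets,
    # one pair per bit of the SHA-256 digest we will sign.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    # Public key: the hashes of those secrets (safe to publish).
    pk = [(hashlib.sha256(a).digest(), hashlib.sha256(b).digest()) for a, b in sk]
    return sk, pk

def _digest_bits(media: bytes):
    d = hashlib.sha256(media).digest()
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(media: bytes, sk):
    # Reveal one secret from each pair, chosen by the digest bit.
    return [sk[i][bit] for i, bit in enumerate(_digest_bits(media))]

def verify(media: bytes, sig, pk) -> bool:
    # Re-hash each revealed secret and compare to the public key.
    return all(hashlib.sha256(sig[i]).digest() == pk[i][bit]
               for i, bit in enumerate(_digest_bits(media)))

sk, pk = keygen()
video = b"original newsroom footage"      # stand-in for real media bytes
sig = sign(video, sk)
print(verify(video, sig, pk))             # authentic media verifies
print(verify(video + b"!", sig, pk))      # any tampering fails verification
```

Note that a Lamport key must only be used once; the point here is just that verification requires no watermark in the media itself, only a signature distributed alongside it -- the same trust model the 🔒 lock icon represents for https.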
I’ll link to “Sora 2 Watermark Removers Flood the Web” by Matthew Gault of 404 Media in a comment. The before-and-after image is the thumbnail from Fayyaz Ahmed’s “Remove Sora 2 Watermark For Free” YouTube video.
#Sora #AIethics #AIvideo #AIliteracy #DeepFakes #OpenAI #Cybersecurity | 11 comments on LinkedIn
How Stanford Teaches AI-Powered Creativity in Just 13 Minutes | Jeremy Utley
Stanford's Jeremy Utley observes that "most people are not fully utilizing AI's potential." Why is that? He explains that the answer lies in how we approach AI. He sa...
Artificial intelligence is rapidly transforming every facet of education for learners, educators, and leaders alike. In this time of great change, how ...