OpenAI’s Sora Is Plagued by Sexist, Racist, and Ableist Biases
WIRED tested the popular AI video generator from OpenAI and found that it amplifies sexist stereotypes and ableist tropes, perpetuating the same biases already present in AI image tools.
OpenAI Has Improved Its Image Gen - But Do We Need More "Offensive" AI?
OpenAI's recent updates to DALL-E 3 aim to enhance its image-generation capabilities, but the focus on relaxing restrictions on "offensive" content raises concerns. Integrating more violent and explicit imagery could fuel misinformation, and it does little to advance the technology itself.
Right now, an entire generation of young users is coming of age with generative technology. How do you think they’re going to view this technology when they’re adults if their main interaction with GenAI was as a cheating tool in the classroom, an NSFW bot they used to bully one another, or a way to generate pornography?
What’s really going on with campus-wide AI adoption is a mix of virtue signaling and panic purchasing. Universities aren’t paying for AI—they’re paying for the illusion of control. Institutions are buying into the idea that if they adopt AI at scale, they can manage how students use it, integrate it seamlessly into teaching and learning, and somehow future-proof education. But the reality is much messier.
I’m facilitating AI roundtables at my department’s Symposium and I’ve invited faculty members from across disciplines to share what they’re doing in their classes. I’v…
h/t Audrey Watters, who writes: Marc Watkins writes about "AI's Illusion of Reason," cautioning that "when we [describe] AI systems in humanizing terms, we create false expectations about their capabilities and their limitations." He uses the eighteenth-century Mechanical Turk as an analogy here – "an automated marvel" that appeared to play chess but was, in the end, a hoax. But there’s a problem with this historical reference, I would argue, when the imperialism and the "exoticized alterity" of this automaton – then and now – go unexamined.
Big AI companies have come out hard against comprehensive regulatory efforts in the West — but are receiving a warm welcome from leaders in many other countries.
I had the privilege of moderating a discussion between Josh Eyler and Robert Cummings about the future of AI in education at the University of Mississippi’s recent AI Winter Institute for Teachers.
My students this term have been *passionate* in their rejection of generative AI. I think the wildfires have something to do with it – the environmental impact of data centers, the scarcity of water, the terrible losses they’re seeing... it makes the ‘abstract’ of the climate crisis very real.
Artificial intelligence and promises about the tech are everywhere these days. But excitement about genuine advances can easily veer into hype, according to Arvind Narayanan, a computer science professor at Princeton who, along with PhD candidate Sayash Kapoor, wrote the book “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.” He says even the term AI doesn’t always mean what you think.
Beyond Algos Part 2: A Problem of Trust, Not Just Literacy — Civics of Technology
Civics of Tech Announcements: Upcoming Tech Talk on Jan 7. Join us for our monthly tech talk on Tuesday, January 7, from 8:00-9:00 PM EST (GMT-5). Come join an informal conversation about events, issues, articles, problems, and whatever else is on your mind. Use this link to register.
How police are experimenting with AI — Marketplace Tech
The push to integrate artificial intelligence — like large language models — in the workplace is hitting almost every industry these days. And that includes policing. Reporter James O’Donnell with MIT Technology Review got an inside look at the ways in which many departments are experimenting with the new technology when he visited the annual International Association of Chiefs of Police conference back in October. O’Donnell attended to see how artificial intelligence was being discussed. He said police are using or thinking about AI in a wide range of applications. Marketplace’s Meghan McCarty Carino spoke with O’Donnell to learn more about those use cases.
AI is not the GOAT. (Uh oh, your professor is attempting stand up comedy.)
I took an 8 week standup comedy class with Superior Improv Co. in Colorado, and here's a recording of my debut. (Also shout-out to comedian Christie Buchele who was a great teacher!)
Literally first time on stage so if you don't have anything nice to say... go comment on one of my TikToks instead. And I really am a college professor who teaches (among other things) AI ethics! I also really did train a sheep at Zoo Atlanta for my behavioral psychology lab in 2003.
If you are interested in actually *learning* about AI ethics, might I recommend my TikTok or Instagram @ professorcasey.
And to the 12 year olds on YouTube: I'm going to tell your mom you said that.
Artificial Intelligence - Center for the Advancement of Teaching
Recent developments in the field of artificial intelligence (AI) raise a number of important questions for educators. In line with our mission, the CAT aims to advance critically-reflective, evidence-informed, and human-centered answers to these questions. This page serves as a central hub of resources [...]
Warning from AI is stark: we have two years to save learning
If we thought smartphones were damaging to young brains, the risks posed by rampant artificial intelligence are far greater. Unchecked, it will rob children of the ability to solve problems and think for themselves