AI

1236 bookmarks
AI Ethics Learning Toolkit - Duke Learning Innovation & Lifetime Education | Remi Kalir, PhD
Today, Hannah Rozear and I will share the AI Ethics Learning Toolkit with faculty at Duke University Nicholas School of the Environment. Our workshop will center the question "Is AI Sustainable?" and we'll explore a section of the toolkit with conversation starters, learning activities, and resources related to AI, sustainability, and environmental impacts. As we note in this section of the AI Ethics Learning Toolkit, "Instructors may encourage students to be mindful of the environmental impact of AI as they explore its applications and reflect on the balance between convenience and sustainability."

One way to encourage student reflection is through conversation and, today, faculty in our workshop will engage with questions we suggest students should discuss:
🌱 In what ways do you think AI technologies impact the environment, both positively and negatively?
🗣️ Who should be responsible for making AI environmentally sustainable? Why?
🪴 Can AI be made more eco-friendly? How?
🌏 Have you seen/heard about examples of AI being used to help the environment?

We'll also have our faculty partners review some of the resources and research currently included in the toolkit, such as:
🟠 "The Environmental Impacts of AI -- Primer" by Dr. Sasha Luccioni and colleagues: https://lnkd.in/gcngznNC
🔴 The Scientific American article "What Do Google’s AI Answers Cost the Environment?" by Allison Parshall: https://lnkd.in/gcuU4yww
🟡 The open library "Against AI and Its Environmental Harms" curated by Charles Logan: https://lnkd.in/gYjwMjT5
🟢 The "Cartography of generative AI" map created by the Estampa collective: https://lnkd.in/gQpNpvRU

And we recently updated this section of our toolkit to include:
🔵 Hugging Face's EcoLogits Calculator, "a python library that tracks the energy consumption and environmental footprint of using generative AI models through APIs," available at: https://lnkd.in/gFFk_gtW (see the usage sketch after this entry)
🟣 The recent technical paper "Measuring the environmental impact of delivering AI at Google Scale" by Cooper Elsworth and colleagues: https://lnkd.in/gUdmWuZM

Since the semester began, Hannah and I have been sharing the AI Ethics Learning Toolkit with various departments, groups of faculty, and other constituencies at Duke, and we're very keen to connect with both our Duke colleagues and other academic communities. If you'd like to get involved or contact us, please visit: https://lnkd.in/gN4rEBeR

Finally, a reminder that Duke's AI Ethics Learning Toolkit is publicly available here (link also in comments): https://lnkd.in/gkc4ansf

Duke Learning Innovation & Lifetime Education | Duke University Libraries | Duke Climate Commitment
#Sustainability #Environment #AIeducation #AI #HigherEd
·linkedin.com·
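The EcoLogits library mentioned in this toolkit update instruments provider clients so that each API response carries an environmental-impact estimate. A minimal sketch of typical usage is below, assuming an OpenAI API key is configured; the attribute names (`impacts.energy`, `impacts.gwp`) follow the library's documented response pattern but may differ across versions.

```python
# Minimal sketch: estimating the footprint of one chat completion with EcoLogits.
# Assumes `pip install ecologits openai` and OPENAI_API_KEY in the environment.
# Attribute names below follow EcoLogits' documented schema and may vary by version.
from ecologits import EcoLogits
from openai import OpenAI

EcoLogits.init()  # patch supported provider clients so responses carry impact estimates

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Is AI sustainable?"}],
)

# Each patched response exposes an `impacts` object with energy and emissions estimates.
print("Energy:", response.impacts.energy.value, response.impacts.energy.unit)
print("GWP:   ", response.impacts.gwp.value, response.impacts.gwp.unit)
```

This is the same per-request accounting the EcoLogits Calculator surfaces interactively, here wired into code so students can compare the footprint of different models or prompt lengths side by side.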
Alexander McCord (@alexmccord@mastodon.gamedev.place)
One of the earliest signs that AI chatbots were going to be horrible for mental health was actually from my lived experience of using speech-to-text transcription. Context: I am deaf, and I was born that way; I grew up with hearing aids and got my cochlear implant when I turned 15. The cochlear implant, by itself, improved my speech comprehension accuracy from 25% with dual hearing aids to 75%. During the pandemic, my speech comprehension skills worsened due to fundamentally lossy audio quality over Zoom.
·mastodon.gamedev.place·
How Afraid of the A.I. Apocalypse Should We Be? — The Ezra Klein Show
How Afraid of the A.I. Apocalypse Should We Be? Eliezer Yudkowsky is as afraid as you could possibly be. He makes his case. Yudkowsky is a pioneer of A.I. safety research who started warning about the existential risks of the technology decades ago, influencing a lot of leading figures in the field. But over the last couple of years, talk of an A.I. apocalypse has become a little passé. Many of the people Yudkowsky influenced have gone on to work for A.I. companies, and those companies are racing ahead to build the superintelligent systems Yudkowsky thought humans should never create. But Yudkowsky is still out there sounding the alarm. He has a new book out, co-written with Nate Soares, "If Anyone Builds It, Everyone Dies," trying to warn the world before it's too late. So what does Yudkowsky see that most of us don't? What makes him so certain? And why does he think he hasn't been able to persuade more people? Mentioned: "Oversight of A.I.: Rules for Artificial Intelligence"; "If Anyone Builds It, Everyone…"
·overcast.fm·
AI: What Could Go Wrong? with Geoffrey Hinton — The Weekly Show with Jon Stewart
As artificial intelligence advances at unprecedented speed, Jon is joined by Geoffrey Hinton, Professor Emeritus at the University of Toronto and the "Godfather of AI," to understand what we've actually created. Together, they explore how neural networks and AI systems function, assess the current capabilities of the technology, and examine Hinton's concerns about where AI is headed.
·overcast.fm·
AI-text-detectors can be evaded using simple tricks in academic writing, as repeatedly shown by folks like Dr Mike Perkins and Dr Mark A. Bassett
AI-text-detectors can be evaded using simple tricks in academic writing, as repeatedly shown by folks like Dr Mike Perkins and Dr Mark A. Bassett. Advice for how to do this is abundant on YouTube, in videos aimed at students, viewed millions of times. Some videos are about how to ‘cheat’, but others have more positive titles like ‘how to study with AI’.

Is there any point trying to stop students using AI to write essays? Or even any value to using asynchronous written essays as summative assessments?

New paper from the great Tomas Foltynek and some bloke called Phil Newton: https://rdcu.be/eKCko
·linkedin.com·
#sora #aiethics #aivideo #ailiteracy #deepfakes #openai #cybersecurity | Jon Ippolito
The surge in watermark removers within days of Sora 2’s release reminds us that most AI detection is just security theater at this point. Detection advocates will counter that sure, visible marks like the little Sora “cloud” can be cropped or Photoshopped, but embedded watermarks like Google’s SynthID are harder to rub out. Unfortunately even steganographic watermarks can be scrubbed by screenshotting, model-to-model laundering, or just serious editing. An imbalance of incentives means detectors are unlikely to win an arms race in which counterfeiters are more motivated to subvert watermarks than AI companies are to enforce them.

I don’t think the solution is to add watermarks to show what’s fake, but to add digital signatures to show what’s real. The technology for this is decades old; it’s why all the trustworthy web sites you’ll visit today show a little lock icon 🔒 in the location bar. In the post-Sora age, you shouldn’t assume media is real unless it’s signed by a trusted source. If we can do it for https, we can do it for AI.

I’ll link to “Sora 2 Watermark Removers Flood the Web” by Matthew Gault of 404 Media in a comment. The before-and-after image is the thumbnail from Fayyaz Ahmed’s “Remove Sora 2 Watermark For Free” YouTube video.

#Sora #AIethics #AIvideo #AIliteracy #DeepFakes #OpenAI #Cybersecurity
·linkedin.com·
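Ippolito's post argues for signing authentic media rather than trying to watermark synthetic media. A minimal sketch of that idea is below, using Ed25519 signatures from Python's `cryptography` package; signing a file's raw bytes with a detached signature is an illustrative simplification, not the embedded, standards-based provenance metadata (e.g. C2PA) a real publisher would use.

```python
# Minimal sketch: a publisher signs a media file so anyone holding the
# publisher's public key can verify it has not been altered.
# Assumes `pip install cryptography`; real provenance systems embed signed
# metadata in the file rather than using a detached signature like this.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: generate a keypair once, sign each published file.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

media_bytes = b"...raw bytes of the published video or image..."
signature = private_key.sign(media_bytes)

# Viewer side: verify the file against the publisher's public key.
try:
    public_key.verify(signature, media_bytes)
    print("Signature valid: file matches what the publisher signed.")
except InvalidSignature:
    print("Signature invalid: file was altered or not signed by this publisher.")
```

Changing any single byte of the media invalidates the signature, which is the point of the lock-icon analogy in the post: trust attaches to the signer, not to a mark embedded in the content that a counterfeiter can scrub out.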