Found 311 bookmarks
Finally… some news that makes me smile!
Finally… some news that makes me smile! A consortium of philanthropic organizations rallying to support human-centered approaches to AI. https://humanityai.ai/

Yes. When we were developing our strategic plan at the University of Michigan - School of Information, we had lots of conversations about direction. “Human-centered AI” emerged as a strong central theme.

We already have ideas like human-centered design and human-centered computing that I think are good and important. (I got my PhD in HCC!) But they’re relatively small concepts. Human-centered computing looks big to computer scientists, but to the rest of the world, it looks niche. Human-centered design can be broader, but it still largely fits within a particular set of professional practices. Not everyone is a designer.

Human-centered AI includes computing and design and so much more. It’s critical that everyone gets to be in the conversation about AI. Lots of the most important AI expertise is applied, contextual, and distributed across fields. Human-centered AI is an orientation that invites everyone in. Teachers. Lawyers. Doctors. Artists. Athletes. Children. Clergy. Computer scientists. Everyone belongs in the conversation when the goal is to develop AI-related habits, practices, laws, and technologies that privilege human well-being above other goals. Human well-being above efficiency. Human well-being above profit. Above innovation. Human creativity, empathy, and love above all.

A lot of people already get human-centered AI. I would include more links to interesting initiatives, but LinkedIn makes that hard, so I'll just tell you what I'm up to. At UMSI we made HCAI a pillar of our strategic plan. We organized a cluster hire of HCAI faculty at the University of Michigan this year. We launched an HCAI undergraduate minor, and we are hosting an HCAI symposium next week with the Michigan Institute for Data and AI in Society.
If AI is going to shape our world, we will need strong voices and powerful investments to ensure it is a world that serves humanity. It’s wonderful to see so many philanthropies get it too. #HumanityAI
·linkedin.com·
At this point I'm really just confused.
At this point I'm really just confused. Do people not realize how many college courses are being taught fully online, and of those, how many are fully asynchronous?

"While comprehensive restructuring in higher ed will take time, triage must be administered right now. Institutions must apply a tourniquet to stem the hemorrhaging of college credibility. They must prohibit traditional take-home essays until effective, verifiable safeguards are in place. To fail to do so is no longer pedagogically outdated; it’s ethically indefensible."

More than 50% of college students take at least one course online. This trends higher at community colleges (no shocker there), because those students need the most flexibility to complete their degrees while working and caring for families. Online asynchronous courses run on a much higher level of student autonomy and self-motivation, and have for decades.

We don't need clueless generalizations. We do need help. #Faculty are deeply struggling. I have been teaching online for twenty years. This has been the hardest term of my teaching career.

We do need help. We need help in redesigning our courses. We need to be paid to do those redesigns. We need consistent support with obvious and flagrant academic integrity violations that harm students, faculty, and institutions. We need help.

What is also "ethically indefensible" is the utter lack of understanding of how the majority of non-traditional students are able to complete their coursework. I need folks to realize that losing online learning would rip college access away from millions of non-traditional (what I call new-traditional) college students. I agree with this article that higher ed's very foundations are at risk. But we can't save it by sacrificing access. #HigherEd https://lnkd.in/eAG4rudV
·linkedin.com·
What Happens When the AI Bubble Bursts?
When the AI bubble bursts - and it will burst - CEOs will be dethroned. Companies will lose billions. The economy - particularly in the US - will take a catastrophic hit. Data centres will suddenly need to downsize their operations or close down entirely. And in education, we'll need to take yet another long hard look at ourselves and ask: what's next?
·leonfurze.com·
AI Ethics Learning Toolkit - Duke Learning Innovation & Lifetime Education | Remi Kalir, PhD
Today, Hannah Rozear and I will share the AI Ethics Learning Toolkit with faculty at the Duke University Nicholas School of the Environment. Our workshop will center the question "Is AI Sustainable?" and we'll explore a section of the toolkit with conversation starters, learning activities, and resources related to AI, sustainability, and environmental impacts.

As we note in this section of the AI Ethics Learning Toolkit, "Instructors may encourage students to be mindful of the environmental impact of AI as they explore its applications and reflect on the balance between convenience and sustainability." One way to encourage student reflection is through conversation and, today, faculty in our workshop will engage with questions we suggest students should discuss:

🌱 In what ways do you think AI technologies impact the environment, both positively and negatively?
🗣️ Who should be responsible for making AI environmentally sustainable? Why?
🪴 Can AI be made more eco-friendly? How?
🌏 Have you seen/heard about examples of AI being used to help the environment?

We'll also have our faculty partners review some of the resources and research currently included in the toolkit, such as:

🟠 "The Environmental Impacts of AI -- Primer" by Dr. Sasha Luccioni and colleagues: https://lnkd.in/gcngznNC
🔴 The Scientific American article "What Do Google’s AI Answers Cost the Environment?" by Allison Parshall: https://lnkd.in/gcuU4yww
🟡 The open library "Against AI and Its Environmental Harms" curated by Charles Logan: https://lnkd.in/gYjwMjT5
🟢 The "Cartography of generative AI" map created by the Estampa collective: https://lnkd.in/gQpNpvRU

And we recently updated this section of our toolkit to include:

🔵 Hugging Face's EcoLogits Calculator, "a python library that tracks the energy consumption and environmental footprint of using generative AI models through APIs," available at: https://lnkd.in/gFFk_gtW
🟣 The recent technical paper "Measuring the environmental impact of delivering AI at Google Scale" by Cooper Elsworth and colleagues: https://lnkd.in/gUdmWuZM

Since the semester began, Hannah and I have been sharing the AI Ethics Learning Toolkit with various departments, groups of faculty, and other constituencies at Duke, and we're very keen to connect with both our Duke colleagues and other academic communities. If you'd like to get involved or contact us, please visit: https://lnkd.in/gN4rEBeR

Finally, a reminder that Duke's AI Ethics Learning Toolkit is publicly available here (link also in comments): https://lnkd.in/gkc4ansf

Duke Learning Innovation & Lifetime Education | Duke University Libraries | Duke Climate Commitment
#Sustainability #Environment #AIeducation #AI #HigherEd
·linkedin.com·
Alexander McCord (@alexmccord@mastodon.gamedev.place)
One of the earliest signs that AI chatbots were going to be horrible for mental health came from my lived experience of using speech-to-text transcription. Context: I am deaf (I was born that way), and I grew up with hearing aids and got my cochlear implant when I turned 15. The cochlear implant, by itself, improved my speech accuracy to 75%, from 25% with dual hearing aids. During the pandemic, my speech comprehension skills worsened due to fundamentally lossy audio quality over Zoom.
·mastodon.gamedev.place·
How Afraid of the A.I. Apocalypse Should We Be? — The Ezra Klein Show
How Afraid of the A.I. Apocalypse Should We Be? Eliezer Yudkowsky is as afraid as you could possibly be, and he makes his case. Yudkowsky is a pioneer of A.I. safety research who started warning about the existential risks of the technology decades ago, influencing a lot of leading figures in the field. But over the last couple of years, talk of an A.I. apocalypse has become a little passé. Many of the people Yudkowsky influenced have gone on to work for A.I. companies, and those companies are racing ahead to build the superintelligent systems Yudkowsky thought humans should never create. But Yudkowsky is still out there sounding the alarm. He has a new book out, co-written with Nate Soares, “If Anyone Builds It, Everyone Dies,” trying to warn the world before it’s too late. So what does Yudkowsky see that most of us don’t? What makes him so certain? And why does he think he hasn’t been able to persuade more people? Mentioned: Oversight of A.I.: Rules for Artificial Intelligence If Anyone Builds It, Everyone…
·overcast.fm·
AI: What Could Go Wrong? with Geoffrey Hinton — The Weekly Show with Jon Stewart
As artificial intelligence advances at unprecedented speed, Jon is joined by Geoffrey Hinton, Professor Emeritus at the University of Toronto and the “Godfather of AI,” to understand what we’ve actually created. Together, they explore how neural networks and AI systems function, assess the current capabilities of the technology, and examine Hinton’s concerns about where AI is headed. This podcast episode is brought to you by: MINT MOBILE - Make the switch at https://mintmobile.com/TWS GROUND NEWS - Go to https://groundnews.com/stewart to see how any news story is being framed by news outlets around the world and across the political spectrum. Use this link to get 40% off unlimited access with the Vantage Subscription. INDEED - Speed up your hiring with Indeed. Go to https://indeed.com/weekly to get a $75 sponsored job credit. Follow The Weekly Show with Jon Stewart on social media for more: YouTube: https://www.youtube.com/@weeklyshowpodcast Instagram: https://www.instagram.com/weeklyshowpodcast TikTok:…
·overcast.fm·
Large Language Muddle | The Editors
The AI upheaval is unique in its ability to metabolize any number of dread-inducing transformations. The university is becoming more corporate, more politically oppressive, and all but hostile to the humanities? Yes — and every student gets their own personal chatbot. The second coming of the Trump Administration has exposed the civic sclerosis of the US body politic? Time to turn the Social Security Administration over to Grok. Climate apocalypse now feels less like a distant terror than a fact of life? In three years, roughly a tenth of US energy demand will come from data centers alone.
·nplusonemag.com·