Who Benefits and Who is Excluded? | Journal of Transformative Learning
In our essay, we discuss the equity implications surrounding the use of generative artificial intelligence (AI) in higher education. Specifically, we explore how the use of such technologies by students in higher education, including but not limited to multi-language learners, students from marginalized linguistic communities, students with disabilities, and low-income students, has the potential to facilitate transformative learning. We describe how such tools, when accessible to learners, can help address barriers that prevent students from fully engaging in their learning. Additionally, we explain how the use of generative AI can alter the lens through which students view their learning, countering assumptions and broadening what counts as an “appropriate” use of assistive technologies to support learning for diverse students. We also address limitations of generative AI with regard to equity, such as the cost of access to some applications, as well as linguistic and other biases in the outputs produced, reflective of the data used to train the tools. Throughout this piece, we share insights from a study of undergraduate students’ perspectives on and use of generative AI, along with potential future directions for the technologies. This essay aims to increase awareness of the opportunities and challenges around who benefits and who is excluded when generative AI is used within colleges and universities.
SNEAK PREVIEW: A Blueprint for an AI Bill of Rights for Education
Kathryn Conrad. Critical AI 2.1 is a special issue, co-edited by Lauren M.E. Goodlad and Matthew Stone, collecting interdisciplinary essays and think pieces on a wide range of topics involving AI.
Elon Musk vs. OpenAI, OpenAI’s Response, OpenAI’s Foundational Problem
Elon Musk is suing OpenAI; he probably won’t win, because there wasn’t a contract to be breached, but the lawsuit does highlight how far OpenAI is from the non-profit it started as.
The AI Influencers Selling Students Learning Shortcuts
It is worth spending some time on social media to see the relentless bombardment students face from influencers peddling AI tools that straddle the line between aiding study and blatantly enabling cheating. These influencers thrive on contradiction and promise the moon: complete your homework in five minutes flat, forget about ever attending another lecture, let AI take up the pen for you. In each pitch, the essence of learning is overshadowed by a pervasive call to save time. Welcome to the dizzying world of AI influencer culture, where the pursuit of profit drives companies to use influencers as direct conduits to push their products onto students.
To Best Serve Students, Schools Shouldn’t Try to Block Generative AI, or Use Faulty AI Detection Tools
Generative AI gained widespread attention earlier this year, but one group has had to reckon with it more quickly than most: educators. Teachers and school administrators have struggled with two big questions: should the use of generative AI be banned? And should a school implement new tools to detect when students have used generative AI? EFF believes the answer to both of these questions is no.
AI Is Dangerous, but Not for the Reasons You Think | Sasha Luccioni | TED
AI won’t kill us all — but that doesn’t make it trustworthy. Instead of getting distracted by future existential risks, AI ethics researcher Sasha Luccioni argues we should focus on the technology’s current, tangible harms.
A ‘Godfather of AI’ Calls for an Organization to Defend Humanity | WIRED
Yoshua Bengio’s pioneering research helped bring about ChatGPT and the current AI boom. Now he’s worried AI could harm civilization, and says the future needs a humanity defense organization.
Academic Integrity and Artificial Intelligence through the lens of Equity, Diversity and Inclusion
Academic Integrity and Artificial Intelligence through the lens of Equity, Diversity and Inclusion, with Sarah Elaine Eaton. #MyFest23 #equityUnbound. Conference webpage: https://myfest.equityunbound.org
Embracing the Transformative Influence of Generative AI - EdSurge News
As educators, we know the potential that artificial intelligence (AI) has for our profession. Generative AI, a subset of AI that can generate new and ...
A.I. Could Solve Some of Humanity’s Hardest Problems. It Already Has. — The Ezra Klein Show
Since the release of ChatGPT, huge amounts of attention and funding have been directed toward chatbots. These A.I. systems are trained on copious amounts of human-generated data and designed to predict the next word in a given sentence. They are hilarious and eerie and at times dangerous. But what if, instead of building A.I. systems that mimic humans, we built those systems to solve some of the most vexing problems facing humanity? In 2020, Google DeepMind unveiled AlphaFold, an A.I. system that uses deep learning to solve one of the most important challenges in all of biology: the so-called protein-folding problem. The ability to predict the shape of proteins is essential for addressing numerous scientific challenges, from vaccine and drug development to curing genetic diseases. But in the 50-plus years since the protein-folding problem was first posed, scientists had made frustratingly little progress. Enter AlphaFold. By 2022, the system had identified 200 million protein shapes, nearly all the proteins known to science.
World, meet Alex, Bill, and Mophat, three workers whose labor was essential to filtering violence and abuse out of ChatGPT. For the first time they’re ready to tell you who they are, and how the work unraveled their lives and their families. https://t.co/QXqCRAcOZX — Karen Hao 郝珂灵 (@_KarenHao), July 11, 2023