Ethics and Bias

School districts from Utah to Ohio to Alabama are spending thousands of dollars on these tools, despite research showing the technology is far from reliable.
·npr.org·
OSF Preprints | Pedagogical Biases in AI-Powered Educational Tools: The Case of Lesson Plan Generators
This paper examines pedagogical biases in AI-powered educational tools, focusing specifically on lesson plan generators. We investigate how these tools may implicitly embed outdated educational approaches that limit student agency and classroom dialogue. Through analysis of 90 lesson plans from commercial lesson plan generators, we found that AI-generated content predominantly promotes teacher-centered classrooms with limited opportunities for student choice, goal-setting, and meaningful dialogue. To mitigate this issue, we further experimented with intentional prompt engineering, which showed promise in significantly enhancing these dimensions in AI-generated lesson plans. We offer practical strategies for educators and developers to mitigate harmful pedagogical biases while promoting contemporary educational values. This work contributes to the critical conversation about how AI tools should be designed and used to support, rather than undermine, the future of education that values student agency and productive classroom dialogue.
·osf.io·
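The mitigation the abstract points to, intentional prompt engineering, is easy to picture in code. Below is a minimal sketch, assuming the OpenAI Python client and an illustrative model name; the prompts are our own illustration, not the paper's actual materials. It contrasts a naive lesson-plan request with one that explicitly names the dimensions the authors found lacking (student choice, goal-setting, dialogue):

```python
# Minimal sketch (not the paper's actual prompts or tooling): contrast a
# naive lesson-plan request with an intentionally engineered prompt that
# names the student-agency dimensions it wants. Assumes the OpenAI
# Python client; any chat-capable LLM endpoint would work the same way.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

NAIVE_PROMPT = "Write a 45-minute lesson plan on photosynthesis for grade 8."

ENGINEERED_PROMPT = (
    "Write a 45-minute lesson plan on photosynthesis for grade 8. "
    "Design it to be student-centered: include at least one point where "
    "students choose between activities, one student goal-setting step, "
    "and one open-ended discussion with talk moves that invite students "
    "to respond to each other rather than only to the teacher."
)

def generate_lesson_plan(prompt: str) -> str:
    """Request a lesson plan from the model for the given prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Comparing the two outputs side by side makes the model's
    # pedagogical defaults visible: the naive prompt tends to yield
    # teacher-led sequences, while the engineered one asks for the
    # dimensions (choice, goal-setting, dialogue) the study measured.
    print(generate_lesson_plan(NAIVE_PROMPT))
    print("-" * 70)
    print(generate_lesson_plan(ENGINEERED_PROMPT))
```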
Zeroing in on the origins of bias in large language models
When artificial intelligence models pore over hundreds of gigabytes of training data to learn the nuances of language, they also imbibe the biases woven into the texts.
·techxplore.com·
GPT detectors are biased against non-native English writers
The rapid adoption of generative language models has brought about substantial advancements in digital communication, while simultaneously raising concerns regarding the potential misuse of AI-generated content. Although numerous detection methods have been proposed to differentiate between AI and human-generated content, the fairness and robustness of these detectors remain underexplored. In this study, we evaluate the performance of several widely-used GPT detectors using writing samples from native and non-native English writers. Our findings reveal that these detectors consistently misclassify non-native English writing samples as AI-generated, whereas native writing samples are accurately identified. Furthermore, we demonstrate that simple prompting strategies can not only mitigate this bias but also effectively bypass GPT detectors, suggesting that GPT detectors may unintentionally penalize writers with constrained linguistic expressions. Our results call for a broader conversation about the ethical implications of deploying ChatGPT content detectors and caution against their use in evaluative or educational settings, particularly when they may inadvertently penalize or exclude non-native English speakers from the global discourse. The published version of this study can be accessed at: www.cell.com/patterns/fulltext/S2666-3899(23)00130-7
·arxiv.org·
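The evaluation design here reduces to a per-group false-positive rate on human-written text. A minimal sketch of that check follows, with a toy lexical-variety scorer standing in for the commercial detectors the study actually tested; real detectors expose their own APIs, and the stub below is only a crude proxy for the heuristics the abstract suggests penalize constrained linguistic expression:

```python
# Minimal sketch of the fairness check the study describes: run
# human-written essays from native and non-native English writers
# through a detector and compare false-positive rates per group.
# `toy_detector` is a stand-in, not any real product's API.
from typing import Callable

def false_positive_rate(
    essays: list[str],
    detect: Callable[[str], float],
    threshold: float = 0.5,
) -> float:
    """Fraction of human-written essays flagged as AI-generated."""
    flagged = sum(1 for text in essays if detect(text) >= threshold)
    return flagged / len(essays)

def toy_detector(text: str) -> float:
    """Stand-in detector: scores low lexical variety as 'AI-like',
    a crude proxy for the heuristics the paper suggests penalize
    constrained linguistic expression."""
    words = text.lower().split()
    variety = len(set(words)) / max(len(words), 1)
    return 1.0 - variety

if __name__ == "__main__":
    # Both corpora are human-written, so every positive is an error.
    # The study's finding corresponds to non-native FPR >> native FPR.
    native_essays = [
        "The results were striking, and the committee debated them at length."
    ]
    non_native_essays = [
        "The study is very important. The study is very useful. The study is very good."
    ]
    print("native FPR:    ", false_positive_rate(native_essays, toy_detector))
    print("non-native FPR:", false_positive_rate(non_native_essays, toy_detector))
```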
Privacy and responsible AI
How can AI/ML systems be used in a responsible and ethical way that deserves the trust of users and society?
·iapp.org·