Students’ ability to outsource critical thinking to LLMs has left schools and universities scrambling to find ways to prevent plagiarism and cheating. Five semesters after ChatGPT changed education, Inside Higher Ed wrote in June, university professors are considering bringing back tests written longhand. Sales of “blue books”—those anxiety-inducing notebooks used for college exams—are ticking up, according to a report in The Wall Street Journal. Handwriting, in person, may soon become one of the few things a student can do to prove they’re not a bot.
Although AI systems promise significant educational benefits, they have so far frequently failed to deliver on those promises in educational settings and have created new challenges for educators and administrators. These failures—whether caused by bias, ineffectiveness, lack of fitness for purpose, unintended consequences, privacy violations, or other problems—mean that such tools often do more harm than good: damaging school communities, sapping resources, creating reputational harm, and placing students at risk. Rather than rushing to adopt AI, schools should proceed carefully, treating AI promises with skepticism and building structures to avoid AI failures where they can and to handle them effectively where they cannot. This brief discusses some of the common ways schools are using AI, how these AI systems can fail, and the impacts of those failures. It also provides best practices for reducing the chance of an AI failure, preparing for the possibility of failure, and determining how to respond if a failure does occur.
“Through the ‘Five AI Buckets’ classroom discussions, I gained a deeper knowledge of how AI reshapes various aspects of our daily lives,” a College of Business student said in a survey. “The lessons highlighted AI's incredible capabilities, especially in areas like problem-solving, information retrieval, ideation, summarization, and its potential for social good. These classroom discussions also made me aware of the ethical challenges that arise from the general use of AI, such as biases in algorithms and data privacy concerns.”
The Five AI Buckets include:
Information Retrieval – Using AI tools to collect and assess research, evaluate sources, and verify credibility.
Ideation and Creative Inquiry – Generating ideas aligned with global challenges through guided AI prompts.
Problem Solving – Engaging with public datasets to make data-informed decisions on real-world issues.
Summarization – Analyzing and condensing academic research using AI to identify key insights.
AI for Good – Creating personal impact plans and reflecting on how AI can support social progress.