AI-GenAI
Hundreds of thousands of user conversations with Elon Musk's artificial intelligence (AI) chatbot Grok have been exposed in search engine results - seemingly without users' knowledge. Unique links are created when Grok users press a button to share a transcript of their conversation - but in addition to sharing the chat with the intended recipient, the button also appears to have made the conversations publicly searchable. A Google search on Thursday revealed that the search engine had indexed nearly 300,000 Grok conversations. The exposure has led one expert to describe AI chatbots as a "privacy disaster in progress".
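The mechanics are mundane: a share link is just a public URL, and any public URL a crawler can reach - and is not told to skip - may end up in search results. As a minimal sketch of the standard safeguard (hypothetical code, not Grok's actual implementation; the endpoint, chat store, and IDs are invented for illustration), a share endpoint can mark its pages non-indexable with a robots meta tag and an X-Robots-Tag header:

```python
# Hypothetical share-link server showing the two standard "do not index"
# signals: a <meta name="robots"> tag in the page and an X-Robots-Tag
# response header. Pages served without either (and not blocked by
# robots.txt) are fair game for search engines.
import html
from http.server import BaseHTTPRequestHandler, HTTPServer

SHARED_CHATS = {"abc123": "transcript of a shared conversation..."}  # placeholder store

class ShareHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        chat_id = self.path.rstrip("/").split("/")[-1]
        transcript = SHARED_CHATS.get(chat_id)
        if transcript is None:
            self.send_error(404)
            return
        body = (
            "<html><head>"
            '<meta name="robots" content="noindex, nofollow">'  # page-level crawler directive
            "</head><body><pre>"
            + html.escape(transcript)  # escape user content before embedding in HTML
            + "</pre></body></html>"
        ).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("X-Robots-Tag", "noindex")  # header-level directive, works for non-HTML too
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), ShareHandler).serve_forever()
```

Major search engines honor either directive on its own; serving neither is what leaves shared pages open to indexing.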
Students’ ability to outsource critical thinking to LLMs has left schools and universities scrambling to find ways to prevent plagiarism and cheating. Five semesters after ChatGPT changed education, Inside Higher Ed wrote in June, university professors are considering bringing back tests written longhand. Sales of “blue books”—those anxiety-inducing notebooks used for college exams—are ticking up, according to a report in The Wall Street Journal. Handwriting, in person, may soon become one of the few things a student can do to prove they’re not a bot.
Although AI systems offer the potential for significant educational benefits, they have so far frequently failed to live up to their promise in educational settings and have created new challenges for educators and administrators. These failures, whether due to bias, ineffectiveness, lack of fitness for purpose, unintended consequences, privacy violations, or other causes, mean that these tools often do more harm than good: damaging school communities, sapping resources, creating reputational harm, and placing students at risk. Rather than rushing to adopt AI, schools should proceed carefully, taking AI promises with a grain of salt and building structures to avoid AI failures where they can and to handle them effectively where they cannot. This brief will discuss some of the common ways schools are using AI, how these AI systems can fail, and the impacts of those failures. It will also provide best practices for reducing the chance of an AI failure, preparing for the possibility of failure, and determining how to respond if a failure does occur.