AI tech shows promise at writing emails or summarizing meetings. Don't bother with anything more complex.
A UK government department's three-month trial of Microsoft's M365 Copilot has revealed no discernible gain in productivity: the tool sped up some tasks yet made others slower because of lower-quality outputs.
AI has transformed my experience of education. I am a senior at a public high school in New York, and these tools are everywhere. I do not want to use them in the way I see other kids my age using them—I generally choose not to—but they are inescapable.
During a lesson on the Narrative of the Life of Frederick Douglass, I watched a classmate discreetly shift in their seat, prop their laptop up on a crossed leg, and highlight the entirety of the chapter under discussion. In seconds, they had pulled up ChatGPT and dropped the text into the prompt box, which spat out an AI-generated annotation of the chapter. These annotations are used for discussions; we turn them in to our teacher at the end of class, and many of them are graded as part of our class participation. What was meant to be a reflective, thought-provoking discussion on slavery and human resilience was flattened into copy-paste commentary.

In Algebra II, after homework worksheets were passed around, I witnessed a peer use their phone to take a quick snapshot, which they then uploaded to ChatGPT. The AI quickly painted my classmate’s screen with what it asserted to be a step-by-step solution and relevant graphs.
Google’s AI note-taking and research assistant NotebookLM now lets users customize the format of their Audio Overviews, the podcast-style summaries in which AI virtual hosts discuss documents shared with NotebookLM, such as course readings or legal briefs. When generating an Audio Overview, users can now choose whether their AI podcast is formatted as a “Deep Dive,” “Brief,” “Critique,” or “Debate.”
Oxford University tells students they may use generative AI tools such as ChatGPT, Claude, Bing Chat, and Google Bard to support their studies. The university states that these tools cannot replace critical thinking or the development of evidence-based arguments. The guidance instructs students to verify AI outputs for accuracy and treat them as one resource among many. It also says departments and colleges can impose additional rules on specific assignments, and students must follow directions from tutors and supervisors. The document frames AI as a supplemental aid that is acceptable only with continuous human appraisal.