EduAI
The tricky part is that AI changes weekly. How can we be concrete about something so fluid?
Here’s how I’ve started to think about it: Be flexible about tools, but concrete about values.
Students don’t need us to predict the future of AI. They need us to articulate the principles that guide our choices. Those might include:
Transparency: Always disclose when AI is used.
Integrity: Use AI to assist thinking, not replace it.
Learning: Choose methods that strengthen your own skills.
When students internalize these values, they can adapt them to whatever new tool emerges next semester: Claude, Gemini, Perplexity, or something we haven’t heard of yet.
A good AI policy, like a good syllabus, isn’t a list of prohibitions. It’s a shared framework for reasoning through change.
We help classroom and environmental educators ethically use AI to create human-centered learning experiences.
Pew Research Center’s survey of adults in 25 countries shows concern outweighs enthusiasm toward AI’s growing presence in daily life. A median 34% are more concerned than excited, while only 16% are more excited than concerned. Awareness is broad but uneven, with 34% hearing a lot about AI and 47% hearing a little, heavily skewed toward higher-income countries. For regulation, 53% trust the EU, 37% trust the U.S., 27% trust China, and confidence in national governments ranges from 89% in India to 22% in Greece. Younger adults, men, the highly educated and heavy internet users report higher awareness and greater excitement than older, less-connected groups. Political alignment also matters: U.S. Republicans and Europe’s right-leaning voters show more faith in the U.S., while younger respondents in 19 nations place greater trust in China as an AI regulator.
Alpha School promises kids can learn twice as fast with just two hours of daily academics powered by AI, but experts say the evidence is thin, the benefits uneven, and equity concerns loom.
More Insights:
AI mainly personalizes pacing and assignments; it’s guide-led, not chatbot-taught.
Model echoes older self-directed approaches (e.g., Montessori) and long-used tools (IXL, Khan, Duolingo, Math Academy).
Claims of top 1–2% scores and 90% satisfaction face selection-bias questions given affluent demographics and sky-high SF tuition.
Researchers urge rigorous trials and warn about hallucinations, bias, and risks for less-motivated or younger learners.
Public districts are cautiously integrating AI literacy and pilots, signaling inevitability—but not a one-size-fits-all solution.
Teachers can lessen the allure of taking shortcuts by solving for these conditions: figuring out, for instance, how to intrinsically motivate students by helping them connect with the material for its own sake. They can also help students see how an assignment will help them succeed in a future career, and they can design courses that prioritize deeper learning and competence.

To alleviate testing pressure, teachers can make assignments lower-stakes and break them into smaller pieces. They can also give students more opportunities in the classroom to practice the skills and review the knowledge being tested. And teachers should talk openly about academic honesty and the ethics of cheating. “I’ve found in my own teaching that if you approach your assignments in that way, then you don’t always have to be the police,” he said. Students are “more incentivized, just by the system, to not cheat.”

With writing, teachers can ask students to submit smaller “checkpoint” assignments, such as outlines, handwritten notes, and drafts that classmates can review and comment on. They can also rely more on oral exams and handwritten blue book assignments.
The share of teens who say they use ChatGPT for their schoolwork has risen to 26%, according to a Pew Research Center survey of U.S. teens ages 13 to 17. That’s up from 13% in 2023. Still, most teens (73%) have not used the chatbot in this way.

Teens’ use of ChatGPT for schoolwork increased across demographic groups. Black and Hispanic teens (31% each) are more likely than White teens (22%) to say they have used ChatGPT for their schoolwork. In 2023, similar shares of White (11%), Black (13%) and Hispanic teens (11%) said they used the chatbot for schoolwork.

Just over half of teens (54%) say it’s acceptable to use ChatGPT to research new topics; only 9% say it is not. Far fewer support using the chatbot to do math or write essays: 29% of teens say it’s acceptable to use ChatGPT to solve math problems, while 28% say it’s not. And 18% say it’s acceptable to use ChatGPT to write essays, while 42% say it’s not. Another 15% to 21% of teens are unsure whether it’s acceptable to use ChatGPT for these tasks.
The Stanford researchers concluded that cheating was common before AI — and it remains so. It is the nature of cheating that is evolving.
“This year’s data is showing a decline in copying off a peer and it seems there is more use of AI instead,” said Lee, an associate professor at the Stanford Graduate School of Education.
In these surveys, about 3 in 4 students reported behaviors in the last month that qualify as cheating, figures similar to what was reported prior to AI.