News

290 bookmarks
OpenAI for Education | OpenAI
An affordable offering for universities to responsibly bring AI to campus.
·openai.com·
Pedagogy of the Omnimodal
ChatGPT-4o is not the perfect teacher. It's not even mediocre.
·buildcognitiveresonance.substack.com·
Enhancing the future of education with Khan Academy
Get free access to Khanmigo for Teachers for all US educators thanks to a new partnership between Microsoft and Khan Academy. Learn about Khanmigo, an AI-powered teaching assistant.
·educationblog.microsoft.com·
AI is making Meta’s apps basically unusable
If Meta thinks that users are craving more AI, they’re ignoring the fact that the Facebook and Instagram experience is already full of AI.
·fastcompany.com·
Meta’s battle with ChatGPT begins now
Mark Zuckerberg says Meta AI is now “the most intelligent AI assistant” that’s available for free.
·theverge.com·
Improving America’s Schools: Why It’s Long Past Time to Rethink Curriculum
Pondiscio: Curriculum reform is the one approach that hasn’t yet been tried to break out of an exhausting cycle of failures.
While the evidence base is insufficiently robust to say with certainty, there is ample reason to suggest it is easier, less expensive, and more effective to change curricula than to change teachers.

The soul of effective teaching is studying student work, giving effective feedback, and developing relationships with students. Teacher time spent on curating and customizing lessons, however valuable, takes time away from these more impactful uses of teacher time. The adoption of a high-quality curriculum and training on its effective implementation is the first, most critical step toward transforming the teacher’s job.

Education occurs in a public context; there will always be a role for policymakers to ensure accountability. However, improvements at scale will not be wrested from rewards and punishments, nor from other “structural” reforms.
·the74million.org·
Ready or not, AI is in our schools
A growing segment of students is using generative AI tools to complete schoolwork, but educators remain divided on how to respond.
·popsci.com·
OpenAI has new features in the pipeline for GPT-4 and DALL-E 3
X user Tibor Blaho has found evidence that OpenAI is planning new features for its GPT-4 and DALL-E 3 models. One piece of good news might be that the GPT-4 message limit is going away.
·the-decoder.com·
5 factors shaping AI’s impact on schools in 2024
Experts say anti-plagiarism AI tools like watermarking will fall short, and more districts may release frameworks on the technology’s use.
·k12dive.com·
AI hype is built on high test scores. Those tests are flawed.
With hopes and fears about the technology running wild, it's time to agree on what it can and can't do.
When Taylor Webb played around with GPT-3 in early 2022, he was blown away by what OpenAI’s large language model appeared to be able to do. Here was a neural network trained only to predict the next word in a block of text—a jumped-up autocomplete. And yet it gave correct answers to many of the abstract problems that Webb set for it—the kind of thing you’d find in an IQ test. “I was really shocked by its ability to solve these problems,” he says. “It completely upended everything I would have predicted.”

Webb is a psychologist at the University of California, Los Angeles, who studies the different ways people and computers solve abstract problems. He was used to building neural networks that had specific reasoning capabilities bolted on. But GPT-3 seemed to have learned them for free.

Last month Webb and his colleagues published an article in Nature, in which they describe GPT-3’s ability to pass a variety of tests devised to assess the use of analogy to solve problems (known as analogical reasoning). On some of those tests GPT-3 scored better than a group of undergrads. “Analogy is central to human reasoning,” says Webb. “We think of it as being one of the major things that any kind of machine intelligence would need to demonstrate.”

What Webb’s research highlights is only the latest in a long string of remarkable tricks pulled off by large language models. For example, when OpenAI unveiled GPT-3’s successor, GPT-4, in March, the company published an eye-popping list of professional and academic assessments that it claimed its new large language model had aced, including a couple of dozen high school tests and the bar exam. OpenAI later worked with Microsoft to show that GPT-4 could pass parts of the United States Medical Licensing Examination.

And multiple researchers claim to have shown that large language models can pass tests designed to identify certain cognitive abilities in humans, from chain-of-thought reasoning (working through a problem step by step) to theory of mind (guessing what other people are thinking).

Such results are feeding a hype machine that predicts computers will soon come for white-collar jobs, replacing teachers, journalists, lawyers, and more. Geoffrey Hinton has called out GPT-4’s apparent ability to string together thoughts as one reason he is now scared of the technology he helped create.

But there’s a problem: there is little agreement on what those results really mean. Some people are dazzled by what they see as glimmers of human-like intelligence; others aren’t convinced one bit. “There are several critical issues with current evaluation techniques for large language models,” says Natalie Shapira, a computer scientist at Bar-Ilan University in Ramat Gan, Israel. “It creates the illusion that they have greater capabilities than what truly exists.”

That’s why a growing number of researchers—computer scientists, cognitive scientists, neuroscientists, linguists—want to overhaul the way large language models are assessed, calling for more rigorous and exhaustive evaluation. Some think that the practice of scoring machines on human tests is wrongheaded, period, and should be ditched.

“People have been giving human intelligence tests—IQ tests and so on—to machines since the very beginning of AI,” says Melanie Mitchell, an artificial-intelligence researcher at the Santa Fe Institute in New Mexico. “The issue throughout has been what it means when you test a machine like this. It doesn’t mean the same thing that it means for a human.” “There’s a lot of anthropomorphizing going on,” she says. “And that’s kind of coloring the way that we think about these systems and how we test them.”

With hopes and fears for this technology at an all-time high, it is crucial that we get a solid grip on what large language models can and cannot do.

Open to interpretation

Most of the problems with testing large language models boil down to the question of how to interpret the results.

Assessments designed for humans, like high school exams and IQ tests, take a lot for granted. When people score well, it is safe to assume that they possess the knowledge, understanding, or cognitive skills that the test is meant to measure. (In practice, that assumption only goes so far. Academic exams do not always reflect students’ true abilities. IQ tests measure a specific set of skills, not overall intelligence. Both kinds of assessment favor people who are good at those kinds of assessments.)

But when a large language model scores well on such tests, it is not clear at all what has been measured. Is it evidence of actual understanding?
·technologyreview.com·