News

269 bookmarks
5 factors shaping AI’s impact on schools in 2024
Experts say anti-plagiarism AI tools like watermarking will fall short, and more districts may release frameworks on the technology’s use.
·k12dive.com·
AI hype is built on high test scores. Those tests are flawed.
With hopes and fears about the technology running wild, it's time to agree on what it can and can't do.
When Taylor Webb played around with GPT-3 in early 2022, he was blown away by what OpenAI’s large language model appeared to be able to do. Here was a neural network trained only to predict the next word in a block of text—a jumped-up autocomplete. And yet it gave correct answers to many of the abstract problems that Webb set for it—the kind of thing you’d find in an IQ test. “I was really shocked by its ability to solve these problems,” he says. “It completely upended everything I would have predicted.”

Webb is a psychologist at the University of California, Los Angeles, who studies the different ways people and computers solve abstract problems. He was used to building neural networks that had specific reasoning capabilities bolted on. But GPT-3 seemed to have learned them for free.

Last month Webb and his colleagues published an article in Nature, in which they describe GPT-3’s ability to pass a variety of tests devised to assess the use of analogy to solve problems (known as analogical reasoning). On some of those tests GPT-3 scored better than a group of undergrads. “Analogy is central to human reasoning,” says Webb. “We think of it as being one of the major things that any kind of machine intelligence would need to demonstrate.”

What Webb’s research highlights is only the latest in a long string of remarkable tricks pulled off by large language models. For example, when OpenAI unveiled GPT-3’s successor, GPT-4, in March, the company published an eye-popping list of professional and academic assessments that it claimed its new large language model had aced, including a couple of dozen high school tests and the bar exam. OpenAI later worked with Microsoft to show that GPT-4 could pass parts of the United States Medical Licensing Examination. And multiple researchers claim to have shown that large language models can pass tests designed to identify certain cognitive abilities in humans, from chain-of-thought reasoning (working through a problem step by step) to theory of mind (guessing what other people are thinking).

Such results are feeding a hype machine that predicts computers will soon come for white-collar jobs, replacing teachers, journalists, lawyers and more. Geoffrey Hinton has called out GPT-4’s apparent ability to string together thoughts as one reason he is now scared of the technology he helped create.

But there’s a problem: there is little agreement on what those results really mean. Some people are dazzled by what they see as glimmers of human-like intelligence; others aren’t convinced one bit. “There are several critical issues with current evaluation techniques for large language models,” says Natalie Shapira, a computer scientist at Bar-Ilan University in Ramat Gan, Israel. “It creates the illusion that they have greater capabilities than what truly exists.”

That’s why a growing number of researchers—computer scientists, cognitive scientists, neuroscientists, linguists—want to overhaul the way large language models are assessed, calling for more rigorous and exhaustive evaluation. Some think that the practice of scoring machines on human tests is wrongheaded, period, and should be ditched.

“People have been giving human intelligence tests—IQ tests and so on—to machines since the very beginning of AI,” says Melanie Mitchell, an artificial-intelligence researcher at the Santa Fe Institute in New Mexico. “The issue throughout has been what it means when you test a machine like this. It doesn’t mean the same thing that it means for a human.”

“There’s a lot of anthropomorphizing going on,” she says. “And that’s kind of coloring the way that we think about these systems and how we test them.” With hopes and fears for this technology at an all-time high, it is crucial that we get a solid grip on what large language models can and cannot do.

Open to interpretation

Most of the problems with testing large language models boil down to the question of how to interpret the results.

Assessments designed for humans, like high school exams and IQ tests, take a lot for granted. When people score well, it is safe to assume that they possess the knowledge, understanding, or cognitive skills that the test is meant to measure. (In practice, that assumption only goes so far. Academic exams do not always reflect students’ true abilities. IQ tests measure a specific set of skills, not overall intelligence. Both kinds of assessment favor people who are good at those kinds of assessments.)

But when a large language model scores well on such tests, it is not clear at all what has been measured. Is it evidence of actual understanding?
·technologyreview.com·
OpenAI and journalism
We support journalism, partner with news organizations, and believe The New York Times lawsuit is without merit.
Our goal is to develop AI tools that empower people to solve problems that are otherwise out of reach. People worldwide are already using our technology to improve their daily lives. Millions of developers and more than 92% of Fortune 500 are building on our products today.

While we disagree with the claims in The New York Times lawsuit, we view it as an opportunity to clarify our business, our intent, and how we build our technology. Our position can be summed up in these four points, which we flesh out below:

We collaborate with news organizations and are creating new opportunities
Training is fair use, but we provide an opt-out because it’s the right thing to do
“Regurgitation” is a rare bug that we are working to drive to zero
The New York Times is not telling the full story

1. We collaborate with news organizations and are creating new opportunities

We work hard in our technology design process to support news organizations. We’ve met with dozens, as well as leading industry organizations like the News/Media Alliance, to explore opportunities, discuss their concerns, and provide solutions. We aim to learn, educate, listen to feedback, and adapt.

Our goals are to support a healthy news ecosystem, be a good partner, and create mutually beneficial opportunities. With this in mind, we have pursued partnerships with news organizations to achieve these objectives:

Deploy our products to benefit and support reporters and editors, by assisting with time-consuming tasks like analyzing voluminous public records and translating stories.
Teach our AI models about the world by training on additional historical, non-publicly available content.
Display real-time content with attribution in ChatGPT, providing new ways for news publishers to connect with readers.

Our early partnerships with the Associated Press, Axel Springer, American Journalism Project and NYU offer a glimpse into our approach.

2. Training is fair use, but we provide an opt-out because it’s the right thing to do

Training AI models using publicly available internet materials is fair use, as supported by long-standing and widely accepted precedents. We view this principle as fair to creators, necessary for innovators, and critical for US competitiveness.

The principle that training AI models is permitted as a fair use is supported by a wide range of academics, library associations, civil society groups, startups, leading US companies, creators, authors, and others that recently submitted comments to the US Copyright Office. Other regions and countries, including the European Union, Japan, Singapore, and Israel also have laws that permit training models on copyrighted content—an advantage for AI innovation, advancement, and investment.

That being said, legal right is less important to us than being good citizens. We have led the AI industry in providing a simple opt-out process for publishers (which The New York Times adopted in August 2023) to prevent our tools from accessing their sites.

3. “Regurgitation” is a rare bug that we are working to drive to zero

Our models were designed and trained to learn concepts in order to apply them to new problems.

Memorization is a rare failure of the learning process that we are continually making progress on, but it’s more common when particular content appears more than once in training data, like if pieces of it appear on lots of different public websites. So we have measures in place to limit inadvertent memorization and prevent regurgitation in model outputs. We also expect our users to act responsibly; intentionally manipulating our models to regurgitate is not an appropriate use of our technology and is against our terms of use.

Just as humans obtain a broad education to learn how to solve new problems, we want our AI models to observe the range of the world’s information, including from every language, culture, and industry. Because models learn from the enormous aggregate of human knowledge, any one sector—including news—is a tiny slice of overall training data, and any single data source—including The New York Times—is not significant for the model’s intended learning.

4. The New York Times is not telling the full story

Our discussions with The New York Times had appeared to be progressing constructively through our last communication on December 19. The negotiations focused on a high-value partnership around real-time display with attribution in ChatGPT, in which The New York Times would gain a new way to connect with their existing and new readers, and our users would gain access to their reporting. We had explained to The New York Times that, like any single source, their content didn’t meaningfully contribute to the training of our existing models and also wouldn’t be sufficiently impactful for future training. Their lawsuit on December 27—which we learned about by reading The New York Times—came as a surprise and disappointment to us.

Along the way, they had mentioned seeing some regurgitation of their content but repeated…
·openai.com·
2023: A breakout year for artificial intelligence
It’s been just over a year since ChatGPT kicked off an AI arms race that will change the way people work and interact.
·newsnationnow.com·
2023 in AI: The Insane Year That Changed In All!
Here's a recap of the craziest year in the world of AI. Grab Invideo AI at https://apps.apple.com/in/app/invideo-ai/id6471394316 and if you upgrade, use code...
·youtube.com·
GPT-4.5 details may have just leaked
Details about OpenAI's next LLM, GPT-4.5, have leaked, offering information about the pricing and capabilities of ChatGPT's next update.
·bgr.com·
Five Key Predictions for Generative AI In 2024
Charting The AI Frontier Towards 2024
Video/audio editing tools like Descript adjust your grammar, phrasing, and voice tone to make you sound better. Face filter apps like YouCam make you look better with smoother skin, touching up your eyebrows and lips, whitening your teeth, and reshaping your nose.
·medium.com·
Navigating the Artificial Intelligence Revolution in Schools
For many in the education world, artificial intelligence is a demon unleashed, one that will allow students to cheat with impunity and potentially replace the jobs of educators. For others…
·future-ed.org·
How are high schoolers using AI?
Students say their most common uses for schoolwork are for language arts and social studies assignments, an ACT survey reports.
·k12dive.com·
Ego, Fear and Money: How the A.I. Fuse Was Lit
The people who were most afraid of the risks of artificial intelligence decided they should be the ones to build it. Then distrust fueled a spiraling competition.
·nytimes.com·
Sarah Silverman Hits Stumbling Block in AI Copyright Infringement Lawsuit Against Meta
The ruling builds upon findings from another federal judge overseeing a lawsuit against AI art generators, who similarly delivered a blow to fundamental contentions from plaintiffs in the case.
U.S. District Judge Vince Chhabria on Monday offered a full-throated denial of one of the authors’ core theories that Meta’s AI system is itself an infringing derivative work made possible only by information extracted from copyrighted material. “This is nonsensical,” he wrote in the order. “There is no way to understand the LLaMA models themselves as a recasting or adaptation of any of the plaintiffs’ books.”
·hollywoodreporter.com·
AI’s Spicy-Mayo Problem
A chatbot that can’t say anything controversial isn’t worth much. Bring on the uncensored models.
·theatlantic.com·
Judge pares down artists' AI copyright lawsuit against Midjourney, Stability AI
A judge in California federal court on Monday trimmed a lawsuit by visual artists who accuse Stability AI, Midjourney and DeviantArt of misusing their copyrighted work in connection with the companies' generative artificial intelligence systems.
·reuters.com·