AI_bookmarks

1498 bookmarks
Should I cite the AI tool that I used? *** by Dr. Kristin Terrill, Iowa State University — Academic Insight Lab
First, let’s disambiguate two questions: should I cite the AI tool that I used, and how should I cite the AI tool that I used? The first question rests on the nature of your AI tool use, and to answer it, I will break down aspects of research into parts. The second question is procedural…
·academicinsightlab.org·
Meta-Prompting: Enhancing Language Models with Task-Agnostic Scaffolding
We introduce meta-prompting, an effective scaffolding technique designed to enhance the functionality of language models (LMs). This approach transforms a single LM into a multi-faceted conductor, adept at managing and integrating multiple independent LM queries. By employing high-level instructions, meta-prompting guides the LM to break down complex tasks into smaller, more manageable subtasks. These subtasks are then handled by distinct "expert" instances of the same LM, each operating under specific, tailored instructions. Central to this process is the LM itself, in its role as the conductor, which ensures seamless communication and effective integration of the outputs from these expert models. It additionally employs its inherent critical thinking and robust verification processes to refine and authenticate the end result. This collaborative prompting approach empowers a single LM to simultaneously act as a comprehensive orchestrator and a panel of diverse experts, significantly enhancing its performance across a wide array of tasks. The zero-shot, task-agnostic nature of meta-prompting greatly simplifies user interaction by obviating the need for detailed, task-specific instructions. Furthermore, our research demonstrates the seamless integration of external tools, such as a Python interpreter, into the meta-prompting framework, thereby broadening its applicability and utility. Through rigorous experimentation with GPT-4, we establish the superiority of meta-prompting over conventional scaffolding methods: When averaged across all tasks, including the Game of 24, Checkmate-in-One, and Python Programming Puzzles, meta-prompting, augmented with a Python interpreter functionality, surpasses standard prompting by 17.1%, expert (dynamic) prompting by 17.3%, and multipersona prompting by 15.2%.
·arxiv.org·
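The conductor-and-experts control flow described in the abstract can be sketched roughly as follows. This is a minimal, heavily simplified illustration, not the paper's implementation: `call_lm` is a hypothetical stub standing in for a real language-model call, and the prompts are invented placeholders.

```python
def call_lm(prompt):
    """Hypothetical stand-in for a real language-model call;
    here it just echoes a canned response so the sketch is runnable."""
    return f"[response to: {prompt}]"

def meta_prompt(task, expert_roles):
    """Simplified meta-prompting loop:
    1. the conductor LM decomposes the task into subtasks,
    2. each subtask goes to a fresh 'expert' instance of the same LM,
       operating under its own tailored instructions,
    3. the conductor verifies and integrates the expert outputs."""
    subtasks = call_lm(
        f"As the conductor, split this task into "
        f"{len(expert_roles)} subtasks: {task}"
    )
    expert_outputs = [
        call_lm(f"You are an expert {role}. Solve your subtask from: {subtasks}")
        for role in expert_roles
    ]
    return call_lm(
        "As the conductor, verify and combine these expert answers: "
        + " | ".join(expert_outputs)
    )

result = meta_prompt(
    "Write and test a prime-checking function",
    ["Python programmer", "software tester"],
)
```

In the paper the conductor and the experts are all instances of the same model (GPT-4 in the experiments), and external tools such as a Python interpreter can be added as further "experts."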
Cheat Sheet: Mastering Temperature and Top_p in ChatGPT API - API - OpenAI Developer Forum
Hello everyone! OK, I admit I had help from OpenAI with this. But what I “helped” put together can, I think, greatly improve the results and costs of using OpenAI within your apps and plugins, especially for those looking to guide internal prompts for plugins… @ruv I’d like to introduce you to two important parameters that you can use with OpenAI’s GPT API to help control text-generation behavior: temperature and top_p sampling. These parameters are especially useful when working with GPT for tas...
·community.openai.com·
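What these two parameters do can be shown without calling any API at all. The sketch below, with invented toy logits, reproduces the mechanics: temperature rescales the scores before the softmax (lower values sharpen the distribution, higher values flatten it), while top_p (nucleus sampling) keeps only the smallest set of tokens whose cumulative probability reaches the threshold.

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Divide logits by temperature, then normalize to probabilities.
    Lower temperature sharpens the distribution; higher flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_filter(probs, top_p=0.9):
    """Keep the smallest set of tokens whose cumulative probability
    reaches top_p (nucleus sampling), then renormalize."""
    indexed = sorted(enumerate(probs), key=lambda x: x[1], reverse=True)
    kept, cum = [], 0.0
    for i, p in indexed:
        kept.append((i, p))
        cum += p
        if cum >= top_p:
            break
    total = sum(p for _, p in kept)
    return {i: p / total for i, p in kept}

logits = [2.0, 1.0, 0.5, 0.1]  # toy next-token scores
cold = softmax_with_temperature(logits, temperature=0.2)
hot = softmax_with_temperature(logits, temperature=2.0)
print(max(cold), max(hot))  # the cold distribution is more peaked
print(top_p_filter(softmax_with_temperature(logits), top_p=0.9))
```

In the real Chat Completions API these knobs are passed as the `temperature` and `top_p` request parameters; the usual advice (echoed in the forum thread) is to adjust one of them at a time rather than both.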
Practical GAI Strategies
The Practical Strategies… collection provides educators with clear, practical advice on using Generative AI
·leonfurze.com·
List of Resources
Books: AI Super-Powers: China, Silicon Valley, and The New World Order; Architects of Intelligence: The truth about AI from the people building it; Artificial Intelligence: A Guide f…
·ai4k12.org·
The AI Literacy Act - What Is It And Why Should You Care?
What is AI Literacy? How should you become AI literate? What is the AI Literacy Act? AI Regulations in the United States.
The AI Literacy Act advocates amending the Digital Literacy Act to codify the importance of AI literacy for everyone in the US. It further highlights the importance of AI literacy for national competitiveness, calls for supporting AI literacy at every level of education, and requires annual reports to Congress on the state of this initiative.
·forbes.com·
5 factors shaping AI’s impact on schools in 2024
Experts say anti-plagiarism AI tools like watermarking will fall short, and more districts may release frameworks on the technology’s use.
·k12dive.com·
Artificial Intelligence (AI) in K-12 | CoSN
Artificial Intelligence (AI) has the potential to influence practically every aspect of education and society as it rapidly expands both inside and outside of school. While it holds the potential to augment education to provide every student with personalized instruction at scale, it also brings a host of new challenges…
·cosn.org·
AI hype is built on high test scores. Those tests are flawed.
With hopes and fears about the technology running wild, it's time to agree on what it can and can't do.
When Taylor Webb played around with GPT-3 in early 2022, he was blown away by what OpenAI’s large language model appeared to be able to do. Here was a neural network trained only to predict the next word in a block of text—a jumped-up autocomplete. And yet it gave correct answers to many of the abstract problems that Webb set for it—the kind of thing you’d find in an IQ test. “I was really shocked by its ability to solve these problems,” he says. “It completely upended everything I would have predicted.”

Webb is a psychologist at the University of California, Los Angeles, who studies the different ways people and computers solve abstract problems. He was used to building neural networks that had specific reasoning capabilities bolted on. But GPT-3 seemed to have learned them for free. Last month Webb and his colleagues published an article in Nature, in which they describe GPT-3’s ability to pass a variety of tests devised to assess the use of analogy to solve problems (known as analogical reasoning). On some of those tests GPT-3 scored better than a group of undergrads. “Analogy is central to human reasoning,” says Webb. “We think of it as being one of the major things that any kind of machine intelligence would need to demonstrate.”

What Webb’s research highlights is only the latest in a long string of remarkable tricks pulled off by large language models. For example, when OpenAI unveiled GPT-3’s successor, GPT-4, in March, the company published an eye-popping list of professional and academic assessments that it claimed its new large language model had aced, including a couple of dozen high school tests and the bar exam. OpenAI later worked with Microsoft to show that GPT-4 could pass parts of the United States Medical Licensing Examination.

And multiple researchers claim to have shown that large language models can pass tests designed to identify certain cognitive abilities in humans, from chain-of-thought reasoning (working through a problem step by step) to theory of mind (guessing what other people are thinking). Such results are feeding a hype machine that predicts computers will soon come for white-collar jobs, replacing teachers, journalists, lawyers and more. Geoffrey Hinton has called out GPT-4’s apparent ability to string together thoughts as one reason he is now scared of the technology he helped create.

But there’s a problem: there is little agreement on what those results really mean. Some people are dazzled by what they see as glimmers of human-like intelligence; others aren’t convinced one bit. “There are several critical issues with current evaluation techniques for large language models,” says Natalie Shapira, a computer scientist at Bar-Ilan University in Ramat Gan, Israel. “It creates the illusion that they have greater capabilities than what truly exists.”

That’s why a growing number of researchers—computer scientists, cognitive scientists, neuroscientists, linguists—want to overhaul the way large language models are assessed, calling for more rigorous and exhaustive evaluation. Some think that the practice of scoring machines on human tests is wrongheaded, period, and should be ditched. “People have been giving human intelligence tests—IQ tests and so on—to machines since the very beginning of AI,” says Melanie Mitchell, an artificial-intelligence researcher at the Santa Fe Institute in New Mexico. “The issue throughout has been what it means when you test a machine like this. It doesn’t mean the same thing that it means for a human.” “There’s a lot of anthropomorphizing going on,” she says. “And that’s kind of coloring the way that we think about these systems and how we test them.”

With hopes and fears for this technology at an all-time high, it is crucial that we get a solid grip on what large language models can and cannot do.

Open to interpretation

Most of the problems with testing large language models boil down to the question of how to interpret the results. Assessments designed for humans, like high school exams and IQ tests, take a lot for granted. When people score well, it is safe to assume that they possess the knowledge, understanding, or cognitive skills that the test is meant to measure. (In practice, that assumption only goes so far. Academic exams do not always reflect students’ true abilities. IQ tests measure a specific set of skills, not overall intelligence. Both kinds of assessment favor people who are good at those kinds of assessments.) But when a large language model scores well on such tests, it is not clear at all what has been measured. Is it evidence of actual understanding? A mindl…
·technologyreview.com·
A Generative AI Primer - National centre for AI
The primer is intended as a short introduction to generative AI, exploring the main points and areas relevant to education in two parts: an introduction to generative AI technology, and the implications of generative AI for education.
·nationalcentreforai.jiscinvolve.org·