Found 52 bookmarks (sorted by newest)
System Prompts - Teaching Naked
A system (or general) prompt is a prompt that you use along with your immediate prompt. It provides guidance for how you want your AI to act in the chat that follows. If you use an AI in different ways, you might want to keep a document with several different system prompts available so you
·teachingnaked.com·
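The excerpt above suggests keeping a small library of reusable system prompts for the different ways you use an AI. As an illustration only, here is a minimal sketch of that workflow in Python, assuming the OpenAI Python SDK and an API key in the environment; the prompt texts, the `ask` helper, and the model name are hypothetical choices, not anything prescribed by the bookmarked post.

```python
# Minimal sketch: keep a small library of reusable system prompts and pair
# one with the immediate (user) prompt. Assumes the OpenAI Python SDK and an
# OPENAI_API_KEY in the environment; prompts and model name are illustrative.
from openai import OpenAI

SYSTEM_PROMPTS = {
    "socratic_tutor": (
        "You are a patient tutor. Never give the answer directly; "
        "respond with guiding questions that help the student reason it out."
    ),
    "copy_editor": (
        "You are a copy editor. Fix grammar and tighten prose, "
        "but preserve the author's voice and meaning."
    ),
}

def ask(style: str, user_prompt: str) -> str:
    """Send the chosen system prompt along with the immediate prompt."""
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPTS[style]},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("socratic_tutor", "Why does the sky look blue?"))
```

The same pattern carries over to any chat-style API that separates a system message from the user message.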
Students don’t need to be exposed to generative AI in elementary school or middle school any more than a kid learning the fundamentals of racing by driving karts needs to practice in an F1 car.
Students don’t need to be exposed to generative AI in elementary school or middle school any more than a kid learning the fundamentals of racing by driving karts needs to practice in an F1 car. Over the last few months, we’ve gotten hooked on Formula 1 in the Brake household. Because I can’t help myself, I’ve been thinking about the pedagogy of training F1 drivers and what it might say about teaching more generally.

A lot of what I see about the need for “AI literacy” in K-12 feels to me like the equivalent of putting a kid in an F1 car. It’s not that the kids won’t ever be ready to handle the power; it’s just foolish to think that it won’t take a lot of practice and hard work to build the fundamentals before they’re ready. The training pipeline for a future F1 driver has a very well-calibrated sequence from karting to Formula 4, 3, 2, and then 1. At each stage, the power of the machine is matched to the intended learning goals and the maturity of the driver. The karts are (comparatively) slow, but they help a driver build intuition and skill in finding a line, timing braking points, managing traction, and attending to the current status of the car.

Here’s where I can hear the objection: “Well, this is why we should expose students to AI with significant guardrails when they are young: to help them build the necessary skills to thrive with AI.” But this fundamentally misunderstands what it takes to use AI well. Yes, we need to learn how AI works and how best to think about using it in our work. But this is something like learning where the pedals are on the F1 car, not learning the fundamentals of how to drive a race car.

What we need to focus on in education is helping students build the character and the fundamental skills that will enable them to thrive when they are exposed to the intelligence amplifier of generative AI. This process takes a lot of hard work and doesn’t look nearly as cool as driving the car during qualifying or on race day. It’s the workouts to stay in tip-top physical shape, the drills to train your reflexes, the hours in the simulator to hone all the little parts of your race craft.

Students don’t need to be exposed to generative AI when they’re developing these skills any more than a kid learning how to kart needs to drive an F1 car. What they need is a space where the tools they are provided can help them focus on the truly fundamental and foundational skills that will ultimately enable them to pursue their craft at a higher level. They need to learn how to think, write, read closely, communicate with others, accept criticism, persevere when the going gets tough, and stoke their curiosity to dig into new areas of interest.
·linkedin.com·
Chris Ⓥ DevPods.gg (formerly HTGD) (@chrisdeleon.bsky.social)
looked up the clip just because sometimes people change his words for captions that end up innocently reshared, but upon finding it decided to share the video because hearing him say it makes it 1000% more definitely true and everyone will listen/believe more than reading it written out [contains quote post or other embedded content]
·bsky.app·
CHM Live | The Great Chatbot Debate: Do LLMs Really Understand?
[Recorded March 25, 2025] Chatbots based on large language models (LLMs), like ChatGPT, answer sophisticated questions, pass professional exams, analyze texts, generate everything from poems to computer programs, and more. But is there genuine understanding behind what LLMs can do? Do they really understand our world? Or are they a triumph of mathematics and masses of data and calculations simulating true understanding? Join CHM, in partnership with IEEE Spectrum, for a fundamental debate on the nature of today’s AI: Do LLMs demonstrate genuine understanding, the “sparks” of true intelligence, or are they “stochastic parrots,” lacking understanding and meaning?

FEATURED PARTICIPANTS

Speaker: Emily M. Bender, Professor of Linguistics, University of Washington
Emily M. Bender is a professor of linguistics and director of the Computational Linguistics Laboratory at the University of Washington, where she also serves as faculty director of the CLMS program, and adjunct professor at the School of Computer Science and Engineering and the Information School. Known for her critical perspectives on AI language models, notably coauthoring the paper "On the Dangers of Stochastic Parrots," Bender is also the author of the forthcoming book, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want.

Speaker: Sébastien Bubeck, Member of Technical Staff, OpenAI
Sébastien Bubeck is a member of the technical staff at OpenAI. Previously, he served as VP, AI and distinguished scientist at Microsoft, where he spent a decade at Microsoft Research. Prior to that, he was an assistant professor at Princeton University. Bubeck's 2023 paper, "Sparks of Artificial General Intelligence: Early experiments with GPT-4," drove widespread discussion and debate about the evolution of AI both in the scientific community and mainstream media like the New York Times and Wired. Bubeck has been recognized with best paper awards at a number of conferences, and he is the author of the book Convex Optimization: Algorithms and Complexity.

Moderator: Eliza Strickland, Senior Editor, IEEE Spectrum
Eliza Strickland is a senior editor at IEEE Spectrum, where she covers artificial intelligence, biomedical technology, and other advanced technologies. In addition to her writing and editing work, she also hosts podcasts, creates radio segments, and moderates talks at events such as SXSW. Prior to joining IEEE Spectrum in 2011, she oversaw a daily science blog for Discover magazine and wrote for outlets including Wired, The New York Times, Sierra, and Foreign Policy. Strickland received her master’s degree in journalism from Columbia University.

Catalog Number: 300000014
Acquisition Number: 2025.0036
·youtube.com·
Cognitive Laziness: The Real Risk of AI | LinkedIn
When it comes to AI in education, I've been seeing more and more conversations pop up about how we can prevent cheating, or even if that's possible anymore. But what if, in our focus on anti-cheating practices, we're missing something deeper? Something that has been on my mind even more than the iss
·linkedin.com·
A scoping review on how generative artificial intelligence transforms assessment in higher education - International Journal of Educational Technology in Higher Education
Generative artificial intelligence provides both opportunities and challenges for higher education. Existing literature has not properly investigated how this technology will impact assessment in higher education. This scoping review took a forward-thinking approach to investigate how generative artificial intelligence transforms assessment in higher education. We used the PRISMA extension for scoping reviews to select articles for review and report the results. In the screening, we retrieved 969 articles and selected 32 empirical studies for analysis. Most of the articles were published in 2023. We analysed the articles at three levels: students, teachers, and institutions. Our results suggested that assessment should be transformed to cultivate students’ self-regulated learning skills, responsible learning, and integrity. To successfully transform assessment in higher education, the review suggested that (i) teacher professional development activities for assessment, AI, and digital literacy should be provided, (ii) teachers’ beliefs about human and AI assessment should be strengthened, and (iii) teachers should be innovative and holistic in their teaching to reflect the assessment transformation. Educational institutions are recommended to review and rethink their assessment policies, as well as provide more interdisciplinary programs and teaching.
·educationaltechnologyjournal.springeropen.com·