AI

990 bookmarks
Generative AI Can Harm Learning
Generative artificial intelligence (AI) is poised to revolutionize how humans work, and has already demonstrated promise in significantly improving human productivity.
·papers.ssrn.com·
Exploring the Frontiers of AI: A Conversation with Professor Hod Lipson, Director of Columbia University's Creative Machines Lab | CBS Insights
Professor Lipson discusses the latest developments in robotics and AI, the societal implications of technological advancement, and the role of business and industry in pushing the boundaries of innovation.
·leading.business.columbia.edu·
Janelle Shane: The danger of AI is weirder than you think
The danger of artificial intelligence isn't that it's going to rebel against us, but that it's going to do exactly what we ask it to do, says AI researcher Janelle Shane. Sharing the weird, sometimes alarming antics of AI algorithms as they try to solve human problems -- like creating new ice cream flavors or recognizing cars on the road -- Shane shows why AI doesn't yet measure up to real brains.
·ted.com·
Ghosts - Believer Magazine
7. My sister was diagnosed with Ewing sarcoma when I was in my freshman year of high school and she was in her junior year. I didn’t understand then how serious a disease it was. But it was—serious. She died four years later. I thought I would die, too, of grief, but I did not. […]
·thebeliever.net·
MYFest2024: Skeptical Approaches to AI Research Tools
Anna Mills facilitates the MYFest session held on 2 July 2024. This interactive workshop surveys a range of AI-enabled research assistance apps that aim to help us find and analyze sources. We’ll look at general-purpose ones like Perplexity, ChatGPT4o, and Gemini as well as apps geared to academic research such as Elicit, Consensus, Keenious, ResearchRabbit, SciSpace, Scite_, and Undermind. In what ways do they facilitate source retrieval and analysis, and how can they also mislead us? What does wise use of these tools look like? Session slides here: https://bit.ly/SkepticalAIresearch
·youtube.com·
MYFest2024: How Can Generative AI Make New Things? Session 2/3
Jon Ippolito facilitates the MYFest session held on 26 June 2024. Sometimes caricatured as mere regurgitators of online content, tools like ChatGPT and Stable Diffusion are more like a growing child, facile at fabricating fresh ideas and novel imagery. A peek under the hood of these models equips participants to grasp their special brand of creativity, along with their limits in representing diverse perspectives and their potential for disinformation.
·youtube.com·
MYFest2024: Thinking Machines? Or Lying, Cheating & Stealing Machines? Ethical Considerations for AI
Jon Ippolito facilitates the MYFest session held on 27 June 2024 with the full title: Thinking Machines? Or Lying, Cheating, and Stealing Machines? This interactive workshop encourages participants to critically examine how and why generative AI tools were designed and what that means for their use in education. Topics of exploration include: Bias, Hallucinations, Exploitation of Human Labor, Data & Privacy, Digital Divide, Academic Integrity, and Intellectual Property Rights. As participants explore each of these topics, they will consider how to bring these critical issues into their practice and how to help prepare students to become critical AI users.
·youtube.com·
MYFest2024: When Should We Trust AI? Session 3/3
Jon Ippolito facilitates the MYFest session held on 27 June 2024. Maturity, whether in childhood development or in AI use, means knowing when to trust instinctual responses and when to check their unimpeded influence. Drawing on a range of sources from the mathematics of probability to Roman history, this workshop proposes a framework for sifting appropriate uses of AI from those that can cause undue harm, be they in healthcare, business, or education. (Spoiler: it’s not high- versus low-risk tasks!)
·youtube.com·
MYFest2024: AI for Writing Feedback: Supporting a Human-Centered Writing Process
Anna Mills facilitates the MYFest session held on 26 June 2024 with the full title: AI for Writing Feedback: Supporting a Human-Centered Writing Process and Building AI Literacy. In this interactive session, we’ll test out various forms of targeted AI writing feedback. How can AI feedback be incorporated to support students’ development of their own voice and ideas and also to give students practice questioning plausible AI advice? Can we put AI in a limited place where it supplements the responses of human readers and stimulates student thinking without telling them what to write? We’ll explore student comments on how it felt to use AI feedback from recent pilots of the teacher-created app MyEssayFeedback.ai.
·youtube.com·
MYFest2024: Is AI a Threat to Democracy? Session 1/3
Jon Ippolito facilitates the MYFest session held on 25 June 2024. This three-part session series uses the metaphoric structure of AI as a Growing Child, and this first session describes AI’s impact on democracy as the “adolescent phase,” when it can be rebellious and unpredictable. Just as it would be irresponsible to let an adolescent play with a firearm, AI and elections are a volatile mix. From deep-faked images of politicians to subtler threats that can disable government infrastructure, this session demonstrates how AI tools can help saboteurs destabilize democracy. Participants learn the weaknesses of commonly proposed solutions like AI watermarks, as well as alternative approaches with a better chance of safeguarding the democratic process.
·youtube.com·