Policy

22 bookmarks
Responsible Adoption of AI Tools for Teaching and Learning - Office of the Executive Vice President and Provost
What does responsible adoption of AI in education mean, and how can we navigate the ever-changing AI landscape? In the Spring of 2025, UT’s Responsible AI for Teaching and Learning Working Group set out to answer this question. Stakeholders from across campus convened to advance the University’s AI-Responsible, AI-Forward framework. The purpose of the working […]
·provost.utexas.edu·
Post | Feed | LinkedIn
My students submit AI Transparency Statements. Here's what they share. (I just updated these this morning.)

1. What Steps They Used AI For
Students design their own writing/work processes. They incorporate gen-AI when it's useful, and reject it when it's not useful (in their estimation). In the Transparency Statement, students list that process again. For each step in the process, they give a number on Leon Furze's and Dr Mike Perkins's AI Assessment Scale (1 = no AI use; 5 = full-on AI use). They explain why they gave their AI use that number. The goal is to encourage transparency, but to avoid the trap (which recently surfaced in a conversation about students simply citing AI use) of simply announcing that a student used AI. I want to know how they used it.

2. A Defense of Their Use/Non-Use
This isn't just about defending our use of AI (though that's part of it). It's about defending any process we design and implement. I ask students to -- using their answers from #1 -- reflect on whether their use of AI empowered them or took away their power. I also ask them to share any steps they took to prevent AI from taking control of the framing, the iteration, and so on.

3. A Reflection on Their Use of AI as a Co-Pilot or a Co-Thinker
I use this language from Elisa Farri's and Gabriele Rosani's HBR Guide to Generative AI for Managers (2025). If students used gen-AI as a co-pilot, I ask them to walk through how it worked. When did they take control? When did they hand over control to the AI for the moment? When did they truly work side-by-side? If students used the AI as a co-thinker, I also ask them to walk through it. What strategies did they use to make sure the AI wasn’t doing the heavy lifting? Who (or what) owned the ideas, in the end?

My goal isn't to say "use AI for this" or "don't use AI for that." It's to guide students as they make choices, have a series of conversations about those choices, and appreciate the individual and social implications of those choices. Images: Screenshots from one of my assignments. I lean into these questions most for my AI-Powered Communication course, for obvious reasons.
·linkedin.com·
Institutional AI Policies & Governance Structures
This document is maintained by Lance Eaton. You are welcome to share it with other individuals, groups, and organizations. To view the policies, please select the "Policies" tab in this spreadsheet.
·docs.google.com·
Privacy Impact Assessments for GenerativeAI Instructional Use - AI In Teaching and Learning
This page outlines the current state of Privacy Impact Assessments (PIA) with regard to teaching and learning uses of Generative AI Tools at UBC, currently the PRISM Interim Report on guidelines for Generative AI Tool Use. A PIA is a risk management and compliance review process used to identify and address potential information privacy and […]
·ai.ctlt.ubc.ca·
Creating a Culture Around AI: Thoughts and Decision-Making
Given the potential ramifications of artificial intelligence (AI) diffusion on matters of diversity, equity, inclusion, and accessibility, now is the time for higher education institutions to adopt culturally aware, analytical decision-making processes, policies, and practices around AI tools selection and use.
·er.educause.edu·
SNEAK PREVIEW: A Blueprint for an AI Bill of Rights for Education
Kathryn Conrad [Critical AI 2.1 is a special issue, co-edited by Lauren M.E. Goodlad and Matthew Stone, collecting interdisciplinary essays and think pieces on a wide range of topics involving […]
·criticalai.org·
Guide on the use of Generative AI - Canada.ca
Generative artificial intelligence (AI) tools offer many potential benefits to Government of Canada (GC) institutions. Federal institutions should explore potential uses of generative AI tools for supporting and improving their operations. However, because these tools are evolving, they should not be used in all cases. Federal institutions must be cautious and evaluate the risks before they start using them. The use of these tools should be restricted to instances where risks can be effectively managed.
·canada.ca·
Syllabi Policies for Generative AI
A "Policies" sheet with columns for Course, Discipline, Policy in the Syllabus, Contributor, and Institution. Sample entry: ChatGPT for Business (MBA590), Business, "ARTIFICIAL INTELLIGENCE (AI) POLICY: This course encourages and embraces the ethical use of Artificial Intelligence (AI). Throughout the course, it is essential to utilize generative A..."
·docs.google.com·
Policies and Practices for Generative AI in Fall Courses - Center for Excellence in Teaching and Learning
by Derek Bruff, visiting associate director Last Friday, CETL co-sponsored an online workshop titled “Generative AI on the Syllabus” with our parent organization, the Academic Innovations Group (AIG). Bob Cummings, executive director of AIG, and I spent an hour with 170 faculty, staff, and graduate students exploring options for banning, embracing, and exploring generative AI […]
·cetl.olemiss.edu·
Course Policies
The integration of artificial intelligence into higher education raises important ethical questions about its responsible use in the learning process. Example policies presented here take different approaches, from prohibiting to permitting to actively encouraging AI use in coursework. The intent […]
·codaptivelabs.com·
Acceptable Use Policy for AI in the ELA Classroom
Teachers of ELA classes will need to provide an Acceptable Use Policy for AI in the ELA classroom; when school starts back up, this will be essential for every ELA teacher.
·alicekeeler.com·
Classroom Policies for AI Generative Tools
If you would like to share your course guidelines/policy, please submit it in this form. This resource is created by Lance Eaton for the purposes of sharing and helping other instructors see the range of policies made available by other educators to help in...
·docs.google.com·