Before You Begin
A teacher's guide to using AI responsibly in planning, feedback, and the classroom.
This paper does not argue that every teacher should use AI. It treats AI as optional, bounded, and judgment-dependent. Its goal is to help teachers identify low-risk uses, avoid high-risk mistakes, and build enough shared language to talk productively with colleagues and school leaders.
- Distinguishes low-, medium-, and high-risk uses of AI for teachers.
- Draws a hard line around student data, academic integrity, and public-tool privacy.
- Provides a practical 30-day exploration plan for teachers who want to move carefully.
Executive Summary
The teacher paper starts from classroom reality: teachers and students are already using AI tools even when local guidance is still catching up. Instead of forcing a pro- or anti-AI stance, the paper offers a careful starting framework for teachers who want to make informed decisions.
Its core message is that teachers do not need to become AI experts. They need clear guardrails, a few well-chosen use cases, and enough confidence to know when a task is appropriate for AI support and when it clearly is not.
Where AI Can Help and Where Judgment Still Belongs
A strong part of the paper is its three-tier model. Low-risk uses are behind-the-scenes drafting and brainstorming tasks where the teacher remains the editor. Medium-risk uses involve materials that may reach students and require closer review. High-risk uses are tasks where errors or weak judgment can directly affect student outcomes, privacy, or trust.
The paper keeps the teacher between the tool and the student. That principle makes the guidance practical: use AI to draft, organize, or brainstorm your own work, but do not let it replace the professional judgment that belongs to the teacher.
- Low-risk: lesson outlines, parent message drafts, warm-up ideas, summaries for personal use.
- Medium-risk: differentiated materials, practice problems, rubric language, review materials.
- High-risk: grading, intervention decisions, public-tool use with student data, unchecked factual content.
The Main Risks Teachers Are Underestimating
The paper focuses on four risks teachers can act on immediately: hallucinated or inaccurate content, privacy exposure when using public AI tools, bias in AI-generated materials, and over-reliance on AI detection tools for discipline.
Its tone is firm where it needs to be. Student names, grades, IEP details, behavioral notes, and other identifiable information should never be pasted into a public AI tool. It also argues that AI detection tools are not reliable enough to serve as disciplinary evidence.
- Never trust AI output as a final product without review.
- Never place student records or identifiable details into public AI tools.
- Review output for cultural assumptions, missing representation, and bias.
- Do not use AI detection as the basis for discipline.
A Practical 30-Day Starting Plan
The teacher paper breaks responsible experimentation into four weeks. Week one is observation and orientation: choose one general-purpose tool, test a few prompts, compare the output to your own thinking, and review any local policy that already exists.
Week two focuses on one low-risk task. Week three is reflection and a small peer conversation. Week four is documentation and sharing, with the goal of ending the month with a clearer sense of what AI can and cannot do for your actual work.
- Week 1: observe, orient, and compare output to your own standard.
- Week 2: try one low-risk task and review it carefully.
- Week 3: reflect, compare notes, and test one medium-risk task if appropriate.
- Week 4: document what you learned and identify the policy or training gaps that remain.
Guardrails, Academic Integrity, and What Not To Do
The paper emphasizes that transparency, local policy alignment, and teacher judgment matter more than tool enthusiasm. It recommends clear assignment-level expectations for students instead of surveillance-heavy approaches, and it encourages schools to use trust-based academic integrity practices rather than detection-driven ones.
It also tells teachers what not to do: do not use AI output without review, do not skip local policy, do not present AI-generated content as expert knowledge, and do not feel pressured to adopt AI at all. The tone stays practical throughout: good teaching existed before AI and remains the standard after it arrives.
Next Action
Use this paper when teachers need a practical frame before AI use becomes classroom routine.
The best next step is usually more guidance, not more pressure. Start with the teacher path, then bring the paper into a PLC, coaching cycle, or leadership conversation if local policy is still unclear.
Companion Articles
Shorter reads from the same track
Low-, Medium-, and High-Risk Uses of AI for Teachers
Originally Published March 18, 2026
Teachers do not need a single yes-or-no answer on AI. They need a practical way to sort classroom tasks by risk so that drafting support, student-facing materials, privacy exposure, and high-stakes judgments do not get treated as the same thing.
For classroom teachers, instructional coaches, and school-based leaders supporting teacher practice.