Stop Grading What AI Can Fake
A simple in-class routine: let AI help with planning—not products—so you see real thinking, curb cheating, and keep grading fast
You know that feeling when a paper reads like it fell from the sky? The sentences shine, but they don’t sound like your student. You’re left grading a mystery.
This is a podcast overview of the entire newsletter, in case you prefer to listen in that format. It was created with NotebookLM.
Maria, a seventh-grade science teacher, decided to change one thing. She didn’t ban AI. She didn’t add a new platform. She changed what counted.
In her class, AI was allowed for planning and checking work. It was not allowed for the final product that earned a grade. Students could ask for options, spot weak points, and plan tests. They could not turn in AI-written paragraphs, slides, or code.
Today’s post is sponsored by: Project Pals
Spark curiosity and understanding with our dynamic electricity project-based learning resources, designed to illuminate electrical concepts through hands-on circuit building and experimentation. Our collection of electricity lesson plans spans elementary and middle school levels, featuring engaging projects like “How Can We Create a Working Circuit Using Basic Materials” for elementary students and “How Can We Design a Circuit That Optimizes Energy Flow While Using Different Materials as Conductors” for middle schoolers. These electricity project ideas align with Common Core and NGSS standards, ensuring academic rigor while fostering authentic scientific inquiry into electrical systems and energy transfer.
She asked for two simple artifacts at each checkpoint:
An AI snapshot: up to two screenshots showing what they asked and what the model answered.
A change note: one short paragraph—what they changed in their plan and why.
That’s it. No new accounts. No labyrinth of menus. Just a tight habit students could learn in a week.
If you want to control how students use AI in your classroom, here are the resources to use:
Weekly resources
ISTE/ASCD – AI Deep Dive for Educators. Practical, vendor-neutral training from foundations to classroom routines; good for PD teams.
Edutopia – Purposeful AI decisions in schools (MIT’s Justin Reich). Clear guidance for leaders and department heads on piloting small, safe experiments.
UNESCO – AI Competency Frameworks (Teachers & Students). Use these to align “Change Note/AI Snapshot” habits with recognized AI-literacy competencies.
Stanford HAI CRAFT. Free, classroom-ready AI-literacy lessons co-designed with teachers; great for explaining “how AI works” to students.
Harvard: Academic Integrity & Teaching With(out) AI. Concrete policy language, assignment design ideas, and discussion guides you can adapt for syllabi.
Edutopia – Proactively Limiting AI Use. Practical examples of setting norms (fits your “process-only” rule and artifact collection).
Jisc – Principles of Good Assessment & 2025 AI detection update. Why shifting grading to visible process (vivas, in-class checks) beats detection-only approaches.
UNR: Revise Assignments to Deter AI Misuse. Tactics to pair with your routine (personal context, constraints, oral checks).
What it looks like in class
Maria opened class with one sentence: “Use AI to plan and stress-test your approach—don’t use it for your final write-up.” Students worked on a creek water-quality investigation. Midway through the work block, she asked them to take five minutes and request three ways to run the test with the tools they actually had. Then they took a few minutes to ask the model to poke holes in their draft—risks, blind spots, what to verify first.
Before the bell, each group wrote a change note. One sounded like this:
“We kept the turbidity tube test but added a five-minute calibration after the AI warned about angle and light. We dropped nitrates—no strips on the cart. Next class we’ll split roles for reader, recorder, and timer so the procedure runs clean.”
You can hear the students in that note. You can see their judgment. And you know what they’ll do next period.
That’s what Maria graded. She used a quick micro-rubric that fit on a sticky:
Specificity (0–3): Did the prompt include the goal and limits?
Judgment (0–4): What did they keep or reject, and why?
Next step (0–3): Is the next action clear and doable?
If the snapshots or the note were missing, the team didn’t receive credit for that checkpoint. Fair and clear.
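The sticky-note rubric and the all-or-nothing artifact gate amount to a simple scoring rule. A minimal sketch in Python, assuming hypothetical names for the bands and a checkpoint function (the caps come from the rubric above; nothing here is a real gradebook tool):

```python
# Hypothetical sketch of the micro-rubric: three bands with point caps,
# and no credit at all if either required artifact is missing.
RUBRIC_CAPS = {"specificity": 3, "judgment": 4, "next_step": 3}

def score_checkpoint(scores, has_snapshots, has_change_note):
    """Return a checkpoint total out of 10, or 0 if artifacts are missing."""
    if not (has_snapshots and has_change_note):
        return 0  # missing AI snapshot or change note = no credit
    for band, value in scores.items():
        cap = RUBRIC_CAPS[band]
        if not 0 <= value <= cap:
            raise ValueError(f"{band} must be between 0 and {cap}")
    return sum(scores.values())

# Example: strong judgment, but a fuzzy next step
total = score_checkpoint(
    {"specificity": 2, "judgment": 4, "next_step": 1},
    has_snapshots=True,
    has_change_note=True,
)
print(total)  # 7
```

The point of the gate is that a polished product with no planning trail scores zero before any band is even read, which is exactly what makes outsourcing the final write-up unattractive.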
How it helps
First, it made cheating inconvenient. If a slick essay arrived with no snapshots and no change note, it didn’t meet the assignment. If the final voice didn’t match the planning trail, Maria had something concrete to discuss.
Second, it made thinking visible. She could see decisions, not just polish. Feedback got faster because she wasn’t guessing who understood the plan.
Third, it fit real classrooms. Devices were scarce? Students paired up. Wi-Fi hiccuped? They still wrote options, named risks, and produced a change note. The habit mattered more than the app.
You might wonder about time. Maria kept it tight by capping AI to two brief exchanges per checkpoint. No long chats. No rabbit holes. Plan, probe, decide. Then move.
You might also wonder about voice. After two weeks, the notes sounded like her students. The suspiciously “perfect” paragraphs dropped off, because the path to credit ran through a defense of decisions, not polished prose.
This folds into PBL without fuss. In research and planning, it helps teams avoid busywork and focus on constraints. In critique and revision, it shifts attention from sentence-level edits to real weaknesses and testable fixes. In reflection, the final change note becomes a short story of growth: what they expected, what they learned, and what they would change next time.
Try it next week
If you want to try this next week, start small. Pick one checkpoint in an upcoming project. Tell students:
AI is for planning and stress-testing only.
Your final product must be your own.
I’ll collect two snapshots and one change note.
I’ll grade for specificity, judgment, and a clear next step.
Post two prompt starters on the board:
“Plan a 45-minute work block to [goal] using [tools/limits]. Give three options and compare tradeoffs.”
“Critique my plan: list five risks, missing data, or bad assumptions. Suggest fixes we can do today.”
What you will notice
Then walk the room and listen. You’ll hear students talk about constraints. You’ll see them trim scope. You’ll watch them plan small pilots before they burn a full period. Those moves are the point.
Maria noticed two more shifts. Students started adding constraints to their prompts because they learned that vague requests got vague advice. And the “Is this cheating?” conversation got shorter. The rule was simple: show your thinking. If AI helped, prove it. If your product doesn’t sound like you, we’ll talk—and we’ll have evidence.
This isn’t a silver bullet. It is a workable habit you can teach fast. It uses the tools you already have. It aligns with what we value: decisions, evidence, and next steps. Most of all, it returns the grade to the student’s mind, not the model’s output.
Try it. One checkpoint. Two snapshots. One change note. See if your feedback gets clearer and your stress goes down. If it does, keep it. If it doesn’t, you’ll know by Friday.
PS...If you’re enjoying Master AI For Teaching Success, please consider referring this edition to a friend. They’ll get access to our growing library of AI prompts and templates, plus our exclusive “Popular AI Tools Integration Guide For Teachers.”