
Prompt Templates for Fair, Specific, Actionable Essay Feedback
The fastest way to level up essay feedback is to standardize it: write it to a shared rubric, keep the tone kind and specific, and include a next step students can act on. Research on formative assessment and feedback has been consistent for years: timely, goal-referenced, and actionable feedback moves learning more than vague praise or end-of-term comments. (Conselho Pedagógico)
This guide gives you copy-paste prompt templates that produce fair, specific, rubric-aligned feedback with large language models (LLMs). It also includes tone controls, bias checks, and simple ways to export feedback to your LMS.
TL;DR: Use rubric-anchored prompts; keep the tone kind, specific, and actionable; require evidence quotes; and log the model and version. Calibrate with a few exemplars, sample a subset for human review, and export to your LMS via LTI 1.3/AGS or the Classroom API. (ASCD)
Principles (why these prompts work)
- Goal-referenced & transparent. Tie every comment to a criterion (thesis, evidence, organization, style). Students need to see which goal a comment addresses. (Educational Data Mining)
- Specific and actionable. Replace “be clearer” with one concrete edit or a pattern to fix. Actionability beats generality. (SAGE Journals)
- Timely and concise. Short, immediate comments beat long post-mortems. (files.ascd.org)
- Use rubrics to reduce variance. Rubrics improve consistency and self-regulation when used well. (NIST Publications)
- Fairness & safety checks. Blind the model to identity, log uncertainty, and avoid prescriptive style that erases student voice. Use recognized AI risk guidance. (Testing Standards)
How to use the templates
Each template follows a system + user split:
- System: role, tone, guardrails; what not to do (no demographics, no guesswork); JSON schema if exporting.
- User: the rubric, student text, and any course-specific constraints (citation style, word count).
Where you see {like_this}, replace it with your material.
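For example, here is a minimal Python sketch of that split. The call_llm function is a placeholder for whatever chat-completion client you use, and the placeholder names are assumptions that mirror the templates below.

import json

SYSTEM_PROMPT = """You are an experienced writing instructor. Your job is to give feedback
that is KIND, SPECIFIC, and ACTIONABLE. Return JSON only."""

USER_TEMPLATE = """Rubric (criteria and brief descriptors):
{rubric_text}

Assignment context and constraints:
{context_constraints}

Student draft:
{student_text}"""

def build_messages(rubric_text, context_constraints, student_text):
    # Fill the {like_this} placeholders and keep the system/user split explicit.
    user_prompt = USER_TEMPLATE.format(
        rubric_text=rubric_text,
        context_constraints=context_constraints,
        student_text=student_text,
    )
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]

# messages = build_messages(rubric, constraints, draft)
# feedback = call_llm(messages)        # your LLM client here (placeholder)
# parsed = json.loads(feedback)        # parse the JSON the prompt requests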
Base scaffolding (drop-in)
[System]
You are an experienced writing instructor. Your job is to give feedback that is KIND, SPECIFIC, and ACTIONABLE.
Rules:
- Anchor each comment to a rubric criterion by name.
- Quote or paraphrase a SHORT snippet from the student’s draft as evidence for each point.
- One “Strength” + one “Next step” per criterion.
- No grades, no overall score, no chain-of-thought—just feedback.
- Do NOT infer or mention student demographics, identity, disability, or native language.
- If evidence is insufficient, say “unclear—needs more evidence,” don’t guess.
Output:
Return JSON with this shape:
{
  "model": "<name>",
  "version": "<semver or date>",
  "criteria": [
    {"name": "<criterion>", "strength": "<1–2 sentences>", "next_step": "<1–2 sentences>", "evidence": "<short quote or pointer>", "confidence": 0.0-1.0}
  ],
  "overall_summary": "<2–3 sentences>",
  "sources_or_citations": "<if applicable>"
}
[User]
Rubric (criteria and brief descriptors):
{rubric_text}
Assignment context and constraints:
{context_constraints}
Student draft:
{student_text}
Why it works: it enforces goal-reference (the rubric), actionability, and brevity, the core features linked to learning gains. The evidence-quote requirement discourages hallucination and keeps comments anchored to the text. (Conselho Pedagógico, ASCD, Educational Data Mining)
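Before releasing comments to students, it also helps to confirm the model actually returned the shape above. A minimal validation sketch, assuming the field names from the schema in this guide:

REQUIRED_CRITERION_KEYS = {"name", "strength", "next_step", "evidence", "confidence"}

def validate_feedback(payload: dict) -> list[str]:
    """Return a list of problems; an empty list means the payload matches the schema."""
    problems = []
    for key in ("model", "version", "criteria", "overall_summary"):
        if key not in payload:
            problems.append(f"missing top-level key: {key}")
    for i, criterion in enumerate(payload.get("criteria", [])):
        missing = REQUIRED_CRITERION_KEYS - criterion.keys()
        if missing:
            problems.append(f"criterion {i}: missing {sorted(missing)}")
        conf = criterion.get("confidence")
        if not isinstance(conf, (int, float)) or not 0.0 <= conf <= 1.0:
            problems.append(f"criterion {i}: confidence must be a number in [0, 1]")
        if not criterion.get("evidence"):
            problems.append(f"criterion {i}: evidence quote is required")
    return problems

Anything that fails validation is a good candidate for the human-review queue described under Calibration below.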
Templates by criterion
1) Thesis / Argument
[System add-on]
Focus on “Thesis/Claim”: Is there a specific, arguable claim early in the essay?
- Strength: note clarity/specificity.
- Next step: propose a sharper claim (offer 1 revision option).
[User]
Criterion: Thesis/Claim — clear, arguable, and specific; previews reasoning (A–D scale).
Micro-prompts (mix and match):
- “Under 25 words, propose a revised thesis that names the stakes and limits scope.”
- “If the thesis is implied, surface it as a single sentence; mark as ‘inferred’.”
2) Organization / Coherence
[System add-on]
Focus on “Organization”: Map each paragraph to a purpose (claim, evidence, counterargument, synthesis).
- Provide a numbered outline of paragraphs with 3–6 word labels.
- Next step: one reordering or signposting fix (specify location).
Rationale: Tangible, structural feedback students can act on in the next draft. (ASCD)
3) Evidence & Reasoning
[System add-on]
Focus on “Evidence & Reasoning”:
- For each body paragraph, list: (a) evidence snippet, (b) reasoning summary in 1 sentence, (c) check: does reasoning explicitly connect evidence to claim?
- Next step: one added piece of SPECIFIC evidence OR a “because/therefore” bridge sentence.
Evidence-first prompting counters generic advice and nudges warranted claims. (SAGE Journals)
4) Use of Sources & Citations
[System add-on]
Focus on “Use of Sources & Citations”:
- Identify paraphrase vs. quote; flag any orphaned facts.
- Suggest one place to integrate a counter-source.
- Enforce {citation_style} minimally: give one corrected example.
Tip: Keep the correction minimal; over-editing can distort authentic voice. (General guidance consistent with formative feedback best practices.) (SAGE Journals)
5) Clarity & Style
[System add-on]
Focus on “Clarity & Style”:
- List 1 sentence with needless complexity; rewrite once with simpler structure.
- Identify 1 recurring style pattern (e.g., passive voice without agent) and give a fix-it rule.
- Do not correct every error; choose patterns students can generalize.
Less is more: prioritize patterns over line-editing. (files.ascd.org)
6) Mechanics (Grammar, Punctuation)
[System add-on]
Focus on “Mechanics”:
- Give 2–3 pattern-based notes (e.g., comma splices after conjunctive adverbs).
- Provide one corrected example for each pattern.
- Avoid overwhelming lists; no redlining the whole draft.
Tone controls & bias checks (paste these into System)
Tone guardrails (always on):
Tone:
- Be appreciative first (“Strength”), then constructive (“Next step”).
- Avoid labels like “weak/poor”; use neutral, task-focused language (“This paragraph does not yet connect X to Y”).
- Use second person sparingly; prefer “The draft” or “This paragraph” to reduce personalization.
These align with guidance that effective feedback is user-friendly, specific, and non-evaluative. (ASCD, SAGE Journals)
Bias minimizers:
Fairness & bias controls:
- You are BLINDED to demographics and must not infer identity or language background.
- If the text quality is uneven, attribute issues to draft features, not the writer’s identity.
- Use this fairness check before returning output:
1) Did any comment assume demographics? If yes, remove.
2) Is feedback equivalent in depth across criteria? If unbalanced, add 1 “Next step” where missing.
3) If confidence < 0.5 for a criterion, mark “unclear—needs more evidence.”
The blinding step, confidence tagging, and explicit checks align with AI risk frameworks that recommend documented controls and uncertainty communication. (NIST Publications)
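The confidence check is easy to enforce in code as well as in the prompt. A rough post-processing sketch follows; the 0.5 threshold and fallback wording are assumptions you can tune, and check 1 (demographic assumptions) still belongs to the prompt and the human reviewer rather than code.

LOW_CONFIDENCE = 0.5

def apply_fairness_checks(criteria: list[dict]) -> list[dict]:
    """Mark low-confidence criteria and make sure every criterion has a next step."""
    for criterion in criteria:
        if criterion.get("confidence", 0.0) < LOW_CONFIDENCE:
            # Mirror the prompt rule: low confidence means "unclear", not a guess.
            criterion["next_step"] = "Unclear - needs more evidence; flagged for instructor review."
            criterion["needs_human_review"] = True
        if not criterion.get("next_step"):
            # Keep feedback depth balanced across criteria (check 2).
            criterion["next_step"] = "Add one concrete revision for this criterion in the next draft."
    return criteria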
Feedback mixing: strengths + next steps
Rather than giant comment dumps, return two short lines per criterion:
[System add-on]
For each criterion, produce:
- "Strength": 1 sentence naming what works (with evidence).
- "Next step": 1 sentence stating a concrete action the student can take in the next draft.
Keep each to ≤ 22 words.
This format is consistent with evidence favoring goal-referenced, actionable cues. (ASCD)
Quick libraries (copy-paste)
Rubric-agnostic “mini” template
[System]
Role: Writing coach. Output: JSON (criteria → strength, next_step, evidence, confidence).
[User]
Criteria: Thesis; Organization; Evidence/Reasoning; Sources/Citations; Clarity/Style; Mechanics.
Context: {course, genre, citation_style}
Student draft: {student_text}
College argumentative essay (APA citations)
[System]
Use APA in examples. Require at least one counterargument location. Suggest one transitional phrase per paragraph.
[User]
Rubric (APA): {rubric}
Draft: {student_text}
History DBQ (document-based question)
[System]
For each point, reference the document label (Doc A/B/C). Distinguish sourcing (author POV, audience) from evidence. Suggest one contextualization sentence.
[User]
DBQ prompt + docs summary: {prompt}
Draft: {student_text}
Short-response (150–250 words)
[System]
Return only: 2 strengths, 2 next steps, 1 suggested sentence-level fix. 120 words max.
[User]
Prompt & draft: {text}
Calibration (5–10 minutes that pays off)
- Pick three exemplars (A/B/C range), run the base template, and compare with instructor feedback.
- Adjust: tighten tone, add course-specific “musts” (e.g., mention of opposing claim).
- Freeze the system prompt and re-use it for the course to reduce variance.
- Sample 10–20% of outputs each assignment, especially low-confidence cases, for human review (a sampling sketch follows this list). That practice mirrors testing guidance emphasizing validation and fairness documentation. (Educational Testing Standards, AERA)
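A small sketch of that sampling step, assuming each result carries the feedback_json object from the export schema below; the 15% rate and the low-confidence-first priority are assumptions, not fixed rules.

import random

def pick_for_review(results: list[dict], rate: float = 0.15, threshold: float = 0.5) -> list[dict]:
    """Select outputs for human review: all low-confidence items, then a random sample of the rest."""
    low_conf = [r for r in results
                if any(c.get("confidence", 1.0) < threshold
                       for c in r["feedback_json"]["criteria"])]
    remaining = [r for r in results if r not in low_conf]
    target = max(0, int(rate * len(results)) - len(low_conf))
    return low_conf + random.sample(remaining, min(target, len(remaining)))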
Exporting feedback to your LMS (Canvas, Moodle, Brightspace, Google Classroom)
If you want comments and rubric values to land in the gradebook:
- LTI 1.3 + Assignment & Grade Services (AGS) lets a tool post rubric-aligned scores/columns and results back to LMS gradebooks. (Canvas, Moodle, and Brightspace support AGS.) (IMS Global, Instructure Dev Docs, Moodle Docs, Brightspace)
- The Google Classroom API supports programmatic returns of scores and private comments. (Google for Developers, Google Help)
Export schema example (safe to map to AGS line items):
{
  "student_id": "abc123",
  "assignment_id": "essay-01",
  "rubric_scores": [
    {"name": "Thesis", "points": 3, "out_of": 4},
    {"name": "Evidence", "points": 7, "out_of": 8}
  ],
  "feedback_json": { "...": "object from prompts" }
}
- In Canvas, configure your tool with AGS and create line items for criteria; post scores per criterion and an overall score if you choose (a posting sketch follows this list). (Instructure Dev Docs)
- In Moodle, LTI Advantage allows multiple grades per resource link (handy for criterion-level scores). (Moodle Docs)
- In Brightspace, use the AGS LineItem/Result/Score trio. (Brightspace)
- For Google Classroom, use the Submissions and Grades endpoints to set points and add private comments. (Google for Developers)
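For AGS, a score is published by POSTing to the line item's /scores endpoint with the application/vnd.ims.lis.v1.score+json media type. A minimal sketch with the requests library; the line-item URL and OAuth2 access token come from your LTI tool's registration, which is out of scope here.

from datetime import datetime, timezone
import requests

def post_ags_score(lineitem_url: str, access_token: str, user_id: str,
                   points: float, out_of: float, comment: str = "") -> None:
    """Publish one criterion score to an LTI AGS line item."""
    score = {
        "userId": user_id,
        "scoreGiven": points,
        "scoreMaximum": out_of,
        "comment": comment,                      # optional free-text feedback
        "activityProgress": "Completed",
        "gradingProgress": "FullyGraded",
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # Note: some platforms put a query string on line-item URLs; if so, insert
    # "/scores" before the "?" rather than appending it at the end.
    resp = requests.post(
        lineitem_url.rstrip("/") + "/scores",
        json=score,
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/vnd.ims.lis.v1.score+json",
        },
    )
    resp.raise_for_status()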
Privacy & policy: If using a third-party tool, ensure your vendor aligns with applicable testing and privacy standards, and document intended use and error analyses. (Educational Testing Standards, AERA)
Guardrails for reliability & fairness
- Blind inputs. Remove names and identifiers from the student text before prompting (see the sketch after this list).
- Confidence tagging. Ask for a confidence value from 0–1 per criterion; route low-confidence items for human check.
- Hallucination control. Require a quote/snippet for each comment; if none, return “unclear.” (Hallucinations are a known risk in automated feedback.) (Educational Data Mining)
- Document controls. Log model, version, prompts, and parameters per assignment. NIST’s AI RMF encourages explicit risk controls and provenance. (NIST Publications, NIST)
- Institutional guidance. Pair these practices with local assessment policies and recognized standards (AERA-APA-NCME). (Educational Testing Standards)
- Equity lens. See emerging work on bias in automated writing evaluation and LLM-mediated feedback; treat the system as decision support, not sole arbiter. (ERIC, PubMed, ScienceDirect)
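A rough blinding pass before prompting might look like the sketch below. Regexes are a blunt instrument, so pair this with a roster-based name list where you have one; the patterns here are assumptions, not a complete de-identification method.

import re

def blind_text(text: str, known_names: list[str] | None = None) -> str:
    """Replace known student names, emails, and ID-like tokens before prompting."""
    for name in known_names or []:
        text = re.sub(re.escape(name), "[STUDENT]", text, flags=re.IGNORECASE)
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)   # email addresses
    text = re.sub(r"\b\d{6,}\b", "[ID]", text)                   # long numeric IDs
    return text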
Copy-ready: full prompt pack
Paste the System block once; then attach the relevant add-ons by criterion for each run (a composition sketch follows the add-on list below).
[System]
You are an experienced writing instructor. Feedback must be KIND, SPECIFIC, ACTIONABLE.
- Anchor to rubric criteria by name; give 1 strength + 1 next step per criterion.
- Quote a SHORT snippet from the draft as evidence per point.
- Avoid identity inferences; if unsure, say “unclear—needs more evidence.”
- No grades, no chain-of-thought. Max 120 words per criterion.
Return JSON: { model, version, criteria:[{name, strength, next_step, evidence, confidence}], overall_summary }
Add-ons you can include:
- Thesis focus
- Organization mapping
- Evidence & reasoning
- Sources & citations ({citation_style})
- Clarity & style
- Mechanics
- Tone guardrails
- Bias controls
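One way to keep the pack reusable is to compose the System block from the frozen base text plus the selected add-ons at run time. A small sketch; the add-on strings stand in for the full blocks from this guide and are abbreviated here.

BASE_SYSTEM = "You are an experienced writing instructor. Feedback must be KIND, SPECIFIC, ACTIONABLE. ..."

ADD_ONS = {
    "thesis": "Focus on 'Thesis/Claim': is there a specific, arguable claim early in the essay? ...",
    "organization": "Focus on 'Organization': map each paragraph to a purpose. ...",
    "tone": "Tone: be appreciative first ('Strength'), then constructive ('Next step'). ...",
    "bias": "Fairness & bias controls: you are BLINDED to demographics. ...",
}

def compose_system_prompt(selected: list[str]) -> str:
    """Join the frozen base prompt with the add-ons chosen for this run."""
    return "\n\n".join([BASE_SYSTEM] + [ADD_ONS[name] for name in selected])

# system_prompt = compose_system_prompt(["thesis", "organization", "tone", "bias"])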
Packaging & classroom workflow (5 steps)
- Upload rubric and freeze the System prompt.
- Batch run on drafts; flag low-confidence items.
- Spot-check ~10–20% (borderlines + low confidence).
- Release feedback + next-step mini-tasks (revise thesis, add counter-evidence).
- Export rubric scores/comments to the LMS via LTI AGS or the Classroom API. (IMS Global, Instructure Dev Docs, Google for Developers)
Final thoughts
Standardized prompts won’t make writing feedback robotic; they make it fair and legible. By anchoring to rubrics, using a strength + next step for each criterion, and adding bias controls, you’ll reduce variance while keeping the human relationship at the center. That’s squarely in line with decades of feedback research and current guidance for trustworthy AI. (Conselho Pedagógico, SAGE Journals, NIST Publications)
Ready to Transform Your Grading Process?
Experience the power of AI-driven exam grading with human oversight. Get consistent, fast, and reliable assessment results.