How do you know if feedback is any good?
Senior Curriculum Adviser KassiElla Rankine asks: 'What constitutes "good feedback", and how do we ensure our AI feedback hits the mark?'
KassiElla Rankine
Curriculum Advisor
14 Jan 2026
We all know that feedback is important.
It’s mentioned in the Teachers’ Standards.
It’s the name of a policy that every school has.
It’s frequently the topic of whole-school training days.
It’s the job of a teacher to provide feedback to their students — but how does that teacher know if their feedback is actually any good?
Our answer: Good feedback means that the student receiving it improves as a direct result. If feedback doesn’t help them to improve, it’s not worth giving. That’s why we’ve trained our AI to capture every minute detail, every nuance of a child’s writing, in order to generate specific feedback that will help them to make progress.
Before we look at how we curate our feedback, let’s consider the alternatives.
Different approaches to feedback
There are a variety of approaches to feedback that a marking policy or an individual teacher might take.
Highlighting all errors
Although it can be tempting to correct every mistake a child makes, doing so is completely unachievable. Not only does it add more work to a teacher’s (already overflowing) plate, but it also fails to do the very thing that feedback is supposed to do: help. Instead, it overwhelms the student by highlighting to them just how many skills they haven’t yet secured. Without further conversation with their teacher, they’re left in the dark about which errors take priority over others. Where are they supposed to start? If a child is making multiple errors across multiple skills, they’re unlikely to know how to fix any of their mistakes without more guided feedback — feedback that a teacher could not possibly give for every error in every child’s piece of writing.
Our response: Quality over quantity is key. Two bespoke ‘Even Better If’ comments, detailing what the error was and how to fix it next time, will lead to better outcomes than trying (and failing) to fix every mistake in one fell swoop.
‘Fix x’
Highlighting all errors can be overwhelming — but so can highlighting one error and telling the student to ‘fix it’. Again, they might ask themselves, “Where am I supposed to start?” This kind of feedback (often given ‘for feedback’s sake’) isn’t useful as it doesn’t help the child to understand what mistake they have made. Telling a child to ‘fix your semicolons’ doesn’t guide them in improving their work — if they’re using semicolons incorrectly, how are they supposed to magically know how to use them correctly without feedback that explains?
Our response: For feedback to be effective, it needs to dig deep and find the root of the error at hand. Does the child lack the knowledge that semicolons are used to join two main clauses? Have they used a semicolon in a sentence like “I didn’t want to go; but my hand was forced”? The more granular the diagnosis, the more precise (and helpful) the feedback, and the more progress can be made.
Live feedback
There are many benefits to using live marking in the classroom: corrections made while students are still engaged in the writing process; misconceptions addressed before they become embedded habits; the revising and editing process explicitly taught; live modelling, particularly helping learners with SEND and/or EAL; and a lighter teacher marking workload at the end of the day. All of these reasons (and more) explain why so many teachers choose to use live marking with their students. However, two foundations of good feedback are missing from this approach: time and depth. In a single lesson, a teacher does not have enough time to sit down with every child in their class and have a detailed discussion about how to improve their writing.
Our response: Live feedback is better used to correct forgotten punctuation (not misunderstood punctuation) or to address whole-class misconceptions in the moment. Using this approach for more embedded misconceptions is unlikely to support children in making progress, as there’s not enough time to go into the depth needed.
Focusing on age-related skills
Following the curriculum too rigidly often leads to unrealistic feedback. If a child hasn’t yet secured skills from previous year groups, providing them with feedback on current year group skills may be a fruitless endeavour. Consider a Year 4 child who omits every comma after a fronted adverbial — yes, that is an error worth highlighting as it’s an age-related skill. However, if that same child cannot demarcate a sentence and use the appropriate punctuation to separate clauses (“the moon was shining in the sky it illuminated the path leading away from the gate”), should the missing commas be the focus of your feedback? Issues with demarcation often stem from a gap in understanding around clauses and how to separate them correctly, and so it’s no surprise that the same child doesn’t apply comma rules to fronted adverbials.
Our response: There’s a reason that prerequisite knowledge exists — some concepts cannot be embedded without secure prior knowledge of others. As a result, it’s important for any misconceptions in these earlier skills to be addressed first and foremost, allowing time for practice and new habits to be formed. Only when these foundations have been secured can new, age-related skills be built on top.
Highlighting errors that are not purpose-specific
An error is an error no matter what the genre of writing is. However, when writing for a purpose, some errors are more important than others. It’s important to prioritise which errors are preventing that piece of writing from being the best it can possibly be: what kind of feedback would help that child to improve their writing for that purpose? If a child in your class is using badly written figurative language in a newspaper report, the feedback shouldn’t be ‘use more accurate comparisons than x’ — it should be ‘use more formal language like y’.
Our response: Some feedback, such as that on grammar and punctuation, focuses on technical accuracy and is therefore relevant to all writing purposes. To create truly great writers, however, purpose-specific feedback should be provided as well. How can we expect the children in our class to write effectively for a range of purposes and audiences if we aren’t picking up on the relevant errors or missed opportunities? Critiquing the use of dialogue in a diary entry is unlikely to help a child improve how they write diary entries — encouraging them to use more introspective language, however, will help them to make steps in the right direction.
The stylus approach to feedback
We’ve already established that good feedback helps a child to improve. What this improvement looks like, however, will vary depending on why the feedback was needed in the first place:
The feedback might directly correct a misconception that the student was carrying, resulting in fewer (or no) errors in the future, e.g. thinking an apostrophe should be used between the two words being contracted rather than in place of the missing letter (do’nt instead of don’t).
It might point out an error which the student is able to fix using their own prior knowledge, e.g. missing a closing bracket when using parentheses.
It might suggest different ways of approaching a task or skill, resulting in the student uplevelling their application of that skill and/or expanding their skillset, e.g. using a one-word sentence (or paragraph) for dramatic effect.
In some cases, feedback may be used more by the teacher than the student. This kind of feedback might help the student to see where they need to improve and help the teacher to plan and deliver a task that leads to that improvement, e.g. addressing comma splicing, which shows a gap in understanding about main clauses.
Training AI to generate feedback that hits one or more of the criteria above means understanding the depth, breadth and granularity of knowledge required by an expert teacher — and then breaking that down in such a way that AI can assess with the same depth, breadth and granularity.
Our AI doesn’t just pick up on errors; it’s rigorously trained to explain why errors are errors. We will never offer feedback in the form of ‘you’ve used inverted commas incorrectly’ as this does nothing to help the child improve. Instead, we pinpoint exactly what error was made (you included the unspoken word ‘said’ in your inverted commas; you only used one inverted comma instead of a pair; you used the same pair of inverted commas to capture two characters’ dialogue) and provide follow-on tasks that guide the student on how to fix it. Without this granular diagnosis, how is a child able to learn from their mistake and make steps to improve?
The trickier aspects of feedback come in cases like the last criterion, where misconceptions cannot be addressed with one stand-alone comment and follow-on task. A piece of feedback about comma splicing may include the following:
An explanation of what a main clause is
Suggested punctuation marks to use instead of a comma
Multiple correct and incorrect examples
A chance to practise fixing comma splicing
Even so, it’s still unlikely to ensure that the child never comma splices again; a misconception like this often runs deeper than others, meaning time is needed to untangle it. This is why, alongside student data on our teacher tool and individual student reports, we also generate whole-class feedback, suggesting intervention groups made up of children who are making similar errors. This kind of data is invaluable to teachers as it doesn’t just give them more time to plan — it gives them more information to plan with.
Whether it’s used by the teacher or the student, if our feedback doesn’t make a demonstrable impact on student progress, it is not feedback worth giving. Validating our judgements and our AI-generated feedback is a continual process — not an outcome — to ensure that we are providing the highest quality, most granular feedback that we can to allow children to make the progress that they deserve.