What went wrong with “what went well”? Feedback in science – Part 2 of my researchED Rugby talk

Inspired by Anders Ericsson’s “Peak”, I want to give pupils feedback on every piece of knowledge, and then opportunities for further practice in light of that feedback.


I’d like to pause for a minute to look at feedback: how it has been going wrong for me in science, and why I believe textbooks built on this model offer such a powerful improvement.

My school currently has a feedback policy of written comments in pupil books under the headings What Went Well (WWW) and Even Better If (EBI). We also have Act Now, so we always give pupils a task to do to improve their work, and purple pen time in lessons, in which every pupil has to use the feedback to improve their work. Now, to be fair to my school, it has been turned around from being quite famously bad to quite amazingly good in terms of behaviour. I’m a big fan of the Keep It Simple, Stupid principle, so I understand the adoption of a one-size-fits-all feedback policy. We’re moving into the next stage now that we’ve got behaviour right, so I’m hopeful the school will consider subject-specific feedback policies. Allow me to present the case for them.

WWW/EBI might work for a quality-model subject, i.e. where the work is a biggish task like an essay. If a pupil writes an essay, I can perhaps tell them that their argument is strong but they need to structure it better. I might use an exemplar to illustrate what I mean. I might make specific recommendations such as “present the arguments and counter-arguments alternately”.

(There are in fact arguments that subjects assessed using the quality model are best taught using smaller tasks that are more like those on the difficulty model, but we’ll leave that for now… )

The fact is, science is a difficulty-model subject. Not only is the final assessment of science, the exam, a series of short-to-medium questions of varying levels of difficulty; I’d argue that the best work in our lessons is to do many questions, often even shorter than exam questions. Questions on every piece of declarative knowledge we want pupils to have, and repeat questions to build fluency in every piece of procedural knowledge we want them to have. Questions that have single, right-or-wrong answers. Questions like the ones in my textbooks.

It’s possible to create WWW/EBI/ACT statements for pupil work on these questions, but it’s undesirable for the following reasons:

  1. It’s hard. I’ve spent too long looking at books wondering what I can say in this format that makes sense.
  2. It takes too long, both because it’s hard and because it has to be written in each book.
  3. Giving WWW/EBI-type feedback encourages us to change what we do in our lessons to fit the feedback model. In the past I did loads of write-ups of experiments because it was easy to give comments like “WWW: you gave evidence for your conclusion. EBI: describe the quantitative relationship between variables.” But this is the tail wagging the dog. Our feedback needs to work for our lessons, not the other way round.
  4. The fourth reason why WWW/EBI is undesirable in science is this: we can do so much better. If we check each answer in lesson against pre-prepared answers, pupils can get, what, 20 pieces of highly specific feedback in a single lesson. If we then re-teach where needed, and provide equivalent questions for pupils to do after receiving feedback, we can create something like deliberate practice that matches the nature of our subject instead of distorting it.

I believe textbooks can be the foundation of better feedback in science.

Underneath all the questions in my textbooks, I’ve put all the answers. Pupils self-mark in red. Comparing their work with the answers is feedback, and crucially they’re getting it on every single question, not just on the blob of “their work for that lesson”. I also get pupils to write a metacognitive note next to any wrong answers.

Often they can see what they did wrong, so they describe it.

If they don’t know why they got the answer wrong, they put a big “RT” next to the question. I can scan pupils’ books as I go round, made easier by the red marking: this is feedback from pupil to teacher. I can then re-teach where needed. This is “adaptive teaching”, as Dylan Wiliam says. For every question they got wrong, pupils do the equivalent question in purple.

Then they can mark again using the answers.

This is, I hope, getting close to the deliberate practice described by Ericsson, that leads to expertise.

I’ve been using whole-class feedback on top of this: it takes about 20 minutes to read the books of a class of 30 if you get pupils to hand them in open at that lesson’s page. I check they’ve been using the feedback system properly, and add any extra feedback that’s needed. We then spend 5–10 minutes as a class making any further improvements with the help of the visualiser.

This model for feedback feels nimble, responsive and precise; in fact it’s highly personalised, but in a good way! It’s really exciting to see pupils’ work: often there are just tiny bits of purple pen in a sea of black, but you know it’s exactly the bit they needed to go over, and you can see straight away whether they got it right the second time round, because they self-mark in red again. Using textbooks in this way means that feedback is powerful, deliberate practice can take place, and every single pupil can make the most of every second of the lesson.


3 thoughts on “What went wrong with “what went well”? Feedback in science – Part 2 of my researchED Rugby talk”


  1. This is great. I’ve tried to bring up the issue of one type of marking for all subjects; it just wasn’t working, especially for science. I started to hate book marking due to the workload of writing WWW/EBIs.

  2. Have you heard of show-me boards? Mini whiteboards, one per student. I use them to quiz my class on music theory and rhythm dictation: they hold up their boards for me to see. I can give individual feedback as the boards come up, and then whole-class feedback with explanation/correction of common misconceptions that I can see at a glance. It wouldn’t work when you want sentence answers etc., but it would work for testing scientific vocab, balancing chemical equations, formulae and short calculations. I started insisting on absolute silence while this goes on, and it made a world of difference to how well it works.
