This term I have mostly been getting myself in a pickle about measuring student progress. I want to do it with integrity, reliability and validity but I wonder whether all three of these are possible. When considering student progress, I have been inspired by the work of Alvin Gouldner and I wonder whether we have created our own magnificent minotaur.
Although Gouldner was writing about the myth of value-free sociology, I think many of his arguments should be considered when thinking about how we deal with data on student progress. In his polemic essay, he suggests that we should be aware of the emergence of a group myth and the narratives that surround these ideas. The stories we tell ourselves to justify this approach are well formulated and professionally validated, but do they obfuscate or move us further from the truth?
The lair of this minotaur, although reached only by a labyrinthian logic and visited only by a few who never return, is still regarded by many sociologists as a holy place… Considering the perils of the visit, their motives are somewhat perplexing. (1962; p.199)
I guess I am wondering whether our obsession with data has not become our very own professional minotaur. Our individual, departmental and whole-school methodologies feed the beast and keep the fear alive. We are essentially dealing with uncertainties; does the minotaur exist to help us cope with that fact? A defence against the unknown?
What grade is it anyway?
Since we are starting with new specifications this year, we have little or no guidance on how the grades will be broken down, and at times it feels as though we are lost at sea. Our data monsters nonetheless require that we feed them grades, but whether our approximations are anywhere near the real thing, only time will tell. In the meantime, I keep using the old rubrics, but who knows if the new grade boundaries will be anywhere close? Have I got to grips with this new mark scheme? Even if I felt 100% confident in my allocation of raw marks, I essentially could not tell you what grade it would be.
Will they get their target grade?
On the face of it, this is a ridiculous question to ask after only a few weeks of learning. How do I know? The students have yet to reach the fulcrum of the specification whereby they will be able to apply the skills and knowledge they have picked up along the way. I can tell you how well they can deal with what we learnt in the Autumn term but this is not necessarily an indicator of their journey over the year. If anything, a young person’s progress should be spiky rather than a gradual incline as they have individual responses to different topics and skills.
To keep in line with changes in KS3, some schools are resorting to the fudge of asking teachers whether students are above target, on target or below target with perhaps some ‘working at’ or quantitative measure alongside.
Oh dear, I am finding this heuristic quite challenging to apply to my mark book and what I observe in the classroom. To give a progress check, I have to triangulate my own grades with those of the other teachers with whom I share my classes and decide on the best fit. This sometimes feels like a hunch rather than a wholly evidence-based approach. I guess the question is: how good are my hunches?
Type 1 and Type 2 Errors
In a wider discussion about workload and data, Nick J Rose reminded me that any data decision is subject to two types of mistakes: type 1 errors (false positives), where we judge an underachieving student as making good progress, and type 2 errors (false negatives), where a student who is making good progress is judged to be below target. If we take a more cynical view, we tend to make more type 2 errors; with a more generous approach, we will be guilty of more type 1 errors.
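The trade-off above can be made concrete with a small sketch. This is purely illustrative: the judgements and outcomes below are invented, and "actual outcome" stands in for whatever later evidence (exam results, end-of-year progress) we judge ourselves against.

```python
# Counting type 1 and type 2 errors when teacher judgements are compared
# against later outcomes. All data here is hypothetical, for illustration.

# Each pair is (teacher_judgement, actual_outcome); "good" = making good progress.
records = [
    ("good", "good"),
    ("good", "poor"),   # type 1: judged as making progress, actually underachieving
    ("poor", "good"),   # type 2: judged below target, actually on track
    ("poor", "poor"),
    ("good", "poor"),   # another type 1 error
]

type1 = sum(1 for judged, actual in records if judged == "good" and actual == "poor")
type2 = sum(1 for judged, actual in records if judged == "poor" and actual == "good")

print(f"Type 1 errors (false positives): {type1}")  # 2
print(f"Type 2 errors (false negatives): {type2}")  # 1
```

A more generous marker shifts pairs into the first error category; a more cynical one shifts them into the second. Neither count can be driven to zero without better evidence.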
However, there is a Goldilocks solution, the safe bet, the mid-point of 'on target'. In November, can you really put your hand on your heart and say that a student is on target to achieve their ALPS or more in June? Hmmm.
The problem is we are dealing with uncertainty. I feel confident in my ability to comment on a student's behaviour, engagement and homework effort. I can even give you the grade of their last essay and a summary of my mark book, but there remains much uncertainty in how this all adds up to progress over time and whether they are above, below or on target. It feels too crude, too reductionist, to summarise a term's learning into these nominal categories. It does not seem to do justice to their learning journey.
Clearly, there are different flavours of validity but at its most basic we need to consider the quality of whether the measure in any way reflects the truth. There are lots of barriers to the truth.
Psychologists use the term confirmation bias to explore how we interpret data to fit our pre-existing ideas. We all like to think our opinions are based on years of rational, objective analysis but unfortunately, like any other sphere of human activity, teacher judgements are prone to a range of biases and we are back to the problem of me using my hunches.
Are we all using the same language when we talk about student progress? Have we operationalised our concepts well enough? What counts as on or above target? Is this a consistent measure between teachers, departments, or even between students in the same room? Will I use the same set of observations when I come to complete the progress data later in the year? How do we triangulate progress in classes that have two teachers? The measures need to be consistent.
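One way to test that consistency is to compare the labels two teachers give to the same students. A hedged sketch, with invented labels: raw percent agreement plus Cohen's kappa, which discounts the agreement we would expect by chance alone.

```python
# Comparing two teachers' progress judgements for a shared class.
# The labels below are hypothetical, for illustration only.
from collections import Counter

teacher_a = ["above", "on", "below", "on", "on", "above", "below", "on"]
teacher_b = ["above", "on", "on",    "on", "below", "above", "below", "on"]

n = len(teacher_a)

# Raw percent agreement: how often the two teachers give the same label.
observed = sum(a == b for a, b in zip(teacher_a, teacher_b)) / n

# Chance agreement: probability both pick the same label at random,
# given each teacher's own label frequencies.
freq_a = Counter(teacher_a)
freq_b = Counter(teacher_b)
expected = sum((freq_a[label] / n) * (freq_b[label] / n)
               for label in set(teacher_a) | set(teacher_b))

# Cohen's kappa: agreement above chance, scaled to the maximum possible.
kappa = (observed - expected) / (1 - expected)

print(f"Observed agreement: {observed:.2f}")  # 0.75
print(f"Cohen's kappa: {kappa:.2f}")          # 0.60
```

Two teachers can agree 75% of the time and still only be moderately consistent once chance is accounted for, which is exactly the worry about whether our categories are operationalised well enough.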
Who is the data for anyway?
One of our accountability measures has to be student progress. I don’t have a problem with it, per se, but I wonder whether there are perverse effects of using our crude student progress data as a means of measuring ‘good’ teaching and pay reviews. Will teachers be inclined to make the spreadsheet go from red to green rather than the more difficult spiky profile of a young person getting to grips with a subject and skills taught via different topics and units? We are at our best when we are at our most transparent, but the reality is often more complicated than the snapshot.
Snapshot versus longitudinal?
I guess I am worried about whether progress in class and in certain topics is a good indicator of progress over time. I think snapshot data is really useful as a form of assessment for learning, but progress over time needs to be measured in a more nuanced and sophisticated way.
The labelling process.
If we accept that our ability to measure student progress is at best limited, what happens once we start to apply the labels of under-achieving, more able, or the comfortable middle? We know labelling is a complicated process and can be a force for good as well as reinforce a more negative self-concept. The self-fulfilling prophecy is never as straightforward as it seems, and there are a variety of student responses to these labels. I guess it depends on how we manage the student feedback about progress, what this means to them, and how it informs our strategies in the classroom.
Our attempts to create certainty from the unknown, and clarity amongst the chaos of learning, are fraught with problems. We are, after all, measuring human beings, not baked beans, and the objective measurement of human behaviour is a complex and contested phenomenon. However, our attempts matter in terms of how we target our limited resources and respond to individual students. What is needed is a more reflective and reflexive approach that can meaningfully draw on a broader range of experiences of learning.
“We sociologists must—at the very least—acquire the ingrained habit of viewing our own beliefs as we now view those held by others.”
Gouldner (1970; p.489)
I guess I lie in bed wondering whether I have got it right, are the right students on the right lists? Will the monster still find me in the labyrinth?
This is a re-blog post originally posted by Stephen Hickman and published with kind permission.
The original post can be found here.
Gouldner, Alvin W. (1962). 'Anti-Minotaur: The Myth of a Value-Free Sociology', Social Problems, Vol. 9, No. 3 (Winter), pp. 199–213.