In previous posts, I’ve written about the need to ‘strip, flip and trace’ when trying to distinguish good educational science from bad. In this post, once again drawing upon Daniel Willingham’s 2012 book – When Can You Trust the Experts? How to tell good science from bad in education – I will look at a number of steps which evidence-informed teachers and school research leads can take to analyse educational research effectively. The rest of this post is in three sections: the role of experience in analysing research; Willingham’s nine steps for analysing research; and the notion of practical significance.
The role of experience
The true test of friendship is not when you agree with someone; it’s when you disagree with them. I have a huge amount of time and respect for Tom Bennett and for his work (along with Hélène Galdin-O’Shea) in developing the researchED movement. Unfortunately, in the following quote from his 2013 book – Teacher Proof – I think Tom gets it wrong.
… there are few things that educational science has brought to the classroom that could not already have been discerned by a competent teacher intent on teaching well after a few years of practice. If that sounds like a sad indictment of educational research, it is. I am astounded by the amount of research I come across that is either (a) demonstrably untrue or (b) patently obvious … Here’s what I believe; this informs everything I have learned in teaching after a decade: Experience trumps theory every time (Bennett 2013, 57-59).
Willingham argues that informal knowledge can mislead us in two ways: first, when we assume our experience is typical when it may in fact have been unusual; second, when we misremember or misinterpret past experience. As Willingham explains:
… ‘I know what happens in this sort of situation.’ I think to myself, ‘My daughter loves playing on the computer. She’ll think the reading program is great!’ I might be right about my experiences – my daughter loves the computer – but that experience happened to have been unusual; perhaps she loved the two programs that she used, but further experience will reveal that she doesn’t love to fool around with other programs. Another reason my experience might lead me astray is that I misremember or misinterpret my past experience, possibly due to confirmation bias. Perhaps it’s not that my daughter loves playing on the computer; actually, I’m the one who loves playing on the computer. So I interpret her occasional, reluctant forays onto the Internet as enthusiasm. (p. 186)
So if teachers cannot rely on their experience to provide guidance on how to proceed, what are we to do? Willingham helpfully identifies four steps that can be taken to help manage our experience:
- recognise that experience can be both fallible and insightful;
- check your experience with others – how does it relate to their experiences or interpretations?
- think of the opposite of what your experience tells you: if you think of an explanation or possible outcome, try to think of the exact opposite and see whether that is also reasonable;
- actively look for everyday examples of events that do not confirm past experience.
Willingham’s Nine Steps Approach to Analysing Evidence
Having discussed the role of experience, it is now appropriate to look in more detail at the steps Willingham suggests you take to analyse evidence. Before we do that, we need to define two terms: the Change and the Persuader.
The Change refers to a new curriculum or teaching strategy or software package or school restructuring plan – generically anything that someone is urging you to try as a way to better educate kids.
The Persuader refers to any person who is urging you to try the Change, whether it’s a teacher, administrator, salesperson, or the President of the United States. (Willingham, p. 136)
Willingham’s nine steps to analyse evidence are summarised in Table 1.
Table 1: Actions to be taken when analysing evidence (Willingham, 2012, p. 205)

| Suggested Action | Why You Are Doing This |
| --- | --- |
| Compare the Change’s predicted effects to your experience, but bear in mind whether the outcomes you’re thinking about are ambiguous, and ask other people whether they have the same impression. | Your own accumulated experience may be valuable to you, but it is subject to misinterpretation and memory biases. |
| Evaluate whether or not the Change could be considered a breakthrough. | If it seems revolutionary, it’s probably wrong. Unheralded breakthroughs are exceedingly rare in science. |
| Imagine the opposite of the outcomes you predict for the Change. | Sometimes when you imagine ways that an unexpected outcome could happen, it’s easier to see that your expectations were short-sighted. It’s a way of counteracting confirmation bias. |
| Ensure that the evidence is not just fancy labels. | We can be impressed by a technical-sounding term, but it may mean nothing more than an ordinary conversational term. |
| Ensure that bona fide evidence applies to the Change itself, not something related to the Change. | Good evidence for a phenomenon related to the Change will sometimes be cited as if it proves the Change. |
| Ignore testimonials. | The person believes that the Change worked, but he or she could easily be mistaken. You can find someone to testify to just about anything. |
| Ask the Persuader for relevant research. | It’s a starting point for getting research articles, and it’s useful to know whether the Persuader is aware of the research. |
| Look up research on the Internet. | The Persuader is not going to give you everything. |
| Evaluate what was measured, what was compared, how many kids were tested, and how much the Change helped. | The first two items get at how relevant the research really is to your interests; the second two get at how important the results are. |
Practical significance

We now need to turn to the role of practical significance in determining how to use research evidence in your practice.
In reading research articles, you will come across the terms statistical significance and effect size. Coe (2002) argues there is a difference between significance and statistical significance: statistical significance means that you are justified in thinking that the difference between two groups is not just an accident of sampling. Effect size, on the other hand, is a way of measuring the extent of the difference between two groups (Higgins et al 2013). If we combine effect size with statistical significance, we can get a sense of the practical significance of a change or intervention.
If the confidence interval includes zero, then the effect size would be considered not to have reached conventional statistical significance. The advantage of reporting effect size with a confidence interval is that it lets you judge the size of the effect first and then decide the meaning of conventional statistical significance. So a small study with an effect size of 0.8, but with a confidence interval that includes zero, might be more interesting educationally than a larger study with a negligible effect of 0.01, but which is statistically significant. (Higgins et al., 2013, p. 6)
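To make these two ideas concrete, here is a minimal sketch in Python of the scenario Higgins et al. describe above. All the numbers (group means, standard deviations and sample sizes) are hypothetical, chosen only to reproduce the pattern in the quotation: it computes Cohen’s d (the difference between two group means divided by their pooled standard deviation) and an approximate 95% confidence interval, showing how a small study with a large effect can miss conventional statistical significance while a very large study with a negligible effect can reach it.

```python
import math

def cohens_d(mean_a, mean_b, sd_a, sd_b, n_a, n_b):
    """Effect size as the standardised mean difference (Cohen's d):
    the gap between group means in units of the pooled standard deviation."""
    pooled_sd = math.sqrt(((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2)
                          / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

def d_confidence_interval(d, n_a, n_b, z=1.96):
    """Approximate 95% confidence interval for d, using the conventional
    large-sample standard error for a standardised mean difference."""
    se = math.sqrt((n_a + n_b) / (n_a * n_b) + d**2 / (2 * (n_a + n_b)))
    return d - z * se, d + z * se

# Hypothetical small study: 10 pupils per group, a large effect.
d_small = cohens_d(mean_a=60, mean_b=52, sd_a=10, sd_b=10, n_a=10, n_b=10)
lo, hi = d_confidence_interval(d_small, 10, 10)
print(f"Small study: d = {d_small:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
# Roughly (-0.11, 1.71): the interval includes zero, so the result is not
# conventionally statistically significant, yet the effect could matter.

# Hypothetical very large study: 100,000 pupils per group, negligible effect.
lo, hi = d_confidence_interval(0.01, 100_000, 100_000)
print(f"Large study: d = 0.01, 95% CI = ({lo:.3f}, {hi:.3f})")
# Roughly (0.001, 0.019): statistically significant, but of little
# practical significance.
```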
In other words, as Willingham states, practical significance refers to whether or not that difference is something you care about (p. 203). As such, it requires you, the reader, to make a judgement call. Making judgement calls about research evidence you are unsure about is never easy, but Willingham suggests three approaches to tackling the issue:
- make a mental note that you think the research may be of practical significance;
- if you have the opportunity, raise the matter with the Persuader;
- ask how the practical significance of the Change relates to your goals and what you are trying to achieve – is the improvement on offer consistent with your objectives and the resources available to achieve them?

The next step
Having stripped, flipped, traced and analysed the evidence, the next step is to consider whether the Change should be adopted, and that will be the focus of a forthcoming post.
References

Bennett, T. (2013) Teacher Proof: Why research in education doesn’t always mean what it claims, and what you can do about it. London: Routledge.

Coe, R. (2002) It’s the Effect Size, Stupid: What effect size is and why it is important. Paper presented at the Annual Conference of the British Educational Research Association, University of Exeter, England, 12-14 September 2002.

Higgins, S., Katsipataki, M., Kokotsaki, D., Coe, R., Elliot Major, L. and Coleman, R. (2013) The Sutton Trust-Education Endowment Foundation Teaching and Learning Toolkit: Technical Appendices. London: Education Endowment Foundation.

Willingham, D. (2012) When Can You Trust the Experts? How to tell good science from bad in education. San Francisco: Jossey-Bass.
This is a re-blog post originally posted by Gary Jones and published with kind permission.
The original post can be found here.
Featured Image Source: By Jimmie on Flickr, under CC BY 2.0.