The word ‘research’ is increasingly used in education. It signals a profession and conveys a sense of knowing the best steps forward. It can also be used to legitimise long-held beliefs.
Here we explore what is meant by quality of research in education by looking at five levels. Within these, some of the dilemmas around research surface and are explored – many thanks to Professor Jonathon Sharples for his clarity around this area.
Level 0 – “This is what I did”
One teacher’s individual actions, whatever the claimed outcome, do not constitute research unless they pass through some form of moderation. Sharing best practice on this basis provides little if any reliable evidence of effect. This is not classified as research.
Level 1 – Anecdotal
Most evidence in schools is anecdotal: teachers and leaders reporting what they thought when they tried something. It is subjective opinion that may or may not have been grounded in fidelity, proper implementation and/or evidence.
It may be that as the number of anecdotes grows, the statistical probability of a representative picture improves. However, without clarity and specificity this is a highly unreliable way of gathering evidence.
Level 2 – Case Studies
This involves several instances of implementation being recorded against criteria for process and outcomes. It has been recommended that, to be classifiable as research, this requires a minimum of 20 cases involving different types of schools and different lead implementers. This should enable case studies to counteract inbuilt biases.
Single case studies frequently involve too many moderators (variables that could have a causal influence) to be classified as anything more than anecdotal. Action Research is a good way forward for case studies, bringing rigour and evaluative depth.
Level 3 – Qualitative
Qualitative research involves reflection and detailed description to capture the essence of patterns of change. Methods can vary, and at its best qualitative research can provide deep insights into complex areas. Interpretation of findings can still be subjective, but the best qualitative research is moderated at university level.
Some areas, particularly human behaviours and interrelated complex problem solving, are best tackled through this method. Interpretation of findings can be difficult, as studies tend to have slightly different focuses and contexts, making comparative conclusions difficult to draw dependably.
Level 4 – Quantitative
Quantitative research demands measures: groups are tested and data are collected. Perceptions can be gathered, but they must be triangulated if they are to be meaningful.
Increasingly, many of the more esoteric areas can be measured, such as creativity, culture and relationships. Previously, a focus on exam outcomes as the standard for measurement biased research towards more limited areas where tests already existed.
Like qualitative research, quantitative research should be exposed to rigour, using external agencies such as universities to verify outcomes and interpretations.
Level 5 – RCTs
“Randomised Controlled Trials” have become prevalent in education, largely thanks to the work of the EEF (Education Endowment Foundation) and similar groups in the States, plus a growing interest in what works in education. These involve setting up a control group alongside the intervention group, so that outcomes can be directly compared.
RCTs are not easy to run, requiring large numbers of schools to reduce the impact of chance variation. They remain controversial, often relying on national tests in numeracy and literacy or on formal qualifications. Not all interventions lend themselves to RCTs; those with a very direct relationship between inputs and outcomes seem to fare better. For example, a reading intervention has a much tighter relationship with reading scores than a whole-school model of pedagogical development does.
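The core of the comparison an RCT makes can be sketched in a few lines of code. This is a hypothetical illustration, not an EEF method: it computes Cohen’s d, a common effect-size measure, for invented intervention and control scores.

```python
# Illustrative sketch: comparing an intervention group with a control
# group using Cohen's d (standardised mean difference). All scores
# below are invented example data, not real trial results.
import statistics

def cohens_d(treatment, control):
    """Difference in group means, divided by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    m1, m2 = statistics.mean(treatment), statistics.mean(control)
    v1, v2 = statistics.variance(treatment), statistics.variance(control)
    # Pooled standard deviation across the two groups
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

treatment_scores = [68, 74, 71, 79, 66, 73, 77, 70]  # invented data
control_scores = [64, 69, 62, 71, 60, 67, 70, 65]    # invented data
print(round(cohens_d(treatment_scores, control_scores), 2))
```

With only eight pupils per group, a result like this would be dominated by chance; this is why real trials need large numbers of schools, and why randomisation matters, so that the two groups differ only in the intervention itself.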
As we learn more they seem set to play an increasing role.
Increasingly, mixed-methods research is being used, with RCTs combined with qualitative analysis. This leads us to the relatively new kid on the block.
Meta-analysis emerged from Boulder, Colorado more than thirty years ago. Viviane Robinson and John Hattie’s work has popularised it as an approach. It allows all studies in a specific field to be compared. For example, in Visible Learning, Hattie compared all studies into teaching and produced a synthesis of more than 270 interventions.
What this gives us is an overview of what has a high probability of working. Moderators still abound and can explain individual variation; however, in teaching, Hattie has found there to be remarkably few. When applied to learning, the findings are far more complex. This probably goes a long way towards explaining why we continue to obsess over the more observable teaching rather than the less visible impact that teachers have on learning.
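The pooling step at the heart of a meta-analysis can also be sketched simply. This is a hypothetical, minimal fixed-effect example with invented study results: each study’s effect size is weighted by the inverse of its variance, so larger, more precise studies count for more in the combined estimate.

```python
# Illustrative sketch of fixed-effect meta-analysis via
# inverse-variance weighting. The studies are invented examples.
studies = [
    # (effect size, variance of that estimate)
    (0.40, 0.04),  # small study, less precise
    (0.25, 0.02),  # larger study, more precise
    (0.55, 0.09),  # very small study, least precise
]

weights = [1 / var for _, var in studies]
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
print(round(pooled, 2))
```

The pooled figure lands closest to the most precise study, which is the point of weighting: a synthesis is not a simple average of headline numbers.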
To find out more about Visible Learning contact our friendly team on 01790 755787, or email HCharles@osiriseducational.co.uk.