Evidence, evidence, evidence

A blog from Osiris Educational


It seems like only yesterday that the mantra was education, education, education. How we miss those days of high trust and even higher spending. Now we have moved on to evidence-based education. Heady heights for an increasingly complex arena, but worthy of our attention.

What exactly do we mean when we use such a term? What works? What seems to work? What can be made to work?

When both Hattie and Wiliam said that teachers need to be evaluators of their practice rather than researchers, they were making an important point. Educational research is a highly specialised field. Simply trying things out in one class or school and claiming evidence of impact is fraught with dangers and biases.

I think that when the term “evidence” is used in education, at least three areas are in play:

  • Evidence of what is ‘proven’ to work
  • Internal evidence building
  • Evidence around successful implementation

Using all three together is the ideal; the reality needs a little more unpicking.

Evidence of what is ‘proven’ to work

A great deal of money and time has been devoted to establishing what is proven to work. The EEF (Education Endowment Foundation) has funded hundreds of trials, Hattie and Marzano have undertaken detailed meta-analyses, and thousands of doctoral and Master’s theses have been produced. Methodologies have ranged from qualitative and quantitative studies to randomised controlled trials. ‘What works’ syntheses have been published, and the OECD has devoted endless studies to the question.

So what do we now know?

We now know that some interventions have a higher probability of working than others.

Are they the right ones for your learners, context, budget? Possibly.

Are they easy to implement with fidelity? Not as often as we would like to think.

Do they bring the impact that we would expect? Rarely.

Far too often, trials are not replicated, so the evidence is vulnerable. Roll-out results rarely match trial outcomes – and herein lies the problem! Changing practice in education is not as easy as the research would suggest.

There are many reasons for this: contextual differences, the complexities of roll-out, a lack of incentive in the system for teachers to develop their expertise, underestimation of on-boarding time, leadership guile, pressures inherent in the system for performance, too many moving parts, trying to operate change whilst the ‘aircraft is in flight’, the dynamic nature of the system, ‘already doing that’ syndrome…

So knowing what offers a high probability of success is, on its own, unlikely to bring about the promised rewards; otherwise improvements in education and outcomes for learners would have come easily to all schools, and education would be amazing.

There is another problem which sometimes gets overlooked. Virtually all the research uses test scores to prove impact. This elevates passing the test to the end outcome of education. It also favours narrow interventions in tight scenarios, as they tend to produce higher effect sizes (see the illustration below). Sadly, the sum of the parts does not always equal the whole.
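To see why tight scenarios flatter the numbers, consider how an effect size is typically calculated (the figures below are invented for illustration; they are not drawn from any of the trials mentioned above):

effect size (d) = (intervention mean − control mean) ÷ pooled standard deviation

A 3-point gain measured across a broad, mixed cohort with a standard deviation of 12 gives d = 3 ÷ 12 = 0.25. The same 3-point gain in a narrow, homogeneous group tested on a tightly focused measure, with a standard deviation of 6, gives d = 3 ÷ 6 = 0.5 – double the apparent impact for an identical improvement. Tight scenarios shrink the denominator, and the headline number grows.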

Internal evidence building

If we assume that context is important and that, whilst all schools have commonalities, they differ in inheritance, expertise and funding (to name but three), there is a strong case that the initial approach should focus on an internal examination.

The inspection process benchmarks against nationally related standards. It alludes to capacities. What it does not do is unpick the process by which internal judgements are made. That would require the deliberate building of a self-evaluative engine: rigorous and continuous in process, drawing on evidence from across the school on all the factors pertaining to impact and outcomes.

In our experience, schools are hugely data-rich. In most cases they have too much data, and most of it tells the same story. What they are poor at is data integrity, analysis, and rebuilding their systems and protocols to remove bias and prejudice from their judgements. They also lack evidence in key areas – learner agency, collective endeavours and expectations – that could build a much fuller picture.

With this fuller picture it would be far easier to leave alone what is working and abandon what is not. It would also provide a platform for trialling future initiatives, rather than the endemic ‘sow and hope’ currently tipping workload over the edge. That approach can deliver real stress to highly skilled teachers whose practice is effective but not flavour of the month.

Evidence around successful implementation

If we are to improve systematically, it is vital that we understand more about the art and science of implementation. We were recently working with a ‘good’ school that had decided to convert its In-Service days into an hour a week. Over the course of the year they had covered 32 ideas around improving teaching. The net result was nothing.

To their leadership, it all made logical sense. They could share best practice, bring in the odd twilight speaker, and everyone would feel enriched and motivated. To the teachers, it became a messy case of cognitive overload.

They are not alone. Flawed implementation plans abound in education. Leaders share notes from conferences or networking and off we go again.

For implementation to work, a thorough and objective baseline needs to be established. This can be done really quickly, but only if we are absolutely crystal clear, and agreed, on what we are trying to achieve and how it is going to be executed. If we get the starting point wrong, everything else is likely to shrivel.

Having an implementation model is vital. Evaluating at each stage gives feedback on the likely success of the venture. When in-built tests begin to vary from expectations, key questions can be asked, iterations adjusted or abandonment considered. The net result will be a more measured change process.

By analysing the implementation process over time, it can be strengthened, key capacities built and reporting enhanced. Rarely, if ever, does this focus on the implementation capability of the school or organisation take place. It seems we are too busy, too orientated towards doing.

In Summary

If you just put in place high probability strategies and interventions, you will be busy.

If you build an internal evidence base first, work out what is already carrying the impact and scale it up, you will make everyone’s lives that bit easier.

If you can build a science of implementation alongside your own evidence base you may never need to look to silver bullets again.

Osiris Educational is the UK’s leading independent training provider for teachers. We work at the forefront of innovation in education, providing pioneering, challenging and effective training solutions in teaching. We use the best and most renowned trainers to help teachers improve their ways of thinking and approaches to teaching.

To find out more about us, please contact:

info@osiriseducational.co.uk or call 01790 755 787

https://osiriseducational.co.uk/

 
