The Difference Between Innovation-Led Evidence And Evidence-Led Innovation

Recently, I was part of a high-level discussion about maths and computer science education and how we could improve their reach and effectiveness.

Rather quickly the question of evidence came up, and its role in driving innovation. It’s taken me a few days to realize that there were actually two very different “importance of evidence” conversations–one with which I completely concur, and one with which I vehemently disagree.

In the end, what I believe this exposes is a failure of many in charge of education to understand how major innovation usually happens–whether innovation in science, technology, business or education–and how “evidence” can drive effective innovation rather than stifle it. In an age of massive real-world change, reflecting that change correctly and rapidly in education is crucial to future curricula, their effective deployment, and optimizing for the right educational outcomes.

I’m going to call the two evidence utilizations ‘innovation-led evidence’ and ‘evidence-led innovation.’ The difference is whether you build your ‘product’ (e.g., phone, drug, curriculum) first, then test it (using those tests for iterative refinement or rejection), or whether formal evidence from previous products becomes the arbiter of any new products you build.

The former–‘innovation-led evidence’–is highly productive in achieving outcomes, though of course care must be taken that those outcomes represent your objectives effectively. The latter–‘evidence-led innovation’–almost by definition excludes fundamental innovation, because it means only building things that past evidence said would work.

When you build something significantly new, it isn’t just a matter of formally assembling evidence from the past in a predictable way. A leap is needed, or several. Different insights. A new viewpoint. Often in practice, these arise from a mixture of observation, experience and what still appears to be very human-style intelligence. But wherever the leap comes from, it isn’t straightforwardly ‘evidence-led.’

I strongly agree with the late physicist (and friend of my brother’s) Richard Feynman, who explained nicely in one of his famous 1960s Caltech lectures how the scientific process works. To summarize: guess, make a theory, then test it and compare the results with the theory. (Film of this lecture exists–see the first minute!)

In the case of technology, the ‘theory’ is the product; in pharmaceuticals, it’s the drug; and in education (for the most part), it’s the curriculum.

‘Evidence-led innovation’ stifles major innovation–it locks out the guess–yet I firmly believe that that’s what most of “evidence-led education” is talking about, with painfully little “innovation-led evidence” applied.

I’ve faced this repeatedly with Computer-Based Maths. I’m asked, “Do you have evidence it works?” I sometimes answer, “Where’s your evidence that today’s traditional maths education works? Have you done randomized controlled trials?”

As quickly as we can build curricula, fund their development and set up projects in different countries, we are starting to gather evidence. Something that slows this down is the need for student assessments that accurately reflect the required outcomes: it’s not just a matter of comparing exam results before and after; open-ended computer-based maths assessments are needed too.

One problem with the ‘evidence-led innovation’ crowd is that they often have no idea how hard it is to build something completely new. They think you can do micro-innovations, then test, then micro-innovate again, then test again.

Actually, so far CBM is the hardest innovation I’ve been involved in. It’s been amazing to me just how different every aspect of the maths curriculum becomes when you do not need to assume hand-calculating. Equally amazing is how deep everyone needs to dig into their own understanding to uncover those differences, particularly since those involved have learnt maths traditionally.

You might ask whether now is really the time for a new maths curriculum. Can we really take the risk? As guesses go, the idea that maths education should be the same subject as maths in the real world (i.e. using mechanised computation), and not the current hand-calculating proxy, is an extremely sure-footed one. The real danger lies in not making that leap alongside the real world.

Let’s have the courage to develop and test CBM around the world, indeed more thoroughly than any maths curriculum has been tested before.