
How to be wrong

This article was first published on the Ratio website.

Alexander Fleming went on holiday without washing his Petri dishes. He might have regretted his mistake had he not noticed, on his return, that mould in the dishes had limited the spread of bacteria, the accident that gave us penicillin. When Charles Goodyear mistakenly put rubber and sulphur into a pot on the stove, he didn’t know he was stumbling into the process that made mass tyre production possible. William Perkin set out to make a synthetic cure for malaria but ended up inventing fabric dyes. Banking on the idea that the universe was static, Einstein put a constant into one of his equations. A bunch of other scientists had to put him right with the standard model of modern cosmology, which describes an expanding universe.

Nobody sets out to be wrong. But it's impossible to avoid error. And mistakes have value. We learn from them, perhaps more than we learn from what we get right. Yet searches of reports, websites and the social media of public systems, foundations and innovative organisations turn up success after success and hardly a word about what goes awry. This partly reflects a natural inclination to paint ourselves and our organisations in a good light. But it also betrays a tendency towards gaming. Being evasive and telling each other what we want to hear is the oil that keeps the worn-out cogs of public systems moving. And then there are straight lies.

We are a network of people working in public systems, foundations, civil society and research organisations. We come together to reflect on how we learn. Once a month we read, listen to podcasts and draw out themes that connect our work. This might seem analytic and dry. Sometimes it is. But our meetings also surface a lot of emotion. Like you, we came into this work to do good. We feel frustrated by false claims, by organisational and system constraints, and by language that is designed to hide more than it reveals.

We have found that the way in which we learn is changing. Some of this change is incremental and cumulative. The work of health scientist and policymaker Don Berwick helped us a lot, framing the gradual shifts in the relationship between evidence, policy and practice over a half-century span: the power and limitations of the professions, the conflation of good measurement with performance assessment, the greater attention paid to proving than to improving. Other changes come more as a big bang, such as machines that can see patterns that we humans cannot. Were it not for our capacity to see meaning where the machines cannot, they might put us out of business.

Public policy is also evolving. The last half century has been focused on ‘I’, on finding interventions that prevent or reduce impairments to individuals. In its first year of work, our group was particularly interested in the power of place (where we live, work, connect) and positive contagion (how we influence each other to be our better selves). As a network we sense the opportunity to spend more time learning about the ‘we’: how to tend the space around people, creating contexts that leave power in the hands of the citizen and encourage flourishing.

Not everyone shares our conviction that learning is central to finding better ways of living. There are well-articulated arguments that the world is unpredictable and unknowable, and that our attempts to comprehend it are simply excuses for state, public system or philanthropic interference in private life. There are misinterpretations of ideas, for example by those who use an appreciation of complexity as a reason to give up on finding out. Examples from health, accident investigations and economics show that embracing complexity can be a catalyst for deep learning, not a barrier. Mostly, though, the enemy is laziness. Philosopher Onora O’Neill talks about slippery statements that evade the truth, and about the absence of people giving “an account of what they have done, and of their successes and failures, to others who have sufficient time and experience to assess the evidence and report on it”.

If laziness is the enemy, how can we recover rigour? The first step is to create the potential to be wrong. Clear, well-articulated decisions about what to do next at each stage of the work, informed by the best available evidence, are the building blocks. Be clear and decisive. Next, find out if the decisions lead to their intended effects. Then, like Fleming, Goodyear, Perkin and Einstein, learn from mistakes, and use that learning to inform subsequent decisions. Be clear and decisive. Find out what happens next. Learn from mistakes. Repeat.

Which raises the question: what does it mean to be clear and decisive?

Philosophers seem to be best at answering this question, although their answers can make for uncomfortable reading. They say things like:

  • decide about the means, not the ends; or, put another way, focus on the process, not the outcome
  • embrace uncertainty: if we are sure about what is going to happen, why bother to decide?
  • concentrate on the things in our power, the things we can change
  • determine the future, not the past, and
  • hold ourselves accountable (not others) for our decisions.

Aristotle’s writings could fit into a modern-day learning manual. He says things like:

  • verify information (the network most appreciated reading books and articles that spent as much time explaining why the proposition in question might be wrong as why it might be right)
  • consult and listen (not as a tick-box exercise, as has become the norm, but as if one’s life depended upon it)
  • look at the situation from lots of viewpoints, angles and timescales (the best test, according to epidemiologist Bradford-Hill, of whether an idea is true or false)
  • take into account the work of others (collaborate, don’t compete, and look as much to the mistakes made by others as to their successes), and
  • factor in luck and chance (the best decisions can be confounded by unforeseen events, a pandemic for example!).

Psychologists Daniel Kahneman and Amos Tversky also help, with their formula for combining fast thinking (decisions based on instinct and emotion) with slow thinking (decisions rooted in careful, logical calculation).

We have reached the halfway point in our work. Further reading, discussion and reflection may lead us to a different conclusion. But at this halfway point, it appears that the failure to learn is bound up in shying away from being exposed, from being wrong, and therefore from being clear about intended actions. The antidote, we argue, at every stage of work, is to be clear and decisive (write down the decision points), ready to learn (find out what happens next) and open to learning from mistakes.

This way of thinking appears to have welcome side effects. Readying ourselves to be wrong also readies us for:

  • entertaining competing explanations, and resisting single truths that rest on the one answer that supposedly explains all
  • thinking differently about outcomes, shifting away from before-and-after measures and towards continuous assessment of progress over time, embracing the vicissitudes of real life
  • collaborating more broadly, considering the potential help from machine learning engineers able to sift through mountains of data or from game theorists making sense of the world with no data at all, and
  • a greater appreciation for ethics, and in particular for the truth of the statements we make about the world.