Why We Hedge Our Bets
One of the common refrains I hear from those skeptical of the usefulness of science for answering both everyday and so-called wicked problems is a sense of frustration with academics because we never tell you that something works with 100% accuracy. We hedge our bets. There’s a reason for this: that unwillingness to say something is absolutely proven is a fundamental aspect of how science works (and not just something we do to mess with you).

Science - including much of the so-called ‘hard sciences’ - operates on uncertainty. When we say that something has a strong evidence base, it means that it has been well tested and seems to work under a variety of different conditions, but we cannot conclusively rule out the possibility that it might not work for you, might not work under conditions we have yet to test, or even that it might stop working. Our knowledge is always, at best, provisional and subject to the possibility of being wrong.

What is the best way to put this? Swans. In explaining his theory of how science ought to work, Karl Popper famously pointed out that we cannot prove, 100% and beyond a shadow of a doubt, that something works (‘verifiability’), because some fact could always come along to demonstrate that we are incredibly wrong. His example was the case of white swans. Prior to the 1600s, Europeans believed that all swans were white. Why? Because European swans are all white. Imagine their surprise in about 1697, when a Dutch explorer discovered the existence of black swans in Australia. Have to change that theory!
What Popper suggested is that instead of trying to prove things are "true", we should try to challenge our ideas to see whether they can be proven "untrue" (‘falsifiability’). I know: it’s totally counter-intuitive for many people. But try thinking about it this way: if we test an idea over and over and we can’t disprove it or show that it doesn’t work, then just maybe it actually does work. Then we can say, "this is the best we know right now."

Hot spots policing is a good example of this. It is based on the theory that crime concentrates in specific places and that you can therefore target those places by posting a police officer there for a period of time to deter crime (Felson’s ‘routine activities theory’).
We have tested hot spots policing fairly rigorously:
- in different countries
- in different cities
- with different police services
- under different patrol conditions
- by varying police presence at a site
(and many more combinations)

And it seems to work fairly well in all of those situations when it is implemented properly. That is the sticking point: while we cannot conclusively prove that it always works, we can say that it seems to work well in a majority of situations when implemented carefully. Yup, we hedged our bets. In doing so, we are acknowledging the possibility of that black swan (a fact, circumstance or condition we didn’t expect) and warning you of the same.

It doesn’t mean that everything is ‘just a theory’, or that ‘data lies’, or that ‘science is useless’, or that ‘researchers don’t know what they’re talking about.’ Just the opposite: we know only too well there is always room for error and uncertainty. That's what makes science science.

And that is why academics drive you crazy. We don’t mean to. We just want to make sure you understand the limits of our knowledge, and yours.