Donald Rumsfeld famously spoke about ‘known knowns…things that we know that we know’, ‘known unknowns’, the things we ‘know…we do not know’ and ‘unknown unknowns’, the things ‘we don’t know we don’t know’.
In recent years policing’s grasp of its own mission and efficacy, at least in the UK, has been made less certain by a shift in focus away from ‘known knowns’, like the amount of burglary, robbery and car-crime being reported, to ‘known unknowns’, like the harm caused by intimate violence, sexual abuse, modern slavery and ‘cyber-crime’. These latter phenomena are now more clearly recognised, but are largely and perhaps inherently hidden, and at best only partially understood.
As the focus on these dark corners intensifies, the imagination can start to play tricks, conjuring up shadowy spectres of ‘unknown unknowns’ that, as a local policing superintendent told me recently, “we will wonder in five years’ time why we didn’t see coming”.
One aspect of this turn to face the unknown(s) has been a devaluation of police recorded crime data as a currency for gauging progress. Aside from the taint of flawed recording practices and naive target setting, police crime statistics – once the bread and butter of local and central performance management – are simply less useful for understanding the effectiveness of responses to risk, hidden harm and vulnerability than for gauging the success of efforts to reduce volume crime.
Do more recorded sexual offences against children signify a worrying social trend, the improved willingness of victims to come forward, or greater police proactivity in uncovering crimes committed but not reported?
Do fewer domestic violence reports indicate falling crime, potentially related to better safeguarding, or a reduction in victim confidence?
Under the bright light of crime data, the shadows become darker still, and alternative light sources, like victimisation surveys and health data, tend to provide only diffuse and delayed, rather than specific and timely, illumination – when they are available at all.
It is no coincidence, therefore, that efforts are being made to improve the relevance and usefulness of police recorded crime data through innovations such as the ONS Crime Severity Score and the Cambridge Harm Index, which seek to measure crime in terms of its ‘harmfulness’ as well as its frequency. These instruments use tariffs, derived (in slightly different ways) from sentencing policy and practice, to weight the crimes within any aggregate offence total according to the relative seriousness of their classification type.
So, for example, if the police in town A deal with reports of a burglary (which has a weight of 438 using the ONS method[1]), two ABHs (2×127), and a stolen car (34), while in town B the police record a robbery (746), an arson (439) and two cases of shoplifting (2×13), we can say that, although both towns amassed four crime reports, crime was ‘worse’ (more ‘harmful’/‘severe’/‘demanding’) in town B (with a total of 1,211) than in town A (with 726). What’s more, we can also say how much worse – by 67 per cent in this case.
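For the computationally minded, that calculation can be expressed as a minimal sketch in Python. Only the weights come from the example above; the offence labels and the severity_score helper are my own illustrative choices, and the published Crime Severity Score also standardises by population, a step omitted here:

```python
# Harm-weighted crime scoring, using the illustrative ONS-style weights
# quoted above. Offence labels are hypothetical; the real Crime Severity
# Score also divides by population, which is omitted for simplicity.

WEIGHTS = {
    "burglary": 438, "abh": 127, "vehicle_theft": 34,
    "robbery": 746, "arson": 439, "shoplifting": 13,
}

def severity_score(offences):
    """Sum the severity weights of a list of recorded offences."""
    return sum(WEIGHTS[offence] for offence in offences)

town_a = ["burglary", "abh", "abh", "vehicle_theft"]
town_b = ["robbery", "arson", "shoplifting", "shoplifting"]

score_a = severity_score(town_a)  # 726
score_b = severity_score(town_b)  # 1,211
print(f"Town B scored {score_b / score_a - 1:.0%} higher than town A")  # 67%
```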
It’s possible to see how these metrics might provide food for thought when setting priorities, allocating resources or (cautiously) making assessments of performance or impact. However, aside from being silent on the non-crime aspects of police work, the big limitation of these instruments is that they are, fundamentally, just another way of examining the ‘known knowns’; they provide an extra dimension to our understanding of crime and harm, but only to the crime and harm that the police actually become aware of and record.
So, for instance, when we look at the reasons why the national Crime Severity Score has been increasing faster than recorded crime since 2013, we find explanations in more people coming forward to report rape and sexual offences, and in better police recording of violent crime. The increase in the aggregate weighted crime score resulting from these undoubtedly positive developments more than cancels out the reductions that come from (apparently real) falls in acquisitive crime – another positive development.
Which brings us to Schrödinger’s cat.
Just like that paradoxical feline, we cannot know whether any change in an aggregate weighted score is a good or bad sign, a cause for concern or a sign of progress, an indicator of efficacy or an alert to worsening ‘performance’, until we break open the box.
This is because the outcomes the police seek to achieve pull the measure in two directions at once. Preventing burglaries and deterring alcohol-fuelled fights will bring it down, while encouraging domestic violence victims to come forward or seeking out hidden exploitation (ie making progress with the known unknowns) will push it up.
Actually, this applies to unweighted crime totals too, but the push and pull exerted on raw offence counts by (lower volume, higher harm) recorded crimes of abuse has tended to be negligible in comparison to the great swells of (higher volume, often lower harm) acquisitive crime and public place violence. In practice, weighting by harm/severity makes these opposing forces much more equally balanced and, therefore, there appears to be little evaluative meaning in talking about (weighted) crime going ‘up’ or ‘down’. Instead we need to ask: what type of crime is being recorded more or less often? In other words, we need to disaggregate the aggregates.
But we currently don’t have the tools to crack open the box in the right way. We cannot easily split out the crimes we would like to see coming down (as a potential indicator of prevention/police efficacy) from those we should rightly want to see recorded more often (as indicators of more victims assisted and protected, justice sought and confidence improved). We might say that we cannot distinguish ‘reduce’ crime from ‘abuse’ crime. But that’s not to say a workable data cleavage is impossible, at least at the police force level.
It strikes me that knowing and tracking how weighted totals for ‘reduce’ crime and ‘abuse’ crime change over time – perhaps even in the form of a ratio between the two – would be helpful for anyone with strategic oversight of a modern, balanced policing function, which simultaneously seeks to cut crime and protect people from harms that we know are often unreported and can impact individuals repeatedly.
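To make that concrete, here is a minimal sketch of how such a split and ratio might be computed. The ‘reduce’/‘abuse’ categorisation, the offence labels and the weights attached to the abuse offences are hypothetical assumptions for illustration, not an established ONS or Cambridge classification:

```python
# Hypothetical split of a harm-weighted total into 'reduce' crime (which we
# want to fall) and 'abuse' crime (where more recording can signal progress).
# The categories and the abuse-offence weights are illustrative assumptions.

CATEGORY = {
    "burglary": "reduce", "robbery": "reduce", "vehicle_theft": "reduce",
    "domestic_abuse": "abuse", "child_sexual_offence": "abuse",
    "modern_slavery": "abuse",
}

def split_scores(weighted_records):
    """weighted_records: iterable of (offence_type, severity_weight) pairs."""
    totals = {"reduce": 0, "abuse": 0}
    for offence, weight in weighted_records:
        totals[CATEGORY[offence]] += weight
    return totals

records = [("burglary", 438), ("robbery", 746),
           ("domestic_abuse", 150), ("child_sexual_offence", 950)]
totals = split_scores(records)
print(totals)  # {'reduce': 1184, 'abuse': 1100}
print(f"abuse:reduce ratio = {totals['abuse'] / totals['reduce']:.2f}")  # 0.93
```

Tracked quarter by quarter, a rising ratio could then prompt the question of whether ‘abuse’ recording is improving, ‘reduce’ crime is falling, or both.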
There are other splits that would be useful too. What if we could keep track of the weighted score arising from the ‘latent’ crimes that the police identified through proactivity, compared with that ‘patently’ reported by the public?[2] What if this helped maintain focus on reducing the latter, and the demand associated with it, which should, incrementally, free resource to reinvest in increasing the former? What if we could inform our approach to public protection and safeguarding by monitoring the weighted score for crime reported by ‘first-time’ versus ‘repeat’ victims? For some (particularly ‘abuse’) crimes we might view an increase in the former as a positive development, but question our processes if the latter began to rise.
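The same disaggregation logic would extend to those splits. A minimal sketch follows, again using hypothetical record fields (a ‘latent’/‘patent’ discovery route and a ‘first’/‘repeat’ victim flag) rather than any existing data standard:

```python
from collections import defaultdict

# Slicing a weighted total along any recorded dimension, e.g. whether a
# crime was proactively identified ('latent') or publicly reported
# ('patent'), or whether the victim is a 'first'-time or 'repeat' victim.
# Field names, values and weights are hypothetical, for illustration only.

def weighted_by(records, dimension):
    """Aggregate severity weights by the value of one record field."""
    totals = defaultdict(int)
    for record in records:
        totals[record[dimension]] += record["weight"]
    return dict(totals)

records = [
    {"weight": 438, "discovery": "patent", "victim": "first"},
    {"weight": 746, "discovery": "patent", "victim": "repeat"},
    {"weight": 950, "discovery": "latent", "victim": "first"},
]

print(weighted_by(records, "discovery"))  # {'patent': 1184, 'latent': 950}
print(weighted_by(records, "victim"))     # {'first': 1388, 'repeat': 746}
```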
Modern policing must respond rationally and purposefully to the things it knows it does not know. New metrics allow us to ‘weigh’ as well as count the things we do know. However, only by breaking down the read-outs these give into chunks that reflect the diverse outcomes now being sought, can we begin to understand progress against a policing mission that is increasingly complex and often only partially revealed.
Read Andy’s latest paper, Mixed signals for police improvement: The Value of your Crime Severity Score may go up as well as down.