Evgeny Morozov writes about recent advances in ‘predictive policing’. This is not the precognition of Minority Report. It’s the design of algorithms to analyse the ‘big data’ now available to police forces, so that hitherto unrecognised patterns and probabilities can suggest the places where crime is more likely to take place, and the people who are more likely to commit it.
This is a section from his latest book, To Save Everything, Click Here: Technology, Solutionism, and the Urge to Fix Problems that Don’t Exist.
The police have a very bright future ahead of them – and not just because they can now look up potential suspects on Google. As they embrace the latest technologies, their work is bound to become easier and more effective, raising thorny questions about privacy, civil liberties, and due process.
For one, policing is in a good position to profit from “big data”. As the costs of recording devices keep falling, it’s now possible to spot and react to crimes in real time. Consider a city like Oakland in California. Like many other American cities, it is now covered with hundreds of hidden microphones and sensors, part of a system known as ShotSpotter, which not only alerts the police to the sound of gunshots but also triangulates their location. Once a human operator verifies that the noises are actual gunshots, the police are informed.
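The piece doesn’t say how ShotSpotter actually computes a location, so the sketch below is not the company’s method; it only illustrates, with invented sensor positions and timings, the general idea behind acoustic triangulation: several sensors hear the same bang at slightly different times, and the point that best explains those time differences is taken as the source.

```python
import itertools, math

SPEED_OF_SOUND = 343.0  # metres per second, approximate

def locate(sensors, arrival_times, area=1000, step=5):
    """Estimate a sound source from arrival times at known sensor positions.

    sensors: list of (x, y) positions in metres; arrival_times: seconds at each sensor.
    A simple grid search over candidate points, minimising the mismatch between
    predicted and measured time-differences of arrival (an assumed toy approach).
    """
    def mismatch(px, py):
        dists = [math.hypot(px - sx, py - sy) for sx, sy in sensors]
        err = 0.0
        for i, j in itertools.combinations(range(len(sensors)), 2):
            predicted = (dists[i] - dists[j]) / SPEED_OF_SOUND
            measured = arrival_times[i] - arrival_times[j]
            err += (predicted - measured) ** 2
        return err

    candidates = ((x, y) for x in range(0, area, step) for y in range(0, area, step))
    return min(candidates, key=lambda p: mismatch(*p))

# Hypothetical example: four sensors at the corners of an 800m square.
# locate([(0, 0), (800, 0), (0, 800), (800, 800)], [2.31, 1.56, 1.98, 0.87])
```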
It’s not hard to imagine ways to improve a system like ShotSpotter. Gunshot-detection systems are, by their nature, reactive; they might help to thwart or quickly respond to crime, but they won’t root it out. The decreasing costs of computing, considerable advances in sensor technology, and the ability to tap into vast online databases allow us to move from identifying crime as it happens – which is what ShotSpotter does now – to predicting it before it happens.
Instead of detecting gunshots, new and smarter systems can focus on detecting the sounds that have preceded gunshots in the past. This is where the techniques and ideologies of big data make another appearance, promising that a greater, deeper analysis of data about past crimes, combined with sophisticated algorithms, can predict – and prevent – future ones. This is a practice known as “predictive policing”, and even though it’s just a few years old, many tout it as a revolution in how police work is done. It’s the epitome of solutionism; there is hardly a better example of how technology and big data can be put to work to solve the problem of crime by simply eliminating crime altogether. It all seems too easy and logical; who wouldn’t want to prevent crime before it happens?
Police in America are particularly excited about what predictive policing – one of Time magazine’s best inventions of 2011 – has to offer; Europeans are slowly catching up as well, with Britain in the lead. Take the Los Angeles Police Department (LAPD), which is using software called PredPol. The software analyses years of previously published statistics about property crimes such as burglary and automobile theft, breaks the patrol map into zones roughly 500ft on a side, calculates the historical distribution and frequency of actual crimes across them, and then tells officers which zones to police more vigorously.
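PredPol’s actual model is proprietary (it is reported to draw on models of earthquake aftershocks), so the sketch below is only a rough illustration of the workflow described above: bin past property crimes into grid zones, count them with more weight on recent incidents, and hand officers the highest-scoring zones. The zone size, field names and weighting are assumptions for illustration, not the vendor’s algorithm.

```python
from collections import Counter
from datetime import datetime

CELL_FT = 500            # assumed zone size: about 500ft on a side
HALF_LIFE_DAYS = 180     # assumed decay: recent crimes weigh more than old ones
PROPERTY_CRIMES = {'burglary', 'auto_theft'}

def zone(x_ft, y_ft):
    """Map a crime location (feet on a local grid) to its patrol zone."""
    return (int(x_ft // CELL_FT), int(y_ft // CELL_FT))

def hot_zones(crimes, today, top_n=10):
    """Rank zones by recency-weighted counts of past property crimes.

    crimes: iterable of dicts like {'x': ft, 'y': ft, 'when': datetime, 'type': str}.
    """
    scores = Counter()
    for c in crimes:
        if c['type'] not in PROPERTY_CRIMES:
            continue
        age_days = (today - c['when']).days
        scores[zone(c['x'], c['y'])] += 0.5 ** (age_days / HALF_LIFE_DAYS)
    return [z for z, _ in scores.most_common(top_n)]

# e.g. hot_zones(crime_records, datetime.now()) -> the zones to patrol more vigorously tonight
```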
It’s much better – and potentially cheaper – to prevent a crime before it happens than to come late and investigate it. So while patrolling officers might not catch a criminal in action, their presence in the right place at the right time still helps to deter criminal activity. Occasionally, though, the police might indeed disrupt an ongoing crime. In June 2012 the Associated Press reported on an LAPD captain who wasn’t so sure that sending officers into a grid zone on the edge of his coverage area – following PredPol’s recommendation – was such a good idea. His officers, as the captain expected, found nothing; however, when they returned several nights later, they caught someone breaking a window. Score one for PredPol?
The extract continues with the privacy issues, the dangers of reductive or inaccurate algorithms, and the widening scope of the personal data that might be available for analysis:
An apt illustration of how such a system can be abused comes from The Silicon Jungle, ostensibly a work of fiction written by a Google data-mining engineer and published by Princeton University Press – not usually a fiction publisher – in 2010. The novel is set in the data-mining operation of Ubatoo – a search engine that bears a striking resemblance to Google – where a summer intern develops Terrorist-o-Meter, a sort of universal score of terrorism aptitude that the company could assign to all its users. Those unhappy with their scores would, of course, get a chance to correct them – by submitting even more details about themselves. This might seem like a crazy idea but – in perhaps another allusion to Google – Ubatoo’s corporate culture is so obsessed with innovation that its interns are allowed to roam free, so the project goes ahead.
To build Terrorist-o-Meter, the intern takes a list of “interesting” books that indicate a potential interest in subversive activities and looks up the names of the customers who have bought them from one of Ubatoo’s online shops. Then he finds the websites that those customers frequent and uses the URLs to find even more people – and so on until he hits the magic number of 5,000. The intern soon finds himself pursued both by an al-Qaida-like terrorist group that wants those 5,000 names to boost its recruitment campaign and by various defence and intelligence agencies that can’t wait to preemptively ship those 5,000 people to Guantánamo…
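The novel supplies only the outline above; a sketch of that snowball expansion might look like the following, where the three look-up functions stand in for Ubatoo’s fictional internal data stores and are left as empty placeholders.

```python
TARGET = 5_000  # the novel's "magic number" of names

# Placeholder look-ups standing in for Ubatoo's fictional data stores.
def buyers_of(book_title):
    """Customers who bought a given title from Ubatoo's shop (placeholder)."""
    return set()

def sites_visited_by(user):
    """URLs a given user frequents (placeholder)."""
    return set()

def visitors_of(url):
    """Other users observed at a given URL (placeholder)."""
    return set()

def build_watchlist(interesting_books):
    """Snowball from buyers of 'interesting' books out through shared websites."""
    suspects = set()
    for book in interesting_books:
        suspects |= buyers_of(book)
    frontier = set(suspects)
    while len(suspects) < TARGET and frontier:
        urls = {u for person in frontier for u in sites_visited_by(person)}
        newcomers = {v for u in urls for v in visitors_of(u)} - suspects
        suspects |= newcomers
        frontier = newcomers
    return suspects  # in the novel, each name then gets a "terrorism aptitude" score
```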
Given enough data and the right algorithms, all of us are bound to look suspicious. What happens, then, when Facebook turns us over to the police before we have committed any crimes? Will we, like characters in a Kafka novel, struggle to understand what our crime really is and spend the rest of our lives clearing our names? Will Facebook perhaps also offer us a way to pay a fee to have our reputations restored? What if its algorithms are wrong?
The promise of predictive policing might be real, but so are its dangers. The solutionist impulse needs to be restrained. Police need to subject their algorithms to external scrutiny and address their biases. Social networking sites need to establish clear standards for how much predictive self-policing they’ll actually do and how far they will go in profiling their users and sharing this data with police. While Facebook might be more effective than police in predicting crime, it cannot be allowed to take on these policing functions without also adhering to the same rules and regulations that spell out what police can and cannot do in a democracy. We cannot circumvent legal procedures and subvert democratic norms in the name of efficiency alone.