
Posts Tagged ‘google analytics’

Evgeny Morozov writes about recent advances in ‘predictive policing’. This is not the telepathy of Minority Report. It’s designing algorithms to analyse the ‘big data’ that is now available to police forces, so that hitherto unrecognised patterns and probabilities can help you guess the places where crime is more likely to take place, and the people who are more likely to be criminals.


This is a section from his latest book, To Save Everything, Click Here: Technology, Solutionism, and the Urge to Fix Problems that Don’t Exist.

The police have a very bright future ahead of them – and not just because they can now look up potential suspects on Google. As they embrace the latest technologies, their work is bound to become easier and more effective, raising thorny questions about privacy, civil liberties, and due process.

For one, policing is in a good position to profit from “big data”. As the costs of recording devices keep falling, it’s now possible to spot and react to crimes in real time. Consider a city like Oakland in California. Like many other American cities, today it is covered with hundreds of hidden microphones and sensors, part of a system known as ShotSpotter, which not only alerts the police to the sound of gunshots but also triangulates their location. On verifying that the noises are actual gunshots, a human operator then informs the police.
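ShotSpotter’s actual pipeline is proprietary, but the triangulation it describes rests on a standard idea: the same bang reaches each microphone at a slightly different moment, and those time differences pin down the source. A minimal sketch of that idea (the sensor layout, grid resolution, and brute-force search are my own assumptions, not ShotSpotter’s method):

```python
import itertools
import math

SPEED_OF_SOUND = 343.0  # metres per second, roughly, at 20 °C

def locate(sensors, arrival_times, step=1.0, extent=200.0):
    """Brute-force grid search for the source position that best explains
    the differences in arrival times across a set of fixed sensors.

    sensors       -- list of (x, y) microphone positions in metres
    arrival_times -- sound arrival time at each sensor, in seconds
    """
    best, best_err = None, float("inf")
    xs = [i * step - extent / 2 for i in range(int(extent / step) + 1)]
    for x, y in itertools.product(xs, xs):
        # Predicted arrival time at each sensor, up to a common offset.
        t = [math.hypot(x - sx, y - sy) / SPEED_OF_SOUND for sx, sy in sensors]
        # Compare time *differences* so the unknown moment of the shot cancels.
        err = sum(((t[i] - t[0]) - (arrival_times[i] - arrival_times[0])) ** 2
                  for i in range(1, len(sensors)))
        if err < best_err:
            best, best_err = (x, y), err
    return best
```

Real deployments solve the same equations in closed form rather than by exhaustive search, but the cancellation trick is the same: only relative delays matter.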

It’s not hard to imagine ways to improve a system like ShotSpotter. Gunshot-detection systems are, in principle, reactive; they might help to thwart or quickly respond to crime, but they won’t root it out. The decreasing costs of computing, considerable advances in sensor technology, and the ability to tap into vast online databases allow us to move from identifying crime as it happens – which is what the ShotSpotter does now – to predicting it before it happens.

Instead of detecting gunshots, new and smarter systems can focus on detecting the sounds that have preceded gunshots in the past. This is where the techniques and ideologies of big data make another appearance, promising that a greater, deeper analysis of data about past crimes, combined with sophisticated algorithms, can predict – and prevent – future ones. This is a practice known as “predictive policing”, and even though it’s just a few years old, many tout it as a revolution in how police work is done. It’s the epitome of solutionism; there is hardly a better example of how technology and big data can be put to work to solve the problem of crime by simply eliminating crime altogether. It all seems too easy and logical; who wouldn’t want to prevent crime before it happens?

Police in America are particularly excited about what predictive policing – one of Time magazine’s best inventions of 2011 – has to offer; Europeans are slowly catching up as well, with Britain in the lead. Take the Los Angeles Police Department (LAPD), which is using software called PredPol. The software analyses years of previously published statistics about property crimes such as burglary and automobile theft, breaks the patrol map into 500 sq ft zones, calculates the historical distribution and frequency of actual crimes across them, and then tells officers which zones to police more vigorously.
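PredPol’s model itself is proprietary, but the grid-and-frequency step described here – bin historical incidents into zones, then rank zones by crime count – can be sketched in a few lines (the coordinate scheme and function names are assumptions for illustration, not PredPol’s code):

```python
from collections import Counter

ZONE_FT = 500  # side of each grid square, matching the zone size described above

def hotspots(incidents, top=10):
    """Bin past incidents, given as (x, y) coordinates in feet, into grid
    zones and rank the zones by historical crime frequency."""
    counts = Counter((int(x // ZONE_FT), int(y // ZONE_FT)) for x, y in incidents)
    return counts.most_common(top)
```

The output – a ranked list of zones with their incident counts – is essentially what gets handed to patrolling officers. Everything contentious about predictive policing lives in what this sketch leaves out: how the historical data was collected, and whose neighbourhoods it over-represents.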

It’s much better – and potentially cheaper – to prevent a crime before it happens than to come late and investigate it. So while patrolling officers might not catch a criminal in action, their presence in the right place at the right time still helps to deter criminal activity. Occasionally, though, the police might indeed disrupt an ongoing crime. In June 2012 the Associated Press reported on an LAPD captain who wasn’t so sure that sending officers into a grid zone on the edge of his coverage area – following PredPol’s recommendation – was such a good idea. His officers, as the captain expected, found nothing; however, when they returned several nights later, they caught someone breaking a window. Score one for PredPol?

Click here if you want to read more, especially about the privacy issues, the dangers of reductive or inaccurate algorithms, and widening the scope of the personal data that might be available for analysis:

An apt illustration of how such a system can be abused comes from The Silicon Jungle, ostensibly a work of fiction written by a Google data-mining engineer and published by Princeton University Press – not usually a fiction publisher – in 2010. The novel is set in the data-mining operation of Ubatoo – a search engine that bears a striking resemblance to Google – where a summer intern develops Terrorist-o-Meter, a sort of universal score of terrorism aptitude that the company could assign to all its users. Those unhappy with their scores would, of course, get a chance to correct them – by submitting even more details about themselves. This might seem like a crazy idea but – in perhaps another allusion to Google – Ubatoo’s corporate culture is so obsessed with innovation that its interns are allowed to roam free, so the project goes ahead.

To build Terrorist-o-Meter, the intern takes a list of “interesting” books that indicate a potential interest in subversive activities and looks up the names of the customers who have bought them from one of Ubatoo’s online shops. Then he finds the websites that those customers frequent and uses the URLs to find even more people – and so on until he hits the magic number of 5,000. The intern soon finds himself pursued both by an al-Qaida-like terrorist group that wants those 5,000 names to boost its recruitment campaign and by various defence and intelligence agencies that can’t wait to preemptively ship those 5,000 people to Guantánamo…
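Fiction aside, the intern’s method – start from a seed list and follow shared links outward until you hit a quota – is an ordinary breadth-first “snowball” expansion over a graph. A sketch under that reading (the data structures and the 5,000 quota are taken from the novel; everything else is hypothetical):

```python
from collections import deque

def snowball(seeds, neighbours, limit=5000):
    """Breadth-first expansion: start from seed names and keep adding
    anyone reachable through shared links until `limit` names are found.

    neighbours -- a function mapping one name to the names linked to it
    """
    found = set(seeds)
    queue = deque(seeds)
    while queue and len(found) < limit:
        person = queue.popleft()
        for other in neighbours(person):
            if other not in found:
                found.add(other)
                queue.append(other)
                if len(found) >= limit:
                    break
    return found
```

The unsettling point the novel dramatises is how little the algorithm cares about the semantics of the links: buying a book, visiting a website, and knowing a terrorist are all just edges.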

Given enough data and the right algorithms, all of us are bound to look suspicious. What happens, then, when Facebook turns us – before we have committed any crimes – over to the police? Will we, like characters in a Kafka novel, struggle to understand what our crime really is and spend the rest of our lives clearing our names? Will Facebook perhaps also offer us a way to pay a fee to have our reputations restored? What if its algorithms are wrong?

The promise of predictive policing might be real, but so are its dangers. The solutionist impulse needs to be restrained. Police need to subject their algorithms to external scrutiny and address their biases. Social networking sites need to establish clear standards for how much predictive self-policing they’ll actually do and how far they will go in profiling their users and sharing this data with police. While Facebook might be more effective than police in predicting crime, it cannot be allowed to take on these policing functions without also adhering to the same rules and regulations that spell out what police can and cannot do in a democracy. We cannot circumvent legal procedures and subvert democratic norms in the name of efficiency alone.

Read Full Post »

They really are tracking you. It’s not just the information that you knowingly put on the internet. It’s also the information that your friends knowingly put there; and all the other embedded information that neither you nor your friends realise is being shared. See this video (sorry about the advert…)

The simplest example, which I had no idea about, is the GPS data that many digital cameras and smartphones embed automatically in a photograph’s metadata. So if someone else tags you in a photo, your presence at a particular location (to within three metres) at a particular time (to the second) is there for everyone to see. Then it just needs the analytics to bring all this data together and work out what it says about known past behaviour and probable future behaviour. Put this together with your Tesco Clubcard and Amazon buying history and the Google analytics on your recent searches, and they know more about you than you know about yourself.
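For the curious: those coordinates live in a photo’s EXIF metadata as degree/minute/second values plus a hemisphere reference, and turning them into the decimal latitude and longitude an analytics system would store is purely mechanical (the sample values below are made up):

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style degree/minute/second GPS values to a signed
    decimal coordinate; southern and western hemispheres are negative."""
    value = degrees + minutes / 60 + seconds / 3600
    return -value if ref in ("S", "W") else value

# e.g. 51° 30' 0" N  ->  51.5
central_london = (dms_to_decimal(51, 30, 0.0, "N"),
                  dms_to_decimal(0, 7, 39.0, "W"))
```

Once the value is decimal, plotting every tagged photo of a person on a map is a one-line join away – which is precisely what the software described below does.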

I’m not exaggerating. When did you ever really reflect on what your movements and searches and purchases say about you? Do you even remember what you bought or searched for last month or last year? Well, Tesco and Amazon and Google – and now, apparently, Raytheon – certainly do.

Ryan Gallagher explains:

A multinational security firm has secretly developed software capable of tracking people’s movements and predicting future behaviour by mining data from social networking websites.

A video obtained by the Guardian reveals how an “extreme-scale analytics” system created by Raytheon, the world’s fifth largest defence contractor, can gather vast amounts of information about people from websites including Facebook, Twitter and Foursquare…

Using Riot it is possible to gain an entire snapshot of a person’s life – their friends, the places they visit charted on a map – in little more than a few clicks of a button.

In the video obtained by the Guardian, it is explained by Raytheon’s “principal investigator” Brian Urch that photographs users post on social networks sometimes contain latitude and longitude details – automatically embedded by smartphones within “exif header data.”

Riot pulls out this information, showing not only the photographs posted onto social networks by individuals, but also the location at which the photographs were taken…

Riot can display on a spider diagram the associations and relationships between individuals online by looking at who they have communicated with over Twitter. It can also mine data from Facebook and sift GPS location information from Foursquare, a mobile phone app used by more than 25 million people to alert friends of their whereabouts. The Foursquare data can be used to display, in graph form, the top 10 places visited by tracked individuals and the times at which they visited them.
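A “top 10 places” view of this kind is, underneath the graphing, just a frequency count over check-in records. A minimal sketch – the (venue, time) record format is invented, and bears no relation to Riot’s internals:

```python
from collections import Counter

def top_places(checkins, n=10):
    """Rank venues by how often a tracked individual checked in there,
    keeping the most recent visit time for each venue as context.

    checkins -- list of (venue, timestamp-string) pairs
    """
    counts = Counter(venue for venue, _time in checkins)
    latest = {}
    for venue, time in checkins:
        latest[venue] = max(time, latest.get(venue, time))
    return [(venue, count, latest[venue]) for venue, count in counts.most_common(n)]
```

The triviality is the point: once location-stamped data is aggregated in one place, predicting where someone will be next Tuesday takes a few lines, not a research lab.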

