
What we learned from Cambridge Analytica

William Watkin

Ever since it was revealed that Cambridge Analytica had taken data from 87m users via a Facebook app that exploited the social media site’s privacy settings, it has been suggested that anything from Donald Trump’s election in the US to the European Union referendum result in the UK could have been the result of the persuasive power of targeted ads based on voter preferences.

But Aleksandr Kogan, the researcher whose data-collecting app formed the basis for Cambridge Analytica’s subsequent work for various political groups, appeared to pour cold water on this idea when speaking to a US Senate committee. “The data is entirely ineffective,” he said. “If the goal of Cambridge Analytica was to show personalised advertisements on Facebook, then what they did was stupid.” Even if the boasts of former Cambridge Analytica CEO Alexander Nix and the statements of whistleblower Christopher Wylie about the company’s influence are overblown, as Kogan claims, the firm nevertheless hit on something with its approach of harvesting private personal data in order to influence voter behaviour. And even if its model does not yet work fully, look at it closely enough and you get a pretty accurate picture of the paradoxical contemporary sense of our neoprivacy.

Video: https://www.youtube.com/embed/sb_iFiUUX4A

The main issue, I think, is our misunderstanding of consent. Kogan’s data-scrape may have been unethical, but he didn’t steal the data from those who used his app – they gave it willingly. When you use a social media platform you are, by definition, publishing your private life. More than that, you effectively sell your private life on an open market by giving your consent for it to be monetised by that platform.

Following admissions by Facebook chief operating officer Sheryl Sandberg, we now know that “online privacy” settings exist only as a means to allow Facebook users to believe they have a consumer’s right to privacy, when in fact they are not the consumer but the product itself. If privatisation is a process of transferring ownership from the public to the private realm, this means privacy itself has been privatised. You publish your data, making it public, so that private companies can capitalise on what it says about you by selling you things. In return, you get access to a highly sophisticated platform, Facebook, which costs a shed-load of money to keep running, without having to ‘pay’ for it, because you have already paid by giving Facebook access to your private life. This means you are the product: your published privacy is the income generator, and the one thing that makes you you is precisely what changes you from human consumer to inhuman data. Scary, isn’t it?

This financial model leads to the paradoxical situation I am calling neoprivacy, following neoliberalism’s similar disregard for, and exploitation of, the private individual. In a neoprivate world, privacy exists to be exploited financially. The neoprivate individual values their personal life so much that they publish it, yet is so neglectful of their privacy that, well, they publish it.

Cambridge Analytica’s stroke of genius was to combine two different kinds of dataset – let’s call them deep and broad – to capitalise on this burgeoning sense of neoprivacy. The deep psychometric tests of a small sample (from Kogan’s app) were combined with the broad online behaviour of a massive sample. With this, the firm claimed it could predict people’s behaviour simply from their actions on Facebook.
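To see how a deep-plus-broad model of this kind might work in practice, here is a minimal sketch in Python. It is purely illustrative: the data is synthetic and every name in it (likes_deep, trait_scores and so on) is my own assumption, not anything from Cambridge Analytica. The structure, though, follows the claimed technique: fit a model on the small sample that took the psychometric test, then use it to predict traits for the massive sample from their Facebook activity alone.

```python
# Hypothetical sketch of the "deep + broad" approach: learn the mapping
# from Facebook likes to psychometric scores on a small profiled sample,
# then extrapolate it across a large unprofiled one. All data is synthetic.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_items = 500         # pages/posts a user could "like" (the features)
n_deep = 300          # small sample: people who took the psychometric test
n_broad = 10_000      # massive sample: Facebook activity only

# Binary like-matrices: entry (i, j) is 1 if user i liked item j.
likes_deep = rng.integers(0, 2, size=(n_deep, n_items))
likes_broad = rng.integers(0, 2, size=(n_broad, n_items))

# Psychometric scores (say, one Big Five trait) exist for the deep sample only.
trait_scores = rng.normal(size=n_deep)

# Fit likes -> trait on the deep sample...
model = Ridge(alpha=1.0).fit(likes_deep, trait_scores)

# ...then predict traits for people who never took any test,
# inferred purely from what they clicked on.
predicted_traits = model.predict(likes_broad)
print(predicted_traits[:5])
```

The point of the sketch is the asymmetry: the expensive, deep measurement is only ever taken once, on a few hundred people, while the cheap, broad behavioural data does the rest of the work at scale.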

The firm sold this to political campaigns and lobbyists as its “secret weapon”. The model shows a real understanding of social media because it is grounded in people’s actions on Facebook – what they click on, read and like – rather than their expressed statements. It’s what you do that matters, not what you say.

Typically, it is deep, expert research that generates the evidence informing policy. But data-driven governance appears increasingly disassociated from ordinary lives, with voters preferring crowd-pleasing factoids when it comes to major decisions. Indeed, suspicion of experts may even be a contributing factor in the rise of what could be called demagocracy, and of fake news.

In contrast, broad data is generated by people on the basis of what they choose to do, not what an expert has asked or prompted them to say. Neoprivate individuals feel a sense of ownership and investment when they share something on Facebook or Instagram, precisely because they have chosen to share personal, important, unique and private data. In effect, they are sharing their sense of self and individuality – and what could be more important to the modern soul than that?

Yet the odd thing is, as we have seen, that this neoprivacy is a contradiction – a published privacy – and one based on a misconception of consent. We think that when we publish pictures of our kids in Santa’s grotto and tick a little box, we have given our consent for the images to be shared. We appear not to realise that we have also given our consent for them to be sold. But the fact of the matter is, as Sandberg admitted, that sharing is selling, at least online.

Perhaps the final, most astounding result of this neoprivacy is that the very thing that gives our privacy its value – our subjective, special, rights-enriched, agency-performing, millennial selves – is the product. When you share something private online, you transform yourself, according to Facebook’s business model, from a consumer with rights, a subject if you will, into an object with value. You go into the Facebook grotto telling Santa your inner desires, all those toys you crave, but thanks to the magic of those Silicon Valley elves, you exit the grotto on the other side having become the toy, the object, the desire of Santa Claus himself. I wonder if that is why so many kids find St. Nick just a little bit scary? Think of it like Pinocchio or Hoffmann’s famous Sandman, but in reverse. This is the new uncanny: not puppets that become human, but humans that willingly become marionettes, allowing big tech firms and shady third parties to pull their privacy strings for financial gain.