By Holly Marriott Webb
Bravery, according to Aristotle, is largely a matter of identifying the right things to be afraid of. Facing down a house spider should not get you any plaudits for your courage, but risking your life for someone else’s will. Aristotle suggests we are right to fear destitution, disgrace and death, but I would like to add one more fear to the list: data.
If knowledge is power, data is the ultimate weapon. Google knows more about you than your partner, friends or family will ever know. It knows more about you than your state does. It records every search, reads every email, maps every click (yes, even in incognito mode). It knows your age, where you live, where you work, and everywhere you travel. Most people are quite comfortable with this. “No one is actually reading it,” they say. “Even if they were, I’ve got nothing to hide,” they protest. “Everyone watches porn,” they laugh. This is somewhat missing the point.
The truth is that almost everyone would be mortally embarrassed if their internet search history were published. It would be even worse if it were their Facebook messages or, God forbid, their Tinder conversations. On a personal level, this opens people up to online blackmail or extortion by hackers, which is set to become more and more common – cybercrime is predicted to generate more profit than the global illegal drugs trade by 2021. People with a public profile whose career depends on maintaining a good reputation – like politicians – will be particularly vulnerable. If you’re just an ordinary citizen, though, you might reasonably expect that in the great ocean of internet users (nearly four billion in 2017) you are unlikely to be targeted.
But there are reasons why everyone, however ordinary, should be afraid of companies wielding data. If you are understood, you can be manipulated. Facebook lets advertisers use all of your demographic information, interests, shopping behaviour and connections to target you, which is an amazingly powerful set of tools to influence your actions. The company has faced criticism for this, and for the research it conducts on its users. In 2014, for example, there was an outcry when researchers published a study where they manipulated people’s newsfeeds to demonstrate that the positive or negative emotions expressed by others on Facebook influence our own emotions. People were mainly concerned about being experimented on by Facebook and whether they had agreed to have their data used in this way, but I think the results of the study are by far the most worrisome part of the story.
If the content of your Facebook newsfeed can be chosen to make you feel more positive or negative, could it be chosen to elicit a more specific emotion? Could tools be created to make you feel despondent, or apathetic, or elated, or energetic? Could they make you feel violent? Blunter communication technologies have been used to successfully stoke violence before – radio broadcasting, for example, played a role in inciting the genocide in Rwanda. Imagine the power of something more subtle – tiny, personalised hints dripped into your newsfeed, making you feel angry, indignant, marginalised, powerless, afraid. This is not a far-fetched scenario – Facebook’s advertising tools have already been used to great effect by Russia to sow discord among the American electorate.
So, Facebook, Google and other internet companies should not be offering up our data to advertisers – at best it helps them to emotionally manipulate us into buying things, at worst it can be a weapon in the hands of dividers and conquerors. Should companies be allowed to collect it at all? Even if the data were incredibly well protected, if we could all but guarantee that it would never be used for ill, would it be right for it to exist? We can draw an analogy here with nuclear weapons. If all of them were locked into a no-first-use structure – guarded in a way that meant they could not be fired except in retaliation to another nuclear attack – you could argue that there is no harm in anyone having them. But really, we would all be safer from mass destruction if they just did not exist, and we would all be safer from mass manipulation if enormous amounts of data on us did not exist.
We should be afraid of a world where the stupid things we say or the embarrassing things we look up can be held against us for the rest of our lives. We should be afraid of a world where elections can be manipulated and anger can be manufactured through platforms we willingly supply with our most personal information. And we should be afraid of a world in which we are known and understood, even if no one is looking.
We know what to fear. To be brave, Aristotle-style, we must face up to the internet companies and end the mass collection of personal data.