With 1.23 billion users – a number almost equal to the population of China – Facebook is the world’s largest social networking site. But it is now being investigated by the UK’s Information Commissioner’s Office (ICO) to see whether it broke any privacy laws in carrying out its notorious emotion study.
As most of you will probably have heard by now, it recently emerged that Facebook carried out a week-long study on “emotional contagion” involving 689,000 English-speaking Facebook users in January 2012 – without their consent. The aim of the study, conducted with academics from Cornell and the University of California, was to find out whether positive or negative words in the content – videos, statuses, pictures and so on – shared by our Facebook ‘friends’ have an influence on the positive or negative words we use when sharing our own. In other words: if our friends appear to be happy, are we happy too?
The study was run in two parts: one reduced users’ exposure to ‘positive’ content, and the other reduced exposure to ‘negative’ content. The users’ Facebook activity was then monitored to see how the content they had seen affected their own. The result: users exposed to less positive content tended to be less positive when posting their own.
News of the study has sparked outrage amongst the public, with the experiment being described as ‘disturbing’, ‘creepy’, ‘spooky’ and ‘evil’. It has fuelled debates about mass deception, corporate ethics, privacy laws and much more. People are worried about what Facebook could do next.
So, has Facebook gone too far this time? And should we be worried about what they might do in the future?
On the one hand, it’s easy to see why Facebook did it. If users had been told about the study before it was carried out, would the results have been the same? I don’t think so. Because users were unaware of what was happening, they responded naturally, giving Facebook accurate results. Facebook has said that close monitoring of users is what enables it to tailor its services to users’ needs and to provide advertisers with detailed targeting information. And to be fair, isn’t our every move on the internet being watched by someone or other anyway? We probably shouldn’t have expected anything less than deception like this from them. Facebook is a multi-billion-dollar corporation out to do whatever it can to make money. Who can blame them?
Facebook has argued that its terms and conditions involve agreeing to turn over your information for “data analysis, testing [and] research”. But even if the terms and conditions do cover it, that doesn’t make it ethical: they should still have asked participants for consent. They have now risked a horrible reputation and a great deal of mistrust, doubt, suspicion and anxiety from users – not to mention a fine of up to £500,000 from the ICO. Not that that’s a lot of money to them.
So, what’s next?
We now know that Facebook can manipulate our emotions by determining what content we do and don’t see on the site. But what could this mean for the future? Could they use this tool to influence who we vote for, what we think, and even how we behave? It’s scary to think that, if they wanted to, Facebook could probably influence users on just about anything. And with over 1.23 billion users, that’s a lot of influence to be had. Whether they will or not remains to be seen. We don’t know what Facebook has planned for the future; all we can do is wait and see.