The Unwitting, Unethical Facebook Emotion Study

This weekend, the Facebook-using public—so essentially everyone but your off-the-grid friends, who may be a little bit snobby but are relishing their "I told you so"s right about now—was livid to learn about a recent study published in the Proceedings of the National Academy of Sciences titled "Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks." The study, conducted by Adam Kramer of Facebook's data research team, Jamie Guillory of the Center for Tobacco Control Research and Education at the University of California, San Francisco, and Jeffrey Hancock of the Departments of Communication and Information Science at Cornell University, set out to examine whether emotional contagion, the notion that how we feel is in part shaped by the expressed emotions of those around us, can occur through social networks. What raised the most eyebrows, and the ultimate furor, was how the experiment was conducted: by filtering the news feeds of unwitting Facebook users.

For the week of January 11-18, 2012, 689,003 randomly selected "participants" had their news feeds filtered for positive and negative words using the Linguistic Inquiry and Word Count (LIWC) software. The experiment found that, within their respective groups (roughly 155,000 people per positive, negative, and control condition), individuals whose news feeds skewed toward positive statuses were more likely to post positively, and with greater frequency. Individuals whose news feeds skewed toward negative statuses were more likely to post negatively, and at times with less frequency. The control group had their friends' status updates filtered out at random, meaning it sometimes didn't even matter what your friends said; the algorithm could throw things out for no good reason at all. That's just science! While the study covered only a portion of the overall English-speaking Facebook community, the better part of a million users isn't anything at which to sneeze.
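To make the mechanics concrete, here's a minimal sketch of what dictionary-based filtering of that sort looks like. This is an illustration only: the word lists below are made up, not the actual LIWC lexicon, and the function names are my own, not anything from the study or from Facebook's systems.

```python
# Toy sketch of LIWC-style sentiment filtering. The word lists are
# invented for illustration; the real LIWC dictionary is far larger.

POSITIVE = {"happy", "love", "great", "nice", "sweet"}
NEGATIVE = {"sad", "hate", "awful", "hurt", "ugly"}

def classify_status(text):
    """Label a status 'positive', 'negative', or 'neutral' by counting
    word matches against the two lists (punctuation stripped)."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

def filter_feed(statuses, suppress):
    """Drop statuses whose label matches the suppressed condition,
    mimicking how exposure to one emotion was reduced in a feed."""
    return [s for s in statuses if classify_status(s) != suppress]
```

Note what this naive approach cannot do: a status like "I'm not happy" counts as positive, because bare word-counting has no grasp of negation, sarcasm, or tone, which is exactly the accuracy concern raised below.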

Setting aside the fact that such software has a few peccadillos when it comes to reading tone, detecting sarcasm, or handling negatives in a sentence, there's real doubt about whether Facebook users gave informed consent to take part in an experiment merely by agreeing to (and not really reading, because, I mean, come on) the terms of service, wherein the data given to Facebook may be used "for internal operations, including troubleshooting, data analysis, testing, research and service improvement." So the legality of the matter may well hold up, but it should be pretty clear that this all feels somewhat unethical and super creepy, as if that were any surprise coming from Facebook (which has already put out a rather callous response to the backlash).

So yes, Facebook is out there playing with our emotions every day by keeping us connected in ways we may never have anticipated, and yes, that news feed algorithm has gotten rather pernicious in the way it filters status updates and ads, but who knew they were so nefarious? The company never ceases to amaze.

The ethics of the matter should be quite clear. A large swath of users had their emotions played with under the shakiest degree of consent. A website devoted to the sharing of personal information edited said information without duly informing its users. A study seemingly conducted under conditions as pure as could be (because there honestly couldn't be a better setting for a study than this) made an egregious overstep of trust using faulty software (the researchers weren't just being unethical; they were also being inaccurate). The list could go on. Could we really expect any less from Facebook?
