media, mood manipulation, and morals

If you read anything news-related, it’s likely you heard about last week’s ethics and privacy discussions surrounding Facebook and social science research. Essentially, a team of researchers from Cornell University and Facebook published a paper, “Experimental evidence of massive-scale emotional contagion through social networks,” in the Proceedings of the National Academy of Sciences (June 2014). The study reports manipulating the News Feeds of hundreds of thousands of Facebook users to control the emotional content they saw from their networks, and then analyzing the users’ own emotional responses — all without any in-person interaction.

The research findings are fascinating. They’re succinctly described in the following excerpt from the paper’s abstract:

Emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. Emotional contagion is well established in laboratory experiments, with people transferring positive and negative emotions to others. Data from a large real-world social network, collected over a 20-y period suggests that longer-lasting moods (e.g., depression, happiness) can be transferred through networks [Fowler JH, Christakis NA (2008) BMJ 337:a2338], although the results are controversial. In an experiment with people who use Facebook, we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks. This work also suggests that, in contrast to prevailing assumptions, in-person interaction and nonverbal cues are not strictly necessary for emotional contagion, and that the observation of others’ positive experiences constitutes a positive experience for people.

Jeff Hancock, one of the lead authors of the study, was by far one of my favorite professors at Cornell. HOWEVER, having had prior knowledge of this study, I’ve always been curious about the point at which it would become an ethical issue, and I’ve always questioned it. I myself have used public social media analysis in research, but there is a very, very fine ethical and moral line between analyzing things anyone can see and manipulating what people see in order to gauge their emotional reaction.

I think they either crossed it, or got far too close. But that’s just my opinion. 

What about people who are suffering from mental illnesses and disorders? How might this manipulation have contributed to self-harm and violence? Even if nothing done here was “illegal,” per se, or even “unethical” as defined in such research areas (the researchers are likely protected under Facebook’s Terms and Conditions), at the most basic level this study inflicted unnecessary pain on hundreds of thousands of people. And at what cost, for what benefit? A publication?

According to Kashmir Hill’s fascinating and well-worth-reading analysis in her Forbes piece, “Facebook Added ‘Research’ To User Agreement 4 Months After Emotion Manipulation Study”:

In January 2012, the policy did not say anything about users potentially being guinea pigs made to have a crappy day for science, nor that “research” is something that might happen on the platform.

One of my friends equated my argument to “blaming crime scene investigators for murder. [The Cornell University researchers] didn’t have any part in what Facebook had done; they just did the data analysis. Cornell’s Institutional Review Board didn’t pre-approve because the experiment had already been done.”

I don’t want to say whether or not these researchers should be punished, because I don’t know the full story. HOWEVER, even if my friend’s argument is in fact the case, I think that at some point you have to step back morally and ask whether your personal image as an academic is worth building on, or supporting, research programs (Facebook’s or anyone else’s) that cross these questionable ethical lines. “Ethical” or not in the eyes of an institutional review board, at some point common sense and humility need to prevail.

Perhaps this is one of the reasons academics are often perceived to be insensitive and disconnected? Social science already gets a bad rap. We need to remember that, ultimately, we are studying people. Real, living, breathing, feeling people.

(Featured Image via The Next Web)

3 thoughts on “media, mood manipulation, and morals”

    • Thanks, Prof. Lewenstein! This was a very informative read. It’s nice to know my bit of a tangent wasn’t way off base. I sort of look at it like this: just because it’s not “illegal” (nationally, anyway) to smoke a pack of cigarettes in your closed vehicle with your children in the back seat, it doesn’t mean you *should.* In the long run, if research ethics is approached like that, with boundaries pushed the way smoking in cars persists even though we know it’s not good for our kids, academics will earn little trust, not to mention respect.
