June 29, 2014

Ask first

Via The Atlantic, there’s a new paper in PNAS (open access) that I’m sure is going to be widely cited as an example by people teaching research ethics, and not in a good way:

 In an experiment with people who use Facebook, we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks.

More than 650,000 people had their Facebook feeds meddled with in this way, and as that paragraph from the abstract makes clear, it made a difference.

The problem is consent.  There is a clear ethical principle that experiments on humans require consent, except in a few specific situations, and that the consent has to be specific and informed. It’s not that uncommon in psychological experiments for some details of the experiment to be kept hidden to avoid bias, but participants still should be given a clear idea of possible risks and benefits and a general idea of what’s going on. Even in medical research, where clinical trials are comparing two real treatments for which the best choice isn’t known, there are very few exceptions to consent (I’ve written about some of them elsewhere).

The need for consent is especially clear in cases where the research is expected to cause harm. In this example, the Facebook researchers expected in advance that their intervention would have real effects on people’s emotions; that it would do actual harm, even if the harm was (hopefully) minor and transient.

Facebook had its research reviewed by an Institutional Review Board (the US equivalent of our Ethics Committees), and the terms of service say they can use your data for research purposes, so they are probably within the law.  The psychologist who edited the study for PNAS said

“I was concerned,” Fiske told The Atlantic, “until I queried the authors and they said their local institutional review board had approved it—and apparently on the grounds that Facebook apparently manipulates people’s News Feeds all the time.”

Fiske added that she didn’t want “the originality of the research” to be lost, but called the experiment “an open ethical question.”

To me, the only open ethical question is whether people believed their agreement to the Facebook Terms of Service allowed this sort of thing. This could be settled empirically, by a suitably-designed survey. I’m betting the answer is “No.” Or, quite likely, “Hell, no!”.
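As a purely illustrative sketch (not part of the post or the study, and with made-up placeholder numbers), the analysis of such a survey could be as simple as estimating the proportion of users who say the Terms of Service did not cover this kind of experiment, with a confidence interval:

    from math import sqrt

    n = 1000        # hypothetical sample size (assumption, not real data)
    said_no = 870   # hypothetical count answering "No, I didn't agree to this"

    p_hat = said_no / n
    se = sqrt(p_hat * (1 - p_hat) / n)                     # standard error of the proportion
    lower, upper = p_hat - 1.96 * se, p_hat + 1.96 * se    # approximate 95% interval

    print(f"Proportion saying 'No': {p_hat:.2f} (95% CI {lower:.2f} to {upper:.2f})")

If the interval sat well away from a majority saying “Yes”, the Terms-of-Service defence would look thin.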

[Update: Story in the Herald]


Thomas Lumley (@tslumley) is Professor of Biostatistics at the University of Auckland. His research interests include semiparametric models, survey sampling, statistical computing, foundations of statistics, and whatever methodological problems his medical collaborators come up with. He also blogs at Biased and Inefficient.

Comments

  •

    One of the things that concerns me is that commercial operations undertaking what we in universities, or those doing government-commissioned research, might call “human subjects research” require no ethical approval at all. That whole concept of ethical oversight doesn’t exist in my experience (having worked in both worlds).

    Does anybody else have experience of what ethical oversight controls there are in a purely commercial research environment? Other than free-market principles and dollars, that is? Yes, there are Market Research Society and Direct Marketing “guidelines” to act as Watchpoodle of the public interest, but is anybody aware of any research which really was stopped in its tracks by these?

    At best the ambulance at the bottom of the cliff (after the research has been done) is trouble getting it reported in reputable places. Which simply means you feed a PR document to the newspapers or TV instead.

    Or am I being too cynical?

    10 years ago

    • Megan Pledger

      I have a vague idea that any human research not captured by an institution’s ethics committee used to have to go to a DHB research and ethics committee, but I think they changed it recently so that people can now go to regional committees not attached to DHBs.

      10 years ago

      • Barbara Joppa

        In NZ, all research on human subjects must go through the NZ HDEC (ethics.health.govt.nz). This is all centralised, such that if there are multiple sites doing the research in NZ, only one site completes the application. But all sites, or institutions (usually DHBs or universities), must also approve the research through their own research offices. And then there is the Māori research review… There is plenty of oversight for research in NZ, but it seems to work well.

        10 years ago

  •

    Apparently all sorts of websites do this. For example, Duolingo gives different users slightly different versions of its courses so it can test hypotheses about how people learn/get addicted to the site. Is this one bad because it involved emotions?

    10 years ago

    • Thomas Lumley

      The emotions aspect is one reason — specifically, that they expected to cause harm. Presumably, Duolingo doesn’t try versions of its courses that it expects to be less effective than the current one — their situation is more like a clinical trial than like psych research.

      More generally, though, I think there’s a big difference between trying out changes to your service to see whether they are improvements, and using your customers for unrelated research.

      10 years ago

      • Martin Kealey

        Where is the line between “harm” and “the mundane knocks of life”? And where is the line marking sufficient intent (or disregard) on the part of the experimenter?

        And in this case, it’s arguable that the *entirety* of Facebook is a psychological experiment being done for profit, and yes, it causes harm in the sense of loss of privacy, disconnection from RealLifeFriends(tm), etc.; so how is this particular “experiment” morally different from the rest of their operation?

        10 years ago

        • Brendon

          “disconnection from RealLifeFriends(tm) etc;”

          Is there any evidence that this is true in general? My experience has been that Facebook helped me maintain lots of friendships, particularly with Australians who I rarely see in person since I left 5 years ago.

          10 years ago

        • Thomas Lumley

          Like Brendon, I don’t really buy the disconnection from Real Life Friends.

          I don’t have evidence one way or the other for adults, but danah boyd writes about this for teenagers in her book “It’s Complicated”.

          10 years ago

  •

    Great post, thanks Thomas. Agreeing to lengthy Terms of Service that basically no one reads when signing up is “consent” but not “informed consent”.

    A vague notion of “research” in Facebook’s context makes me think of research to maximize revenue for Facebook.

    This feels very different to me from common A/B testing, which is aimed at improving conversions.

    10 years ago

  • Barbara Joppa

    I work in clinical trials – setting up ethics approvals and regulatory documentation. The comment about consent vs informed consent is absolutely correct. I am stunned that an IRB approved the study. International Good Clinical Practice (ICH-GCP) defines informed consent very clearly. The study is unethical on the grounds of consent. One more reason to delete the Facebook account.

    10 years ago

  • David Hood

    Private companies can do a lot of stuff that people who limit themselves to “first do no harm” studies cannot do. Whether they should or not is arguable (and whether it is in their long-term interest is another matter).

    However, I think there is a much clearer case that PNAS should not have published the research, given that all research it publishes is supposed to (indeed, must) conform to the Declaration of Helsinki. See: http://www.slate.com/articles/health_and_science/science/2014/06/facebook_unethical_experiment_it_made_news_feeds_happier_or_sadder_to_manipulate.html

    10 years ago