Last week, Facebook notified its users of changes it is making to its research activities. This is a welcome development given the concern that arose recently around user testing and online research. Earlier this year, many users were surprised and angered by allegations that Facebook had experimented on users to see if emotions were “contagious” on the site. Similar concerns were raised last year about another study of “self-censorship” on Facebook. The public outcry that followed brought to light the extent of testing and experimentation that occurs when users use online services, ranging from complex studies like the Facebook research, to more basic A/B testing, which allows sites to test different layouts, styles, and designs.
When user testing rises to the level of changing a user’s experience or engaging in analysis of private information that users don’t reasonably expect a service provider to examine, we think that providers should inform users that such testing may occur.
While some may argue that informing users would lead to an observer effect, in which users change their behavior merely because they are being observed, we believe that informing users is vital to conducting responsible, ethical research. There are well-known methods from experimental research, such as debriefing (when a user isn’t informed before an experimental intervention but is instead told afterwards and given an opportunity to opt out at that time), that can easily accommodate these methodological concerns. It’s important that online service providers err on the side of safety and user trust, given that, unlike in university psychology departments, subjects usually haven’t actively consented to research, and that providers likely aren’t obligated to use structures like Institutional Review Boards to oversee research practices.
To that end, the Facebook announcement highlights several measures that we believe will create meaningful oversight. The new framework requires that sensitive research (like the study that caused the initial uproar) go through an enhanced review process; creates an interdisciplinary review board of engineers, researchers, lawyers, and policy experts; increases training for engineers and data analysts on research issues; and establishes a centralized portal where the public can access Facebook research studies. All of these steps are welcome ones, and should reduce the opacity of current online research practices. We hope other sites follow Facebook’s lead and make similar pledges and information available to their users. Greater transparency in this area will only benefit users, and increase their trust in the companies that engage in such research.
There is always room for improvement, and we hope that Facebook and other online providers will continue to evaluate their efforts here and make them even stronger in the future. For example, while Facebook’s blog post states that the company has given its researchers new, clear guidelines, those guidelines are not available to Facebook’s users. Having a sense of what those standards are would help users understand, broadly, what research might be conducted, and would allow for external oversight in addition to the internal safeguards that Facebook has instituted. Knowing which lines Facebook’s standards won’t let it cross would likely reassure the company’s massive user base.
Further, the new research portal is an effective place to highlight the research that Facebook engages in, but it has little to offer on the procedural mechanics of how research is done inside Facebook. People will be interested in details about the makeup of the internal review board, what experience its members have, what types of experiments will never be approved (regardless of the business case or potential to contribute to generalizable knowledge), and an archive of past research protocols that describe the specific protections put in place. For example, some misleading reports about Facebook’s self-censorship study gave the impression that unposted content was being stored and analyzed, when in fact the researchers specifically designed their experiment to work only on the client side, within the browser, sending to Facebook’s servers only aggregate information about the user interaction, not the content of the unposted messages. We’d like to see Facebook go further and offer a “Frequently Asked Questions” (FAQ) page on the new research portal that fields questions from users, other researchers, and research ethicists, and better describes the mechanics of how research is done.
User research can be a sensitive subject, not only in terms of the potential effects on users and their trust in providers’ custodianship of their data and relationships, but also because of the competitive dynamics among online providers seeking to improve their products relative to their competitors. However, we think that clarity about the conduct of research with users is critical for trust, and that transparency about research practices can be achieved in a way that doesn’t affect competitive interests. We applaud Facebook for the changes it has just announced, while recognizing that there are improvements that can and should be made. Other service providers engaged in similar research practices should take similar steps to publicize what research they’re conducting and how they create internal oversight to protect consumers. Most importantly, they should publicize the substantive guidelines and protections they have created, in order to promote research while keeping individual privacy and ethical interests front and center.