Reforming the Non-Medical IRB: A Shift from Preventing Harm to Doing Good

As some of you know (grandma), my area is law and mind sciences. To date, most of my scholarship has involved applying existing insights from social psychology, social cognition, and other fields to legal topics. However, over the last few months, I’ve been working on designing a set of experiments with a cognitive psychologist and, as a result, I have had a chance to engage the institutional review board process for the first time.

I must say that while the people running the IRBs at Drexel and Penn seem well-intentioned and nice enough, the process is utterly befuddling to me. As has been noted on this blog previously, more legal academics are doing work that is potentially covered by IRBs than ever before, and it is worth pausing to think about whether radical changes to the existing approach might be appropriate.

(I certainly do not purport to be the first person to advocate reform in this area or to have thought about it as much as others; my hope is that this post will provoke some readers to consider their experiences and whether they feel like the current IRB process is worth its costs.)

I’d like to focus on the non-medical IRB (covering social and behavioral research, ethnographic studies, etc.) and I’d like to propose eliminating review completely in this area. No more paperwork, no more calls, no more meetings. Instead, we will simply rely on professional norms to channel behavior and existing legal mechanisms to deter the most harmful conduct. (I will leave to the side, in this post, the sticky issue of university liability.)

Now, this doesn’t mean that everyone is off the hook. All of the money and energy that universities currently expend on the IRB process will simply be redirected. The idea is to use resources to directly improve people’s lives, rather than to try to avoid harms that may or may not arise. All of the time previously spent filling out paperwork, asking and answering questions on the phone, taking human-subjects tests, and filing updates, among other things, would now be spent actively participating in socially beneficial endeavors.

As a licensed attorney, what if I spent every hour I would otherwise expend on IRB compliance volunteering at a legal aid clinic instead? Or what if I used that time to help high school students in North Philadelphia work on their college essays, or to remove trash from the Schuylkill River? What if all of the staff at the Office of Research Compliance spent their days finding and coordinating opportunities for professors to volunteer in the community? I would argue that the social good likely to result would considerably outweigh the potential costs of not subjecting non-medical experiments to formal review.

The truth is that the new regime would not be perfect—people would occasionally be harmed—but the magnitude of this threat might be less than imagined. When a person goes to design a psychology experiment there are many factors that act as constraints on the design: Do my colleagues approve of my proposal? Will members of my field look favorably on this experiment? Will resulting harms negatively impact my tenure review (remember that Stanley Milgram was denied tenure at Harvard)? Does this align with my sense of morality? Will my friends/parents/wife/children think less of me if someone is hurt on my watch? How does this experiment compare to other experiments that were conducted in the past and how did people react to those projects?

The IRB process is not the primary reason why the vast majority of non-medical experiments today do not pose major risks to human subjects. It seems to me that while the process prevents some harms, it does not prevent enough to justify its existence, and that thinking of alternative uses for the resources currently dedicated to IRBs has the potential to leave us all better off.

10 Responses

  1. @Adam, I do human subjects research in identity theft at UC-B, and comply with IRB regs. No one wants to be subject to such oversight, but the risks to research subjects are real, even in the social sciences. In my field, carelessness could put the university at risk, thus it has an interest in ensuring that I have security measures and procedures to protect data. Ethical norms aren’t enough to police this risk, esp w/r/t the fair treatment of research subjects. It is not clear to me how the state bar would effectively discipline me if my research were unfair to subjects.

    Recently, when conducting a telephonic opinion survey of Americans on privacy issues, we sought IRB approval and got it pretty quickly and without much hassle. I think the IRB understands risk and adjusts administrative burden accordingly.

  2. The idea that professional norms are sufficient to prevent unethical conduct even in human subjects research is belied by the extensive history of such conduct in the U.S. in the 19th and 20th centuries (esp. in the latter). See here for a short discussion of this history in response to a recent post here at CoOp by Deven.

Social science research may be of lesser risk, but the risk is certainly non-negligible, as Milgram and Asch could probably attest.

    Of course, the insufficiency of professional self-regulation to ensure virtuous conduct does not imply that IRBs are a good mechanism for such protection. For a variety of reasons, I share your perspective that they are not. But leaving it up to professional norms, which have been historically completely unable to police abuses in human subjects research, is almost assuredly a worse resolution.

  3. The medical/non-medical distinction is far too coarse. Take interviews as an example. These can cover information that, if made public, could embarrass an individual, contain evidence of criminal behavior, or embarrass a company. Writing a protocol, preparing a statement of informed consent, etc. forces you to think through how you’ll protect data and minimize the risk of leaking it or being compelled to turn it over in readily usable form to investigators or litigants.

    Dealing with IRBs does require a bit of learning, but I found the process went very smoothly when I teamed up with someone who had been through it before.

  4. Adam Benforado says:

    Chris, Daniel, and Aaron, thanks for the thoughtful comments!

    Here are some quick responses:

    It’s important to clarify that I did not (and do not) claim that IRBs do not prevent any harms suffered by members of the public. I do, however, believe that the threat of these harms is not as substantial as some other scholars have suggested. Daniel, the strongest examples of 19th and 20th century abuses are in the medical arena, and I have specifically excluded these by focusing on the non-medical IRB context (Aaron, I agree that the distinction is a bit coarse, but I believe it is the way that many schools differentiate things, including Drexel, Duke, Stanford, the University of South Dakota, etc.). Yes, some subjects in the Milgram experiment reported being very upset as a result of their participation in the “shocking” studies, and we certainly need to consider these real risks (and ought to try to minimize them), but I would argue that they are ultimately, with some important exceptions, of a different degree of seriousness. Embarrassing an individual or company by accidentally releasing some data is not the same as poisoning someone with dioxin.

    More critically, I think it’s worth considering how different the environment of non-medical university research is today than it was 50 or 100 years ago. As Milgram himself would remind us, the situation of the individual is what you need to worry about, not his or her disposition, and the situation has changed. To point to but one alteration, I would assert that academics today engage in far more exchanges as they construct projects than their predecessors as a result of various technological advances and changes to professional and social practices. We email colleagues, fly across the country and test out ideas at conferences, blog about potential research topics, etc. All of these things provide us with feedback on our experiments before a subject ever walks in the door of the laboratory or logs into our online survey. Likewise, all of us in academia operate against the backdrop of the tragedies of the 20th century. Every college freshman who takes Intro Psych reads about Zimbardo’s prison experiment and Milgram’s work. Does this guarantee that these students won’t, later in life, design experiments that harm people? No, but we should expect that it at least lowers that probability.

    Finally, and this is the most important point, I do not just want to eliminate the IRB process and leave everything “up to professional norms.” I want to replace the IRB process with a robust regime aimed at directly creating benefits to society. I am proposing that professors use the time that they would otherwise spend on compliance to help people in the community, and that Offices of Research Compliance change their focus to coordinating opportunities for professors to volunteer. This is about where we allocate resources. The choice, to me, is between spending time, money, and energy on preventing harms that may or may not ever occur and devoting resources to efforts that help real people, here and now.

    Okay, must run to a faculty candidate dinner!

  5. Adam,

    Thanks for the thoughtful reply. I certainly do not deny the distinction in either the probability of the risk or the magnitude of the harm from social science vs. biomedical research. (Quite the contrary, I both work in a medical center and do some oral history work, and so am somewhat familiar with the distinction). But I do think it is ethically inadvisable to rely as heavily on this distinction as you seem to do. The psychosocial harms the Milgram-Asch experiments caused the subjects were, IMO, not nearly as insignificant as seems to be implied in your choice of words. Such harms are quite real and may be extremely potent, such that even a lower likelihood of their occurring is insufficient to simply do away with any system of oversight whatsoever (which I now understand you may not be suggesting, but which I submit was not entirely clear from the original post).

    Moreover, I think an undue focus on the risk of harms itself risks, no pun intended, a lack of attention paid to the well-documented dynamics by which investigators in any kind of human subjects research all too often come to act in ways that do not seem in the best interests of their subjects. Given these dynamics, and the enormous amount of evidence we have that this happens all too often, I would want a great deal more detail as to exactly what you would replace the IRB system with. For despite your clarification, I still see far too much reliance on professional norms, media exposure, etc., to make me comfortable (e.g., I have no idea what a “robust regime aimed directly at creating benefits to society” has to do with research oversight, nor do I see how it guards against abuse. Who would oppose such a generic formulation?). Most historical examples of unethical human subjects research were not clandestine. Investigators in the Tuskegee study published their results in leading medical journals for decades, as did many other investigators involved in other unethical human subjects research.

    The history of unethical human subjects research should put firmly to bed the notion that self-regulation is sharp enough to police human subjects research, and I see no reason to exempt social science research from this evidence base simply because the risks are lower.

    Thanks for the interesting discussion.

  6. Kelly Hills says:

    To follow up on Daniel’s commentary, it’s also worth noting that we are talking “invisible” (potential) harms for these non-medical cases that you refer to, Adam. Invisible injuries – PTSD is a fabulous example – are difficult to treat and even more difficult to give legitimacy to. Adding to that the stigma that comes with these invisible problems, the potential lack of treatment, and other possible harms? I don’t see how people can advocate for anything less than full IRB process.

    Yes, IRBs are often inefficient, and like much of our medical and academic system are in need of an overhaul. That doesn’t make them pointless, even if the paperwork is inefficient. They exist because academics and researchers have shown repeatedly, over a long period of time (again, as Daniel points out), that they are unable or incapable of properly policing themselves. I think it is bordering on hubris to suggest that “we’re all better now.”

    Checks and balances exist for a reason. Make them more efficient, sure—but ditching them altogether, for a return to the way things were, is not the solution.

    Besides, everyone knows graduate students should be used to put together all but the final touches of IRB forms. 😉

  7. Adam Benforado says:

    Daniel and Kelly,

    Thanks for the follow up comments; they are certainly helpful in trying to better clarify my argument!

    1. To be very clear, I do not think that “we’re all better now.” Rather, I think that there are reasons to believe that the same abuses that occurred 50 or 100 years ago are less likely to occur now (as I tried to sketch out in my last comment).

    2. I do not think that IRBs are “pointless.” I think they do prevent certain harms. My concern is with the cost/benefit here: are there ways that we could use existing resources that would better serve the public (with respect to maximizing happiness / minimizing acute suffering)? I could be required to spend a couple of hours filling out IRB forms or I could be required to take two hours to do some direct community service activity like offering legal advice to people who are being threatened with foreclosure. Which is a better use of my time?

    3. I think we need to be very concerned about psychosocial harms and agree that these are often hard to measure. That said, I’m not sure that IRBs—and IRBs alone—have prevented much in the way of serious psychosocial harms. I readily concede that I just don’t have the data here and would be interested to hear from those who do! My hazy memory of reading about the Milgram experiments, for example, was that certain participants were quite upset that they had gone all the way to 450 volts and, even after being debriefed about the powerful situational manipulations by the experimenters, continued to dispositionalize their own actions. What I don’t know is how many people felt this way and whether these folks experienced continuing severe anxiety, depression, etc. weeks, months, and years after the experiments. Moreover, I don’t have a good sense of whether IRBs can be credited with preventing similar experiments (and suffering) in subsequent decades . . .

    All right, I think I need to call it a night, but thanks, again, for raising your concerns and sharing your thoughts; they’re much appreciated!

  8. Joe says:

    I think the possibility of doing harm to people is more real and happens a little too often for us to not focus on that when drafting policies. We just cannot always trust people to do the right thing or not make mistakes.

  9. Sorry, not buying it. The IRB itself doesn’t prevent the harm; rather, it forces you to perform a series of processes and behaviours that help ensure the risks are minimized, the study is worth the reward, and the subject is informed.

    I’m sure many will agree that in most (behavioral/social) cases the IRB review isn’t necessary, but how do we find those cases? Are you confident that researchers will know when their study should be reviewed? I’m not. By pushing all research down the same path, we can ensure that those research projects that are risky (a relative term) are properly reviewed.

    Can the process be streamlined? Sure. Are there cases where the effort required to go through the IRB wasn’t worth it? Sure. And are there cases where the IRB reviewed and okayed something and it was still risky? Sure. Nothing is perfect. But like so many other things in modern democratic societies, we’re willing to do extra work to help ensure that the rights of all are protected. It’s the price we pay (willingly, I hope) in our society.

  10. @Adam, thank you for your thoughtful responses; I understand your position better. But FWIW, my situation is different than you describe—I am very secretive about what I am studying because it has political implications. Thus, no one (aside from collaborators and the IRB) even really knows what I’m up to; I imagine that others have similar restraints on their research, especially those who are involved in sponsored projects.

    The IRB process also encourages the researcher to think about how the project will affirmatively help subjects. That duty is not intuitive, and I’ve found that the exercise helps humanize the subjects. It’s all too easy to treat them as objects in the struggle to get research out.