Review: The Censor’s Hand (Carl E. Schneider)

Medical and social progress depend on research with human subjects. When that research is done in institutions getting federal money, it is regulated (often minutely) by federally required and supervised bureaucracies called “institutional review boards” (IRBs). Do–can–these IRBs do more harm than good? In The Censor’s Hand, Schneider addresses this crucial but long-unasked question.
Schneider answers the question by consulting a critical but ignored experience–the law’s learning about regulation–and by amassing empirical evidence that is scattered around many literatures. He concludes that IRBs were fundamentally misconceived. Their usefulness to human subjects is doubtful, but they clearly delay, distort, and deter research that can save people’s lives, soothe their suffering, and enhance their welfare. IRBs demonstrably make decisions poorly. They cannot be expected to make decisions well, for they lack the expertise, ethical principles, legal rules, effective procedures, and accountability essential to good regulation. And IRBs are censors in the place censorship is most damaging–universities.
In sum, Schneider argues that IRBs are bad regulation that inescapably do more harm than good. They were an irreparable mistake that should be abandoned so that research can be conducted properly and regulated sensibly.

Did you read Scott Alexander's blog post about his IRB horror story and wonder whether there's more of that kind? Well, there is, and Schneider has written an entire book about it. Schneider is a rare breed of polymath, a professor of both medicine and law at the University of Michigan. The book proceeds in fairly simple steps:

  1. Contrary to the impression created by a few sensationalized stories such as the Nazi camp experiments and the Tuskegee study, harm to patients in research was actually very rare before IRBs were introduced. In fact, it is safer to be a research subject than an ordinary patient. So, for IRBs to make sense, they must further reduce already low levels of harm without creating new harm by wasting researchers' time and money, delaying useful treatments, and so on.
  2. Based on the available evidence, IRBs as currently practiced completely fail this test. They are expensive, slow, and arbitrary. He cites experiments in which the same protocol was sent to different IRBs only to receive different judgments, sometimes even contradictory revision requirements (one board demanded that children's parents be told, another that they not be told).
  3. He diagnoses the problems of IRBs as stemming, paradoxically, from a lack of clear rules: instead, they rely on lofty but vague principles like those of the Belmont Report (respect for persons, beneficence, and justice). Furthermore, IRB members rarely have the expertise needed to judge the studies they are supposed to regulate, since they are seldom subject-matter experts, and some are complete laymen.
  4. He further argues that the very nature of IRBs as event licensing (you must get permission before doing anything, rather than the usual rule of being punished after you do something wrong; the burden of proof is reversed) produces ever-creeping scope, so the system is fundamentally broken and cannot be fixed by reform. IRBs were originally meant for medical research, but now they try to regulate pretty much everything in social science as well.

He illustrates all this with a number of disturbing case studies, similar to the one in Alexander's post. Let's start with a comparatively mild one:

Intensive care units try to keep desperately ill people alive long enough for their systems to recover. Crucial to an ICU’s technology is the plastic tube threaded through a major vein into the central circulatory system. This “central line” lets doctors give drugs and fluids more quickly and precisely and track the patient’s fluid status better.

Every tool has drawbacks. An infected IV in your arm is a nuisance, but the tip of a central line floats near your heart and can spread bacteria throughout your body. When antibiotics fail to stop these infections, patients die. Because there is one central-line infection for every 100 or 200 patient-days, a hospital like St. Luke’s in Houston, with about 100 ICU beds, will have a central-line infection every day or two. There are perhaps 10,000 or 20,000 central-line fatalities annually, and a 2004 study estimated 28,000.
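As a quick back-of-envelope check of that arithmetic (the bed count and infection rates come from the passage above; the calculation is mine, not the book's):

```python
# Rough sanity check of the central-line infection figures quoted above.
# Inputs are the passage's numbers; the arithmetic is mine, not the book's.

icu_beds = 100  # "about 100 ICU beds" at a hospital like St. Luke's

# A full 100-bed ICU accrues roughly 100 patient-days per calendar day.
patient_days_per_day = icu_beds

for days_per_infection in (100, 200):  # one infection per 100 or 200 patient-days
    infections_per_day = patient_days_per_day / days_per_infection
    print(f"1 infection per {days_per_infection} patient-days -> "
          f"{infections_per_day:.1f} infections/day, "
          f"i.e. one every {1 / infections_per_day:.0f} day(s)")

# Prints ~1.0 and ~0.5 infections per day: an infection every day or two,
# which matches the passage's claim.
```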

There is a well-known sequence of steps to follow to reduce these infections: (1) wash your hands, (2) don cap, mask, and gown, (3) swab the site with antibiotic, (4) use a sterile full-length drape, and (5) dab on antibiotic ointment when the line is in. Simple enough. But doctors in one study took all five steps only 62% of the time. No surprise. Doctors might forget to wash their hands. Or use an inferior alternative if the right drape or ointment is missing.

Peter Pronovost is a Johns Hopkins anesthesiologist and intensivist who proposed three changes. First, have a nurse with a checklist watching. If the doctor forgets to wash his hands, the nurse says, “Excuse me, Doctor McCoy, did you remember to wash your hands?” Second, tell the doctor to accept the nurse’s reminder—to swallow hard and say, “I know I’m not perfect. I’ll do it right.” Third, have ICUs stock carts with everything needed for central lines.

It worked. Central-line infections at Johns Hopkins fell from 11 to about zero per thousand patient-days. This probably prevented 43 infections and 8 deaths and saved $2 million. In medical research, reducing a problem by 10% is ordinarily a triumph. Pronovost almost eliminated central-line infections. But would it work in other kinds of hospitals? Pronovost enlisted the Michigan Hospital Association in the Keystone Project. They tried the checklist in hospitals big and small, rich and poor. It worked again and probably saved 1,900 lives.

Then somebody complained to OHRP that Keystone was human-subject research conducted without informed consent. OHRP sent a harsh letter ordering Pronovost and the MHA to stop collecting data. OHRP did not say they had to stop trying to reduce infections with checklists; hospitals could use checklists to improve quality. But tracking and reporting the data was research and required the patients’, doctors’, and nurses’ consent. And what research risks did OHRP identify? Ivor Pritchard, OHRP’s Acting Director, argued that

“the quality of care could go down,” and that an IRB review makes sure such risks are minimized. For instance, in the case of Pronovost’s study, using the checklist could slow down care, or having nurses challenge physicians who were not following the checklist could stir animosity that interferes with care. “That’s not likely, but it’s possible,” he said.

Basically, experimenting is okay, as long as you don't collect any data to find out whether it worked. An obvious example of stifling useful research. Another one:

In adult respiratory distress syndrome (ARDS), lungs fail. Patients not ventilated die, as do a third to a half of those who are. Those who survive do well, so ventilating properly means life or death.

Respirators have multiple settings (for frequency of breathing, depth of breathing, oxygen percentage, and more). The optimal combination depends on factors like the patient’s age, sex, size, and sickness. More breathing might seem better but isn’t, since excessive ventilation can tax bodies without additional benefit. Respirator settings also affect fluid balance. Too much fluid floods the lungs and the patient drowns; too little means inadequate fluid for circulatory functions, so blood pressure drops, then disappears.

In 1999, a National Heart, Lung, and Blood Institute study was stopped early when lower ventilator settings led to about 25% fewer deaths. But that study did not show how low settings should be or how patients’ fluid status should be handled. So the NHLBI got eminent ARDS specialists to conduct a multisite randomized trial of ventilator settings and fluid management.

In November 2001, two pulmonologists and two statisticians at the NIH Clinical Center sent OHRP a letter criticizing the study design. Pressed by OHRP, the NHLBI suspended enrollment in July 2002: the federal institute with expertise in lung disease bowed to an agency with no such expertise. NHLBI convened a panel approved by OHRP. It found the study well-designed and vital. OHRP announced its "serious unresolved concerns" and demanded that the trials remain suspended. Meanwhile, clinicians had to struggle.

Eight months later, OHRP loosed its hold, without comment on the costs, misery, and death it had caused. Rather, it berated IRBs for approving the study without adequately evaluating its methodology, risks and benefits, and consent practices. It did not explain how an IRB could do better when OHRP and NHLBI had bitterly disagreed.

And the ridiculous:

Helene Cummins, a Canadian sociologist, knew that many farmers did not want their children to be farmers because the life was hard and the income poor. She wondered about "the meaning of farm life for farm children." She wanted to interview seven- to twelve-year-olds about their parents' farms, their importance to them, pleasant and unpleasant experiences, their use of farm machinery, whether they wanted to be farmers, and so on.

Cummins’ REB [same as IRB] first told her she needed consent from both parents. She eventually dissuaded them. They then wanted a neutral party at her interviews. A “family/child therapist” told the REB that “there would be an inability of young children to reply to some of the questions in a meaningful way,” that it was unlikely that children would be able to avoid answering a question, and that the neutral party was needed to ensure [again] an ethical level of comfort for the child, and to act as a witness.” Cummins had no money for an observer, thought one might discomfit the children, and worried about the observers’ commitment to confidentiality. Nor could she find any basis for requiring an observer in regulations or practice. She gathered evidence and arguments and sought review by an institutional Appeal REB, which took a year. The Appeal REB eventually reversed the observer requirement.

Farm families were “overwhelmingly positive.” Many children were eager and excited; siblings asked to be interviewed too. Children showed Cummins “some of their favorite places on the farm. I toured barns, petted cows, walked to ponds, sat on porches, and watched the children play with newborn kittens.” Cummins concluded that perhaps “a humble researcher who respects the kids who host her as smart, sensible, and desirous of a good life” will treat them ethically.

There are some links to the growing snowflake craze, namely that IRBs are tasked with protecting so-called vulnerable groups. But who exactly counts as vulnerable? Well, because IRBs want more power, they continuously expand the category to include pretty much everybody. Extra rules apply when dealing with vulnerable groups, so this is basically a power grab that extends the extra rules to just about every case.

Regulationists' most common questions about vulnerability are about expanding IRB authority, "with the answer usually being 'yes.'" The regulations already say that subjects "likely to be vulnerable to coercion or undue influence, such as children, prisoners, pregnant women, mentally disabled persons, or economically or educationally disadvantaged persons," require "additional safeguards" to protect their "rights and welfare." IRBs can broaden their authority in two ways. First, "additional safeguards" and "rights and welfare" are undefined. Second, the list of vulnerable groups is open-ended and its criteria invitingly unspecified.

Who might not be vulnerable? Children are a quarter of the population. Most women become pregnant. Millions of people are mentally ill or disabled. “Economically and educationally disadvantaged” may comprise the half of the population below the mean, the three quarters of the adults who did not complete college, the quarter that is functionally illiterate, the other quarter that struggles with reading, or the huge majority who manage numbers badly. And it is easily argued, for example, that the sick and the dying are vulnerable.

Basically, you can expect to be persuaded somewhat towards libertarianism by reading this book. The IRB system is a prime example of inept, slow, counterproductive, expensive regulation slowing down progress for everybody.
