Twenty years ago, during my anesthesia training, I took care of a young woman scheduled for a laparotomy who also happened to be a research subject in my professor’s study on post-operative pain. The study compared the pain scores of patients receiving traditional intravenous narcotics with those receiving spinal narcotics—a new therapy at the time. My professor, an ambitious young go-getter, eager to publish his first journal article, told me to inject a dose of spinal narcotic before inducing anesthesia, and then, for the rest of the case, give no more than 1 cc of fentanyl (a short-acting narcotic) through the intravenous line, a dose so low in the setting of intra-abdominal surgery as to make it a placebo. The patient awoke from anesthesia screaming in pain, prompting me to violate the study’s protocol by giving her extra intravenous narcotic. When I arrived in the recovery room, my professor, who had heard what I had done, grew furious with me for having “contaminated” his study. Why had I not at least tried a placebo first? he demanded to know. Still, I had no regrets. A human being was in pain; therapy was available; the march of science and my professor’s career would have to wait another day.
Reading Jeremy Howick’s (2009) article made me think that Dr. Howick would have taken my side in that conflict—but for different reasons, which makes me nervous. Dr. Howick encourages researchers to use recognized treatments rather than placebos in their control groups, having concluded that doing so will not ruin their experiments. In three areas of major concern—assay sensitivity, absolute effect, and sample size—placebo-controlled trials (PCTs) are often no better than, and sometimes worse than, active control trials (ACTs), he argues. With these issues resolved, the “tension between clinical ethics and research ethics over the use of PCTs dissolves” (34).
But what if PCTs were better than ACTs? One might infer from Dr. Howick’s (2009) target article that the integrity of my professor’s study vied in importance with my patient’s desire for pain relief, that a conflict existed between the two, making the choice between placebo therapy and active therapy a hard one, and my decision to give extra narcotic on that day a questionable one.
Dr. Howick’s (2009) division between the “moral duties” (34) of the clinician and those of the researcher is the source of this conflict. His division implies that the claims of the patient and the claims of the researcher are different, but of equal rank—hence, the tension. The moral duty of the clinician is to provide the patient with the best available treatment; the moral duty of the researcher, according to Dr. Howick, requires her to “consider PCTs even when an established treatment is available” (34).
But the tension is specious. The moral duty of the clinician is to treat the patient with the best available therapy. The moral duty of the researcher is the same! The researcher’s duty to her study is merely a tactical one, not a moral one.
Awarding the researcher a competing moral duty of equal rank brings to mind a classic science fiction horror. Imagine a human being who feels it her moral duty to care for someone; she is a moral agent. Now imagine a computer that collates this human being’s experiences so as to establish a scientific trend. The computer is a tactical device—very useful to society as a whole, but with no moral purpose other than what a thinking, feeling human being gives it. Suddenly, the computer invents its own moral imperative, one that conflicts with that of the moral agent it serves. The computer decides that collating data is just as important as caring for individuals. A researcher’s “moral duties” (Howick 2009, 34) independent of a clinician’s moral duties, recall to mind the worst science fiction nightmare: the machine that thinks for itself.
Dr. Howick (2009) parses these moral duties so as to make the ideal researcher seem just as concerned with a patient’s well-being as the ideal clinician is. For example, he cites poor quality research as a reason why a researcher might decide against using active therapy. Poor quality research, he argues, exposes subjects to unnecessary risks and burdens. But this line of thought makes no sense. Active therapy would only be chosen on the assumption that it was active, that it benefited patients, and so would present less risk to a patient—certainly less risk than a placebo would. True, the placebo might also turn out to have real benefits, but going forward, the researcher would not know this. No, the major risk in using active therapy is not to the individual subject, but to humanity as a whole, because poor research deprives human beings of important scientific knowledge. Unless the researcher invokes the utilitarian argument that individual lives must be sacrificed for the greater good, and that scientific knowledge gained from a PCT is worth a few suffering individuals—a dangerous slippery slope—the researcher substituting placebo therapy for reliable active therapy has no moral leg to stand on. On the contrary, she is merely using the pretence of morality to give her actions cover.
I am not saying that clinicians are more moral than researchers. On the contrary, many clinicians also flirt with subterfuge in how they use placebos. Well into the 20th century, most doctors used placebos sparingly. Placebo was a pejorative term—a form of low trickery—and physicians rarely admitted to using them. Today, many physicians use placebos in their practices, and do so knowingly, justifying their actions by a questionable science called psychoneuroimmunology, which argues that happiness bolsters a patient’s immune system, thereby making it easier for the patient to fight off disease (Dworkin 2006). Placebos make patients happy; happy patients recover faster; therefore one need feel little guilt about using them.
The science of psychoneuroimmunology gives clinicians cover to use placebos in the same way that impugning active therapy gives researchers cover to use placebos. Both are deceptions. When I pointed out to a physician that alternative medicine’s value remained unconfirmed and that psychoneuroimmunology remained just a theory, the physician nervously replied: “All I know is that it works.” Not exactly a strong moral position.
All this is perfectly intelligible to anyone. Researchers would prefer to use PCTs if they could. Fortunately for us, Dr. Howick (2009) has come up with evidence to stay their hand. But if he had not—if the moral duties of the clinician and the researcher really did conflict, as he says—then some ethicist would inevitably invent explanations as to why it is a very good thing to side with the researcher, just as some clinicians have invented explanations as to why it is a good thing to use placebos. This would lead to all sorts of complex, cunning, and subtle considerations about relative risk, the good of humanity, and so forth. The more these considerations are propagated, the further they obscure the matter and conceal what should be done. Dr. Howick’s target article does nothing to stop this trend.
Indeed, this is the great danger in bioethics. Ethical questions become wrapped in such a thick layer of complex argument and full of so many subtle twists of meaning and words that all discussions of such questions go round and round in circles, catching hold of nothing, like a disconnected car wheel. They lead nowhere except to achieve the one purpose for which they are instigated: to conceal from people the immorality of doing what they very much want to do.
Dr. Howick (2009) hints at this, noting how the Helsinki Declaration allows for PCTs even when proven therapy is available if there are “compelling reasons” (34). As Dr. Howick observes, the Declaration’s authors refuse to elaborate on what those compelling reasons might be. Their inability to answer this question straightforwardly is ominous. The deliberations of people who know clearly what is right and what is wrong are typically simple, uncomplicated, and honest; the mental activity of those who are uncertain about such matters, or who do not want to admit in public that they prefer to do wrong, is often subtle, extremely complex, and dishonest. The phrase “compelling reasons” belongs in the second category.
Twenty years ago, my professor piled up in his mind all sorts of explanations for why it was a good thing to withhold pain medicine from his patient. These explanations deflected his attention from what is important and essential, making it possible for him to stagnate in a lie without noticing it, and to think it right to let his patient writhe in pain. His rationalizations, while no doubt complicated and skillful, seemed on the surface to serve science, but, at bottom, served only to let him do what he wanted to do. I doubt Dr. Howick’s (2009) target article would have helped him see the light, failing as it does to bolster our capacity to distinguish between good and evil, falsehood and truth.
REFERENCES
Dworkin, R. W. 2006. Artificial Happiness. New York, NY: Basic Books.
Howick, J. 2009. Questioning the methodologic superiority of ‘placebo’ over ‘active’ controlled trials. American Journal of Bioethics 9(9): 34–48.