The Ethical Structure behind
Human Experimentation




Curtis Jantzi
November 25, 1996
Biology Senior Seminar





Outline


I. Introduction

II. History
    A. Examples of unethical human research cases
    B. Public's fears concerning these cases
    C. Government's response

III. Formation of Institutional Review Boards (IRBs)
    A. Reasons for developing this specific system
    B. Implementation

IV. Conceptual problems encountered by IRBs
    A. Differentiating between experiment and treatment in medical studies
        1. Defined by courts
        2. Defined by insurance agencies
        3. Defined by the government
    B. Obtaining consent from potential human research subjects

V. Track record of IRBs
    A. Problems concerning the structure of IRBs
    B. Potential solutions to these problems
    C. Formation of a new ethics board in the US

VI. Conclusion



Introduction

The history of medical research in the twentieth century provides abundant evidence of how easy it is to exploit individuals, especially the sick, the weak, and the vulnerable, when the only moral guide for science is a naive utilitarian dedication to the greatest good for the greatest number. Locally administered institutional review boards were thought to be the answer to the need for ethical safeguards to protect the human guinea pig. However, with problems surrounding informed consent, the differentiation between experimentation and treatment, and new advances within medicine, institutional review boards were found to be inadequate for the job. This led to the establishment of the National Bioethics Advisory Commission by President Bill Clinton in the hope of setting clear ethical standards for human research.


History

Examples of unethical human research cases

The dark history of human experimentation began with the failure to distinguish clearly between experimentation and treatment. The larger public began to notice experimenters' ethical neglect of their subjects in the early 1960s, and those charged with administering research funding took note of the public furor generated by the exposure of gross abuses in medical research. These included the uncontrolled promotional distribution of thalidomide throughout the United States, labeled as an experimental drug; the administration of cancer cells to senile and debilitated patients at the Brooklyn Jewish Chronic Disease Hospital; and the uncontrolled distribution of LSD to children at Harvard Medical Center by Professors Alpert and Leary. Most important was Henry Beecher's 1966 article in the New England Journal of Medicine, detailing 22 protocols of dubious ethicality and declaring that the roster had been winnowed down from a longer list culled more or less from periodicals crossing his desk (Edgar 495).


Public's fears concerning these cases

The public was very sensitive to these experiments because the U.S. government had imprinted the crimes committed by the Nazi doctors during the war on the public's mind. When people became aware that their own government was capable of the same devious, unethical experimentation, two fears arose. The first was fear of the frightening power of some political ideologies to demand that no private interest impede the accomplishment of the public good, and the second was the acute fear that people must adapt to whatever science produces, and that science is ultimately beyond social control (Edgar 496). The U.S. government therefore shifted its message to emphasize that the Nuremberg trials had taught that there must be limits to government power.

If Nuremberg was one critical underpinning of public attitudes toward human experimentation, the second was the social awareness that new medical breakthroughs affected not simply the individual patient but human life more generally. Given the dimensions of the potential transformation, the innovations had to be reviewed and authorized by someone other than the particular investigators. The scientific community was already leaning this way. The most noteworthy example of the time was the Seattle doctors' move to establish a lay kidney dialysis committee for the purpose of deciding who received the life-saving benefits. The tipping point toward developing organized committees to review research was crossed when physicists watched the mushroom cloud and realized that they had altered the course of history without securing a societal consensus about the wisdom of doing so.


Government's response

Federal regulations grow out of and express the social values and concerns of the nation's people. With each new example of researchers' neglect of human rights brought to the attention of the public, the government would have been deaf not to hear and react to the outcry of its people. The people demanded that ethical principles be joined with political ideals, aesthetic standards, and other norms to inform administrative decisions (Williams 169).

The government responded to the public's distress by forming the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (the National Commission) in 1974. The regulations that followed from its work were designed to protect experimental subjects' autonomy by requiring voluntary, informed consent; to protect subject and public welfare; to protect privacy and maintain confidentiality of data; and to ensure equitability in the selection of subjects (Williams 172). It accomplished this by transferring a great deal of authority from the central bureaucracy that funded research to the newly created Institutional Review Boards (IRBs). The National Commission set out the following principles to guide IRB decision making: decisions should be made in an informed manner, with ample time for deliberation and reflection, and under calm circumstances; decision makers should be willing to make their conclusions and reasons public; collective decision making should be used to reduce bias and error and increase the chance of rational decisions; and time should be allowed for circumstances and opinions to change (Williams 173). After creating the IRBs, the National Commission was no longer needed and was disbanded.


Formation of Institutional Review Boards (IRBs)

Reasons for developing this specific system

The National Commission decided to create local review boards primarily because the research community was already ahead of public demand, regulating itself before others did so; this self-regulation provided a guideline to begin with and to improve upon. The National Commission's regulators thought, first, that by making the review boards local, the penalties of regulation would be minimized. Second, regulators presumed that IRBs would almost always operate within a university teaching hospital, where a shared commitment to the ideal of good science would far outweigh any tendency for persons to trade favors or to elevate concerns for the financial viability of the institution above their loyalty to the integrity of science or the well-being of subjects. Third, the designers of the IRB system expected that the subjects themselves were likely to be suspicious about human experimentation, adopting a cautious, self-protective stance against involvement (Edgar 499). These factors formed the premise on which the IRBs were founded.

Implementation

Once the IRBs were formed, they were left to their own devices. They encountered two major problems: differentiating between experimentation and treatment, and determining the correct way to obtain consent from clinical research patients. These initial problems are still being worked out, but the IRBs have made great improvements in these areas of debate within both the research and medical communities.


Conceptual Problems Encountered by IRBs

Differentiating between experiment and treatment in medical studies

Historically, medical treatments have long been distinguished from experiments. The term experiment embodies societal ambivalence about what is new in medicine: our society at once embraces innovation and recoils from it. New AIDS drugs, new cancer treatments, prenatal surgery, gene therapy, and other examples repeatedly raise questions about when and how new technologies should be made available, who should have access to them, and who should pay for them. Most often, those questions are cast in terms of discerning whether what is new is experimental or therapeutic.

The big question, then, is: what marks a medical technology as experimental rather than standard procedure? Nancy M. P. King suggests that there are three principal contexts in which the designation experimental is currently applied to medical technology and ultimately controlled: medical malpractice, reimbursement, and research regulation. In medical malpractice, distinguishing between experimentation and therapy determines whether injured patients have been treated appropriately and provided with sufficient information about their treatment. In the context of insurance reimbursement, the choice of label determines whether an insurance plan will pay for a procedure. The federal regulations governing research with human subjects contain a definition that determines whether federal oversight is necessary (7).

Before the twentieth century, the practice of medicine was not organized into the powerful, self-regulating, scientific profession recognized today. It was not until the profession's power and reputation had grown considerably that restraints on medical progress came to be a concern of the people and, eventually, the courts. Cases raising the question of experimentation in the modern era generally apply the standards of medical malpractice and of informed consent. Upon review, the only thing that is clear from these cases about the use of the term experimental in the context of malpractice is that it is linked to avoidable injury to patients.

Insurers and other third-party payers have attempted to define experimental in the reimbursement context with a different object in mind: the limitation of their payment obligations. Treatment ordered for a patient by a physician is almost always paid for by someone else: a Blue Cross/Blue Shield plan, an HMO, Medicare, or another plan. The primary coverage criterion for these plans is often whether the treatment ordered is reasonable and necessary. Insurers are increasingly attempting to curb their responsibility to pay for extremely costly treatments that are unlikely to succeed by seeking to apply more scientific definitions and standards of acceptability (King 8). This system is a good check; however, desperate patients argue that technology assessment takes too long and that it is not fair to have to wait in the face of a life-threatening illness.

The third setting in which the attempt is made to distinguish experiment from treatment is governmental regulation. King says that the key to distinguishing between research and treatment in this context has been held to be the physician's intent regarding the activities in question (9). The National Commission's Belmont Report defined medical practice as "interventions . . . designed solely to enhance the well-being of an individual patient . . . that have a reasonable expectation of success" (9). Research was then defined as an activity "designed to test a hypothesis, permit conclusions to be drawn, and thereby to develop or contribute to generalizable knowledge" (9).

Given the same example, decision makers in the three settings might well draw different conclusions about where the line between experimentation and treatment should be drawn. This could present some serious problems. However, this is the problem that the National Commission tried to eradicate with the introduction of IRBs. IRBs are the main decision maker in most instances, although a few cases are ultimately decided within one of the three contexts. Even then, the IRBs' influence is usually acknowledged and considered.


Obtaining consent from potential human research subjects

Another major problem that is still being debated by IRBs is the correct way to obtain a patient's consent in a clinical research project. Clinical research is necessary to establish the safety and efficacy of a therapy. Clinical testing of a new drug, operation, therapy, or any other new advance in medical technology is required by the National Institutes of Health (NIH) before a product license for that treatment can be given. However, while the treatment is being tested for safety and efficacy, patients taking part are put at risk of unknown side effects and may also be randomized to receive either the unproven drug or a placebo, a design known as the randomized controlled trial (RCT). RCTs are extremely important in medical research since they provide the best method of obtaining unbiased results.

The question is: how do we protect the patient in an RCT? By far the strongest protection for the patient is her or his consent. Consent is an autonomous authorization by one person to permit another person to carry out an agreed procedure which affects the subject (Hewlett 233). By obtaining consent, researchers respect patients' wishes, enable them to be self-governing, and uphold the principle of respect for persons. Four elements must be present for consent to be morally acceptable: competence, information, understanding of that information, and voluntariness.

All four of these elements must be satisfied before consent can be obtained from the patient. A person is considered competent if and only if that person can make reasonable decisions based on rational reasons (Hewlett 233). The information presented to the patient must be sufficient and unbiased, such that a substantially autonomous decision can be made. It is the responsibility of health care professionals to ensure that the patient understands the proposed research. For a patient's consent to be adequately voluntary, two factors must be fulfilled: the absence of controlling influences and the ability to choose one of at least two options.

The first three elements of consent have been widely researched and debated over the past fifteen years, but little work has been done on the fourth element, voluntariness. This is usually the most seriously threatened of the four in clinical research. Unlike healthy volunteers, many patients invited to participate in clinical research have an illness. The fear of IRBs is that the experience of illness (which may at times include pain, disability, and fear of deterioration or death) and the accompanying psychological response (possibly depression, mourning, denial, anger, and anxiety) may well reduce autonomy. IRBs are also concerned that patients might commit to a treatment, despite being told that it is unlicensed and being tested for safety, because they believe that the doctor would only suggest a treatment that was in their best interest.

IRBs are now trying to reduce such influences on consent to clinical research. Areas in which they are working include the doctor-patient relationship (for example, by using a patient advocate), the selection of patients who are able to understand the research, the education of researchers in the problem areas of consent, and the use of easily understood information sheets describing the research in detail.

Two recently publicized examples show the ethical problems that arise when even one of these elements is not satisfied in obtaining consent. The first occurred in the 1940s and 1950s, when researchers subjected 131 ill-informed prison inmates to nuclear radiation in order to study its effect on human beings (Watson 14). In this case, the inmates (the patients) were not told of the risks. It is also conjectured that consenting shortened the inmates' sentences, which represents an outside influence on the decision. In another study, involving schizophrenic patients, the subjects were not told the full extent of the risks of participation (Willwerth 62). Specifically, the patients were not told that some of them might receive placebos instead of the drugs that regulated their condition. Those patients who received placebos usually regressed to their previous conditions.


Track Record of IRBs

Problems concerning the structure of IRBs

With an understanding of the initial problems that the IRBs have had to deal with concerning consent and the differentiation between experiment and therapy, it can be seen that, while not perfect, the IRBs have made headway on these and many other issues. Other questions remain, however: whether the IRBs have lived up to the ideal model envisioned by the National Commission, and whether they are the best model for ethical review boards in the future. As with most organizations, IRBs have not lived up to their ideal conceptual model, and they are criticized as being inadequate for the problems that lie ahead.

Upon review of the past record of IRBs, many scientists and bioethicists find them inadequate to deal with future advances in the medical field. The largest problem is that new medical technologies continue to move society in totally new directions with no systematic review of their desirability, and each IRB might decide the same case differently. Also, the proportion of research that is industry funded, rather than government supported, has increased dramatically. This diminishes the IRBs' power, since the leverage behind the system, control of the funding, has been taken away. In fact, the academic center that served as the paradigm for the IRB is likely in the future to lose what was once a near monopoly over research, as huge multistate and international trials take over. With research becoming more national, ethics review at the local level makes little sense (Edgar 501).


Potential solutions to these problems

If the old paradigm no longer holds, what revisions should be made in public policy? Most scientists and bioethicists agree that the IRB system has worked reasonably well and that to dismantle it would be a mistake. Where, then, do IRBs go from here? One current suggestion is the establishment of a super committee or committees, charged at the minimum with a monitoring function or at the maximum with the right to veto research deemed unacceptable. Such a super committee would act as a kind of Supreme Court within the realm of bioethical review boards.

Edgar and Rothman suggest that there are three principal and interrelated issues that must be addressed in the design of such a committee. The first is whether to constitute one committee, endowing it with visibility and prestige because of its singularity, or several committees, distributing responsibility among members selected for their particular expertise. The second is how expansive a committee's jurisdiction should be: whether it will be limited to reviewing funded grant proposals and issuing advisory opinions, leaving the ultimate decisions to local IRBs and researchers, or whether its approval will be required before research is undertaken. The third is deciding who should appoint such a committee and what kind of staff it should have (503).


Formation of a new ethics board in the US

Within the last year, at a White House ceremony, President Clinton described the newly implemented National Bioethics Advisory Commission (NBAC). The commission's formation was mainly a response to the newly acquired knowledge that the United States had sponsored radiation experiments during the Cold War. President Clinton set up the commission with the goal of setting clear ethical standards for human research so that such atrocities would never happen again (Centofanti 245). The influence of this new commission is uncertain, since it is still reviewing the many case studies compiled by the IRBs over the past fifteen years.


Conclusion

IRBs have set a good standard for ethical research to build upon. However, with the ever-advancing fields of research in neurobiology, genetic therapy, and reproduction, it is time to take the superintendence of human research to a different and more national level. Whether this change can be accomplished by President Clinton's National Bioethics Advisory Commission within the current political climate is debatable. The necessity for such a shift is not.



Bibliography



Budiansky, Stephen. "Blinded by the cold-war light." U.S. News & World Report. v116, n6. January 10, 1994. pp. 6-8.

Centofanti, Marjorie. "Controversy sparks panel." Science News. v148, n16. October 14, 1995. p. 245.

Corbett, Fiona, Julia Oldham, and Richard Lilford. "Offering patients entry in clinical trials: preliminary study of the views of prospective participants." Journal of Medical Ethics. v22, n4. August 1996. pp. 227-232.

Edgar, Harold, and David J. Rothman. "The institutional review board and beyond: future challenges to the ethics of human experimentation." The Milbank Quarterly. v73, n4. Winter 1995. pp. 489-507.

Fethe, Charles. "Beyond voluntary consent: Hans Jonas on the moral requirements of human experimentation." Journal of Medical Ethics. v19, n2. June 1993. pp. 99-104.

Hewlett, Sarah. "Consent to clinical research - adequately voluntary or substantially influenced?" Journal of Medical Ethics. v22, n4. August 1996. pp. 232-238.

Jackson, Jennifer. "Unproven treatment in childhood oncology - how far should paediatricians co-operate: commentary." Journal of Medical Ethics. v20, n2. June 1994. pp. 77-80.

King, Nancy M. P. "Experimental treatment: oxymoron or aspiration?" The Hastings Center Report. v25, n5. July-August 1995. pp. 6-16.

Lindsay, Cecile. "Corporality, Ethics, Experimentation: Lyotard in the Eighties." Philosophy Today. v36. Winter 1992. pp. 389-401.

Marshall, Eliot. "Panel faults research consent process." Science. v270, n5233. October 6, 1995. p. 25.

Marwick, Charles. "Ethicist faults human research subject protection." JAMA. v271, n16. April 27, 1994. pp. 1228-1229.

Nowak, Rachel. "Staging ethical AIDS trials in Africa." Science. v269, n5229. September 8, 1995. pp. 1332-1336.

Raloff, Janet. "Tamoxifen turmoil: new issues emerge as healthy women volunteer to take a potent drug." Science News. v146, n17. October 22, 1994. pp. 268-270.

Sea, Geoffrey. "The radiation story no one would touch." Columbia Journalism Review. v32, n6. March-April 1994. pp. 37-41.

Skolnick, Andrew A. "Advisory committee report recommends that US make amends for human radiation experiments." JAMA. v274, n12. September 27, 1995. p. 933.

Stone, Richard. "Eyeing a project's ethics." Science. v259, n5103. March 26, 1993. p. 1820.

Watson, Russell. "America's nuclear secrets." Newsweek. v122, n26. December 27, 1993. pp. 14-19.

Williams, Peter. "Ethical principles in federal regulations: the case of children and research risks." The Journal of Medicine and Philosophy. v21, n2. April 1996. pp. 169-214.

Willwerth, James. "Madness in fine print: using mentally ill subjects for psychiatric experiments too often means extracting and relying on their ill-informed consent." Science News. v144, n19. November 7, 1994. pp. 62-64.

Yeoh, C., E. Kiely, and H. Davies. "Unproven treatment in childhood oncology - how far should paediatricians co-operate." Journal of Medical Ethics. v20, n2. June 1994. pp. 75-77.