This review addresses the concept of scientific ethics and the importance of maintaining scientific integrity in research. It covers aspects of scientific misconduct, the role of stakeholders in promoting ethical science, and self-regulation mechanisms in science, and discusses the definition of scientific misconduct, its impact, and the need for effective oversight.
Scientific ethics, defined as the standards of conduct for scientists in their professional endeavors, covers a broad swath of activities: from issues handled by White House advisory groups, such as the use of human subjects in research or the appropriateness of patenting genetically modified organisms, to peer review, to one-on-one mentoring in individual laboratories. Ethical issues in the news have recently ranged from whether to use fetal tissue in research to the appropriate role of private-sector sponsorship of academic research. Many professional associations, such as the American Association for the Advancement of Science, Sigma Xi, and the American Physical Society, have established codes of ethical conduct for scientists. However, relatively little of the literature on scientific ethics reflects an organizational effectiveness and management perspective – that is, describing how ethical behavior is encouraged and misconduct is sanctioned by an organization. This review focuses on the portion of the literature that addresses organizational structures and processes related to scientific ethics.
Ethics is part of the “warrant” for science; that is, science can be excellent only if its practitioners conduct their research in accordance with the accepted practices in their fields. For all scientific fields, ethical behavior includes adherence to the principles and practices of valid experimentation (the scientific method, accurate and sufficient sampling of data, accurate record keeping and reporting, etc.), education and mentoring, unbiased peer and expert review, and communication of results to the scientific community. Thus, over and above the results themselves, excellent science has to pass the ethics test.
Science can be said to be ethical in two different ways:

♦ Ethics of the topics and findings (morality): Ethicists ponder the question of whether science is good or bad, especially in specific arenas of science such as biomedical and other research where human or animal subjects are involved. Also, groups with strong beliefs raise ethical questions when possible uses of the findings or the process for doing the science are in opposition to their tenets. Scientists themselves may raise moral or ethical issues, understanding the potential for harm related to the research process or outcome.

♦ Ethics of method and process (integrity): The process of doing and reporting science has a strong ethical component. It addresses the nature of the design, the experimental procedures, and the reporting of the research effort. The assumption of scientific integrity in carrying out the processes of science is basic to trust among scientists, between society and scientists, and to the credibility of scientific results. The research record is important because it is by examining the inputs to a piece of scientific research that scientists with similar expertise can judge the competence of the research design and the credibility of the findings.
[1] Related chapters include: Science Policy; Performance Assessment; Organizational Culture; Leadership.
The topic of this review is scientific ethics and the management function of ensuring that scientific research is being conducted carefully and honestly.[2] Scientific integrity, ethical science, and high-value science are entwined, particularly for government-funded science. The literature focuses on the definition of scientific misconduct, the factors contributing to misconduct or enhancing scientific integrity, and the ways the science funding and research community can create or sustain environments in which scientific integrity can thrive.
Specific aspects of scientific integrity are determined largely by the scientific community, and vary over time. Persons entering science must learn what is appropriate behavior and what is not. In many settings neophyte scientists learn about practices and behaviors appropriate to their disciplines and research settings by observing more senior scientists on a day-to-day basis. In attempting to achieve high levels of scientific integrity and responsible science, strategies that foster learning and work settings that reinforce desired values and behaviors are powerful complements to strategies that focus on detecting and punishing misconduct. Figure 1 outlines the sources of learning and reinforcement for scientific integrity on the part of individual scientists. These sources provide opportunities for science organizations, as well as those that train scientists, to strengthen the elements of learning and work settings that foster and reinforce scientific integrity. This provides a basis for designing a balanced strategy for promoting scientific integrity.
[2] This literature is generally quite distinct and focused, with few linkages to the broader organizational effectiveness literature. Related topics in the organizational effectiveness and management literature are business ethics and control functions. Discussion of oversight as an activity is located primarily in the literature on public policy and public administration.
♦ Research project managers, who both conduct science and oversee the work of other scientists

♦ Institutional research program officials, who employ the scientists and therefore have direct line responsibility for ensuring compliance with regulatory and contractual requirements and a need to maintain a volume of research that supports those employees and the institutional infrastructure

♦ Officials in federal and other research funding agencies, who commission the research and have responsibility for ensuring that the funds are used effectively and provide benefit.
These stakeholders have both competing and complementary interests. Effective oversight of science requires awareness of the dynamic created by these various interests and the roles they can play in promoting scientific integrity and a commitment to high standards of scientific conduct.
Self-regulation plays a major role in identifying and controlling errors and misconduct in science. Professions have traditionally been granted relative autonomy to oversee and correct the behavior of their members; that is, to self-regulate. Science has been of particular interest to scholars of the history and sociology of professions (e.g., Merton 1973; Ziman 1984). Societies are willing to grant autonomy only so long as the members of the profession demonstrate themselves to be trustworthy in regulating themselves, rather than merely furthering the self-interests of group members. According to Guston (2000), the federal government, at least prior to the 1970s, generally trusted the science community to have integrity.
Self-regulation is also seen as possible because the social nature of science creates a self-correcting process that maintains scientific integrity. If science is seen as cumulative and consensual, most typically moving forward in small increments that build progressively on earlier work, other scientists can influence what becomes accepted as valid knowledge and incorporated into the foundation of subsequent work (Bauer 1992; Grinnell 1992). The process assumes that scientific peers will examine the evidence presented by other scientists, and, if they find it not credible, will say so and make the corrected information available to the scientific community as part of the normal course of scientific inquiry.
An important feature of self-regulation is the ability of other investigators to adequately judge the credibility of reported findings and to influence their acceptance. This feature is important since science comprises a multitude of sub-specialties, each with its own questions, evidence, and settings for establishing evidence, so that no single method can be used to judge all types of science – although the notion of a scientific method as an ideal hovers in the background (Bauer 1992:19-41). Grinnell (1992:47) redefines the notion of scientific method as more of a “scientific attitude” that lies in the thought collective of scientists as a group. The scientific attitude means that “…most scientists do science in ways that they assume will be believable (e.g., publishable, fundable) by other investigators. Even the most basic features of what counts for scientific evidence can change depending upon what kinds of evidence investigators think will be convincing to each other.”
The literature on scientific oversight discusses three principal mechanisms that provide this self-correction and serve as the basis of most self-regulation in science (Bell 1992:xiv):
♦ Peer review (review by specialists of individual proposals submitted for funding, research designs submitted for approval, or research results)

♦ Refereed publication (review by qualified reviewers of publications, who recommend revision, rejection, or publication of an article)

♦ Replication of experiments (i.e., other scientists attempting to duplicate the same experiment to see if the same outcome can be achieved).
As systems for dealing with scientific misconduct have become more formalized, increasing attention has been given to the definition of misconduct. A brief review of the resulting categories of possible errors scientists can make and unethical behaviors in which they can engage illustrates that many gray areas exist. This increases the difficulty of teaching scientists or overseers the difference between innocent mistakes, dubious professional behavior, and misconduct. As a result of this definitional effort, four categories of problematic behavior have emerged:

♦ Honest mistakes

♦ Unethical behavior

♦ Noncompliance with legal or contractual requirements

♦ Deliberate deceit (scientific misconduct).[3]
Sources of these behaviors vary from carelessness to deliberate attempts to mislead. Theoretically, many are correctable by self-regulation.
Scientists and their assistants, being only human, can make inadvertent mistakes of various kinds during design, calibration, logging, data entry, and so forth. Errors in interpretation might also fall into the category of honest mistakes. Honest errors and errors resulting from the sloppy execution of research can be corrected by the scientists themselves – if they discover their own mistakes – as well as by those who review or try to replicate the research. Since the stakes are high – mistakes can affect future funding and careers – scientists are likely to take pains to avoid mistakes.
Norms in the scientific community define acceptable and unacceptable practices. Teich and Frankel (1992:4) provide examples of behaviors that are not condoned but are in “gray areas”:

♦ Improprieties of authorship, such as duplicate publication of a single set of research results or fractional publication

♦ “Gift” or “honorary” authorship
[3] Similarly, the National Academy of Sciences (NAS, NAE and IOM 1992) broke misconduct into three broad categories: Professional Misconduct (e.g., bad mentoring, authorship disputes), General Misconduct (e.g., embezzlement, sexual harassment), and Research Misconduct.
Science Foundation; National Institutes of Health; National Aeronautics and Space Administration; and the Departments of Energy, Agriculture, and Defense.
The policy (65 FR 235) defines research misconduct as “…fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results.” The policy statement defines each of the terms and states that “research misconduct does not include honest error or honest differences of opinion.” The policy states that a finding of research misconduct requires that “there be a significant departure from accepted practices of the scientific community; and the misconduct be committed intentionally, or knowingly, or recklessly; and the allegation be proven by a preponderance of evidence.” It confirms that a researcher's home institution has principal responsibility to respond to allegations of research misconduct, while pointing out that federal agencies have ultimate oversight authority for federally funded research. It also provides safeguards for the subjects of allegations as well as for informants.
Previous definitions have for the most part incorporated fabrication, falsification, and plagiarism, but have differed with respect to various types of activities usually believed to be typical of some specific research funding program or research arena. Bird and Dustira (2000) provide a brief description of the concerns about the policy and its implementation that have been expressed by the stakeholders, including federal agencies, scientists, research institutions, and the scientific publishing community.
The major federal science-funding agencies have historically not kept statistics on scientific misconduct. A recent regulation now requires science-implementing organizations to report such cases to the agency providing the funding, yielding some data on allegations and confirmed cases of misconduct. In fiscal year 1996, nearly 170 scientists were under suspicion by the federal government for possible scientific misconduct, and at least 20 scientists were found to have committed it (Dooley and Kerch 2000). The National Institutes of Health (NIH) had close to 100 “active cases” of scientific misconduct and found 17 individuals to have committed scientific misconduct while using NIH research funds. The National Science Foundation (NSF) had approximately 70 active cases and found approximately six individuals guilty of scientific misconduct. Although such figures document the incidence of formally reported cases, they do little to reveal the prevalence of misconduct among scientists practicing in the mainstream of federally funded research, much of which will not be quickly, if ever, detected and reported.
An alternative approach to estimating the prevalence of scientific misconduct is to survey scientists themselves. In 1991 the American Association for the Advancement of Science (AAAS) conducted a survey of 1,500 scientists. A quarter of those who responded reported that they had witnessed faking, falsifying, or outright theft of research in the past decade (Marsa 1992:40).[4] Nevertheless, the perspective of the research community is that, given the large numbers of projects funded each year, the rate of scientific misconduct is low. This does not mean, however, that scientists, science-implementing organizations, and science-funding organizations need not be concerned about scientific misconduct.
In 1995, a Panel on Scientific Responsibility and the Conduct of Research stated that:
[4] Some concerns have been raised about this particular study’s methodology, yet it remains useful as the only broad-based survey that is focused on research misconduct.
…the number of confirmed cases of misconduct in science is low compared to the level of research activity in the United States. However, as with all forms of misconduct, underreporting may be significant; federal agencies have only recently imposed procedural and reporting requirements that may yield larger numbers of reported cases. Any misconduct comes at a price to scientists, their research institutions, and society. Thus every case of misconduct in science is serious and requires attention (NAS, NAE, and IOM 1995:9).
The seriousness of the problem has been debated. For example, in 1992, an article titled “Scientific Fraud” in the lay-oriented magazine Omni (Marsa 1992:38) began with the boldfaced lead: “As mounting scandals shake the public trust, researchers struggle to reconstruct science’s shattered reputation.” Three years later, an article in the journal Science and Engineering Ethics (Shore 1995:383) put a more measured spin on the situation: “In response to a series of allegations of scientific misconduct in the 1980’s, a number of scientific societies, national agencies, and academic institutions…devised guidelines to increase awareness of optimal scientific practices and to attempt to prevent as many episodes of misconduct as possible.”[5]
Despite the historical confidence in self-regulation, the literature of the early 1990s reflects growing concern, at least in Western societies, that scientific misconduct may be a greater problem than previously recognized and that scientific integrity and self-regulation mechanisms are not as robust as previously thought. This concern was reinforced by indications that public trust in science’s ability to regulate itself was eroding. At this point, the literature began to focus on how the social structure of science, the unfair advantages held by the scientific elite, and the intense competition for financial gain or prestige served to compromise self-regulation in science (e.g., Bell 1992; Chubin and Hackett 1990; Martin 1992). The principal self-regulating mechanisms upon which science traditionally relied (listed above) were judged to be inadequate in the face of these pressures.
The examination of alternatives to self-regulation that followed this growing concern provides a most useful discussion of the issues and approaches to scientific oversight. During the 1990s, major scientific collectives addressed ways to promote responsible research practice (e.g., NAS, NAE, and IOM 1992, 1993, 1995; Schwartz 1997). During the same period, sometimes at the insistence of Congress, the major research funding bureaucracies extended their capabilities and guidance for handling misconduct (e.g., Frankel 1993, 1995; Ryan 1996; Pascal 1999a, 1999b; ORI 1999). Scientific and professional associations also stepped forward to develop standards and foster the reconstruction of scientific integrity in the face of the increasing complexity and competing pressures on the scientific enterprise (e.g., AAAS and ORI 2000). As discussed below, this literature focuses on measures that promote “responsibility” in science or in the conduct of research.
As described earlier, scientists desire autonomy in the conduct of their work and claim that only they have the unique capabilities to evaluate the work of their colleagues. Academic and research
[5] This multi-disciplinary quarterly, launched in 1995, comprises a useful compendium of articles on the ethical issues and solutions that are emerging cross-nationally in both basic research and practical applications of science and engineering. See tables of contents and abstracts at http://www.opragen.co.uk/SEE/index.html.
approximately 95 percent of all investigations involving PHS extramural support are conducted by the institution that had the grant. However, the point is also made that the funding organizations need to be involved. Pascal, as Acting Director of the Office of Research Integrity, emphasized that the Department of Health and Human Services has a critical role in providing oversight of institutional investigations because over 50 percent of all misconduct findings result in debarment from federal funds for a given period, usually three years. “Imposition of this sanction requires assessment of the federal interest at stake and assertion of federal authority that is beyond the scope of any individual institution” (Pascal 1999a:195).
The central approach by the primary research funding agencies is to emphasize mechanisms for fostering scientific integrity in both their intramural and extramural programs. Agreements between federal agencies and funding recipients can explicitly state expectations for the implementation of such mechanisms, including those that foster and reinforce integrity. Awareness of the ethical problems scientists face and examples of how they can be managed can be incorporated into such approaches as scientist mentoring programs in the basic research programs and into the graduate education programs under the auspices of the funding agencies. Federal program managers should be familiar with these mechanisms and how they can be incorporated into their interactions with those receiving federal funds.
Federal agencies can examine the way in which the pressures of highly competitive science are manifested in the laboratories and other funding recipients, and consider ways of alleviating some of these pressures through adjustment of expectations and reward structures. Agencies can also make clear to their research laboratories and principal investigators where they stand with respect to the handling of allegations of misconduct, investigation, and sanctions, and take measures to support effective implementation of the mechanisms that provide self-correction in science, including active mentoring, a supportive work environment, peer review, refereed publication, and replication of experiments.
Federal agencies can encourage their institutions to develop and implement meaningful and interactive approaches for bringing ethical issues and expectations to the attention of scientists. The key word here is “interactive,” since scientists are more receptive to practical forums for meaningful discussion that are directly related to their work. Required training videos or classes are ineffective at best, especially since they are so often used to transmit compliance training.
Federal agencies that support scientific research could also support important research on the relationship of the elements of research environments to scientific integrity and on the way in which certain types of guidance and oversight enhance or inhibit creativity and innovation in research and development organizations.
65 FR 235. December 6, 2000. Office of Science and Technology Policy. Federal Policy on Research Misconduct. Federal Register.
American Association for the Advancement of Science (AAAS) and the U.S. Office of Research Integrity (ORI). 2000. The Role and Activities of Scientific Societies in Promoting Research Integrity. Report of a Conference, April 10, 2000, Washington, D.C. Available URL: http://www.aaas.org/spp/dspp/sfrl/projects/integrity.htm (as of September 30, 2000).
Bauer, Henry H. 1992. Scientific Literacy and the Myth of the Scientific Method. Urbana, IL: University of Illinois Press.
Bell, Robert. 1992. Impure Science: Fraud, Compromise, and Political Influence in Scientific Research. New York: John Wiley & Sons.
Bird, Stephanie J., and Alicia K. Dustira. 2000. New Common Federal Definition of Research Misconduct in the United States. Science and Engineering Ethics 6:123-130.
Bird, Stephanie J., and Alicia K. Dustira. 1999. Misconduct in Science: Controversy and Progress (Editorial in the Special Issue on Scientific Misconduct). Science and Engineering Ethics 5(2):131-137.
Chubin, Daryl E., and Edward J. Hackett. 1990. Peerless Science. Albany, NY: State University of New York Press.
Dooley, James J., and Helen K. Kerch. 2000. Evolving Research Misconduct Policies and Their Significance for Physical Scientists. Science and Engineering Ethics, Special issue on Scientific Misconduct: International Perspectives 6(1).
Frankel, Mark S. 1995. Research Integrity Commission Picks up the Pace. Professional Ethics Report VII(2). Published by the American Association for the Advancement of Science Scientific Freedom, Responsibility, and Law Program. Available URL: http://www.aaas.org/spp/dspp/sfrl/per/per1.htm (as of September 30, 2000).
Frankel, Mark S. 1993. Professional Societies and Responsible Research Conduct. In Responsible Science, Volume II: Ensuring the Integrity of the Research Process. NAS/NAE/IOM. Pp. 26-49. Panel on Scientific Responsibility and the Conduct of Research. Committee on Science, Engineering, and Public Policy. Washington DC: National Academy Press.
Grinnell, Frederick. 1992. The Scientific Attitude. 2nd Edition. New York: The Guilford Press.
Guston, David H. 2000. Retiring the Social Contract for Science. Issues in Science and Technology Online. (Summer). A publication of the National Academy of Sciences, National Academy of Engineering and Ida and Cecil Green Center for the Study of Science and Society. Available URL: http://www.nap.edu/issues/16.4/p_guston.htm (as of September 30, 2000).
Institute of Medicine (IOM). 1989. The Responsible Conduct of Research in the Health Sciences. Committee on the Responsible Conduct of Research, Division of Health Sciences Policy. Washington DC: National Academy Press. Available URL: http://www.nap.edu/openbook/0309062373/html/R.1html (as of September 30, 2000).
Marsa, Linda. 1992. Scientific Fraud. Omni 14:38-43,82-83.
Martin, Brian. 1992. Scientific Fraud and the Power Structure of Science. Prometheus 10(1):83-
Merton, Robert K. 1973. The Sociology of Science. Chicago: University of Chicago Press.
National Academy of Sciences (NAS), National Academy of Engineering (NAE), and Institute of Medicine (IOM). 1995. On Being a Scientist: Responsible Conduct in Research. 2nd Edition. Committee on Science, Engineering, and Public Policy. Washington DC: National Academy Press.
National Academy of Sciences (NAS), National Academy of Engineering (NAE), and Institute of Medicine (IOM). 1993. Responsible Science, Volume II: Ensuring the Integrity of the