The Mertonian Principles Revised: Can the Normative Structure of Science Prevent Fraud?
In the past, science has been commonly associated with the pursuit of truth in a controlled and honest manner. For this reason, the existence of fraud within the scientific craft has been largely ignored, partly owing to its assumed non-existence, but also because it was traditionally maintained that the normative structure of science possessed inherent mechanisms to prevent deviant acts in scientific inquiry. The present essay calls this assumption into question and investigates the extent to which the structure of science averts scientific misconduct. To achieve this goal, the study starts by defining scientific fraud and then scrutinizes the idealistic conceptualizations of the normative structure of science in order to determine whether these are presently applicable or not. Finally, the paper addresses several potential motivational factors leading scientists to commit fraud and demonstrates that certain aspects of the scientific structure rather than the individual make such acts possible or even likely.
Although a precise definition is lacking, by scientific fraud we understand an act of deception whereby one’s work or the work of others is consciously and intentionally misrepresented. It belongs to the wider category of scientific misconduct, defined as deviation from accepted ethical practices for proposing, conducting, and reporting research. Scientific fraud may take numerous forms, the most common of which is the falsification of data: outright fabrication, deceptive selection and reporting of findings, and the omission of conflicting information. Moreover, scientific fraud is a label for improprieties of authorship, which include plagiarism and other improper assignments of credit, such as excluding others or claiming someone else’s work as one’s own. Additionally, under the term scientific fraud are classified acts of misappropriation of others’ ideas, for instance through improper use of information or influence gained by privileged access, such as service on peer review panels, editorial boards, and policy boards of research funding organizations. Finally, it is necessary to distinguish fraud from honest error and from ambiguities of interpretation that are considered inevitable in the scientific process.
For many years, scientific fraud as defined above was not perceived as an issue of concern, given that the normative structure of science would make such acts unlikely. This view was most clearly articulated by Robert K. Merton, who understood the institutional goal of science as the “extension of certified knowledge” and outlined four norms that he saw as central to this pursuit. Universalism, as he maintained, implies that the validity and truth of scientific statements be totally separated from the personal characteristics of the one who initiates them. Communality entails that scientific findings should be freely shared with others, whilst secret or classified research is antithetical to the spirit of science. Disinterestedness requires that the scientist’s research be guided not by personal motives (e.g. profit), but by the wish to extend scientific knowledge. Finally, organized skepticism means that scientists should be encouraged to examine openly, honestly, and publicly each other’s work and provide constructive criticism. Merton believed that conforming to these norms generates two types of control mechanisms that discourage fraud in science.
According to him, the first is an “inner mechanism” secured by the scientist through the internalization of norms and other processes of socialization. In the long process of training, the scientist assimilates the guidelines and methods of scientific inquiry and learns that fraud represents the most serious crime in the search for scientific certainties. The second form of control in science is an “external mechanism”, which follows from the norms of communality and organized skepticism and implies that if a scientific result were important enough, other scientists would try to repeat it. Replication of the experiments would facilitate exposure of cheating and encourage honesty. For this reason, science has commonly been perceived as self-policing and self-correcting, and fraud as extremely rare, if not completely absent. Therefore, “in the past scientists widely believed that science possessed sufficient internal checks to effectively deter fraud, to discover dishonesty so quickly and efficiently that the resulting damage to a scientist’s professional career would be too great to risk.” Nevertheless, an overview of the present scientific reality should confirm not only that social control mechanisms in science are much weaker than usually accepted, but also that there are structural as well as personal incentives conducive to scientific fraud, which cast doubt on the traditional assertions.
To begin with, the four aforementioned Mertonian principles can be criticized for failing to accurately reflect the present state of affairs in the field of science. For instance, the norm of universalism hardly appears to be satisfied nowadays, given that the truth of a scientific statement is not always separated from the personal characteristics of the individual scientist. Instead, “truth” in science is often negotiated and clearly depends on the prestige of the scientist who produces the statement. The acceptance or rejection of claims in science is often determined by their source and their fit to prevailing beliefs and knowledge. It is usually unlikely that the scientific community will pay as much attention to a young, inexperienced scholar as to an established scientific VIP, especially if the claim in question contradicts or rejects generally accepted scientific principles. In short, reputation and the appeal of a theory often confer “immunity from scrutiny”.
Belief in the accepted scientific paradigms on which many spend their entire lives working and building their careers can be blamed for inducing a natural propensity to support new findings and theories that agree with established scientific world-views, even if they are deceptive. Science is not free from internal or external (cultural and political) influences, and ample examples exist of cases where deviance in science can be traced to beliefs that have “blinded” those responsible for upholding the truth. Illustrative of the introduction of “science” into a political-cultural matrix that wants to believe that this “science” is valid and true is the case of Trofim D. Lysenko, a former peasant and plant-breeder who publicly endorsed the debunked Lamarckian theory, i.e. that the mechanism by which evolution works is the inheritance of traits acquired in response to one’s environment. Lysenko’s view accorded nicely with the political view of Stalin’s communist regime of “social engineering” and was proclaimed a dogma by the Kremlin, ensuring Lysenko’s decisive victory over his opponents, who were systematically eliminated from their posts.
Also highly relevant in this context is the case of Cyril Burt. This well-respected scientist died (in 1971) before his studies – claiming a clear correlation between the occupational status of parents and the I.Q. of their children – were investigated and proven beyond any reasonable doubt (in 1978) to be fraudulent. Asimov notes that Burt had probably fabricated data so that they would fit his theories, and Wade adds that “Burt’s data remained unchallenged because they confirmed what everyone wanted to believe”.
Moreover, contrary to Merton’s second norm, scientists fail almost daily to share their findings with one another, for a variety of reasons such as secret or classified research, fierce competition, and the fear of having one’s ideas stolen. This explains, for instance, why, despite the norm of communality, access to other scientists’ original data is extremely problematic. In these circumstances, replication is not always possible, since in many cases not enough information about the original work is available, making the detection of fraud difficult.
Furthermore, it appears that disinterestedness in science can hardly be maintained. After all, a scientist derives two sometimes-related types of personal rewards from his work, a psychological and an economic one. The former refers to such rewards as prestige, fame, recognition, self-esteem, international reputation, respect, and the bestowal of various honors. Especially since the latter half of the twentieth century, scientists are often consulted by high government officials and participate in top-level policy discussions on national, international, social, medical, and political issues. Additionally, the highest award any scientist can attain, the Nobel Prize, is highly personalized, since it is awarded to the chosen scientist in person. For instance, in 1923 the Nobel Prize in Physiology or Medicine was awarded to Macleod and Banting for discovering how to isolate insulin, and according to Bliss, it appears that Macleod (and others) twisted the truth about who should have gained the honor. Certainly, Macleod did not show much disinterestedness in this case. As far as the economic compensation goes, Ben-Yehuda argues that “recognized, well-established scientists usually enjoy economic security and a moderate to luxurious life-style. Many travel throughout the world very comfortably and have access to research grants, which might further bolster their already comfortable salaries”. Consequently, scientists have in fact profound interests in being successful, and disinterestedness does not appear to be a realistic norm.
Finally, and arguably most important, the norm of organized skepticism appears to be the most troublesome among Merton’s principles and suggests that the external control mechanism may be far weaker than conventionally assumed. Several important problems exist with the replication argument. First, replication is in general not considered very prestigious or interesting work by scientists who have been trained to innovate. In addition, granting institutes and scientists are reluctant to provide funds, time, and effort for replication alone, unless the replication itself is significant in some important way. Moreover, the idea of replication is most applicable to experimental disciplines which lend themselves to empirical experiments and reproductions. Replications are rather unlikely regarding the ideas and theories that dominate the humanities and social sciences. In addition, the use of methodologies and techniques such as surveys, participant observation, and ethnographies virtually guarantees that results cannot be replicated. Nor can replication ever really prove that an experiment is deceptive, only that its results are wrong; to conclude that fraud is involved, one needs serious premises for support. Finally, replication may sometimes be obstructed by financial, structural, or technological limitations. For example, only the advent of X-rays and other sophisticated technical equipment made it possible, forty years after its “discovery”, to declare the Piltdown Man a hoax.
Consequently, the fact that replication is not a popular or straightforward endeavor means that the probability of fraud detection is not particularly high. The complicated and elaborate division of labor in science, leading to increased specialization, makes the detection of fraud even more remote. Additionally, the low probability of exposing fraud is coupled with generally mild sanctions once a fraudulent scientist is indeed discovered. Ben-Yehuda argues that “fraudulent scientists are not submitted to anything like the criminal justice system, and a formal trial does not follow investigations.” In many cases, the deviant act of the scientist is not even made public. The superiors of deviant scientists are rarely eager to disclose such cases, for doing so might damage their own and their institute’s prestige and credibility. Thus, unless the case involves blatant fraud and fabrication on a mass scale, the worst the deviant scientist is likely to incur is unemployment. As a result, it appears that the combination of low probability of detection and mild punishments provides a fertile context for misconduct.
One can thus observe that there are several elements and forces inherent in the normative structure of science that are very much conducive to scientists committing fraudulent acts. Nevertheless, this is certainly not sufficient to explain the motivational element for engaging in scientific misconduct. This topic has been rather controversial in the past, with “current popular efforts tending to be either individualistic ‘bad apple’ explanations, or indictments of the pressures to produce inherent in the structure of modern science”. The preference for individualistic interpretations is reflected, for instance, in the statement of Philip Handler, former president of the National Academy of Sciences, who argued that the issue of scientific fraud “need not be a matter of general scientific concern” as it was primarily the result of individuals who were “temporarily deranged”. Such attempts to explain scientific deviance by attributing it to character or personality flaws of the perpetrators are excellent examples of the “bad person” approach to explaining deviant behavior. “Being the least threatening to the status quo, this approach has the added advantage of deflecting attention and potential criticism away from the institutional arrangements in which the deviance occurs – in this case the structure of modern scientific research.” Nevertheless, criticism has been forthcoming, much of it from within the scientific community.
At least since the second half of the 1970s, the notion crystallized that mobility in most Western scientific establishments depended primarily on published work. This put heavy pressure on young scientists, as the slogan “publish or perish” became very meaningful. Those who could write fast and had good contacts with editors and publishers moved forward; the others agonized. “When the idea of producing much and quickly is being impressed upon a young scientist who very badly wants an academic career, a temptation to deviate is likely to be created.” Ebert, former Dean of the Harvard Medical School, notes that science has “inadvertently fostered a spirit of intense, often fierce competition” and that the pressure to produce and publish is probably THE reason for fraud in science. Promotion in the most prestigious medical schools is based almost entirely on the evaluation of published research. Moreover, power and prestige are now dependent on the ability to generate external funding, with younger researchers being pressured to turn out papers reporting positive results as fast as possible. In many cases, getting grant money makes the difference between doing research and being out of a job. Consequently, what had been a purely intellectual competition in the past has now become an intense struggle for scarce resources based on published work. What this entails is that fraudulent scientists are certainly not born as such; rather, they become fraudulent. “A lonely scholar, hungry for publications and recognition, in a stiff competition and in a desperate subjective need for tenure, may easily become cynical of science.”
Given the ubiquitous pressures exerted by the structure of science, it would surely be justified to assume a high incidence of scientific misconduct. Nevertheless, determining the actual amount of scientific fraud is much like measuring the frequency of crime in society; only those crimes that have been exposed are taken into account. In this regard, there appear to be few incidents of reported and recorded scientific fraud. Broad and Wade, for example, list thirty-four cases of “known or suspected” scientific fraud from the second century BC through 1981. Yet reported incidents are not an accurate reflection of the actual amount of activity, and it would be naïve to assume that there are no incidents of scientific fraud that remain undiscovered and unreported. Broad and Wade estimate, for instance, that there are about twelve instances of unreported fraud for every reported case. Such arguments that scientific fraud may be much more prevalent than what is reported or officially estimated form part of the so-called “iceberg theory.” The scientific ethos and community usually vigorously deny this theory; fraud in science is still seen as the rare exception. Nevertheless, the critical discussion of the Mertonian principles – replication and refereeing, the structural and motivational aspects conducive to deviance, weak control mechanisms, low probability of being caught, and mild punishments for fraudulent scientists – suggests that the few cases uncovered may indeed be just the tip of the iceberg. Consequently, it seems safe to assume that fraud is probably more prevalent in science than is usually considered. Nevertheless, as Ben-Yehuda correctly points out, concluding that all of science is fraudulent, manipulated, cooked, or faked would be a misinterpretation.
A scientific community does in fact exist, there is intensive scientific activity which functions along explicit, well-established norms, and certified knowledge is produced and transmitted from generation to generation of scientists.
Moreover, some actually argue that a certain amount of fraud is not only tolerable but also essential for progress. Such a radical view on the ubiquity of fraud comes from philosopher of science Paul Feyerabend, who holds that small-scale cheating is necessary to the advancement of science. He argues that no theory, no matter how good, ever agrees with all the facts in its domain. A scientist must therefore rhetorically nudge certain facts out of the picture, defuse them with an ad hoc hypothesis, or just plain ignore them. Similarly, Kuhn divides the history of science into periods of normal and revolutionary activity, arguing that during normal periods, anomalies observed by the scientists must be suppressed or ignored. According to him, even our most important theories have been maintained in spite of anomalous experimental results, in the hope that these anomalies will eventually be explained and the threat to our favorite theories eliminated.
However, even if we concede the existence of an amount of fraud that is beneficial for science, there remains the issue that certain frauds, and their extent, can have serious implications for the scientific field in which they are produced as well as for society at large. For instance, suppression of evidence for the toxicity or disease consequences of certain products and medicines can have disastrous effects on the health of the wider population. Extensive fraud can further raise questions about moral and human dignity (e.g. cloning) and about the trust that people can place in science. Therefore, bringing possible preventive methods into the discussion seems justified. Vigorous attempts at solutions to scientific fraud have so far been largely absent, but obvious proposals can be mentioned, such as harsher sanctions, a review of the current peer review system, more secure contracts for scientists, or a proper financial support structure which does not involve a desperate annual search for funds. An appeal to the conscience of those scientists contemplating fraud can also be of assistance. Even if Milgram demonstrated in his famous experiment that there is potential for evil in each of us, it can still be useful to remind ourselves of Socrates, who argued that misconduct deeply harms our conscience and also society at large. Consequently, one should fight to inhibit wicked impulses and strive to act responsibly.
The present analysis has thus uncovered that the Mertonian view of science is somewhat misleading, as science has at least a few problematic areas conducive to the creation and maintenance of fraudulent acts. The Mertonian norms can therefore be conceptualized as ideals, not reflecting reality in its entirety. As already discussed, fraud in science occurs first because social control in science is weak, in contrast to the Mertonian principles. Second, structural and motivational factors in science and the scientific community as such are surely favorable to committing acts of misconduct. The pressure to deviate has very little to do with the scientist’s personality, but more with the structure of science, its goals, the way it functions, and the specific positions which scientists hold. This is certainly not meant to excuse the fraudulent scientist of all guilt, since pressure is an ever-present feature of science and most scientists do not succumb to the temptation of committing fraud. Nonetheless, it can be positively affirmed that the structure of science does permit such temptations to arise in those scientists who ignore their conscience and allow the devil dwelling deep inside each one of us to surface.
Adam, D. & Knight, J. (2002). “Publish, and Be Damned.” Nature. Vol. 419. 772-776.
Bechtel, H.K. & Pearson, W. (1985). “Deviant Scientists and Scientific Deviance.” Deviant Behavior. Vol. 6. 237-252.
Ben-Yehuda, N. (1986). “Deviance in Science.” The British Journal of Criminology. Vol. 26. No. 1. 1-27.
Broad, W.J. (1981). “Fraud and the Structure of Science.” Science. Vol. 212. 137-141.
Broad, W.J. & Wade, N. (1982). Betrayers of the Truth. New York. Simon and Schuster.
California Polytechnic State University (1996). “Policies and Procedures for the Handling of Allegations of Scientific Fraud and Serious Misconduct.” [Available under: ]
Gardner, M. (1952). Fads and Fallacies in the Name of Science. New York. Dover Edition.
List, C.J. (1985). “Scientific Fraud: Social Deviance or the Failure of Virtue?” Science, Technology, & Human Values. Vol. 10. No. 4. 27-36.
Martin, B. (1992). “Scientific Fraud and the Power Structure of Science.” Prometheus. Vol. 10. No. 1. 83-98. [Available under: ]
Merton, R.K. (1973). “The Normative Structure of Science (1942)” in Merton, R.K. Sociology of Science. Chicago. University of Chicago Press. 267-278.
Milgram, S. (1974). Obedience to Authority: An Experimental View. New York. Harper & Row Publishers.
Plato (1993). Republic. Oxford, New York. Oxford University Press.
Schmaus, W. (1983). “Fraud and the Norms of Science.” Science, Technology, & Human Values. Vol. 8. No. 4. 12-22.
Straus, W.L. (1954). “The Great Piltdown Hoax.” Science. Vol. 119. No. 3087. 265-269.
Weinstein, D. (1979). “Fraud in Science.” Social Science Quarterly. Vol. 59. 639-652.
Zuckerman, H. (1977). “Deviant Behavior and Social Control in Science” in Sagarin, E. (ed.) Deviance and Social Change. Beverly Hills. Sage Publications. 87-138.
California Polytechnic State University 1996
Merton 1973. See also Schmaus 1983 and Zuckerman 1977.
Ben-Yehuda 1986: 2
Bechtel and Pearson 1985: 244
Broad 1981: 140
Gould showed how scientists distorted or invented data in order to uphold their political theories and prove that certain types of people were inferior (Gould in Ben-Yehuda 1986)
See for instance Zuckerman 1977
Ben-Yehuda 1986: 13-14
Gardner 1952: 140
The studies claimed that the higher the parents’ social rank, the higher the IQ of the child. See Broad 1981: 140
Asimov 1979: 139 cited in Ben-Yehuda 1986: 15
Wade 1976: 918 cited in Ben-Yehuda 1986: 15
Bliss 1982 cited in Ben-Yehuda 1986: 11
Ben-Yehuda 1986: 5
Zuckerman 1977: 95
List 1985: 29
In 1912, archaeologists announced the discovery of a skull that became an example of the notorious “missing link” between humanity and its primate ancestors, owing to its huge, humanlike braincase and flattened teeth combined with an ape-like jaw. Forty years later it was revealed that the teeth of the specimen had been ground down artificially to appear human and the jawbone had been stained chemically to make it appear ancient. The crude forgery was exposed to the surprise and embarrassment of those involved, including leading evolutionary experts, but investigations ultimately failed to establish who was to blame, by what means the deceit was achieved, and why. See Straus 1954 for details
Ben-Yehuda 1986: 8
Bechtel and Pearson 1985: 243
Handler in ibid.
Bechtel and Pearson 1985: 243
Wallis in Ben-Yehuda 1986: 10
Ebert in Bechtel and Pearson 1985: 244
Ben-Yehuda 1986: 20
Broad and Wade 1982
For instance, Ben-Yehuda argues that the number of known cases of fraud in science is indeed relatively small. However, these cases are biased, as most of them are on fraud in those areas of science where “hot” issues and “breakthrough” research are debated. Yet, most of the scientific work is very far from this type and is in non-breakthrough research. If fraud is being committed in one of those areas, the probability of its ever being detected is very slim (Ben-Yehuda 1986: 18)
Ben-Yehuda 1986: 22
Feyerabend in Broad 1981: 139
Kuhn in Schmaus 1983: 17
See Adam and Knight 2002