For centuries, observers have noted the many obstacles to intellectual change in science. In a much-discussed paper published in Scientific American in 1972, molecular biologist Gunther Stent proposed an explicit criterion for one kind of obstacle to scientific discovery. He termed a claim or hypothesis "premature" if its implications cannot be connected to canonical knowledge by a simple series of logical steps. Further, Stent suggested that it was appropriate for the scientific community to ignore such hypotheses so that it would not be overwhelmed by vast numbers of false leads. In this volume, eminent scientists, physicians, historians, social scientists, and philosophers respond to Stent's thesis.
Prematurity in Scientific Discovery: On Resistance and Neglect
One of the depressing by-products of the fantastically rapid progress that was made in molecular genetics in the past twenty-five years is that now merely middle-aged participants in its early development are obliged to look back upon their early work from a depth of historical perspective that, in the case of biological specialties that came into flower in earlier times, had opened up only after all the witnesses of the first blossoming were long dead. I have been trying to make a virtue out of necessity and actually exploit this singular position for fathoming the evolution of a scientific field.1 Thus, in looking back on the history of molecular genetics from the viewpoint of my own experiences, I have found that one of its most famous incidents, Oswald T. Avery's identification of DNA as the active principle in bacterial transformation and, hence, as genetic material, illuminates a general problem of cultural history.2 The case of Avery brings, I think, insights into the question of whether it is meaningful, or merely tautological, to allege that a discovery is "ahead of its time," or premature.
Gunther S. Stent
Prematurity

In 1968, I published a brief retrospective essay on molecular biology with particular emphasis on its origins.3 In this historical account, I mentioned neither Avery's name nor DNA-mediated bacterial transformation. My essay brought forth a letter to the editor by Carl Lamanna, who complained that "it is a sad and surprising omission that Stent makes no mention of the definitive proof of DNA as the basic hereditary substance by O. T. Avery, C. M. MacLeod and I. L. McCarty.4 The growth of [molecular genetics] rests upon this experimental proof. . . . I am old enough to remember the excitement and enthusiasm induced by the publication of the paper by Avery, MacLeod and McCarty. Avery, an effective bacteriologist, was a quiet, self-effacing, non-disputatious gentleman. These characteristics of personality should not prevent the general scientific public represented by the audience of Science to let his name go unrecognized."5
I was taken aback by Lamanna's letter and replied that I agreed that I really should have mentioned in my essay Avery's 1944 proof that DNA is the hereditary substance.6 But, I went on to say, in my opinion it is not true that the growth of molecular genetics rests upon Avery's proof. For many years that proof had actually made a surprisingly small impact on geneticists, both molecular and classical, and it was only the Hershey-Chase experiment of 1952 that caused those people to focus on DNA.7 The reason for this delay was neither that Avery's work was unknown to or mistrusted by geneticists nor that the Hershey-Chase experiment was technically superior. Instead, Avery's discovery, so I declared, had been merely "premature." And in the last two sentences of my reply to Lamanna, I sketched out the argument about prematurity that I shall try to develop here in somewhat greater detail.
My prima facie reason for considering Avery's discovery premature is that it was not appreciated in its day.8 But is it, in fact, true that Avery's discovery was not appreciated? Lamanna, for example, mentions his own excitement and enthusiasm induced by the publication of Avery's paper, and several participants in the 1946 Cold Spring Harbor Symposium on Heredity and Variation in Microorganisms have told me that Avery's discovery formed the subject of intense discussion at that symposium. So how can I say that it was not appreciated? By lack of appreciation I do not mean that Avery's discovery went unnoticed, or even that it was not considered important. What I do mean is that no one seemed to be able to do much with it, or build upon it, except for the students of the transformation phenomenon. That is to say, Avery's discovery had virtually no effect on general genetic discourse.
By way of support of this allegation, I invite examination of the 1946 Cold Spring Harbor Symposium volume. It contains a paper by McCarty, Harriet Taylor, and Avery, whose main concern is not the meaning of the discovery for genetics but the elucidation of the role of serum in the DNA-mediated transformation phenomenon. Although many of the other papers of the volume are followed by discussants' remarks, no discussant of the McCarty, Taylor, and Avery paper is on record. Only five of the other 26 symposium papers refer to Avery's discovery.
Three phage workers, T. F. Anderson, A. D. Hershey, and S. E. Luria, venture the opinion that the phenomenon is probably of wide biological importance. L. Dienes concludes that since DNA is "a substance without apparent organization," Avery's discovery means that "bacteria possess a mechanism for the exchange of hereditary characteristics, [that is] different from the usual sexual processes," and S. Spiegelman is under the impression that Avery discovered "the induction of a particular enzyme with a nucleoprotein [sic] component."9 Neither Max Delbrück nor J. Lederberg and E. L. Tatum mention Avery at all in their now famous 1946 symposium papers.
An even more convincing demonstration of the lack of appreciation of Avery's discovery is provided by the 1950 Golden Jubilee of Genetics symposium "Genetics in the 20th Century."10 Here some of the most eminent geneticists of that time presented essays that surveyed the progress of the first 50 years of genetics and assessed its present status. Only one of the 26 essayists saw fit to make more than a passing reference to Avery's discovery, then six years in the past, namely A. E. Mirsky, who still expressed some doubts that the active transforming principle is really pure DNA. H. J. Muller's 1950 symposium essay on the nature of the gene contains no mention of Avery or DNA.
So, why was Avery's discovery not appreciated in its day? Because it was "premature." But is this really an explanation or is it merely an empty tautology? In other words, is there a way of providing a criterion of the prematurity of a discovery other than its failure to make an impact? Yes, there is such a criterion: A discovery is premature if its implications cannot be connected by a series of simple logical steps to contemporary canonical [or generally accepted] knowledge.11 This criterion is not to be confused with that of an unexpected discovery, which can be connected with the canonical ideas of its day but might overthrow one or more of them. For instance, the finding of a "reverse transcriptase" would fall into the category of unexpected discoveries—provided, of course, that the function attributed to that enzyme of catalyzing the assembly of a DNA replica from an RNA template can eventually be shown to occur in vivo.12 Although prior to that finding, it had been generally assumed by molecular geneticists that there is no reverse flow of "information" from RNA to DNA, there is no difficulty at all in understanding such a process from the viewpoint of the then-current ideas of polynucleotide synthesis.
Why could Avery's discovery not be connected with canonical knowledge? By 1944, DNA had long been suspected of exerting some function in hereditary processes, particularly after R. Feulgen [with H. Rossenbeck] had shown in 1924 that DNA is a major component of the chromosomes.13 But the then current view of the molecular nature of DNA made it well nigh inconceivable that DNA could be the carrier of hereditary information. First of all, until well into the 1930s DNA was generally thought to be merely a tetranucleotide composed of one residue each of adenylic, guanylic, thymidylic, and cytidylic acid. Secondly, even when it was finally realized by the early 1940s that the molecular weight of DNA is actually much higher than that demanded by the tetranucleotide theory, it was still widely believed that the tetranucleotide is the basic repeating unit of the large DNA polymer in which the four purine and pyrimidine bases recur in regular sequence. DNA was therefore viewed as a monotonously uniform macromolecule which, like other monotonous polymers such as starch or cellulose, is always the same no matter what its biological source. The ubiquitous presence of DNA in the chromosomes was, therefore, generally explained in purely physiological or structural terms. Instead, it was usually to the chromosomal protein that the informational role of the genes had been assigned since the great differences in the specificity of structure that exist between heterologous proteins in the same organism, or between homologous proteins in different organisms, had been appreciated since the beginning of this century. The conceptual difficulty of assigning the genetic role to DNA had by no means escaped Avery, for in the conclusion of his paper he states that "if the results of the present study of the transforming principle are confirmed[,] then nucleic acids must be regarded as possessing biological specificity the chemical basis of which is as yet undetermined."
However, by 1950, the tetranucleotide theory had been overthrown, thanks largely to the work of Erwin Chargaff who showed that, contrary to the demands of that theory, the four nucleotide bases are not necessarily present in DNA in equal proportions.14 Chargaff found, furthermore, that the exact base composition of DNA differs according to its biological source, suggesting that DNA may not be a monotonous polymer after all. So when, two years later, Hershey and Chase showed that upon infection of the host bacterium at least 80% of the phage DNA enters the cell whereas at least 80% of the phage protein remains outside, it was now possible to connect their conclusion that DNA is the genetic material with canonical knowledge.15 For Avery's "as yet undetermined" chemical basis of the biological specificity of nucleic acids could now be envisaged as the precise sequence of the four nucleotide bases along the polynucleotide chain. The general impact of the Hershey-Chase experiment was immediate and dramatic. DNA was suddenly in and protein was out, as far as thinking about the nature of the gene was concerned. Within a few months, there arose the first speculations about the genetic code, and Watson and Crick were inspired to set out to discover the structure of DNA.
Naturally, the case of Avery is only one of many premature discoveries in the history of science. I have presented it here for consideration mainly because of my own failure to appreciate it when I joined Delbrück's phage group and took the Cold Spring Harbor phage course in 1948. Since then, I have often wondered what my later fate would have been if only I had been intelligent enough to appreciate Avery's discovery and infer from it four years before the Hershey-Chase experiment that DNA must also be the genetic material of the phage.
Probably the most famous case of prematurity in the history of biology is that of Gregor Mendel, whose discovery of the particulate nature of heredity in 1865 had to await 35 years before it was "rediscovered" at the turn of the century.16 Mendel's discovery made no immediate impact, so it can be argued, because the concept of discrete hereditary units could not be connected with the (mid 19th century) canonical knowledge of anatomy and physiology. Furthermore, the statistical methodology by means of which Mendel interpreted his data was wholly foreign to the way of thinking of his contemporary biologists. By the end of the 19th century, however, chromosomes, mitosis, and meiosis had been discovered, and Mendel's results could now be accounted for in terms of microscopically visible structures and processes. Furthermore, by then the application of statistics to biology had become commonplace. In some respects, however, Avery's case is a more dramatic example of prematurity than Mendel's. Whereas Mendel's discovery seems to have been hardly mentioned by anyone until its rediscovery, Avery's discovery was widely discussed, and yet could not be appreciated for eight years.
A striking example of delayed appreciation of a discovery in the physical sciences, as well as an explanation of that delay in terms of the concept to which I refer here as prematurity, has been provided by Michael Polanyi.17 In the years 1914-1916, Polanyi published a theory of the adsorption of gases on solids which assumed that the force attracting a gas molecule to a solid surface depends only on the position of that molecule, but not on the presence of other molecules, in the force field. Despite the fact that Polanyi was able to provide strong experimental evidence in favor of his theory, it was generally rejected. Not only was the theory rejected, but it was considered so ridiculous by the leading authorities of the time that Polanyi believes continued defense of his theory would have ended his professional career had he not managed to publish work on other more palatable ideas. The reason for the general rejection of Polanyi's adsorption theory was that, at the very time he put it forward, the role of electrical forces in the architecture of matter had just been discovered. And hence, there seemed to be no doubt that gaseous adsorption must also involve electrical attraction between gas molecules and solid surfaces. That point of view, however, was irreconcilable with Polanyi's basic assumption of the mutual independence of individual gas molecules in the adsorption process. Instead of Polanyi's theory, the theory of I. Langmuir, which did envisage a mutual interaction of the gas molecules of the kind expected from electrical forces, found general acceptance. It was only in the 1930s, after F. London developed his new theory of cohesive molecular forces based on quantum mechanical resonance rather than electrostatic attraction, that it became conceivable that gas molecules could behave in the way in which Polanyi's experiments indicated they are actually behaving. Meanwhile, Langmuir's theory had become so well-established, and Polanyi's had been consigned so authoritatively to the ash can of crackpot ideas, that Polanyi's theory was rediscovered only in the 1950s.18
We may now consider whether the notion of prematurity is actually a useful historical concept. First of all, is prematurity the only possible explanation for the lack of contemporary appreciation of a discovery? No, evidently not. For instance, Lamanna suggested the "quiet, self-effacing, non-disputatious" personality of Avery as the cause for the failure of general recognition of his discovery. And Chargaff is another believer in the idea that personal modesty and reticence about self-advertisement account for lack of contemporary appreciation.19 For instance, Chargaff has attributed the 75-year hiatus between F. Miescher's discovery of DNA in 1869 and the appreciation of its importance to Miescher being "one of the quiet in the land," who lived when "the giant publicity machines, which today accompany even the smallest move on the chess-board of nature, with enormous fanfares were not yet in place." Indeed, the 35-year hiatus in the appreciation of Mendel's discovery is often attributed to Mendel having been a modest monk living in an out-of-the-way Moravian monastery. Hence, the notion of prematurity provides an alternative to invoking lack of publicity, an explanation that is, in my opinion, false for the cases mentioned here.
But, more importantly, does the prematurity concept pertain only to retrospective judgments made with the wisdom of hindsight? No, I think it can be used also to judge the present. For some discoveries have been made recently that are still premature at this very time. One example of here-and-now prematurity is the alleged finding that sensory information received by an animal can be stored in RNA or other macromolecules.
In the early 1960s, there began to appear reports by experimental psychologists purporting to have shown that the memory trace, or engram, of a task learned by a trained donor animal can be transferred to a naive recipient animal by injecting or feeding the recipient with an extract made from the tissues of the donor.20 At that time, the central message of molecular genetics that nucleic acids and proteins are "informational macromolecules" had just gained wide currency, and the facile equation of sensory information with genetic information soon led to the proposal that macromolecules—DNA, RNA, or protein—store memory. As it happens, the experiments on which the macromolecular theory of memory is based have been very difficult to repeat, and the results claimed for them may indeed not be true at all. But it is significant that few neurophysiologists have even bothered to check these experiments, despite everybody having heard about them and being aware that the possibility of chemical memory transfer would constitute a fact of capital importance. The lack of interest of neurophysiologists in the macromolecular theory of memory can be accounted for by recognizing that this theory, whether true or false, is clearly premature: there is no chain of reasonable inferences by means of which our present, albeit very imperfect, view of the functional organization of the brain can be reconciled with the possibility of its acquisition, storage, and retrieval of experiential information by encoding such information in nucleic acid or protein molecules. Thus for the community of neurobiologists there is no point in devoting its time to checking on experiments whose results, even if they were true as alleged, could not be connected with canonical knowledge.
The concept of here-and-now prematurity can be applied also to the troublesome subject of extrasensory perception, or ESP. During the summer of 1948, while taking the Cold Spring Harbor phage course, I happened to witness a heated argument between two future mandarins of molecular biology, S. E. Luria and R. E. Roberts. Roberts was then interested in ESP and felt it had not been given fair consideration by the scientific community. As far as I remember, he thought one might be able to set up some experiments with molecular beams which could provide more definitive data on the possibility of mind-induced departures from random distributions than J. B. Rhine's then much-discussed card-guessing procedures.21 Luria declared not only that he was not interested in Roberts' proposed experiments, but that in his opinion it was unworthy of anyone claiming to be a scientist even to discuss such rubbish. How could an intelligent fellow like Roberts entertain the possibility of phenomena totally irreconcilable with the most elementary physical laws? Moreover, a phenomenon which is manifest only to specially endowed subjects, as claimed by parapsychologists to be the case for ESP, is outside the proper realm of science, which must deal with phenomena accessible to every observer. Roberts replied that far from his being unscientific, it was Luria whose bigoted attitude toward the unknown was unworthy of a true scientist. The fact that not everybody has ESP only means that it is an elusive phenomenon, like musical genius. And just because a phenomenon cannot be reconciled with what we now know, we need not shut our eyes to it. On the contrary, it is the duty of the scientist to try to devise experiments designed to probe its truth or falsity.
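The statistical question at the center of such card-guessing experiments, whether a subject's hit rate departs from what chance alone would produce, can be made concrete with a toy calculation (a sketch only; the trial and hit counts below are invented for illustration, not Rhine's data):

```python
import math

def p_at_least(hits, trials, p_chance=0.2):
    """Exact one-sided binomial tail: the probability of scoring
    `hits` or more correct guesses by pure chance. With a deck of
    five-symbol Zener cards, the chance rate per guess is 1/5."""
    return sum(
        math.comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
        for k in range(hits, trials + 1)
    )

# In 200 guesses, chance predicts about 40 hits.
print(p_at_least(60, 200))  # a score of 60: very small tail probability
print(p_at_least(42, 200))  # a score of 42: unremarkable
```

Note that Zener cards are actually drawn without replacement from a 25-card deck, so the independent-trials assumption above is itself a simplification of the real experimental situation.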
It seemed to me then that both Luria and Roberts were right, and, in the intervening years, I often thought about this puzzling disagreement, unable to resolve it in my own mind. Finally, I read C. W. Churchman's review of a book on ESP,22 and I began to see my way toward a resolution. Churchman set forth that there are three different possible scientific approaches to ESP. The first of these is that the truth or falsity of ESP, like that of the existence of God or the immortality of the soul, is totally independent of either the methods or findings of empirical science. And hence, an adherent of the tenets of logical positivism would relegate ESP to the class of meaningless propositions. Thus the problem of ESP is defined out of existence. I imagine that this was more or less Luria's position.
Churchman's second approach is to reformulate the ESP phenomenon in terms of currently acceptable scientific notions, such as unconscious perception or conscious fraud. This procedure is not as arbitrary as it might seem at first sight, because the "extra" in extrasensory perception is a conceptually fuzzy negative property anyhow. Thus, rather than defining ESP out of existence, it is trivialized. The second approach probably would have been acceptable to Luria too, but not to Roberts.
Finally, the third approach is to take the proposition of ESP literally and to attempt to examine in all seriousness the evidence for its validity. That was, more or less, Roberts' position. But, as Churchman points out, this approach is not likely to lead to satisfactory results. Parapsychologists can maintain with some justice that the existence of ESP has already been proven to the hilt, since no other set of hypotheses of psychology has received the degree of critical scrutiny that has been given to ESP experiments. And many other phenomena have been accepted on much less statistical evidence than that which has been offered for ESP. The reason that Churchman advances for the futility of a strictly evidential approach to ESP is that, in the absence of a hypothesis of how ESP could work, it is not possible to decide whether any set of relevant observations can be accounted for only by ESP, to the exclusion of alternative explanations. Churchman thus applies to the problem of ESP the principles of Karl Popper's "hypothetico-deductive" theory of scientific discovery, according to which facts gain scientific meaning only within the framework of preconceived hypotheses.
After reading Churchman's review, I realized that Roberts would have been ill-advised to proceed with his ESP experiments, not because, as Luria claimed, they would not be "science," but because any positive evidence he might have found in favor of ESP would have been, and would still be, premature. That is, until it is possible to connect a phenomenon like telepathy with canonical knowledge of, say, electromagnetic radiations and neurophysiology, no demonstration of its occurrence can be appreciated.
Is the lack of appreciation of premature discoveries merely attributable to the intellectual shortcoming of scientists, who, if they were only more perceptive, would give immediate recognition to any well-documented scientific proposition? Polanyi is not of that opinion. Upon reflecting on the cruel fate of his theory half a century after first advancing it, he declares that ". . . this miscarriage of the scientific method could not have been avoided. . . . There must be at all times a predominantly accepted scientific view of the nature of things, in the light of which research is jointly conducted by members of the community of scientists. A strong presumption that any evidence which contradicts this view is invalid must prevail. Such evidence has to be disregarded, even if it cannot be accounted for, in the hope that it will eventually turn out to be false or irrelevant."23
This is a view of the operation of science rather different from that commonly held, under which acceptance of authority is seen as something that must be avoided at all costs. The good scientist is seen as an unprejudiced man with an open mind who is ready to embrace any new idea supported by the facts. As the history of science shows, its practitioners do not appear to act according to that popular view. . . . 24
Structuralism

It is only since about mid-century, more or less contemporaneously with the growth of molecular biology, that a resolution of the age-old epistemological conflict of materialism versus idealism emerged in the form of what has come to be known as structuralism.25 This development is another example of R. K. Merton's multiple discovery concept, since structuralism emerged simultaneously, independently, and in different guises in several diverse fields of study, for example in psychology, linguistics, anthropology, and biology.26
Both materialism and idealism take it for granted that all the information gathered by our senses actually reaches our mind; materialism envisages that thanks to this information reality is mirrored in the mind whereas idealism envisages that thanks to this information reality is constructed by the mind. But structuralism has provided the insight that knowledge about the world enters the mind not as raw data but in an already highly abstracted form, namely as structures. And in the preconscious process of converting step-by-step the primary data of our experience into structures, information is necessarily lost, for the creation of structures, or the recognition of patterns, is nothing else than the selective destruction of information. So since the mind does not, and cannot, gain access to the full set of data about the world, it can neither mirror nor construct reality. Instead, for the mind reality is a set of structural transforms of primary data taken from the world. This transformation process is hierarchical in that "stronger" structures are formed from "weaker" structures through selective destruction of information. And any set of primary data becomes meaningful only after a series of such operations has so transformed it that it has become isomorphic with a stronger structure preexisting in the mind.
Neurophysiological studies which Stephen Kuffler, David Hubel and Torsten Wiesel have carried out on the process of visual perception in higher mammals have not only shown directly that the brain actually operates according to the tenets of structuralism but also offer an easily understood illustration of those tenets.27 According to these studies, the primary photoreceptors in the retina report the absolute light intensity that reaches the eye from individual points in the visual field. These primary data are not sent on from the retina to the brain, however. They are first transformed in the retina into information about the light-dark contrast existing at individual points in the visual field, the absolute intensity data having been largely destroyed in the abstraction process. Upon first reaching the brain, the light contrast data for individual points are then transformed into the light contrast data for individual straight edges, or point sets, in the visual field, the information about contrast at individual points being destroyed in that second abstraction process. And at the next level of processing in the brain, the contrast data for individual straight edges are transformed into the corresponding data for sets of parallel edges, or sets of point sets in the visual field, entailing further destruction of information about individual edges. It is not yet clear what transformations take place at the next higher level of processing in the visual pathway, but it is certain that the mind experiences reality without knowing the "real" point-to-point light intensity in its surrounding.
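The stepwise abstraction that these studies describe can be caricatured in a few lines of code, purely to illustrate how each stage destroys information while recognizing a stronger structure (a toy sketch, not a model of the actual neurophysiology; the function names and numbers are invented):

```python
def point_contrasts(intensities):
    """First transformation: absolute light intensities become local
    light-dark contrasts (differences between neighboring points).
    The absolute intensity levels are destroyed in the process."""
    return [b - a for a, b in zip(intensities, intensities[1:])]

def edge_positions(contrasts, threshold=5):
    """Second transformation: contrast values become a list of edge
    positions, keeping only the places where contrast is sharp. The
    point-by-point contrast values are destroyed in turn."""
    return [i for i, c in enumerate(contrasts) if abs(c) >= threshold]

field = [10, 10, 11, 40, 41, 40, 12, 11]  # "retinal" intensities
contrasts = point_contrasts(field)        # [0, 1, 29, 1, -1, -28, -1]
edges = edge_positions(contrasts)         # two sharp transitions: [2, 5]
```

Note that the original intensity list cannot be reconstructed from `edges`: each abstraction is a one-way, information-destroying transformation, which is precisely the structuralist point.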
Finally, we may consider the relevance of structuralist philosophy for the problem in the history of science under discussion here. For prematurity, structuralism provides us with an understanding of why a discovery cannot be appreciated until it can be connected logically to contemporary canonical knowledge.
In the parlance of structuralism, canonical knowledge is simply the set of preexisting "strong" structures with which primary scientific data are made isomorphic in the mental abstraction process. Hence, data which cannot be transformed into a structure isomorphic with canonical knowledge are a dead end; in the last analysis, they remain meaningless. They remain meaningless, that is, until a way has been shown how to transform them into a structure that is isomorphic with the canon.
Acknowledgments

I made an informal presentation of the ideas covered in this essay at a conference on the history of biochemistry and molecular biology, held in May 1970 at the American Academy of Arts and Sciences. I am indebted to the dozen or so conference participants whose vigorous discussion accompanied my presentation and who helped me focus my ideas more sharply. I am particularly grateful to Harriet Zuckerman for calling my attention to Polanyi's paper on prematurity.
Notes

1. Editor's note: This progress refers to work in the period 1945-1970.
2. Avery et al. 1944. Editor's note: Stent also discussed uniqueness here, citing the Watson and Crick double-helix paper of 1953 as an example of uniqueness.
4. Avery et al. 1944.
7. Hershey and Chase 1952.
8. Editor's note: Stent has asked me to note here that what he actually meant by this was that "a lack of appreciation in its day was [his] prima facie reason for considering Avery's discovery as a candidate for prematurity."
9. Dienes 1946, p. 58.
11. Editor's note: The words "generally accepted" were inserted in Stent 1972b but do not appear in the published version of Stent 1972a, reproduced here.
12. The discovery of reverse transcriptase is described in Baltimore 1970 and in Temin and Mizutani 1970. Editor's note: Shortly after publication of Stent's article, the function was shown to occur in vivo.
13. Feulgen and Rossenbeck 1924.
15. Hershey and Chase 1952.
18. Editor's note: For further discussion of this case and additional references, see Nye, chapter 11 in this volume.
20. For a critical summary, see Quarton 1967.
24. Editor's note: Stent's paper makes a major excursion here into the uniqueness of discovery. Stent then invokes structuralism to explain uniqueness as well as prematurity.
27. Kuffler 1953; Hubel and Wiesel 1968.