Volcanic dust, climate change, tsunamis, earthquakes—geoscience explores phenomena that profoundly affect our lives. But more than that, as Doug Macdougall makes clear, the science also provides important clues to the future of the planet. In an entertaining and accessibly written narrative, Macdougall gives an overview of Earth’s astonishing history based on information extracted from rocks, ice cores, and other natural archives. He explores such questions as: What is the risk of an asteroid striking Earth? Why does the temperature of the ocean millions of years ago matter today? How are efforts to predict earthquakes progressing? Macdougall also explains the legacy of greenhouse gases from Earth’s past and shows how that legacy shapes our understanding of today’s human-caused climate change. We find that geoscience in fact illuminates many of today’s most pressing issues—the availability of energy, access to fresh water, sustainable agriculture, maintaining biodiversity—and we discover how, by applying new technologies and ideas, we can use it to prepare for the future.
Why Geology Matters: Decoding the Past, Anticipating the Future
Set in Stone
In 1969, when I was a student in California, there was a rash of predictions from astrologers, clairvoyants, and evangelists that there would be a devastating earthquake and the entire state, or at least a large part of it, would fall into the ocean. The seers claimed this would happen during April, although they were not in agreement about the precise date. A few people took the news very seriously, sold their houses, and moved elsewhere. Others, a bit less cautious, simply sought out high ground on April 4, the date of the Big One according to several of the predictors. Cartoonists and newspaper columnists had a field day poking fun at the earthquake scare, and for us geology students the hubbub was amusing but also seemed a bit bizarre. Police and fire stations, along with university geology departments, got thousands of anxious telephone calls from nervous citizens. Ronald Reagan, then the state's governor, had to explain that his out-of-state vacation that month had been planned long in advance and had nothing to do with earthquakes. The mayor of San Francisco planned an anti-earthquake party for April 18, the sixty-third anniversary of the great 1906 San Francisco earthquake. He assured the public that it would be held on dry ground.
California didn't fall into the sea in 1969, of course, nor was there a huge earthquake (although there were earthquakes, as there are every year, most of them quite small). Astrologers can't predict earthquakes (or much else). Even earth scientists, with the best geological information and most up-to-date instrumentation, find precise earthquake prediction elusive, as we will see later in this book. However, the prognosis is much better for many other geological phenomena. And at the core of this geological prediction lies the kind of work geologists have traditionally done: decoding the past.
But how, exactly, do they do that? Where do earth scientists look to find clues to the details of our planet's history, and how do they interpret them? Those questions are at the heart of this book, and the answer to the first is hinted at in the title of this chapter: the clues are found, for the most part, in the stones at the Earth's surface. (There are also many other natural archives of Earth history, such as tree rings and Antarctic ice. Ice cores in particular provide invaluable information about past climate. But these other records tell us only about the relatively recent geological past. Rocks allow us to probe back billions of years.)
To the uninitiated a rock is just a rock, a hard, inanimate object to kick down the road or throw into a pond. Look a little closer and ask the right questions, however, and it becomes more, sometimes much more. Every single rock on the Earth's surface has a story to tell. How did the rock form? When did it form? What is it made of? What is its history? How did it get here, and where did it come from? Why is this kind of rock common in one region and not in another? For a long time in the predominantly Christian countries of the West, answers to questions like these were constrained by religion. The biblical flood was thought to have been especially important in shaping the present-day landscape, and explanations for many geological features had to be built around the presumed reality of this event. However, as the ideas of the Enlightenment took hold during the seventeenth and eighteenth centuries, and as close observation of the natural world became ever more crucial for those seeking to understand the Earth, the sway of religion diminished and more rational explanations began to emerge. For geology especially, a field with its roots in the search for and extraction of mineral resources from the Earth, the pressure of commerce was also important. Those with the best understanding of how gold veins formed, or with the best knowledge of the kinds of geological settings likely to contain such veins, had the best chance of finding the next gold mine.
I will not dwell at length here on the history of geology's development as a science, or on the details of how early geological ideas evolved; these things have been dealt with in many other books. But it is worth pointing out a few key early concepts that revolutionized the way everyone, not just scientists, thought about our planet. Most of these intellectual breakthroughs arose in Europe (especially in Britain) in the eighteenth and early nineteenth centuries, and although there had been independent thinkers in the Middle East and elsewhere who had arrived at similar conclusions much earlier, the European versions would form the bedrock(!) of the emerging field of earth science.
What were these ideas and how did they come about? Without exception they stemmed from examination of rock outcroppings in the field together with observations of ongoing geological processes. One of the new concepts was that different rock types have quite different origins, something that seems obvious enough to us today. But in the eighteenth century a popular concept was that all rocks were formed by precipitation, either from a primordial global ocean or from the waters of the biblical flood. Those who championed this idea were dubbed, for obvious reasons, Neptunists, and they did not give up their theory easily. However, observations like those of Scottish geologist James Hutton, who described outcrops showing clear evidence that some rocks had once been molten, eventually turned the tables. The rock outcrops told Hutton a vivid story: flowing liquid material, now solid rock, had intruded into, and disrupted and heated up, preexisting rock strata. Hutton's descriptions of these once-molten rocks (not to mention the presence of active volcanoes like Vesuvius and Etna in southern Europe) led to the realization that there must be reservoirs of great heat in the planet's interior.
A second important early concept was that slow, inexorable geological processes that can readily be observed (rainwater dissolving rocks, rivers cutting valleys, sedimentary particles settling to the seafloor) follow the laws of physics and chemistry. Once again this seems an obvious conclusion in hindsight, but its implication (this was the revolutionary part for early geologists) was that geological processes in the distant past must have followed these very same laws. This meant that the physical and chemical characteristics of ancient rocks could be interpreted by observing present-day processes. Charles Lyell, the foremost British geologist of his day, promoted this idea as a way of understanding the Earth's history in his best-selling book Principles of Geology, first published in 1830. (The book was so popular it went through numerous editions and is still in print today in the Penguin Classics series.) Lyell himself was not the originator of the concept, but he called it the "principle of uniformitarianism," and the name stuck. Although the term itself is no longer in vogue, generations of geology students have learned that it really means "the present is the key to the past." And although the early geologists were primarily interested in working out the Earth's history, Lyell's principle of uniformitarianism can also be turned around: by the same logic, the past is, to a degree, a key to the future.
Finally, the most revolutionary of the new concepts was that the Earth is extremely old. This flew in the face of both the conventional wisdom of contemporary scholars and the religious dogma of the day. Once again, as with so much early geological thought, the idea of an ancient Earth was formalized by James Hutton, who wrote, in a much-quoted pronouncement about geological time, "we find no vestige of a beginning, no prospect of an end." No single observation led Hutton to the concept of a very old Earth; it was instead a conclusion he drew from a synthesis of all his examinations of geological processes and natural rock outcroppings: observations of things like great thicknesses of rock strata made up of individual sedimentary particles that could only have accumulated slowly, grain by grain, over unimaginably long periods of time.
With a foundation built on these new ideas, which were popularized and widely disseminated through Lyell's book, and with an ever-increasing demand for minerals and resources from the Earth, geology, now mostly free of religious fetters, exploded as a science during the nineteenth century. Countries developed geological surveys to map the terrain and discover resources, and universities founded departments of geological sciences. Decoding the past became a full-time occupation for a legion of geologists.
Today geology is subsumed into the much broader field of earth science, which includes everything from oceanography to mineralogy and environmental science. In a modern university earth science department, it is not uncommon to find researchers in the same building probing subjects as diverse as climate change, biological evolution, the chemical makeup of the Earth's interior, and even the origin of the Moon.
Let's return, however, to those clues to the Earth's past that are inherent in the physical and chemical properties of the planet's rocks, the clues that are set in stone. The challenge for earth scientists is to find ways to extract and interpret them, and in recent years very sophisticated techniques have been developed to do this. Nevertheless, there are also some very simple examples, long used by geologists, that illustrate how the approach works. Take the igneous rocks, those that form from molten material welling up from the Earth's interior. They come in many flavors, from common varieties familiar to most people, like granite or basalt, to more exotic types you may never have heard of, with names like lamprophyre and charnockite. The chemical compositions of these rocks can provide information about how they originated, but chemical analysis requires sophisticated equipment. On the other hand, there is a very simple feature, one that can be assessed quickly by anyone, that provides evidence about where the rocks formed. That characteristic is grain size.
Igneous rocks are made up of millions of tiny, intergrown mineral grains that crystallized as the liquid rock cooled down. How big these grains grow depends crucially on how fast the rock cools; lava flows that erupt on the Earth's surface cool rapidly, and the resulting rocks are very fine-grained. But not all molten rock makes it to the surface. Some remains in the volcanic conduits, perhaps miles deep in the ground. Well insulated by the overlying rocks, it takes this material a long time to cool, and the slowly growing mineral grains get much bigger than their surface equivalents. For this reason, rocks with exactly the same chemical makeup can have contrasting textures and look very different, depending on how quickly they congeal. This simple characteristic can be used to say something about the depth in the Earth at which the rocks formed.
Less obvious characteristics require more ingenuity to decode, but because the payoff, in terms of what can be learned about the Earth's history, is so great, earth scientists are continually searching for new ways to probe rocks. As we will see in later chapters of this book, geochemistry, especially the fine details of a rock's or an ice core's chemical composition, has become especially important. The behavior of chemical elements such as iron, or sulfur, or molybdenum, for example, depends on the amount of oxygen in their environment. As a result, the minerals formed by these elements are sensitive indicators of oxygen levels at the time they formed, and in some cases can be used to determine the amount of oxygen in the ancient ocean or atmosphere.
Similarly, analysis of isotopes has become one of the most important ways to extract information about the Earth's past. (Isotopes are slightly different forms of a chemical element; almost every element in the periodic table has several isotopes.) Often the conditions that prevailed when a sample formed can be deduced by measuring the abundances of different isotopes of a particular chemical element; we will encounter many examples of this approach in later chapters of this book. In an ice core, for example, oxygen or hydrogen isotope abundances might tell us about the temperature 100,000 years ago; in an ancient rock, isotopes might fingerprint the process that formed the rock, and allow us to investigate how similar or different that process was to those that occur today.
The very first application of isotopes in the earth sciences, aside from the use of radioactive isotopes for dating, still evokes admiration among geochemists and sometimes amazement from those who know nothing about geochemistry. It is a good illustration of how ordinary rocks can be a treasure trove of information about the past when the right questions are asked. In the late 1940s Harold Urey, a Nobel Prize-winning chemist at the University of Chicago, showed from theoretical considerations that in some compounds the proportions of the different isotopes of oxygen depend on the temperature at which the compound formed. In a flash of insight, he realized that this property could be used to deduce the temperature of the ancient ocean, a groundbreaking idea. Urey proposed that measurements of oxygen isotopes in the calcium carbonate shells of fossil marine organisms could be used to calculate the water temperature when these creatures grew. He and his students then verified the theory by making those measurements, and in doing so they pioneered the field of "paleotemperature" analysis. Since that early work, tens of thousands, if not hundreds of thousands, of oxygen isotope measurements have been made to document in fine detail how seawater temperatures have fluctuated in the past. In my humble opinion (perhaps with a slight bias, because my own background is in geochemistry), Urey's paleotemperature work ranks among the all-time great advances in the earth sciences.
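To give a feel for how the method works in practice, here is a minimal numerical sketch. The quadratic calibration below follows the general form of the classic carbonate paleotemperature equations that grew out of Urey's work, but the specific coefficients are illustrative assumptions on my part, not values taken from the text.

```python
# A sketch of carbonate "paleotemperature" arithmetic. The coefficients
# follow the shape of classic calibrations and are illustrative only;
# real studies use carefully calibrated versions.

def paleotemperature_c(delta_carbonate, delta_water=0.0):
    """Estimated growth temperature (degrees Celsius) from the oxygen
    isotope offset (in per mil) between fossil carbonate and seawater."""
    d = delta_carbonate - delta_water
    return 16.5 - 4.3 * d + 0.14 * d ** 2

# Shells enriched in the heavy isotope record colder water:
for offset in (-2.0, 0.0, 2.0):
    print(f"offset {offset:+.1f} per mil -> about {paleotemperature_c(offset):.1f} C")
```

The sign of the relationship is the physically meaningful part: the more oxygen-18 a shell contains relative to the water it grew in, the colder that water was.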
Different rock types raise different questions about the past, of course, or at least allow different questions to be asked, but well-defined approaches for extracting evidence have been worked out by earth scientists for most rock varieties within the three great categories: igneous, sedimentary, and metamorphic. These familiar subdivisions of the rock kingdom are based on mode of formation: igneous rocks such as granite are formed from molten precursors, as James Hutton was one of the first to realize; sedimentary rocks result from the deposition or precipitation of particles, usually from water; and metamorphic rocks arise when any preexisting rock is changed chemically and/or physically, typically when heated or stressed during a process like deep burial or mountain building. Current theories about how the outer part of the Earth formed and has evolved rest on evidence derived mainly from the chemical properties of igneous and metamorphic rocks, which are the primary components of both the continents and the seafloor. But in many ways sedimentary rocks are the most important for decoding the Earth's history.
Why should that be? There are at least two reasons. First, they form at the Earth's surface, mostly in the sea but sometimes (as in the case of sandstones composed of desert sand) in contact with the atmosphere. This means that, potentially, these rocks incorporate information about the Earth's surface environment in the distant past. And second, many sedimentary rocks contain fossils, the primary record of how life on Earth arose and evolved. Without fossils, our understanding of evolution would be rudimentary.
By putting together thousands upon thousands of stories from studies of individual igneous, sedimentary, and metamorphic rocks and rock outcroppings, earth scientists have gradually woven together a history of the Earth. As with most histories, the details become less sharp the farther back one probes. Some of the most ancient evidence is missing entirely, or is difficult to interpret because geological processes operating over the Earth's long history have altered the rocks' characteristics and muddled the clues they contain. Nevertheless the narrative of our planet's evolution as we know it today is a superb scientific achievement. It is also a story in revision, continually updated as new discoveries are made and improvements in analytical capabilities allow new questions to be asked.
But what about chronology? How have earth scientists determined the timescale of this narrative? Events need to be ordered in time if we are to understand their significance; it isn't very helpful to know the temperature of the seawater in which a fossil animal grew if you have no idea when it lived. Ever since Hutton's "no vestige of a beginning, no prospect of an end" (and even before that), earth scientists have sought ways to determine the ages of rocks and the Earth as a whole. The ultimate goal, the development of techniques that could provide the "absolute" age of rocks in years, came within reach only with the discovery of radioactivity near the end of the nineteenth century. We will come to that shortly. But long before radioisotope dating methods were devised, earth scientists had already developed early versions of the geological timescale, placing important events from the Earth's history in a time sequence (see figure 1 for a modern version; if you are not already familiar with the names of geological eons, periods, etc., you may want to refer to this figure repeatedly as you read this book). How did they do this?
As early as the 1660s Nicolas Steno, a Danish anatomist who had an insatiable curiosity about the natural world, realized that rocks at the bottom of a stack of sedimentary layers must be older than those at the top. Steno was living in Italy at the time, and his observations were made while he was examining sedimentary rocks in the hills of Tuscany. His insight was that sedimentary strata, and the fossils they contain, have time significance. It is only relative time significance, to be sure; Steno could say whether a particular layer was older or younger than neighboring layers, but he couldn't determine its actual age. All this may seem obvious now, but at the time it was a breakthrough. By studying inert rock layers, Steno was able to visualize the nature and timing of their formation. Today he is generally regarded as the founder of the field of stratigraphy, the scientific study of sedimentary rock strata.
From Steno's time onward, his simple principle of ordering sedimentary layers in time was used to work out the relative chronology of geological events. This was easy enough to do in local areas where distinctive individual layers could often be traced from one rock outcrop to another. But long-distance correlation was difficult. Was a limestone layer in France the same age as one in England or Sweden, or across the Atlantic in the United States? It was difficult to say. Regional relative timescales could be constructed, but a global one seemed beyond reach.
However, there were clues in the sedimentary rocks that helped resolve this dilemma. Long before Charles Darwin wrote about evolution, earth scientists recognized that life on Earth had changed through time. Wherever they looked, they found the same story. Fossils in the youngest rocks near the top of sedimentary sequences looked similar to living forms, but in lower, older layers, the fossils were often small and quite different from any known plants or animals. And in some places, below (and therefore even older than) the rocks containing the old, unfamiliar fossils were strata completely barren of any sign of animal or plant life.
An English surveyor named William Smith was one of the first to recognize the practical significance of this changing sequence. Surveying was his trade but geology was his passion, and as he traveled around the British Isles in pursuit of his profession he took notes about the local geology and collected fossils. He noticed that the sequence of fossils, the way the assemblages of organisms changed as he proceeded from older to younger rocks, was always the same, even if the rocks themselves looked quite different. Half a century or more before Darwin published his Origin of Species, Smith organized his fossil collection (which he proudly displayed to friends) according to relative age, not in groups of similar-looking organisms as most contemporary collectors did. Although he didn't know it, he was using evolution, as recorded by the fossils, as a way to make correlations among sedimentary rocks formed at the same time but in far-flung localities. The goal of a global relative timescale was a step closer.
Those who followed in the footsteps of Steno and Smith gradually built up the geological timescale until they had filled in most of the subdivisions shown in figure 1, from the Cambrian period to the present. The names they gave to the major subdivisions of this timescale, particularly the names of the geological periods, usually referred to geographical regions where fossil-containing rocks of that particular time were abundant and first described in detail (for example, Jurassic after the Jura Mountains of Switzerland, or Ordovician and Silurian after two ancient tribes that lived in different parts of Wales). All of this was done before the discovery of radioactivity, and there was no real sense of the great span of time represented. And because the relative timescale was based on fossils, it was blank below the base of the Cambrian period. As far as early geologists could tell, rocks older than this did not contain any fossils at all (as we will see, there was life on Earth long before then, but fossils from those earlier times are rare, small, and easy to overlook). The ancient, apparently barren rocks were simply referred to as "Precambrian."
This early relative timescale was in reality a record of the evolution of marine life. Although there were geographical variations in life forms in the past, just as there are today, the general pattern of evolution is clear enough in the fossil record that sedimentary rocks anywhere in the world could be placed in the correct sequence, as long as they contained fossils. Devonian rocks in Europe, for example, contain fossil assemblages that are recognizably similar to those in Devonian rocks from America or Africa. This helped greatly in the construction of the timescale because there is no single locality on Earth where rocks spanning the entire time from the Cambrian period to the present-or even a significant portion of it-occur in a continuous, uninterrupted sequence of sedimentary layers. The timescale had to be constructed bit by bit through detailed examinations of small portions of the geological column (as it is often called) in different places, coupled with correlation between localities where there was obvious overlap. This might seem at first to be an ad hoc approach, but it has been extremely successful, as the timescale in figure 1 attests. So complete is our understanding of evolution that an experienced field geologist can walk up to an outcrop of sedimentary strata anywhere in the world and, if he can find a few fossils, place it quite precisely in the geological timescale.
All of this has been accomplished in spite of the fact that only a very small fraction of all species that have ever existed on Earth occur as fossils. It is simply not very easy to become a fossil. Most estimates suggest that fewer than 1 percent of species have been preserved in rocks, and it is easy to understand why. Even in the most favorable environments (a quiet sea bottom with slowly accumulating muddy sediments, for example), most dead organisms are consumed by scavengers or simply rot and dissolve away before they can be preserved. Usually only the hard parts (shells, bones, or teeth) are preserved, and even then it may be only a fragment. Adding to the challenge is the fact that it is sometimes difficult to deduce the whole from the parts, especially for unfamiliar organisms. Sharks' teeth are relatively common as fossils, but for a long time, even though sharks themselves were well known, nobody knew what the fossils were because they were isolated objects, not obviously associated with anything else. And even if complete fossils are preserved, the sedimentary rocks that contain them may later be destroyed by erosion or metamorphism. Darwin was one of the many scientists concerned about the resulting gaps in the fossil record.
Nonetheless, even with the limited available sample of fossil species, sedimentary rocks have yielded up in amazing detail the story of how life on Earth has changed. The early geologists placed the boundaries between eras, between periods, and between even finer subdivisions of the timescale at places in the geological column where they observed rapid changes in the types of fossils preserved. The names of the three eras shown in figure 1 (Paleozoic, Mesozoic, and Cenozoic) are derived from Greek for "ancient life," "middle life," and "recent life"; the boundaries between them mark abrupt and truly radical changes in fossil species, with the preserved life forms becoming increasingly familiar toward the present. The boundaries can be readily identified everywhere on Earth where rocks of these ages occur, and we now know that they record short periods of widespread extinctions, when large fractions of the organisms inhabiting the oceans were wiped out through catastrophic environmental disruption. The extinctions were followed relatively quickly (in geological terms) by evolution and radiation of new life forms. Less drastic but still major changes in the nature of marine life mark the boundaries between the geological periods.
The timing of these changes was, for a long time, elusive. As the nineteenth century drew to a close, scientists of all stripes were working on ways to measure geological time. Physicists wanted to know the age of the Earth; geologists wanted to know the ages of individual rocks and the duration of different parts of the timescale. Many ingenious approaches were tried, but most of them rested on questionable assumptions and all had very large associated uncertainties. The most extreme estimates of the Earth's age were in the range of a few tens of millions of years up to perhaps 100 million years. There was simply no reliable way to know how much time was represented by Precambrian rocks, or to work out anything about the rate of evolution.
That all changed with the discovery of radioactivity in 1896. Once it was understood that radioactive isotopes decay at a constant rate, their potential for geological dating became clear. One of the early pioneers of radioactivity research, Ernest Rutherford, was the first to make that leap. He was a physicist and an experimentalist, and he asked his geologist colleagues to give him rocks they thought were very old. From measurements of the radioactive isotopes in these samples, he calculated that they were about 500 million years old. This was a startling result, and it shook up the scientific establishment. If Rutherford's result was accurate, it meant that the Earth as a whole was even older than 500 million years, and therefore much older than was generally thought.
By today's standards, Rutherford's experiments were crude. Geochronology, the science of dating rocks, has made huge advances over the century or so since he made his initial measurements. The approach is still the same, based on the knowledge that radioactive isotopes decay at a known, constant rate. But today's analytical instruments are capable of making very precise measurements on small amounts of material, and the dates that result are also very precise. All of the boundary ages shown in figure 1 are based on radiometric dating (as the process is usually called), and the same techniques have shown that the Earth is between 4.5 and 4.6 billion years old. Time is such an important part of decoding the past that it is worth spending a few pages examining just how radiometric dating works.
The first thing to say is that geological time is immense. Four and a half billion years is a very long time, hard to comprehend from a human viewpoint. In this era of billionaires and trillion-dollar bailouts, the number itself is not unusual, but its enormity becomes apparent when it is put into perspective. Our species, Homo sapiens, has been around for about 200,000 years or perhaps a bit less, a very long time by most standards. But that is a minuscule fraction, just a few millionths, of the Earth's age. A commonly used analogy is a hypothetical three-hour movie depicting the Earth's history. Three hours is very long for a movie, but even so Homo sapiens would appear only in the last half second.
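The movie analogy is easy to check with a few lines of arithmetic, using the 4.5-billion-year and 200,000-year figures from the text:

```python
# Scaling Earth history onto a three-hour movie.
earth_age_years = 4.5e9        # age of the Earth
human_span_years = 2.0e5       # approximate age of Homo sapiens
movie_seconds = 3 * 60 * 60    # three hours, in seconds

fraction = human_span_years / earth_age_years
print(f"Our species spans {fraction:.6f} of Earth history,")
print(f"which is the final {fraction * movie_seconds:.2f} seconds of the movie.")
```

The result comes out just under half a second, matching the "last half second" of the analogy.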
One of the implications of the enormous span of geological time is that even though many geological processes seem to operate at imperceptibly slow speeds, they can produce huge changes. Tectonic plates, as we will see in a later chapter, move at speeds of only a few inches per year, yet multiply that by hundreds of millions of years and whole new ocean basins can open up and then disappear again. Over similar timescales great mountain ranges can be thrust up, then worn down to a flat plain by erosion.
But to return to the details of the dating methods used to measure these great swaths of geological time: fortunately, there are many elements in the periodic table that have naturally occurring radioactive isotopes, and many natural materials contain small amounts of one or more of them. This means that in principle, and with judicious sample selection, almost anything can be dated. However, each of the dating procedures that has been developed has its own limitations. For example, radiocarbon dating, which is probably the most widely known of all the geochronological methods, can only be used to date organic material that was once part of a living plant or animal, and is also restricted to material younger than about fifty thousand years. This limited time span results from the fact that the method is based on the radioactive decay of the isotope carbon-14, which decays away very quickly. (Isotopes are labeled according to the number of neutrons plus protons in their nucleus-fourteen in the case of carbon-14. In scientific writing this is usually shown as a superscript to the chemical symbol, i.e., 14C, but here I'll use the longer and easier to read format, i.e., carbon-14.)
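The fifty-thousand-year ceiling follows directly from the decay arithmetic. Here is a short sketch; the carbon-14 half-life of about 5,730 years is the standard figure, though it is not given in the text:

```python
import math

C14_HALF_LIFE = 5730.0  # years; standard value for carbon-14

def fraction_remaining(age_years, half_life=C14_HALF_LIFE):
    """Fraction of an original radioactive isotope left after age_years,
    from the decay law N(t) = N0 * exp(-lambda * t), lambda = ln(2)/half-life."""
    decay_constant = math.log(2) / half_life
    return math.exp(-decay_constant * age_years)

for age in (5730, 11460, 50000):
    print(f"after {age:>6} years, {fraction_remaining(age):.4%} of the carbon-14 remains")
```

After fifty thousand years less than a quarter of one percent of the original carbon-14 survives, which is why older samples cannot be dated reliably by this method.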
The radiometric dating methods most commonly used for rocks employ isotopes of elements that are relatively abundant and familiar, like potassium and uranium, and also some that are more exotic, such as rubidium, rhenium, and samarium. Each of the methods has its own advantages and disadvantages, and often the circumstances-things like the geological setting of the sample to be dated-dictate which method is most likely to be useful. The most commonly used technique for ancient rocks, one that we will encounter repeatedly in this book, is based on the decay of uranium to isotopes of lead. One of the reasons uranium-lead dating is so useful is that a wide range of rock types contain a mineral that can be easily extracted for analysis and is both naturally rich in uranium and very resistant to alteration: the mineral zircon.
As you might guess from its name, zircon is rich in zirconium, and from its chemical formula, ZrSiO4, it is apparent that silicon and oxygen are its other major constituents. Uranium is present only in trace quantities but still at much higher concentrations than in most other minerals, because uranium atoms easily take the place of zirconium in the mineral's structure. Zircon is a hard and dense mineral that is usually reddish in color. Small grains of it are ubiquitous in igneous rocks. Rarer large crystals are sometimes sold as semiprecious gemstones, but for geologists, zircon's real value lies in its usefulness for dating. It is so resistant to alteration that even when rocks are buried, heated, and undergo significant metamorphism, the zircon crystals often remain unscathed-and retain the age of the original rock. When rocks like granite are weathered at the Earth's surface, many of the minerals they contain dissolve away or are turned to clay, but zircon crystals survive. Because of this, beach sands invariably contain grains of zircon.
Alongside uranium-lead dating, a second widely used radiometric dating technique that we will encounter involves the decay of potassium to an isotope of the gas argon. There is no potassium-rich equivalent of the mineral zircon, but because potassium is a relatively common element at the Earth's surface, many common minerals-for example, certain types of mica and feldspar-can be dated using this technique. For various technical reasons, potassium-argon dating is especially useful for the younger parts of the geological record.
Most of the dates for the modern geological timescale were measured using either uranium-lead or potassium-argon dating. In cases where the right samples were available, both methods were used; such cross-checking using independent techniques ensures that the results are accurate. But there is an issue concerning the ages shown in figure 1 that needs to be addressed: there are significant difficulties in applying both the potassium-argon and uranium-lead dating methods to sedimentary rocks, and as we saw earlier, fossils in sedimentary rocks are the basis of the timescale. How, then, were these ages obtained?
The problem becomes clear if we consider how sedimentary rocks are formed. Many of the mineral grains that comprise them were originally part of other rocks on the continents; they were eroded from their parent rocks, carried to the sea, and deposited. Dating these minerals would give the ages of the parent rocks, not the sedimentary rocks themselves. Furthermore, the minerals in sedimentary rocks that are directly precipitated from seawater (and thus would be appropriate for dating these rocks) don't contain enough uranium or potassium or other radioactive isotopes to make them useful for age determinations. Calcium carbonate, a widespread component of ocean sediments, falls into this category. This is one instance in which Mother Nature has not been very kind to earth scientists.
But obviously the problem has been circumvented; accurate radiometric dates do exist for many sedimentary rocks. What was the solution? The answer has to do with the fact that the Earth is a very active planet, with volcanoes spewing out volcanic ash almost continuously. This was brought home with a jolt early in 2010 when an ash cloud from the Eyjafjallajökull volcano in Iceland shut down air travel across Europe. (Pronouncing the name of this volcano may seem daunting to most native English speakers-but I understand that "I forgot the yogurt" is not a bad approximation.) The Eyjafjallajökull eruption was tiny by global standards, but it illustrated how ash from a single volcano can spread over a wide region. The very largest eruptions disperse ash globally, and when this material settles to the seafloor it forms layers that are easily recognizable. The ash layers are markers of essentially instantaneous geological events, and are therefore ideal candidates for dating. Fortunately, they often contain zircon crystals, or minerals that can be dated using the potassium-argon method.
Volcanic ash layers are so abundant that they have become by far the most important material used for dating sedimentary rocks. The movement of the Earth's plates causes the most explosive volcanoes-the ones that produce the most ash-to occur mainly along the margins of ocean basins. Think of the volcanoes of Indonesia, or the Andes. Even if individual volcanoes erupt only sporadically, sediments accumulating in these regions are laced with ash layers. If it is important to know the age of a particular level in the sediments-say, a level that marks a geological boundary-and there doesn't happen to be a convenient ash layer in exactly the right position, it is usually possible to interpolate between closely spaced layers.
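The interpolation just described is simple in principle: if sediment accumulated at a roughly constant rate between two dated ash layers, the age of any horizon between them follows from its depth. The sketch below uses made-up depths and ages purely for illustration.

```python
# Illustrative sketch (hypothetical numbers, not from the book) of
# interpolating an age between two dated ash layers, assuming a
# roughly constant sedimentation rate between them.

def interpolate_age(depth, depth_upper, age_upper, depth_lower, age_lower):
    """Linearly interpolate an age from two bracketing ash layers.
    Depths are measured downward from the top of the section;
    ages are in millions of years (deeper = older)."""
    frac = (depth - depth_upper) / (depth_lower - depth_upper)
    return age_upper + frac * (age_lower - age_upper)

# Hypothetical section: ash layers at 10 m (250.0 Ma) and 14 m (252.0 Ma),
# with a boundary of interest at 12.8 m depth:
print(round(interpolate_age(12.8, 10.0, 250.0, 14.0, 252.0), 1))  # 251.4
```

The assumption of a constant sedimentation rate is the weak point, which is why geochronologists prefer ash layers spaced as closely as possible around the horizon of interest.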
A case in point is a sequence of limestone beds in southern China that extends across the boundary between the Permian and Triassic periods. Fossils show that this boundary-which is also the divide between the Paleozoic and Mesozoic eras (figure 1)-marks the most extensive mass extinction event in the Earth's history, a time when more than 90 percent of the species living in the oceans quite suddenly disappeared. Accurate dating was a high priority, but the limestone couldn't be dated directly. Fortunately, though, it was deposited in a volcanically active region and contains numerous interbedded ash layers. In the 1990s a team of geochronologists from the Massachusetts Institute of Technology (MIT) sampled ash layers from above and below the boundary, painstakingly separated out the small zircon crystals they contained, and measured the zircon ages using uranium-lead dating. The results, shown in figure 2, reveal that a segment of the sediment sequence several yards thick and spanning the boundary was deposited over a time period of just two million years. The dates for the ash layers also pin down the age of the boundary precisely, to 251.4 million years. Perhaps equally important, by dating closely spaced ash layers these researchers were able to conclude that the great pulse of extinctions occurred over a short time span, less than a million years.
I have not yet said anything about the Precambrian part of the timescale. Radiometric dating has uncovered its true extent; the scientists who worked out the early versions of the timescale from fossils would be astounded to know that it comprises more than 85 percent of the Earth's history. Lacking fossils, rocks from the Precambrian can only be placed in a time context by direct radiometric dating. Figure 1 shows only a few major divisions of this part of the timescale: the Hadean, the Archean, and the Proterozoic. Ages for the boundaries between these subdivisions are partly arbitrary, and partly based on recognized events that have affected the Earth globally. In spite of the antiquity of the Precambrian rocks, they have revealed a rich tapestry of sometimes surprising information, as we will see in later chapters. They depict a world that for billions of years was very different from the one we know today.
This brief introduction is meant to provide an overview of how earth scientists use different types of information stored in rocks to decipher events from the Earth's past and to work out their chronology. That effort is ongoing, and new discoveries continually sharpen or modify different aspects of the story. In recent years special emphasis has been put on identifying times and events in the past that have relevance to what may occur in the future. This has become particularly important for issues that will affect the near-term future of human societies, such as global warming. Before we turn to such concerns, however, the following chapter goes back to the very beginning, 4.5 billion years ago, to explore our planet's origin and its very earliest days. We have no earthly rocks left over from that time; any that once existed have long since been destroyed by geological processes. What we do have, however, are rocks from space. Like Earth rocks, they too have stories to tell.