Cats. The very mention of them has the power to generate innumerable lazy hits on a blog post. If one were to do an anthropological study of cats using only the Internet as source material, one might be led to believe that we worship them as deities.
We wouldn’t be the first culture, either. The ancient Egyptians held them in pretty high regard. Their goddess of justice and execution, Mafdet, was a feline-headed creature who protected against snakes and scorpions. Bastet, another feline goddess, represented protection, fertility, and motherhood.
Despite the high regard humans have had for cats since at least the dawn of written history, we know very little about how or when they became domesticated. We are pretty sure that the housecat is descended from the African wildcat (Felis silvestris lybica), and people generally assume the process involved a mutually beneficial relationship between farmers and felines: the cats protected the farmers’ grain from vermin, and the grain provided a steady supply of vermin for the cats to eat.
But it is really hard to figure out when that would have happened, and even harder to figure out whether that general picture, which makes a great deal of intuitive sense, is accurate. A recent study on this subject, published by a team of archeologists, got a great deal of press. They found a small Chinese farming village, dated to about 5,300 years ago, with cat and rodent fossils (among others) discovered at the site.
The gist of the study was that chemical analysis of the animal bones found at the site revealed that the rodents ate grain, and that the cats ate those rodents or the grain products directly—suggesting a mutually tolerant relationship between human and cat. Other wild animals found at the site, like deer and hares, didn’t seem to eat any grain, suggesting their food web was independent of any human influence. It’s a fascinating and impressive result, and it seems to be consistent with the generally accepted theory of cat domestication.
The longest conveyor belt in the world runs 61 miles from the hostile interior of Moroccan-occupied Western Sahara to the port city of El-Aaiún. Open to gusty desert winds in many places, the belt strews its precious white cargo across the dusty brown desert, marking the Earth so profoundly that this massive machine’s outline can be seen from space.
Between around 100 and 55 million years ago, marine waters of the nascent and ever-widening Atlantic Ocean transgressed and regressed over this now dry land. These waters deposited thick muddy sediments containing the decaying tissue, bones, shells, and excrement of dead marine life that had collected and concentrated on the ocean floor over millions of years. As a result, this thick oozing mud, a complex mélange of fetid material, was rich in phosphorus.
Without phosphorus, life itself is not possible. It exists in all living things – in cells, in bones, indeed, even in DNA. For that reason, the mud that formed the hills of Western Sahara so many millions of years ago was full of phosphorus. Now, millions of years later, it is that same phosphorus that we extract from the Earth and load onto a conveyor belt.
Buried under thousands of feet of hard, ancient ice lies the solid earth of the Antarctic continent. For some 34 million years, vast glacial plains have ebbed and flowed over this rocky land. But the initiation of Antarctic glaciation—the point in time when conditions became right for snowfall to exceed snowmelt year after year—began suddenly and enigmatically.
The growth of glaciers on Antarctica marks the end of the geologic epoch known as the Eocene—an epoch actually known for some of the hottest global temperatures in Earth’s geologically recent history. High CO2, punctuated by extreme bursts of even more CO2, caused significant warming through the early part of the Eocene’s 22-million-year span. Fossil records show that the Antarctic continent was not only ice free then, but that it supported rainforests and crocodiles!
So the transition from a lush tropical landscape to a barren ice covered wasteland is a mystery that scientists have yet to fully explain. Cooling began gradually around the middle Eocene, and it made a pronounced and sudden shift at the Eocene’s conclusion 34 million years ago.
At that time, CO2 levels plummeted. In a geological instant—400,000 years—Antarctica was covered in ice. Some sort of threshold must have been passed, geologists reason. Cooling can beget more cooling because ice reflects incoming heat from the Sun back into space. This undoubtedly happened. But something else had to have occurred to cause the drop in CO2 that allowed the world to become cool enough to form glaciers in the first place.
No complete rocks have survived to tell of the formative years following Earth’s formation some 4.56 billion years ago. The material that existed at that time has been broken apart by the power of wind and water. It has melted and metamorphosed under the immense pressure and heat deep within Earth’s interior. It has been recycled back onto the surface. It has existed at every stage of the rock cycle many, many times over.
Despite the billions of years of upheaval Earth’s earliest material has endured, tiny pieces of some of these rocks still exist. Microscopic grains of a mineral, once part of rocks that witnessed Earth as it existed just a couple of hundred million years after its formation, survive to this day.
The oldest known terrestrial material is a single grain of a mineral called zircon, found in the Jack Hills formation in Western Australia. It is 4.4 billion years old. The grain was once part of a rock composed of the broken and eroded bits of other ancient material that has itself been subject to billions of years of geologic reworking.
Zircon is an extremely rugged mineral made up of silicon, oxygen, and the obscure element zirconium. Its tenacity in the face of time and its ability to provide scientists with enough information to figure out when it formed are among the many reasons it excites geologists.
Up until around 500 million years ago, the continents of Earth were practically lifeless, harboring – at most – slimy mats of bacteria on rocky, barren wastelands. Around this time plants began to creep out of the oceans, gradually developing adaptations that allowed them to expand further and further inland over millions and millions of years. But there is a dark side to this story: the increasing success of plants on land may have contributed to one of the largest sets of extinctions known to the fossil record.
Plants colonized land over a period as long as tens to hundreds of millions of years. But there were a number of evolutionary advances that brought about swift change. Each advance allowed plants to either expand to new habitats or grow larger. And with each advance, the roots of these pioneering plants broke more and more earth apart. To Tom Algeo, a geologist at the University of Cincinnati, this process may have created a chain of events that removed massive quantities of oxygen from the ocean.
Although the first land plants evolved around 500 million years ago, they remained close to the water’s edge and did not grow very large for around 100 million years. But these early plants paved the way for the future success of larger plants – a success owed largely to lignin, the tissue that gives plants structure and support.
Author’s note: This post is the first in a series of great Earth history moments. Stay tuned for a new post every other week.
Around 6 million years ago, the Mediterranean Sea became separated from the Atlantic. Cut off from the world’s oceans, it began to evaporate. By 5.3 million years ago, there was literally no sea left. A thousand years later, it was refilled in a geologic instant.
A number of discoveries led to the conclusion that the Mediterranean dried out completely sometime in the past. The first came in the 1960s, when seismic studies of the floor of the Mediterranean revealed a unique layer – dubbed the M reflector – across the whole basin. Scientists interpreted it to be a large layer of salt distributed evenly across the seafloor.
Later, in 1970, a leg of the Deep Sea Drilling Project cored deep into the Mediterranean seabed and found what the seismic data predicted: a hard layer of evaporites – rocks composed of salts.
The only way to get evaporite rocks at the base of a sea is to evaporate water until it becomes so concentrated with salts that they can no longer be dissolved. This forces them to precipitate into a solid form.
Just as enigmatic as the salt layer was a discovery made around this time by engineers mapping the base of the Nile River in preparation for the construction of the Aswan Dam: carved deep beneath the silty floor of the Nile was a canyon whose ancient base lay well below sea level.
The only way for a canyon to be carved into bedrock is for a river to flow through it. But a river won’t cut lower than sea level. This deep canyon meant that Mediterranean sea level must have been dramatically lower in the past.
In 1972, Kenneth Hsu, the primary investigator on the Deep Sea Drilling leg that cored the Mediterranean, authored a paper in Nature concluding that the sea must have evaporated nearly completely to produce such an anomalous layer of evaporite minerals and to have cut canyons so deep. In the paper he admitted it was a “preposterous idea,” but stated that no other explanation presented itself.
The rainforests of Madagascar highlight, with great clarity, the power the physical environment exerts on evolution. As a study abroad student in the fall of 2006, I was researching the sleep habits of the brown mouse lemur in Ranomafana National Park, a protected tract of land in the high rain-forested mountains of Madagascar’s east coast.
During the day, I bushwhacked through this dense rainforest, attempting to locate two or three of these nocturnal mouse lemurs, which had been fitted with tracking collars, as they slept. In the evening, I waited for the lemurs to wake up so that I could record the size and consistency of their sleeping groups.
One day, as the sun was setting on the bamboo, ferns, and mossy trees of the forest, I watched as multiple lemurs suddenly emerged and attempted to rouse the female lemur I was tracking from her sleep. These lemurs, all male, were attempting to mate with my study subject.
Female brown mouse lemurs, and indeed the females of many lemur species in Madagascar, are receptive to mating for only a very short period each year. To make the most of this short mating season, the males, single-mindedly focused on one goal, spend the winter months growing testicles that end up being a quarter of their entire body mass. Little wonder, then, given the males’ months of stored hormonal energy, that there was such significant interest in my study subject that day.
Climate science is an extremely complicated discipline. Climate change skeptics and deniers, I believe, thrive on this complexity. They highlight what is not known or not agreed upon to suggest that the discipline as a whole is flawed. The best way to combat such an argument is with simplicity.
In that light, I present a simple, four-point argument demonstrating the reality of anthropogenic global warming.
Carbon Dioxide Causes Warming
The central mechanism driving anthropogenic climate change is the combustion of fossil fuels. Fossil fuels, the chemicals we use to heat our houses and move our cars, are compounds formed when ancient organic material, predominantly the remains of algae, is buried and cooked at a high temperature and pressure for millions of years. The result is a set of carbon-based chemicals that release a lot of energy, and form carbon dioxide (CO2), when burned.
This CO2, when released into the atmosphere, traps heat by blocking the escape of Earth’s radiation into space. (Anything that has a temperature, Earth included, produces radiation.) Known as the greenhouse effect, this is not a new or controversial idea. In 1861, John Tyndall, a British professor of natural philosophy, gave a lecture titled “On the Absorption and Radiation of Heat by Gases and Vapours, and on the Physical Connexion of Radiation, Absorption and Conduction.” Tyndall demonstrated conclusively that CO2, among other gases, absorbs long-wave radiation – the same type that Earth emits to space. His experiment was simple. Tyndall produced radiation with a Bunsen burner, knowing that the heat would emit a full spectrum of wavelengths, including long-wave radiation. He then measured those wavelengths after passing them through different gases. Because not all wavelengths traveled through the CO2, Tyndall concluded that the CO2 must be absorbing some of the heat. This simple experiment has huge implications for our planet.
Tyndall’s work greatly influenced a Swedish physicist named Svante Arrhenius. In 1896, Arrhenius published a paper titled “On the Influence of Carbonic Acid in the Air upon the Temperature of the Ground.” (Carbonic acid was what carbon dioxide was called at the time.) Arrhenius, in essence, took Tyndall’s work out of the lab and applied the concept to the real world. Instead of a Bunsen burner, he used observations of infrared radiation from the moon. Because he knew that the moon, having no atmosphere, should transmit all of its long-wave radiation to Earth, he was able to calculate the effect our atmosphere had on it by documenting which wavelengths didn’t make it through. For each lunar observation, he compared the data with atmospheric conditions (humidity and CO2 levels) to see what effect they had on the radiation that reached Earth. By doing this he determined that with a rise in CO2 came a “nearly arithmetic” rise in temperature. Using his calculations he determined that a doubling of atmospheric CO2 would result in a 5ºC temperature rise. Even with the advent of massive computer models and high-tech lab equipment, this value remains in the same ballpark as modern estimates.
Both Tyndall and Arrhenius speculated that CO2 has played a role in controlling the Ice Ages. Arrhenius, back in 1896, even predicted that human fossil fuel use might result in future global warming.
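Arrhenius’s relationship survives in modern climate science in logarithmic form: each doubling of CO2 adds roughly the same amount of warming. A minimal sketch, using Arrhenius’s own ~5ºC-per-doubling figure purely for illustration (modern estimates are lower):

```python
import math

def warming_from_co2(c_new_ppm, c_ref_ppm, sensitivity_per_doubling):
    """Warming implied by a CO2 change, assuming the response is
    logarithmic: every doubling of CO2 adds the same temperature bump."""
    doublings = math.log(c_new_ppm / c_ref_ppm, 2)
    return sensitivity_per_doubling * doublings

# A full doubling recovers the sensitivity itself:
print(warming_from_co2(560, 280, 5.0))            # -> 5.0
# A rise from ~315 to ~394 ppm is about a third of a doubling:
print(round(warming_from_co2(394, 315, 5.0), 2))  # -> 1.61
```

The only inputs are the two concentrations and the assumed per-doubling sensitivity; everything else follows from the logarithm.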
Carbon Dioxide Concentrations in the Atmosphere are Increasing
This is the easiest point to make. Scientists can measure the amount of CO2 in the atmosphere. It is increasing.
The best evidence is the famous “Keeling Curve.” In 1958, Charles Keeling, a geochemist at the Scripps Institution of Oceanography, began making continuous measurements of CO2 near the summit of Mauna Loa on the Big Island of Hawaii. Because the station is far away from major urban centers, and because it sits at high altitude, the location is ideal for making CO2 measurements that are representative of the whole atmosphere. His measurements, which continue to this day, show a progressive rise in CO2 from around 315 parts per million in 1958 to about 394 ppm as of September 2012.
The Increased Carbon Dioxide is Coming from Human Activity
This is the heart of the controversy, but this is just as easy to demonstrate as the previous two points. The CO2 that is associated with the recent increase has a chemical signature that unequivocally ties it to human activity.
CO2 can come from a variety of sources. CO2 in the ocean is constantly exchanged with CO2 in the atmosphere; there is CO2 in the mantle, which can be released through volcanoes; and wildfires release CO2 the same way that burning fossil fuels does. By looking at the carbon contained in CO2, scientists can distinguish between each of these sources.
Fossil fuels come from the cooked remains of ancient life. Therefore, the carbon in this CO2 must be derived from the remains of living things that existed a very long time ago. Both the age and the source of carbon can be inferred using chemical entities known as isotopes.
Atoms of the same element can have differing masses, owing to differences in the number of neutrons in the nucleus. These variants are called isotopes, and each isotope behaves a bit differently. When a plant takes in CO2 from the atmosphere through photosynthesis, it prefers carbon with a mass of 12 to carbon with a mass of 13. Therefore, anything that photosynthesizes, or anything that eats something produced by photosynthesis (essentially all life on this planet), contains less carbon-13 than is typically found in the atmosphere. This is the signature of carbon that comes from living things. Life, both alive and transformed into fossil fuels, represents a massive reservoir of carbon-12. If this kind of carbon were released into the atmosphere, the concentration of carbon-13 in the atmosphere would be reduced by dilution with carbon-12.
Carbon can also have an isotope with a mass of 14. This type of carbon is created continuously in the atmosphere by cosmic rays, so there is a constant source of carbon-14 at the surface of Earth. Unlike carbon-13, carbon-14 is radioactive: it cannot remain carbon-14 forever, and it slowly decays away at a known rate. This property allows scientists to use carbon-14 to date once-living things, but anything older than approximately 60,000 years cannot be dated, since it will have virtually no carbon-14 left. A complete lack of carbon-14 is the signature of ancient carbon. If enough of it is released into the atmosphere, it will decrease the relative concentration of carbon-14 in the atmosphere by diluting it with carbon-14-free CO2.
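The “known rate” here is carbon-14’s half-life of about 5,730 years. A quick back-of-the-envelope sketch (standard exponential decay, nothing specific to any one dating method) shows why ~60,000 years is effectively the limit, and why fossil fuels, millions of years old, contain none at all:

```python
def fraction_remaining(age_years, half_life_years=5730.0):
    """Fraction of the original carbon-14 left after a given time
    (standard exponential decay, written in half-life form)."""
    return 0.5 ** (age_years / half_life_years)

print(fraction_remaining(5730))   # one half-life -> 0.5
print(fraction_remaining(60000))  # ~0.0007: only ~0.07% left, near detection limits
print(fraction_remaining(60e6))   # fossil-fuel ages: zero for all practical purposes
```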
The combustion of fossil fuels, then, should reduce the concentration of both carbon-13 and carbon-14 in the atmosphere.
Both are happening. They are known collectively as the Suess effect. The concentration of carbon-14 and carbon-13 in the atmosphere is declining, and it is declining at the same time that CO2 is increasing. This means that the CO2 increase we are seeing must come from ancient, organic carbon.
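The dilution argument is just a mass balance. A toy sketch with made-up, normalized numbers (not real atmospheric inventories) shows how adding isotopically light CO2 drags the whole atmosphere’s ratio down:

```python
def mix_ratio(mass_air, ratio_air, mass_added, ratio_added):
    """Isotope ratio of the atmosphere after mixing in CO2 with a
    different ratio: a simple two-component mass balance."""
    total_heavy = mass_air * ratio_air + mass_added * ratio_added
    return total_heavy / (mass_air + mass_added)

# Illustrative only: 100 units of atmospheric CO2 at a normalized
# carbon-13 ratio of 1.00, plus 10 units of fossil CO2 depleted to 0.98:
print(round(mix_ratio(100, 1.00, 10, 0.98), 4))  # -> 0.9982: the ratio falls
```

The same arithmetic applies to carbon-14, where the added fossil CO2 has a ratio of exactly zero.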
No other source of CO2 could have this signature. Wildfires can’t because the carbon being burned is young; it has plenty of carbon-14. Carbon from the ocean has the same problem – too young, too much carbon-14. CO2 from volcanoes does not work either. This carbon does not come from once living matter, so it has plenty of carbon-13.
Carbon derived from the remains of ancient life buried deep inside our Earth is the only plausible source. The only way to release a great deal of it at once is to dig it up and burn it, as humans are doing today.
Average Global Temperatures Are Rising
Just as with CO2 concentrations, scientists are able to measure air temperature – in fact, the technology has been around for quite a while. The real challenge is getting past the variability, the result of things like El Niño and other short-term weather patterns, to figure out what the long-term global temperature trend is. There are plenty of studies showing that the overall trend is warming, but I will highlight a study by Richard Muller.
Richard Muller was an outspoken climate change skeptic, and the Koch brothers, prominent right-wing political figures who deny climate change, funded his research. His team gathered as much data as possible, corrected for all known biases – the fact that temperatures are generally higher in cities, for example – and plotted average temperature since 1750. Temperatures are rising – a full degree since 1900. A degree may not sound like much, but a rise of 2 degrees would result in ~3 meters of sea level rise, according to a collection of recent estimates. Most of New York City would be underwater.
Carbon dioxide in the atmosphere can warm our planet. This has been taken as fact for well over a century – well before any widespread scientific conspiracy could have been hatched. Carbon dioxide is increasing – it’s really hard to argue with measurements. The increase in carbon dioxide is changing the chemical composition of the atmosphere in a way only fossil fuels can. And the planet is warming.
The Adirondacks are something of a paradox. Made from some of the oldest rocks on Earth, they are one of the youngest mountain ranges in existence. Pushing their way through the younger rocks of the Appalachians, this jagged, deformed mess of ancient rock, once trapped deep in the crust, has been rising for the past 15-20 million years. And nobody really knows why.
Over a billion years ago, standing high above the lifeless lowlands of the supercontinent Rodinia, a massive mountain range known as the Grenville Orogen extended from coast to coast – one of the largest and longest lived ranges our planet has ever known. Formed when prehistoric continents collided to form a single and massive landmass, its rocks have since fallen deep into fractured valleys and risen once more. They have formed the floors of ancient oceans, and they have withstood the extreme heat of deep burial. These are the rocks that are forcing their way to the surface as the Adirondacks. This complex history makes them unlike any other mountain range – a lesson I learned the hard way.
As a young and somewhat naive hiker in my freshman year at Skidmore College, I had my heart set on climbing as many of the Adirondack ‘high peaks’ as possible – those peaks higher than 4,000 feet. I picked up a map of the high peaks region and quickly identified what I felt was a surefire way to conquer as many mountains in one trip as possible: I would traverse the Great Range in two days, allowing myself nine peaks in one trip. I was familiar with the ridges of the White Mountains in nearby New Hampshire, and felt assured that the experience would be similar. There I was able to climb to the highest point of a ridge and slowly descend it, making only slight climbs to reach the other peaks as I moved forward.
The trip was a categorical failure. Two peaks in, my hiking buddy and I were woefully behind schedule and dangerously exhausted. After finishing only the second mountain of what was supposed to be many more that day, I was both dehydrated and incoherent from the effects of mild hypothermia. (Though the trip was in late May, there were still three feet of snow on the ground.) Slurring my words, I explained to my friend that I thought we might have set our sights a bit too high.
Unfortunately we were too high to set up camp – it would have been both illegal and too cold. Returning to camp was not easy, though. There was a mountain on either side of us, requiring a significant hike before we could get to a lower elevation. Forced to climb, we ascended both Basin and Saddleback mountains, two of the most challenging hikes in the Adirondacks. One of the most terrifying and beautiful sights I have ever seen as a hiker was the sun setting while we were on top of that final mountain, miles from any safe campsite. Beaten by the mountains, we did make it to camp that night, but we ended our trip a day early.
We were entirely unprepared for the conditions, and had no business hiking at that time of year. These issues aside, though, there was a more central problem at hand. The Adirondacks are not like the White Mountains, nor are they like any other mountain range on our planet. The ridges that characterize so many mountain ranges, formed by the fault lines of colliding land, do not exist in the Adirondacks. To tackle all the peaks of the Great Range, a hiker must ascend and descend each peak nearly in full, finding no benefit in a raised line of topography.
This difference is rooted in how mountains form in the first place. The White Mountains, for example, are part of the larger Appalachian mountain range. (The Adirondacks are technically considered part of the Appalachians as well, but only because they are close to the other ranges.) The formation of the Appalachians is typical of most mountain ranges. These mountains trace their origins to a time many hundreds of millions of years after the great Grenville Mountains. Rodinia, the supercontinent that held the Grenville Orogen, began to rift apart about 800 million years ago. The process that destroyed those mountains created the Iapetus Ocean – named after the Greek Titan Iapetus, father of Atlas.
Around 500 million years ago, the Iapetus Ocean began to close. As it closed, landmasses within the Iapetus crashed into the eastern side of what is now North America. As seafloor was forced under North America, volcanoes formed, erupting through land and forming islands that eventually crashed into the continent as well. This process continued for many millions of years, until 250 million years ago, when the supercontinent Pangea was formed. As this myriad of landmasses hit the North American continent, they formed long ridges – reminiscent of the ridges of a car’s hood after a head-on crash. They are beautifully clear if you get a chance to fly over them, and they make for easy hiking, as peaks connected by a ridge require less descent and ascent.
The Adirondacks, however, are like a giant wart pushing its way through the beautifully ordered structure of the Appalachians. A giant dome, the Adirondacks look misplaced on even the simplest of maps. The reason for this is unclear. What is known is that for the past 15-20 million years the crust under the Adirondacks has been rising, forcing the younger, more typical Appalachian mountains above to erode away. As they erode and the crust continues to rise, the deepest, oldest rocks are exposed – the Grenville ones. Because these rocks have been subject to a billion years of torture, they have a jagged and disordered topography, leaving none of the typical ridges I was used to hiking.
How fast they are rising is the subject of much debate. Some say they are rising nearly as fast as the Himalayas, thought to be the fastest-rising mountain range today. Others say they may not be rising much at all. Even more enigmatic is why they are rising. “Both the existence of current uplift and its modus operandi remain a mystery,” states an official 1995 United States Geological Survey report on the Adirondacks. The mystery remains unsolved.
The most popular idea is that there is a hotspot under the Adirondacks, creating a pocket of relatively less dense mantle, which, forced to rise, pushes the crust above, and ultimately the Adirondacks, to the surface. This would explain why the Adirondacks are dome shaped, but the hypothesis is hard to test.
What was not hard to test was how different the Adirondacks are from the other mountain ranges I had climbed. The disconnected peaks of the Adirondacks are a completely different world compared with the ridge-connected peaks of the rest of the Appalachians. Exceedingly beautiful and unique, they remain my favorite mountains of the many I have visited, but they taught me a cruel geologic lesson: know the history of your mountains, as enigmatic as it may be, before you try to conquer them.
Geologists are able to tell you the exact history of the waxing and waning of glaciers over the past five million years because microscopic creatures in the ocean have been unwittingly recording this dance in their shells. Their shells are made from the carbon and oxygen found in seawater. As glaciers form, seawater is removed from the ocean and trapped on land, resulting in subtle changes in the chemistry of the ocean. These changes are recorded in the shells, which create a detailed history as they pile up on the ocean floor.
For decades paleoclimatologists have used the records of seashells to reconstruct either the volume of glacial ice trapped on land or the temperature history of the ocean, providing a beautifully detailed picture of climate over the past 5 million years. These approaches, however, are limited by the fact that the scientist must know the exact chemistry of the water that the shells were formed in to calculate temperature, or the exact temperature at which they formed to calculate the chemistry of the water. This fact has confounded hundreds of studies about the history of our planet. A revolutionary new method has solved that problem. It also has its sights set on topics as diverse as the biology of dinosaurs and the evolution of man.
These methods work because some atoms of the same element have different masses. These variants are known as isotopes. Many isotopes are unstable, meaning they break down into other elements while releasing radiation. Just as important to geologists, though, are the stable ones – atoms that exist for eternity with a fixed number of protons and neutrons. Carbon has two stable isotopes, one with an atomic mass of 12 and one with an atomic mass of 13. Similarly, oxygen has three: masses 16, 17, and 18. In both cases the light isotopes are common, while the heavier ones are exceedingly rare.
Because of these variations in mass, nature treats the heavy isotopes slightly differently than the light ones. When the shells of ocean creatures are formed, they form with a fixed ratio of heavier and lighter isotopes, leaving hints about environmental conditions at that time. These ratios are trapped in carbonate, a molecule that contains one carbon and three oxygen atoms and is the primary building block of seashells (among many other things).
Relative changes in the temperature of the ocean at the time a shell formed can be calculated from these ratios. Carbonate has a slight preference for the heavier oxygen 18, and that preference is strongest in cold water; at warmer temperatures, the extra thermal energy lets oxygen 16 react and form carbonate nearly as readily as oxygen 18. Shells that form at colder temperatures, then, will contain more oxygen 18 than shells that form at warmer temperatures.
Information like this is critical if we wish to understand how our climate system operates, and what changes humanity will face as our planet continues to warm. But actual temperature values (i.e. degrees Celsius) would be even more useful.
Using oxygen isotope ratios to calculate absolute temperature is problematic, though. This ratio is also affected by the amount of oxygen 18 and oxygen 16 in the water to begin with. Ice prefers to form from oxygen 16. Therefore as more ice is trapped on land, more oxygen 16 is stripped from the ocean water. This results in oceans with more oxygen 18 in glacial times. This is the principle that is employed when scientists study shells to reconstruct the history of ice ages, but it means that more than one process can affect the oxygen isotope ratio in shells.
Because fluctuations in this ratio can be driven by both temperature and the original isotopic composition of the seawater, a researcher could not simply take, for example, a 65-million-year-old seashell and tell you what the temperature of the water was when it formed. Absolute temperature values are almost impossible to calculate from oxygen isotopes in older carbonate samples.
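This degeneracy can be sketched in a few lines. The coefficient below (roughly 0.23 per mil of enrichment per degree of cooling) is a commonly quoted approximation, and the two scenarios are invented for illustration; the point is only that two very different climates can leave identical shells:

```python
def shell_d18o(water_d18o, temp_c, slope=-0.23, ref_temp_c=25.0):
    """Toy model: a shell's oxygen-isotope value (per mil) as the water's
    starting composition plus a temperature-dependent enrichment."""
    return water_d18o + slope * (temp_c - ref_temp_c)

# Ice-free world: unmodified seawater (0.00 per mil), moderately warm water (20 C):
print(round(shell_d18o(water_d18o=0.00, temp_c=20.0), 2))  # -> 1.15
# Glacial world: seawater already heavy (ice locked up oxygen 16), warmer water (25 C):
print(round(shell_d18o(water_d18o=1.15, temp_c=25.0), 2))  # -> 1.15, the same shell
```

Two unknowns, one measurement: without independent knowledge of either the water or the temperature, the equation cannot be inverted.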
That was before 2006, before the term ‘clumped isotope’ entered the geologic lexicon and revolutionized the use of isotopes to study temperature. Using a new method known as ‘clumped-isotope paleothermometry,’ a researcher can indeed pick up a 65-million-year-old seashell and tell you the temperature at which it formed without knowing anything else about it.
While the technique represents a complex and technical scientific achievement, the premise behind the method is fairly straightforward. When a carbonate molecule forms with more than one heavy isotope, the bond holding that molecule together is stronger than if it had formed with only the common light isotopes. Before a carbonate mineral is formed and locked in place as a bone, shell, or rock, the carbon and oxygen atoms dance around, repeatedly switching partners. Because of this dance, you might expect a random distribution of light-to-light bonds (e.g., carbon 12 bonded to oxygen 16) and heavy-to-heavy bonds (e.g., carbon 13 bonded to oxygen 18).
The beauty lies in the fact that this is not the case. Because heavy-to-heavy bonds are stronger, they last just a little bit longer than the other arrangements. This is especially true in cold conditions, when there is less energy to break bonds to begin with. The higher the temperature in this sea of isotopes, the more chaotic the dance becomes. With more chaos, the ability for heavy-to-heavy bonds to remain together is reduced, and eventually removed, yielding the random distribution of bonds one might expect. The result is simple: carbonate formed in cool conditions will have more molecules with more than one heavy isotope, whereas carbonate formed in warm conditions will have fewer.
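The baseline against which that excess is measured is the purely random one: if isotopes paired up by chance alone, the fraction of carbonate groups containing both a carbon-13 and an oxygen-18 would simply be the product of their abundances. A sketch, using approximate natural abundances:

```python
# Approximate natural abundances (roughly 1.1% and 0.2%):
f_c13 = 0.011  # carbon-13
f_o18 = 0.002  # oxygen-18

# The 'stochastic' expectation for a doubly heavy (clumped) group,
# assuming isotopes pair at random:
random_clumped = f_c13 * f_o18
print(random_clumped)  # -> 2.2e-05

# Carbonate formed in cold water carries a tiny excess above this baseline;
# that excess, measured against the random expectation, is the thermometer.
```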
The process by which heavy isotopes join together is referred to as ‘clumping,’ and with the advent of new and highly sophisticated laboratory equipment, scientists can measure the degree to which it has occurred in carbonate. Years of experiments have related the amount of clumping in carbonate directly to temperature. Best of all, the starting composition of the water that formed the carbonate is irrelevant. If it formed at the same temperature, a researcher will get the same value whether large amounts of oxygen 16 were removed from the ocean by ice or not.
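To make the idea concrete, here is a minimal sketch in Python. The isotope abundances are approximate natural values, and the `clumping_excess` function with its constant is a hypothetical stand-in that only mimics the 1/T² shape of published calibrations, not any actual calibration:

```python
# A toy illustration of isotope "clumping" in carbonate.
# Abundances are approximate natural values; the calibration
# constant below is illustrative, NOT a published value.

F13C = 0.011   # ~1.1% of carbon atoms are carbon 13
F18O = 0.002   # ~0.2% of oxygen atoms are oxygen 18

# If isotopes paired up at random, the fraction of carbon-oxygen
# bonds joining a carbon 13 to an oxygen 18 would just be the product:
stochastic = F13C * F18O   # roughly 22 heavy-heavy bonds per million

def clumping_excess(temp_kelvin, a=0.06e6):
    """Excess of heavy-to-heavy bonds over the random expectation,
    in per mil. Mimics the 1/T^2 form of empirical calibrations:
    colder carbonate keeps a larger excess of strong heavy-heavy bonds."""
    return a / temp_kelvin ** 2

for celsius in (0, 25, 50):
    kelvin = celsius + 273.15
    print(f"{celsius:>2} degC -> clumping excess ~ {clumping_excess(kelvin):.3f} per mil")
```

Running the loop shows the excess shrinking as temperature rises, which is exactly the signal the laboratory measurement exploits: read the excess, invert the calibration, and recover the formation temperature.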
Ocean temperature is not the only question that can be addressed using clumped isotopes, though.
Drs. Benjamin Passey and Naomi Levin at Johns Hopkins University, for example, are interested in human evolution. They wanted to tackle an old but important question: was the time period that led to the emergence of hominids cooler than the present at key anthropological sites in Africa? Some researchers have said yes, but many have suggested that it was significantly warmer than present.
Many theories of human evolution depend on knowing what the environment was like at this time, so Passey and Levin decided to apply clumped-isotope methods to fossilized African soil (soil commonly contains carbonate minerals). They found that temperatures over the past 5 million years have been either the same as, or warmer than, the present. In their view, this restricts hypotheses about the evolution of human traits to those that can be explained by temperatures similar to or warmer than today's.
Jumping back many millions of years, Dr. Robert Eagle at Caltech wanted to know more about the metabolism of sauropod dinosaurs, massive creatures like the famous Brachiosaurus. A long-standing debate amongst paleontologists is whether these creatures were cold-blooded, deriving their heat from the environment like modern-day reptiles, or whether they possessed some form of endothermy, maintaining their body heat internally as mammals and birds do today.
Because the bones and teeth of vertebrates are composed of bioapatite, a carbonate-bearing mineral, Eagle decided to use clumped isotopes to tackle this question. Previous work on modern animals has shown that temperatures derived from teeth are representative of body temperatures. So in a 2011 paper, he looked at the temperatures recorded in the fossilized teeth of sauropods. He determined that the temperature at which the bioapatite in their teeth was forming was much higher than that of modern reptiles, similar to mammals but lower than birds. This ruled out a cold-blooded dinosaur and posed new questions about dinosaur biology.
Clumped-isotope paleothermometry is still in its infancy, but it is rapidly expanding. 2006 was the first year any paper used the term “clumped-isotope.” In 2011, 19 papers did, and many more are on the horizon.
Still, many kinks need to be worked out. Methods need to be standardized and conclusions need to be scrutinized. Undoubtedly a time will come, as is the case with most scientific developments, when researchers identify more and more problems, adding a dose of reality to the optimism. For now, though, the slight preference of some isotopes to stay bonded together is ushering in a new world of possibilities for earth scientists. This is the beginning of something big.