Climate science is an extremely complicated discipline. Climate change skeptics and deniers, I believe, thrive on this complexity. They highlight what is not known or not agreed upon to suggest that the discipline as a whole is flawed. The best way to combat such an argument is with simplicity.
In that light, I present a simple, four-point argument demonstrating the reality of anthropogenic global warming.
Carbon Dioxide Causes Warming
The central mechanism driving anthropogenic climate change is the combustion of fossil fuels. Fossil fuels, the chemicals we use to heat our houses and move our cars, are compounds formed when ancient organic material, predominantly the remains of algae, is buried and cooked at a high temperature and pressure for millions of years. The result is a set of carbon-based chemicals that release a lot of energy, and form carbon dioxide (CO2), when burned.
This CO2, when released into the atmosphere, traps heat by blocking the escape of Earth’s radiation into space. (Anything that has a temperature, Earth included, produces radiation.) Known as the greenhouse effect, this is not a new or controversial idea. In 1861, John Tyndall, a British professor of natural philosophy, gave a lecture titled “On the Absorption and Radiation of Heat by Gases and Vapours, and on the Physical Connexion of Radiation, Absorption and Conduction.” Tyndall demonstrated conclusively that CO2, among other gases, absorbs long-wave radiation – the same type that Earth emits to space. His experiment was simple. Tyndall produced radiation with a Bunsen burner, knowing that the heat would emit a full spectrum of wavelengths, including long-wave radiation. He then measured those wavelengths after passing them through different gases. Because not all wavelengths traveled through the CO2, Tyndall concluded that the CO2 must be absorbing some of the heat. This simple experiment has huge implications for our planet.
Tyndall’s work greatly influenced a Swedish physicist named Svante Arrhenius. In 1896, Arrhenius published a paper titled “On the Influence of Carbonic Acid in the Air upon the Temperature of the Ground.” (Carbonic acid was what carbon dioxide was called at the time.) Arrhenius, in essence, took Tyndall’s work out of the lab and applied the concept to the real world. Instead of a Bunsen burner, he used observations of infrared radiation from the moon. Because he knew that the moon, without an atmosphere, should transmit all of its long-wave radiation to Earth, he was able to calculate the effect our atmosphere had on it by documenting which wavelengths didn’t make it. For each lunar observation, he compared that data with atmospheric conditions (humidity and CO2 levels) to see what effect they had on the radiation that made it to Earth. By doing this he determined that as CO2 rises geometrically, temperature rises in “nearly arithmetic” progression – what we would now call a logarithmic relationship. Using his calculations he determined that a doubling of atmospheric CO2 would result in a 5ºC temperature rise. Even with the advent of massive computer models and high-tech lab equipment, this value remains remarkably close to modern estimates.
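In modern notation, Arrhenius’s result says that every doubling of CO2 adds a fixed temperature increment. Here is a minimal sketch of that rule (the function name, the 280 ppm pre-industrial baseline, and the choice of Python are mine; the 5 ºC-per-doubling figure is Arrhenius’s, which is higher than most modern estimates):

```python
import math

def warming(co2_ppm, baseline_ppm=280.0, degrees_per_doubling=5.0):
    """Arrhenius-style estimate: each doubling of CO2 adds a fixed
    number of degrees, so warming grows with the log of CO2."""
    return degrees_per_doubling * math.log2(co2_ppm / baseline_ppm)

print(warming(560.0))            # one doubling -> 5.0 degrees
print(round(warming(394.0), 2))  # 2012-era CO2 -> about 2.46 degrees
```

The logarithm is why each added ton of CO2 warms slightly less than the last – but also why warming keeps climbing as long as concentrations do.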
Both Tyndall and Arrhenius speculated that CO2 has played a role in controlling the Ice Ages. Arrhenius, back in 1896, even predicted that human fossil fuel use might result in future global warming.
Carbon Dioxide Concentrations in the Atmosphere are Increasing
This is the easiest point to make. Scientists can measure the amount of CO2 in the atmosphere. It is increasing.
The best evidence is the famous “Keeling Curve.” In 1958, Charles Keeling, a professor of oceanography at the Scripps Institution of Oceanography, began making continuous measurements of CO2 at Mauna Loa, on the Big Island of Hawaii. Because this station is far from major urban centers, and because it sits at high altitude, the location is ideal for making CO2 measurements that are representative of the whole atmosphere. His measurements, which continue to this day, show a progressive rise in CO2 from around 315 parts per million in 1958 to about 394 ppm as of September 2012.
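As a quick back-of-envelope check, the two figures just quoted imply an average growth rate over the record (the real year-to-year growth is not constant, and has in fact accelerated; this is just the mean):

```python
# Average CO2 growth rate implied by the two Keeling figures above
# (315 ppm in 1958, 394 ppm in 2012).
start_ppm, end_ppm = 315.0, 394.0
years = 2012 - 1958
rate = (end_ppm - start_ppm) / years
print(round(rate, 2))  # about 1.46 ppm per year
```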
The Increased Carbon Dioxide is Coming from Human Activity
This is the heart of the controversy, but this is just as easy to demonstrate as the previous two points. The CO2 that is associated with the recent increase has a chemical signature that unequivocally ties it to human activity.
CO2 can come from a variety of sources. CO2 in the ocean is constantly exchanged with CO2 in the atmosphere; there is CO2 in the mantle, which can be released through volcanoes; and wildfires release CO2 the same way that burning fossil fuels does. By looking at the carbon contained in CO2, scientists can distinguish between each of these sources.
Fossil fuels come from the cooked remains of ancient life. Therefore, the carbon in this CO2 must be derived from the remains of living things that existed a very long time ago. Both the age and the source of carbon can be inferred using chemical entities known as isotopes.
Elements like carbon can have differing masses, caused by changes in the number of neutrons in the atom. These are called isotopes, and each isotope acts a bit differently. When a plant takes in CO2 from the atmosphere through photosynthesis, it prefers carbon with a mass of 12 to carbon with a mass of 13. Therefore, anything that photosynthesizes, or anything that eats something produced by photosynthesis (essentially all life on this planet), is composed of less carbon-13 than is typically found in the atmosphere. This is the signature of carbon that comes from living things. Life, both alive and transformed into fossil fuels, represents a massive reservoir of carbon-12. If this kind of carbon were released into the atmosphere, the concentration of carbon-13 in the atmosphere would be reduced by dilution with carbon-12.
Carbon can also have an isotope with a mass of 14. This type of carbon is created continuously in the upper atmosphere, so there is a constant supply of carbon-14 at the surface of Earth. Unlike carbon-13, carbon-14 is radioactive: it cannot remain carbon-14 forever, and it decays away at a known rate (its half-life is about 5,730 years). This property allows scientists to use carbon-14 to date once-living things, but anything older than approximately 60,000 years cannot be dated, since it will have virtually no carbon-14 left. A complete lack of carbon-14 is the signature of ancient carbon. If enough of it is released into the atmosphere, it will decrease the relative concentration of carbon-14 in the atmosphere by diluting it with carbon-14-free CO2.
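The decay arithmetic behind that 60,000-year cutoff is simple. A minimal sketch (the 5,730-year half-life is the accepted value for carbon-14; the function itself is mine):

```python
C14_HALF_LIFE_YEARS = 5730.0  # accepted half-life of carbon-14

def c14_fraction_remaining(years):
    """Fraction of an original carbon-14 stock left after `years`:
    halved once per half-life elapsed."""
    return 0.5 ** (years / C14_HALF_LIFE_YEARS)

print(c14_fraction_remaining(5730.0))             # 0.5 -> one half-life
print(round(c14_fraction_remaining(60000.0), 4))  # about 0.0007: effectively gone
```

After ten-plus half-lives, less than a tenth of a percent of the original carbon-14 survives – which is why fossil fuels, millions of years old, carry essentially none.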
The combustion of fossil fuels, then, should reduce the concentration of both carbon-13 and carbon-14 in the atmosphere.
Both are happening. Known collectively as the Suess effect, the concentrations of carbon-13 and carbon-14 in the atmosphere are declining, and they are declining at the same time that CO2 is increasing. This means that the CO2 increase we are seeing must come from ancient, organic carbon.
No other source of CO2 could have this signature. Wildfires can’t, because the carbon being burned is young; it has plenty of carbon-14. Carbon from the ocean has the same problem – too young, too much carbon-14. CO2 from volcanoes does not work either: this carbon does not come from once-living matter, so it has plenty of carbon-13.
Carbon derived from the remains of ancient life buried deep inside our Earth is the only plausible source. The only way to release a great deal of it at once is to dig it up and burn it, as humans are doing today.
Average Global Temperatures Are Rising
Just like CO2 concentrations, scientists are able to measure air temperature – in fact, the technology has been around for quite a while. The real challenge is getting past the variability – the result of things like El Niño and other short-term weather patterns – to figure out what the long-term global temperature trend is. There are plenty of studies showing that the trend is overall warming, but I will highlight a study by Richard Muller.
Richard Muller was an outspoken climate change skeptic, and his research was funded in part by the Koch brothers, prominent right-wing political figures who deny climate change. He gathered as much data as possible, corrected for all known biases – the fact that temperatures are generally higher in cities, for example – and plotted average temperatures since 1750. They are rising – a full degree since 1900. A degree may not sound like much, but a rise of 2 degrees would result in roughly 3 meters of sea level rise, according to a collection of recent estimates. Most of New York City would be underwater.
Carbon dioxide in the atmosphere can warm our planet. This has been accepted as fact for well over a century – well before any widespread scientific conspiracy could have been hatched. Carbon dioxide is increasing – it is hard to argue with measurements. The increase in carbon dioxide is changing the chemical composition of the atmosphere in a way that only fossil fuels can. And the planet is warming.
The 2010 US census workers had a tough job, but at least they were on land, counting residents with home addresses. 2010 was also the year a group of marine biologists completed a much tougher assignment: a global canvass of ocean residents who don’t fill out forms, live in some of the most remote places on the planet, and often move thousands of miles in a single year. The first study of its scope, the Census of Marine Life has added thousands of new species to the books, and has shown, in the words of project director Jesse Ausubel, that “the ocean’s even richer in diversity than anybody had known.”
Census scientists collected over 6,000 new species, and have already described 1,200 of them in detail. They discovered deep-sea jellyfish, 500-year-old tubeworms, bejeweled copepods and isopods, and a hairy white crab that lives near sulfurous vents on the ocean floor. They found a mat of filamentous bacteria the size of Greece off the coast of Chile, and located a squid previously believed to have gone extinct in the Jurassic. The Census uncovered new life forms even in some of the world’s most studied and heavily trafficked ocean regions, said Ausubel, Census co-founder and program director at the Alfred P. Sloan Foundation, who described the results in a talk in Washington, DC last Thursday.
Each newly discovered species now has its own web page in the online Encyclopedia of Life, which will eventually catalog every known life form on Earth. Pages in the encyclopedia include physical descriptions of the species, scientific information like where the creature is found and how common it is, and, of course, color photos. “It’s like Facebook,” Ausubel said. In addition, scientists gathered DNA from every creature found, new or not, for a project called the International Barcode of Life. The iBOL is a reference library built from segments of specific genes that are shared among many forms of life but whose precise sequence varies in an identifiable way from species to species.
Scientists also learned that many familiar marine animals make long-distance trips across the ocean, “commuting like jetset businessmen” in Ausubel’s words. Census researchers attached acoustic tags to various creatures and released them; the tags then emitted sounds that were picked up by receivers on the ocean floor as the animals passed by. Scientists watched bluefin tuna swim from Mexico to Japan and back in a year, and tracked seals fishing from underwater mountains off the Antarctic coast. They monitored salmon swimming up the west coast of Canada, and learned that many of them don’t make it back to rivers to spawn the next year. And they used the tags to collect data beyond just the animals’ locations; for example, they enlisted leatherback turtles to collect ocean temperature readings during their journeys around the South Pacific. “Animals connect the ocean in incredible ways,” said Ausubel.
Surveying the astounding diversity of marine habitats—coastlines, continental shelves, deep-sea trenches and mountain ranges, the vast open ocean—required a correspondingly varied array of exploratory techniques: “a concerto of technologies,” said Ausubel. To explore the ocean surface and shallow waters, scientists worked from submarines, airplanes, and massive research vessels. For probing the deep ocean, they turned to robotic and remotely controlled vehicles that could operate at depth without risk to human life. In total, the project cost $650 million spread over a decade, and involved almost 3,000 scientists. “Marine biology hasn’t had a tradition of big science” like physics has, said Ausubel, but with the Census of Marine Life, that may be starting to change.
Although Ausubel noted that “extinction is rare in the ocean,” scientists found ample evidence of humans’ effects on life in the sea, few of them good. Overharvesting has depleted the populations of various fish, mammals, and reptiles since the time of the Romans, and in recent times has led to explosions of less desirable creatures, like jellyfish. Modern scourges like the huge floating garbage patches in the Atlantic and Pacific are also harming aquatic life, particularly island-nesting birds that are often found dead with plastic in their stomachs. But the greatest impacts may be yet to come, as humans increase shipping, oil drilling, and underwater communication, and as rising greenhouse gas emissions continue to warm the ocean and make it more acidic. The Census has given scientists a valuable baseline against which to measure future changes to the abundance and distribution of ocean dwellers.
Despite their impressive findings, marine biologists have just begun the hard part of counting every creature in the sea. They believe the ocean could contain a million or more undiscovered species, most of which are likely to be small, rare, and hard to find. And those are just the relatively well-studied multicellular ocean dwellers; the number of microbial species in the ocean is far larger, perhaps as many as a billion. Ausubel also noted that few people study most marine life forms besides the well-known ones like fish and mammals (a hint to any young scientists out there searching for a specialty).
The seas have long fascinated and mystified us. Over 60 years ago, Rachel Carson’s best-selling book The Sea Around Us told the public about the stunning discoveries in marine biology made possible by World War II-era innovations in sonar and submarine technology. Since then we have learned much about what lives in the deep sea, and we now know the ocean floor is not barren but in fact teems with strange and wonderful life. But the Census of Marine Life also reaffirms the lure of the unknown that Carson described in her 1951 masterpiece: “We can only sense that in the deep and turbulent recesses of the sea are hidden mysteries far greater than any we have solved.”
What makes you, you? The nature vs. nurture debate has been going on for more than a century, and recent work with honeybees has managed to make it even more complex. Researchers focused not on the nature part, the bees’ DNA, nor on the nurture part, how the bees grew up and lived, but on a fuzzy gray area in between.
All of the worker bees in a hive are sisters, descended from the same queen. All of these bees grow up together sharing the same environment. Worker bees divide into two groups: nurse bees, who take care of the eggs and larvae, and forager bees, who fly around collecting pollen and nectar. Nurses’ and foragers’ missions are different, but the DNA of these sister bees is quite similar and their nurturing seems to have been the same. How do they end up with different purposes?
Enter epigenetics, a relatively new field that studies how the environment affects the expression of genes. Genes were once thought to be instructions written in stone – unchangeable directives for the cell. The effects of nurture were thought to take hold after the instructions were read, as when a toxic chemical causes cancer or someone overcomes a natural stutter. In reality, the genes themselves get buffeted by the winds of chance and circumstance from the outside world. Epigenetics has found “tags” sitting on top of sections of DNA. These tags control whether the cell will “read” a gene or whether it will remain silent. The DNA itself is not actually changed, but its accessibility is. The whole set of DNA, called the genome, is overlaid with a pattern of these tags, called the epigenome.
This epigenome develops throughout life, starting with very few tags at birth. Tags are added or removed due to environmental factors such as nutrition, stress, and disease, allowing cells access to some genes and not others. Not only do these tags accumulate throughout one lifetime; some of them can be passed down to offspring. This means that parents, grandparents, and various distant relatives all gave some of their epigenome to an individual, contributing to how that individual’s genes are expressed. They contributed DNA too, of course, but unlike DNA, the epigenome is influenced by lifestyle choices. Your grandparents’ actions and experiences, not just their genes, influence who you are.
After finding no differences between the genomes and the epigenomes of the queen bee and workers right after birth, Dr. Andrew Feinberg and his colleagues at Johns Hopkins University examined the differences between the two castes of worker bees: nurses and foragers. These workers perform very different roles in the hive. Usually, newly born bees start out as nurses, and as older foragers die in the risky outdoors, some of the nurses switch to foraging. The researchers took care to compare the epigenomes of workers that were the same age, so that each nurse and forager had the same amount of time to accumulate epigenetic tags.
In their experimental design, the researchers were sneaky. They took advantage of the workers’ ability to change jobs: switching back from forager to nurse if the need arises. This doesn’t happen very often, but the researchers created the need. If we manipulate their hive a bit, and get some of the forager bees to change back to being nurses, they asked, what will their epigenomes look like then?
While the forager bees were out foraging, the researchers moved the hives, so that the bees came back to another hive – empty of bees, but with honeycomb full of larvae that needed tending. With a distinct need for nurse bees, half of the workers went back to their old jobs. The researchers looked to see whether foragers and nurses have different epigenomes, and what type of epigenome the foragers-turned-back-to-nurses had.
It turned out that not only did nurses and foragers have distinct epigenomes, but the epigenome changed with the job. When foraging worker bees were steered back to nursing, their epigenomes – and the genes those epigenomes allowed to be expressed – reverted to look as they had when the bees were young nurses. It was like flipping a switch attached to about 100 genes at once, turning them on or off depending on whether the worker was fulfilling a nurse role or a forager role. These worker bees acted very differently, and the specific epigenome patterns seem to be the key to why.
This is, as the researchers note in very understated tones, “the first evidence in any organism of reversible epigenetic changes associated with behavior.” Does our epigenome change our behavior or does our behavior change our epigenome? No one knows, but this is evidence of the large role epigenetics plays in each individual. And our epigenome is greatly affected by every facet of the environment we live in. So how did you become you? A murky causal soup of your environment, combined with your genes, combined with the gray area of environmentally-affected gene expression. Epigenetics, and bees, have just made the nature-nurture debate much more interesting.
Caenorhabditis elegans, a millimeter-long nematode or roundworm, has been poked and prodded, dissected and inspected. Every cell in its body has been mapped, the circuitry of its neurons traced, and its entire genome sequenced. For the past 50 years, it has been the experimental animal of choice, the subject of over 15,000 articles on everything from genetics to drug development. Biologically speaking, we know more about this animal than any other in the world—including ourselves.
But for all that we know about C. elegans, one aspect remains a mystery. In an abnormal birth process called matricide, the offspring kill their mother by eating her from the inside. Researchers have shown that this unusual phenomenon may in fact be an evolutionary adaptation. By sacrificing herself for the sake of her young, the mother provides them the opportunity to become dauers, larvae that are incredibly stress-resistant.
Sydney Brenner, a biologist at Cambridge, first saw the nematode’s potential in the 1960s. C. elegans, he realized, is the ideal multicellular organism to study in the lab – simple yet possessing the basic tissues common to all animals. Almost all C. elegans are hermaphrodites, essentially female bodies capable of producing and self-fertilizing with their own sperm. About four days after birth, the worm reaches maturity and self-breeds, laying up to three hundred eggs, which hatch outside its body. In a defective worm that cannot form a vulva – the opening necessary to expel the eggs from its body – the eggs hatch inside. As biologists Diana McCulloch and David Gems from University College London described, “eggs eventually hatch within the uterus, and the emerging larvae devour the mother.”
Such a worm is often called a “bag of worms” because, under the microscope, that is just what it looks like. The eggs hatch inside and, with nowhere else to go, the offspring writhe frantically about and eat their mother’s insides until they pierce their way out of her body. Once they escape, many of the larvae that have inherited the genes involved in matricide will meet the same fate as their mother.
In the underbelly of the worm, a special cell called the anchor cell signals three precursor cells to form the vulva in preparation for breeding. In a worm carrying the genetic mutation, however, the anchor cell breaks down in relaying its messages to the precursor cells. The worm is unable to form a vulva and is then fated to become a bag of worms.
While matricide has often been cast as a defect, emerging research has shed light on its evolutionary value. In 2003, Jianjun Chen and Edward P. Caswell-Chen, scientists in the department of nematology at the University of California, Davis, found that far from being a rare phenomenon of the laboratory, matricide occurs in nematodes living in the wild as well.
In the lab or the wild, severe stress can cause matricide, even in worms that do not carry the mutation. Starve a pregnant C. elegans, expose it to toxic substances, or transfer it from a solid to a liquid environment, and it is likely to develop into a bag of worms. In their experiment, the researchers starved batches of C. elegans and watched their response under the microscope for several hours. They found that when starved, the mother “sacrifices its body” to provide nutrition to its offspring. Interestingly, they also discovered that matricide is reversible: feeding a starving mother allows it to lay its eggs normally, assuming the offspring that hatched inside have not already caused too much damage.
Most importantly, the researchers found that in matricide, the mother provides her offspring with a mechanism for coping with stress: the dauer stage, a larval stage that is a kind of emergency survival mode. When the mother’s body has been consumed and food is still not available, the larva can enter developmental arrest, reducing its metabolism and increasing its capacity to withstand stress. Compared to the typical two- to three-week lifespan of C. elegans, dauer larvae can survive months without food. In particular, the researchers found that the longer they starved the mother, the fewer offspring survived (due to competition for resources) – but the higher the proportion of survivors that reached the dauer stage.
Leaving behind even a single dauer is an evolutionary fitness advantage for C. elegans. By sacrificing herself, the mother is able to ensure that her young live on. In the survival of the fittest, dog eat dog does not always win the game. Sometimes, altruism goes a long way—even if it means being the one that gets eaten.
The Adirondacks are something of a paradox. Made from some of the oldest rocks on Earth, they are one of the youngest mountain ranges in existence. Pushing their way through the younger rocks of the Appalachians, this jagged, deformed mess of ancient rock, once trapped deep in the crust, has been rising for the past 15-20 million years. And nobody really knows why.
Over a billion years ago, standing high above the lifeless lowlands of the supercontinent Rodinia, a massive mountain range known as the Grenville Orogen extended from coast to coast – one of the largest and longest-lived ranges our planet has ever known. Formed when prehistoric continents collided to form a single, massive landmass, its rocks have since fallen deep into fractured valleys and risen once more. They have formed the floors of ancient oceans, and they have withstood the extreme heat of deep burial. These are the rocks that are forcing their way to the surface as the Adirondacks. This complex history makes them unlike any other mountain range – a lesson I learned the hard way.
As a young and somewhat naive hiker in my freshman year at Skidmore College, I had my heart set on climbing as many of the Adirondack ‘high peaks’ as possible – the peaks higher than 4,000 feet. I picked up a map of the high peaks region and quickly identified what I felt was a surefire way to conquer as many mountains in one trip as possible: I would traverse the Great Range in two days, allowing myself nine peaks in one trip. I was familiar with the ridges of the White Mountains in nearby New Hampshire, and felt assured that this trip would be similar. There I had been able to climb to the highest point of a ridge and slowly descend it, making only slight climbs to reach the other peaks as I moved forward.
The trip was a categorical failure. Two peaks into the trip, my hiking buddy and I were woefully behind schedule and dangerously exhausted. After finishing only the second mountain of what was supposed to be many more that day, I was both dehydrated and incoherent from the effects of mild hypothermia. (Though the trip was in late May, there was still three feet of snow on the ground.) Slurring my words, I explained to my friend that I thought we might have set our sights a bit too high.
Unfortunately, we were too high to set up camp – it would have been both illegal and too cold. Returning to camp was not easy, though: two mountains stood between us and lower elevation, requiring a significant hike before we could descend. Forced to climb, we ascended both Basin and Saddleback mountains, two of the most challenging hikes in the Adirondacks. One of the most terrifying and beautiful sights I have ever seen as a hiker was the sun setting while we stood atop this final mountain, miles from any safe campsite. Beaten by the mountains, we did make it to camp that night, but we ended our trip a day early.
We were entirely unprepared for the conditions, and had no business hiking at that time of year. These issues aside, though, there was a more central problem at hand. The Adirondacks are not like the White Mountains, nor are they like any other mountain range on our planet. The ridges that characterize so many mountain ranges, formed by the fault lines of colliding land, do not exist in the Adirondacks. To tackle all the peaks of the Great Range, a hiker must ascend and descend each peak nearly in full, finding no benefit in a raised line of topography.
This difference is rooted in how mountains form in the first place. The White Mountains, for example, are part of the larger Appalachian mountain range. (The Adirondacks are technically considered part of the Appalachians as well, but only because they are close to the other ranges.) The formation of the Appalachians is typical of most mountain ranges. These mountains trace their origins to a time many hundreds of millions of years after the great Grenville Mountains. Rodinia, the supercontinent that held the Grenville Orogen, began to rift apart about 800 million years ago. The process that destroyed those mountains created the Iapetus Ocean – named after the Titan who, in Greek mythology, was the father of Atlas.
Around 500 million years ago, the Iapetus Ocean began to close. As it closed, landmasses within the Iapetus crashed into the eastern side of what is now North America. As seafloor was forced under North America, volcanoes formed, erupting through land and forming islands that eventually crashed into the continent as well. This process continued for many millions of years, until 250 million years ago, when the supercontinent Pangea was formed. As this myriad of landmasses hit the North American continent, they formed long ridges – reminiscent of the ridges in a car’s hood after a head-on crash. They are beautifully clear if you get a chance to fly over them, and they make for easy hiking, as peaks connected by a ridge require less descent and ascent.
The Adirondacks, however, are like a giant wart, pushing its way through the beautifully ordered structure of the Appalachians. A giant dome, the Adirondacks look misplaced on even the simplest of maps. The reason for this is unclear. What is known is that for about 15-20 million years the crust under the Adirondacks has been rising, forcing the younger, more typical Appalachian mountains above to erode away. As they erode and the crust continues to rise, the deepest, oldest rocks are exposed – the Grenville ones. Because these rocks have been subject to a billion years of torture, they have a jagged and disordered topography, leaving none of the typical ridges I was used to hiking.
How fast they are rising is the subject of much debate. Some say they are rising nearly as fast as the Himalayas, thought to be the fastest-rising mountain range today. Others say they may not be rising much at all. Even more enigmatic is why they are rising. “Both the existence of current uplift and its modus operandi remain a mystery,” states an official 1995 United States Geological Survey report on the Adirondacks. The mystery remains unsolved.
The most popular idea is that there is a hotspot under the Adirondacks, creating a pocket of relatively less dense mantle, which, forced to rise, pushes the crust above, and ultimately the Adirondacks, to the surface. This would explain why the Adirondacks are dome shaped, but the hypothesis is hard to test.
What was not hard to test was how different the Adirondacks are from other mountain ranges I had climbed. The disconnected peaks of the Adirondacks are a completely different world compared with the ridge-connected peaks of the rest of the Appalachians. Exceedingly beautiful and unique, they remain my favorite mountains of the many I have visited, but they taught me a cruel geologic lesson: know the history of your mountains, as enigmatic as it may be, before you try to conquer them.
Last week, I, along with millions of people, took off work and went 25 hours without food or water. No, we weren’t orchestrating a spontaneous hunger strike; we were observing Yom Kippur, the holiest day of the Jewish calendar. The instruction to fast comes from the Bible, but the Bible also tells us to do all kinds of other things—sacrifice animals, stone adulterers—that even the most pious ignore today. So why do we continue to find this one relevant? I believe the answer can be sought not just in religious texts, of which I confess to being almost entirely ignorant, but also in science.
Physiologically, fasting sets off a chain of chemical and electrical signals in the body. After the stomach empties the last bits of a meal into the small intestine, it releases a hormone called ghrelin, which activates a region deep in the brain known as the hypothalamus. The hypothalamus then sends out a nerve signal that manifests as hunger, and we feel an unpleasant sensation in our stomach. That meal we ate also provided a surge in glucose, a simple sugar that fuels the brain, and when the surge ends we feel weak, tired, and sluggish until our next meal.
I’m familiar with the bodily effects of hunger, but I also notice changes in my mental state when I fast. The hyperactive part of my brain that normally wants to execute plans and think a million things—what Buddhists sometimes call the “monkey mind”—seems to shut down, or at least quiet down. I feel grumpy and pessimistic; I become contemplative; I write; if I go to services, I feel more like praying than I normally would. I also notice a heightened sense of connection to others. In an intangible but distinct and sometimes powerful way, I feel linked to people in other places who are also fasting now, as well as to people who have fasted in other times.
I began wondering this year if there was a connection between fasting and what was happening in my brain. If I were a neuroscientist, I could put some fasting people in MRI machines, as Richard Davidson at the University of Wisconsin does with meditating monks. But I’m a writer, not a scientist, so I did the unscientific thing and called a few friends.
For Rhea Kennedy, who has fasted on Yom Kippur for many years, going without food doesn’t necessarily lead to turning inward (“I’m usually pretty introverted anyway,” she observes), but she does find herself feeling profound empathy with “other people who can’t eat for some reason.” Rhea also feels herself linked with people who lived in the past. “It affects the way I relate to my Jewish ancestors and survivors of the Holocaust,” she says. “A lot of [their experiences] had to do with food deprivation.”
Laura Bellows, another friend with a long history of fasting, echoes Rhea’s experience of empathy with those for whom going without food may not be a choice. “It makes me feel like this is a little taste of what it’s like to be hungry,” she says. Laura also finds that fasting enhances the intensity of her prayers, and creates a sense of bonding with her community. “I feel very connected to those with whom I’ve fasted,” she says. “It’s as if we’ve been through this communal hardship together.”
Muslims also report feeling empathy during fasting for Ramadan. Here are a couple of examples:
“At one point he [a Washington, DC taxi driver] said that Ramadan and fasting have a broader social impact because they are ‘a reminder of people who cannot eat,’ and how lucky we are.” (from the blog No Kid Hungry)
“When [Emad Meerza, a Muslim community leader in Bakersfield, CA] fasts, he thinks about others around the world who are also fasting — not by choice, but because of famine, war or political strife. Through our own suffering, he says, empathy is born.
‘The only way to feel that is to feel a little bit of that pain,’ he says.”
(from the Bakersfield Californian)
These are anecdotes, not even close to the kind of data that would be needed to approach this question scientifically. Unfortunately, from what I can tell, scientists haven’t sought the answer either. I conducted article database searches on various combinations of terms like “Yom Kippur,” “fasting,” “psychology,” “science,” “empathy,” “mental states,” “hunger,” and “food deprivation,” but turned up little that seemed to address a link between religious fasting and the brain. Maybe I’m missing it—if you know of such a study, please drop me a line; I’d love to know about it.
Might Jews, Muslims, Mormons, Hindus, and others have developed and maintained fasting practices in part because they understood a mind-body connection that scientists have yet to make? Empathy is a near universal human experience, and has also been observed in rats and monkeys. But it has been a puzzle for neuroscience to figure out how one brain can share an experience occurring in another, entirely distinct brain.
Recently, some scientists have looked to systems of nerve cells known as mirror neurons, which are thought to fire in ways that mimic what we believe to be happening in the brains of others. V. S. Ramachandran, a neuroscientist at the University of California, San Diego and a prominent mirror neuron proponent, has even called them “empathy neurons.” Other scientists question this link, and some doubt whether humans truly have neurons dedicated to mimicking others’ brain activity. But whether due to mirror neurons or not, I do find it suggestive that our processing centers for emotions like empathy seem to be located not in the thought-processing and decision-executing regions of the brain—the frontal lobes—but in deeper regions of the brain. Our understanding of the neurological basis of empathy is described in a paper by psychologists Stephanie Preston of the University of California at Berkeley and Frans de Waal of Emory University.
Does that mean thinking and deciding require more fuel than emoting? Many times I have become extremely hungry after doing sustained, mentally demanding work, suggesting to me that concentrating hard and long might demand more energy than, say, sitting by a stream. But while some studies have shown that people do better on certain difficult mental tasks when supplied with a sugary drink, it seems that in general the brain consumes a nearly constant amount of fuel no matter what it’s doing. Ferris Jabr wrote a good review of some of this research for Scientific American recently.
Regardless, hunger does seem to make it hard to concentrate on the kinds of tasks we normally think of as “work.” I’ve had that experience, Rhea and Laura both reported it, and so did Jonah Lehrer. It happens to millions of office workers every day; it’s why schools provide free breakfast to students. Perhaps it’s mainly because we get distracted by hunger, but for some reason we can’t seem to concentrate and think well without food in our stomachs. On the other hand, we can contemplate, pray, and feel connected.
My naïve hypothesis, then, is that fasting may quiet the noisy thought-processing and decision-making brain regions, and give us a chance to listen to a softer, less pushy voice—one that has less to say about the day-to-day that consumes us most of the time, but a lot to say about the longer and deeper currents that run under our lives. And one of those currents seems to be a sense of connection to other living beings on our planet. Fasting is not sufficient to hear this voice—we also need to choose to listen, perhaps by going to a prayer service, or by spending time in a quiet place. (Lehrer in his blog post described a fast not leading to a religious state of mind.) But maybe fasting makes the listening easier.
Many in the popular press continue to write about the supposed rift between science and religion. (See a Time article, a Discover blog post, and a 2010 book for some examples.) Ritual fasting seems to me like a perfect meeting place, where science can help elucidate the value of religious traditions, and religion can stimulate scientific investigation. I’d love to see scientists take on this kind of research.