Science's crisis of faith
By Alan P. Lightman
Alan Lightman, a physicist and novelist, teaches at MIT. His new book, Mr g: A Novel About the Creation, will be published in January by Pantheon.
In the fifth century B.C., the philosopher Democritus proposed that all matter was made of tiny and indivisible atoms, which came in various sizes and textures—some hard and some soft, some smooth and some thorny. The atoms themselves were taken as givens. In the nineteenth century, scientists discovered that the chemical properties of atoms repeat periodically (and created the periodic table to reflect this fact), but the origins of such patterns remained mysterious. It wasn’t until the twentieth century that scientists learned that the properties of an atom are determined by the number and placement of its electrons, the subatomic particles that orbit its nucleus. And we now know that all atoms heavier than helium were created in the nuclear furnaces of stars.
The history of science can be viewed as the recasting of phenomena that were once thought to be accidents as phenomena that can be understood in terms of fundamental causes and principles. One can add to the list of the fully explained: the hue of the sky, the orbits of planets, the angle of the wake of a boat moving through a lake, the six-sided patterns of snowflakes, the weight of a flying bustard, the temperature of boiling water, the size of raindrops, the circular shape of the sun. All these phenomena and many more, once thought to have been fixed at the beginning of time or to be the result of random events thereafter, have been explained as necessary consequences of the fundamental laws of nature—laws discovered by human beings.
This long and appealing trend may be coming to an end. Dramatic developments in cosmological findings and thought have led some of the world’s premier physicists to propose that our universe is only one of an enormous number of universes with wildly varying properties, and that some of the most basic features of our particular universe are indeed mere accidents, a random throw of the cosmic dice. In which case, there is no hope of ever explaining our universe’s features in terms of fundamental causes and principles.
It is perhaps impossible to say how far apart the different universes may be, or whether they exist simultaneously in time. Some may have stars and galaxies like ours. Some may not. Some may be finite in size. Some may be infinite. Physicists call the totality of universes the “multiverse.” Alan Guth, a pioneer in cosmological thought, says that “the multiple-universe idea severely limits our hopes to understand the world from fundamental principles.” And the philosophical ethos of science is torn from its roots. As put to me recently by Nobel Prize–winning physicist Steven Weinberg, a man as careful in his words as in his mathematical calculations, “We now find ourselves at a historic fork in the road we travel to understand the laws of nature. If the multiverse idea is correct, the style of fundamental physics will be radically changed.”
The scientists most distressed by Weinberg’s “fork in the road” are theoretical physicists. Theoretical physics is the deepest and purest branch of science. It is the outpost of science closest to philosophy and religion. Experimental scientists occupy themselves with observing and measuring the cosmos, finding out what stuff exists, no matter how strange that stuff may be. Theoretical physicists, on the other hand, are not satisfied with observing the universe. They want to know why. They want to explain all the properties of the universe in terms of a few fundamental principles and parameters. These fundamental principles, in turn, lead to the “laws of nature,” which govern the behavior of all matter and energy. An example of a fundamental principle in physics, first proposed by Galileo in 1632 and extended by Einstein in 1905, is the following: All observers traveling at constant velocity relative to one another should witness identical laws of nature. From this principle, Einstein derived his theory of special relativity. An example of a fundamental parameter is the mass of an electron, considered one of the two dozen or so “elementary” particles of nature. As far as physicists are concerned, the fewer the fundamental principles and parameters, the better. The underlying hope and belief of this enterprise has always been that these basic principles are so restrictive that only one, self-consistent universe is possible, like a crossword puzzle with only one solution. That one universe would be, of course, the universe we live in. Theoretical physicists are Platonists. Until the past few years, they agreed that the entire universe, the one universe, is generated from a few mathematical truths and principles of symmetry, perhaps throwing in a handful of parameters like the mass of the electron. It seemed that we were closing in on a vision of our universe in which everything could be calculated, predicted, and understood.
However, two theories in physics, eternal inflation and string theory, now suggest that the same fundamental principles from which the laws of nature derive may lead to many different self-consistent universes, with many different properties. It is as if you walked into a shoe store, had your feet measured, and found that a size 5 would fit you, a size 8 would also fit, and a size 12 would fit equally well. Such wishy-washy results make theoretical physicists extremely unhappy. Evidently, the fundamental laws of nature do not pin down a single and unique universe. According to the current thinking of many physicists, we are living in one of a vast number of universes. We are living in an accidental universe. We are living in a universe incalculable by science.
“Back in the 1970s and 1980s,” says Alan Guth, “the feeling was that we were so smart, we almost had everything figured out.” What physicists had figured out were very accurate theories of three of the four fundamental forces of nature: the strong nuclear force that binds atomic nuclei together, the weak force that is responsible for some forms of radioactive decay, and the electromagnetic force between electrically charged particles. And there were prospects for merging the quantum theory describing these three forces with Einstein’s theory of the fourth force, gravity, and thus pulling all of them into the fold of what physicists called the Theory of Everything, or the Final Theory. These theories of the 1970s and 1980s required the specification of a couple dozen parameters corresponding to the masses of the elementary particles, and another half dozen or so parameters corresponding to the strengths of the fundamental forces. The next step would then have been to derive most of the elementary particle masses in terms of one or two fundamental masses and define the strengths of all the fundamental forces in terms of a single fundamental force.
There were good reasons to think that physicists were poised to take this next step. Indeed, since the time of Galileo, physics has been extremely successful in discovering principles and laws that have fewer and fewer free parameters and that are also in close agreement with the observed facts of the world. For example, the observed rotation of the elliptical orbit of Mercury, 0.012 degrees per century, was successfully calculated using the theory of general relativity, and the observed magnetic strength of the electron, expressed as a dimensionless g-factor of 2.002319, was derived using the theory of quantum electrodynamics. More than any other science, physics brims with highly accurate agreements between theory and experiment.
Guth started his physics career in this sunny scientific world. Now sixty-four years old and a professor at MIT, he was in his early thirties when he proposed a major revision to the Big Bang theory, something called inflation. We now have a great deal of evidence suggesting that our universe began as a nugget of extremely high density and temperature about 14 billion years ago and has been expanding, thinning out, and cooling ever since. The theory of inflation proposes that when our universe was only about a trillionth of a trillionth of a trillionth of a second old, a peculiar type of energy caused the cosmos to expand very rapidly. A tiny fraction of a second later, the universe returned to the more leisurely rate of expansion of the standard Big Bang model. Inflation solved a number of outstanding problems in cosmology, such as why the universe appears so homogeneous on large scales.
When I visited Guth in his third-floor office at MIT one cool day in May, I could barely see him above the stacks of paper and empty Diet Coke bottles on his desk. More piles of paper and dozens of magazines littered the floor. In fact, a few years ago Guth won a contest sponsored by the Boston Globe for the messiest office in the city. The prize was the services of a professional organizer for one day. “She was actually more a nuisance than a help. She took piles of envelopes from the floor and began sorting them according to size,” he recalls. He wears aviator-style eyeglasses, keeps his hair long, and chain-drinks Diet Cokes. “The reason I went into theoretical physics,” Guth tells me, “is that I liked the idea that we could understand everything—i.e., the universe—in terms of mathematics and logic.” He gives a bitter laugh. We have been talking about the multiverse.
While challenging the Platonic dream of theoretical physicists, the multiverse idea does explain one aspect of our universe that has unsettled some scientists for years: according to various calculations, if the values of some of the fundamental parameters of our universe were a little larger or a little smaller, life could not have arisen. For example, if the nuclear force were a few percentage points stronger than it actually is, then all the hydrogen atoms in the infant universe would have fused with other hydrogen atoms to make helium, and there would be no hydrogen left. No hydrogen means no water. Although we are far from certain about what conditions are necessary for life, most biologists believe that water is necessary. On the other hand, if the nuclear force were substantially weaker than what it actually is, then the complex atoms needed for biology could not hold together. As another example, if the relationship between the strengths of the gravitational force and the electromagnetic force were not close to what it is, then the cosmos would not harbor any stars that explode and spew out life-supporting chemical elements into space or any other stars that form planets. Both kinds of stars are required for the emergence of life. The strengths of the basic forces and certain other fundamental parameters in our universe appear to be “fine-tuned” to allow the existence of life. The recognition of this fine-tuning led British physicist Brandon Carter to articulate what he called the anthropic principle, which states that the universe must have the parameters it does because we are here to observe it. Actually, the word anthropic, from the Greek for “man,” is a misnomer: if these fundamental parameters were much different from what they are, it is not only human beings who would not exist. No life of any kind would exist.
If such conclusions are correct, the great question, of course, is why these fundamental parameters happen to lie within the range needed for life. Does the universe care about life? Intelligent design is one answer. Indeed, a fair number of theologians, philosophers, and even some scientists have used fine-tuning and the anthropic principle as evidence of the existence of God. For example, at the 2011 Christian Scholars’ Conference at Pepperdine University, Francis Collins, a leading geneticist and director of the National Institutes of Health, said, “To get our universe, with all of its potential for complexities or any kind of potential for any kind of life-form, everything has to be precisely defined on this knife edge of improbability…. [Y]ou have to see the hands of a creator who set the parameters to be just so because the creator was interested in something a little more complicated than random particles.”
Intelligent design, however, is an answer to fine-tuning that does not appeal to most scientists. The multiverse offers another explanation. If there are countless different universes with different properties—for example, some with nuclear forces much stronger than in our universe and some with nuclear forces much weaker—then some of those universes will allow the emergence of life and some will not. Some of those universes will be dead, lifeless hulks of matter and energy, and others will permit the emergence of cells, plants, animals, and minds. From the huge range of possible universes predicted by the theories, the fraction of universes with life is undoubtedly small. But that doesn’t matter. We live in one of the universes that permits life because otherwise we wouldn’t be here to ask the question.