Achieving Transcendence: The Physicist Way

I wanted to shed some light on something I’ve been working on recently, but I realized that a little background was needed to explain some of the ideas. As such, this post is going to be a bit more math-y than usual, but I hope it’s educational!

Pi is special. Familiar to all through the area of a circle \pi r^2, pi is particularly interesting in that you cannot write an algebraic equation with whole-number coefficients whose solution is pi. While you can easily get fractions (3x=4 gives x=\frac{4}{3}) and even many irrational numbers (x^2=2 gives x=\sqrt{2}), pi is one of a set of numbers that are impossible to get this way. These special numbers transcend other numbers, in that you cannot use more everyday numbers to get to them, and as such mathematicians call them transcendental numbers.
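If you'd like to check these examples yourself, here's a quick sketch in Python using the sympy library (the equations are the ones above; sympy also happens to record that pi is not algebraic):

```python
# The equations from the text, with whole-number coefficients, solved with sympy.
import sympy as sp

x = sp.symbols('x')
print(sp.solve(sp.Eq(3*x, 4), x))   # [4/3]
print(sp.solve(sp.Eq(x**2, 2), x))  # [-sqrt(2), sqrt(2)]

# No such equation has pi as a solution, and sympy knows it:
print(sp.pi.is_algebraic)  # False
```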

In addition to transcendental numbers, you can have transcendental functions. Transcendental functions are functions that can take in an ordinary number and produce a transcendental number. For example, you may be aware of the delightful equation below:

e^{i \pi}=-1

We can manipulate both sides of this equation by taking the natural logarithm, \ln, to find

i\pi=\ln(-1)

This tells us that the natural logarithm function can take a (negative) whole number (-1) and give us a transcendental number (pi, times the imaginary unit i). This means that the natural logarithm is a transcendental function.
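You can even verify this numerically with Python's built-in complex math module; this is just a sanity check, not part of the argument:

```python
# ln(-1) = i*pi: Python's cmath handles logarithms of negative numbers.
import cmath

print(cmath.log(-1))             # 3.141592653589793j, i.e. i*pi
print(cmath.exp(1j * cmath.pi))  # -1 (up to floating-point rounding)
```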

There are many other transcendental functions. In addition to logarithms, there are a whole host of related functions called the polylogarithms, and even more generally the harmonic polylogarithms. All of these functions can take in whole numbers like -1 or 1 and give transcendental numbers.

Here physicists introduce a concept called degree of transcendentality, or transcendental weight, which we use to measure how transcendental a number or a function is. Pi (and functions that can give pi, like the natural logarithm) have transcendental weight one. Pi squared has transcendental weight two. Pi cubed (and another number called \zeta(3)) have transcendental weight three. And so on.
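To make the weights a little more concrete, here's a numerical sketch using the mpmath library, which implements polylogarithms and the zeta function (the weight labels in the comments come from the discussion above, not from mpmath):

```python
# Numbers of increasing transcendental weight, evaluated with mpmath.
from mpmath import mp, pi, polylog, zeta

mp.dps = 20  # work to 20 digits

print(pi)             # weight one
print(polylog(2, 1))  # the dilogarithm Li_2(1) = pi^2/6: weight two
print(pi**2 / 6)      # matches the line above
print(zeta(3))        # weight three, alongside pi cubed
```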

Note here that, according to mathematicians, there is no rigorous way that a number can be “more transcendental” than another number. In the case of some of these numbers, like \zeta(5), it hasn’t even been proven that the number is actually transcendental at all! However, physicists still use the concept of transcendental weight because it allows us to classify and manipulate a common and useful set of functions. This is an example of the differences in methods and standards between physicists and mathematicians, even when they are working on similar things.

In what way are these functions common and useful? Well it turns out that in N=4 super Yang-Mills many calculated results are not only made up of these polylogarithms, they have a particular (fixed) transcendental weight. In situations when we expect this to be true, we can use our knowledge to guess most, or even all, of the result without doing direct calculations. That’s immensely useful, and it’s a big part of what I’ve been doing recently.

Model-Hypothesis-Experiment: Sure, Just Not All the Same Person!

At some point, we were all taught how science works.

The scientific method gets described differently in different contexts, but it goes something like this:

First, a scientist proposes a model, a potential explanation for how something out in the world works. They then create a hypothesis, predicting some unobserved behavior that their model implies should exist. Finally, they perform an experiment, testing the hypothesis in the real world. Depending on the results of the experiment, the model is either supported or rejected, and the scientist begins again.

It’s a handy picture. At the very least, it’s a good way to fill time in an introductory science course before teaching the actual science.

But science is a big area. And just as no two sports have the same league setup, no two areas of science use the same method. While the central principles behind the method still hold (the idea that predictions need to be made before experiments are performed, the idea that in order to test a model you need to know something it implies that other models don’t, the idea that the question of whether a model actually describes the real world should be answered by actual experiments…), the way they are applied varies depending on the science in question.

In particular, in high-energy particle physics, we do roughly follow the steps of the method: we propose models, we form hypotheses, and we test them out with experiments. We just don’t expect the same person to do each step!

In high energy physics, models are the domain of Theorists. Occasionally referred to as “pure theorists” to distinguish them from the next category, theorists manipulate theories (some intended to describe the real world, some not). “Manipulate” here can mean anything from modifying the principles of the theory to see what works, to attempting to use the theory to calculate some quantity or another, to proving that the theory has particular properties. There’s quite a lot to do, and most of it can happen without ever interacting with the other areas.

Hypotheses, meanwhile, are the province of Phenomenologists. While theorists often study theories that don’t describe the real world, phenomenologists focus on theories that can be tested. A phenomenologist’s job is to take a theory (either proposed by a theorist or another phenomenologist) and calculate its consequences for experiments. As new data comes in, phenomenologists work to revise their theories, computing just how plausible the old proposals are given the new information. While phenomenologists often work closely with those in the next category, they also do large amounts of work internally, honing calculation techniques and looking through models to find explanations for odd behavior in the data.

That data comes, ultimately, from Experimentalists. Experimentalists run the experiments. With experiments as large as the Large Hadron Collider, they don’t actually build the machines in question. Rather, experimentalists decide how the machines are to be run, then work to analyze the data that emerges. Data from a particle collider or a neutrino detector isn’t neatly labeled by particle. Rather, it involves a vast set of statistics, energies and charges observed in a variety of detectors. An experimentalist takes this data and figures out what particles the detectors actually observed, and from that what sorts of particles were likely produced. Like the other areas, much of this process is self-contained. Rather than being concerned with one theory or another, experimentalists will generally look for general signals that could support a variety of theories (for example, leptoquarks).

If experimentalists don’t build the colliders, who does? That’s actually the job of an entirely different class of scientists, the Accelerator Physicists. Accelerator physicists not only build particle accelerators, they study how to improve them, with research just as self-contained as the other groups.

So yes, we build models, form hypotheses, and construct and perform experiments to test them. And we’ve got very specialized, talented people who focus on each step. That means a lot of internal discussion, and many papers published that only belong to one step or another. For our subfield, it’s the best way we’ve found to get science done.

Sound Bite Management; or the Merits of Shock and Awe

First off, for the small demographic who haven’t seen it already (and aren’t reading this because of it), I wrote an article for Ars Technica. Go read it.

After the article went up, a professor from my department told me that he and several others were concerned about the title.

Now before I go on, I’d like to clarify that this isn’t going to be a story about the department trying to “shut me down” or anything paranoid like that. The professor in question was expressing a valid concern in a friendly way, and it deserves some thought.

The concern was the following: isn’t a title like “Earning a PhD by studying a theory that we know is wrong” bad publicity for the field? Regardless of whether the article rebuts the idea that “wrong” is a meaningful descriptor for this sort of theory, doesn’t a title like that give fuel to the fire, sharpening the cleavers of the field’s detractors, as one commenter put it? In other words, even if it’s a good article, isn’t it a bad sound bite?

It’s worryingly easy for a catchy sound bite to eclipse everything else about a piece. As one commenter pointed out, that’s roughly what happened with Palin’s fruit fly comment itself. And with that in mind, the claim that people are earning PhDs based on “false” theories definitely sounds like the sort of sound bite that could get out of hand in a hurry if the wrong community picked it up.

There is, at least, one major difference between my sound bite and Palin’s. In the political climate of 2008 it was easy to believe that Sarah Palin didn’t understand the concept of fruit fly research. On the other hand, it’s quite a bit less plausible that Ars would air a piece calling most work in theoretical physics useless.

In operation here is the old, powerful technique of using a shocking, dissonant headline to lure people in. A sufficiently out-of-character statement won’t be taken at face value; rather, it will inspire readers to dig into the full article to figure out what they’re missing. This is the principle behind provocateurs in many fields, and while there are always risks, often this is the only way to get people to think about complex issues (Peter Singer often seems to exemplify the risks and rewards of this tactic, just to give an example).

What’s the alternative here? In referring to the theory I study as “wrong”, I’m attempting to bring readers face to face with a common misconception: the idea that every theory in physics is designed to approximate some part of the real world. For the physicists in the audience, this is the public perception that everything in theoretical physics is phenomenology. If we don’t bring this perception to light and challenge it, then we’re sweeping a substantial amount of theoretical physics under the rug for the sake of a simpler message. And that’s risky, because if people don’t understand what physics really is then they’re likely to balk when they glimpse what they think is “illegitimate” physics.

In my view, shocking people by describing my type of physics as not “true” is the best way to teach people about what physicists actually do. But it is risky, and it could easily give people the wrong impression. Only time will tell.

What’s A Graviton? Or: How I Learned to Stop Worrying and Love Quantum Gravity

I’m four gravitons and a grad student. And despite this, I haven’t bothered to explain what a graviton is. It’s time to change that.

Let’s start like we often do, with a quick answer that will take some unpacking:

Gravitons are the force-carrying bosons of gravity.

I mentioned force-carrying bosons briefly here. Basically, a force can either be thought of as a field, or as particles called bosons that carry the effect of that field. Thinking about the force in terms of particles helps, because it allows you to visualize Feynman diagrams. While most forces come from Yang-Mills fields with spin 1, gravity comes from a field with spin 2.

Now you may well ask, how exactly does this relate to the idea that gravity, unlike other forces, is a result of bending space and time?

First, let’s talk about what it means for space itself to be bent. If space is bent, distances are different than they otherwise would be.

Suppose we’ve got some coordinates: x and y. How do we find a distance? We use the Pythagorean Theorem:

d^2=x^2+y^2

Where d is the full distance. If space is bent, the formula changes:

d^2=g_{x}x^2+g_{y}y^2

Here g_{x} and g_{y} come from gravity. In general, they depend on x and y, modifying the formula and thus “bending” space.
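As a toy illustration (very much a sketch: real general relativity uses a full metric tensor, with cross terms this simple version leaves out), here's the modified distance formula in code; the particular choices for g_{x} and g_{y} are made up:

```python
# Toy "bent space" distance: d^2 = g_x * x^2 + g_y * y^2.
# The metric components below are hypothetical, just to show the effect.
import math

def g_x(x, y):
    return 1.0 + 0.1 * x   # flat space would be exactly 1.0

def g_y(x, y):
    return 1.0 - 0.05 * y

def distance(x, y):
    return math.sqrt(g_x(x, y) * x**2 + g_y(x, y) * y**2)

print(math.hypot(3.0, 4.0))  # 5.0: the ordinary Pythagorean distance
print(distance(3.0, 4.0))    # ~4.95: the same points, in "bent" space
```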

Let’s suppose instead of measuring a distance, we want to measure the momentum of some other particle, which we call \phi because physicists are overly enamored of Greek letters. If p_{x,\phi} is its momentum (physicists also really love subscripts), then its total momentum can be calculated using the Pythagorean Theorem as well:

p_\phi^2= p_{x,\phi}^2+ p_{y,\phi}^2

Or with gravity:

p_\phi^2= g_{x}p_{x,\phi}^2+ g_{y} p_{y,\phi}^2

At the moment, this looks just like the distance formula with a bunch of extra stuff in it. Interpreted another way, though, it becomes instructions for the interactions of the graviton. If g_{x} and g_{y} represent the graviton, then this formula says that one graviton can interact with two \phi particles, like so:

[Figure: Feynman diagram of a single graviton line meeting two \phi lines]

Saying that gravitons can interact with \phi particles ends up meaning the same thing as saying that gravity changes the way we measure the \phi particle’s total momentum. This is one of the more important things to understand about quantum gravity: the idea that when people talk about exotic things like “gravitons”, they’re really talking about the same theory that Einstein proposed in 1916. There’s nothing scary about describing gravity in terms of particles just like the other forces. The scary bit comes later, as a result of the particular way that quantum calculations with gravity end up. But that’s a tale for another day.

What if there’s nothing new?

In the weeks after the folks at the Large Hadron Collider announced that they had found the Higgs, people I met would ask if I was excited. After all, the Higgs was what particle physicists were searching for, right?

As usual in this blog, the answer is “Not really.”

We were all pretty sure the Higgs had to exist; we just didn’t know what its mass would be. And while many people had predictions for what properties the Higgs might have (including some string theorists), fundamentally they were interested for other reasons.

Those reasons, for the most part, boil down to supersymmetry. If the Higgs had different properties than we expected, it could be evidence for one or another proposed form of supersymmetry. Supersymmetry is still probably the best explanation for dark matter, and it’s necessary in some form or another for string theory. It also helps with other goals of particle physics, like unifying the fundamental forces and getting rid of fine-tuned parameters.

Fundamentally, though, the Higgs isn’t likely to answer these questions. To get enough useful information we’ll need to discover an actual superpartner particle. And so far…we haven’t.

That’s why we’re not all that excited about the Higgs anymore. And that’s why, increasingly, particle physics is falling into doom and gloom.

Sure, when physicists talk about the situation, they’re quick to claim that they’re just as hopeful as ever. We still may well see supersymmetry in later runs of the LHC, as it still has yet to reach its highest energies. But people are starting, quietly and behind closed doors, to ask: what if we don’t?

What happens if we don’t see any new particles in the LHC?

There are good mathematical reasons to think that some form of supersymmetry holds. Even if we don’t see supersymmetric particles in the LHC, they may still exist. We just won’t know anything new about them.

That’s a problem.

We’ve been spinning our wheels for too long, and it’s becoming more and more obvious. With no new information from experiments, it’s not clear what we can do anymore.

And while, yes, many theorists are studying theories that aren’t true, sometimes without even an inkling of a connection to the real world, we’re all part of the same zeitgeist. We may not be studying reality itself, but at least we’re studying parts of reality, rearranged in novel ways. Without the support of experiment the rest of the field starts to decay. And one by one, those who can are starting to leave.

Despite how it may seem, most of physics doesn’t depend on supersymmetry. If you’re investigating novel materials, or the coolest temperatures ever achieved, or doing other awesome things with lasers, then the LHC’s failure to find supersymmetry will mean absolutely nothing to you. It’s only a rather small area of physics that will progressively fall into self-doubt until the only people left are the insane or the desperate.

But those of us in that area? If there really is nothing new? Yeah, we’re screwed.

Physics and its (Ridiculously One-Sided) Search for a Nemesis

Maybe it’s arrogance, or insecurity. Maybe it’s due to viewing themselves as the arbiters of good and bad science. Perhaps it’s just because, secretly, every physicist dreams of being a supervillain.

Physicists have a rivalry, you see. Whether you want to call it an archenemy, a nemesis, or even a kismesis, there is another field of study that physicists find so antithetical to everything they believe in that it crops up in their darkest and most shameful dreams.

What field of study? Well, pretty much all of them, actually.

Won’t you be my Kismesis?

Chemistry

A professor of mine once expressed the following sentiment:

“I have such respect for chemists. They accomplish so many things, while having no idea what they are doing!”

Disturbingly enough, he actually meant this as a compliment. Physicists’ relationship with chemists is a bit like a sibling rivalry. “Oh, isn’t that cute! He’s just playing with chemicals. Little guy doesn’t know anything about atoms, and yet he’s just sluggin’ away…wait, why is it working? What? How did you…I mean, I could have done that. Sure.”

Biology

They study all that weird, squishy stuff. They get to do better mad science. And somehow they get way more funding than us, probably because the government puts “improving lives” over “more particles”. Luckily, we have a solution to the problem.

Mathematics

Saturday Morning Breakfast Cereal has a pretty good take on this. Mathematicians are rigorous…too rigorous. They never let us have any fun, even when it’s totally fine, and everyone thinks they’re better than us. Well they’re not! Neener neener.

Computer Science

I already covered math, didn’t I?

Engineering

Think about how mathematicians think about physicists, and you’ll know how physicists think about engineers. They mangle our formulas, ignoring our pristine general cases for silly criteria like “ease of use” and “describing the everyday world”. Just lazy!

Philosophy

What do these guys even study? I mean, what’s the point of metaphysics? We’ve covered that, it’s called physics! And why do they keep asking what quantum mechanics means?

These guys have an annoying habit of pointing out moral issues with things like nuclear power plants and worry entirely too much about world-destroying black holes. They’re also our top competition for GRE scores.

Economics

So, what do you guys use real analysis for again? Pretending to be math-based science doesn’t make you rigorous, guys.

Psychology

We point out that surveys probably don’t measure anything, and that you can’t take the average of “agree” and “strongly agree”. Plus, if you’re a science, where is your F=ma?

They point out that we don’t actually know anything about how psychology research actually works, and that we seem to think that all psychologists are Freud. Then they ask us to look at just how fuzzy the plots we get from colliders actually are.

The argument escalates from there, often ending with frenzied makeout sessions.

Geology?  Astronomy?

Hey, we want a nemesis, but we’re not that desperate.

A physicist by any other trade

Physicists have a tendency to stick their noses in other peoples’ work. We’ve conquered Wall Street (and maybe ruined it), studied communication networks and neural networks, and in a surprising number of cases turned from the study of death to the study of life. Pretty much everyone in physics knows someone who left physics to work on something more interesting, or better-funded, or just straight-up more lucrative. Occasionally, they even remember their roots.

What about the reverse, though? Where are the stories of people in other fields taking up physics?

Aside from a few very early-career examples, that just doesn’t happen. You might say that’s just because physics is hard, but that would be discounting the challenges present in other fields. A better point is that physics is hard, and old.

Physics is arguably the oldest science, with only a few fields like mathematics and astronomy having claim to an older pedigree. A freshman physics student spends their first semester studying ideas that would have been recognizable three hundred years ago.

Of course, the same (and more) could be said about philosophy. The difference is that in physics, we teach ideas from three hundred years ago because we need them to teach ideas from two hundred years ago. And the ideas from two hundred years ago are only there so we can fill them in with information from a hundred years ago. The purpose of an education in physics, in a sense, is to catch students up with the last three hundred years of work in as concise a manner as possible.

Naturally, this leads to a lot of shortcuts, and over the years an enormous amount of notational cruft has built up around the field, to the point where nothing can be understood without understanding the last three hundred years. In a field where just getting students used to the built-up lingo takes an entire undergraduate education, it’s borderline impossible to just pick it up in the middle and expect to make progress.

Of course, this only explains why people who were trained in other fields don’t take up physics mid-career. What about physicists who go over to other fields? Do they ever come back?

I can’t think of any examples, but I can’t think of a good reason either. Maybe it’s hard to get back in to physics after you’ve been gone for a while. Maybe other fields are just so fun, or physics so miserable, no-one ever wants to come back. We shall never know.

There’s something about Symmetry…

Physicists talk a lot about symmetry. Listen to an article about string theory and you might get the idea that symmetry is some sort of mysterious, mystical principle of beauty, inexplicable to the common man or woman.

Well, if it was inexplicable, I wouldn’t be blogging about it, now would I?

Symmetry in physics is dead simple. At the same time, it’s a bit misleading.

When you think of symmetry, you probably think of objects: symmetric faces, symmetric snowflakes, symmetric sculptures. Symmetry in physics can be about objects, but it can also be about places: symmetry is the idea that if you do an experiment from a different point of view, you should get the same results. In a way, this is what makes all of physics possible: two people in two different parts of the world can do the same experiment, but because of symmetry they can compare results and agree on how the world works.

Of course, if that was all there was to symmetry then it would hardly have the mystical reputation it does. The exciting, beautiful, and above all useful thing about symmetry is that, whenever there is a symmetry, there is a conservation law.

A conservation law is a law of physics that states that some quantity is conserved, that is, cannot be created or destroyed, but merely changed from one form to another. Energy is the classic example: you can’t create energy out of nothing, but you can turn the potential energy of gravity on top of a hill into the kinetic energy of a rolling ball, or the chemical energy of coal into the electrical energy in your power lines.
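To put numbers on the hill example, here's a small sketch (the height and mass are made-up values, and friction is ignored):

```python
# Energy conservation for the ball on the hill: the potential energy mgh
# at the top equals the kinetic energy (1/2) m v^2 at the bottom.
g = 9.81   # gravitational acceleration, m/s^2
h = 10.0   # hypothetical hill height, m
m = 2.0    # hypothetical ball mass, kg

potential_top = m * g * h
speed_bottom = (2 * g * h) ** 0.5      # solve m*g*h = (1/2)*m*v^2 for v
kinetic_bottom = 0.5 * m * speed_bottom**2

print(potential_top, kinetic_bottom)   # 196.2 196.2: the totals match
```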

The fact that every symmetry creates a conservation law is not obvious. Proving it in general and describing how it works required a major breakthrough in mathematics. It was worked out by Emmy Noether, one of the greatest minds of her time, which given that her time included Einstein says rather a lot. Noether struggled for most of her life with the male-dominated establishment of academia, and spent many years teaching unpaid and under the names of male faculty, forbidden from being a professor because of her gender.

Why must women always be banished to the Noether regions of physics?

Noether’s proof is remarkable, but if you’re not familiar with the mathematics it won’t mean much to you. If you want to get a feel for the connection between symmetries and conservation laws, you need to go back a bit further. For the best example, we need to go all the way back to the dawn of physics.

Christiaan Huygens was a contemporary of Isaac Newton, and like Noether he was arguably as smart as, if not smarter than, his more famous colleague. Huygens could be described as the first theoretical physicist. Long before Newton wrote down his three laws of motion, Huygens used thought experiments to prove deep facts about physics, and he did it using symmetry.

In one of Huygens’ thought experiments, two men face each other, one standing on a boat and the other on the bank of a river. The men grab onto each other’s hands, and dangle a ball on a string from each pair of hands. In this way, it is impossible to tell which man is moving each ball.

Stop hitting yourself!

From the man on the bank’s perspective, he moves the two balls together at the same speed, which happens to be the same speed as the river. The balls are identical, so as far as he can see they should have the same speed afterwards as well.

On the other hand, the man in the boat thinks that he’s only moving one ball. Since the man on the bank is moving one of the balls along at the same speed as the river, from the man on the boat’s perspective that ball is just staying still, while the other ball is moving with twice the speed of the river. If the man on the bank sees the balls bounce off of each other at equal speed, then the man on the boat will see the moving ball stop, and the ball that was staying still start to move with the same speed as the original ball. From what he could see, a moving ball hit a ball at rest, and transferred its entire momentum to the new ball.
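Here's the same pair of viewpoints as a minimal sketch in code (the river speed is an arbitrary unit, and the change of frame is the ordinary Galilean one):

```python
# Huygens' two observers: one collision, viewed from two frames.
v = 1.0  # river speed, arbitrary units

# Bank frame: the balls approach at +v and -v and, by symmetry,
# rebound at -v and +v.
before_bank = (+v, -v)
after_bank = (-v, +v)

def in_frame(velocities, frame_velocity):
    """Galilean change of frame: subtract the frame's velocity."""
    return tuple(u - frame_velocity for u in velocities)

# The boat drifts with the river at +v, so boost by +v:
print(in_frame(before_bank, +v))  # (0.0, -2.0): one ball at rest, one moving
print(in_frame(after_bank, +v))   # (-2.0, 0.0): the motion fully handed over
```

In both frames the total of the two velocities (and so the total momentum, for equal masses) is the same before and after the bounce, which is exactly the conserved “quantity of motion” Huygens was after.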

Using arguments like these, Huygens developed the idea of conservation of momentum, the idea of a number related to an object’s mass and speed that can never be created or destroyed, only transferred from one object to another. And he did it using symmetry. At heart, his arguments showed that momentum, the mysterious “quantity of motion”, was merely a natural consequence of the fact that two people can look at a situation in two different ways. And it is that fact, and the power that fact has to explain the world, that makes physicists so obsessed with symmetry.

Anthropic Reasoning, Multiverses, and Eternal Inflation (Part Two of Two)

So suppose you want to argue that, contrary to appearances, the universe isn’t impossible, and you want to use anthropic reasoning to do it. Suppose further that you read my post last week, so you know what anthropic reasoning is. In case you haven’t, anthropic reasoning means recognizing that, while it may be unlikely that the location/planet/solar system/universe you’re in is a nice place for you to live, as long as there is at least one nice place to live you will almost certainly find yourself living there. Applying this to the universe as a whole requires there to be many additional universes, making up a multiverse, at least one of which is a nice place for human life.

Is there actually a multiverse, though? How would that even work?

One of the more plausible proposals for a multiverse is the concept of eternal inflation.

Eternal inflation is an idea with many variants (such as chaotic inflation), and rather than give the details of any particular variant, I want to describe the setup in as broad strokes as possible.

The first thing to be aware of is that the universe is expanding, and has been since the Big Bang. Counter-intuitively, this doesn’t mean that the universe was once small, and is now bigger: in all likelihood, the universe was always infinite in size. Instead, it means that things began packed in close together, and have since moved further apart. While various forces (gravity, electromagnetism) hold things together on short scales, the wide open spaces between galaxies are constantly widening, spreading out the map of the universe.

You would expect this process to slow down over time. While it might have started with a burst of energy (the aforementioned Big Bang), as the universe gets more and more spread out it should be running out of steam. The thing is, it’s not. The evidence (complicated enough that I’m not going to go into it now) shows that the universe actually sped up dramatically shortly after the Big Bang, and seems to be speeding up again now. This speeding up is called inflation.

So what could make the universe speed up? You might have heard of Einstein’s cosmological constant, a constant added to Einstein’s equations of general relativity that, while originally intended to make the universe stay in a steady state forever, can also be chosen so as to speed up the universe’s expansion. While that works mathematically, it’s not really an explanation, especially if it changes with time.

Enter scalar fields. A scalar field is what happens when you let what looks like a constant of nature vary as a quantum field. Scalar fields can vary over space, and they can change over time, making them ideal candidates for explaining inflation. And as a quantum field, the scalar field behind inflation (often called the inflaton) should randomly fluctuate, giving rise to the occasional particle just like the Higgs (another scalar field) does.

Well, not just like the Higgs. See, the Higgs controls mass, and if the mass of some particles increases a bit in a tiny area, it’s weird, but it’s not going to spread. On the other hand, if space in some place is inflating faster than space in another place…

Suppose you have two empty blocks in the middle of intergalactic space, each a cube one foot on each side, with one inflating faster than the other. Twice as fast, let’s say, so that when one cube grows to two feet on a side, the other grows to four feet on a side. Then when the first cube is four feet on a side, the other will be sixteen. When the first has eight foot sides, the other’s will be sixty-four. And so forth. Even a small difference in expansion rates quickly leads to one region dominating the other. And if inflation stops slightly later in one region than in another, that can be a pretty dramatic difference too.
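The numbers in that example follow from exponential growth: doubling the expansion rate squares the growth factor, since e^{2Ht}=(e^{Ht})^2. A quick sketch, with the slower cube's rate set to an arbitrary unit:

```python
# Two cubes inflating exponentially, one at twice the rate of the other.
import math

H = 1.0  # expansion rate of the slower cube, arbitrary units
for t in (math.log(2), math.log(4), math.log(8)):
    slow_side = math.exp(H * t)      # grows like e^(Ht)
    fast_side = math.exp(2 * H * t)  # grows like e^(2Ht) = (e^(Ht))^2
    print(f"slow cube: {slow_side:2.0f} ft   fast cube: {fast_side:2.0f} ft")
# Prints 2 and 4, then 4 and 16, then 8 and 64 feet.
```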

The end result is that if inflation were this sort of scalar field, the universe would just keep expanding forever, faster and faster. Only small pockets would slow down enough that anything could actually stick together. So while most of the universe would just tear itself apart forever, some of it, the parts that tear themselves apart slowly, can contain atoms and stars and, well, life. A universe like that is one that is experiencing eternal inflation. It’s eternal because it doesn’t have a beginning or end: what looks to us like the Big Bang, the beginning of our universe, is really just the point at which our part of the universe started expanding slowly enough that anything we recognize as matter could exist.

There’s no reason for us to be the only bubble that slowed down, though, and that’s where the multiverse aspect comes in. In eternal inflation there are lots and lots of slow regions, each one like a mini-universe in its own right. What’s more, each region can have totally different constants of nature.

To understand how that works, remember that each region has a different rate of inflation, and thus a different value for the inflaton scalar field. It turns out that many types of scalar fields like to interact with each other. If you recall my post on scalar fields (already linked, not gonna link it again), you’ll remember that for everything that looks like a constant of nature, chances are there’s a scalar field that controls it. So a different value for the inflaton means different values for all of those scalar fields too, which means different physical constants. With so many (possibly infinitely many) regions with different physical constants, there’s bound to be one where we could live.

Now, before you get excited here, there are a few caveats. Well, a lot of caveats.

First, it’s all well and good if the multiverse can produce life, but what if it produces dramatically different life? What sort of life is eternal inflation most likely to produce, and what are the chances it would look at all like us? For that matter, how do you figure out the chances of anything in an infinite, eternally expanding universe? This last is a very difficult problem, and work on it is ongoing.

Beyond that, we don’t even know enough about inflation to know whether eternal inflation would happen or not. We’ve got a pretty good idea that inflation involves scalar fields, but how many and in what combination? We don’t know yet, and the evidence is still coming in. We’re right on the cutting edge of things now, and until we know more it’s tough to say for certain whether any of this is viable. Still, it’s fun to think about.

Anthropic Reasoning, Multiverses, and Eternal Inflation (Part One of Two)

You and I are very very lucky. Human life is very delicate, and the conditions under which it can thrive are not in the majority. Going by random chance, neither of us should exist.

I am referring, of course, to the fact that the Earth’s surface is about 70 percent ocean. Just think how lucky you are not to have been born there: you would have drowned! Let alone if you were born beneath the Earth’s crust!

If you understand why the above is ridiculous, congratulations: you’ve just discovered anthropic reasoning.

There are some situations we find ourselves in because they are common. Most (all) of the Earth is in orbit around the Sun, so if you find yourself in orbit around the Sun you should hardly be surprised. Some situations, on the other hand, keep happening not because they are common in the universe in general, but because they are the part of the universe in which we can exist. Recognizing those situations is anthropic reasoning.

It’s not weird that you were born on land, even though land is rarer than water, because land, and not water, is where people live. As long as there was any land on the earth at all, you would expect people to be born on it (or on ships, I suppose) rather than on the ocean.

The same sort of reasoning explains why we evolved on Earth to begin with. There are eight planets in the solar system (yes, Pluto is not a planet, get over it), and only one of them is in the right place for life like us. We aren’t “lucky” that we ended up on Earth rather than another planet, nor is it something “unlikely” that needs to be explained: we’re on Earth because the universe is big enough that there happens to be a planet that has the right conditions for life, and Earth is that planet.

What anthropic reasoning has a harder time explaining (but what some people are working very hard to make it explain) is the question of why our whole universe is the way it is. Our universe is a pretty good place for life to evolve. Granted, that’s probably just a side effect of it being a good place for stars to evolve, but let’s put that aside for a second. Suppose the universe really is a particularly nice place for life, even improbably nice. Can anthropic reasoning explain that?

Probably. But it takes some work.

See, the difficulty is that in order for anthropic reasoning to work, you need to be certain that some place hospitable to life actually is likely to exist. Earthlike planets may be rare, but there are enough planets in the universe that some of them are bound to be like Earth. If universes like ours are rare, though, then how can there be enough universes to guarantee one like ours? How can there be more than one universe at all?
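A toy calculation shows what “enough universes” buys you; the per-universe probability here is invented purely for illustration:

```python
# If each universe independently has a tiny chance p of being hospitable,
# the chance that at least one of n universes is hospitable is 1 - (1-p)^n.
p = 1e-6  # hypothetical per-universe chance of being hospitable

for n in (10**4, 10**6, 10**8):
    at_least_one = 1 - (1 - p) ** n
    print(f"n = {n:>9}: P(at least one hospitable) = {at_least_one:.4f}")
# With few universes it's nearly hopeless; with enough, essentially certain.
```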

That’s why you need a multiverse.

A multiverse, in simple terms, is a collection of universes. If you object that a universe is, by definition, all that exists, and thus there can’t possibly be more than one, then you can use an alternate definition: a multiverse is a vast universe in which there are many smaller universe-like regions. These sub-universes don’t have much (or any) contact with each other, and (in order for anthropic reasoning to work) must have different properties.

Does a multiverse exist, though? How would one work?

There are several possibilities, of varying degrees of plausibility. Some people have argued that quantum mechanics leads to many parallel universes, while others posit that each universe could be like a membrane in some higher dimensional space. The multiple universes could be separated in ordinary space, or even in time.

In the next post, I will discuss one of the more plausible (if still controversial) possibilities, called eternal inflation, in which new universes are continually birthed in a vast sea of exponentially expanding space. If you have no idea what the heck I meant by that, great! Tune in next time to find out!