Tag Archives: philosophy of science

What Can Replace Space-Time?

Nima Arkani-Hamed is famous for believing that space-time is doomed, that as physicists we will have to abandon the concepts of space and time if we want to find the ultimate theory of the universe. He’s joked that this is what motivates him to get up in the morning. He tends to bring it up often in talks, both for physicists and for the general public.

The latter especially tend to be baffled by this idea. I’ve heard a lot of questions like “if space-time is doomed, what could replace it?”

In the past, Nima and I both tended to answer this question with a shrug. (Though a more elaborate shrug in his case.) This is the honest answer: we don’t know what replaces space-time, we’re still looking for a good solution. Nima’s Amplituhedron may eventually provide an answer, but it’s still not clear what that answer will look like. I’ve recently realized, though, that this way of responding to the question misses its real thrust.

When people ask me “what could replace space-time?” they’re not asking “what will replace space-time?” Rather, they’re asking “what could possibly replace space-time?” It’s not that they want to know the answer before we’ve found it, it’s that they don’t understand how any reasonable answer could possibly exist.

I don’t think this concern has been addressed much by physicists, and it’s a pity, because it’s not very hard to answer. You don’t even need advanced physics. All you need is some fairly old philosophy. Specifically we’ll use concepts from metaphysics, the branch of philosophy that deals with categories of being.

Think about your day yesterday. Maybe you had breakfast at home, drove to work, had a meeting, then went home and watched TV.

Each of those steps can be thought of as an event. Each event is something that happened that we want to pay attention to. You having breakfast was an event, as was you arriving at work.

These events are connected by relations. Here, each relation specifies the connection between two events. There might be a relation of cause-and-effect, for example, between you arriving at work late and meeting with your boss later in the day.

Space and time, then, can be seen as additional types of relations. Your breakfast is related to you arriving at work: it is before it in time, and some distance from it in space. Before and after, distant in one direction or another, these are all relations between the two events.

Using these relations, we can infer other relations between the events. For example, if we know the distance relating your breakfast and arriving at work, we can make a decent guess at another relation, the difference in amount of gas in your car.

This way of viewing the world, events connected by relations, is already quite common in physics. With Einstein’s theory of relativity, it’s hard to say exactly when or where an event happened, but the overall relationship between two events (distance in space and time taken together) can be thought of much more precisely. As I’ve mentioned before, the curved space-time necessary for Einstein’s theory of gravity can be thought of equally well as a change in the way you measure distances between two points.
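To make “relation between two events” a bit more concrete, here is a minimal sketch of the special-relativistic version: the invariant interval, a single number built from the time between two events and the distance between them, which all observers agree on. (The events and coordinates below are toy values of my own, in units where the speed of light is 1.)

```python
# Minimal sketch: the special-relativistic "relation" between two events.
# Each event is a tuple (t, x, y, z); units are chosen so that c = 1.
def interval_squared(event_a, event_b):
    dt = event_b[0] - event_a[0]
    dx = event_b[1] - event_a[1]
    dy = event_b[2] - event_a[2]
    dz = event_b[3] - event_a[3]
    # Minkowski signature (-, +, +, +): time and space enter with opposite signs.
    return -dt**2 + dx**2 + dy**2 + dz**2

breakfast = (0.0, 0.0, 0.0, 0.0)          # toy coordinates for two events
arrive_at_work = (1.0, 0.5, 0.0, 0.0)
print(interval_squared(breakfast, arrive_at_work))  # negative: a timelike separation
```

Different observers will disagree about the individual time and space differences, but they all compute the same interval, which is what makes it a good candidate for the fundamental relation.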

So if space and time are relations between events, what would it mean for space-time to be doomed?

The key thing to realize here is that space and time are very specific relations between events, with very specific properties. Some of those properties are what cause problems for quantum gravity, problems which prompt people to suggest that space-time is doomed.

One of those properties is the fact that, when you multiply two distances together, it doesn’t matter which order you do it in. This probably sounds obvious, because you’re used to multiplying normal numbers, for which this is always true anyway. But even slightly more complicated mathematical objects, like matrices, don’t always obey this rule. If distances were this sort of mathematical object, then multiplying them in different orders could give slightly different results. If the difference were small enough, we wouldn’t be able to tell that it was happening in everyday life: distance would have given way to some more complicated concept, but it would still act like distance for us.
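To see concretely what “order matters” means, here is a minimal sketch with toy 2×2 matrices (nothing to do with any actual quantum gravity proposal): ordinary numbers always commute, matrices generally don’t.

```python
import numpy as np

# Ordinary numbers: the order of multiplication never matters.
a, b = 3.0, 5.0
print(a * b == b * a)              # True

# Matrices: the order can matter. These are arbitrary toy matrices,
# not quantities taken from any real model of space-time.
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
B = np.array([[1.0, 0.0],
              [3.0, 1.0]])

print(np.allclose(A @ B, B @ A))   # False: A*B and B*A differ
print(A @ B - B @ A)               # the difference, called the commutator
```

If distances worked like A and B rather than like a and b, with a difference far too small to notice, everyday geometry would look unchanged even though the underlying concept had been replaced.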

That specific idea isn’t generally suggested as a solution to the problems of space and time, but it’s a useful toy model that physicists have used to solve other problems.

It’s the general principle I want to get across: if you want to replace space and time, you need a relation between events. That relation should behave like space and time on the scales we’re used to, but it can be different on very small scales (Big Bang, inside of Black Holes) and on very large scales (long-term fate of the universe).

Space-time is doomed, and we don’t know yet what’s going to replace it. But whatever it is, whatever form it takes, we do know one thing: it’s going to be a relation between events.

Physical Truths, Lost to the Ages

For all you tumblr-ers out there (tumblr-ists? tumblr-dwellers?), 4 gravitons is now on tumblr. It’s mostly going to be links to my blog posts, with the occasional re-blog of someone else’s work if something catches my eye.

Nima Arkani-Hamed gave a public lecture at Perimeter yesterday, which I encourage you to watch if you have time, once it’s up on the Perimeter site. He also gave a technical talk earlier in the day, where he finished up by making the following (intentionally) provocative statement:

There is no direct evidence of what happened during the Big Bang that could have survived till today.

He clarified that he doesn’t just mean “evidence we can currently detect”. Rather, there’s a limit on what we can know, even with the most precise equipment possible. The details of what happened at the Big Bang (the sorts of precise details that would tell you, for example, whether it is best described by String Theory or some other picture) would get diluted as the universe expands, until today they would be so subtle and so rare that they fall below the level we could even in principle detect. We simply don’t have enough information available, no matter how good our technology gets, to detect them in a statistically significant way.

If this talk had happened last week, I could have used this in my spooky Halloween post. This is exactly the sort of thing that keeps physicists up at night: the idea that, fundamentally, there may be things we can never truly know about the universe, truths lost to the ages.

It’s not quite as dire as it sounds, though. To explain why, let me mention another great physics piece, Tom Stoppard’s Arcadia.

Despite appearances, this is in fact a great work of physics popularization.

Arcadia is a play about entropy. The play depicts two time periods, the early 19th century and the present day. In the present day a pair of scholars, Hannah and Bernard, argue about the events of the 19th century, when the house was occupied by a mathematically precocious girl named Thomasina and her tutor Septimus. Thomasina makes early discoveries about fractals and (to some extent) chaos theory, while Septimus gradually falls in love with her. In the present, the two scholars gradually get closer to the truth, going from a false theory that one of the guests at the house was killed by Lord Byron, to speculation that Septimus was the one to discover fractals, to finally getting a reasonably accurate idea of how the events of the story unfolded. Still, they never know everything, and the play emphasizes that certain details (documents burned in a fire, the true feelings of some of the people) will be forever lost to the ages.

The key point here is that, even with incomplete information, even without the ability to fully test their hypotheses and get all the details, the scholars can still make progress. They can propose accounts of what happened, accounts that have implications they can test, that might be proven wrong or right by future discoveries. Their accounts will also have implications they can’t test: lost letters, feelings never written down. But the better their account, the more it will explain, and the longer it will agree with anything new they manage to turn up.

That’s the way out of the problem Nima posed. We can’t know the truth of what happened at the Big Bang directly. But if we have a theory of physics that describes everything we can test, it’s likely to also make a prediction for what happened in the Big Bang. In science, most of the time you don’t have direct evidence. Rather, you have a successful theory, one that has succeeded under scrutiny many times in many contexts, enough that you trust it even when it goes out of the area you’re comfortable testing. That’s why physicists can make statements about what it’s like on the inside of a black hole, and it’s why it’s still good science to think about the Big Bang even if we can’t gather direct evidence about the details of how it took place.

All that said, Nima is well aware of this, and the problem still makes him uncomfortable. It makes me uncomfortable too. Saying that something is completely outside of our ability to measure, especially something as fundamental and important as the Big Bang, is not something we physicists can generally be content with. Time will tell whether there’s a way around the problem.

A Nobel for Blue LEDs, or, How Does That Count as Physics?

When I first heard about this year’s Nobel Prize in Physics, I didn’t feel the need to post on it. The prize went to Isamu Akasaki, Hiroshi Amano and Shuji Nakamura, whose discoveries enabled blue LEDs. It’s a more impressive accomplishment than it might seem: while red LEDs have been around since the 60’s and 70’s, blue LEDs were only developed in the 90’s, and only with both can highly efficient, LED-based white light sources be made. Still, I didn’t consider posting on it because it’s pretty much entirely outside my field.

Shiny, though.

It took a conversation with another Perimeter Institute postdoc to point out one way I can comment on the Nobel, and it started when we tried to figure out what type of physicists Akasaki, Amano, and Nakamura are. After tossing around terms like “device physicist” and “condensed matter”, someone wondered whether the development of blue LEDs wasn’t really a matter of engineering.

At that point I realized, I’ve talked about something like this before.

Physicists work on lots of different things, and many of them don’t seem to have much to do with physics. They study geometry and topology, biological molecules and the nature of evolution, income inequality and, yes, engineering.

On the surface, these don’t have much to do with physics. A friend of mine used to quip that condensed matter physicists seem to just “pick whatever they want to research”.

There is something that ties all of these topics together, though. They’re all things that physicists are good at.

Physics grad school gives you a wide variety of tools with which to understand the world. Thermodynamics gives you a way to understand large, complicated systems with statistics, while quantum field theory lets you understand everything with quantum properties, not just fundamental particles but materials as well. This batch of tools can be applied to “traditional” topics, but they’re equally applicable if you’re researching something else entirely, as long as it obeys the right kinds of rules.

In the end, the best definition of physics is the most useful one. Physicists should be people who can benefit from being part of physics organizations, from reading physics journals, and especially from training (and having been) physics grad students. The whole reason we have scientific disciplines in the first place is to make it easier for people with common interests to work together. That’s why Akasaki, Amano, and Nakamura aren’t “just” engineers, and why I and my fellow string theorists aren’t “just” mathematicians. We use our knowledge of physics to do our jobs, and that, more than anything else, makes us physicists.


Edit: It has been pointed out to me that there’s a bit more to this story than the main accounts have let on. Apparently another researcher named Herbert Paul Maruska was quite close to getting a blue LED up and running back in the early 1970’s, getting far enough to have a working prototype. There’s a whole fascinating story about the quest for a blue LED, related here. Maruska seems to be on friendly terms with Akasaki, Amano, and Nakamura, and doesn’t begrudge them their recognition.

Science is Debugging

What do I do, when I get to work in the morning?

I debug programs.

I debug programs literally, in that most of the calculations I do are far too complicated to do by hand. I write programs to do my calculations, and invariably these programs have bugs. So, I debug.

I debug programs in a broader sense, too.

In science, a research program is a broad approach, taken by a number of people, used to make progress on some set of important scientific questions. Someone suggests a way forward (“Let’s try using an ansatz of transcendental functions!” “Let’s try to represent our calculations with a geometrical object!”) and they and others apply the new method to as many problems as they can. Eventually the program loses steam, and a new program is proposed.

The thing about these programs is, they’re pretty much never fully fleshed out at the beginning. There’s a general idea, and a good one, but it usually requires refinement. If you just follow the same steps as the first person in the program you’re bound to fail. Instead, you have to tweak the program, broadening it and adapting it to the problem you’re trying to solve.

It’s a heck of a lot like debugging a computer program, really. You start out with a hastily written script, and you try applying it as-is, hoping that it works. Often it doesn’t, and you have to go back, step by step, and figure out what’s going wrong.

So when I debug computer programs at work, I’m doing it with a broader goal. I’m running a scientific program, looking for bugs in that. If and when I find them, I can write new computer programs to figure out what’s going wrong. Then I have to debug those computer programs…


Love It or Hate It, Don’t Fear the Multiverse

“In an infinite universe, anything is possible.”

A nice maxim for science fiction, perhaps. But it probably doesn’t sound like productive science.

A growing number of high profile scientists and science popularizers have come out in favor of the idea that there may exist a “multiverse” of multiple universes, and that this might explain some of the unusual properties of our universe. If there are multiple universes, each with different physical laws, then we must exist in one of the universes with laws capable of supporting us, no matter how rare or unlikely such a universe is. This sort of argument is called anthropic reasoning.

(If you’re picky about definitions and don’t like the idea of more than one universe, think instead of a large universe with many different regions, each one separated from the others. There are some decent physics-based reasons to suppose we live in such a universe.)


Why is anyone in favor of this idea? It all goes back to the Higgs.

The Higgs field interacts with other particles, giving them mass. What most people don’t mention is that the effect, in some sense, goes both ways. Because the Higgs interacts with other particles, the mass of the Higgs is also altered. This alteration is large, much larger than the observed mass of the Higgs. (In fact, in a sense it’s infinite!)

In order for the Higgs to have the mass we observe, then, something has to cancel out these large corrections. That cancellation can either be a coincidence, or there can be a reason for it.
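To get a feel for the size of the coincidence, here is a schematic back-of-the-envelope sketch. The inputs are standard ballpark figures (a Higgs mass of about 125 GeV, corrections cut off at the Planck scale of roughly 10^19 GeV), but the arithmetic is purely illustrative, not an actual quantum field theory calculation.

```python
# Toy illustration of the Naturalness Problem. Schematic numbers only,
# not a real quantum field theory computation.
observed_higgs_mass = 125.0        # GeV
planck_scale = 1.0e19              # GeV, a natural place to cut off the corrections

# Corrections to the Higgs mass *squared* naively scale like the cutoff squared.
naive_correction = planck_scale**2            # GeV^2
observed_mass_squared = observed_higgs_mass**2

# Whatever cancels the correction has to match it to this relative precision:
tuning = observed_mass_squared / naive_correction
print(f"cancellation must be accurate to about 1 part in {1/tuning:.0e}")
# prints roughly 6e+33, i.e. a cancellation at the level of one part in ~10^34
```

A cancellation that precise, with no reason behind it, is exactly the sort of coincidence physicists find hard to accept.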

The trouble is, we’re running out of good reasons. One of the best was supersymmetry, the idea that each particle has a partner with tightly related properties. But if supersymmetry was going to save the day, we probably would have detected some of those partners at the Large Hadron Collider by now. More generally, it can be argued that almost all possible “good reasons” require some new particle to be found at the LHC.

If there are no good reasons, then we’re stuck with a coincidence. (This is often referred to as the Naturalness Problem in particle physics.) And it’s this uncomfortable coincidence that has driven prominent physicists to the arms of the multiverse.

There’s a substantial backlash, though. Many people view the multiverse as a cop-out. Some believe it to be even more toxic than that: if there’s a near-infinite number of possible universes then in principle any unusual feature of our universe could be explained by anthropic reasoning, which sounds like it could lead to the end of physics as we know it.

You can disdain the multiverse as a cop-out, but, as I’ll argue here, you shouldn’t fear it. Those who think the multiverse will destroy physics are fundamentally misunderstanding the way physics research works.

The key thing to keep in mind is that almost nobody out there prefers the multiverse. When a prominent physicist supports the multiverse, that doesn’t mean they’re putting aside productive work on other solutions to the problem. In general, it means they don’t have other solutions to the problem. Supporting the multiverse isn’t going to stop them from having ideas they wouldn’t have had to begin with.

And indeed, many of these people are quite supportive of alternatives to the multiverse. I’ve seen Nima Arkani-Hamed talk about the multiverse, and he generally lists a number of other approaches (some quite esoteric!) that he has worked (and failed to make progress) on, and encourages the audience to look into them.

Physics isn’t a zero-sum game, nor is it ruled by a few prominent people. If a young person has a good idea about how to explain something without the multiverse, they’re going to have all the support and recognition that such an idea deserves.

What the multiverse adds is another track, another potentially worthwhile line of research. Surprising as it may seem, the multiverse doesn’t automatically answer every question. It might not even answer the question of the mass of the Higgs! All that the existence of a multiverse tells us is that we should exist somewhere where intelligent life could exist…but if intelligent life is more likely to exist in a universe very different from ours, then we’re back to square one. There’s a lot of research involved in figuring out just what the multiverse implies, research by people who wouldn’t have been working on this sort of problem if the idea of the multiverse hadn’t been proposed.

That’s the key take-away message here. The multiverse may be wrong, but just considering it isn’t going to destroy physics. Rather, it’s opened up new avenues of research, widening the community of those trying to solve the Naturalness Problem. It may well be a cop-out for individuals, but science as a whole doesn’t have cop-outs: there’s always room for someone with a good idea to sweep away the cobwebs and move things forward.

Does Science have Fads?

97% of climate scientists agree that global warming exists, and is most probably human-caused. On a more controversial note, string theorists vastly outnumber adherents of other approaches to quantum gravity, such as Loop Quantum Gravity.

As many who disagree with climate change or string theory would argue, the majority is not always right. Science should be concerned with truth, not merely with popularity. After all, what if scientists are merely taking part in a fad? What makes climate change any more objectively true than pet rocks?

Apparently this is Wikipedia’s best example of a fad.

People are susceptible to fads, after all. A style of music becomes popular, and everyone’s listening to the same sounds. A style of clothing, and everyone’s wearing the same thing. So if an idea in science became popular, everyone might…write the same papers?

That right there is the problem. Scientists only succeed by creating meaningfully original work. If we don’t discover something new, we can’t publish, and as the old saying goes it’s publish or perish out there. Even if social pressure gets us working on something, if we’re going to get any actual work done there has to be enough there, at least, for us to do something different, something no-one has done before.

This doesn’t mean scientists can’t be influenced by popularity, but it means that that influence is limited by the requirements of doing meaningful, original work. In the case of climate change, climate scientists investigate the topic with so many different approaches and look at so many different areas of impact (for example, did you know rising CO2 levels make the ocean acidic?) that the whole field simply wouldn’t function if climate change wasn’t real: there’d be a contradiction, and most of the myriad projects involving it simply wouldn’t work. As I’ve talked about before, science is an interlocking system, and it’s hard to doubt one part without being forced to doubt everything else.

What about string theory? Here, the situation is a little different. There aren’t experiments testing string theory, so whether or not string theory describes the real world won’t have much effect on whether people can write string theory papers.

The existence of so many string theory papers does say something, though. The up-side of not involving experiments is that you can’t go and test something slightly different and write a paper about it. In order to be original, you really need to calculate something that nobody expected you to calculate, or notice a trend nobody expected to exist. The fact that there are so many more string theorists than loop quantum gravity theorists is in part because there are so many more interesting string theory projects than interesting loop quantum gravity projects.

In string theory, projects tend to be interesting because they unveil some new aspect of quantum field theory, the class of theories that explain the behavior of subatomic particles. Given how hard quantum field theory is, any insight is valuable, and in my experience these sorts of insights are what most string theorists are after. So while string theory’s popularity says little about whether it describes the real world, it says a lot about its ability to say interesting things about quantum field theory. And since quantum field theories do describe the real world, string theory’s continued popularity is also evidence that it continues to be useful.

Climate change and string theory aren’t fads, not exactly. They’re popular, not simply because they’re popular, but because they make important and valuable contributions to science. And as long as science continues to reward original work, that’s not about to change.

What does Copernicus have to say about String Theory?

Putting aside some highly controversial exceptions, string theory has made no testable predictions. Conceivably, a world governed by string theory and a world governed by conventional particle physics would be indistinguishable to every test we could perform today. Furthermore, it’s not even possible to say that string theory predicts the same things with fewer fudge-factors, as string theory descriptions of our world seem to have dramatically more free parameters than conventional ones.

Critics of string theory point to this as a reason why string theory should be excluded from science, sent off to the chilly arctic wasteland of the math department. (No offense to mathematicians, I’m sure your department is actually quite warm and toasty.) What these critics are missing is an important feature of the scientific process: before scientists are able to make predictions, they propose explanations.

To explain what I mean by that, let’s go back to the beginning of the 16th century.

At the time, the authority on astronomy was still Ptolemy’s Syntaxis Mathematica, a book so renowned that it is better known by the Arabic-derived superlative Almagest, “the greatest”. Ptolemy modeled the motions of the planets and stars as a series of interlocking crystal spheres with the Earth at the center, and did so well enough that until that time only minor improvements on the model had been made.

This is much trickier than it sounds, because even in Ptolemy’s day astronomers could tell that the planets did not move in simple circles around the Earth. There were major distortions from circular motion, the most dramatic being the phenomenon of retrograde motion.

If the planets really were moving in simple circles around the Earth, you would expect them to keep moving in the same direction. However, ancient astronomers saw that sometimes, some of the planets moved backwards. The planet would slow down, turn around, go backwards a bit, then come to a stop and turn again.

Thus sparking the invention of the spirograph.

In order to take this into account, Ptolemy introduced epicycles, extra circles of motion for the planets. The epicycle would move on the planet’s primary circle, or deferent, and the planet would rotate around the epicycle.
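If it helps to see the construction as arithmetic, here is a minimal sketch with made-up radii and speeds (not Ptolemy’s actual parameters): the planet’s position is simply the sum of two rotating circles, the deferent plus the epicycle.

```python
import numpy as np

# Toy deferent + epicycle: the planet's position is the sum of two rotating
# vectors. The radii and angular speeds are invented for illustration only.
def planet_position(t, R_deferent=10.0, r_epicycle=3.0,
                    omega_deferent=1.0, omega_epicycle=7.0):
    # The center of the epicycle circles the Earth along the deferent...
    cx = R_deferent * np.cos(omega_deferent * t)
    cy = R_deferent * np.sin(omega_deferent * t)
    # ...and the planet circles that moving center along the epicycle.
    px = cx + r_epicycle * np.cos(omega_epicycle * t)
    py = cy + r_epicycle * np.sin(omega_epicycle * t)
    return px, py

for t in np.linspace(0.0, 1.0, 5):
    print(planet_position(t))
```

Adding more epicycles just means adding more rotating terms to this sum, which is why almost any periodic wobble in a planet’s motion could be accommodated.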


These epicycles weren’t just for retrograde motion, though. They allowed Ptolemy to model all sorts of irregularities in the planets’ motions. Any deviation from a circle could conceivably be plotted out by adding another epicycle (though Ptolemy had other methods to model this sort of thing, among them something called an equant). Enter Copernicus.


Copernicus didn’t like Ptolemy’s model. He didn’t like equants, and what’s more, he didn’t like the idea that the Earth was the center of the universe. Like Plato, he preferred the idea that the center of the universe was a divine fire, a source of heat and light like the Sun. He decided to put together a model of the planets with the Sun in the center. And what he found, when he did, was an explanation for retrograde motion.

In Copernicus’s model, the planets always go in one direction around the Sun, never turning back. However, some of the planets are faster than the Earth, and some are slower. If a planet is slower than the Earth, then when the Earth passes it, that planet will look like it is going backwards, due to the Earth’s speed. This is tricky to visualize: Mars starts out ahead of Earth in its orbit, then falls behind, making it appear to move backwards.
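If the geometry is hard to picture, here is a minimal numerical sketch (circular orbits and rounded orbital periods, so only qualitatively right): track the direction from Earth to Mars as both circle the Sun, and check that the direction temporarily reverses.

```python
import numpy as np

# Toy heliocentric model with circular orbits. Radii in AU, periods in years,
# both rounded, so this is only meant to show the qualitative effect.
def position(radius, period, t):
    angle = 2 * np.pi * t / period
    return np.array([radius * np.cos(angle), radius * np.sin(angle)])

times = np.linspace(0.0, 2.2, 400)       # a bit over two Earth years
directions = []
for t in times:
    earth = position(1.00, 1.00, t)      # Earth: 1 AU, 1 year
    mars = position(1.52, 1.88, t)       # Mars: 1.52 AU, 1.88 years
    d = mars - earth
    directions.append(np.arctan2(d[1], d[0]))   # where Mars appears on the sky

# Stretches where the (unwrapped) angle decreases are retrograde motion.
rate = np.diff(np.unwrap(directions))
print("Mars appears to move backwards at some point:", bool(np.any(rate < 0)))
```

Neither planet ever reverses its actual motion; the apparent reversal comes entirely from the faster Earth overtaking the slower Mars.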

Despite this simplification, Copernicus still needed epicycles. The planets’ motions simply aren’t perfect circles, even around the Sun. After Copernicus got rid of Ptolemy’s equants, his model ended up having just as many epicycles as Ptolemy’s!

Copernicus’s model wasn’t any better at making predictions (in fact, due to some technical lapses in its presentation, it was even a little bit worse). It didn’t have fewer “fudge factors”, as it had about the same number of epicycles. If you lived in the 16th century, you would have been completely justified in believing that the Earth was the center of the universe, and not the Sun. Copernicus had failed to establish his model as scientific truth.

However, Copernicus had still done something Ptolemy didn’t: he had explained retrograde motion. Retrograde motion was a unique, qualitative phenomenon, and while Ptolemy could include it in his math, only Copernicus gave you a reason why it happened.

That’s not enough to become the reigning scientific truth, but it’s a damn good reason to pay attention. It was justification for astronomers to dedicate years of their lives to improving the model, to working with it and trying to get unique predictions out of it. It was enough that, over half a century later, Kepler could take it and turn it into a theory that did make predictions better than Ptolemy, that did have fewer fudge-factors.

String theory as a model of the universe doesn’t make novel predictions, it doesn’t have fewer fudge factors. What it does is explain, explaining spectra of particles in terms of shapes of space and time, the existence of gravity and light in terms of closed and open strings, the temperature of black holes in terms of what’s going on inside them (this last really ought to be the subject of its own post, it’s one of the big triumphs of string theory). You don’t need to accept it as scientific truth. Like Copernicus’s model in his day, we don’t have the evidence for that yet. But you should understand that, as a powerful explanation, the idea of string theory as a model of the universe is worth spending time on.

Of course, string theory is useful for many things that aren’t modeling the universe. But that’s the subject of another post.

Elegance, Not So Mysterious

You’ll often hear theoretical physicists in the media referring to one theory or another as “elegant”. String theory in particular seems to get this moniker fairly frequently.

It may often seem like mathematical elegance is some sort of mysterious sixth sense theorists possess, as inexplicable to the average person as color to a blind person. What’s “elegant” about string theory, after all?

Before explaining elegance, I should take a bit of time to say what it’s not. Elegance isn’t Occam’s razor. It isn’t naturalness, either. Both of those concepts have their own technical definitions.

Elegance, by contrast, is a much hazier, and yet much simpler, notion. It’s hazy enough that any definition could provoke arguments, but I can at least give you an approximate idea by telling you that an elegant theory is simple to describe, if you know the right terms. Often, it is simpler than the phenomenon that it explains.

How does this apply to something like string theory? String theory seems to be incredibly complicated: ten dimensions, curled up in a truly vast number of different ways, giving rise to whole spectrums of particles.

That said, the basic idea is quite simple. String theory asks the question: what if, in addition to fundamental point-particles (zero dimensional objects), there were fundamental objects of other dimensions? That idea leads to complicated consequences: if your theory is going to produce all the particles of the real world then you need the ten dimensions and the supersymmetry and yadda yadda. But the basic idea is simple to describe. An elegant theory can have very complicated consequences, but still be simple to describe.

This, broadly, is the sort of explanation theoretical physicists look for. Math is the kind of field where the same basic systems can describe very complex phenomena. Since theoretical physics is about describing the world in terms of math, the right explanation is usually the most elegant.

This can occasionally trip physicists up when they migrate to other careers. In biology, for example, the elegant solution is often not the right one, because evolution doesn’t care about elegance: evolution just grabs whatever is within reach. Financial systems and economics occasionally have similar problems. All this is to say that while elegance is an important thing for a physicist to strive for, sometimes we have to be careful about it.

Braaains…Boltzmann Braaaains…

In honor of Halloween yesterday, let me tell you a scary physics story:

Sarah was an ordinary college student, in an ordinary dorm room, ordinary bean bag chairs strewn around an ordinary bed with ordinary pink sheets. If she concentrated, she could imagine her ordinary parents back home in ordinary Minnesota. In her ordinary physics textbook on her ordinary desk, ordinary laws of physics were written, described as the result of centuries of experimentation.

Unbeknownst to Sarah, the universe was much more chaotic and random than she realized, and also much more vast. Arbitrary collections of matter formed and dissipated, and over the universe’s long history, any imaginable combination might come to be.

Combinations like Sarah.

You see, Sarah too was a random combination, a chance arrangement of particles formed only a bare few moments ago. In truth, she had no ordinary parents, nor was she surrounded by an ordinary college, and the laws of physics that her textbook asserted were discovered through centuries of experimentation were just a moment’s distribution of ink on a page.

And as she got up to open the door into the vast dark of the outside, her world dissipated, and she ceased to exist.

That’s the life of a Boltzmann Brain. If a universe is random and old enough, it is inevitable that such minds exist. They might have memories of an extended, orderly world, but these would just be illusions, chance arrangements of their momentary neurons. What’s more, they may think they know the laws of physics through careful experiment and reasoning, but such knowledge would be illusory as well. And most frightening of all, if the universe is truly ancient and unimaginably vast, there would be many orders of magnitude more Boltzmann Brains than real humans…so many, that it would almost certainly be the case that you are in fact a Boltzmann Brain right now!

This is legitimately worrying to some physicists. The situation gets a bit more interesting when you remember that, as a Boltzmann Brain, anything you know about physics may well be a lie, since the history of research you think exists might not have. The problem is, if you manage to prove that you are probably a Boltzmann Brain, you had to use physics to do it. But your physics is probably wrong!

This, as Sean Carroll argues, is why the concept of a Boltzmann Brain is self-defeating. It is, in a way, a logical impossibility. And if a universe of Boltzmann Brains is logically impossible, then any physics that makes Boltzmann Brains more likely than normal humans must similarly be wrong. That’s Carroll’s argument, one that he uses to draw specific physical conclusions about the real world, namely a proposal about the properties of the Higgs boson.

It might seem philosophically illegitimate to use such a paradox to argue about the real world. However, philosophers have a similar argument when it comes to such “reality is a lie” scenarios. In general, modern philosophers point out that any argument that proves that all of our knowledge is false or meaningless by necessity also proves itself false or meaningless. This is what allows analytical philosophy to carry forward and make progress, even if it can’t reject the idea that reality is an illusion by more objective means.

With that said, there seems to be a difference between simply rejecting arguments that “show” that the world is an illusion or that we are all Boltzmann Brains, and using those arguments to draw conclusions about other parts of the world. I would be curious if there are similar arguments to Carroll’s in philosophy, arguments that draw conclusions more specific than “we exist and can know things”. Any philosopher readers should feel welcome to chime in in the comments!

And for the rest of you, you probably aren’t a Boltzmann Brain. But if the outside world looks a little too dark tonight…

What are Vacua? (A Point about the String Landscape)

A couple weeks back, there was a bit of a scuffle between Matt Strassler and Peter Woit on the subject of predictions in string theory (or more properly, the question of whether any predictions can be made at all). As a result, Strassler has begun a series on the subject of quantum field theory, string theory, and predictions.

Strassler hasn’t gotten to the topic of string vacua yet, but he’s probably going to cover the subject in a future post. While his take on the subject is likely to be more expansive and precise than mine, I think my perspective on the problem might still be of interest.

Let’s start with the basics: one of the problems often cited with string theory is the landscape problem, the idea that string theory has a metaphorical landscape of around 10^500 vacua.

What are vacua?

Vacua is the plural of vacuum.

Ok, and?

A vacuum is empty space.

That’s what you thought, right? That’s the normal meaning of vacuum. But if a vacuum is empty, how can there be more than one of them, let alone 10^500?

“Empty” is subjective.

Now we’re getting somewhere. The problem with defining a concept like “empty space” in string theory or field theory is that it’s unclear what precisely it should be empty of. Naively, such a space should be empty of “stuff”, or “matter”, but our naive notions of “matter” don’t apply to field theory or string theory. In fact, there is plenty of “stuff” that can be present in “empty” space.

Think about two pieces of construction paper. One is white, the other is yellow. Which is empty? Neither has anything drawn on it, so both are empty, even though they’re different colors.

“Empty space” doesn’t come in multiple colors like construction paper, but there are equivalent parameters that can vary. In quantum field theory, one option is for scalar fields to take different values. In string theory, different dimensions can be curled up in different ways (as an aside, when string theory leads to a quantum field theory often these different curling-up shapes correspond to different values for scalar fields, so the two ideas are related).

So if space can have “stuff” in it and still count as empty, are there any limits on what can be in it?

As it turns out, there is a quite straightforward limit. But to explain it, I need to talk a bit about why physicists care about vacua in the first place.

Why do physicists care about vacua?

In physics, there is a standard modus operandi for solving problems. If you’ve taken even a high school physics course, you’ve probably encountered it in some form. It’s not the only way to solve problems, but it’s one of the easiest. The idea, broadly, is the following:

First get the initial conditions, and then use the laws of physics to see what happens next.

In high school physics, this is how almost every problem works: your teacher tells you what the situation is, and you use what you know to figure out what happens next.

In quantum field theory, things are a bit more subtle, but there is a strong resemblance. You start with a default state, and then find the perturbations, or small changes, around that state.

In high school, your teacher told you what the initial conditions were. In quantum field theory, you need another source for the “default state”. Sometimes, you get that from observations of the real world. Sometimes, though, you want to make a prediction that goes beyond what your observations tell you. In that case, one trick often proves useful:

To find the default state, find which state is stable.

If your system starts out in a state that is unstable, it will change. It will keep changing until eventually it changes into a stable state, where it will stop changing. So if you’re looking for a default state, that state should be one in which the system is stable, where it won’t change.

(I’m oversimplifying things a bit here to make them easier to understand. In particular, I’m making it sound like these things change over time, which is a bit of a tricky subject when talking about different “default” states for the whole of space and time. There’s also a cool story connected to this about why tachyons don’t exist, which I’d love to go into for another post.)
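As a cartoon of a “stable state”, here is a minimal sketch: a toy potential for a single scalar field, with two minima. Each minimum is a candidate “default” value for the field, in other words a candidate vacuum. (The potential is invented for illustration; it isn’t taken from any actual string theory construction.)

```python
import numpy as np

# Toy potential for a single scalar field phi, with two local minima.
# Purely illustrative; not a potential from any real compactification.
def V(phi):
    return (phi**2 - 1.0)**2 + 0.1 * phi

phi = np.linspace(-2.0, 2.0, 4001)
v = V(phi)

# A stable state sits at a local minimum: lower than both of its neighbors.
minima = [phi[i] for i in range(1, len(phi) - 1) if v[i] < v[i - 1] and v[i] < v[i + 1]]
print("Candidate vacua (field values at local minima):", minima)
# Two minima means two candidate "default" states; a landscape with 10^500
# vacua is this same problem, repeated on an enormous scale.
```

With only one minimum there would be no ambiguity; with two, you already need some extra input to decide which one describes our world.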

Since we know that the “default” state has to be stable, if there is only one stable state, we’ve found the default!

Because of this, we can lay down a somewhat better definition:

A vacuum is a stable state.

There’s more to the definition than this, but this should be enough to give you the feel for what’s going on. If we want to know the “default” state of the world, the state which everything else is just a small perturbation on top of, we need to find a vacuum. If there is only one plausible vacuum, then our work is done.

When there are many plausible vacua, though, we have a problem. When there are 10^500 vacua, we have a huge problem.

That, in essence, is why many people despair of string theory ever making any testable predictions. String theory has around 10^500 plausible vacua (for a given, technical, meaning of plausible).

It’s important to remember a few things here.

First, the reason we care about vacuum states is that we want a “default” to make predictions around. That is, in a sense, a technical problem, in that it is an artifact of our method. It’s a result of the fact that we are choosing a default state and perturbing around it, rather than proving things that don’t depend on our choice of default state. That said, this isn’t as useful an insight as it might appear, and as it turns out there is generally very little that can be predicted without choosing a vacuum.

Second, the reason that the large number of vacua is a problem is that if there was only one vacuum, we would know which state was the default state for our world. Instead, we need some other method to pick, out of the many possible vacua, which one to use to make predictions. That is, in a sense, a philosophical problem, in that it asks what seems ostensibly to be a philosophical question: what is the basic, default state of the universe?

This happens to be a slightly more useful insight than the first one, and it leads to a number of different approaches. The most intuitive solution is to just shrug and say that we will see which vacuum we’re in by observing the world around us. That’s a little glib, since many different vacua could lead to very similar observations. A better tactic might be to try to make predictions on general grounds by trying to see what the world we can already observe implies about which vacua are possible, but this is also quite controversial. And there are some people who try another approach, attempting to pick a vacuum not based on observations, but rather on statistics, choosing a vacuum that appears to be “typical” in some sense, or that satisfies anthropic constraints. All of these, again, are controversial, and I make no commentary here about which approaches are viable and which aren’t. It’s a complicated situation and there are a fair number of people working on it. Perhaps, in the end, string theory will be ruled un-testable. Perhaps the relevant solution is right under peoples’ noses. We just don’t know.