Tag Archives: string theory

No-One Can Tell You What They Don’t Understand

On Wednesday, Amanda Peet gave a Public Lecture at Perimeter on string theory and black holes, while I and other Perimeter-folk manned the online chat. If you missed it, it’s recorded online here.

We get a lot of questions in the online chat. Some are quite insightful, some are basic, and some…well, some are kind of strange. Like the person who asked us how holography could be compatible with irrational numbers.

In physics, holography is the idea that you can encode the physics of a wider space using only information on its boundary. If you remember the ’90s or read Buzzfeed a lot, you might remember holograms: weird rainbow-colored images that looked 3d when you turned your head.

On a computer screen, they instead just look awkward.

Holograms in physics are a lot like that, but rather than a 2d image looking like a 3d object, they can be other combinations of dimensions as well. The most famous, AdS/CFT, relates a ten-dimensional space full of strings to a four-dimensional space on its boundary, where the four-dimensional space contains everybody’s favorite theory, N=4 super Yang-Mills.

So from this explanation, it’s probably not obvious what holography has to do with irrational numbers. That’s because there is no connection: holography has nothing to do with irrational numbers.

Naturally, we were all a bit confused, so one of us asked this person what they meant. They responded by asking if we knew what holograms and irrational numbers were. After all, the problem should be obvious then, right?

In this sort of situation, it’s tempting to assume you’re being trolled. In reality, though, the problem was one of the most common in science communication: people can’t tell you what they don’t understand, because they don’t understand it.

When a teacher asks “any questions?”, they’re assuming students will know what they’re missing. But a deep enough misunderstanding doesn’t show itself that way. Misunderstand things enough, and you won’t know you’re missing anything. That’s why it takes real insight to communicate science: you have to anticipate ways that people might misunderstand you.

In this situation, I thought about what associations people have with holograms. While some might remember the rainbow holograms of old, there are other famous holograms that might catch people’s attention.

Please state the nature of the medical emergency.

In science fiction, holograms are 3d projections, ways that computers can create objects out of thin air. The connection to a 2d image isn’t immediately apparent, but the idea that holograms are digital images is central.

Digital images are the key, here. A computer has to express everything in a finite number of bits. It can’t express an irrational number, a number with a decimal expansion that goes on to infinity, at least not without tricks. So if you think that holography is about reality being digital, rather than lower-dimensional, then the question makes perfect sense: how could a digital reality contain irrational numbers?
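You can see this limitation directly in any programming language (a quick illustration of my own, not part of the original question). A 64-bit floating-point number is a finite binary fraction, so whatever a computer stores for the square root of 2 is secretly a rational number:

```python
import math
from fractions import Fraction

root_two = math.sqrt(2)

# Every 64-bit float is a finite binary fraction, so the stored
# "square root of 2" is actually an exact ratio of two integers:
as_ratio = Fraction(root_two)
print(as_ratio.denominator > 1)  # True: it's a genuine fraction

# Squaring the stored value therefore misses 2 by a tiny amount:
print(root_two * root_two == 2)  # False
```

A computer can get as close to an irrational number as its bits allow, but the number it actually holds is always rational.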

This is the sort of thing we have to keep in mind when communicating science. It’s easy to misunderstand, to take some aspect of what someone said and read it through a different lens. We have to think about how others will read our words, we have to be willing to poke and prod until we root out the source of the confusion. Because nobody is just going to tell us what they don’t get.

String Theorists Who Don’t Touch Strings

This week I’ve been busy, attending a workshop here at Perimeter on Superstring Perturbation Theory.

Superstrings are the supersymmetric strings that string theorists use to describe fundamental particles, while perturbation theory is the trick, common in almost every area of physics, of solving a problem by a series of increasingly precise approximations.

Based on that description, you’d think that superstring perturbation theory would be a central topic in string theory research. You wouldn’t expect it to be the sort of thing only a few people at the top of the field dabble in. You definitely wouldn’t expect one of the speakers at the workshop to mention that this might be the first conference on superstring perturbation theory he’s been to since the 1980’s.

String perturbation theory is an important subject, but it’s not one many string theorists use. And the reason why is that, oddly enough, very few string theorists actually use strings.

Looking at arXiv as I’m writing this, I can see only one paper in the theoretical physics section that directly uses strings. Most of them use something else: either older concepts like black holes, quantum field theory, and supergravity, or newer ones like D-branes. If you talked to the people who wrote those papers, though, most of them would describe themselves as string theorists.

The reason for the disconnect is that string theory as a field is much more than just the study of strings. String theory is a ten-dimensional universe (or eleven with M theory), where different ways of twisting up some of the dimensions result in different apparent physics in the remaining ones. It’s got strings, but also higher-dimensional membranes (and in the eleven dimensions of M theory it only has membranes, not strings). It’s the recipe for a long list of exotic quantum field theories, and a list of possible relations between them. It’s a new way to look at geometry, to think about the intersection of the nature of space and the dynamics of what inhabits it.

If string theory were really just about strings, it likely wouldn’t have grown any bigger than its quantum gravity rivals, like Loop Quantum Gravity. String theory grew because it inspired research directions that went far afield, and far beyond its conceptual core.

That’s part of why most string theorists will be baffled if you insist that string theory needs proof, or that it’s not the right approach to quantum gravity. For most string theorists, it doesn’t matter whether we live in a stringy world, whether gravity might eventually be described by another model. For most string theorists, string theory is a tool, one that opened up fields of inquiry that don’t have much to do with predicting the output of the LHC or describing the early universe. Or, in many cases, actually using strings.

Only the Boring Kind of Parallel Universes

PARALLEL UNIVERSES AT THE LHC??

No. No. Bad journalist. See what happens when you…

Mir Faizal, one of the three-strong team of physicists behind the experiment, said: “Just as many parallel sheets of paper, which are two dimensional objects [breadth and length] can exist in a third dimension [height], parallel universes can also exist in higher dimensions.”

Bad physicist, bad! No biscuit for you!

Not nice at all!

For the technically-minded, Sabine Hossenfelder goes into thorough detail about what went wrong here. Not only do parallel universes have nothing to do with what Mir Faizal and collaborators have been studying, but the actual paper they’re hyping here is apparently riddled with holes.

BLACK holes! …no, actually, just logic holes.

But why did parallel universes even come up? If they have nothing to do with Faizal’s work, why did he mention them? Do parallel universes ever come up in real physics at all?

The answer to this last question is yes. There are real, viable ideas in physics that involve parallel universes. The universes involved, however, are usually boring ones.

The ideas are generally referred to as brane-world theories. If you’ve heard of string theory, you’ve probably heard that it proposes that the world is made of tiny strings. That’s all well and good, but it’s not the whole story. String theory has other sorts of objects in it too: higher-dimensional generalizations of strings called membranes, branes for short. In fact, M theory, the theory of which every string theory is some low-energy limit, has no strings at all, just branes.

When these branes are one-dimensional, they’re strings. When they’re two-dimensional, they’re what you would normally picture as a membrane, a vibrating sheet, potentially infinite in size. When they’re three-dimensional, they fill three-dimensional space, again potentially up to infinity.

Filling three dimensional space, out to infinity…well that sure sounds a whole lot like what we’d normally call a universe.

In brane-world constructions, what we call our universe is precisely this sort of three-dimensional brane. It then lives in a higher-dimensional space, where its position in this space influences things like the strength of gravity, or the speed at which the universe expands.

Sometimes (not all the time!) these sorts of constructions include other branes, besides the one that contains our universe. These other branes behave in a similar way, and can have very important effects on our universe. They, if anything, are the parallel universes of theoretical physics.

It’s important to point out, though, that these aren’t the sort of sci-fi parallel universes you might imagine! You aren’t going to find a world where everyone has a goatee, or even a world with an empty earth full of teleporting apes.

Pratchett reference!

That’s because, in order for these extra branes to do useful physical work, they generally have to be very different from our world. They’re worlds where gravity is very strong, or worlds with dramatically different densities of energy and matter. In the end, this means they’re not even the sort of universes that produce interesting aliens, or where we could send an astronaut, or really anything that lends itself well to (non-mathematical) imagination. From a sci-fi perspective, they’re as boring as can be.

Faizal’s idea, though, doesn’t even involve the boring kind of parallel universe!

His idea involves extra dimensions, specifically what physicists refer to as “large” extra dimensions, in contrast with the small extra dimensions of string theory. Large extra dimensions can explain the weakness of gravity, and theories that use them often predict that it’s much easier to create microscopic black holes than it otherwise would be. So far, these models haven’t had much luck at the LHC, and while I get the impression that they haven’t been completely ruled out, they aren’t very popular anymore.

The thing is, extra dimensions don’t mean parallel universes.

In fiction, the two get used interchangeably a lot. People go to “another dimension”, vaguely described as traveling along another dimension of space, and find themselves in a strange new world. In reality, though, there’s no reason to think that traveling along an extra dimension would put you in any sort of “strange new world”. The whole reason that our world is limited to three dimensions is that it’s “bound” to something: a brane, in the string theory picture. If there’s not another brane to bind things to, traveling in an extra dimension won’t put you in a new universe; it will just put you in an empty space where none of the types of matter you’re made of even exist.

It’s really tempting, when talking to laypeople, to fall back on stories. If you mention parallel universes, their faces light up with the idea that this is something they get, if only from imaginary examples. It gives you that same sense of accomplishment as if you had actually taught them something real. But you haven’t. It’s wrong, and Mir Faizal shouldn’t have stooped to doing it.

How to Predict the Mass of the Higgs

Did Homer Simpson predict the mass of the Higgs boson?

No, of course not.

Apart from the usual reasons, he’s off by more than a factor of six.

If you play with the numbers, it looks like Simon Singh (the popular science writer who reported the “discovery” Homer made as a throwaway joke in a 1998 Simpsons episode) made the classic physics mistake of losing track of a factor of 2π. In particular, it looks like he mistakenly thought that the Planck constant, h, was equal to the reduced Planck constant, ħ, divided by 2π, when actually it’s ħ times 2π. So while Singh read Homer’s prediction as 123 GeV, surprisingly close to the actual Higgs mass of 125 GeV found in 2012, in fact Homer predicted the somewhat more embarrassing value of 775 GeV.
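As a back-of-the-envelope check (my own arithmetic, assuming the only slip was that single factor of 2π): multiplying Singh’s 123 GeV reading by 2π lands within rounding distance of the 775 GeV figure.

```python
import math

singh_reading = 123.0  # GeV: Singh's reading of Homer's "prediction"

# Undoing the h-versus-hbar mix-up multiplies the result by 2*pi:
corrected = singh_reading * 2 * math.pi

print(round(corrected))  # ~773 GeV, within rounding of the 775 GeV quoted above
```

The small remaining gap just comes from the 123 GeV figure itself being rounded.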

D’Oh!

That was boring. Let’s ask a more interesting question.

Did Gordon Kane predict the mass of the Higgs boson?

I’ve talked before about how it seems impossible that string theory will ever make any testable predictions. The issue boils down to one of too many possibilities: string theory predicts different consequences for different ways that its six (or seven for M theory) extra dimensions can be curled up. Since there is an absurdly vast number of ways this can be done, anything you might want to predict (say, the mass of the electron) has an absurd number of possible values.

Gordon Kane and collaborators get around this problem by tackling a different one. Instead of trying to use string theory to predict things we already know, like the mass of the electron, they assume these things are already true. That is, they assume we live in a world with electrons that have the mass they really have, and quarks that have the mass they really have, and so on. They assume that we live in a world that obeys all of the discoveries we’ve already made, and a few we hope to make. And, they assume that this world is a consequence of string (or rather M) theory.

From that combination of assumptions, they then figure out the consequences for things that aren’t yet known. And in a 2011 paper, they predicted the Higgs mass would be between 105 and 129 GeV.

I have a lot of sympathy for this approach, because it’s essentially the same thing that non-string-theorists do. When a particle physicist wants to predict what will come out of the LHC, they don’t try to get it from first principles: they assume the world works as we have discovered, make a few mild extra assumptions, and see what new consequences come out that we haven’t observed yet. If those particle physicists can be said to make predictions from supersymmetry, or (shudder) technicolor, then Gordon Kane is certainly making predictions from string theory.

So why haven’t you heard of him? Even if you have, why, if this guy successfully predicted the mass of the Higgs boson, are people still saying that you can’t make predictions with string theory?

Trouble is, making predictions is tricky.

Part of the problem is timing. Gordon Kane’s paper went online in December of 2011. The Higgs mass was announced in July 2012, so you might think Kane got a six month head-start. But when something is announced isn’t the same as when it’s discovered. For a big experiment like the Large Hadron Collider, there’s a long road between the first time something gets noticed and the point where everyone is certain enough that they’re ready to announce it to the world. Rumors fly, and it’s not clear that Kane and his co-authors wouldn’t have heard them.

Assumptions are the other issue. Remember when I said, a couple paragraphs up, that Kane’s group assumed “that we live in a world that obeys all of the discoveries we’ve already made, and a few we hope to make”? That last part is what makes things tricky. There were a few extra assumptions Kane made, beyond those needed to reproduce the world we know. For many people, some of these extra assumptions are suspicious. They worry that the assumptions might have been chosen, not just because they made sense, but because they happened to give the right (rumored) mass of the Higgs.

If you want to predict something in physics, it’s not just a matter of getting in ahead of the announcement with the right number. For a clear prediction, you need to be early enough that the experiments haven’t yet even seen hints of what you’re looking for. Even then, you need your theory to be suitably generic, so that it’s clear that your prediction is really the result of the math and not of your choices. You can trade off aspects of this: more accuracy for a less generic theory, better timing for looser predictions. Get the formula right, and the world will laud you for your prediction. Wrong, and you’re Homer Simpson. Somewhere in between, though, and you end up in that tricky, tricky grey area.

Like Gordon Kane.

Living in a Broken World: Supersymmetry We Can Test

I’ve talked before about supersymmetry. Supersymmetry relates particles with different spins, linking spin 1 force-carrying particles like photons and gluons to spin 1/2 particles similar to electrons, and spin 1/2 particles in turn to spin 0 “scalar” particles, the same general type as the Higgs. I emphasized there that, if two particles are related by supersymmetry, they will have some important traits in common: the same mass and the same interactions.

That’s true for the theories I like to work with. In particular, it’s true for N=4 super Yang-Mills. Adding supersymmetry allows us to tinker with neater, cleaner theories, gaining mastery over rice before we start experimenting with the more intricate “sushi” of theories of the real world.

However, it should be pretty clear that we don’t live in a world with this sort of supersymmetry. A quick look at the Standard Model indicates that no two known particles interact in precisely the same way. When people try to test supersymmetry in the real world, they’re not looking for this sort of thing. Rather, they’re looking for broken supersymmetry.

In the past, I’ve described broken supersymmetry as like a broken mirror: the two sides are no longer the same, but you can still predict one side’s behavior from the other. When supersymmetry is broken, related particles still have the same interactions. Now, though, they can have different masses.

The simplest version of supersymmetry, N=1, gives one partner to each particle. Since nothing in the Standard Model can be partners of each other, if we have broken N=1 supersymmetry in the real world then we need a new particle for each existing one…and each one of those particles has a potentially unknown, different mass. And if that sounds rather complicated…

Baroque enough to make Rubens happy.

That, right there, is the Minimal Supersymmetric Standard Model, the simplest thing you can propose if you want a world with broken supersymmetry. If you look carefully, you’ll notice that it’s actually a bit more complicated than just one partner for each known particle: there are a few extra Higgs fields as well!

If we’re hoping to explain anything in a simpler way, we seem to have royally screwed up. Luckily, though, the situation is not quite as ridiculous as it appears. Let’s go back to the mirror analogy.

If you look into a broken mirror, you can still have a pretty good idea of what you’ll see…but in order to do so, you have to know how the mirror is broken.

Similarly, supersymmetry can be broken in different ways, by different supersymmetry-breaking mechanisms.

The general idea is to start with a theory in which supersymmetry is precisely true, and all supersymmetric partners have the same mass. Then, consider some Higgs-like field. Like the Higgs, it can take some constant value throughout all of space, forming a background like the color of a piece of construction paper. While the rules that govern this field would respect supersymmetry, any specific value it takes wouldn’t. Instead, it would be biased: the spin 0, Higgs-like field could take on a constant value, but its spin 1/2 supersymmetric partner couldn’t. (If you want to know why, read my post on the Higgs linked above.)

Once that field takes on a specific value, supersymmetry is broken. That breaking then has to be communicated to the rest of the theory, via interactions between different particles. There are several different ways this can work: perhaps the interactions come from gravity, or are the same strength as gravity. Maybe instead they come from a new fundamental force, similar to the strong nuclear force but harder to discover. They could even come as byproducts of the breaking of other symmetries.

Each one of these options has different consequences, and leads to different predictions for the masses of undiscovered partner particles. They tend to have different numbers of extra parameters (for example, if gravity-based interactions are involved there are four new parameters, and an extra sign, that must be fixed). None of them have an entire standard model-worth of new parameters…but all of them have at least a few extra.

(Brief aside: I’ve been talking about the Minimal Supersymmetric Standard Model, but these days people have largely given up on finding evidence for it, and are exploring even more complicated setups like the Next-to-Minimal Supersymmetric Standard Model.)

If we’re introducing extra parameters without explaining existing ones, what’s the point of supersymmetry?

Last week, I talked about the problem of fine-tuning. I explained that when physicists are worried about fine-tuning, what we’re really worried about is whether the sorts of ultimate (low number of parameters) theories that we expect to hold could give rise to the apparently fine-tuned world we live in. In that post, I was a little misleading about supersymmetry’s role in that problem.

The goal of introducing (broken) supersymmetry is to solve a particular set of fine-tuning problems, mostly one specific one involving the Higgs. This doesn’t mean that supersymmetry is the sort of “ultimate” theory we’re looking for; rather, supersymmetry is one of the few ways we know to bridge the gap between “ultimate” theories and a fine-tuned real world.

To explain it in terms of the language of the last post, it’s hard to find one of these “ultimate” theories that gives rise to a fine-tuned world. What’s quite a bit easier, though, is finding one of these “ultimate” theories that gives rise to a supersymmetric world, which in turn gives rise to a fine-tuned real world.

In practice, these are the sorts of theories that get tested. Very rarely are people able to propose testable versions of the more “ultimate” theories. Instead, one generally finds intermediate theories, theories that can potentially come from “ultimate” theories, and builds general versions of those that can be tested.

These intermediate theories come in multiple levels. Some physicists look for the most general version, theories like the Minimal Supersymmetric Standard Model with a whole host of new parameters. Others look for more specific versions, choices of supersymmetry-breaking mechanisms. Still others try to tie it further up, getting close to candidate “ultimate” theories like M theory (though in practice they generally make a few choices that put them somewhere in between).

The hope is that with a lot of people covering different angles, we’ll be able to make the best use of any new evidence that comes in. If “something” is out there, there are still a lot of choices for what that something could be, and it’s the job of physicists to try to understand whatever ends up being found.

Not bad for working in a broken world, huh?

The Real Problem with Fine-Tuning

You’ve probably heard it said that the universe is fine-tuned.

The Standard Model, our current best understanding of the rules that govern particle physics, is full of lots of fiddly adjustable parameters. The masses of fundamental particles and the strengths of the fundamental forces aren’t the sort of thing we can predict from first principles: we need to go out, do experiments, and find out what they are. And you’ve probably heard it argued that, if these fiddly parameters were even a little different from what they are, life as we know it could not exist.

That’s fine-tuning…or at least, that’s what many people mean when they talk about fine-tuning. It’s not exactly what physicists mean though. The thing is, almost nobody who studies particle physics thinks the parameters of the Standard Model are the full story. In fact, any theory with adjustable parameters probably isn’t the full story.

It all goes back to a point I made a while back: nature abhors a constant. The whole purpose of physics is to explain the natural world, and we have a long history of taking things that look arbitrary and linking them together, showing that reality has fewer parameters than we had thought. This is something physics is very good at. (To indulge in a little extremely amateurish philosophy, it seems to me that this is simply an inherent part of how we understand the world: if we encounter a parameter, we will eventually come up with an explanation for it.)

Moreover, at this point we have a rough idea of what this sort of explanation should look like. We have experience playing with theories that don’t have any adjustable parameters, or that only have a few: M theory is an example, but there are also more traditional quantum field theories that fill this role with no mention of string theory. From our exploration of these theories, we know that they can serve as the kind of explanation we need: in a world governed by one of these theories, people unaware of the full theory would observe what would look at first glance like a world with many fiddly adjustable parameters, parameters that would eventually turn out to be consequences of the broader theory.

So for a physicist, fine-tuning is not about those fiddly parameters themselves. Rather, it’s about the theory that predicts them. Because we have experience playing with these sorts of theories, we know roughly the sorts of worlds they create. What we know is that, while sometimes they give rise to worlds that appear fine-tuned, they tend to only do so in particular ways. Setups that give rise to fine-tuning have consequences: supersymmetry, for example, can give rise to an apparently fine-tuned universe but has to have “partner” particles that show up in powerful enough colliders. In general, a theory that gives rise to apparent fine-tuning will have some detectable consequences.

That’s where physicists start to get worried. So far, we haven’t seen any of these detectable consequences, and it’s getting to the point where we could have, had they been the sort many people expected.

Physicists are worried about fine-tuning, but not because it makes the universe “unlikely”. They’re worried because the more finely-tuned our universe appears, the harder it is to find an explanation for it in terms of the sorts of theories we’re used to working with, and the less likely it becomes that someone will discover a good explanation any time soon. We’re quite confident that there should be some explanation, hundreds of years of scientific progress strongly suggest that to be the case. But the nature of that explanation is becoming increasingly opaque.

The Three Things Everyone Gets Wrong about the Big Bang

Ah, the Big Bang, our most science-y of creation myths. Everyone knows the story of how the universe and all its physical laws emerged from nothing in a massive explosion, growing from a singularity to the size of a breadbox until, over billions of years, it became the size it is today.


A hot dense state, if you know what I mean.

…actually, almost nothing in that paragraph is true. There are a lot of myths about the Big Bang, born from physicists giving sloppy explanations. Here are three things most people get wrong about the Big Bang:

1. A Massive Explosion:

When you picture the big bang, don’t you imagine that something went, well, bang?

In movies and TV shows, a time traveler visiting the big bang sees only an empty void. Suddenly, an explosion lights up the darkness, shooting out stars and galaxies until it has created the entire universe.

Astute readers might find this suspicious: if the entire universe was created by the big bang, then where does the “darkness” come from? What does the universe explode into?

The problem here is that, despite the name, the big bang was not actually an explosion.

In picturing the universe as an explosion, you’re imagining the universe as having finite size. But it’s quite likely that the universe is infinite. Even if it is finite, it’s finite like the surface of the Earth: as Columbus (and others) experienced, you can’t get to the “edge” of the Earth no matter how far you go: eventually, you’ll just end up where you started. If the universe is truly finite, the same is true of it.

Rather than an explosion in one place, the big bang was an explosion everywhere at once. Every point in space was “exploding” at the same time. Each point was moving farther apart from every other point, and the whole universe was, as the song goes, hot and dense.

So what do physicists mean when they say that the universe at some specific time was the size of a breadbox, or a grapefruit?

It’s just sloppy language. When these physicists say “the universe”, what they mean is just the part of the universe we can see today, the Hubble Volume. It is that (enormously vast) space that, once upon a time, was merely the size of a grapefruit. But it was still adjacent to infinitely many other grapefruits of space, each one also experiencing the big bang.

2. It began with a Singularity:

This one isn’t so much definitely wrong as probably wrong.

If the universe obeys Einstein’s Theory of General Relativity perfectly, then we can make an educated guess about how it began. By tracking back the expansion of the universe to its earliest stages, we can infer that the universe was once as small as it can get: a single, zero-dimensional point, or a singularity. The laws of general relativity work the same backwards and forwards in time, so just as we could see a star collapsing and know that it is destined to form a black hole, we can see the universe’s expansion and know that if we traced it back it must have come from a single point.

This is all well and good, but there’s a problem with how it begins: “If the universe obeys Einstein’s Theory of General Relativity perfectly”.

In this situation, general relativity predicts an infinitely small, infinitely dense point. As I’ve talked about before, in physics an infinite result is almost never correct. When we encounter infinity, almost always it means we’re ignoring something about the nature of the universe.

In this case, we’re ignoring Quantum Mechanics. Quantum Mechanics naturally makes physics somewhat “fuzzy”: the Uncertainty Principle means that a quantum state can never be exactly in one specific place.
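In symbols, the position-momentum version of the Uncertainty Principle reads:

```latex
\Delta x \, \Delta p \geq \frac{\hbar}{2}
```

A state squeezed into an exact position (Δx = 0) would need an infinite spread in momentum, so a perfectly pointlike universe is forbidden in the same way a perfectly pinned-down particle is.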

Combining quantum mechanics and general relativity is famously tricky, and the difficulty boils down to getting rid of pesky infinite results. However, several approaches exist to solving this problem, the most prominent of them being String Theory.

If you ask someone to list string theory’s successes, one thing you’ll always hear mentioned is string theory’s ability to understand black holes. In general relativity, black holes are singularities: infinitely small, and infinitely dense. In string theory, black holes are made up of combinations of fundamental objects: strings and membranes, curled up tight, but crucially not infinitely small. String theory smooths out singularities and tamps down infinities, and the same story applies to the infinity of the big bang.

String theory isn’t alone in this, though. Less popular approaches to quantum gravity, like Loop Quantum Gravity, also tend to “fuzz” out singularities. Whichever approach you favor, it’s pretty clear at this point that the big bang didn’t really begin with a true singularity, just a very compressed universe.

3. It created the laws of physics:

Physicists will occasionally say that the big bang determined the laws of physics. Fans of Anthropic Reasoning in particular will talk about different big bangs in different places in a vast multiverse, each producing different physical laws.

I’ve met several people who were very confused by this. If the big bang created the laws of physics, then what laws governed the big bang? Don’t you need physics to get a big bang in the first place?

The problem here is that “laws of physics” doesn’t have a precise definition. Physicists use it to mean different things.

In one (important) sense, each fundamental particle is its own law of physics. Each one represents something that is true across all of space and time, a fact about the universe that we can test and confirm.

However, these aren’t the most fundamental laws possible. In string theory, the particles that exist in our four dimensions (three space dimensions, and one of time) change depending on how six “extra” dimensions are curled up. Even in ordinary particle physics, the value of the Higgs field determines the mass of the particles in our universe, including things that might feel “fundamental” like the difference between electromagnetism and the weak nuclear force. If the Higgs field had a different value (as it may have early in the life of the universe), these laws of physics would have been different. These sorts of laws can be truly said to have been created by the big bang.

The real fundamental laws, though, don’t change. Relativity is here to stay, no matter what particles exist in the universe. So is quantum mechanics. The big bang didn’t create those laws, it was a natural consequence of them. Rather than springing physics into existence from nothing, the big bang came out of the most fundamental laws of physics, then proceeded to fix the more contingent ones.

In fact, the big bang might not have even been the beginning of time! As I mentioned earlier in this article, most approaches to quantum gravity make singularities “fuzzy”. One thing these “fuzzy” singularities can do is “bounce”, going from a collapsing universe to an expanding universe. In Cyclic Models of the universe, the big bang was just the latest in a cycle of collapses and expansions, extending back into the distant past. Other approaches, like Eternal Inflation, instead think of the big bang as just a local event: our part of the universe happened to be dense enough to form a big bang, while other regions were expanding even more rapidly.

So if you picture the big bang, don’t just imagine an explosion. Imagine the entire universe expanding at once, changing and settling and cooling until it became the universe as we know it today, starting from a world of tangled strings or possibly an entirely different previous universe.

Sounds a bit more interesting to visit in your TARDIS, no?

Physical Truths, Lost to the Ages

For all you tumblr-ers out there (tumblr-ists? tumblr-dwellers?), 4 gravitons is now on tumblr. It’s mostly going to be links to my blog posts, with the occasional re-blog of someone else’s work if something catches my eye.

Nima Arkani-Hamed gave a public lecture at Perimeter yesterday, which I encourage you to watch if you have time, once it’s up on the Perimeter site. He also gave a technical talk earlier in the day, where he finished up by making the following (intentionally) provocative statement:

There is no direct evidence of what happened during the Big Bang that could have survived till today.

He clarified that he doesn’t just mean “evidence we can currently detect”. Rather, there’s a limit on what we can know, even with the most precise equipment possible. The details of what happened at the Big Bang (the sorts of precise details that would tell you, for example, whether it is best described by String Theory or some other picture) would get diluted as the universe expands, until today they would be so subtle and so rare that they fall below the level we could even in principle detect. We simply don’t have enough information available, no matter how good our technology gets, to detect them in a statistically significant way.

If this talk had happened last week, I could have used this in my spooky Halloween post. This is exactly the sort of thing that keeps physicists up at night: the idea that, fundamentally, there may be things we can never truly know about the universe, truths lost to the ages.

It’s not quite as dire as it sounds, though. To explain why, let me mention another great physics piece, Tom Stoppard’s Arcadia.

Despite appearances, this is in fact a great work of physics popularization.

Arcadia is a play about entropy. The play depicts two time periods, the early 19th century and the present day. In the present day a pair of scholars, Hannah and Bernard, argue about the events of the 19th century, when the house was occupied by a mathematically precocious girl named Thomasina and her tutor Septimus. Thomasina makes early discoveries about fractals and (to some extent) chaos theory, while Septimus gradually falls in love with her. In the present, the two scholars gradually get closer to the truth, going from a false theory that one of the guests at the house was killed by Lord Byron, to speculation that Septimus was the one to discover fractals, to finally getting a reasonably accurate idea of how the events of the story unfolded. Still, they never know everything, and the play emphasizes that certain details (documents burned in a fire, the true feelings of some of the people) will be forever lost to the ages.

The key point here is that, even with incomplete information, even without the ability to fully test their hypotheses and get all the details, the scholars can still make progress. They can propose accounts of what happened, accounts that have implications they can test, that might be proven wrong or right by future discoveries. Their accounts will also have implications they can’t test: lost letters, feelings never written down. But the better their account, the more it will explain, and the longer it will agree with anything new they manage to turn up.

That’s the way out of the problem Nima posed. We can’t know the truth of what happened at the Big Bang directly. But if we have a theory of physics that describes everything we can test, it’s likely to also make a prediction for what happened in the Big Bang. In science, most of the time you don’t have direct evidence. Rather, you have a successful theory, one that has succeeded under scrutiny many times in many contexts, enough that you trust it even when it goes out of the area you’re comfortable testing. That’s why physicists can make statements about what it’s like on the inside of a black hole, and it’s why it’s still good science to think about the Big Bang even if we can’t gather direct evidence about the details of how it took place.

All that said, Nima is well aware of this, and the problem still makes him uncomfortable. It makes me uncomfortable too. Saying that something is completely outside of our ability to measure, especially something as fundamental and important as the Big Bang, is not something we physicists can generally be content with. Time will tell whether there’s a way around the problem.

Am I a String Theorist?

Perimeter, like most institutes of theoretical physics, divides their researchers into semi-informal groups. At Perimeter, these are:

  • Condensed Matter
  • Cosmology
  • Mathematical Physics
  • Particle Physics
  • Quantum Fields and Strings
  • Quantum Foundations
  • Quantum Gravity
  • Quantum Information
  • Strong Gravity

I’m in the Quantum Fields and Strings group, which many people seem to refer to simply as the String Theory group. So for the past week or so, I’ve been introducing myself as a String Theorist. As I briefly mention in my Who Am I? post, this isn’t completely accurate.

Am I a String Theorist?

The theories that I study do derive from string theory. They were first framed by string theorists, and research into them is still deeply intertwined with string theory research. I’ve definitely had occasion to compare my results to those of string theorists, or to bring in calculations by string theorists to advance my work.

And if you’re the kind of person who views the world as a competition between string theory and its rivals (like Loop Quantum Gravity) then I suppose I’m on the string theory “side”. I’m optimistic, at least, that the reason string theory research is so much more common than any other approach to quantum gravity is simply that string theory provides many more interesting and viable projects for researchers.

On the other hand, though, there’s the basic fact that the theories I work with are not, themselves, string theories. They’re quantum field theories, the broader class that encompasses the modern synthesis of quantum mechanics and special relativity. The theories I work with are often reasonably close to the well-tested theories of the real world, close enough that the calculations are more “particle physics” than they are “string theory”.

Of course, all of that could change. One of the great things about string theory is the way it connects lots of different interesting quantum field theories together. There’s a “string”, the “GKP string”, involved in the work of Basso, Sever, and Vieira, work that I will probably get involved with here at Perimeter. The (2,0) theory is a quantum field theory, but it’s much closer to string theory than to particle physics, so if I get more involved with the (2,0) theory would that make me a string theorist?

The fact is, these days string theory is so ubiquitous that the question “Am I a String Theorist?” doesn’t actually mean anything. String theory is there, lurking in the background, able to get involved at any time even if it’s not directly involved at present. Theoretical physicists don’t fall into neat categories.

I am a String Theorist. Also, I am not.

N=8: That’s a Whole Lot of Symmetry

In two weeks, I’m planning an extensive overhaul of the blog. I’ll be switching from 4gravitonsandagradstudent.wordpress.com to just 4gravitons.wordpress.com, since I’m no longer a grad student. Don’t worry, I’ll be forwarding traffic from the old address, so if you miss the changeover you’ll have plenty of time to readjust. I’ll also be changing the blog’s look a bit, and adding some new tools and sections, including my current project, a series on the theory N=8 supergravity. This post will be the last in the N=8 supergravity series.

I’ve told you about how gravity can be thought of as interactions with spin 2 particles, called gravitons. I’ve talked about how adding supersymmetry gives you a whole new type of particle, a gravitino, one different from all of the other particles we’ve seen in nature. Add supersymmetry to gravity, and you get a type of theory called supergravity.

In this post I want to discuss a particularly interesting form of supergravity. It’s called N=8 supergravity, and it’s closely related to N=4 super Yang-Mills.

In my articles about N=4 super Yang-Mills, I talked about supersymmetry. Supersymmetry is a relationship between particles of spin X and particles of spin X-½, but it gets more complicated when N (the number of “directions” of supersymmetry) is greater than one.

I’d encourage you to read at least the two links in the above paragraph. The gist is that just like a symmetrical object can be turned in different directions and still remain the same, a supersymmetrical theory can be “turned” so that a particle with spin X becomes a particle of spin X-½ (a different type of particle), and the theory will remain the same. The higher the number N, the more different directions the theory can be “turned”.

N=4 was something I could depict in a picture. We started with a particle of spin 1, then could “turn” it in four different directions, each resulting in a different particle of spin ½. By combining two different “turns” we ended up with six distinct particles of spin 0. Miraculously, I could fit this all into one image.

N=8 is tougher. This time, we start with 1 particle of spin 2: the graviton, the particle that corresponds to the force of gravity. From there we can “turn” the theory in eight different directions, leading to 8 different gravitino particles with spin 3/2.

After that, things get more complicated. You can “turn” the theory twice to reach spin 1. Spin 1 particles correspond to Yang-Mills forces, the fundamental forces of nature (besides gravity). Photons are the spin 1 particles that correspond to Electromagnetism. The spin 1 particles here, connected as they are to gravity by supersymmetry, are typically called graviphotons. There are 28 distinct graviphotons in N=8 supergravity.

From the graviphotons, we can keep turning, getting to spin ½, where we find 56 new particles of the same “type” as electrons and quarks. On our fourth turn, we get to spin 0, the scalars, with 70 new particles. Turning further takes us back: from spin 0 to spin ½, spin ½ to spin 1, spin 1 to spin 3/2, and spin 3/2 to spin 2, back where we started after eight “turns”.

I’ve tried to depict this in the same way as N=4 super Yang-Mills, but there’s just no way to fit everything in. The best I can do is to take a slice through the space, letting certain particles overlap to give at best a general impression of what’s going on.

Graviton in black, gravitinos in grey, graviphotons in yellow, fermions in orange, scalars in red, making a firework of incomprehensible graphics. Incidentally, happy 4th of July to my American readers.

That picture doesn’t give you any intuition about the numbers. It doesn’t show you why there are 28 graviphotons, or 70 scalars. To explain that, it’s best to turn to another, hopefully more familiar picture, Pascal’s triangle.

Getting math class flashbacks yet?

Pascal’s triangle is a way of writing down how many distinct combinations you can make out of a list, and that’s really all that’s going on here. If you have four directions to “turn” and you pick one, you have four options, while picking two gives you six distinct choices. That’s just the 1-4-6-4-1 row of the triangle. If you go down to the eighth row, you’ll spot the numbers from N=8 supergravity: 1 graviton, 8 gravitinos, 28 graviphotons, 56 fermions, and 70 scalars.
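If you want to check the counting yourself, it’s just binomial coefficients: the number of ways to pick k of the 8 supersymmetry “turns”. A quick sketch in Python (the spin labels are just annotations I’ve added for readability):

```python
from math import comb

# Row 8 of Pascal's triangle: comb(8, k) counts the distinct ways to
# apply k of the 8 supersymmetry "turns", giving the multiplicity of
# particles at each spin in N=8 supergravity.
labels = [
    "graviton (spin 2)",
    "gravitinos (spin 3/2)",
    "graviphotons (spin 1)",
    "fermions (spin 1/2)",
    "scalars (spin 0)",
]
for k, label in enumerate(labels):
    print(f"{comb(8, k):2d} {label}")
# Prints the counts 1, 8, 28, 56, 70 from the post.
```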

That’s a lot of particles. With that many particles, you might wonder if you could somehow fit the real world in there.

Actually, that isn’t such a naive thought. When N=8 supergravity was first discovered, people tried to fit the existing particles of nature inside it, hoping that it could explain them. Over the years though, it was realized that N=8 supergravity simply doesn’t provide enough tools to fully capture the particles of the standard model. Something more diverse, like string theory, would be needed.

That means that N=8 supergravity, like many of the things theorists call theories, does not describe the real world. Instead, it’s interesting for a different reason.

You’ve probably heard that gravity and quantum mechanics are incompatible. That’s not exactly true: you can write down a quantum theory of gravity about as easily as you can write down a quantum theory of anything else. The problem is that most such theories have divergences, infinite results that shouldn’t be infinite. Dealing with those results involves a process called renormalization, which papers over the infinities but reduces our ability to make predictions. For gravity theories, this process has to be performed an infinite number of times, resulting in an infinite loss of predictability. So while you can certainly write down a theory of quantum gravity, you can’t predict anything with it.

String theory is different. It doesn’t have the same sorts of infinite results, and doesn’t require renormalization. That, really, is its purpose, its biggest virtue: everything else is a side benefit.

N=4 super Yang-Mills isn’t a theory of gravity at all, but it does have that same neat trait: you never get these sorts of infinite results, so you never need to give up predictive power.

What’s so cool about N=8 supergravity is that it just might be in the same category. By all rights, it shouldn’t be…but loop after loop its divergences seem to be behaving much like N=4 super Yang-Mills. (For those new to this blog, loops are a measure of how complex a calculation is in particle physics. Most practical calculations only involve one or two loops, while four loops represents possibly the most precise test ever performed by science.)

Now, two predictions are at the fore. One suggests that this magic behavior will be broken at the terrifyingly complex level of seven loops. The other proposes that the magic will continue, and N=8 supergravity will never see a divergence. The only way to know for certain is to do the calculation: look at four gravitons at seven loops and see what happens.

If N=8 supergravity really doesn’t diverge, then the biggest “point” of string theory isn’t unique anymore. If you don’t need all the bells and whistles of string theory to get an acceptable quantum theory of gravity, then maybe there’s a better way to think about the problem of quantum gravity in general. Even if N=8 supergravity doesn’t describe the real world, there may be other ways forward, other ways to handle the problem of divergences. If someone can manage that calculation (not as impossible as it sounds nowadays, but still very very hard) then we might see something really truly new.