Author Archives: 4gravitons

Only the Boring Kind of Parallel Universes

PARALLEL UNIVERSES AT THE LHC??

No. No. Bad journalist. See what happens when you…

Mir Faizal, one of the three-strong team of physicists behind the experiment, said: “Just as many parallel sheets of paper, which are two dimensional objects [breadth and length] can exist in a third dimension [height], parallel universes can also exist in higher dimensions.”

Bad physicist, bad! No biscuit for you!

Not nice at all!

For the technically-minded, Sabine Hossenfelder goes into thorough detail about what went wrong here. Not only do parallel universes have nothing to do with what Mir Faizal and collaborators have been studying, but the actual paper they’re hyping here is apparently riddled with holes.

BLACK holes! …no, actually, just logic holes.

But why did parallel universes even come up? If they have nothing to do with Faizal’s work, why did he mention them? Do parallel universes ever come up in real physics at all?

The answer to this last question is yes. There are real, viable ideas in physics that involve parallel universes. The universes involved, however, are usually boring ones.

The ideas are generally referred to as brane-world theories. If you’ve heard of string theory, you’ve probably heard that it proposes that the world is made of tiny strings. That’s all well and good, but it’s not the whole story. String theory has other sorts of objects in it too: higher dimensional generalizations of strings called membranes, branes for short. In fact, M theory, the theory of which every string theory is some low-energy limit, has no strings at all, just branes.

When these branes are one-dimensional, they’re strings. When they’re two-dimensional, they’re what you would normally picture as a membrane, a vibrating sheet, potentially infinite in size. When they’re three-dimensional, they fill three-dimensional space, again potentially up to infinity.

Filling three dimensional space, out to infinity…well that sure sounds a whole lot like what we’d normally call a universe.

In brane-world constructions, what we call our universe is precisely this sort of three-dimensional brane. It then lives in a higher-dimensional space, where its position in this space influences things like the strength of gravity, or the speed at which the universe expands.

Sometimes (not all the time!) these sorts of constructions include other branes, besides the one that contains our universe. These other branes behave in a similar way, and can have very important effects on our universe. They, if anything, are the parallel universes of theoretical physics.

It’s important to point out, though, that these aren’t the sort of sci-fi parallel universes you might imagine! You aren’t going to find a world where everyone has a goatee, or even a world with an empty earth full of teleporting apes.

Pratchett reference!

That’s because, in order for these extra branes to do useful physical work, they generally have to be very different from our world. They’re worlds where gravity is very strong, or worlds with dramatically different densities of energy and matter. In the end, this means they’re not even the sort of universes that produce interesting aliens, or where we could send an astronaut, or really anything that lends itself well to (non-mathematical) imagination. From a sci-fi perspective, they’re as boring as can be.

Faizal’s idea, though, doesn’t even involve the boring kind of parallel universe!

His idea involves extra dimensions, specifically what physicists refer to as “large” extra dimensions, in contrast with the small extra dimensions of string theory. Large extra dimensions can explain the weakness of gravity, and theories that use them often predict that it’s much easier to create microscopic black holes than it otherwise would be. So far, these models haven’t had much luck at the LHC, and while I get the impression that they haven’t been completely ruled out, they aren’t very popular anymore.

The thing is, extra dimensions don’t mean parallel universes.

In fiction, the two get used interchangeably a lot. People go to “another dimension”, vaguely described as traveling along another dimension of space, and find themselves in a strange new world. In reality, though, there’s no reason to think that traveling along an extra dimension would put you in any sort of “strange new world”. The whole reason our world is limited to three dimensions is that it’s “bound” to something: a brane, in the string theory picture. If there’s not another brane to bind things to, traveling in an extra dimension won’t put you in a new universe, it will just put you in an empty space where none of the types of matter you’re made of even exist.

It’s really tempting, when talking to laypeople, to fall back on stories. If you mention parallel universes, their faces light up with the idea that this is something they get, if only from imaginary examples. It gives you that same sense of accomplishment as if you had actually taught them something real. But you haven’t. It’s wrong, and Mir Faizal shouldn’t have stooped to doing it.

What Counts as a Fundamental Force?

I’m giving a presentation next Wednesday for Learning Unlimited, an organization that presents educational talks to seniors in Woodstock, Ontario. The talk introduces the fundamental forces and talks about Yang and Mills before moving on to introduce my work.

While I was practicing the talk today, someone from Perimeter’s outreach department pointed out a rather surprising missing element: I never mention gravity!

Most people know that there are four fundamental forces of nature. There’s Electromagnetism, there’s Gravity, there’s the Weak Nuclear Force, and there’s the Strong Nuclear Force.

Listed here by their most significant uses.

What ties these things together, though? What makes them all “fundamental forces”?

Mathematically, gravity is the odd one out here. Electromagnetism, the Weak Force, and the Strong Force all share a common description: they’re Yang-Mills forces. Gravity isn’t. While you can sort of think of it as a Yang-Mills force “squared”, it’s quite a bit more complicated than the Yang-Mills forces.

You might be objecting that the common trait of the fundamental forces is obvious: they’re forces! And indeed, you can write down a force law for gravity, and a force law for E&M, and umm…

[Mumble Mumble]

Ok, it’s not quite as bad as xkcd would have us believe. You can actually write down a force law for the weak force, if you really want to, and it’s at least sort of possible to talk about the force exerted by the strong interaction.

All that said, though, why are we thinking about this in terms of forces? Forces are a concept from classical mechanics. For a beginning physics student, they come up again and again, in free-body diagram after free-body diagram. But by the time a student learns quantum mechanics, and quantum field theory, they’ve already learned other ways of framing things where forces aren’t mentioned at all. So while forces are kind of familiar to people starting out, they don’t really match onto anything that most quantum field theorists work with, and it’s a bit weird to classify things that only really appear in quantum field theory (the Weak Nuclear Force, the Strong Nuclear Force) based on whether or not they’re forces.

Isn’t there some connection, though? After all, gravity, electromagnetism, the strong force, and the weak force may be different mathematically, but at least they all involve bosons.

Well, yes. And so does the Higgs.

The Higgs is usually left out of listings of the fundamental forces, because it’s not really a “force”. It doesn’t have a direction; instead, it works equally at every point in space. But if you include spin 2 gravity and spin 1 Yang-Mills forces, why not also include the spin 0 Higgs?

Well, if you’re doing that, why not include fermions as well? People often think of fermions as “matter” and bosons as “energy”, but in fact both have energy, and neither is made of it. Electrons and quarks are just as fundamental as photons and gluons and gravitons, just as central a part of how the universe works.

I’m still trying to decide whether my presentation about Yang-Mills forces should also include gravity. On the one hand, it would make everything more familiar. On the other…pretty much this entire post.

What Can Pi Do for You?

Tomorrow is Pi Day!

And what a Pi Day! 3/14/15 (if you’re in the US, Belize, Micronesia, some parts of Canada, the Philippines, or Swahili-speaking Kenya), best celebrated at 9:26:53, if you’re up by then. Grab a slice of pie, or cake if you really must, and enjoy!

If you don’t have some of your own, download this one!

Pi is great not just because it’s fun to recite digits and eat pastries, but because it serves a very important role in physics. That’s because, often, pi is one of the most “natural” ways to get larger numbers.

Suppose you’re starting with some sort of “natural” theory. Here I don’t mean natural in the technical sense. Instead, I want you to imagine a theory that has very few free parameters, a theory that is almost entirely fixed by mathematics.

Many physicists hope that the world is ultimately described by this sort of theory, but it’s hard to see in the world we live in. There are so many different numbers, from the tiny mass of the electron to the much larger mass of the top quark, that would all have to come from a simple, overarching theory. Often, it’s easier to get these numbers when they’re made out of factors of pi.

Why is pi easy to get?

In general, pi shows up a lot in physics and mathematics, and its appearance can be mysterious to the uninitiated, as this joke related by Eugene Wigner in an essay I mentioned a few weeks ago demonstrates:

There is a story about two friends, who were classmates in high school, talking about their jobs. One of them became a statistician and was working on population trends. He showed a reprint to his former classmate. The reprint started, as usual, with the Gaussian distribution and the statistician explained to his former classmate the meaning of the symbols for the actual population, for the average population, and so on. His classmate was a bit incredulous and was not quite sure whether the statistician was pulling his leg. “How can you know that?” was his query. “And what is this symbol here?” “Oh,” said the statistician, “this is pi.” “What is that?” “The ratio of the circumference of the circle to its diameter.” “Well, now you are pushing your joke too far,” said the classmate, “surely the population has nothing to do with the circumference of the circle.”

While it may sound silly, in a sense the population really is connected to the circumference of the circle. That’s because pi isn’t just about circles, pi is about volumes.
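The pi in the Gaussian comes from its normalization: the bell curve only integrates to one because the integral of e^(-x²) over the whole real line is √π. Here’s a quick numerical check (my own sketch, not part of the original post):

```python
import math

def gaussian_integral(a=-10.0, b=10.0, n=100_000):
    """Numerically integrate exp(-x^2) over [a, b] with the trapezoid rule.
    The integrand is negligible beyond |x| ~ 10, so this approximates the
    integral over the whole real line."""
    h = (b - a) / n
    total = 0.5 * (math.exp(-a * a) + math.exp(-b * b))
    for i in range(1, n):
        x = a + i * h
        total += math.exp(-x * x)
    return total * h

print(gaussian_integral())   # ≈ 1.7724538...
print(math.sqrt(math.pi))    # ≈ 1.7724538...
```

No circle in sight, and yet pi emerges; the standard trick for seeing why is to square the integral and switch to polar coordinates, at which point the circle reappears.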

Take a bit to check out that link. Not just the area of a circle, but the volume of a sphere, and that of all sorts of higher-dimensional ball-shaped things, is calculated with the value of pi. It’s not just spheres, either: pi appears in the volumes of many higher-dimensional shapes.
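To make that concrete, the volume of an n-dimensional ball has a closed form, π^(n/2) / Γ(n/2 + 1) times r^n, so every dimension brings its own powers of pi. A quick sketch using Python’s standard library:

```python
import math

def ball_volume(n, r=1.0):
    """Volume of an n-dimensional ball of radius r:
    pi^(n/2) / Gamma(n/2 + 1) * r^n."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1) * r ** n

# n = 2 recovers the area of a circle, n = 3 the volume of a sphere,
# and every higher dimension still carries its factors of pi.
print(ball_volume(2))  # pi r^2       ≈ 3.14159
print(ball_volume(3))  # (4/3) pi r^3 ≈ 4.18879
print(ball_volume(4))  # pi^2/2 r^4   ≈ 4.93480
```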

Why does this matter for physics? Because you don’t need a literal shape to get a volume. Most of the time, there aren’t literal circles and spheres giving you factors of pi…but there are abstract spaces, and they contain circles and spheres. Electric and magnetic fields might not be shaped like circles, but the mathematics that describes them can still make good use of a circular space.

That’s why, when I describe the mathematical formulas I work with, formulas that often produce factors of pi, mathematicians will often ask if they’re the volume of some particular mathematical space. It’s why Nima Arkani-Hamed is trying to understand related formulas by thinking of them as the volume of some new sort of geometrical object.

All this is not to say you should go and plug factors of pi together until you get the physical constants you want. Throw in enough factors of pi and enough other numbers and you can match current observations, sure…but you could also match anything else in the same way. Instead, it’s better to think of pi as an assistant: waiting in the wings, ready to translate a pure mathematical theory into the complicated mess of the real world.

So have a Happy Pi Day, everyone, and be grateful to our favorite transcendental number. The universe would be a much more boring place without it.

How to Predict the Mass of the Higgs

Did Homer Simpson predict the mass of the Higgs boson?

No, of course not.

Apart from the usual reasons, he’s off by more than a factor of six.

If you play with the numbers, it looks like Simon Singh (the popular science writer who reported the “discovery” Homer made as a throwaway joke in a 1998 Simpsons episode) made the classic physics mistake of losing track of a factor of 2π. In particular, it looks like he mistakenly thought that the Planck constant, h, was equal to the reduced Planck constant, ħ, divided by 2π, when actually it’s ħ times 2π. So while Singh read Homer’s prediction as 123 GeV, surprisingly close to the actual Higgs mass of 125 GeV found in 2012, in fact Homer predicted the somewhat more embarrassing value of 775 GeV.

D’Oh!
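If you’d like to check the arithmetic yourself, the two readings differ by a single factor of 2π (a sketch treating the quoted 123 GeV as exact):

```python
import math

singh_reading = 123.0  # GeV: Singh's reading, with the factor of 2*pi misplaced
corrected = singh_reading * 2 * math.pi  # restore the lost factor of 2*pi
higgs_mass = 125.0     # GeV: the value announced in 2012

print(round(corrected))          # ≈ 773 GeV, close to the ~775 GeV quoted above
print(corrected / higgs_mass)    # off by more than a factor of six
```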

That was boring. Let’s ask a more interesting question.

Did Gordon Kane predict the mass of the Higgs boson?

I’ve talked before about how it seems impossible that string theory will ever make any testable predictions. The issue boils down to one of too many possibilities: string theory predicts different consequences for different ways that its six (or seven for M theory) extra dimensions can be curled up. Since there is an absurdly vast number of ways this can be done, anything you might want to predict (say, the mass of the electron) has an absurd number of possible values.

Gordon Kane and collaborators get around this problem by tackling a different one. Instead of trying to use string theory to predict things we already know, like the mass of the electron, they assume these things are already true. That is, they assume we live in a world with electrons that have the mass they really have, and quarks that have the mass they really have, and so on. They assume that we live in a world that obeys all of the discoveries we’ve already made, and a few we hope to make. And, they assume that this world is a consequence of string (or rather M) theory.

From that combination of assumptions, they then figure out the consequences for things that aren’t yet known. And in a 2011 paper, they predicted the Higgs mass would be between 105 and 129 GeV.

I have a lot of sympathy for this approach, because it’s essentially the same thing that non-string-theorists do. When a particle physicist wants to predict what will come out of the LHC, they don’t try to get it from first principles: they assume the world works as we have discovered, make a few mild extra assumptions, and see what new consequences come out that we haven’t observed yet. If those particle physicists can be said to make predictions from supersymmetry, or (shudder) technicolor, then Gordon Kane is certainly making predictions from string theory.

So why haven’t you heard of him? Even if you have, why, if this guy successfully predicted the mass of the Higgs boson, are people still saying that you can’t make predictions with string theory?

Trouble is, making predictions is tricky.

Part of the problem is timing. Gordon Kane’s paper went online in December of 2011. The Higgs mass was announced in July 2012, so you might think Kane got a six month head-start. But when something is announced isn’t the same as when it’s discovered. For a big experiment like the Large Hadron Collider, there’s a long road between the first time something gets noticed and the point where everyone is certain enough that they’re ready to announce it to the world. Rumors fly, and it’s not clear that Kane and his co-authors wouldn’t have heard them.

Assumptions are the other issue. Remember when I said, a couple paragraphs up, that Kane’s group assumed “that we live in a world that obeys all of the discoveries we’ve already made, and a few we hope to make”? That last part is what makes things tricky. There were a few extra assumptions Kane made, beyond those needed to reproduce the world we know. For many people, some of these extra assumptions are suspicious. They worry that the assumptions might have been chosen, not just because they made sense, but because they happened to give the right (rumored) mass of the Higgs.

If you want to predict something in physics, it’s not just a matter of getting in ahead of the announcement with the right number. For a clear prediction, you need to be early enough that the experiments haven’t yet even seen hints of what you’re looking for. Even then, you need your theory to be suitably generic, so that it’s clear that your prediction is really the result of the math and not of your choices. You can trade off aspects of this: more accuracy for a less generic theory, better timing for looser predictions. Get the formula right, and the world will laud you for your prediction. Wrong, and you’re Homer Simpson. Somewhere in between, though, and you end up in that tricky, tricky grey area.

Like Gordon Kane.

Pics or It Didn’t Happen

I got a tumblr recently.

One thing I’ve noticed is that tumblr is a very visual medium. While some people can get away with massive text-dumps, they’re usually part of specialized communities. The content that’s most popular with a wide audience is, almost always, images. And that’s especially true for science-related content.

This isn’t limited to tumblr either. Most of my most successful posts have images. Most successful science posts in general involve images. Think of the most interesting science you’ve seen on the internet: chances are, it was something visual that made it memorable.

The problem is, I’m a theoretical physicist. I can’t show you pictures of nebulae in colorized glory, or images showing the behavior of individual atoms. I work with words, equations, and, when I’m lucky, diagrams.

Diagrams tend to work best when they’re an option. I have no doubt that part of the Amplituhedron’s popularity with the press is due to Andy Gilmore’s beautiful illustration, as printed in Quanta Magazine’s piece:

Gotta get me an artist.

The problem is, the nicer one of these illustrations is, the less it actually means. For most people, the above is just a pretty picture. Sometimes it’s possible to do something more accurate, like a 3d model of one of string theory’s six-dimensional Calabi-Yau manifolds:

What, you expected a six-dimensional intrusion into our world *not* to look like Yog-Sothoth?

A lot of the time, though, we don’t even have a diagram!

In those sorts of situations, it’s tempting to show an equation. After all, equations are the real deal, the stuff we theorists are actually manipulating.

Unless you’ve got an especially obvious equation, though, there’s basically only one thing the general public will get out of it. Either the equation is surprisingly simple,

Isn’t it cute?

Or it’s unreasonably complicated,

Why yes, this is one equation that covers seventeen pages. You’re lucky I didn’t post the eight-hundred page one.

This is great for first impressions, but it’s not very repeatable. Show people one giant equation, and they’ll be impressed. Show them two, and they won’t have any idea what the difference is supposed to be.

If you’re not showing diagrams or equations, what else can you show?

The final option is, essentially, to draw a cartoon. Forget about showing what’s “really going on”, physically or mathematically. That’s what the article is for. For an image, just pick something cute and memorable that references the topic.

When I did an article for Ars Technica back in 2013, I didn’t have any diagrams to show, or any interesting equations. Their artist, undeterred, came up with a cute picture of sushi with an N=4 on it.

That sort of thing really helps! It doesn’t tell you anything technical, it doesn’t explain what’s going on…but it does mean that every time I think of the article, that image pops into my head. And in a world where nothing lasts without a picture to document it, that’s a job well done.

Explanations of Phenomena Are All Alike; Every Unexplained Phenomenon Is Unexplained in Its Own Way

Vladimir Kazakov began his talk at ICTP-SAIFR this week with a variant of Tolstoy’s famous opening to the novel Anna Karenina: “Happy families are all alike; every unhappy family is unhappy in its own way.” Kazakov flipped the order of the quote, stating that while “Un-solvable models are each un-solvable in their own way, solvable models are all alike.”

In talking about solvable and un-solvable models, Kazakov was referring to a concept called integrability, the idea that in certain quantum field theories it’s possible to avoid the messy approximations of perturbation theory and instead jump straight to the answer. Kazakov was observing that these integrable systems seem to have a deep kinship: the same basic methods appear to work to understand all of them.

I’d like to generalize Kazakov’s point, and talk about a broader trend in physics.

Much has been made over the years of the “unreasonable effectiveness of mathematics in the natural sciences”, most notably in physicist Eugene Wigner’s famous essay, The Unreasonable Effectiveness of Mathematics in the Natural Sciences. There’s a feeling among some people that mathematics is much better at explaining physical phenomena than one would expect, that the world appears to be “made of math” and that it didn’t have to be.

On the surface, this is a reasonable claim. Certain mathematical ideas, group theory for example, seem to pop up again and again in physics, sometimes in wildly different contexts. The history of fundamental physics has tended to see steady progress over the years, from clunkier mathematical concepts to more and more elegant ones.

Some physicists tend to be dismissive of this. Lee Smolin in particular seems to be under the impression that mathematics is just particularly good at providing useful approximations. This perspective links to his definition of mathematics as “the study of systems of evoked relationships inspired by observations of nature,” a definition to which Peter Woit vehemently objects. Woit argues what I think any mathematician would when presented with a statement like Smolin’s: that mathematics is much more than just a useful tool for approximating observations, and that contrary to physicists’ vanity most of mathematics goes on without any explicit interest in observing the natural world.

While it’s generally rude for physicists to propose definitions for mathematics, I’m going to do so anyway. I think the following definition is one mathematicians would be more comfortable with, though it may be overly broad: Mathematics is the study of simple rules with complex consequences.

We live in a complex world. The breadth of the periodic table, the vast diversity of life, the tangled webs of galaxies across the sky, these are things that display both vast variety and a sense of order. They are, in a rather direct way, the complex consequences of rules that are at heart very very simple.

Part of the wonder of modern mathematics is how interconnected it has become. Many sub-fields, once distinct, have discovered over the years that they are really studying different aspects of the same phenomena. That’s why when you see a proof of a three-hundred-year-old mathematical conjecture, it uses terms that seem to have nothing to do with the original problem. It’s why Woit, in an essay on this topic, quotes Edward Frenkel’s description of a particular recent program as a blueprint for a “Grand Unified Theory of Mathematics”. Increasingly, complex patterns are being shown to be not only consequences of simple rules, but consequences of the same simple rules.

Mathematics itself is “unreasonably effective”. That’s why, when faced with a complex world, we shouldn’t be surprised when the same simple rules pop up again and again to explain it. That’s what explaining something is: breaking down something complex into the simple rules that give rise to it. And as mathematics progresses, it becomes more and more clear that a few closely related types of simple rules lie behind any complex phenomena. While each unexplained fact about the universe may seem unexplained in its own way, as things are explained bit by bit they show just how alike they really are.

Valentine’s Day Physics Poem 2015

In the third installment of an ongoing tradition (wow, this blog is old enough to have traditions!), I present 2015’s Valentine’s Day Physics Poem. Like the others, I wrote this one a long time ago. I’ve polished it up a bit since.

 

Perturbation Theory

 

When you’ve been in a system a long time, your state tends to settle

Time-energy uncertainty

That unrigorous interloper

Means the longer you wait, the more fixed you are

And I’ve been stuck

In a comfy eigenstate

Since what I might as well call t=0.

 

Yesterday though,

Out of the ether

Like an electric field

New potential entered my Hamiltonian.

 

And my state was perturbed.

 

Just a small, delicate perturbation

And an infinite series scrolls out

Waves from waves from waves

It’s a new system now

With new, unrealized energy

And I might as well

Call yesterday

t=0.

 

Our old friend

Time-energy uncertainty

Tells me not to change,

Not to worry.

Soon, probability thins

The Hamiltonian pulls us back

And we all return

Closer and closer

To a fixed, settled, normal state.

 

This freedom

This uncertainty

This perturbation

Is limited by Planck’s constant

Is vanishingly small.

 

Yet rigor

        And happiness

                Demand I include it.

All Is Dust

Joke stolen from some fellow PI postdocs.

The BICEP2 and Planck experiment teams have released a joint analysis of their data, discovering what many had already suspected: that the evidence for primordial gravitational waves found by BICEP2 can be fully explained by interstellar dust.

For those who haven’t been following the story, BICEP2 is a telescope in Antarctica. Last March, they told the press they had found evidence of primordial gravitational waves, ripples in space-time caused by the exponential expansion of the universe shortly after the Big Bang. Soon after, though, doubts were raised. It appeared that the BICEP2 team hadn’t taken proper account of interstellar dust, and in particular had mis-used some data they scraped from a presentation by the larger Planck experiment. After Planck released the correct version of their dust data, BICEP2’s announcement looked even more premature.

Now, the Planck team has exhaustively gone over their data and BICEP2’s, and done a full analysis. The result is a pretty thorough statement: everything BICEP2 observed can be explained by interstellar dust.

A few news outlets have been describing this as “ruling out inflation” or “ruling out gravitational waves”, both of which are misunderstandings. What Planck has ruled out is inflation (and gravitational waves caused by inflation) powerful enough to have been observed by BICEP2.

To an extent, this was something Planck had already predicted before BICEP2 made their announcement. BICEP2 announced a value for a parameter r, called the tensor-to-scalar ratio, of 0.2. This parameter r is a way to measure the strength of the gravitational waves (if you want to know what gravitational waves have to do with tensors, this post might help), and thus indirectly the strength of inflation in the early universe.

Trouble is, Planck had already released results arguing that r had to be below 0.11! So a lot of people were already rather skeptical.

With the new evidence, Planck’s bound is relaxed slightly. They now argue that r should be below 0.13, so BICEP2’s evidence was enough to introduce some fuzziness into their measurements when everything was analyzed together.

I’ve complained before about the bad aspects of BICEP2’s announcement, how releasing their data prematurely hurt the public’s trust in science and revealed the nasty side of competition for funding on massive projects. In this post, I’d like to talk a little about the positive side of the publicity around BICEP2.

Lots of theorists care about physics at very very high energies. The scale of string theory, or the Planck mass (no direct connection to the experiment, just the energy where one expects quantum gravity to be relevant), or the energy at which the fundamental forces might unify, are all much higher than any energy we can explore with a particle collider like the LHC. If you had gone out before BICEP2’s announcement and asked physicists whether we would ever see direct evidence for physics at these kinds of scales, they would have given you a resounding no. Maybe we could see indirect evidence, but any direct consequences would be essentially invisible.

All that changed with BICEP2. Their announcement of an r of 0.2 corresponds to very strong inflation, inflation that moves the inflaton field by more than the Planck mass!

Suddenly, there was hope that, even if we could never see such high-energy physics in a collider, we could see it out in the cosmos. This falls into a wider trend. Physicists have increasingly begun to look to the stars as the LHC continues to show nothing new. But the possibility that the cosmos could give us data that not only meets LHC energies, but surpasses them so dramatically, is something that very few people had realized.

The thing is, that hope is still alive and kicking. The new bound, restricting r to less than 0.13, still allows enormously powerful inflation. (If you’d like to work out the math yourself, equation (14) here relates the scale of inflation Δφ to the Planck mass M_Pl and the parameter r.)
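I won’t reproduce that paper’s equation (14) here, but in its simplest textbook form (an assumption on my part) the Lyth bound reads Δφ ≈ N √(r/8) M_Pl for N e-folds of observable inflation, and even the relaxed bound on r leaves room for the field to travel well past the Planck mass:

```python
import math

def field_excursion(r, efolds=60):
    """Inflaton field range in units of the Planck mass, using the simplest
    form of the Lyth bound: delta_phi / M_Pl ~ N * sqrt(r / 8).
    (A sketch under that assumption; the paper's exact formula may differ.)"""
    return efolds * math.sqrt(r / 8)

print(field_excursion(0.20))  # BICEP2's announced value: ~9.5 Planck masses
print(field_excursion(0.13))  # Planck's relaxed bound:   ~7.6 Planck masses
```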

This isn’t just a “it hasn’t been ruled out yet” claim either. Cosmologists tell me that new experiments coming online in the next decade will have much more precision, and much better ability to take account of dust. These experiments should be sensitive to an r as low as 0.001!

With that kind of sensitivity, and the new mindset that BICEP2 introduced, we have a real chance of seeing evidence of Planck-scale physics within the next ten or twenty years. We just have to wait and see if the stars are right…

The Cycle of Exploration

Science is often described as a journey of exploration. You might imagine scientists carefully planning an expedition, gathering their equipment, then venturing out into the wilds of Nature, traveling as far as they can before returning with tales of the wonders they discovered.

Is it capybaras? Please let it be capybaras.

This misses an important part of the story, though. In science, exploration isn’t just about discovering the true nature of Nature, as important as that is. It’s also about laying the groundwork for future exploration.

Picture our explorers, traveling out into the wilderness with no idea what’s in store. With only a rough idea of the challenges they might face, they must pack for every possibility: warm clothing for mountains, sunscreen for the desert, canoes to ford rivers, cameras in case they encounter capybaras. Since they can only carry so much, they can only travel so far before they run out of supplies.

Once they return, though, the explorers can assess what they did and didn’t need. Maybe they found a jungle, full of capybaras. The next time they travel they’ll make sure to bring canoes and cameras, but they can skip the warm coats. That frees up room for more of the supplies that are actually useful, which in the end lets them travel farther.

Science is a lot like this. The more we know, the better questions we can ask, and the further we can explore. It’s true not just for experiments, but for theoretical work as well. Here’s a slide from a talk I’m preparing, about how this works in my sub-field of Amplitudeology.

Unfortunately not a capybara.

In theoretical physics, you often start out doing a calculation using the most general methods you have available. Once you’ve done it, you understand a bit more about your results: in particular, you can start figuring out which parts of the general method are actually unnecessary. By paring things down, you can figure out a new method, one that’s more efficient and allows for more complicated calculations. Doing those calculations then reveals new patterns, letting you propose even newer methods and do even more complicated calculations.

It’s the circle of exploration, and it really does move us all, motivating everything we do. With each discovery we can go further and learn more than on the last attempt, keeping science churning long into the future.

Living in a Broken World: Supersymmetry We Can Test

I’ve talked before about supersymmetry. Supersymmetry relates particles with different spins, linking spin 1 force-carrying particles like photons and gluons to spin 1/2 particles similar to electrons, and spin 1/2 particles in turn to spin 0 “scalar” particles, the same general type as the Higgs. I emphasized there that, if two particles are related by supersymmetry, they will have some important traits in common: the same mass and the same interactions.

That’s true for the theories I like to work with. In particular, it’s true for N=4 super Yang-Mills. Adding supersymmetry allows us to tinker with neater, cleaner theories, gaining mastery over rice before we start experimenting with the more intricate “sushi” of theories of the real world.

However, it should be pretty clear that we don’t live in a world with this sort of supersymmetry. A quick look at the Standard Model indicates that no two known particles interact in precisely the same way. When people try to test supersymmetry in the real world, they’re not looking for this sort of thing. Rather, they’re looking for broken supersymmetry.

In the past, I’ve described broken supersymmetry as like a broken mirror: the two sides are no longer the same, but you can still predict one side’s behavior from the other. When supersymmetry is broken, related particles still have the same interactions. Now, though, they can have different masses.

The simplest version of supersymmetry, N=1, gives one partner to each particle. Since no two particles in the Standard Model can be partners of each other, broken N=1 supersymmetry in the real world requires a new particle for each existing one…and each one of those particles has a potentially unknown, different mass. And if that sounds rather complicated…

Baroque enough to make Rubens happy.

That, right there, is the Minimal Supersymmetric Standard Model, the simplest thing you can propose if you want a world with broken supersymmetry. If you look carefully, you’ll notice that it’s actually a bit more complicated than just one partner for each known particle: there are a few extra Higgs fields as well!

If we’re hoping to explain anything in a simpler way, we seem to have royally screwed up. Luckily, though, the situation is not quite as ridiculous as it appears. Let’s go back to the mirror analogy.

If you look into a broken mirror, you can still have a pretty good idea of what you’ll see…but in order to do so, you have to know how the mirror is broken.

Similarly, supersymmetry can be broken in different ways, by different supersymmetry-breaking mechanisms.

The general idea is to start with a theory in which supersymmetry is precisely true, and all supersymmetric partners have the same mass. Then, consider some Higgs-like field. Like the Higgs, it can take some constant value throughout all of space, forming a background like the color of a piece of construction paper. While the rules that govern this field would respect supersymmetry, any specific value it takes wouldn’t. Instead, it would be biased: the spin 0, Higgs-like field could take on a constant value, but its spin 1/2 supersymmetric partner couldn’t. (If you want to know why, read my post on the Higgs linked above.)

Once that field takes on a specific value, supersymmetry is broken. That breaking then has to be communicated to the rest of the theory, via interactions between different particles. There are several different ways this can work: perhaps the interactions come from gravity, or are the same strength as gravity. Maybe instead they come from a new fundamental force, similar to the strong nuclear force but harder to discover. They could even come as byproducts of the breaking of other symmetries.

Each one of these options has different consequences, and leads to different predictions for the masses of undiscovered partner particles. They tend to have different numbers of extra parameters (for example, if gravity-based interactions are involved there are four new parameters, and an extra sign, that must be fixed). None of them has an entire Standard Model’s worth of new parameters…but all of them have at least a few extra.
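As an illustration of the gravity-mediated case, assuming what's meant is the standard mSUGRA/CMSSM setup (my reading, not something the post states), the four parameters and the extra sign are:

```latex
% mSUGRA/CMSSM parameters, all specified at a high (GUT) scale:
%   m_0                 -- universal scalar (squark, slepton, Higgs) mass
%   m_{1/2}             -- universal gaugino mass
%   A_0                 -- universal trilinear coupling
%   \tan\beta           -- ratio of the two Higgs vacuum expectation values
%   \mathrm{sign}(\mu)  -- the extra sign: that of the Higgsino mass parameter
```

Five inputs instead of a whole Standard Model's worth of new masses: that's the payoff of committing to a specific breaking mechanism.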

(Brief aside: I’ve been talking about the Minimal Supersymmetric Standard Model, but these days people have largely given up on finding evidence for it, and are exploring even more complicated setups like the Next-to-Minimal Supersymmetric Standard Model.)

If we’re introducing extra parameters without explaining existing ones, what’s the point of supersymmetry?

Last week, I talked about the problem of fine-tuning. I explained that when physicists are worried about fine-tuning, what we’re really worried about is whether the sorts of ultimate (low number of parameters) theories that we expect to hold could give rise to the apparently fine-tuned world we live in. In that post, I was a little misleading about supersymmetry’s role in that problem.

The goal of introducing (broken) supersymmetry is to solve a particular set of fine-tuning problems, chiefly the hierarchy problem: the puzzle of why the Higgs mass is so much smaller than it naturally “wants” to be. This doesn’t mean that supersymmetry is the sort of “ultimate” theory we’re looking for; rather, supersymmetry is one of the few ways we know to bridge the gap between “ultimate” theories and a fine-tuned real world.

To explain it in terms of the language of the last post, it’s hard to find one of these “ultimate” theories that gives rise to a fine-tuned world. What’s quite a bit easier, though, is finding one of these “ultimate” theories that gives rise to a supersymmetric world, which in turn gives rise to a fine-tuned real world.

In practice, these are the sorts of theories that get tested. Very rarely are people able to propose testable versions of the more “ultimate” theories. Instead, one generally finds intermediate theories, theories that can potentially come from “ultimate” theories, and builds general versions of those that can be tested.

These intermediate theories come in multiple levels. Some physicists look for the most general version, theories like the Minimal Supersymmetric Standard Model with a whole host of new parameters. Others look for more specific versions, particular choices of supersymmetry-breaking mechanism. Still others work further up the chain, getting close to candidate “ultimate” theories like M theory (though in practice they generally make a few choices that put them somewhere in between).

The hope is that with a lot of people covering different angles, we’ll be able to make the best use of any new evidence that comes in. If “something” is out there, there are still a lot of choices for what that something could be, and it’s the job of physicists to try to understand whatever ends up being found.

Not bad for working in a broken world, huh?