No-One Can Tell You What They Don’t Understand

On Wednesday, Amanda Peet gave a Public Lecture at Perimeter on string theory and black holes, while I and other Perimeter-folk manned the online chat. If you missed it, it’s recorded online here.

We get a lot of questions in the online chat. Some are quite insightful, some are basic, and some…well, some are kind of strange. Like the person who asked us how holography could be compatible with irrational numbers.

In physics, holography is the idea that you can encode the physics of a wider space using only information on its boundary. If you remember the 90’s or read Buzzfeed a lot, you might remember holograms: weird rainbow-colored images that looked 3d when you turned your head.

On a computer screen, they instead just look awkward.

Holograms in physics are a lot like that, but rather than a 2d image looking like a 3d object, they can be other combinations of dimensions as well. The most famous, AdS/CFT, relates a ten-dimensional space full of strings to a four-dimensional space on its boundary, where the four-dimensional space contains everybody’s favorite theory, N=4 super Yang-Mills.

So from this explanation, it’s probably not obvious what holography has to do with irrational numbers. That’s because there is no connection: holography has nothing to do with irrational numbers.

Naturally, we were all a bit confused, so one of us asked this person what they meant. They responded by asking if we knew what holograms and irrational numbers were. After all, the problem should be obvious then, right?

In this sort of situation, it’s tempting to assume you’re being trolled. In reality, though, the problem was one of the most common in science communication: people can’t tell you what they don’t understand, because they don’t understand it.

When a teacher asks “any questions?”, they’re assuming students will know what they’re missing. But a deep enough misunderstanding doesn’t show itself that way. Misunderstand things enough, and you won’t know you’re missing anything. That’s why it takes real insight to communicate science: you have to anticipate ways that people might misunderstand you.

In this situation, I thought about what associations people have with holograms. While some might remember the rainbow holograms of old, there are other famous holograms that might catch people’s attention.

Please state the nature of the medical emergency.

In science fiction, holograms are 3d projections, ways that computers can create objects out of thin air. The connection to a 2d image isn’t immediately apparent, but the idea that holograms are digital images is central.

Digital images are the key, here. A computer has to express everything in a finite number of bits. It can’t express an irrational number, a number whose decimal expansion goes on forever without repeating, at least not without tricks. So if you think that holography is about reality being digital, rather than lower-dimensional, then the question makes perfect sense: how could a digital reality contain irrational numbers?
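To see that limitation concretely, here’s a small Python sketch (my own illustration, not something from the chat) showing that a floating-point “pi” is secretly a rational number: an exact integer divided by a power of two that merely sits close to pi on the number line.

```python
import math
from fractions import Fraction

# A float is a finite string of bits, so math.pi is not pi: it is exactly
# some integer divided by a power of two, a rational approximation.
approx = Fraction(math.pi)

d = approx.denominator
print(d & (d - 1) == 0)          # True: the denominator is a power of two
print(float(approx) == math.pi)  # True: the float IS this rational number
```

The same goes for any irrational number you ask a computer to store: without symbolic tricks, you always get a nearby rational instead.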

This is the sort of thing we have to keep in mind when communicating science. It’s easy to misunderstand, to take some aspect of what someone said and read it through a different lens. We have to think about how others will read our words, we have to be willing to poke and prod until we root out the source of the confusion. Because nobody is just going to tell us what they don’t get.

Outreach as the End Product of Science

Sabine Hossenfelder recently wrote a blog post about physics outreach. In it, she identifies two goals: inspiration, and education.

Inspiration outreach is all about making science seem cool. It’s the IFLScience side of things, stoking the science fandom and getting people excited.

Education outreach, by contrast, is about making sure people’s beliefs are accurate. It teaches the audience something about the world around them, giving them a better understanding of how the world works.

In both cases, though, Sabine finds it hard to convince other scientists that outreach is valuable. Maybe inspiration helps increase grant funding, maybe education makes people vote better on scientific issues like climate change…but there isn’t a lot of research that shows that outreach really accomplishes either.

Sabine has a number of good suggestions in her post for how to make outreach more effective, but I’d like to take a step back and suggest that maybe we as a community are thinking about outreach in the wrong way. And in order to do that, I’m going to do a little outreach myself, and talk about black holes.

The black hole of physics outreach.

Black holes are collapsed stars, crushed in on themselves by their own gravity so much that once you get close enough (past the event horizon) not even light can escape. This means that if you sent an astronaut past the event horizon, there would be no way for them to communicate with you: any way they might try to get information to you would travel, at most, at the speed of light.

Einstein’s equations keep working fine past the event horizon, but despite that there are some people who view any prediction of what happens inside to be outside the scope of science. If there’s no way to report back, then how could we ever test our predictions? And if we can’t test our predictions, aren’t we missing the cornerstone of science itself?

In a rather entertaining textbook, physicists Edwin F. Taylor and John Archibald Wheeler suggest a way around this: instead of sending just one astronaut, send multiple! Send a whole community! That way, while we might not be able to test our predictions about the inside of the event horizon, the scientific community that falls in certainly can. For them, those predictions aren’t just meaningless speculation, but testable science.

If something seems unsatisfying about this, congratulations: you now understand the purpose of outreach.

As long as scientific advances never get beyond a small community, we’re like Taylor and Wheeler’s astronauts inside the black hole. We can test our predictions among each other, verify them to our heart’s content…but if they never reach the wider mass of humanity, then what have we really accomplished? Have we really created knowledge, when only a few people will ever know it?

In my Who Am I? post, I express the hope that one day the science I blog about will be as well known as electrons and protons. That might sound farfetched, but I really do think it’s possible. In one hundred years, electrons and protons went from esoteric discoveries of a few specialists to something children learn about in grade school. If science is going to live up to its purpose, if we’re going to escape the black hole of our discipline, then in another hundred years quantum field theory needs to do the same. And by doing outreach work, each of us is taking steps in that direction.

String Theorists Who Don’t Touch Strings

This week I’ve been busy, attending a workshop here at Perimeter on Superstring Perturbation Theory.

Superstrings are the supersymmetric strings that string theorists use to describe fundamental particles, while perturbation theory is the trick, common in almost every area of physics, of solving a problem by a series of increasingly precise approximations.

Based on that description, you’d think that superstring perturbation theory would be a central topic in string theory research. You wouldn’t expect it to be the sort of thing only a few people at the top of the field dabble in. You definitely wouldn’t expect one of the speakers at the workshop to mention that this might be the first conference on superstring perturbation theory he’s been to since the 1980’s.

String perturbation theory is an important subject, but it’s not one many string theorists use. And the reason why is that, oddly enough, very few string theorists actually use strings.

Looking at arXiv as I’m writing this, I can see only one paper in the theoretical physics section that directly uses strings. Most of them use something else: either older concepts like black holes, quantum field theory, and supergravity, or newer ones like D-branes. If you talked to the people who wrote those papers, though, most of them would describe themselves as string theorists.

The reason for the disconnect is that string theory as a field is much more than just the study of strings. String theory is a ten-dimensional universe (or eleven with M theory), where different ways of twisting up some of the dimensions result in different apparent physics in the remaining ones. It’s got strings, but also higher-dimensional membranes (and in the eleven dimensions of M theory it only has membranes, not strings). It’s the recipe for a long list of exotic quantum field theories, and a list of possible relations between them. It’s a new way to look at geometry, to think about the intersection of the nature of space and the dynamics of what inhabits it.

If string theory were really just about strings, it likely wouldn’t have grown any bigger than its quantum gravity rivals, like Loop Quantum Gravity. String theory grew because it inspired research directions that went far afield, and far beyond its conceptual core.

That’s part of why most string theorists will be baffled if you insist that string theory needs proof, or that it’s not the right approach to quantum gravity. For most string theorists, it doesn’t matter whether we live in a stringy world, whether gravity might eventually be described by another model. For most string theorists, string theory is a tool, one that opened up fields of inquiry that don’t have much to do with predicting the output of the LHC or describing the early universe. Or, in many cases, actually using strings.

Who Plagiarizes an Acknowledgements Section?

I’ve got plagiarists on the brain.

Maybe it was running into this interesting discussion about a plagiarized application for the National Science Foundation’s prestigious Graduate Research Fellowship Program. Maybe it’s due to the talk Paul Ginsparg, founder of arXiv, gave this week about, among other things, detecting plagiarism.

Using arXiv’s repository of every paper someone in physics thought was worth posting, Ginsparg has been using statistical techniques to sift out cases of plagiarism. Probably the funniest cases involved people copying a chunk of their thesis acknowledgements section, as excerpted here. Compare:

“I cannot describe how indebted I am to my wonderful girlfriend, Amanda, whose love and encouragement will always motivate me to achieve all that I can. I could not have written this thesis without her support; in particular, my peculiar working hours and erratic behaviour towards the end could not have been easy to deal with!”

“I cannot describe how indebted I am to my wonderful wife, Renata, whose love and encouragement will always motivate me to achieve all that I can. I could not have written this thesis without her support; in particular, my peculiar working hours and erratic behaviour towards the end could not have been easy to deal with!”

Why would someone do this? Copying the scientific part of a thesis makes sense, in a twisted way: science is hard! But why would someone copy the fluff at the end, the easy part that’s supposed to be a genuine take on your emotions?

The thing is, the acknowledgements section of a thesis isn’t exactly genuine. It’s very formal: a required section of the thesis, with tacit expectations about what’s appropriate to include and what isn’t. It’s also the sort of thing you only write once in your life: while published papers also have acknowledgements sections, they’re typically much shorter, and have different conventions.

If you ever were forced to write thank-you notes as a kid, you know where I’m going with this.

It’s not that you don’t feel grateful, you do! But when you feel grateful, you express it by saying “thank you” and moving on. Writing a note about it isn’t very intuitive, it’s not a way you’re used to expressing gratitude, so the whole experience feels like you’re just following a template.

Literally in some cases.

That sort of situation, where it doesn’t matter how strongly you feel something, only whether you express it in the right way, is a breeding ground for plagiarism. Aunt Mildred isn’t going to care what you write in your thank-you note, and Amanda/Renata isn’t going to be moved by your acknowledgements section. It’s so easy to decide, in that kind of situation, that it’s better to just grab whatever appropriate text you can than to teach yourself a new style of writing.

In general, plagiarism happens because there’s a disconnect between incentives and what they’re meant to be for. In a world where very few beginning graduate students actually have a solid research plan, the NSF’s fellowship application feels like a demand for creative lying, not an honest way to judge scientific potential. In countries eager for highly-cited faculty but low on preexisting experts able to judge scientific merit, tenure becomes easier to get by faking a series of papers than by doing the actual work.

If we want to get rid of plagiarism, we need to make sure our incentives match our intent. We need a system in which people succeed when they do real work, get fellowships when they honestly have talent, and where we care about whether someone was grateful, not how they express it. If we can’t do that, then there will always be people trying to sneak through the cracks.

What’s the Matter with Dark Matter, Matt?

It’s very rare that I disagree with Matt Strassler. That said, I can’t help but think that, when he criticizes the press for focusing their LHC stories on dark matter, he’s missing an important element.

From his perspective, when the media says that the goal of the new run of the LHC is to detect dark matter, they’re just being lazy. People have heard of dark matter. They might have read that it makes up 23% of the universe, more than regular matter at 4%. So when an LHC physicist wants to explain what they’re working on to a journalist, the easiest way is to talk about dark matter. And when the journalist wants to explain the LHC to the public, they do the same thing.

This explanation makes sense, but it’s a little glib. What Matt Strassler is missing is that, from the public’s perspective, dark matter really is a central part of the LHC’s justification.

Now, I’m not saying that the LHC’s main goal is to detect dark matter! Directly detecting dark matter is pretty low on the LHC’s list of priorities. Even if it detects a new particle with the right properties to be dark matter, it still wouldn’t be able to confirm that it really is dark matter without help from another experiment that actually observes some consequence of the new particle among the stars. I agree with Matt when he writes that the LHC’s priorities for the next run are

  1. studying the newly discovered Higgs particle in great detail, checking its properties very carefully against the predictions of the “Standard Model” (the equations that describe the known apparently-elementary particles and forces)  to see whether our current understanding of the Higgs field is complete and correct, and

  2. trying to find particles or other phenomena that might resolve the naturalness puzzle of the Standard Model, a puzzle which makes many particle physicists suspicious that we are missing an important part of the story, and

  3. seeking either dark matter particles or particles that may be shown someday to be “associated” with dark matter.

Here’s the thing, though:

From the public’s perspective, why do we need to study the properties of the Higgs? Because we think it might be different than the Standard Model predicts.

Why do we think it might be different than the Standard Model predicts? More generally, why do we expect the world to be different from the Standard Model at all? Well there are a few reasons, but they generally boil down to two things: the naturalness puzzle, and the fact that the Standard Model doesn’t have anything that could account for dark matter.

Naturalness is a powerful motivation, but it’s hard to sell to the general public. Does the universe appear fine-tuned? Then maybe it just is fine-tuned! Maybe someone fine-tuned it!

These arguments miss the real problem with fine-tuning, but they’re hard to correct in a short article. Getting the public worried about naturalness is tough, tough enough that I don’t think we can demand it of the average journalist, or accuse them of being lazy if they fail to do it.

That leaves dark matter. And for all that naturalness is philosophically murky, dark matter is remarkably clear. We don’t know what 96% of the universe is made of! That’s huge, and not just in a “gee-whiz-cool” way. It shows, directly and intuitively, that physics still has something it needs to solve, that we still have particles to find. Unless you are a fan of (increasingly dubious) modifications to gravity like MOND, dark matter is the strongest possible justification for machines like the LHC.
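For the record, the arithmetic behind that 96%, using the round numbers quoted earlier in the post (a sketch, not precision cosmology — the dark energy figure is just the remainder under the standard accounting):

```python
dark_matter = 23   # percent of the universe, figure quoted earlier
ordinary = 4       # percent: the matter we actually understand

# Under the standard accounting, whatever remains is dark energy.
dark_energy = 100 - dark_matter - ordinary   # 73 percent

unknown = dark_matter + dark_energy
print(f"{unknown}% of the universe is stuff we can't yet identify")  # 96%
```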

The LHC won’t confirm dark matter on its own. It might not directly detect it, that’s still quite up-in-the-air. And even if it finds deviations from the Standard Model, it’s not likely they’ll be directly caused by dark matter, at least not in a simple way.

But the reason that the press is describing the LHC’s mission in terms of dark matter isn’t just laziness. It’s because, from the public’s perspective, dark matter is the only vaguely plausible reason to spend billions of dollars searching for new particles, especially when we’ve already found the Higgs. We’re lucky it’s such a good reason.

Want to Make Something New? Just Turn on the Lights.

Isn’t it weird that you can collide two protons, and get something else?

It wouldn’t be so weird if you collided two protons, and out popped a quark. After all, protons are made of quarks. But how, if you collide two protons together, do you get a tau, or the Higgs boson: things that not only aren’t “part of” protons, but are more massive than a proton by themselves?

It seems weird…but in a way, it’s not. When a particle releases another particle that wasn’t inside it to begin with, it’s actually not doing anything more special than an everyday light bulb.

Eureka!

How does a light bulb work?

You probably know the basics: when an electrical current enters the bulb, the electrons in the filament start to move. They heat the filament up, releasing light.

That probably seems perfectly ordinary. But ask yourself for a moment: where did the light come from?

Light is made up of photons, elementary particles in their own right. When you flip a light switch, where do the photons come from? Were they stored in the light bulb?

Silly question, right? You don’t need to “store” light in a light bulb: light bulbs transform one type of energy (electrical, or the movement of electrons) into another type of energy (light, or photons).

Here’s the thing, though: mass is just another type of energy.

I like to describe mass as “energy we haven’t met yet”. Einstein’s equation, E=mc^2, relates a particle’s mass to its “rest energy”, the energy it would have if it stopped moving around and sat still. Even when a particle seems to be sitting still from the outside, there’s still a lot going on, though. “Composite” particles like protons have powerful forces between their internal quarks, while particles like electrons interact with the Higgs field. These processes give the particle energy, even when it’s not moving, so from our perspective on the outside they’re giving the particle mass.

What does that mean for the protons at the LHC?

The protons at the LHC have a lot of kinetic energy: they’re going 99.9999991% of the speed of light! When they collide, all that energy has to go somewhere. Just like in a light bulb, the fast-moving particles will release their energy in another form. And while some of that energy will add to the speed of the fragments, much of it will go into the mass and energy of new particles. Some of these particles will be photons, some will be tau leptons, or Higgs bosons…pretty much anything that the protons have enough energy to create.
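Taking the quoted speed at face value, a few lines of Python show just how much energy is on the table (a back-of-the-envelope sketch; the proton rest energy of about 0.938 GeV is a standard value, not something from the post):

```python
import math

beta = 0.999999991     # proton speed as a fraction of c, as quoted above
m_proton = 0.938272    # proton rest energy in GeV (standard value)

gamma = 1.0 / math.sqrt(1.0 - beta ** 2)  # Lorentz factor
total_energy = gamma * m_proton           # total energy per proton, in GeV

print(f"Lorentz factor: {gamma:.0f}")                       # about 7450
print(f"energy per proton: {total_energy / 1000:.1f} TeV")  # about 7 TeV
```

Thousands of times the proton’s own rest energy, and dozens of times the mass of a Higgs boson: that’s why “pretty much anything” is on the menu.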

So if you want to understand how to create new particles, you don’t need a deep understanding of the mysteries of quantum field theory. Just turn on the lights.

Only the Boring Kind of Parallel Universes

PARALLEL UNIVERSES AT THE LHC??

No. No. Bad journalist. See what happens when you…

Mir Faizal, one of the three-strong team of physicists behind the experiment, said: “Just as many parallel sheets of paper, which are two dimensional objects [breadth and length] can exist in a third dimension [height], parallel universes can also exist in higher dimensions.”

Bad physicist, bad! No biscuit for you!

Not nice at all!

For the technically-minded, Sabine Hossenfelder goes into thorough detail about what went wrong here. Not only do parallel universes have nothing to do with what Mir Faizal and collaborators have been studying, but the actual paper they’re hyping here is apparently riddled with holes.

BLACK holes! …no, actually, just logic holes.

But why did parallel universes even come up? If they have nothing to do with Faizal’s work, why did he mention them? Do parallel universes ever come up in real physics at all?

The answer to this last question is yes. There are real, viable ideas in physics that involve parallel universes. The universes involved, however, are usually boring ones.

The ideas are generally referred to as brane-world theories. If you’ve heard of string theory, you’ve probably heard that it proposes that the world is made of tiny strings. That’s all well and good, but it’s not the whole story. String theory has other sorts of objects in it too: higher-dimensional generalizations of strings called membranes, branes for short. In fact, M theory, the theory of which every string theory is some low-energy limit, has no strings at all, just branes.

When these branes are one-dimensional, they’re strings. When they’re two-dimensional, they’re what you would normally picture as a membrane, a vibrating sheet, potentially infinite in size. When they’re three-dimensional, they fill three-dimensional space, again potentially up to infinity.

Filling three dimensional space, out to infinity…well that sure sounds a whole lot like what we’d normally call a universe.

In brane-world constructions, what we call our universe is precisely this sort of three-dimensional brane. It then lives in a higher-dimensional space, where its position in this space influences things like the strength of gravity, or the speed at which the universe expands.

Sometimes (not all the time!) these sorts of constructions include other branes, besides the one that contains our universe. These other branes behave in a similar way, and can have very important effects on our universe. They, if anything, are the parallel universes of theoretical physics.

It’s important to point out, though, that these aren’t the sort of sci-fi parallel universes you might imagine! You aren’t going to find a world where everyone has a goatee, or even a world with an empty earth full of teleporting apes.

Pratchett reference!

That’s because, in order for these extra branes to do useful physical work, they generally have to be very different from our world. They’re worlds where gravity is very strong, or worlds with dramatically different densities of energy and matter. In the end, this means they’re not even the sort of universes that produce interesting aliens, or where we could send an astronaut, or really anything that lends itself well to (non-mathematical) imagination. From a sci-fi perspective, they’re as boring as can be.

Faizal’s idea, though, doesn’t even involve the boring kind of parallel universe!

His idea involves extra dimensions, specifically what physicists refer to as “large” extra dimensions, in contrast with the small extra dimensions of string theory. Large extra dimensions can explain the weakness of gravity, and theories that use them often predict that it’s much easier to create microscopic black holes than it otherwise would be. So far, these models haven’t had much luck at the LHC, and while I get the impression that they haven’t been completely ruled out, they aren’t very popular anymore.

The thing is, extra dimensions don’t mean parallel universes.

In fiction, the two get used interchangeably a lot. People go to “another dimension”, vaguely described as traveling along another dimension of space, and find themselves in a strange new world. In reality, though, there’s no reason to think that traveling along an extra dimension would put you in any sort of “strange new world”. The whole reason that our world is limited to three dimensions is because it’s “bound” to something: a brane, in the string theory picture. If there’s not another brane to bind things to, traveling in an extra dimension won’t put you in a new universe, it will just put you in an empty space where none of the types of matter you’re made of even exist.

It’s really tempting, when talking to laypeople, to fall back on stories. If you mention parallel universes, their faces light up with the idea that this is something they get, if only from imaginary examples. It gives you that same sense of accomplishment as if you had actually taught them something real. But you haven’t. It’s wrong, and Mir Faizal shouldn’t have stooped to doing it.

What Counts as a Fundamental Force?

I’m giving a presentation next Wednesday for Learning Unlimited, an organization that presents educational talks to seniors in Woodstock, Ontario. The talk introduces the fundamental forces and talks about Yang and Mills before moving on to introduce my work.

While practicing the talk today, someone from Perimeter’s outreach department pointed out a rather surprising missing element: I never mention gravity!

Most people know that there are four fundamental forces of nature. There’s Electromagnetism, there’s Gravity, there’s the Weak Nuclear Force, and there’s the Strong Nuclear Force.

Listed here by their most significant uses.

What ties these things together, though? What makes them all “fundamental forces”?

Mathematically, gravity is the odd one out here. Electromagnetism, the Weak Force, and the Strong Force all share a common description: they’re Yang-Mills forces. Gravity isn’t. While you can sort of think of it as a Yang-Mills force “squared”, it’s quite a bit more complicated than the Yang-Mills forces.

You might be objecting that the common trait of the fundamental forces is obvious: they’re forces! And indeed, you can write down a force law for gravity, and a force law for E&M, and umm…

[Mumble Mumble]

Ok, it’s not quite as bad as xkcd would have us believe. You can actually write down a force law for the weak force, if you really want to, and it’s at least sort of possible to talk about the force exerted by the strong interaction.

All that said, though, why are we thinking about this in terms of forces? Forces are a concept from classical mechanics. For a beginning physics student, they come up again and again, in free-body diagram after free-body diagram. But by the time a student learns quantum mechanics, and quantum field theory, they’ve already learned other ways of framing things where forces aren’t mentioned at all. So while forces are kind of familiar to people starting out, they don’t really match onto anything that most quantum field theorists work with, and it’s a bit weird to classify things that only really appear in quantum field theory (the Weak Nuclear Force, the Strong Nuclear Force) based on whether or not they’re forces.

Isn’t there some connection, though? After all, gravity, electromagnetism, the strong force, and the weak force may be different mathematically, but at least they all involve bosons.

Well, yes. And so does the Higgs.

The Higgs is usually left out of listings of the fundamental forces, because it’s not really a “force”. It doesn’t have a direction, instead it works equally at every point in space. But if you include spin 2 gravity and spin 1 Yang-Mills forces, why not also include the spin 0 Higgs?

Well, if you’re doing that, why not include fermions as well? People often think of fermions as “matter” and bosons as “energy”, but in fact both have energy, and neither is made of it. Electrons and quarks are just as fundamental as photons and gluons and gravitons, just as central a part of how the universe works.

I’m still trying to decide whether my presentation about Yang-Mills forces should also include gravity. On the one hand, it would make everything more familiar. On the other…pretty much this entire post.

What Can Pi Do for You?

Tomorrow is Pi Day!

And what a Pi Day! 3/14/15 (if you’re in the US, Belize, Micronesia, some parts of Canada, the Philippines, or Swahili-speaking Kenya), best celebrated at 9:26:53, if you’re up by then. Grab a slice of pie, or cake if you really must, and enjoy!

If you don’t have some of your own, download this one!

Pi is great not just because it’s fun to recite digits and eat pastries, but because it serves a very important role in physics. That’s because, often, pi is one of the most “natural” ways to get larger numbers.

Suppose you’re starting with some sort of “natural” theory. Here I don’t mean natural in the technical sense. Instead, I want you to imagine a theory that has very few free parameters, a theory that is almost entirely fixed by mathematics.

Many physicists hope that the world is ultimately described by this sort of theory, but it’s hard to see in the world we live in. There are so many different numbers, from the tiny mass of the electron to the much larger mass of the top quark, that would all have to come from a simple, overarching theory. Often, it’s easier to get these numbers when they’re made out of factors of pi.

Why is pi easy to get?

In general, pi shows up a lot in physics and mathematics, and its appearance can be mysterious to the uninitiated, as this joke related by Eugene Wigner in an essay I mentioned a few weeks ago demonstrates:

THERE IS A story about two friends, who were classmates in high school, talking about their jobs. One of them became a statistician and was working on population trends. He showed a reprint to his former classmate. The reprint started, as usual, with the Gaussian distribution and the statistician explained to his former classmate the meaning of the symbols for the actual population, for the average population, and so on. His classmate was a bit incredulous and was not quite sure whether the statistician was pulling his leg. “How can you know that?” was his query. “And what is this symbol here?” “Oh,” said the statistician, “this is pi.” “What is that?” “The ratio of the circumference of the circle to its diameter.” “Well, now you are pushing your joke too far,” said the classmate, “surely the population has nothing to do with the circumference of the circle.”

While it may sound silly, in a sense the population really is connected to the circumference of the circle. That’s because pi isn’t just about circles, pi is about volumes.

Take a bit to check out that link. Not just the area of a circle, but the volume of a sphere, and that of all sorts of higher-dimensional ball-shaped things, is calculated with the value of pi. It’s not just spheres, either: pi appears in the volume of many higher-dimensional shapes.
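The pattern is easy to play with yourself. Here’s a sketch using the standard n-ball volume formula, V_n(r) = π^(n/2) r^n / Γ(n/2 + 1) — standard mathematics, not anything special from the linked page:

```python
import math

def ball_volume(n, r=1.0):
    """Volume of an n-dimensional ball: pi^(n/2) * r^n / Gamma(n/2 + 1)."""
    return math.pi ** (n / 2) * r ** n / math.gamma(n / 2 + 1)

print(ball_volume(2))   # area of a unit circle: pi
print(ball_volume(3))   # volume of a unit sphere: 4*pi/3
print(ball_volume(10))  # even in ten dimensions, pi runs the show
```

Every even dimension adds another whole factor of pi, which is exactly the kind of building block a “natural” theory has lying around.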

Why does this matter for physics? Because you don’t need a literal shape to get a volume. Most of the time, there aren’t literal circles and spheres giving you factors of pi…but there are abstract spaces, and they contain circles and spheres. Electric and magnetic fields might not be shaped like circles, but the mathematics that describes them can still make good use of a circular space.

That’s why, when I describe the mathematical formulas I work with, formulas that often produce factors of pi, mathematicians will often ask if they’re the volume of some particular mathematical space. It’s why Nima Arkani-Hamed is trying to understand related formulas by thinking of them as the volume of some new sort of geometrical object.

All this is not to say you should go and plug factors of pi together until you get the physical constants you want. Throw in enough factors of pi and enough other numbers and you can match current observations, sure…but you could also match anything else in the same way. Instead, it’s better to think of pi as an assistant: waiting in the wings, ready to translate a pure mathematical theory into the complicated mess of the real world.

So have a Happy Pi Day, everyone, and be grateful to our favorite transcendental number. The universe would be a much more boring place without it.

How to Predict the Mass of the Higgs

Did Homer Simpson predict the mass of the Higgs boson?

No, of course not.

Apart from the usual reasons, he’s off by more than a factor of six.

If you play with the numbers, it looks like Simon Singh (the popular science writer who reported the “discovery” Homer made as a throwaway joke in a 1998 Simpsons episode) made the classic physics mistake of losing track of a factor of 2\pi. In particular, it looks like he mistakenly treated the Planck constant, h, as if it were the reduced Planck constant, \hbar, when actually h is \hbar times 2\pi. So while Singh read Homer's prediction as 123 GeV, surprisingly close to the actual Higgs mass of 125 GeV found in 2012, in fact Homer predicted the somewhat more embarrassing value of 775 GeV.
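You can check the arithmetic of the lost factor yourself. Starting from Singh's rounded reading of 123 GeV and restoring the missing 2\pi gives roughly 773 GeV; the small gap to the quoted 775 GeV comes from the rounding in the 123:

```python
import math

h_over_hbar = 2 * math.pi   # h = 2*pi*hbar, the factor Singh lost track of

singh_reading = 123         # GeV, the value Singh reported
homer_actual = singh_reading * h_over_hbar

print(round(homer_actual))  # roughly 773 GeV, nowhere near the 125 GeV Higgs
```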

D’Oh!

That was boring. Let’s ask a more interesting question.

Did Gordon Kane predict the mass of the Higgs boson?

I’ve talked before about how it seems impossible that string theory will ever make any testable predictions. The issue boils down to one of too many possibilities: string theory predicts different consequences for different ways that its six (or seven for M theory) extra dimensions can be curled up. Since there is an absurdly vast number of ways this can be done, anything you might want to predict (say, the mass of the electron) has an absurd number of possible values.

Gordon Kane and collaborators get around this problem by tackling a different one. Instead of trying to use string theory to predict things we already know, like the mass of the electron, they assume these things are already true. That is, they assume we live in a world with electrons that have the mass they really have, and quarks that have the mass they really have, and so on. They assume that we live in a world that obeys all of the discoveries we’ve already made, and a few we hope to make. And, they assume that this world is a consequence of string (or rather M) theory.

From that combination of assumptions, they then figure out the consequences for things that aren’t yet known. And in a 2011 paper, they predicted the Higgs mass would be between 105 and 129 GeV.

I have a lot of sympathy for this approach, because it’s essentially the same thing that non-string-theorists do. When a particle physicist wants to predict what will come out of the LHC, they don’t try to get it from first principles: they assume the world works as we have discovered, make a few mild extra assumptions, and see what new consequences come out that we haven’t observed yet. If those particle physicists can be said to make predictions from supersymmetry, or (shudder) technicolor, then Gordon Kane is certainly making predictions from string theory.

So why haven’t you heard of him? Even if you have, why, if this guy successfully predicted the mass of the Higgs boson, are people still saying that you can’t make predictions with string theory?

Trouble is, making predictions is tricky.

Part of the problem is timing. Gordon Kane’s paper went online in December of 2011. The Higgs mass was announced in July 2012, so you might think Kane got a six month head-start. But when something is announced isn’t the same as when it’s discovered. For a big experiment like the Large Hadron Collider, there’s a long road between the first time something gets noticed and the point where everyone is certain enough that they’re ready to announce it to the world. Rumors fly, and it’s not clear that Kane and his co-authors wouldn’t have heard them.

Assumptions are the other issue. Remember when I said, a couple paragraphs up, that Kane's group assumed “that we live in a world that obeys all of the discoveries we've already made, and a few we hope to make”? That last part is what makes things tricky. There were a few extra assumptions Kane made, beyond those needed to reproduce the world we know. For many people, some of these extra assumptions are suspicious. They worry that the assumptions might have been chosen, not just because they made sense, but because they happened to give the right (rumored) mass of the Higgs.

If you want to predict something in physics, it’s not just a matter of getting in ahead of the announcement with the right number. For a clear prediction, you need to be early enough that the experiments haven’t yet even seen hints of what you’re looking for. Even then, you need your theory to be suitably generic, so that it’s clear that your prediction is really the result of the math and not of your choices. You can trade off aspects of this: more accuracy for a less generic theory, better timing for looser predictions. Get the formula right, and the world will laud you for your prediction. Wrong, and you’re Homer Simpson. Somewhere in between, though, and you end up in that tricky, tricky grey area.

Like Gordon Kane.