# The opposite of Witches

On Halloween I have a tradition of posts about spooky topics, whether traditional Halloween fare or things that spook physicists. This year it’s a little of both.

Mage: The Ascension is a role-playing game set in a world in which belief shapes reality. Players take the role of witches and warlocks, casting spells powered by their personal paradigms of belief. The game allows for pretty much any modern-day magic-user you could imagine, from Wiccans to martial artists.

Even stereotypical green witches, probably

Despite all the options, I was always more interested in the game’s villains, the witches’ opposites, the Technocracy.

The Technocracy answers an inevitable problem with any setting involving modern-day magic: why don’t people notice? If reality is powered by belief, why does no one believe in magic?

In the Technocracy’s case, the answer is a vast conspiracy of mages with a scientific bent, manipulating public belief. Much like the witches and warlocks of Mage are a grab-bag of every occult belief system, the Technocracy combines every oppressive government conspiracy story you can imagine, all with the express purpose of suppressing the supernatural and maintaining scientific consensus.

This quote is from another game by the same publisher, but it captures the attitude of the Technocracy, and the magnitude of what is being claimed here:

Do not believe what the scientists tell you. The natural history we know is a lie, a falsehood sold to us by wicked old men who would make the world a dull gray prison and protect us from the dangers inherent to freedom. They would have you believe our planet to be a lonely starship, hurtling through the void of space, barren of magic and in need of a stern hand upon the rudder.

Close your mind to their deception. The time before our time was not a time of senseless natural struggle and reptilian rage, but a time of myth and sorcery. It was a time of legend, when heroes walked Creation and wielded the very power of the gods. It was a time before the world was bent, a time before the magic of Creation lessened, a time before the souls of men became the stunted, withered things they are today.

It can be a fun exercise to see how far doubt can take you, how much of the scientific consensus you can really be confident of and how much could be due to a conspiracy. Believing in the Technocracy would be the most extreme version of this, but Flat-Earthers come pretty close. Once you’re doubting whether the Earth is round, you have to imagine a truly absurd conspiracy to back it up.

On the other extreme, there are the kinds of conspiracies that barely take a conspiracy at all. Big experimental collaborations, like ATLAS and CMS at the LHC, keep a tight handle on what their members publish. (If you’re curious how much of one, here’s a talk by a law professor about, among other things, the Constitution of CMS. Yes, it has one!) An actual conspiracy would still be outed in about five minutes, but you could imagine something subtler, the experiment sticking to “safe” explanations and refusing to publish results that look too unusual, on the basis that they’re “probably” wrong. Worries about that sort of thing can leave actual physicists spooked.

There’s an important dividing line with doubt: too much and you risk invoking a conspiracy more fantastical than the science you’re doubting in the first place. The Technocracy doesn’t just straddle that line, it hops past it off into the distance. Science is too vast, and too unpredictable, to be controlled by some shadowy conspiracy.

Or maybe that’s just what we want you to think!

# More Travel

I’m visiting the Niels Bohr Institute this week, on my way back from Amplitudes.

You might recognize the place from old conference photos.

Amplitudes itself was nice. There weren’t any surprising new developments, but a lot of little “aha” moments when one of the speakers explained something I’d heard vague rumors about. I figured I’d mention a few of the things that stood out. Be warned, this is going to be long and comparatively jargon-heavy.

The conference organizers were rather daring in scheduling Nima Arkani-Hamed for the first talk, as Nima has a tendency to arrive at the last minute and talk for twice as long as you ask him to. Miraculously, though, things worked out, if only barely: Nima arrived at the wrong campus and ran most of the way back, showing up within five minutes of the start of the conference. He also stuck to his allotted time, possibly out of courtesy to his student, Yuntao Bai, who was speaking next.

Between the two of them, Nima and Yuntao covered an interesting development, tying the Amplituhedron together with the string theory-esque picture of scattering amplitudes pioneered by Freddy Cachazo, Song He, and Ellis Ye Yuan (or CHY). There’s a simpler (and older) Amplituhedron-like object called the associahedron that can be thought of as what the Amplituhedron looks like on the surface of a string, and CHY’s setup can be thought of as a sophisticated map that takes this object and turns it into the Amplituhedron. It was nice to hear from both Nima and his student on this topic: Nima’s talks are often high on motivation but low on detail, so it was valuable to have Yuntao up next to fill in the blanks.

Anastasia Volovich talked about Landau singularities, a topic I’ve mentioned before. What I hadn’t appreciated was how much they can do with them at this point. Originally, Juan Maldacena had suggested that these singularities, mathematical points that determine the behavior of amplitudes first investigated by Landau in the 60’s, might explain some of the simplicity we’ve observed in N=4 super Yang-Mills. They ended up not being enough by themselves, but what Volovich and collaborators are discovering is that with a bit of help from the Amplituhedron they explain quite a lot. In particular, if they start with the Amplituhedron and do a procedure similar to Landau’s, they can find the simpler set of singularities allowed by N=4 super Yang-Mills, at least for the examples they’ve calculated. It’s still a bit unclear how this links to their previous investigations of these things in terms of cluster algebras, but it sounds like they’re making progress.

Dmitry Chicherin gave me one of those minor “aha” moments. One big useful fact about scattering amplitudes in N=4 super Yang-Mills is that they’re “dual” to different mathematical objects called Wilson loops, a fact which allows us to compare to the “POPE” approach of Basso, Sever, and Vieira. Chicherin asks the question: “What if you’re not calculating a scattering amplitude or a Wilson loop, but something halfway in between?” Interestingly, this has an answer, with the “halfway between” objects having a similar duality among themselves.

Yorgos Papathanasiou talked about work I’ve been involved with. I’ll probably cover it in detail in another post, so for now I’ll just mention that we’re up to six loops!

Andy Strominger talked about soft theorems. It’s always interesting seeing people who don’t traditionally work on amplitudes giving talks at Amplitudes. There’s a range of responses, from integrability people (who are basically welcomed like family) to work on fairly unrelated areas that have some “amplitudes” connection (met with yawns except from the few people interested in the connection). The response to Strominger was neither welcome nor boredom, but lively debate. He’s clearly doing something interesting, but many specialists worried he was ignorant of important no-go results in the field that could hamstring some of his bolder conjectures.

The second day focused on methods for more practical calculations, and had the overall effect of making me really want to clean up my code. Tiziano Peraro’s finite field methods in particular look like they could be quite useful. There were two competing bases of integrals on display: von Manteuffel’s finite integrals, and Rutger Boels’s uniform transcendental integrals later in the conference. Both seem to have their own virtues, and I ended up asking Rob Schabinger if it was possible to combine the two, with the result that he’s apparently now looking into it.
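To give a flavor of what finite field methods buy you (this is my own toy illustration, not anything from Peraro’s talk): instead of dragging enormous exact fractions through an intermediate calculation, you work modulo a big prime, where every number fits in a machine word, and only reconstruct the exact answer at the end. A minimal sketch of that final reconstruction step, using the standard extended-Euclid (“Wang”) rational reconstruction algorithm:

```python
from math import isqrt, gcd

def rational_reconstruction(a, p):
    """Recover a fraction n/d from its modular image a = n * d^{-1} mod p.

    Works whenever |n|, |d| <= sqrt(p/2), by running the extended
    Euclidean algorithm on (p, a) and stopping at the first small remainder."""
    bound = isqrt(p // 2)
    r0, t0 = p, 0
    r1, t1 = a % p, 1
    while r1 > bound:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    if t1 == 0 or abs(t1) > bound or gcd(r1, abs(t1)) != 1:
        raise ValueError("no fraction in range")
    # Normalize so the denominator is positive.
    n, d = (r1, t1) if t1 > 0 else (-r1, -t1)
    return n, d

p = 2**31 - 1                    # a large prime
a = 22 * pow(7, -1, p) % p       # the image of 22/7 in Z_p
print(rational_reconstruction(a, p))   # → (22, 7)
```

The real methods do this for whole rational functions, sampling them at many points and over several primes, but the number-theoretic trick underneath is the same.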

The more practical talks that day had a clear focus on calculations with two loops, which are becoming increasingly viable for LHC-relevant calculations. From talking to people who work on this, I get the impression that the goal of these calculations isn’t so much to find new physics as to confirm and investigate new physics found via other methods. Things are complicated enough at two loops that for the moment it isn’t feasible to describe what all the possible new particles might do at that order, and instead the goal is to understand the standard model well enough that if new physics is noticed (likely based on one-loop calculations) then the details can be pinned down by two-loop data. But this picture could conceivably change as methods improve.

Wednesday was math-focused. We had a talk by Francis Brown on his conjecture of a cosmic Galois group. This is a topic I knew a bit about already, since it’s involved in something I’ve been working on. Brown’s talk cleared up some things, but also shed light on the vagueness of the proposal. As with Yorgos’s talk, I’ll probably cover more about this in a future post, so I’ll skip the details for now.

There was also a talk by Samuel Abreu on a much more physical picture of the “symbols” we calculate with. This is something I’ve seen presented before by Ruth Britto, and it’s a setup I haven’t looked into as much as I ought to. It does seem at the moment that they’re limited to one loop, which is a definite downside. Other talks discussed elliptic integrals, the bogeyman that we still can’t deal with by our favored means but that people are at least understanding better.

The last talk on Wednesday before the hike was by David Broadhurst, who’s quite a character in his own right. Broadhurst sat in the front row and asked a question after nearly every talk, usually bringing up papers at least fifty years old, if not one hundred and fifty. At the conference dinner he was exactly the right person to read the Address to the Haggis, resurrecting a thick Scottish accent from his youth. Broadhurst’s techniques for handling high-loop elliptic integrals are quite impressively powerful, leaving me wondering if the approach can be generalized.

Thursday focused on gravity. Radu Roiban gave a better idea of where he and his collaborators are on the road to seven-loop supergravity and what the next bottlenecks are along the way. Oliver Schlotterer’s talk was another one of those “aha” moments, helping me understand a key difference between two senses in which gravity is Yang-Mills squared (the Kawai-Lewellen-Tye relations and BCJ). In particular, the latter is much more dependent on specifics of how you write the scattering amplitude, so to the extent that you can prove something more like the former at higher loops (the original was only for trees, unlike BCJ) it’s quite valuable. Schlotterer has managed to do this at one loop, using the “Q-cut” method I’ve (briefly) mentioned before. The next day’s talk by Emil Bjerrum-Bohr focused more heavily on these Q-cuts, including a more detailed example at two loops than I’d seen that group present before.

There was also a talk by Walter Goldberger about using amplitudes methods for classical gravity, a subject I’ve looked into before. It was nice to see a more thorough presentation of those ideas, including a more honest appraisal of which amplitudes techniques are really helpful there.

There were other interesting topics, but I’m already way over my usual post length, so I’ll sign off for now. Videos from all but a few of the talks are now online, so if you’re interested you should watch them on the conference page.

# Poll Results, and What’s Next

I’ll leave last week’s poll up a while longer as more votes trickle in, but the overall pattern (beyond “Zipflike”) is pretty clear.

From pretty early on, most requests were for more explanations of QFT, gravity, and string theory concepts, with amplitudes content a clear second. This is something I can definitely do more of: I haven’t had much inspiration for interesting pieces of this sort recently, but it’s something I can ramp up in future.

I suspect that many of the people voting for more QFT and more amplitudes content were also interested in something else, though: more physics news. Xezlec mentioned that with Résonaances and Of Particular Significance quiet, there’s an open niche for vaguely reasonable people blogging about physics.

The truth is, I didn’t think of adding a “more physics news” option to the poll. I’m not a great source of news: not being a phenomenologist, I don’t keep up with the latest experimental results, and since my sub-field is small and insular I’m not always aware of the latest thing Witten or Maldacena is working on.

For an example of the former: recently, various LHC teams presented results at the Moriond and Aspen conferences, with no new evidence of supersymmetry in the data they’ve gathered thus far. This triggered concessions on several bets about SUSY (including an amusingly awkward conversation about how to pay one of them).

And I only know about that because other bloggers talked about it.

So I’m not going to be a reliable source of physics news.

With that said, knowing there’s a sizable number of people interested in this kind of thing is helpful. I’ve definitely had times when I saw something I found interesting, but wasn’t sure if my audience would care. (For example, recently there’s been some substantial progress on the problem that gave this blog its name.) Now that I know some of you are interested, I’ll err on the side of posting about these kinds of things.

“What’s it like to be a physicist” and science popularization were both consistently third and fourth in the poll, switching back and forth as more votes came in. This tells me that while many of you want more technical content, there are still people interested in pieces aimed at a broader audience, so I won’t abandon those.

The other topics were fairly close together, with the more “news-y” ones (astrophysics/cosmology and criticism of bad science coverage) beating the less “news-y” ones. This also supports my guess that people were looking for a “more physics news” option. A few people even voted for “more arguments”, which was really more of a joke topic: getting into arguments with other bloggers tends to bring in readers, but it’s not something I ever plan to do intentionally.

So, what’s next? I’ll explain more quantum field theory, talk more about interesting progress in amplitudes, and mention news when I come across it, trusting you guys to find it interesting. I’ll keep up with the low-level stuff, and with trying to humanize physics, to get the public to understand what being a physicist is all about. And I’ll think about some of the specific suggestions you gave: I’m always looking for good post ideas.

# What Space Can Tell Us about Fundamental Physics

Back when LIGO announced its detection of gravitational waves, there was one question people kept asking me: “what does this say about quantum gravity?”

The answer, each time, was “nothing”. LIGO’s success told us nothing about quantum gravity, and very likely LIGO will never tell us anything about quantum gravity.

The sheer volume of questions made me think, though. Astronomy, astrophysics, and cosmology fascinate people. They capture the public’s imagination in a way that makes them expect breakthroughs about fundamental questions. Especially now, with the LHC so far seeing nothing new since the Higgs, people are turning to space for answers.

Is that a fair expectation? Well, yes and no.

Most astrophysicists aren’t concerned with finding new fundamental laws of nature. They’re interested in big systems like stars and galaxies, where we know most of the basic rules but can’t possibly calculate all their consequences. Like most physicists, they’re doing the vital work of “physics of decimals”.

At the same time, there’s a decent chunk of astrophysics and cosmology that does matter for fundamental physics. Just not all of it. Here are some of the key areas where space has something important to say about the fundamental rules that govern our world:

1. Dark Matter:

Galaxies rotate at different speeds than their stars would alone. Clusters of galaxies bend light that passes by, and do so more than their visible mass would suggest. And when scientists try to model the evolution of the universe, from early images to its current form, the models require an additional piece: extra matter that cannot interact with light. All of this suggests that there is some extra “dark” matter in the universe, not described by our standard model of particle physics.
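The rotation-speed argument can be made quantitative with freshman mechanics. A minimal sketch (the galactic mass here is an illustrative round number of my own, not a measurement): if only the visible mass pulled on the stars, orbital speeds would fall off like $1/\sqrt{r}$ far from the center.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_VISIBLE = 2e41     # rough visible mass of a galaxy, kg (illustrative)
KPC = 3.086e19       # one kiloparsec in meters

def keplerian_speed(r_kpc):
    """Circular orbital speed (km/s) if only the central visible mass pulls."""
    r = r_kpc * KPC
    return math.sqrt(G * M_VISIBLE / r) / 1000.0

# Speeds predicted from visible mass alone, at increasing radius:
for r in (5, 10, 20, 40):
    print(f"{r:>2} kpc: {keplerian_speed(r):6.1f} km/s")
```

Observed rotation curves instead stay roughly flat out to large radius, which means the enclosed mass must keep growing with distance, well beyond where the visible stars give out. That extra enclosed mass is the dark matter inference.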

If we want to understand this dark matter, we need to know more about its properties, and much of that can be learned from astronomy. If it turns out dark matter isn’t really matter after all, if it can be explained by a modification of gravity or better calculations of gravity’s effects, then it still will have important implications for fundamental physics, and astronomical evidence will still be key to finding those implications.

2. Dark Energy (/Cosmological Constant/Inflation/…):

The universe is expanding, and its expansion appears to be accelerating. It also seems more smooth and uniform than expected, suggesting that it had a period of much greater acceleration early on. Both of these suggest some extra quantity: a changing acceleration, a “dark energy”, the sort of thing that can often be explained by a new scalar field like the Higgs.

Again, the specifics matter: how (and perhaps whether) the universe is expanding now, and what kinds of early expansion (if any) the shape of the universe suggests, will almost certainly have implications for fundamental physics.

3. Limits on stable stuff:

Let’s say you have a new proposal for particle physics. You’ve predicted a new particle, but it can’t interact with anything else, or interacts so weakly we’d never detect it. If your new particle is stable, then you can still say something about it, because its mass would have an effect on the early universe. Too many such particles would throw off cosmologists’ models, ruling your proposal out.

Alternatively, you might predict something that could be detected, but hasn’t, like a magnetic monopole. Then cosmologists can tell you how many such particles would have been produced in the early universe, and thus how likely we would be to detect them today. If you predict too many particles and we don’t see them, then that becomes evidence against your proposal.

4. “Cosmological Collider Physics”:

A few years back, Nima Arkani-Hamed and Juan Maldacena suggested that the early universe could be viewed as an extremely high energy particle collider. While this collider performed only one experiment, the results from that experiment are spread across the sky, and observed patterns in the early universe should tell us something about the particles produced by the cosmic collider.

People are still teasing out the implications of this idea, but it looks promising, and could mean we have a lot more to learn from examining the structure of the universe.

5. Big Weird Space Stuff:

If you suspect we live in a multiverse, you might want to look for signs of other universes brushing up against our own. If your model of the early universe predicts vast cosmic strings, maybe a gravitational wave detector like LIGO will be able to see them.

6. Unexpected weirdness:

In all likelihood, nothing visibly “quantum” happens at the event horizons of astrophysical black holes. If you think there’s something to see though, the Event Horizon Telescope might be able to see it. There’s a grab bag of other predictions like this: situations where we probably won’t see anything, but where at least one person thinks there’s a question worth asking.

I’ve probably left something out here, but this should give you a general idea. There is a lot that fundamental physics can learn from astronomy, from the overall structure and origins of the universe to unexplained phenomena like dark matter. But not everything in astronomy has these sorts of implications: for the most part, astronomy is interesting not because it tells us something about the fundamental laws of nature, but because it tells us how the vast space above us actually happens to work.

# What If the Field Is Doomed?

Around Halloween, I have a tradition of exploring the spooky and/or scary side of physics (sometimes rather tenuously). This time, I want to talk about something particle physicists find scary: the future of the field.

For a long time now, our field has centered on particle colliders. Early colliders confirmed the existence of quarks and gluons, and populated the Standard Model with a wealth of particles, some expected and some not. Now, an enormous amount of effort has been poured into the Large Hadron Collider, which found the Higgs…and so far, nothing else.

Plans are being discussed for an even larger collider, in Europe or China, but it’s not clear that either will be funded. Even if the case for new physics at such a collider isn’t as strong as it was for the LHC, there are properties of the Higgs that the LHC won’t be able to measure, things it’s important to check with a more powerful machine.

That’s the case we’ll have to make to the public, if we want such a collider to be built. But in addition to the scientific reasons, there are selfish reasons to hope for a new collider. Without one, it’s not clear the field can survive in its current form.

By “the field”, here, I don’t just mean those focused on making predictions for collider physics. My work isn’t plugged particularly tightly into the real world, and the same is true of most string theorists. Naively, you’d think it wouldn’t matter to us if a new collider gets built.

The trouble is, physics is interconnected. We may not all make predictions about the world, but the purpose of the tools we build and concepts we explore is to eventually make contact. On grant applications, we talk about that future, one that leads not just to understanding the mathematics and models we use but to understanding reality. And for a long while, a major theme in those grant applications has been collider physics.

Different sub-fields are vulnerable to this in different ways. Surprisingly, the people who directly make predictions for the LHC might have it easiest. Many of them can pivot, and make predictions for cosmological observations and cheaper dark matter detection experiments. Quite a few are already doing so.

It’s harder for my field, for amplitudeology. We try to push the calculation techniques of theoretical physics to greater and greater precision…but without colliders, there are fewer experiments that can match that precision. Cosmological observations and dark matter detection won’t need four-loop calculations.

If there isn’t a next big collider, our field won’t dry up overnight. Our work is disconnected enough, at a far enough remove from reality, that it takes time for that sort of change to be reflected in our funding. Optimistically, this gives people enough time to change gears and alter their focus to the less collider-dependent parts of the field. Pessimistically, it means people would be working on a zombie: shambling around in a field that is already dead but can’t admit it.

Well I had to use some Halloween imagery

My hope is that this won’t happen. Even if the new colliders don’t get approved and collider physics goes dormant, I’d like to think my colleagues are adaptable enough to stay useful as the world’s demands change. But I’m young in this field, I haven’t seen it face these kinds of challenges before. And so, I worry.

# A Collider’s Eye View

When it detected the Higgs, what did the LHC see, exactly?

What do you see with your detector-eyes, CMS?

The first problem is that the Higgs, like most particles produced in particle colliders, is unstable. In a very short amount of time the Higgs transforms into two or more lighter particles. Often, these particles will decay in turn, possibly many more times. So when the LHC sees a Higgs boson, it doesn’t really “see the Higgs”.

The second problem is that you can’t “see” the lighter particles either. They’re much too small for that. Instead, the LHC has to measure their properties.

Does the particle have a charge? Then its path will curve in a magnetic field, and it will send electrical signals in silicon. So the LHC can “see” charge.

Can the particle be stopped, absorbed by some material? Getting absorbed releases energy, lighting up a detector. So the LHC can “see” energy, and what it takes for a particle to be absorbed.
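For a sense of the scales involved (my own back-of-the-envelope numbers, not anything official): the standard rule of thumb relating momentum, field, and track curvature is $p\,[\mathrm{GeV}/c] \approx 0.3\,|q|\,B\,[\mathrm{T}]\,r\,[\mathrm{m}]$.

```python
def bending_radius_m(p_gev, charge_e, b_tesla):
    """Radius of curvature of a charged track in a magnetic field,
    using the rule of thumb p [GeV/c] ~ 0.3 * |q| [e] * B [T] * r [m]."""
    return p_gev / (0.3 * abs(charge_e) * b_tesla)

# A 10 GeV singly charged track in a roughly 3.8 T solenoid field
# (about the strength of the CMS magnet):
r = bending_radius_m(10.0, 1, 3.8)
print(f"{r:.2f} m")   # roughly 8.8 m
```

Higher-momentum tracks curve less, so measuring that curvature precisely is exactly how the detector reads off a charged particle’s momentum, and the curve’s direction gives the sign of its charge.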

Diagram of a collider’s “eye”

And that’s…pretty much it. When the LHC “sees” the Higgs, what it sees is a set of tracks in a magnetic field, indicating charge, and energy in its detectors, caused by absorption at different points. Everything else has to be inferred: what exactly the particles were, where they decayed, and from what. Some of it can be figured out in real-time, some is only understood later once we can add up everything and do statistics.

On the face of it, this sounds about as impossible as astrophysics. Like astrophysics, it works in part because what the colliders see is not the whole story. The strong force has to both be consistent with our observations of hadrons, and with nuclear physics. Neutrinos aren’t just mysterious missing energy that we can’t track, they’re an important part of cosmology. And so on.

So in the sense of that massive, interconnected web of ideas, the LHC sees the Higgs. It sees patterns of charges and energies, binned into histograms and analyzed with statistics and cross-checked, implicitly or explicitly, against all of the rest of physics at every scale we know. All of that, together, is the collider’s eye view of the universe.

# Pentaquarks!

Earlier this week, the LHCb experiment at the Large Hadron Collider announced that, after painstakingly analyzing the data from earlier runs, they have decisive evidence of a previously unobserved particle: the pentaquark.

What’s a pentaquark? In simple terms, it’s five quarks stuck together. Stick two up quarks and a down quark together, and you get a proton. Stick a quark and an antiquark together, you get a meson of some sort. Five, you get a pentaquark.

(In this case, if you’re curious: two up quarks, one down quark, one charm quark and one anti-charm quark.)
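If you want to check the bookkeeping yourself, the standard quark electric charges (+2/3 for up-type quarks, −1/3 for down-type, with antiquarks flipped in sign) are all you need. The little helper below is my own illustration:

```python
from fractions import Fraction

# Standard quark electric charges, in units of the proton charge.
QUARK_CHARGE = {"u": Fraction(2, 3), "d": Fraction(-1, 3), "c": Fraction(2, 3)}

def total_charge(constituents):
    """Sum the quark charges; a trailing 'bar' marks an antiquark."""
    total = Fraction(0)
    for q in constituents:
        if q.endswith("bar"):
            total -= QUARK_CHARGE[q[:-3]]
        else:
            total += QUARK_CHARGE[q]
    return total

print(total_charge(["u", "u", "d"]))               # proton → 1
print(total_charge(["u", "u", "d", "c", "cbar"]))  # this pentaquark → 1
```

The charm and anti-charm cancel, so this pentaquark carries the same electric charge as a proton.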

Artist’s Conception

Crucially, this means pentaquarks are not fundamental particles. Fundamental particles aren’t like species, but composite particles like pentaquarks are: they’re examples of a dizzying variety of combinations of an already-known set of basic building blocks.

So why is this discovery exciting? If we already knew that quarks existed, and we already knew the forces between them, shouldn’t we already know all about pentaquarks?

Well, not really. People definitely expected pentaquarks to exist; they were predicted fifty years ago. But their exact properties, or how likely they were to show up? Largely unknown.

Quantum field theory is hard, and this is especially true of QCD, the theory of quarks and gluons. We know the basic rules, but calculating their large-scale consequences, which composite particles we’re going to detect and which we won’t, is still largely out of our reach. We have to supplement first-principles calculations with experimental data, to take bits and pieces and approximations until we get something reasonably sensible.

This is an important point in general, not just for pentaquarks. Often, people get very excited about the idea of a “theory of everything”. At best, such a theory would tell us the fundamental rules that govern the universe. The thing is, we already know many of these rules, even if we don’t yet know all of them. What we can’t do, in general, is predict their full consequences. Most of physics, most of science in general, is about investigating these consequences, coming up with models for things we can’t dream of calculating from first principles, and it really does start as early as “what composite particles can you make out of quarks?”

Pentaquarks have been a long time coming, long enough that people occasionally proposed models explaining why they didn’t exist. There are still other exotic states of quarks and gluons out there, like glueballs, that have been predicted but not yet observed. It’s going to take time, effort, and data before we fully understand composite particles, even though we know the rules of QCD.

# What’s the Matter with Dark Matter, Matt?

It’s very rare that I disagree with Matt Strassler. That said, I can’t help but think that, when he criticizes the press for focusing their LHC stories on dark matter, he’s missing an important element.

From his perspective, when the media says that the goal of the new run of the LHC is to detect dark matter, they’re just being lazy. People have heard of dark matter. They might have read that it makes up 23% of the universe, more than regular matter at 4%. So when an LHC physicist wants to explain what they’re working on to a journalist, the easiest way is to talk about dark matter. And when the journalist wants to explain the LHC to the public, they do the same thing.

This explanation makes sense, but it’s a little glib. What Matt Strassler is missing is that, from the public’s perspective, dark matter really is a central part of the LHC’s justification.

Now, I’m not saying that the LHC’s main goal is to detect dark matter! Directly detecting dark matter is pretty low on the LHC’s list of priorities. Even if it detects a new particle with the right properties to be dark matter, it still wouldn’t be able to confirm that it really is dark matter without help from another experiment that actually observes some consequence of the new particle among the stars. I agree with Matt when he writes that the LHC’s priorities for the next run are

1. studying the newly discovered Higgs particle in great detail, checking its properties very carefully against the predictions of the “Standard Model” (the equations that describe the known apparently-elementary particles and forces) to see whether our current understanding of the Higgs field is complete and correct, and

2. trying to find particles or other phenomena that might resolve the naturalness puzzle of the Standard Model, a puzzle which makes many particle physicists suspicious that we are missing an important part of the story, and

3. seeking either dark matter particles or particles that may be shown someday to be “associated” with dark matter.

Here’s the thing, though:

From the public’s perspective, why do we need to study the properties of the Higgs? Because we think it might be different than the Standard Model predicts.

Why do we think it might be different than the Standard Model predicts? More generally, why do we expect the world to be different from the Standard Model at all? Well there are a few reasons, but they generally boil down to two things: the naturalness puzzle, and the fact that the Standard Model doesn’t have anything that could account for dark matter.

Naturalness is a powerful motivation, but it’s hard to sell to the general public. Does the universe appear fine-tuned? Then maybe it just is fine-tuned! Maybe someone fine-tuned it!

These arguments miss the real problem with fine-tuning, but they’re hard to correct in a short article. Getting the public worried about naturalness is tough, tough enough that I don’t think we can demand it of the average journalist, or accuse them of being lazy if they fail to do it.

That leaves dark matter. And for all that naturalness is philosophically murky, dark matter is remarkably clear. We don’t know what 96% of the universe is made of! That’s huge, and not just in a “gee-whiz-cool” way. It shows, directly and intuitively, that physics still has something it needs to solve, that we still have particles to find. Unless you are a fan of (increasingly dubious) modifications to gravity like MOND, dark matter is the strongest possible justification for machines like the LHC.
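For the record, that 96% is just the shares quoted earlier in this post, with dark energy assumed to make up the remainder:

```python
dark_matter = 0.23    # share of the universe quoted earlier in the post
ordinary    = 0.04
dark_energy = 1.0 - dark_matter - ordinary   # the remainder, about 0.73

# Everything that isn't ordinary matter is, in one way or another, unexplained.
unknown = dark_matter + dark_energy
print(f"{unknown:.0%}")   # → 96%
```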

The LHC won’t confirm dark matter on its own. It might not directly detect it; that’s still quite up in the air. And even if it finds deviations from the Standard Model, it’s not likely they’ll be directly caused by dark matter, at least not in a simple way.

But the reason that the press is describing the LHC’s mission in terms of dark matter isn’t just laziness. It’s because, from the public’s perspective, dark matter is the only vaguely plausible reason to spend billions of dollars searching for new particles, especially when we’ve already found the Higgs. We’re lucky it’s such a good reason.

# Want to Make Something New? Just Turn on the Lights.

Isn’t it weird that you can collide two protons, and get something else?

It wouldn’t be so weird if you collided two protons, and out popped a quark. After all, protons are made of quarks. But how, if you collide two protons together, do you get a tau, or the Higgs boson: things that not only aren’t “part of” protons, but are more massive than a proton by themselves?

It seems weird…but in a way, it’s not. When a particle releases another particle that wasn’t inside it to begin with, it’s actually not doing anything more special than an everyday light bulb.

Eureka!

How does a light bulb work?

You probably know the basics: when an electrical current enters the bulb, the electrons in the filament start to move. They heat the filament up, releasing light.

That probably seems perfectly ordinary. But ask yourself for a moment: where did the light come from?

Light is made up of photons, elementary particles in their own right. When you flip a light switch, where do the photons come from? Were they stored in the light bulb?

Silly question, right? You don’t need to “store” light in a light bulb: light bulbs transform one type of energy (electrical, or the movement of electrons) into another type of energy (light, or photons).

Here’s the thing, though: mass is just another type of energy.

I like to describe mass as “energy we haven’t met yet”. Einstein’s equation, $E=mc^2$, relates a particle’s mass to its “rest energy”, the energy it would have if it stopped moving around and sat still. Even when a particle seems to be sitting still from the outside, though, there’s still a lot going on. “Composite” particles like protons have powerful forces between their internal quarks, while particles like electrons interact with the Higgs field. These processes give the particle energy even when it’s not moving, so from our perspective on the outside they’re giving the particle mass.
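The numbers here are easy to check for yourself. Here’s a quick sketch (using rounded CODATA values for the constants) of what $E=mc^2$ gives for a proton sitting still:

```python
# A quick check of E = m c^2 for a proton at rest.
# Constants are rounded CODATA values.
m_proton = 1.67262e-27   # proton mass, kg
c = 2.99792458e8         # speed of light, m/s
joules_per_gev = 1.602176634e-10

rest_energy_gev = m_proton * c**2 / joules_per_gev
print(f"Proton rest energy: {rest_energy_gev:.3f} GeV")  # ~0.938 GeV
```

That 0.938 GeV is the standard figure for the proton’s rest energy, and almost all of it comes from the forces between the quarks, not from the quarks’ own masses.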

What does that mean for the protons at the LHC?

The protons at the LHC have a lot of kinetic energy: they’re going 99.9999991% of the speed of light! When they collide, all that energy has to go somewhere. Just like in a light bulb, the fast-moving particles will release their energy in another form. And while some of that energy will add to the speed of the fragments, much of it will go into the mass and energy of new particles. Some of these particles will be photons, some will be tau leptons, or Higgs bosons…pretty much anything that the protons have enough energy to create.
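To get a feel for how much energy that speed represents, here’s a small sketch translating “99.9999991% of the speed of light” into a Lorentz factor and a total energy (the 0.938 GeV proton rest energy is the standard value):

```python
import math

# How much energy does a proton at 99.9999991% of light speed carry?
beta = 0.999999991                      # v/c for an LHC proton
gamma = 1 / math.sqrt(1 - beta**2)      # relativistic Lorentz factor
rest_energy_gev = 0.938272              # proton rest energy, GeV

total_energy_gev = gamma * rest_energy_gev
print(f"gamma = {gamma:.0f}")                          # about 7450
print(f"energy = {total_energy_gev / 1000:.1f} TeV")   # ~7 TeV per proton
```

Seven TeV per proton, thousands of times the proton’s own rest energy: plenty to pay the 125 GeV “mass cost” of a Higgs boson, with change left over.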

So if you want to understand how to create new particles, you don’t need a deep understanding of the mysteries of quantum field theory. Just turn on the lights.

# How to Predict the Mass of the Higgs

Did Homer Simpson predict the mass of the Higgs boson?

No, of course not.

Apart from the usual reasons, he’s off by more than a factor of six.

If you play with the numbers, it looks like Simon Singh (the popular science writer who reported the “discovery” Homer made as a throwaway joke in a 1998 Simpsons episode) made the classic physics mistake of losing track of a factor of $2\pi$. In particular, it looks like he mistakenly thought that the Planck constant, $h$, was equal to the reduced Planck constant, $\hbar$, divided by $2\pi$, when actually it’s $\hbar$ times $2\pi$. So while Singh read Homer’s prediction as 123 GeV, surprisingly close to the actual Higgs mass of 125 GeV found in 2012, in fact Homer predicted the somewhat more embarrassing value of 775 GeV.
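As a rough sanity check on the factor-of-$2\pi$ claim, we can multiply the quoted figures (both rounded) and see that they line up:

```python
import math

# If Singh's 123 GeV reading was low by a factor of 2*pi,
# the corrected value should land near the 775 GeV figure.
singh_reading_gev = 123
corrected_gev = singh_reading_gev * 2 * math.pi
print(f"{corrected_gev:.0f} GeV")  # ~773 GeV, close to the quoted 775
```

The small mismatch with 775 GeV is just rounding: both quoted numbers were already rounded before we multiplied.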

D’Oh!

That was boring. Let’s ask a more interesting question.

Did Gordon Kane predict the mass of the Higgs boson?

I’ve talked before about how it seems impossible that string theory will ever make any testable predictions. The issue boils down to one of too many possibilities: string theory predicts different consequences for different ways that its six (or seven for M theory) extra dimensions can be curled up. Since there is an absurdly vast number of ways this can be done, anything you might want to predict (say, the mass of the electron) has an absurd number of possible values.

Gordon Kane and collaborators get around this problem by tackling a different one. Instead of trying to use string theory to predict things we already know, like the mass of the electron, they assume these things are already true. That is, they assume we live in a world with electrons that have the mass they really have, and quarks that have the mass they really have, and so on. They assume that we live in a world that obeys all of the discoveries we’ve already made, and a few we hope to make. And, they assume that this world is a consequence of string (or rather M) theory.

From that combination of assumptions, they then figure out the consequences for things that aren’t yet known. And in a 2011 paper, they predicted the Higgs mass would be between 105 and 129 GeV.

I have a lot of sympathy for this approach, because it’s essentially the same thing that non-string-theorists do. When a particle physicist wants to predict what will come out of the LHC, they don’t try to get it from first principles: they assume the world works as we have discovered, make a few mild extra assumptions, and see what new consequences come out that we haven’t observed yet. If those particle physicists can be said to make predictions from supersymmetry, or (shudder) technicolor, then Gordon Kane is certainly making predictions from string theory.

So why haven’t you heard of him? Even if you have, why, if this guy successfully predicted the mass of the Higgs boson, are people still saying that you can’t make predictions with string theory?

Trouble is, making predictions is tricky.

Part of the problem is timing. Gordon Kane’s paper went online in December of 2011. The Higgs mass was announced in July 2012, so you might think Kane got a six month head-start. But when something is announced isn’t the same as when it’s discovered. For a big experiment like the Large Hadron Collider, there’s a long road between the first time something gets noticed and the point where everyone is certain enough that they’re ready to announce it to the world. Rumors fly, and it’s not clear that Kane and his co-authors wouldn’t have heard them.

Assumptions are the other issue. Remember when I said, a couple paragraphs up, that Kane’s group assumed “that we live in a world that obeys all of the discoveries we’ve already made, and a few we hope to make”? That last part is what makes things tricky. There were a few extra assumptions Kane made, beyond those needed to reproduce the world we know. For many people, some of these extra assumptions are suspicious. They worry that the assumptions might have been chosen, not just because they made sense, but because they happened to give the right (rumored) mass of the Higgs.

If you want to predict something in physics, it’s not just a matter of getting in ahead of the announcement with the right number. For a clear prediction, you need to be early enough that the experiments haven’t yet even seen hints of what you’re looking for. Even then, you need your theory to be suitably generic, so that it’s clear that your prediction is really the result of the math and not of your choices. You can trade off aspects of this: more accuracy for a less generic theory, better timing for looser predictions. Get the formula right, and the world will laud you for your prediction. Wrong, and you’re Homer Simpson. Somewhere in between, though, and you end up in that tricky, tricky grey area.

Like Gordon Kane.