Category Archives: General QFT

Want to Make Something New? Just Turn on the Lights.

Isn’t it weird that you can collide two protons, and get something else?

It wouldn’t be so weird if you collided two protons, and out popped a quark. After all, protons are made of quarks. But how, if you collide two protons together, do you get a tau, or the Higgs boson: things that not only aren’t “part of” protons, but are more massive than a proton by themselves?

It seems weird…but in a way, it’s not. When a particle releases another particle that wasn’t inside it to begin with, it’s actually not doing anything more special than an everyday light bulb.

Eureka!

How does a light bulb work?

You probably know the basics: when an electric current flows through the bulb, electrons moving through the filament heat it up until it glows, releasing light.

That probably seems perfectly ordinary. But ask yourself for a moment: where did the light come from?

Light is made up of photons, elementary particles in their own right. When you flip a light switch, where do the photons come from? Were they stored in the light bulb?

Silly question, right? You don’t need to “store” light in a light bulb: light bulbs transform one type of energy (electrical, or the movement of electrons) into another type of energy (light, or photons).

Here’s the thing, though: mass is just another type of energy.

I like to describe mass as “energy we haven’t met yet”. Einstein’s equation, E=mc^2, relates a particle’s mass to its “rest energy”, the energy it would have if it stopped moving around and sat still. Even when a particle seems to be sitting still from the outside, there’s still a lot going on, though. “Composite” particles like protons have powerful forces between their internal quarks, while particles like electrons interact with the Higgs field. These processes give the particle energy, even when it’s not moving, so from our perspective on the outside they’re giving the particle mass.

What does that mean for the protons at the LHC?

The protons at the LHC have a lot of kinetic energy: they’re going at 99.9999991% of the speed of light! When they collide, all that energy has to go somewhere. Just like in a light bulb, the fast-moving particles will release their energy in another form. And while some of that energy will add to the speed of the fragments, much of it will go into the mass and energy of new particles. Some of these particles will be photons, some will be tau leptons, or Higgs bosons…pretty much anything that the protons have enough energy to create.
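
To get a sense of the numbers, here’s a quick back-of-the-envelope check (a rough sketch in Python; the speed is the figure quoted above, and the masses are rounded):

```python
import math

beta = 0.999999991  # proton speed as a fraction of the speed of light

# Lorentz factor: total energy = gamma * rest energy
gamma = 1 / math.sqrt(1 - beta**2)

proton_rest_energy = 0.938  # proton rest energy in GeV (approximate)
higgs_mass = 125.0          # Higgs rest energy in GeV (approximate)

energy_per_proton = gamma * proton_rest_energy

print(f"gamma: {gamma:.0f}")                                   # about 7450
print(f"energy per proton: {energy_per_proton / 1000:.1f} TeV")  # about 7 TeV
print(f"Higgs masses per collision: {2 * energy_per_proton / higgs_mass:.0f}")
```

Each proton carries roughly 7500 times its own rest energy, so a single collision has, in principle, enough energy to pay the mass cost of over a hundred Higgs bosons.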

So if you want to understand how to create new particles, you don’t need a deep understanding of the mysteries of quantum field theory. Just turn on the lights.

How to Predict the Mass of the Higgs

Did Homer Simpson predict the mass of the Higgs boson?

No, of course not.

Apart from the usual reasons, he’s off by more than a factor of six.

If you play with the numbers, it looks like Simon Singh (the popular science writer who reported the “discovery” Homer made as a throwaway joke in a 1998 Simpsons episode) made the classic physics mistake of losing track of a factor of 2\pi. In particular, it looks like he mistakenly thought that the Planck constant, h, was equal to the reduced Planck constant, \hbar, divided by 2\pi, when actually it’s \hbar times 2\pi. So while Singh read Homer’s prediction as 123 GeV, surprisingly close to the actual Higgs mass of 125 GeV found in 2012, in fact Homer predicted the somewhat more embarrassing value of 775 GeV.

D’Oh!
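
If you want to play with the numbers yourself, here’s a rough check in Python. It uses the blackboard formula as it’s usually transcribed, M = \pi (1/137)^8 \sqrt{hc/G} (that transcription is my assumption here, since the details aren’t spelled out above), with approximate values for the constants:

```python
import math

h = 6.626e-34              # Planck constant, in J*s
hbar = h / (2 * math.pi)   # reduced Planck constant: hbar = h / 2*pi
c = 2.998e8                # speed of light, m/s
G = 6.674e-11              # Newton's constant, m^3 / (kg s^2)
KG_TO_GEV = 5.61e26        # 1 kg expressed in GeV/c^2

def homer_higgs_mass(planck_h):
    """Homer's blackboard formula: M = pi * (1/137)^8 * sqrt(h*c/G)."""
    mass_kg = math.pi * (1 / 137)**8 * math.sqrt(planck_h * c / G)
    return mass_kg * KG_TO_GEV

singhs_h = hbar / (2 * math.pi)  # the apparent mistake: h = hbar / 2*pi

print(f"with the correct h:  {homer_higgs_mass(h):.0f} GeV")        # ~775 GeV
print(f"with the mistaken h: {homer_higgs_mass(singhs_h):.0f} GeV") # ~123 GeV
```

Since h sits under a square root, confusing h = \hbar/2\pi with h = 2\pi\hbar shifts the answer by exactly one factor of 2\pi: 123 GeV versus 775 GeV.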

That was boring. Let’s ask a more interesting question.

Did Gordon Kane predict the mass of the Higgs boson?

I’ve talked before about how it seems impossible that string theory will ever make any testable predictions. The issue boils down to one of too many possibilities: string theory predicts different consequences for different ways that its six (or seven for M theory) extra dimensions can be curled up. Since there is an absurdly vast number of ways this can be done, anything you might want to predict (say, the mass of the electron) has an absurd number of possible values.

Gordon Kane and collaborators get around this problem by tackling a different one. Instead of trying to use string theory to predict things we already know, like the mass of the electron, they assume these things are already true. That is, they assume we live in a world with electrons that have the mass they really have, and quarks that have the mass they really have, and so on. They assume that we live in a world that obeys all of the discoveries we’ve already made, and a few we hope to make. And, they assume that this world is a consequence of string (or rather M) theory.

From that combination of assumptions, they then figure out the consequences for things that aren’t yet known. And in a 2011 paper, they predicted the Higgs mass would be between 105 and 129 GeV.

I have a lot of sympathy for this approach, because it’s essentially the same thing that non-string-theorists do. When a particle physicist wants to predict what will come out of the LHC, they don’t try to get it from first principles: they assume the world works as we have discovered, make a few mild extra assumptions, and see what new consequences come out that we haven’t observed yet. If those particle physicists can be said to make predictions from supersymmetry, or (shudder) technicolor, then Gordon Kane is certainly making predictions from string theory.

So why haven’t you heard of him? Even if you have, why, if this guy successfully predicted the mass of the Higgs boson, are people still saying that you can’t make predictions with string theory?

Trouble is, making predictions is tricky.

Part of the problem is timing. Gordon Kane’s paper went online in December of 2011. The Higgs mass was announced in July 2012, so you might think Kane got a six month head-start. But when something is announced isn’t the same as when it’s discovered. For a big experiment like the Large Hadron Collider, there’s a long road between the first time something gets noticed and the point where everyone is certain enough that they’re ready to announce it to the world. Rumors fly, and it’s not clear that Kane and his co-authors wouldn’t have heard them.

Assumptions are the other issue. Remember when I said, a couple paragraphs up, that Kane’s group assumed “that we live in a world that obeys all of the discoveries we’ve already made, and a few we hope to make”? That last part is what makes things tricky. There were a few extra assumptions Kane made, beyond those needed to reproduce the world we know. For many people, some of these extra assumptions are suspicious. They worry that the assumptions might have been chosen, not just because they made sense, but because they happened to give the right (rumored) mass of the Higgs.

If you want to predict something in physics, it’s not just a matter of getting in ahead of the announcement with the right number. For a clear prediction, you need to be early enough that the experiments haven’t yet even seen hints of what you’re looking for. Even then, you need your theory to be suitably generic, so that it’s clear that your prediction is really the result of the math and not of your choices. You can trade off aspects of this: more accuracy for a less generic theory, better timing for looser predictions. Get the formula right, and the world will laud you for your prediction. Wrong, and you’re Homer Simpson. Somewhere in between, though, and you end up in that tricky, tricky grey area.

Like Gordon Kane.

Living in a Broken World: Supersymmetry We Can Test

I’ve talked before about supersymmetry. Supersymmetry relates particles with different spins, linking spin 1 force-carrying particles like photons and gluons to spin 1/2 particles similar to electrons, and spin 1/2 particles in turn to spin 0 “scalar” particles, the same general type as the Higgs. I emphasized there that, if two particles are related by supersymmetry, they will have some important traits in common: the same mass and the same interactions.

That’s true for the theories I like to work with. In particular, it’s true for N=4 super Yang-Mills. Adding supersymmetry allows us to tinker with neater, cleaner theories, gaining mastery over rice before we start experimenting with the more intricate “sushi” of theories of the real world.

However, it should be pretty clear that we don’t live in a world with this sort of supersymmetry. A quick look at the Standard Model indicates that no two known particles interact in precisely the same way. When people try to test supersymmetry in the real world, they’re not looking for this sort of thing. Rather, they’re looking for broken supersymmetry.

In the past, I’ve described broken supersymmetry as like a broken mirror: the two sides are no longer the same, but you can still predict one side’s behavior from the other. When supersymmetry is broken, related particles still have the same interactions. Now, though, they can have different masses.

The simplest version of supersymmetry, N=1, gives one partner to each particle. Since no two particles in the Standard Model can be partners of each other, if we have broken N=1 supersymmetry in the real world then we need a new particle for each existing one…and each one of those particles has a potentially unknown, different mass. And if that sounds rather complicated…

Baroque enough to make Rubens happy.

That, right there, is the Minimal Supersymmetric Standard Model, the simplest thing you can propose if you want a world with broken supersymmetry. If you look carefully, you’ll notice that it’s actually a bit more complicated than just one partner for each known particle: there are a few extra Higgs fields as well!

If we’re hoping to explain anything in a simpler way, we seem to have royally screwed up. Luckily, though, the situation is not quite as ridiculous as it appears. Let’s go back to the mirror analogy.

If you look into a broken mirror, you can still have a pretty good idea of what you’ll see…but in order to do so, you have to know how the mirror is broken.

Similarly, supersymmetry can be broken in different ways, by different supersymmetry-breaking mechanisms.

The general idea is to start with a theory in which supersymmetry is precisely true, and all supersymmetric partners have the same mass. Then, consider some Higgs-like field. Like the Higgs, it can take some constant value throughout all of space, forming a background like the color of a piece of construction paper. While the rules that govern this field would respect supersymmetry, any specific value it takes wouldn’t. Instead, it would be biased: the spin 0, Higgs-like field could take on a constant value, but its spin 1/2 supersymmetric partner couldn’t. (If you want to know why, read my post on the Higgs linked above.)

Once that field takes on a specific value, supersymmetry is broken. That breaking then has to be communicated to the rest of the theory, via interactions between different particles. There are several different ways this can work: perhaps the interactions come from gravity, or are the same strength as gravity. Maybe instead they come from a new fundamental force, similar to the strong nuclear force but harder to discover. They could even come as byproducts of the breaking of other symmetries.

Each one of these options has different consequences, and leads to different predictions for the masses of undiscovered partner particles. They tend to have different numbers of extra parameters (for example, if gravity-based interactions are involved there are four new parameters, and an extra sign, that must be fixed). None of them have an entire Standard Model’s worth of new parameters…but all of them have at least a few extra.
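
To make the gravity-mediated case concrete, those four parameters and the extra sign are conventionally labeled as below (a minimal sketch using the standard mSUGRA/CMSSM naming; the numerical values are placeholders for illustration, not a fit to any data):

```python
# A gravity-mediated (mSUGRA/CMSSM) supersymmetry-breaking point:
# four new parameters plus one extra sign, as mentioned above.
msugra_point = {
    "m_0":      1000.0,  # common mass for the spin 0 partners, in GeV (placeholder)
    "m_1/2":     800.0,  # common mass for the spin 1/2 partners, in GeV (placeholder)
    "A_0":         0.0,  # common strength of certain three-particle interactions, in GeV
    "tan_beta":   10.0,  # ratio of the values taken by the two Higgs fields
    "sign_mu":    +1,    # the extra sign: the sign of the Higgs mixing parameter
}
```

Fix a point like this, and the masses of all the partner particles follow from it.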

(Brief aside: I’ve been talking about the Minimal Supersymmetric Standard Model, but these days people have largely given up on finding evidence for it, and are exploring even more complicated setups like the Next-to-Minimal Supersymmetric Standard Model.)

If we’re introducing extra parameters without explaining existing ones, what’s the point of supersymmetry?

Last week, I talked about the problem of fine-tuning. I explained that when physicists are worried about fine-tuning, what we’re really worried about is whether the sorts of ultimate (low number of parameters) theories that we expect to hold could give rise to the apparently fine-tuned world we live in. In that post, I was a little misleading about supersymmetry’s role in that problem.

The goal of introducing (broken) supersymmetry is to solve a particular set of fine-tuning problems, mostly one specific one involving the Higgs. This doesn’t mean that supersymmetry is the sort of “ultimate” theory we’re looking for; rather, supersymmetry is one of the few ways we know to bridge the gap between “ultimate” theories and a fine-tuned real world.

To explain it in terms of the language of the last post, it’s hard to find one of these “ultimate” theories that gives rise to a fine-tuned world. What’s quite a bit easier, though, is finding one of these “ultimate” theories that gives rise to a supersymmetric world, which in turn gives rise to a fine-tuned real world.

In practice, these are the sorts of theories that get tested. Very rarely are people able to propose testable versions of the more “ultimate” theories. Instead, one generally finds intermediate theories, theories that can potentially come from “ultimate” theories, and builds general versions of those that can be tested.

These intermediate theories come in multiple levels. Some physicists look for the most general version, theories like the Minimal Supersymmetric Standard Model with a whole host of new parameters. Others look for more specific versions, choices of supersymmetry-breaking mechanisms. Still others try to tie it further up, getting close to candidate “ultimate” theories like M theory (though in practice they generally make a few choices that put them somewhere in between).

The hope is that with a lot of people covering different angles, we’ll be able to make the best use of any new evidence that comes in. If “something” is out there, there are still a lot of choices for what that something could be, and it’s the job of physicists to try to understand whatever ends up being found.

Not bad for working in a broken world, huh?

The Real Problem with Fine-Tuning

You’ve probably heard it said that the universe is fine-tuned.

The Standard Model, our current best understanding of the rules that govern particle physics, is full of lots of fiddly adjustable parameters. The masses of fundamental particles and the strengths of the fundamental forces aren’t the sort of thing we can predict from first principles: we need to go out, do experiments, and find out what they are. And you’ve probably heard it argued that, if these fiddly parameters were even a little different from what they are, life as we know it could not exist.

That’s fine-tuning…or at least, that’s what many people mean when they talk about fine-tuning. It’s not exactly what physicists mean though. The thing is, almost nobody who studies particle physics thinks the parameters of the Standard Model are the full story. In fact, any theory with adjustable parameters probably isn’t the full story.

It all goes back to a point I made a while back: nature abhors a constant. The whole purpose of physics is to explain the natural world, and we have a long history of taking things that look arbitrary and linking them together, showing that reality has fewer parameters than we had thought. This is something physics is very good at. (To indulge in a little extremely amateurish philosophy, it seems to me that this is simply an inherent part of how we understand the world: if we encounter a parameter, we will eventually come up with an explanation for it.)

Moreover, at this point we have a rough idea of what this sort of explanation should look like. We have experience playing with theories that don’t have any adjustable parameters, or that only have a few: M theory is an example, but there are also more traditional quantum field theories that fill this role with no mention of string theory. From our exploration of these theories, we know that they can serve as the kind of explanation we need: in a world governed by one of these theories, people unaware of the full theory would observe what would look at first glance like a world with many fiddly adjustable parameters, parameters that would eventually turn out to be consequences of the broader theory.

So for a physicist, fine-tuning is not about those fiddly parameters themselves. Rather, it’s about the theory that predicts them. Because we have experience playing with these sorts of theories, we know roughly the sorts of worlds they create. What we know is that, while sometimes they give rise to worlds that appear fine-tuned, they tend to only do so in particular ways. Setups that give rise to fine-tuning have consequences: supersymmetry, for example, can give rise to an apparently fine-tuned universe but has to have “partner” particles that show up in powerful enough colliders. In general, a theory that gives rise to apparent fine-tuning will have some detectable consequences.

That’s where physicists start to get worried. So far, we haven’t seen any of these detectable consequences, and it’s getting to the point where we could have, had they been the sort many people expected.

Physicists are worried about fine-tuning, but not because it makes the universe “unlikely”. They’re worried because the more finely-tuned our universe appears, the harder it is to find an explanation for it in terms of the sorts of theories we’re used to working with, and the less likely it becomes that someone will discover a good explanation any time soon. We’re quite confident that there should be some explanation, hundreds of years of scientific progress strongly suggest that to be the case. But the nature of that explanation is becoming increasingly opaque.

Why You Should Be Skeptical about Faster-than-Light Neutrinos

While I do love science, I don’t always love IFL Science. They can be good at drumming up enthusiasm, but they can also be ridiculously gullible. Case in point: last week, IFL Science ran a piece on a recent paper purporting to give evidence for faster-than-light particles.

Faster than light! Sounds cool, right? Here’s why you should be skeptical:

If a science article looks dubious, you should check out the source. In this case, IFL Science links to an article on the preprint server arXiv.

arXiv is a freely accessible website where physicists and mathematicians post their articles. The site has multiple categories, corresponding to different fields. It’s got categories for essentially any type of physics you’d care to include, with the option to cross-list if you think people from multiple areas might find your work interesting.

So which category is this paper in? Particle physics? Astrophysics?

General Physics, actually.

General Physics is arXiv’s catch-all category. Some of it really is general, and can’t be put into any more specific place. But most of it, including this, falls into another category: things arXiv’s moderators think are fishy.

arXiv isn’t a journal. If you follow some basic criteria, it won’t reject your articles. Instead, dubious articles are put into General Physics, to signify that they don’t seem to belong with the other scholarship in the established categories. General Physics is a grab-bag of weird ideas and crackpot theories, a mix of fringe physicists and overenthusiastic amateurs. There probably are legitimate papers in there too…but for every paper in there, you can guarantee that some experienced researcher found it suspicious enough to send into exile.

Even if you don’t trust the moderators of arXiv, there are other reasons to be wary of faster-than-light particles.

According to Einstein’s theory of relativity, massless particles travel at the speed of light, while massive particles always travel slower. To travel faster than the speed of light, you need to have a very unusual situation: a particle whose mass is an imaginary number.
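
To see where the imaginary number comes from, it helps to spell out the formulas (standard special relativity, written here in the same spirit as E=mc^2 above). A particle’s energy, momentum, and mass are related by E^2 = p^2c^2 + m^2c^4, and its speed is v = pc^2/E. For v to be bigger than c, the momentum term pc has to be bigger than the energy E. But then the first formula forces m^2c^4 = E^2 - p^2c^2 to be negative: the mass squared is negative, which means the mass itself is an imaginary number.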

Particles like that are called tachyons, and they’re a staple of science fiction. While there was a time when they were a serious subject of physics speculation, nowadays the general view is that tachyons are a sign we’re making bad assumptions.

Assuming that someone is a republic serial villain is a good example.

Why is that? It has to do with the nature of mass.

In quantum field theory, what we observe as particles arise as ripples in quantum fields, extending across space and time. The harder it is to make the field ripple, the higher the particle’s mass.

A tachyon has imaginary mass. This means that it isn’t hard to make the field ripple at all. In fact, exactly the opposite happens: it’s easier to ripple than to stay still! Any ripple, no matter how small, will keep growing until it’s not just a ripple, but a new default state for the field. Only when it becomes hard to change again will the changes stop. If it’s hard to change, though, then the particle has a normal, non-imaginary mass, and is no longer a tachyon!

Thus, the modern understanding is that if a theory has tachyons in it, it’s because we’re assuming that one of the quantum fields has the wrong default state. Switching to the correct default gets rid of the tachyons.
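
A standard toy example makes this concrete (the details here are the textbook case, not anything specific to this paper). Give a scalar field \phi the potential energy V(\phi) = m^2\phi^2/2 + \lambda\phi^4/4, with m^2 negative and \lambda positive. Around \phi = 0, the field is a tachyon: ripples grow instead of oscillating. The true default states sit at \phi = \pm\sqrt{-m^2/\lambda}, and a ripple around one of those has mass squared -2m^2, which is positive. Same theory, correct default state, no tachyon.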

There are deeper problems with the idea proposed in this paper. Normally, the only types of fields that can have tachyons are scalars, fields that can be defined by a single number at each point, sort of like a temperature. The particles this article is describing aren’t scalars, though, they’re fermions, the type of particle that includes everyday matter like electrons. Those sorts of particles can’t be tachyons at all without breaking some fairly important laws of physics. (For a technical explanation of why this is, Lubos Motl’s reply to the post here is pretty good.)

Of course, this paper’s author knows all this. He’s well aware that he’s suggesting bending some fairly fundamental laws, and he seems to think there’s room for it. But that, really, is the issue here: there’s room for it. The paper isn’t, as IFL Science seems to believe, six pieces of evidence for faster-than-light particles. It’s six measurements that, if you twist them around and squint and pick exactly the right model, have room for faster-than-light particles. And that’s…probably not worth an article.

Misleading Headlines and Tacky Physics, Oh My!

It’s been making the rounds on the blogosphere (despite having come out three months ago). It’s probably shown up on your Facebook feed. It’s the news that (apparently) one of the biggest discoveries of recent years may have been premature. It’s….

The Huffington Post writing a misleading headline to drum up clicks!

The article linked above is titled “Scientists Raise Doubts About Higgs Boson Discovery, Say It Could Be Another Particle”. And while that is indeed technically all true, it’s more than a little misleading.

When the various teams at the Large Hadron Collider announced their discovery of the Higgs, they didn’t say it was exactly the Higgs predicted by the Standard Model. In fact, it probably shouldn’t be: most of the options for extending the Standard Model, like supersymmetry, predict a Higgs boson with slightly different properties. Until the Higgs is measured more precisely, these slightly different versions won’t be ruled out.

Of course, “not ruled out” is not exactly newsworthy, which is the main problem with this article. The Huffington Post quotes a paper that argues, not that there is new evidence for an alternative to the Higgs, but simply that one particular alternative that the authors like hasn’t been ruled out yet.

Also, it’s probably the tackiest alternative out there.

The theory in question is called Technicolor, and if you’re imagining a certain coat then you may have an idea of how tacky we’re talking.

Any Higgs will do…

To describe technicolor, let’s take a brief aside and talk about the colors of quarks.

Rather than having one type of charge going from plus to minus like Electromagnetism, the Strong Nuclear Force has three types of charge, called red, green, and blue. Quarks are charged under the strong force, and can be red, green, or blue, while the antimatter partners of quarks have the equivalent of negative charges, anti-red, anti-green, and anti-blue. The strong force binds quarks together into protons and neutrons. The carriers of the strong force are also charged under it, which means that not only does the force bind quarks together, it also binds itself together, so that it only acts at very very short range.

In combination, these two facts have one rather surprising consequence. A proton contains three quarks, but a proton’s mass is over a hundred times the total mass of three quarks. The same is true of neutrons.

The reason why is that most of the mass isn’t coming from the quarks, it’s coming from the strength of the strong force. Mass, contrary to what you might think, isn’t fundamental “stuff”. It’s just a handy way of talking about energy that isn’t due to something we can easily see. Particles have energy because they move, but they also have energy due to internal interactions, as well as interactions with other fields like the Higgs field. While a lone quark’s mass is due to its interaction with the Higgs field, the quarks inside a proton are also interacting with each other, gaining enormous amounts of energy from the strong force trapped within. That energy, largely invisible from an outside view, contributes most of what we see as the mass of the proton.
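
You can check the “over a hundred times” claim with rough textbook numbers (a quick sketch in Python; the quark masses are approximate):

```python
# Approximate masses in MeV.
m_up, m_down = 2.2, 4.7  # quark masses, from the Higgs field alone
m_proton = 938.3         # the measured proton mass

quark_total = 2 * m_up + m_down  # a proton is two up quarks and one down (uud)
print(f"quarks alone: {quark_total:.1f} MeV")       # about 9 MeV
print(f"ratio: {m_proton / quark_total:.0f}x")      # about 100x
```

The missing ~99% is strong-force energy trapped inside the proton.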

Technicolor asks the following: what if it’s not just protons and neutrons? What if the mass of everything, quarks and electrons and the W and Z bosons, was due not truly to the Higgs, but to another force, like the strong force but even stronger? The Higgs we think we saw at the LHC would not be fundamental, but merely a composite, made up of two “techni-quarks” with “technicolor” charges. [Edited to remove confusion with Preon Theory]

It’s…an idea. But it’s never been a very popular one.

Part of the problem is that the simpler versions of technicolor have been ruled out, so theorists are having to invoke increasingly baroque models to try to make it work. But that, to some extent, is also true of supersymmetry.

A bigger problem is that technicolor is just kind of…tacky.

Technicolor doesn’t say anything deep about the way the universe works. It doesn’t propose new [types of] symmetries, and it doesn’t say anything about what happens at the very highest energies. It’s not really tied in to any of the other lines of speculation in physics, it doesn’t lead to a lot of discussion between researchers. It doesn’t require an end, a fundamental lowest level with truly fundamental particles. You could potentially keep adding new levels of technicolor, new things made up of other things made up of other things, ad infinitum.

And the fleas that bite ’em, presumably.

[Note: to clarify, technicolor theories don’t actually keep going like this, their extra particles don’t require another layer of technicolor to gain their masses. That would be an actual problem with the concept itself, not a reason it’s tacky. It’s tacky because, in a world where most physicists feel like we’ve really gotten down to the fundamental particles, adding new composite objects seems baroque and unnecessary, like adding epicycles. Fleas upon fleas as it were.]

In a word, it’s not sexy.

Does that mean it’s wrong? No, of course not. As the paper linked by Huffington Post points out, technicolor hasn’t been ruled out yet.

Does that mean I think people shouldn’t study it? Again, no. If you really find technicolor meaningful and interesting, go for it! Maybe you’ll be the kick it needs to prove itself!

But good grief, until you manage that, please don’t spread your tacky, un-sexy theory all over Facebook. A theory like technicolor should get press when it’s got a good reason, and “we haven’t been ruled out yet” is never, ever, a good reason.

 

[Edit: Esben on Facebook is more well-informed about technicolor than I am, and pointed out some issues with this post. Some of them are due to me conflating technicolor with another old and tacky theory, while some were places where my description was misleading. Corrections in bold.]

Why I Can’t Explain Ghosts: Or, a Review of a Popular Physics Piece

Since today is Halloween, I really wanted to write a post talking about the spookiest particles in physics, ghosts.

And their superpartners, ghost riders.

The problem is, in order to explain ghosts I’d have to explain something called gauge symmetry. And gauge symmetry is quite possibly the hardest topic in modern physics to explain to a general audience.

Deep down, gauge symmetry is the idea that irrelevant extra parts of how we represent things in physics should stay irrelevant. While that sounds obvious, it’s far from obvious how you can go from that to predicting new particles like the Higgs boson.

Explaining this is tough! Tough enough that I haven’t thought of a good way to do it yet.

Which is why I was fairly stoked when a fellow postdoc pointed out a recent popular physics article by Juan Maldacena, explaining gauge symmetry.

Juan Maldacena is a Big Deal. He’s the guy who figured out the AdS/CFT correspondence, showing that string theory (in a particular hyperbola-shaped space called AdS) and everybody’s favorite N=4 super Yang-Mills theory are secretly the same, a discovery which led to a Big Blue Dot on Paperscape. So naturally, I was excited to see what he had to say.

Big Blue Dot pictured here.

The core analogy he makes is with currencies in different countries. Just like gauge symmetry, currencies aren’t measuring anything “real”: they’re arbitrary conventions put in place because we don’t have a good way of just buying things based on pure “value”. However, also like gauge symmetry, they can have real-life consequences, as different currency exchange rates can lead to currency speculation, letting some people make money and others lose money. In Maldacena’s analogy the Higgs field works like a precious metal, making differences in exchange rates manifest as different prices of precious metals in different countries.

It’s a solid analogy, and one that is quite close to the real mathematics of the problem (as the paper’s Appendix goes into detail to show). However, I have some reservations, both about the paper as a whole and about the core analogy.

In general, Maldacena doesn’t do a very good job of writing something publicly accessible. There’s a lot of stilted, academic language, and a lot of use of “we” to do things other than lead the reader through a thought experiment. There’s also a sprinkling of terms that I don’t think the average person will understand; for example, I doubt the average college student knows flux as anything other than a zany card game.

Regarding the analogy itself, I think Maldacena has fallen into the common physicist trap of making an analogy that explains things really well…if you already know the math.

This is a problem I see pretty frequently. I keep picking on this article, and I apologize for doing so, but it’s got a great example of this when it describes supersymmetry as involving “a whole new class of number that can be thought of as the square roots of zero”. That’s a really great analogy…if you’re a student learning about the math behind supersymmetry. If you’re not, it doesn’t tell you anything about what supersymmetry does, or how it works, or why anyone might study it. It relates something unfamiliar to something unfamiliar.

I’m worried that Maldacena is doing that in this paper. His setup is mathematically rigorous, but doesn’t say much about the why of things: why do physicists use something like this economic model to understand these forces? How does this lead to what we observe around us in the real world? What’s actually going on, physically? What do particles have to do with dimensionless constants? (If you’re curious about that last one, I like to think I have a good explanation here.)

It’s not that Maldacena ignores these questions, he definitely puts effort into answering them. The problem is that his analogy itself doesn’t really address them. They’re the trickiest part, the part that people need help picturing and framing, the part that would benefit the most from a good analogy. Instead, the core imagery of the piece is wasted on details that don’t really do much for a non-expert.

Maybe I’m wrong about this, and I welcome comments from non-physicists. Do you feel like Maldacena’s account gives you a satisfying idea of what gauge symmetry is?

What’s an Amplitude? Just about everything.

I am an Amplitudeologist. In other words, I study scattering amplitudes. I’ve explained bits and pieces of what scattering amplitudes are in other posts, but I ought to give a short definition here so everyone’s on the same page:

A scattering amplitude is the formula used to calculate the probability that some collection of particles will “scatter”, emerging as some (possibly different) collection of particles.

Note that I’m using some weasel words here. The scattering amplitude is not a probability itself, but “the formula used to calculate the probability”. For those familiar with the mathematics of waves, the scattering amplitude gives the amplitude of a “probability wave” that must be squared to get the probability. (Those familiar with waves might also ask: “If this is the amplitude, what about the period?” The truth is that because scattering amplitudes are calculated using complex numbers, what we call the “amplitude” also contains information about the wave’s “period”. It may seem like an inconsistent way to name things from the perspective of a beginning student, but it is actually consistent with the terminology in a large chunk of physics.)
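
In code, the relationship between the two is a one-liner (a toy illustration with a made-up number, not a real amplitude):

```python
import cmath

amplitude = 0.3 + 0.4j  # one complex number holds both pieces of information

probability = abs(amplitude) ** 2  # square the amplitude to get the probability: 0.25
phase = cmath.phase(amplitude)     # the "period"-like information lives in the phase

print(f"probability: {probability}, phase: {phase:.3f} radians")
```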

In some of the simplest scattering amplitudes particles literally “scatter”, with two particles “colliding” and emerging traveling in different directions.

A scattering amplitude can also describe a more complicated situation, though. At particle colliders like the Large Hadron Collider, two particles (a pair of protons for the LHC) are accelerated fast enough that when they collide they release a whole slew of new particles. Since it still fits the “some particles go in, some particles go out” template, this is still described by a scattering amplitude.

It goes even further than that, though, because “some particles” could also just be “one particle”. If you’re dealing with something unstable (the particle equivalent of radioactive, essentially) then one particle can decay into two or more particles. There’s a whole slew of questions that require that sort of calculation. For example, if unstable particles were produced in the early universe, how many of them would be left around today? If dark matter is unstable (and some possible candidates are), when it decays it might release particles we could detect. In general, this sort of scattering amplitude is often of interest to astrophysicists when they happen to get involved in particle physics.

You can even use scattering amplitudes to describe situations that, at first glance, don’t sound like collisions of particles at all. If you want to find the effect of a magnetic field on an electron to high accuracy, the calculation also involves a scattering amplitude. A magnetic field can be thought of in terms of photons, particles of light, because light is a vibration in the electromagnetic field. This means that the effect of a magnetic field on an electron can be calculated by “scattering” an electron and a photon.

If this looks familiar, check the handbook section.

In fact, doing the calculation in this way leads to what is possibly the most accurately predicted number in all of science.
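
The first, one-loop piece of that calculation is famous enough to quote outright: Schwinger’s correction to the electron’s magnetic moment is \alpha/2\pi, where \alpha is the fine-structure constant. A quick check (with approximate constants):

```python
import math

alpha = 1 / 137.036  # fine-structure constant (approximate)

# Schwinger's one-loop result for the electron's anomalous magnetic moment:
# a_e = alpha / (2*pi) + higher-loop corrections.
a_electron = alpha / (2 * math.pi)

print(f"one loop: {a_electron:.6f}")  # ~0.001161, vs. ~0.001160 measured
```

One loop already agrees with experiment to about a tenth of a percent; adding the higher loops pushes the agreement to roughly a part per billion.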

Scattering amplitudes show up all over the place, from particle physics at the Large Hadron Collider to astrophysics to delicate experiments on electrons in magnetic fields. That said, there are plenty of things people calculate in theoretical physics that don’t use scattering amplitudes, either because they involve questions that are difficult to answer from the scattering amplitude point of view, or because they invoke different formulas altogether. Still, scattering amplitudes are central to the work of a large number of physicists. They really do cover just about everything.

“China” plans super collider

When I saw the headline, I was excited.

“China plans super collider” says Nature News.

There’s been a lot of worry about what may happen if the Large Hadron Collider finishes its run without discovering anything truly new. If that happens, finding new particles might require a much bigger machine…and since even that machine has no guarantee of finding anything at all, world governments may be understandably reluctant to fund it.

As such, several prominent people in the physics community have put their hopes on China. The country’s somewhat autocratic nature means that getting funding for a collider is a matter of convincing a few powerful people, not a whole fractious gaggle of legislators. It’s a cynical choice, but if it keeps the field alive so be it.

If China was planning a super collider, then, that would be great news!

Too bad it’s not.

Buried eight paragraphs into Nature’s article we find the following:

The Chinese government is yet to agree on any funding, but growing economic confidence in the country has led its scientists to believe that the political climate is ripe, says Nick Walker, an accelerator physicist at DESY, Germany’s high-energy physics laboratory in Hamburg. Although some technical issues remain, such as keeping down the power demands of an energy-hungry ring, none are major, he adds.

The Chinese government is yet to agree on any funding. China, if by China you mean the Chinese government, is not planning a super collider.

So who is?

Someone must have drawn these diagrams, after all.

Reading the article, the most obvious answer is Beijing’s Institute of High Energy Physics (IHEP). While this is true, the article leaves out any mention of a more recently founded site, the Center for Future High Energy Physics (CFHEP).

This is a bit odd, given that CFHEP’s whole purpose is to compose a plan for the next generation of colliders, and persuade China’s government to implement it. They were founded, with heavy involvement from non-Chinese physicists including their director Nima Arkani-Hamed, with that express purpose in mind. And since several of the quotes in the article come from Yifang Wang, director of IHEP and member of the advisory board of CFHEP, it’s highly unlikely that this isn’t CFHEP’s plan.

So what’s going on here? On one level, it could be a problem on the journalists’ side. News editors love to rewrite headlines to be more misleading and click-bait-y, and claiming that China is definitely going to build a collider draws much more attention than pointing out the plans of a specialized think tank. I hope that it’s just something like that, and not the sort of casual racism that likes to think of China as a single united will. Similarly, I hope that the journalists involved just didn’t dig deep enough to hear about CFHEP, or left it out to simplify things, because there is a somewhat darker alternative.

CFHEP’s goal is to convince the Chinese government to build a collider, and what better way to do that than to present them with a fait accompli? If the public thinks that this is “China’s” plan, that wheels are already in motion, wouldn’t it benefit the Chinese government to play along? Throw in a few sweet words about the merits of international collaboration (a big part of the strategy of CFHEP is to bring international scientists to China to show the sort of community a collider could attract) and you’ve got a winning argument, or at least enough plausibility to get US and European funding agencies in a competitive mood.

This…is probably more cynical than what’s actually going on. For one, I don’t even know whether this sort of tactic would work.

Do these guys look like devious manipulators?

Indeed, it might just be a journalistic omission, part of a wider tendency of science journalists to focus on big projects and ignore the interesting part, the nitty-gritty things that people do to push them forward. It’s a shame, because people are what drive the news forward, and as long as science is viewed as something apart from real human beings people are going to continue to mistrust and misunderstand it.

Either way, one thing is clear. The public deserves to hear a lot more about CFHEP.

Feeling Perturbed?

You might think of physics as the science of certainties and exact statements: action and reaction, F=ma, and all that. However, most calculations in physics aren’t exact, they’re approximations. This is especially true today, but it’s been true almost since the dawn of physics. In particular, approximations are performed via a method known as perturbation theory.

Perturbation theory is a trick used to solve problems that, for one reason or another, are too difficult to solve all in one go. It works by solving a simpler problem, then perturbing that solution, adjusting it closer to the target.

To give an analogy: let’s say you want to find the area of a circle, but you only know how to draw straight lines. You could start by drawing a square: it’s easy to find the area, and you get close to the area of the circle. But you’re still a long ways away from the total you’re aiming for. So you add more straight lines, getting an octagon. Now it’s harder to find the area, but you’re closer to the full circle. You can keep adding lines, each step getting closer and closer.

And so on.
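
The circle picture is easy to make quantitative (a quick sketch in Python, using the standard formula for the area of a regular polygon inscribed in a circle):

```python
import math

def inscribed_polygon_area(n, r=1.0):
    """Area of a regular n-sided polygon inscribed in a circle of radius r."""
    return 0.5 * n * r**2 * math.sin(2 * math.pi / n)

for sides in [4, 8, 16, 32, 64]:
    print(f"{sides:2d} sides: {inscribed_polygon_area(sides):.5f}")
print(f"circle:   {math.pi:.5f}")
```

Each doubling of the number of sides plays the role of another loop: more work to compute, but closer to the true answer of \pi.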

This, broadly speaking, is what’s going on when particle physicists talk about loops. The calculation with no loops (or “tree-level” result) is the easier problem to solve, omitting quantum effects. Each loop then is the next stage, more complicated but closer to the real total.

There are, as usual, holes in this analogy. One is that it leaves out an important aspect of perturbation theory, namely that it involves perturbing with a parameter. When that parameter is small, perturbation theory works, but as it gets larger the approximation gets worse and worse. In the case of particle physics, the parameter is the strength of the forces involved, with weaker forces (like the weak nuclear force, or electromagnetism) having better approximations than stronger forces (like the strong nuclear force). If you squint, this can still fit the analogy: different shapes might be harder to approximate than the circle, taking more sets of lines to get acceptably close.

Where the analogy fails completely, though, is when you start approaching infinity. Keep adding more lines, and you should be getting closer and closer to the circle each time. In quantum field theory, though, this frequently is not the case. As I’ve mentioned before, while lower loops keep getting closer to the true (and experimentally verified) results, going all the way out to infinite loops gives not the full circle, but an infinite answer. There’s an understanding of why this happens, but it does mean that perturbation theory can’t be thought of in the most intuitive way.
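
There’s a standard toy model that shows exactly this behavior: a “zero-dimensional” stand-in for a quantum field theory (a textbook example, not tied to any particular theory). The exact answer is a simple integral, while the “loop expansion” is its power series in a coupling g, with coefficients that grow factorially:

```python
import math
from scipy.integrate import quad

def exact(g):
    """The full 'circle': a Gaussian average of exp(-g * x^4 / 4)."""
    integrand = lambda x: math.exp(-x**2 / 2 - g * x**4 / 4)
    value, _ = quad(integrand, -math.inf, math.inf)
    return value / math.sqrt(2 * math.pi)

def coefficient(n):
    """n-th series coefficient: (-1/4)^n * (4n-1)!! / n!  (grows factorially)."""
    double_factorial = math.prod(range(4 * n - 1, 0, -2)) if n > 0 else 1
    return (-0.25)**n * double_factorial / math.factorial(n)

g = 0.1
partial_sum = 0.0
for loops in range(10):
    partial_sum += coefficient(loops) * g**loops
    print(f"{loops} loops: {partial_sum:+.5f}")
print(f"exact:   {exact(g):+.5f}")
```

At g = 0.1 the first few terms close in on the exact value, but after a few “loops” the series turns around and blows up: a useful approximation at low orders that never actually converges.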

Almost every calculation in particle physics uses perturbation theory, which means almost always we are just approximating the real result, trying to draw a circle using straight lines. There are only a few theories where we can bypass this process and look at the full circle. These are known as integrable theories. N=4 super Yang-Mills may be among them, one of many reasons why studying it offers hope for a deeper understanding of particle physics.