Category Archives: General QFT

What Makes Light Move?

Light always moves at the speed of light.

It’s not alone in this: anything that lacks mass moves at the speed of light. Gluons, if they weren’t constantly interacting with each other, would move at the speed of light. Neutrinos, back when we thought they were massless, were thought to move at the speed of light. Gravitational waves, and by extension gravitons, move at the speed of light.

This is, on the face of it, a weird thing to say. If I say a jet moves at the speed of sound, I don’t mean that it always moves at the speed of sound. Find it in its hangar and hopefully it won’t be moving at all.

And so, people occasionally ask me, why can’t we find light in its hangar? Why does light never stand still? What makes light move?

(For the record, you can make light “stand still” in a material, but that’s because the material is absorbing and reflecting it, so it’s not the “same” light traveling through. Compare the speed of a wave of hands in a stadium versus the speed you could run past the seats.)

This is surprisingly tricky to explain without math. Some people point out that if you want to see light at rest you need to speed up to catch it, but you can’t accelerate enough unless you too are massless. This probably sounds a bit circular. Some people talk about how, from light’s perspective, no time passes at all. This is true, but it seems to confuse more than it helps. Some people say that light is “made of energy”, but I don’t like that metaphor. Nothing is “made of energy”, nor is anything “made of mass” either. Mass and energy are properties things can have.

I do like game metaphors though. So, imagine that each particle (including photons, particles of light) is a character in an RPG.


For bonus points, play Light in an RPG.

You can think of energy as the particle’s “character points”. When the particle builds its character it gets a number of points determined by its energy. It can spend those points increasing its “stats”: mass and momentum, via the lesser-known big brother of E=mc^2, E^2=p^2c^2+m^2c^4.

Maybe the particle chooses to play something heavy, like a Higgs boson. Then they spend a lot of points on mass, and don’t have as much to spend on momentum. If they picked something lighter, like an electron, they’d have more to spend, so they could go faster. And if they spent nothing at all on mass, like light does, they could use all of their energy “points” boosting their speed.

Now, it turns out that these “energy points” don’t boost speed one for one, which is why low-energy light isn’t any slower than high-energy light. Instead, speed is determined by the ratio between energy and momentum. When that ratio is exactly the speed of light, when E^2=p^2c^2 (that is, when E=pc), the particle is moving at the speed of light.

(Why that’s the rule is trickier to explain. You’ll have to trust me or Wikipedia that the math works out.)
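For those who do want a peek at the math (nothing here beyond textbook special relativity): a particle’s speed is set by the ratio of its momentum to its energy,

v = pc^2/E.

Combine that with E^2=p^2c^2+m^2c^4. If m=0, then E=pc, so v = pc^2/(pc) = c, no matter how much energy the photon has. If m is anything bigger than zero, E is always a bit larger than pc, and the speed is always a bit less than c.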

Some of you may be happy with this explanation, but others will accuse me of passing the buck. Ok, a photon with any energy will move at the speed of light. But why do photons have any energy at all? And even if they must move at the speed of light, what determines which direction?

Here I think part of the problem is an old physics metaphor, probably dating back to Newton, of a pool table.


A pool table is a decent metaphor for classical physics. You have moving objects following predictable paths, bouncing off each other and the walls of the table.

Where people go wrong is in projecting this metaphor back to the beginning of the game. At the beginning of a game of pool, the balls are at rest, racked in the center. Then one of them is hit with the pool cue, and they’re set into motion.

In physics, we don’t tend to have such neat and tidy starting conditions. In particular, things don’t have to start at rest before something whacks them into motion.

A photon’s “start” might come from an unstable Higgs boson produced by the LHC. The Higgs decays, and turns into two photons. Since energy is conserved, the two photons between them must carry all of the energy of the original Higgs, including the energy that was “spent” on its mass: in the Higgs’s own rest frame, each gets exactly half. This process is quantum mechanical, and with no preferred direction the photons will emerge in a random one.
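For a rough sense of the numbers, using the measured Higgs mass of about 125 GeV/c^2: in the Higgs’s rest frame each photon comes out with roughly 62.5 GeV of energy, and since photons are massless, each carries a momentum of roughly 62.5 GeV/c. The two momenta point in exactly opposite directions, so they cancel, as they must for a decaying particle that started at rest.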

Photons in the LHC may seem like an artificial example, but in general whenever light is produced it’s due to particles interacting, and conservation of energy and momentum will send the light off in one direction or another.

(For the experts, there is of course the possibility of very low energy soft photons, but that’s a story for another day.)

Not even the beginning of the universe resembles that racked set of billiard balls. The question of what “initial conditions” make sense for the whole universe is a tricky one, but there isn’t a way to set it up where you start with light at rest. It’s not just that it’s not the default option: it isn’t even an available option.

Light moves at the speed of light, no matter what. That isn’t because light started at rest, and something pushed it. It’s because light has energy, and a particle has to spend its “character points” on something.

 

Mass Is Just Energy You Haven’t Met Yet

How can colliding two protons give rise to more massive particles? Why do vibrations of a string have mass? And how does the Higgs work anyway?

There is one central misunderstanding that makes each of these topics confusing. It’s something I’ve brought up before, but it really deserves its own post. It’s people not realizing that mass is just energy you haven’t met yet.

It’s quite intuitive to think of mass as some sort of “stuff” that things can be made out of. In our everyday experience, that’s how it works: combine this mass of flour and this mass of sugar, and get this mass of cake. Historically, it was the dominant view in physics for quite some time. However, once you get to particle physics it starts to break down.

It’s probably most obvious for protons. A proton has a mass of 938 MeV/c², or about 1.67×10⁻²⁷ kg in less physicist-specific units. Protons are each made of three quarks, two up quarks and a down quark. Naively, you’d think that each quark would have to weigh in around 300 MeV/c². They don’t, though: up and down quarks both have masses of less than 10 MeV/c². Those three quarks account for less than a fiftieth of a proton’s mass.
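To put rough numbers on it (using the approximate values quoted by the Particle Data Group, about 2 MeV/c² for an up quark and 5 MeV/c² for a down quark):

2 × 2 MeV/c² + 5 MeV/c² ≈ 9 MeV/c², compared to 938 MeV/c² for the proton,

so the quark masses themselves make up only about one percent of the total.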

The “extra” mass is because a proton is not just three quarks. It’s three quarks interacting. The forces between those quarks, the strong nuclear force that binds them together, involves a heck of a lot of energy. And from a distance, that energy ends up looking like mass.

This isn’t unique to protons. In some sense, it’s just what mass is.

The quarks themselves get their mass from the Higgs field. Far enough away, this looks like the quarks having a mass. However, zoom in and it’s energy again, the energy of interaction between quarks and the Higgs. In string theory, mass comes from the energy of vibrating strings. And so on. Every time we run into something that looks like a fundamental mass, it ends up being just another energy of interaction.

If mass is just energy, what about gravity?

When you’re taught about gravity, the story is all about mass. Mass attracts mass. Mass bends space-time. What gets left out, until you actually learn the details of General Relativity, is that energy gravitates too.

Normally you don’t notice this, because mass contributes so much more to energy than anything else. That’s really what E=mc² is about: it’s a unit conversion formula. It tells you that if you want to know how much energy a given mass “really is”, you multiply it by the speed of light squared. And that’s a large enough number that most of the time, when you notice energy gravitating, it’s because that energy looks like a big chunk of mass. (It’s also why physicists like silly units like MeV/c² for mass: we can just multiply by c² and get an energy!)
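As a worked example of that unit conversion: take a single kilogram of anything, and ask how much energy it “really is”:

E = mc² = (1 kg) × (3×10⁸ m/s)² = 9×10¹⁶ joules,

roughly twenty megatons of TNT. That’s why even a tiny bit of mass utterly dominates the energy, and thus the gravity, of everyday objects.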

It’s really tempting to think about mass as a substance, of mass as always conserved, of mass as fundamental. But in physics we often have to toss aside our everyday intuitions, and this is no exception. Mass really is just energy. It’s just energy that we’ve “zoomed out” enough not to notice.

Those Wacky 60’s Physicists

The 60’s were a weird time in academia. Psychologists were busy experimenting with LSD, seeing if they could convince people to electrocute each other, and otherwise doing the sorts of shenanigans that ended up saddling them with Institutional Review Boards so that nowadays they can’t even hand out surveys without a ten page form attesting that it won’t have adverse effects on pregnant women.

We don’t have IRBs in theoretical physics. We didn’t get quite as wacky as the psychologists did. But the 60’s were still a time of utopian dreams and experimentation, even in physics. We may not have done unethical experiments on people…but we did have the Analytic S-Matrix Program.

The Analytic S-Matrix Program was an attempt to rebuild quantum field theory from the ground up. The “S” in S-Matrix stands for “scattering”: the S-Matrix is an enormous matrix that tells you, for each set of incoming particles, the probability that they scatter into some new set of outgoing particles. Normally, this gets calculated piece by piece with what are called Feynman diagrams. The goal of the Analytic S-Matrix Program was a loftier one: to derive the S-Matrix from first principles, without building it out of quantum field theory pieces. By setting aside Feynman diagrams’ reliance on space and time, people like Geoffrey Chew, Stanley Mandelstam, Tullio Regge, and Lev Landau hoped to reach a deeper understanding of fundamental physics.

If this sounds familiar, it should. Amplitudeologists like me view the physicists of the Analytic S-Matrix Program as our spiritual ancestors. Like us, they tried to skip the mess of Feynman diagrams, looking for mathematical tricks and unexpected symmetries to show them the way forward.

Unfortunately, they didn’t have the tools we do now. They didn’t understand the mathematical functions they needed, nor did they have novel ways of writing down their results like the amplituhedron. Instead, they had to work with what they knew, which in practice usually meant going back to Feynman diagrams.

Paradoxically then, much of the lasting impact of the Analytic S-Matrix Program has been on how we understand the results of Feynman diagram calculations. Just as psychologists learn about the Milgram experiment in school, we learn about Mandelstam variables and Regge trajectories. Recently, we’ve been digging up old concepts from those days and finding new applications, like the recent work on Landau singularities, or some as-yet unpublished work I’ve been doing.

Of course, this post wouldn’t be complete without mentioning the Analytic S-Matrix Program’s most illustrious child, String Theory. Some of the mathematics cooked up by the physicists of the 60’s, while dead ends for the problems they were trying to solve, ended up revealing a whole new world of potential.

The physicists of the 60’s were overly optimistic. Nevertheless, their work opened up questions that are still worth asking today. Much as psychologists can’t ignore what they got up to in the 60’s, it’s important for physicists to be aware of our history. You never know what you might dig up.


And as LeVar Burton would say, you don’t have to take my word for it.

A Collider’s Eye View

When it detected the Higgs, what did the LHC see, exactly?


What do you see with your detector-eyes, CMS?

The first problem is that the Higgs, like most particles produced in particle colliders, is unstable. In a very short amount of time the Higgs transforms into two or more lighter particles. Often, these particles will decay in turn, possibly many more times.  So when the LHC sees a Higgs boson, it doesn’t really “see the Higgs”.

The second problem is that you can’t “see” the lighter particles either. They’re much too small for that. Instead, the LHC has to measure their properties.

Does the particle have a charge? Then its path will curve in a magnetic field, and it will send electrical signals in silicon. So the LHC can “see” charge.

Can the particle be stopped, absorbed by some material? Getting absorbed releases energy, lighting up a detector. So the LHC can “see” energy, and what it takes for a particle to be absorbed.


Diagram of a collider’s “eye”

And that’s…pretty much it. When the LHC “sees” the Higgs, what it sees is a set of tracks in a magnetic field, indicating charge, and energy in its detectors, caused by absorption at different points. Everything else has to be inferred: what exactly the particles were, where they decayed, and from what. Some of it can be figured out in real-time, some is only understood later once we can add up everything and do statistics.

On the face of it, this sounds about as impossible as astrophysics. Like astrophysics, it works in part because what the colliders see is not the whole story. The strong force has to be consistent both with our observations of hadrons and with nuclear physics. Neutrinos aren’t just mysterious missing energy that we can’t track: they’re an important part of cosmology. And so on.

So in the sense of that massive, interconnected web of ideas, the LHC sees the Higgs. It sees patterns of charges and energies, binned into histograms and analyzed with statistics and cross-checked, implicitly or explicitly, against all of the rest of physics at every scale we know. All of that, together, is the collider’s eye view of the universe.

GUTs vs ToEs: What Are We Unifying Here?

“Grand Unified Theory” and “Theory of Everything” may sound like meaningless grandiose titles, but they mean very different things.

In particular, Grand Unified Theory, or GUT, is a technical term, referring to a specific way to unify three of the fundamental interactions: electromagnetism, the weak force, and the strong force.


In contrast, guts unify the two fundamental intestines.

Those three forces are called Yang-Mills forces, and they can all be described in the same basic way. In particular, each has a strength (the coupling constant) and a mathematical structure that determines how it interacts with itself, called a group.

The core idea of a GUT, then, is pretty simple: to unite the three Yang-Mills forces, they need to have the same strength (the same coupling constant) and be part of the same group.

But wait! (You say, still annoyed at the pun in the above caption.) These forces don’t have the same strength at all! One of them’s strong, one of them’s weak, and one of them is electromagnetic!

As it turns out, this isn’t as much of a problem as it seems. While the three Yang-Mills forces seem to have very different strengths on an everyday scale, that’s not true at very high energies. Let’s steal a plot from Sweden’s Royal Institute of Technology:

[Plot: the three coupling strengths “running” with energy, with and without supersymmetry]

Why Sweden? Why not!

What’s going on in this plot?

Here, each \alpha represents the strength of a fundamental force. As the force gets stronger, \alpha gets bigger (and so \alpha^{-1} gets smaller). The variable on the x-axis is the energy scale. The grey lines represent a world without supersymmetry, while the black lines show the world in a supersymmetric model.

So based on this plot, it looks like the strengths of the fundamental forces change based on the energy scale. That’s true, but if you find that confusing there’s another, mathematically equivalent way to think about it.

You can think about each force as having some sort of ultimate strength, the strength it would have if the world weren’t quantum. Without quantum mechanics, each force would interact with particles in only the simplest of ways, corresponding to the simplest diagram here.

However, our world is quantum mechanical. Because of that, when we try to measure the strength of a force, we’re not really measuring its “ultimate strength”. Rather, we’re measuring it alongside a whole mess of other interactions, corresponding to the other diagrams in that post. These extra contributions mean that what looks like the strength of the force gets stronger or weaker depending on the energy of the particles involved.

(I’m sweeping several things under the rug here, including a few infinities and electroweak unification. But if you just want a general understanding of what’s going on, this should be a good starting point.)

If you look at the plot, you’ll see the forces meet up somewhere around 10^16 GeV. They miss each other for the faint, non-supersymmetric lines, but they meet fairly cleanly for the supersymmetric ones.
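If you’d like to play with the numbers yourself, here’s a minimal sketch in Python of the standard one-loop “running” formula. The starting values at the Z mass and the supersymmetric (MSSM) coefficients are rounded, textbook numbers, and a real analysis would use non-supersymmetric running below the supersymmetry scale plus higher-order corrections, so treat the output as illustrative only:

```python
import math

M_Z = 91.2  # GeV, the energy scale where the couplings are measured

# Approximate measured values of 1/alpha at the Z mass
# (the U(1) piece in the usual GUT normalization).
alpha_inv_MZ = {"U(1)": 59.0, "SU(2)": 29.6, "SU(3)": 8.5}

# One-loop beta coefficients in the supersymmetric (MSSM) case.
b = {"U(1)": 33.0 / 5.0, "SU(2)": 1.0, "SU(3)": -3.0}

def alpha_inv(force, energy):
    """One-loop running of 1/alpha from M_Z up to the given energy in GeV."""
    return alpha_inv_MZ[force] - b[force] / (2.0 * math.pi) * math.log(energy / M_Z)

for energy in (1e3, 1e10, 1e16):
    report = ", ".join(f"{force}: {alpha_inv(force, energy):.1f}" for force in b)
    print(f"1/alpha at {energy:.0e} GeV -> {report}")
```

Run it and all three values land within a couple of units of each other near 10^16 GeV; swap in the non-supersymmetric coefficients (roughly 41/10, -19/6, and -7) and they miss by a wider margin, just like the faint lines in the plot.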

So (at least if supersymmetry is true), making the Yang-Mills forces have the same strength is not so hard. Putting them in the same mathematical group is where things get trickier. This is because any group that contains the groups of the fundamental forces will be “bigger” than just the sum of those forces: it will contain “extra forces” that we haven’t observed yet, and these forces can do unexpected things.
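To give the textbook example (not the only option): the Standard Model’s groups are SU(3) for the strong force and SU(2) × U(1) for the electroweak part, and the simplest GUT, the Georgi–Glashow model, fits them all inside the single group SU(5),

SU(3) × SU(2) × U(1) ⊂ SU(5).

The parts of SU(5) that don’t fit inside the Standard Model groups are exactly those “extra forces”.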

In particular, the “extra forces” predicted by GUTs usually make protons unstable. As far as we can tell, protons are very long-lasting: if protons decayed too fast, we wouldn’t have stars. So if protons decay, they must do it only very rarely, detectable only with very precise experiments. These experiments are powerful enough to rule out most of the simplest GUTs. The more complicated GUTs still haven’t been ruled out, but the lack of any sign of proton decay has been enough to dampen interest in GUTs as a research topic.

What about Theories of Everything, or ToEs?

While GUT is a technical term, ToE is very much not. Instead, it’s a phrase that journalists have latched onto because it sounds cool. As such, it doesn’t really have a clear definition. Usually it means uniting gravity with the other fundamental forces, but occasionally people use it to refer to a theory that also unifies the various Standard Model particles into some sort of “final theory”.

Gravity is very different from the other fundamental forces, different enough that it’s kind of silly to group them as “fundamental forces” in the first place. Thus, while GUT models are the kind of thing one can cook up and tinker with, any ToE has to be based on some novel insight, one that lets you express gravity and Yang-Mills forces as part of the same structure.

So far, string theory is the only such insight we have access to. This isn’t just me being arrogant: while there are other attempts at theories of quantum gravity, aside from some rather dubious claims none of them are even interested in unifying gravity with other forces.

This doesn’t mean that string theory is necessarily right. But it does mean that if you want a different “theory of everything”, telling physicists to go out and find a new one isn’t going to be very productive. “Find a theory of everything” is a hope, not a research program, especially if you want people to throw out the one structure we have that even looks like it can do the job.

The Higgs Solution

My grandfather is a molecular biologist. Over the holidays I had many opportunities to chat with him, and our conversations often revolved around explaining some aspect of our respective fields. While talking to him, I came up with a chemistry-themed description of the Higgs field, and how it leads to electro-weak symmetry breaking. Very few of you are likely to be chemists, but I think you still might find the metaphor worthwhile.

Picture the Higgs as a mixture of ions, dissolved in water.

In this metaphor, the Higgs field is a sort of “Higgs solution”. Overall, this solution should be uniform: if you have more ions of a certain type in one place than another, over time they will dissolve until they reach a uniform mixture again. In this metaphor, the Higgs particle detected by the LHC is like a brief disturbance in the fluid: by stirring the solution at high energy, we’ve managed to briefly get more of one type of ion in one place than the average concentration.

What determines the average concentration, though?

Essentially, it’s arbitrary. If this were really a chemistry experiment, it would depend on the initial conditions: which ions we put into the mixture in the first place. In physics, quantum mechanics plays a role, randomly selecting one option out of the many possibilities.

 


Choose wisely

(Note that this metaphor doesn’t explain why there has to be a solution, why the water can’t just be “pure”. A setup that required this would probably be chemically complicated enough to confuse nearly everybody, so I’m leaving that feature out. Just trust that “no ions” isn’t one of our options.)

Up till now, the choice of mixture didn’t matter very much. But different ions interact with other chemicals in different ways, and this has some interesting implications.

Suppose we have a tube filled with our Higgs solution. We want to shoot some substance through the tube, and collect it on the other side. This other substance is going to represent a force.

If our force substance doesn’t react with the ions in our Higgs solution, it will just go through to the other side. If it does react, though, then it will be slowed down, and only some of it will get to the other side, possibly none at all.

You can think of the electro-weak force as a mixture of these sorts of substances. Normally, there is no way to tell the different substances apart. Just like the different Higgs solutions, different parts of the electro-weak force are arbitrary.

However, once we’ve chosen a Higgs solution, things change. Now, different parts of our electro-weak substance will behave differently. The parts that react with the ions in our Higgs solution will slow down, and won’t make it through the tube, while the parts that don’t interact will just flow on through.

We call the part that gets through the tube electromagnetism, and the part that doesn’t the weak nuclear force. Electromagnetism is long-range: its waves (light) can travel great distances. The weak nuclear force is short-range, and doesn’t have an effect outside of the scale of atoms.

The important thing to take away from this is that the division between electromagnetism and the weak nuclear force is totally arbitrary. Taken by themselves, they’re equivalent parts of the same, electro-weak force. It’s only because some of them interact with the Higgs, while others don’t, that we distinguish those parts from each other. If the Higgs solution were a different mixture (if the Higgs field had different charges) then a different part of the electroweak force would be long-range, and a different part would be short-range.

We wouldn’t be able to tell the difference, though. We’d see a long-range force, and a short-range force, and a Higgs field. In the end, our world would be completely the same, just based on a different, arbitrary choice.

Entropy is Ignorance

(My last post had a poll in it! If you haven’t responded yet, please do.)

Earlier this month, philosopher Richard Dawid ran a workshop entitled “Why Trust a Theory? Reconsidering Scientific Methodology in Light of Modern Physics” to discuss his idea of “non-empirical theory confirmation” for string theory, inflation, and the multiverse. They haven’t published the talks online yet, so I’m stuck reading coverage, mostly these posts by skeptical philosopher Massimo Pigliucci. I find the overall concept annoying, and may rant about it later. For now though, I’d like to talk about a talk on the second day by philosopher Chris Wüthrich about black hole entropy.

Black holes, of course, are the entire-stars-collapsed-to-a-point-that-no-light-can-escape that everyone knows and loves. Entropy is often thought of as the scientific term for chaos and disorder, the universe’s long slide towards dissolution. In reality, it’s a bit more complicated than that.


For one, you need to take Elric into account…

Can black holes be disordered? Naively, that doesn’t seem possible. How can a single point be disorderly?

Thought about in a bit more detail, the conclusion seems even stronger. Via something called the “No Hair Theorem”, it’s possible to prove that black holes can be described completely with just three numbers: their mass, their charge, and how fast they are spinning. With just three numbers, how can there be room for chaos?

On the other hand, you may have heard of the Second Law of Thermodynamics. The Second Law states that entropy always increases. Absent external support, things will always slide towards disorder eventually.

If you combine this with black holes, then this seems to have weird implications. In particular, what happens when something disordered falls into a black hole? Does the disorder just “go away”? Doesn’t that violate the Second Law?

This line of reasoning has led to the idea that black holes have entropy after all. It led Bekenstein to calculate the entropy of a black hole based on how much information is “hidden” inside, and Hawking to find that black holes in a quantum world should radiate as if they had a temperature consistent with that entropy. One of the biggest successes of string theory is an explanation for this entropy. In string theory, black holes aren’t perfect points: they have structure, arrangements of strings and higher dimensional membranes, and this structure can be disordered in a way that seems to give the right entropy.
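For reference, the formula Bekenstein and Hawking ended up with is remarkably compact: the entropy of a black hole is proportional to the area A of its event horizon,

S = k c^3 A / (4 G \hbar),

where k is Boltzmann’s constant, G is Newton’s constant, and \hbar is the reduced Planck constant. Part of what makes it so striking is that the entropy grows with the horizon’s area, not with the volume inside.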

Note that none of this has been tested experimentally. Hawking radiation, if it exists, is very faint: not the sort of thing we could detect with a telescope. Wüthrich is worried that Bekenstein’s original calculation of black hole entropy might have been on the wrong track, which would undermine one of string theory’s most well-known accomplishments.

I don’t know Wüthrich’s full argument, since the talks haven’t been posted online yet. All I know is Pigliucci’s summary. From that summary, it looks like Wüthrich’s primary worry is about two different definitions of entropy.

See, when I described entropy as “disorder”, I was being a bit vague. There are actually two different definitions of entropy. The older one, Gibbs entropy, grows with the number of states of a system. What does that have to do with disorder?

Think about two different substances: a gas, and a crystal. Both are made out of atoms, but the patterns involved are different. In the gas, atoms are free to move, while in the crystal they’re (comparatively) fixed in place.


Blurrily so in this case

There are many different ways the atoms of a gas can be arranged and still be a gas, but fewer in which they can be a crystal, so a gas has more entropy than a crystal. Intuitively, the gas is more disordered.

When Bekenstein calculated the entropy of a black hole he didn’t use Gibbs entropy, though. Instead, he used Shannon entropy, a concept from information theory. Shannon entropy measures the amount of information in a message, with a formula very similar to that of Gibbs entropy: the more different ways you can arrange something, the more information you can use it to send. Bekenstein used this formula to calculate the amount of information that gets hidden from us when something falls into a black hole.
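To see just how similar the two formulas are, here they are side by side, with p_i the probability of finding the system in state i (for Gibbs) or of receiving message i (for Shannon):

Gibbs: S = -k \sum_i p_i \ln p_i

Shannon: H = -\sum_i p_i \log_2 p_i

Up to the constant out front and the base of the logarithm, they’re the same expression.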

Wüthrich’s worry here (again, as far as Pigliucci describes) is that Shannon entropy is a very different concept from Gibbs entropy. Shannon entropy measures information, while Gibbs entropy is something “physical”. So by using one to predict the other, are predictions about black hole entropy just confused?

It may well be he has a deeper argument for this, one that wasn’t covered in the summary. But if this is accurate, Wüthrich is missing something fundamental. Shannon entropy and Gibbs entropy aren’t two different concepts. Rather, they’re both ways of describing a core idea: entropy is a measure of ignorance.

A gas has more entropy than a crystal: it can be arranged in a larger number of different ways. But let’s not talk about a gas. Let’s talk about a specific arrangement of atoms: one is flying up, one to the left, one to the right, and so on. Space them apart, but be very specific about how they are arranged. This arrangement could well be a gas, but now it’s a specific gas. And because we’re this specific, there are now many fewer states the gas can be in, so this (specific) gas has less entropy!
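In formula terms: when every allowed arrangement is equally likely, the Gibbs formula reduces to Boltzmann’s S = k \ln \Omega, where \Omega counts the arrangements consistent with what you know. Describe the gas only as “a gas” and \Omega is astronomically large; specify every atom’s position and velocity and \Omega = 1, so S = k \ln 1 = 0.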

Now of course, this is a very silly way to describe a gas. In general, we don’t know what every single atom of a gas is doing; that’s why we call it a gas in the first place. But it’s that lack of knowledge that we call entropy. Entropy isn’t just something out there in the world: it’s a feature of our descriptions…but one that, nonetheless, has important physical consequences. The Second Law still holds: the world goes from lower entropy to higher entropy. And while that may seem strange, it’s actually quite logical: the things that we describe in more vague terms should become more common than the things we describe in specific terms: after all, there are many more of them!

Entropy isn’t the only thing like this. In the past, I’ve bemoaned the difficulty of describing the concept of gauge symmetry. Gauge symmetry is in some ways just part of our descriptions: we prefer to describe fundamental forces in a particular way, and that description has redundant parameters. We have to make those redundant parameters “go away” somehow, and that leads to non-existent particles called “ghosts”. However, gauge symmetry also has physical consequences: it was how people first knew that there had to be a Higgs boson, long before it was discovered. And while it might seem weird to think that a redundancy could imply something as physical as the Higgs, the success of the concept of entropy should make this much less surprising. Much of what we do in physics is reasoning about different descriptions, different ways of dividing up the world, and then figuring out the consequences of those descriptions. Entropy is ignorance…and if our ignorance obeys laws, if it’s describable mathematically, then it’s as physical as anything else.

Is Everything Really Astonishingly Simple?

Neil Turok gave a talk last week, entitled The Astonishing Simplicity of Everything. In it, he argued that our current understanding of physics is really quite astonishingly simple, and that recent discoveries seem to be confirming this simplicity.

For the right sort of person, this can be a very uplifting message. The audience was spellbound. But a few of my friends were pretty thoroughly annoyed, so I thought I’d dedicate a post to explaining why.

Neil’s talk built up to showing this graphic, one of the masterpieces of Perimeter’s publications department:

Looked at in this way, the laws of physics look astonishingly simple. One equation, a few terms, each handily labeled with a famous name of some (occasionally a little hazy) relevance to the symbol in question.

In a sense, the world really is that simple. There are only a few kinds of laws that govern the universe, and the concepts behind them are really, deep down, very simple concepts. Neil adroitly explained some of the concepts behind quantum mechanics in his talk (here represented by the Schrodinger, Feynman, and Planck parts of the equation), and I have a certain fondness for the Maxwell-Yang-Mills part. The other parts represent different kinds of particles, and different ways they can interact.

While there are only a few different kinds of laws, though, that doesn’t mean the existing laws are simple. That nice, elegant equation hides 25 arbitrary parameters, tucked into the Maxwell-Yang-Mills, Dirac, Kobayashi-Maskawa, and Higgs parts. It also omits the cosmological constant, which fuels the expansion of the universe. And there are problems if you try to claim that the gravity part, for example, is complete.

When Neil mentions recent discoveries, he’s referring to the LHC not seeing new supersymmetric particles, to telescopes not seeing any unusual features in the cosmic microwave background. The theories that were being tested, supersymmetry and inflation, are in many ways more complicated than the Standard Model, adding new parameters without getting rid of old ones. But I think it’s a mistake to say that if these theories are ruled out, the world is astonishingly simple. These theories are attempts to explain unlikely features of the old parameters, or unlikely features of the universe we observe. Without them, we’ve still got those unlikely, awkward, complicated bits.

Of course, Neil doesn’t think the Standard Model is all there is either, and while he’s not a fan of inflation, he does have proposals he’s worked on that explain the same observations, proposals that are also beyond the current picture. More broadly, he’s not suggesting here that the universe is just what we’ve figured out so far and no more. Rather, he’s suggesting that new proposals ought to build on the astonishing simplicity of the universe, instead of adding complexity, that we need to go back to the conceptual drawing board rather than correcting the universe with more gears and wheels.

On the one hand, that’s Perimeter’s mission statement in a nutshell. Perimeter’s independent nature means that folks here can focus on deeper conceptual modifications to the laws of physics, rather than playing with the sorts of gears and wheels that people already know how to work with.

On the other hand, a lack of new evidence doesn’t do anyone any favors. It doesn’t show the way for supersymmetry, but it doesn’t point to any of the “deep conceptual” approaches either. And so for some people, Neil’s glee at the lack of new evidence feels less like admiration for the simplicity of the cosmos and more like that one guy in a group project who sits back chuckling while everyone else fails. You can perhaps understand why some people felt resentful.

Hooray for Neutrinos!

Congratulations to Takaaki Kajita and Arthur McDonald, winners of this year’s Nobel Prize in Physics, as well as to the Super-Kamiokande and SNO teams that made their work possible.

Congratulations!

Unlike last year’s Nobel, this is one I’ve been anticipating for quite some time. Kajita and McDonald discovered that neutrinos have mass, and that discovery remains our best hint that there is something out there beyond the Standard Model.

But I’m getting a bit ahead of myself.

Neutrinos are the lightest of the fundamental particles, and for a long time they were thought to be completely massless. Their name means “little neutral one”, and it’s probably the last time physicists used “-ino” to mean “little”. Neutrinos are “neutral” because they have no electrical charge. They also don’t interact with the strong nuclear force. Only the weak nuclear force has any effect on them. (Well, gravity does too, but very weakly.)

This makes it very difficult to detect neutrinos: you have to catch them interacting via the weak force, which is, well, weak. Originally, that meant they had to be inferred by their absence: missing energy in nuclear reactions carried away by “something”. Now, they can be detected, but it requires massive tanks of fluid, carefully watched for the telltale light of the rare interactions between neutrinos and ordinary matter. You wouldn’t notice if billions of neutrinos passed through you every second, like an unstoppable army of ghosts. And in fact, that’s exactly what happens!

Visualization of neutrinos from a popular documentary

In the 60’s, scientists began to use these giant tanks of fluid to detect neutrinos coming from the sun. An enormous amount of effort goes into understanding the sun, and these days our models of it are pretty accurate, so it came as quite a shock when researchers observed only a third to a half of the neutrinos they expected. It wasn’t until the work of Super-Kamiokande in 1998, and SNO in 2001, that we knew the reason why.

As it turns out, neutrinos oscillate. Neutrinos are produced in what are called flavor states, which match up with the different types of leptons. There are electron-neutrinos, muon-neutrinos, and tau-neutrinos.

Radioactive processes usually produce electron-neutrinos, so those are the type that the sun produces. But on their way from the sun to the earth, these neutrinos “oscillate”: they switch between electron neutrinos and the other types! The older detectors, focused only on electron-neutrinos, couldn’t see this. SNO’s big advantage was that it could detect the other types of neutrinos as well, and tell the difference between them, which allowed it to see that the “missing” neutrinos were really just turning into other flavors! Meanwhile, Super-Kamiokande measured neutrinos coming not from the sun, but from cosmic rays reacting with the upper atmosphere. Some of these neutrinos came from the sky above the detector, while others traveled all the way through the earth below it, from the atmosphere on the other side. By observing “missing” neutrinos coming from below but not from above, Super-Kamiokande confirmed that it wasn’t the sun’s fault that we were missing solar neutrinos: neutrinos just oscillate!

What does this oscillation have to do with neutrinos having mass, though?

Here things get a bit trickier. I’ve laid some of the groundwork in older posts. I’ve told you to think about mass as “energy we haven’t met yet”, as the energy something has when we leave it alone to itself. I’ve also mentioned that conservation laws come from symmetries of nature, that energy conservation is a result of symmetry in time.

This should make it a little more plausible when I say that when something has a specific mass, it doesn’t change. It can decay into other particles, or interact with other forces, but left alone, by itself, it won’t turn into something else. To be more specific, it doesn’t oscillate. A state with a fixed mass is symmetric in time.

The only way neutrinos can oscillate between flavor states, then, is if one flavor state is actually a combination (in quantum terms, a superposition) of different masses. The components with different masses move at different speeds, so at any point along their path you can be more or less likely to see certain masses of neutrinos. As the mix of masses changes, the flavor state changes, so neutrinos end up oscillating from electron-neutrino, to muon-neutrino, to tau-neutrino.
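In the simplified case of just two flavors, the standard oscillation formula makes this concrete:

P(\nu_e \to \nu_\mu) = \sin^2(2\theta) \sin^2(1.27 \, \Delta m^2 \, L / E),

where \theta is the mixing angle describing how the flavor states are built out of the mass states, \Delta m^2 is the difference of the squared masses (in eV^2), L is the distance traveled (in km), and E is the neutrino’s energy (in GeV). The key point: if the masses were all equal, \Delta m^2 would be zero and the probability of switching flavors would vanish. No mass difference, no oscillation.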

So because of neutrino oscillation, neutrinos have to have mass. But this presented a problem. Most fundamental particles get their mass from interacting with the Higgs field. But, as it turns out, neutrinos can’t interact with the Higgs field. This has to do with the fact that neutrinos are “chiral”, and only come in a “left-handed” orientation. Only if they had both types of “handedness” could they get their mass from the Higgs.

As-is, they have to get their mass another way, and that way has yet to be definitively shown. Whatever it ends up being, it will be beyond the current Standard Model. Maybe there actually are right-handed neutrinos, but they’re too massive, or interact too weakly, for them to have been discovered. Maybe neutrinos are Majorana particles, getting mass in a novel way that hasn’t been seen yet in the Standard Model.

Whatever we discover, neutrinos are currently our best evidence that something lies beyond the Standard Model. Naturalness may have philosophical problems, dark matter may be explained away by modified gravity…but if neutrinos have mass, there’s something we still have yet to discover. And that definitely seems worthy of a Nobel to me!

Pentaquarks!

Earlier this week, the LHCb experiment at the Large Hadron Collider announced that, after painstakingly analyzing the data from earlier runs, they have decisive evidence of a previously unobserved particle: the pentaquark.

What’s a pentaquark? In simple terms, it’s five quarks stuck together. Stick two up quarks and a down quark together, and you get a proton. Stick a quark and an antiquark together, you get a meson of some sort. Five, you get a pentaquark.

(In this case, if you’re curious: two up quarks, one down quark, one charm quark and one anti-charm quark.)

Artist’s Conception

Crucially, this means pentaquarks are not fundamental particles. Fundamental particles aren’t like species, but composite particles like pentaquarks are: they’re examples of a dizzying variety of combinations of an already-known set of basic building blocks.

So why is this discovery exciting? If we already knew that quarks existed, and we already knew the forces between them, shouldn’t we already know all about pentaquarks?

Well, not really. People definitely expected pentaquarks to exist; they were predicted fifty years ago. But their exact properties, or how likely they were to show up? Largely unknown.

Quantum field theory is hard, and this is especially true of QCD, the theory of quarks and gluons. We know the basic rules, but calculating their large-scale consequences, which composite particles we’re going to detect and which we won’t, is still largely out of our reach. We have to supplement first-principles calculations with experimental data, to take bits and pieces and approximations until we get something reasonably sensible.

This is an important point in general, not just for pentaquarks. Often, people get very excited about the idea of a “theory of everything”. At best, such a theory would tell us the fundamental rules that govern the universe. The thing is, we already know many of these rules, even if we don’t yet know all of them. What we can’t do, in general, is predict their full consequences. Most of physics, most of science in general, is about investigating these consequences, coming up with models for things we can’t dream of calculating from first principles, and it really does start as early as “what composite particles can you make out of quarks?”

Pentaquarks have been a long time coming, long enough that people occasionally proposed models explaining why they shouldn’t exist. There are still other exotic states of quarks and gluons out there, like glueballs, that have been predicted but not yet observed. It’s going to take time, effort, and data before we fully understand composite particles, even though we know the rules of QCD.