Tag Archives: particle physics

Is Everything Really Astonishingly Simple?

Neil Turok gave a talk last week, entitled The Astonishing Simplicity of Everything. In it, he argued that our current understanding of physics is really quite astonishingly simple, and that recent discoveries seem to be confirming this simplicity.

For the right sort of person, this can be a very uplifting message. The audience was spellbound. But a few of my friends were pretty thoroughly annoyed, so I thought I’d dedicate a post to explaining why.

Neil’s talk built up to showing this graphic, one of the masterpieces of Perimeter’s publications department:

Looked at in this way, the laws of physics look astonishingly simple. One equation, a few terms, each handily labeled with a famous name of some (occasionally a little hazy) relevance to the symbol in question.

In a sense, the world really is that simple. There are only a few kinds of laws that govern the universe, and the concepts behind them are really, deep down, very simple concepts. Neil adroitly explained some of the concepts behind quantum mechanics in his talk (here represented by the Schrodinger, Feynman, and Planck parts of the equation), and I have a certain fondness for the Maxwell-Yang-Mills part. The other parts represent different kinds of particles, and different ways they can interact.

While there are only a few different kinds of laws, though, that doesn’t mean the existing laws are simple. That nice, elegant equation hides 25 arbitrary parameters, tucked away in the Maxwell-Yang-Mills, Dirac, Kobayashi-Maskawa, and Higgs parts. It also omits the cosmological constant, which fuels the expansion of the universe. And there are problems if you try to claim that the gravity part, for example, is complete.

When Neil mentions recent discoveries, he’s referring to the LHC not seeing new supersymmetric particles, and to telescopes not seeing any unusual features in the cosmic microwave background. The theories that were being tested, supersymmetry and inflation, are in many ways more complicated than the Standard Model, adding new parameters without getting rid of old ones. But I think it’s a mistake to say that if these theories are ruled out, the world is astonishingly simple. These theories are attempts to explain unlikely features of the old parameters, or unlikely features of the universe we observe. Without them, we’ve still got those unlikely, awkward, complicated bits.

Of course, Neil doesn’t think the Standard Model is all there is either, and while he’s not a fan of inflation, he does have proposals he’s worked on that explain the same observations, proposals that are also beyond the current picture. More broadly, he’s not suggesting here that the universe is just what we’ve figured out so far and no more. Rather, he’s suggesting that new proposals ought to build on the astonishing simplicity of the universe, instead of adding complexity, that we need to go back to the conceptual drawing board rather than correcting the universe with more gears and wheels.

On the one hand, that’s Perimeter’s mission statement in a nutshell. Perimeter’s independent nature means that folks here can focus on deeper conceptual modifications to the laws of physics, rather than playing with the sorts of gears and wheels that people already know how to work with.

On the other hand, a lack of new evidence doesn’t do anyone any favors. It doesn’t show the way for supersymmetry, but it doesn’t point to any of the “deep conceptual” approaches either. And so for some people, Neil’s glee at the lack of new evidence feels less like admiration for the simplicity of the cosmos and more like that one guy in a group project who sits back chuckling while everyone else fails. You can perhaps understand why some people felt resentful.

Hooray for Neutrinos!

Congratulations to Takaaki Kajita and Arthur McDonald, winners of this year’s Nobel Prize in Physics, as well as to the Super-Kamiokande and SNO teams that made their work possible.

Congratulations!

Unlike last year’s Nobel, this is one I’ve been anticipating for quite some time. Kajita and McDonald discovered that neutrinos have mass, and that discovery remains our best hint that there is something out there beyond the Standard Model.

But I’m getting a bit ahead of myself.

Neutrinos are the lightest of the massive fundamental particles, and for a long time they were thought to be completely massless. Their name means “little neutral one”, and it’s probably the last time physicists used “-ino” to mean “little”. Neutrinos are “neutral” because they have no electrical charge. They also don’t interact with the strong nuclear force. Only the weak nuclear force has any effect on them. (Well, gravity does too, but very weakly.)

This makes it very difficult to detect neutrinos: you have to catch them interacting via the weak force, which is, well, weak. Originally, that meant they had to be inferred by their absence: missing energy in nuclear reactions carried away by “something”. Now, they can be detected, but it requires massive tanks of fluid, carefully watched for the telltale light of the rare interactions between neutrinos and ordinary matter. You wouldn’t notice if billions of neutrinos passed through you every second, like an unstoppable army of ghosts. And in fact, that’s exactly what happens!

Visualization of neutrinos from a popular documentary

In the 1960s, scientists began to use these giant tanks of fluid to detect neutrinos coming from the sun. An enormous amount of effort goes into understanding the sun, and these days our models of it are pretty accurate, so it came as quite a shock when researchers observed only a third to a half of the neutrinos they expected. It wasn’t until the work of Super-Kamiokande in 1998, and SNO in 2001, that we knew the reason why.

As it turns out, neutrinos oscillate. Neutrinos are produced in what are called flavor states, which match up with the different types of leptons. There are electron-neutrinos, muon-neutrinos, and tau-neutrinos.

Radioactive processes usually produce electron-neutrinos, so those are the type that the sun produces. But on their way from the sun to the earth, these neutrinos “oscillate”: they switch between electron-neutrinos and the other types! The older detectors, focused only on electron-neutrinos, couldn’t see this. SNO’s big advantage was that it could detect the other types of neutrinos as well, and tell the difference between them, which allowed it to see that the “missing” neutrinos were really just turning into other flavors!

Meanwhile, Super-Kamiokande measured neutrinos coming not from the sun, but from cosmic rays reacting with the upper atmosphere. Some of these neutrinos came from the sky above the detector, while others traveled all the way through the earth below it, from the atmosphere on the other side. By observing “missing” neutrinos coming from below but not from above, Super-Kamiokande confirmed that it wasn’t the sun’s fault that we were missing solar neutrinos: neutrinos just oscillate!

What does this oscillation have to do with neutrinos having mass, though?

Here things get a bit trickier. I’ve laid some of the groundwork in older posts. I’ve told you to think about mass as “energy we haven’t met yet”, as the energy something has when it’s left alone, all by itself. I’ve also mentioned that conservation laws come from symmetries of nature, and that energy conservation is a result of symmetry in time.

This should make it a little more plausible when I say that when something has a specific mass, it doesn’t change. It can decay into other particles, or interact with other forces, but left alone, by itself, it won’t turn into something else. To be more specific, it doesn’t oscillate. A state with a fixed mass is symmetric in time.

The only way neutrinos can oscillate between flavor states, then, is if one flavor state is actually a combination (in quantum terms, a superposition) of different masses. The components with different masses move at different speeds, so at any point along their path you can be more or less likely to see certain masses of neutrinos. As the mix of masses changes, the flavor state changes, so neutrinos end up oscillating from electron-neutrino, to muon-neutrino, to tau-neutrino.
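(For the mathematically curious, here’s a minimal sketch of how that works, simplified to just two flavors mixing with an angle \theta. The chance that a neutrino born as an electron-neutrino with energy E shows up as a muon-neutrino after traveling a distance L is, in units where \hbar and c are one,

P(\nu_e \to \nu_\mu) = \sin^2(2\theta)\,\sin^2\!\left(\frac{\Delta m^2 \, L}{4E}\right),

where \Delta m^2 is the difference between the two squared masses. If the masses were equal, \Delta m^2 would be zero and nothing would ever oscillate.)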

So because of neutrino oscillation, neutrinos have to have mass. But this presented a problem. Most fundamental particles get their mass from interacting with the Higgs field. But, as it turns out, neutrinos can’t interact with the Higgs field. This has to do with the fact that neutrinos are “chiral”, and only come in a “left-handed” orientation. Only if they had both types of “handedness” could they get their mass from the Higgs.
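(If you want a peek at the underlying reason: schematically, a Higgs-style mass term has to pair a left-handed particle with a right-handed one, something like

m\left(\bar{\nu}_L \nu_R + \bar{\nu}_R \nu_L\right),

so with no right-handed neutrino \nu_R in the Standard Model, there’s nothing for the Higgs to pair the left-handed neutrino with. That’s just a rough sketch, suppressing the details of the Higgs coupling.)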

As-is, they have to get their mass another way, and that way has yet to be definitively shown. Whatever it ends up being, it will be beyond the current Standard Model. Maybe there actually are right-handed neutrinos, but they’re too massive, or interact too weakly, for them to have been discovered. Maybe neutrinos are Majorana particles, getting mass in a novel way that hasn’t been seen yet in the Standard Model.

Whatever we discover, neutrinos are currently our best evidence that something lies beyond the Standard Model. Naturalness may have philosophical problems, dark matter may be explained away by modified gravity…but if neutrinos have mass, there’s something we still have yet to discover. And that definitely seems worthy of a Nobel to me!

Pentaquarks!

Earlier this week, the LHCb experiment at the Large Hadron Collider announced that, after painstakingly analyzing the data from earlier runs, they have decisive evidence of a previously unobserved particle: the pentaquark.

What’s a pentaquark? In simple terms, it’s five quarks stuck together. Stick two up quarks and a down quark together, and you get a proton. Stick a quark and an antiquark together, you get a meson of some sort. Five, you get a pentaquark.

(In this case, if you’re curious: two up quarks, one down quark, one charm quark and one anti-charm quark.)

Artist’s Conception

Crucially, this means pentaquarks are not fundamental particles. Fundamental particles aren’t like species, but composite particles like pentaquarks are: they’re examples of a dizzying variety of combinations of an already-known set of basic building blocks.

So why is this discovery exciting? If we already knew that quarks existed, and we already knew the forces between them, shouldn’t we already know all about pentaquarks?

Well, not really. People definitely expected pentaquarks to exist; they were predicted fifty years ago. But their exact properties, or how likely they were to show up? Largely unknown.

Quantum field theory is hard, and this is especially true of QCD, the theory of quarks and gluons. We know the basic rules, but calculating their large-scale consequences, which composite particles we’re going to detect and which we won’t, is still largely out of our reach. We have to supplement first-principles calculations with experimental data, to take bits and pieces and approximations until we get something reasonably sensible.

This is an important point in general, not just for pentaquarks. Often, people get very excited about the idea of a “theory of everything”. At best, such a theory would tell us the fundamental rules that govern the universe. The thing is, we already know many of these rules, even if we don’t yet know all of them. What we can’t do, in general, is predict their full consequences. Most of physics, most of science in general, is about investigating these consequences, coming up with models for things we can’t dream of calculating from first principles, and it really does start as early as “what composite particles can you make out of quarks?”

Pentaquarks have been a long time coming, long enough that models were occasionally proposed to explain why they didn’t exist. There are still other exotic states of quarks and gluons out there, like glueballs, that have been predicted but not yet observed. It’s going to take time, effort, and data before we fully understand composite particles, even though we know the rules of QCD.

What’s the Matter with Dark Matter, Matt?

It’s very rare that I disagree with Matt Strassler. That said, I can’t help but think that, when he criticizes the press for focusing their LHC stories on dark matter, he’s missing an important element.

From his perspective, when the media says that the goal of the new run of the LHC is to detect dark matter, they’re just being lazy. People have heard of dark matter. They might have read that it makes up 23% of the universe, more than regular matter at 4%. So when an LHC physicist wants to explain what they’re working on to a journalist, the easiest way is to talk about dark matter. And when the journalist wants to explain the LHC to the public, they do the same thing.

This explanation makes sense, but it’s a little glib. What Matt Strassler is missing is that, from the public’s perspective, dark matter really is a central part of the LHC’s justification.

Now, I’m not saying that the LHC’s main goal is to detect dark matter! Directly detecting dark matter is pretty low on the LHC’s list of priorities. Even if it detects a new particle with the right properties to be dark matter, it still wouldn’t be able to confirm that it really is dark matter without help from another experiment that actually observes some consequence of the new particle among the stars. I agree with Matt when he writes that the LHC’s priorities for the next run are

  1. studying the newly discovered Higgs particle in great detail, checking its properties very carefully against the predictions of the “Standard Model” (the equations that describe the known apparently-elementary particles and forces)  to see whether our current understanding of the Higgs field is complete and correct, and

  2. trying to find particles or other phenomena that might resolve the naturalness puzzle of the Standard Model, a puzzle which makes many particle physicists suspicious that we are missing an important part of the story, and

  3. seeking either dark matter particles or particles that may be shown someday to be “associated” with dark matter.

Here’s the thing, though:

From the public’s perspective, why do we need to study the properties of the Higgs? Because we think it might be different than the Standard Model predicts.

Why do we think it might be different than the Standard Model predicts? More generally, why do we expect the world to be different from the Standard Model at all? Well there are a few reasons, but they generally boil down to two things: the naturalness puzzle, and the fact that the Standard Model doesn’t have anything that could account for dark matter.

Naturalness is a powerful motivation, but it’s hard to sell to the general public. Does the universe appear fine-tuned? Then maybe it just is fine-tuned! Maybe someone fine-tuned it!

These arguments miss the real problem with fine-tuning, but they’re hard to correct in a short article. Getting the public worried about naturalness is tough, tough enough that I don’t think we can demand it of the average journalist, or accuse them of being lazy if they fail to do it.

That leaves dark matter. And for all that naturalness is philosophically murky, dark matter is remarkably clear. We don’t know what 96% of the universe is made of! That’s huge, and not just in a “gee-whiz-cool” way. It shows, directly and intuitively, that physics still has something it needs to solve, that we still have particles to find. Unless you are a fan of (increasingly dubious) modifications to gravity like MOND, dark matter is the strongest possible justification for machines like the LHC.

The LHC won’t confirm dark matter on its own. It might not directly detect it; that’s still quite up in the air. And even if it finds deviations from the Standard Model, it’s not likely they’ll be directly caused by dark matter, at least not in a simple way.

But the reason that the press is describing the LHC’s mission in terms of dark matter isn’t just laziness. It’s because, from the public’s perspective, dark matter is the only vaguely plausible reason to spend billions of dollars searching for new particles, especially when we’ve already found the Higgs. We’re lucky it’s such a good reason.

Want to Make Something New? Just Turn on the Lights.

Isn’t it weird that you can collide two protons, and get something else?

It wouldn’t be so weird if you collided two protons, and out popped a quark. After all, protons are made of quarks. But how, if you collide two protons together, do you get a tau, or the Higgs boson: things that not only aren’t “part of” protons, but are more massive than a proton by themselves?

It seems weird…but in a way, it’s not. When a particle releases another particle that wasn’t inside it to begin with, it’s actually not doing anything more special than an everyday light bulb.

Eureka!

How does a light bulb work?

You probably know the basics: when an electrical current enters the bulb, the electrons in the filament start to move. They heat the filament up, releasing light.

That probably seems perfectly ordinary. But ask yourself for a moment: where did the light come from?

Light is made up of photons, elementary particles in their own right. When you flip a light switch, where do the photons come from? Were they stored in the light bulb?

Silly question, right? You don’t need to “store” light in a light bulb: light bulbs transform one type of energy (electrical, or the movement of electrons) into another type of energy (light, or photons).

Here’s the thing, though: mass is just another type of energy.

I like to describe mass as “energy we haven’t met yet”. Einstein’s equation, E=mc^2, relates a particle’s mass to its “rest energy”, the energy it would have if it stopped moving around and sat still. Even when a particle seems to be sitting still from the outside, there’s still a lot going on, though. “Composite” particles like protons have powerful forces between their internal quarks, while particles like electrons interact with the Higgs field. These processes give the particle energy, even when it’s not moving, so from our perspective on the outside they’re giving the particle mass.

What does that mean for the protons at the LHC?

The protons at the LHC have a lot of kinetic energy: they’re going 99.9999991% of the speed of light! When they collide, all that energy has to go somewhere. Just like in a light bulb, the fast-moving particles will release their energy in another form. And while some of that energy will add to the speed of the fragments, much of it will go into the mass and energy of new particles. Some of these particles will be photons, some will be tau leptons, or Higgs bosons…pretty much anything that the protons have enough energy to create.
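(To get a rough sense of the numbers, as ballpark figures just to set the scale: at 99.9999991% of the speed of light, each proton’s energy is boosted by a factor of

\gamma = \frac{1}{\sqrt{1-v^2/c^2}} \approx 7500,

so a proton with a rest energy of about 0.94 GeV is carrying roughly 7000 GeV, or 7 TeV. Two of them colliding head-on have up to 14 TeV to work with, vastly more than the 125 GeV it takes to make a Higgs boson.)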

So if you want to understand how to create new particles, you don’t need a deep understanding of the mysteries of quantum field theory. Just turn on the lights.

How to Predict the Mass of the Higgs

Did Homer Simpson predict the mass of the Higgs boson?

No, of course not.

Apart from the usual reasons, he’s off by more than a factor of six.

If you play with the numbers, it looks like Simon Singh (the popular science writer who reported the “discovery” Homer made as a throwaway joke in a 1998 Simpsons episode) made the classic physics mistake of losing track of a factor of 2\pi. In particular, it looks like he mistakenly thought that the Planck constant, h, was equal to the reduced Planck constant, \hbar, divided by 2\pi, when actually it’s \hbar times 2\pi. So while Singh read Homer’s prediction as 123 GeV, surprisingly close to the actual Higgs mass of 125 GeV found in 2012, in fact Homer predicted the somewhat more embarrassing value of 775 GeV.
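(You can check the factor yourself, if you assume, as seems plausible for a Planck-mass-style formula, that the prediction scales like the square root of the Planck constant. Confusing h = 2\pi\hbar with h = \hbar/2\pi is an error of (2\pi)^2 \approx 39.5 in h, which becomes a factor of 2\pi \approx 6.28 after taking the square root, and indeed

123 \text{ GeV} \times 2\pi \approx 773 \text{ GeV},

right around the 775 GeV quoted above.)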

D’Oh!

That was boring. Let’s ask a more interesting question.

Did Gordon Kane predict the mass of the Higgs boson?

I’ve talked before about how it seems impossible that string theory will ever make any testable predictions. The issue boils down to one of too many possibilities: string theory predicts different consequences for different ways that its six (or seven for M theory) extra dimensions can be curled up. Since there is an absurdly vast number of ways this can be done, anything you might want to predict (say, the mass of the electron) has an absurd number of possible values.

Gordon Kane and collaborators get around this problem by tackling a different one. Instead of trying to use string theory to predict things we already know, like the mass of the electron, they assume these things are already true. That is, they assume we live in a world with electrons that have the mass they really have, and quarks that have the mass they really have, and so on. They assume that we live in a world that obeys all of the discoveries we’ve already made, and a few we hope to make. And, they assume that this world is a consequence of string (or rather M) theory.

From that combination of assumptions, they then figure out the consequences for things that aren’t yet known. And in a 2011 paper, they predicted the Higgs mass would be between 105 and 129 GeV.

I have a lot of sympathy for this approach, because it’s essentially the same thing that non-string-theorists do. When a particle physicist wants to predict what will come out of the LHC, they don’t try to get it from first principles: they assume the world works as we have discovered, make a few mild extra assumptions, and see what new consequences come out that we haven’t observed yet. If those particle physicists can be said to make predictions from supersymmetry, or (shudder) technicolor, then Gordon Kane is certainly making predictions from string theory.

So why haven’t you heard of him? Even if you have, why, if this guy successfully predicted the mass of the Higgs boson, are people still saying that you can’t make predictions with string theory?

Trouble is, making predictions is tricky.

Part of the problem is timing. Gordon Kane’s paper went online in December of 2011. The Higgs mass was announced in July 2012, so you might think Kane got a six month head-start. But when something is announced isn’t the same as when it’s discovered. For a big experiment like the Large Hadron Collider, there’s a long road between the first time something gets noticed and the point where everyone is certain enough that they’re ready to announce it to the world. Rumors fly, and it’s not clear that Kane and his co-authors wouldn’t have heard them.

Assumptions are the other issue. Remember when I said, a couple paragraphs up, that Kane’s group assumed “that we live in a world that obeys all of the discoveries we’ve already made, and a few we hope to make”? That last part is what makes things tricky. There were a few extra assumptions Kane made, beyond those needed to reproduce the world we know. For many people, some of these extra assumptions are suspicious. They worry that the assumptions might have been chosen, not just because they made sense, but because they happened to give the right (rumored) mass of the Higgs.

If you want to predict something in physics, it’s not just a matter of getting in ahead of the announcement with the right number. For a clear prediction, you need to be early enough that the experiments haven’t yet even seen hints of what you’re looking for. Even then, you need your theory to be suitably generic, so that it’s clear that your prediction is really the result of the math and not of your choices. You can trade off aspects of this: more accuracy for a less generic theory, better timing for looser predictions. Get the formula right, and the world will laud you for your prediction. Wrong, and you’re Homer Simpson. Somewhere in between, though, and you end up in that tricky, tricky grey area.

Like Gordon Kane.

Living in a Broken World: Supersymmetry We Can Test

I’ve talked before about supersymmetry. Supersymmetry relates particles with different spins, linking spin 1 force-carrying particles like photons and gluons to spin 1/2 particles similar to electrons, and spin 1/2 particles in turn to spin 0 “scalar” particles, the same general type as the Higgs. I emphasized there that, if two particles are related by supersymmetry, they will have some important traits in common: the same mass and the same interactions.

That’s true for the theories I like to work with. In particular, it’s true for N=4 super Yang-Mills. Adding supersymmetry allows us to tinker with neater, cleaner theories, gaining mastery over rice before we start experimenting with the more intricate “sushi” of theories of the real world.

However, it should be pretty clear that we don’t live in a world with this sort of supersymmetry. A quick look at the Standard Model indicates that no two known particles interact in precisely the same way. When people try to test supersymmetry in the real world, they’re not looking for this sort of thing. Rather, they’re looking for broken supersymmetry.

In the past, I’ve described broken supersymmetry as like a broken mirror: the two sides are no longer the same, but you can still predict one side’s behavior from the other. When supersymmetry is broken, related particles still have the same interactions. Now, though, they can have different masses.

The simplest version of supersymmetry, N=1, gives one partner to each particle. Since no two particles in the Standard Model can be partners of each other, if we have broken N=1 supersymmetry in the real world then we need a new particle for each existing one…and each one of those particles has a potentially unknown, different mass. And if that sounds rather complicated…

Baroque enough to make Rubens happy.

That, right there, is the Minimal Supersymmetric Standard Model, the simplest thing you can propose if you want a world with broken supersymmetry. If you look carefully, you’ll notice that it’s actually a bit more complicated than just one partner for each known particle: there are a few extra Higgs fields as well!

If we’re hoping to explain anything in a simpler way, we seem to have royally screwed up. Luckily, though, the situation is not quite as ridiculous as it appears. Let’s go back to the mirror analogy.

If you look into a broken mirror, you can still have a pretty good idea of what you’ll see…but in order to do so, you have to know how the mirror is broken.

Similarly, supersymmetry can be broken in different ways, by different supersymmetry-breaking mechanisms.

The general idea is to start with a theory in which supersymmetry is precisely true, and all supersymmetric partners have the same mass. Then, consider some Higgs-like field. Like the Higgs, it can take some constant value throughout all of space, forming a background like the color of a piece of construction paper. While the rules that govern this field would respect supersymmetry, any specific value it takes wouldn’t. Instead, it would be biased: the spin 0, Higgs-like field could take on a constant value, but its spin 1/2 supersymmetric partner couldn’t. (If you want to know why, read my post on the Higgs linked above.)

Once that field takes on a specific value, supersymmetry is broken. That breaking then has to be communicated to the rest of the theory, via interactions between different particles. There are several different ways this can work: perhaps the interactions come from gravity, or are the same strength as gravity. Maybe instead they come from a new fundamental force, similar to the strong nuclear force but harder to discover. They could even come as byproducts of the breaking of other symmetries.

Each one of these options has different consequences, and leads to different predictions for the masses of undiscovered partner particles. They tend to have different numbers of extra parameters (for example, if gravity-based interactions are involved there are four new parameters, and an extra sign, that must be fixed). None of them have an entire Standard Model’s worth of new parameters…but all of them have at least a few extra.

(Brief aside: I’ve been talking about the Minimal Supersymmetric Standard Model, but these days people have largely given up on finding evidence for it, and are exploring even more complicated setups like the Next-to-Minimal Supersymmetric Standard Model.)

If we’re introducing extra parameters without explaining existing ones, what’s the point of supersymmetry?

Last week, I talked about the problem of fine-tuning. I explained that when physicists are worried about fine-tuning, what we’re really worried about is whether the sorts of ultimate (low number of parameters) theories that we expect to hold could give rise to the apparently fine-tuned world we live in. In that post, I was a little misleading about supersymmetry’s role in that problem.

The goal of introducing (broken) supersymmetry is to solve a particular set of fine-tuning problems, mostly one specific one involving the Higgs. This doesn’t mean that supersymmetry is the sort of “ultimate” theory we’re looking for; rather, supersymmetry is one of the few ways we know to bridge the gap between “ultimate” theories and a fine-tuned real world.

To explain it in terms of the language of the last post, it’s hard to find one of these “ultimate” theories that gives rise to a fine-tuned world. What’s quite a bit easier, though, is finding one of these “ultimate” theories that gives rise to a supersymmetric world, which in turn gives rise to a fine-tuned real world.

In practice, these are the sorts of theories that get tested. Very rarely are people able to propose testable versions of the more “ultimate” theories. Instead, one generally finds intermediate theories, theories that can potentially come from “ultimate” theories, and builds general versions of those that can be tested.

These intermediate theories come in multiple levels. Some physicists look for the most general version, theories like the Minimal Supersymmetric Standard Model with a whole host of new parameters. Others look for more specific versions, choices of supersymmetry-breaking mechanisms. Still others try to tie it further up, getting close to candidate “ultimate” theories like M theory (though in practice they generally make a few choices that put them somewhere in between).

The hope is that with a lot of people covering different angles, we’ll be able to make the best use of any new evidence that comes in. If “something” is out there, there are still a lot of choices for what that something could be, and it’s the job of physicists to try to understand whatever ends up being found.

Not bad for working in a broken world, huh?

The Real Problem with Fine-Tuning

You’ve probably heard it said that the universe is fine-tuned.

The Standard Model, our current best understanding of the rules that govern particle physics, is full of lots of fiddly adjustable parameters. The masses of fundamental particles and the strengths of the fundamental forces aren’t the sort of thing we can predict from first principles: we need to go out, do experiments, and find out what they are. And you’ve probably heard it argued that, if these fiddly parameters were even a little different from what they are, life as we know it could not exist.

That’s fine-tuning…or at least, that’s what many people mean when they talk about fine-tuning. It’s not exactly what physicists mean though. The thing is, almost nobody who studies particle physics thinks the parameters of the Standard Model are the full story. In fact, any theory with adjustable parameters probably isn’t the full story.

It all goes back to a point I made a while back: nature abhors a constant. The whole purpose of physics is to explain the natural world, and we have a long history of taking things that look arbitrary and linking them together, showing that reality has fewer parameters than we had thought. This is something physics is very good at. (To indulge in a little extremely amateurish philosophy, it seems to me that this is simply an inherent part of how we understand the world: if we encounter a parameter, we will eventually come up with an explanation for it.)

Moreover, at this point we have a rough idea of what this sort of explanation should look like. We have experience playing with theories that don’t have any adjustable parameters, or that only have a few: M theory is an example, but there are also more traditional quantum field theories that fill this role with no mention of string theory. From our exploration of these theories, we know that they can serve as the kind of explanation we need: in a world governed by one of these theories, people unaware of the full theory would observe what would look at first glance like a world with many fiddly adjustable parameters, parameters that would eventually turn out to be consequences of the broader theory.

So for a physicist, fine-tuning is not about those fiddly parameters themselves. Rather, it’s about the theory that predicts them. Because we have experience playing with these sorts of theories, we know roughly the sorts of worlds they create. What we know is that, while sometimes they give rise to worlds that appear fine-tuned, they tend to only do so in particular ways. Setups that give rise to fine-tuning have consequences: supersymmetry, for example, can give rise to an apparently fine-tuned universe but has to have “partner” particles that show up in powerful enough colliders. In general, a theory that gives rise to apparent fine-tuning will have some detectable consequences.

That’s where physicists start to get worried. So far, we haven’t seen any of these detectable consequences, and it’s getting to the point where we could have, had they been the sort many people expected.

Physicists are worried about fine-tuning, but not because it makes the universe “unlikely”. They’re worried because the more finely-tuned our universe appears, the harder it is to find an explanation for it in terms of the sorts of theories we’re used to working with, and the less likely it becomes that someone will discover a good explanation any time soon. We’re quite confident that there should be some explanation, hundreds of years of scientific progress strongly suggest that to be the case. But the nature of that explanation is becoming increasingly opaque.

Why You Should Be Skeptical about Faster-than-Light Neutrinos

While I do love science, I don’t always love IFL Science. They can be good at drumming up enthusiasm, but they can also be ridiculously gullible. Case in point: last week, IFL Science ran a piece on a recent paper purporting to give evidence for faster-than-light particles.

Faster than light! Sounds cool, right? Here’s why you should be skeptical:

If a science article looks dubious, you should check out the source. In this case, IFL Science links to an article on the preprint server arXiv.

arXiv is a freely accessible website where physicists and mathematicians post their articles. The site has multiple categories, corresponding to different fields. It’s got categories for essentially any type of physics you’d care to include, with the option to cross-list if you think people from multiple areas might find your work interesting.

So which category is this paper in? Particle physics? Astrophysics?

General Physics, actually.

General Physics is arXiv’s catch-all category. Some of it really is general, and can’t be put into any more specific place. But most of it, including this, falls into another category: things arXiv’s moderators think are fishy.

arXiv isn’t a journal. If you follow some basic criteria, it won’t reject your articles. Instead, dubious articles are put into General Physics, to signify that they don’t seem to belong with the other scholarship in the established categories. General Physics is a grab-bag of weird ideas and crackpot theories, a mix of fringe physicists and overenthusiastic amateurs. There probably are legitimate papers in there too…but for every paper in there, you can guarantee that some experienced researcher found it suspicious enough to send into exile.

Even if you don’t trust the moderators of arXiv, there are other reasons to be wary of faster-than-light particles.

According to Einstein’s theory of relativity, massless particles travel at the speed of light, while massive particles always travel slower. To travel faster than the speed of light, you need to have a very unusual situation: a particle whose mass is an imaginary number.
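(The quickest way to see this is from the standard relativistic relation between energy, momentum, and mass:

E^2 = p^2 c^2 + m^2 c^4, \qquad v = \frac{p c^2}{E}.

For the speed v to be greater than c, you need pc to be greater than E, and by the first equation that can only happen if m^2 c^4 is negative: the mass squared must be negative, which means the mass itself is imaginary.)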

Particles like that are called tachyons, and they’re a staple of science fiction. While there was a time when they were a serious subject of physics speculation, nowadays the general view is that tachyons are a sign we’re making bad assumptions.

Assuming that someone is a republic serial villain is a good example.

Why is that? It has to do with the nature of mass.

In quantum field theory, the things we observe as particles arise as ripples in quantum fields, which extend across space and time. The harder it is to make the field ripple, the higher the particle’s mass.

A tachyon has imaginary mass. This means that it isn’t hard to make the field ripple at all. In fact, exactly the opposite happens: it’s easier to ripple than to stay still! Any ripple, no matter how small, will keep growing until it’s not just a ripple, but a new default state for the field. Only when it becomes hard to change again will the changes stop. If it’s hard to change, though, then the particle has a normal, non-imaginary mass, and is no longer a tachyon!
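(In terms of equations, this is the same story as the Higgs. A rough sketch: give the field \phi a potential energy

V(\phi) = \tfrac{1}{2} m^2 \phi^2 + \tfrac{1}{4} \lambda \phi^4,

with m^2 negative and \lambda some positive constant controlling the interactions. Then \phi = 0 is a hilltop rather than a valley: the field rolls down and settles where \phi^2 = -m^2/\lambda, and ripples around that new default state have a perfectly ordinary, positive mass squared.)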

Thus, the modern understanding is that if a theory has tachyons in it, it’s because we’re assuming that one of the quantum fields has the wrong default state. Switching to the correct default gets rid of the tachyons.

There are deeper problems with the idea proposed in this paper. Normally, the only types of fields that can have tachyons are scalars, fields that can be defined by a single number at each point, sort of like a temperature. The particles this article is describing aren’t scalars, though, they’re fermions, the type of particle that includes everyday matter like electrons. Those sorts of particles can’t be tachyons at all without breaking some fairly important laws of physics. (For a technical explanation of why this is, Lubos Motl’s reply to the post here is pretty good.)

Of course, this paper’s author knows all this. He’s well aware that he’s suggesting bending some fairly fundamental laws, and he seems to think there’s room for it. But that, really, is the issue here: there’s room for it. The paper isn’t, as IFL Science seems to believe, six pieces of evidence for faster-than-light particles. It’s six measurements that, if you twist them around and squint and pick exactly the right model, have room for faster-than-light particles. And that’s…probably not worth an article.

Misleading Headlines and Tacky Physics, Oh My!

It’s been making the rounds on the blogosphere (despite having come out three months ago). It’s probably showed up on your Facebook feed. It’s the news that (apparently) one of the biggest discoveries of recent years may have been premature. It’s….

The Huffington Post writing a misleading headline to drum up clicks!

The article linked above is titled “Scientists Raise Doubts About Higgs Boson Discovery, Say It Could Be Another Particle”. And while that is indeed technically all true, it’s more than a little misleading.

When the various teams at the Large Hadron Collider announced their discovery of the Higgs, they didn’t say it was exactly the Higgs predicted by the Standard Model. In fact, it probably shouldn’t be: most of the options for extending the Standard Model, like supersymmetry, predict a Higgs boson with slightly different properties. Until the Higgs is measured more precisely, these slightly different versions won’t be ruled out.

Of course, “not ruled out” is not exactly newsworthy, which is the main problem with this article. The Huffington Post quotes a paper that argues, not that there is new evidence for an alternative to the Higgs, but simply that one particular alternative that the authors like hasn’t been ruled out yet.

Also, it’s probably the tackiest alternative out there.

The theory in question is called Technicolor, and if you’re imagining a certain coat then you may have an idea of how tacky we’re talking.

Any Higgs will do…

To describe technicolor, let’s take a brief aside and talk about the colors of quarks.

Rather than having one type of charge going from plus to minus like Electromagnetism, the Strong Nuclear Force has three types of charge, called red, green, and blue. Quarks are charged under the strong force, and can be red, green, or blue, while the antimatter partners of quarks have the equivalent of negative charges, anti-red, anti-green, and anti-blue. The strong force binds quarks together into protons and neutrons. The strong force is also charged under itself, which means that not only does it bind quarks together, it also binds itself together, so that it only acts at very very short range.

In combination, these two facts have one rather surprising consequence. A proton contains three quarks, but a proton’s mass is over a hundred times the total mass of three quarks. The same is true of neutrons.

The reason is that most of the mass isn’t coming from the quarks: it’s coming from the strength of the strong force. Mass, contrary to what you might think, isn’t fundamental “stuff”. It’s just a handy way of talking about energy that isn’t due to something we can easily see. Particles have energy because they move, but they also have energy due to internal interactions, as well as interactions with other fields like the Higgs field. While a lone quark’s mass is due to its interaction with the Higgs field, the quarks inside a proton are also interacting with each other, gaining enormous amounts of energy from the strong force trapped within. That energy, largely invisible from an outside view, contributes most of what we see as the mass of the proton.
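(The numbers make the point vividly. Using rough current estimates for the quark masses, which are hard to pin down precisely because quarks never appear on their own:

m_p \approx 938 \text{ MeV}, \qquad 2 m_u + m_d \approx 2(2.2) + 4.7 \approx 9 \text{ MeV},

so the quarks’ own Higgs-given masses account for only about one percent of the proton’s mass. The rest is strong-force energy.)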

Technicolor asks the following: what if it’s not just protons and neutrons? What if the mass of everything, quarks and electrons and the W and Z bosons, was due not truly to the Higgs, but to another force, like the strong force but even stronger? The Higgs we think we saw at the LHC would not be fundamental, but merely a composite, made up of  two “techni-quarks” with “technicolor” charges. [Edited to remove confusion with Preon Theory]

It’s…an idea. But it’s never been a very popular one.

Part of the problem is that the simpler versions of technicolor have been ruled out, so theorists are having to invoke increasingly baroque models to try to make it work. But that, to some extent, is also true of supersymmetry.

A bigger problem is that technicolor is just kind of…tacky.

Technicolor doesn’t say anything deep about the way the universe works. It doesn’t propose new [types of] symmetries, and it doesn’t say anything about what happens at the very highest energies. It’s not really tied in to any of the other lines of speculation in physics, it doesn’t lead to a lot of discussion between researchers. It doesn’t require an end, a fundamental lowest level with truly fundamental particles. You could potentially keep adding new levels of technicolor, new things made up of other things made up of other things, ad infinitum.

And the fleas that bite ’em, presumably.

[Note: to clarify, technicolor theories don’t actually keep going like this, their extra particles don’t require another layer of technicolor to gain their masses. That would be an actual problem with the concept itself, not a reason it’s tacky. It’s tacky because, in a world where most physicists feel like we’ve really gotten down to the fundamental particles, adding new composite objects seems baroque and unnecessary, like adding epicycles. Fleas upon fleas as it were.]

In a word, it’s not sexy.

Does that mean it’s wrong? No, of course not. As the paper linked by Huffington Post points out, technicolor hasn’t been ruled out yet.

Does that mean I think people shouldn’t study it? Again, no. If you really find technicolor meaningful and interesting, go for it! Maybe you’ll be the kick it needs to prove itself!

But good grief, until you manage that, please don’t spread your tacky, un-sexy theory all over Facebook. A theory like technicolor should get press when it’s got a good reason, and “we haven’t been ruled out yet” is never, ever, a good reason.

 

[Edit: Esben on Facebook is more well-informed about technicolor than I am, and pointed out some issues with this post. Some of them are due to me conflating technicolor with another old and tacky theory, while some were places where my description was misleading. Corrections in bold.]