How to Predict the Mass of the Higgs

Did Homer Simpson predict the mass of the Higgs boson?

No, of course not.

Apart from the usual reasons, he’s off by more than a factor of six.

If you play with the numbers, it looks like Simon Singh (the popular science writer who reported the “discovery” Homer made as a throwaway joke in a 1998 Simpsons episode) made the classic physics mistake of losing track of a factor of 2\pi. In particular, it looks like he mistakenly thought that the Planck constant, h, was equal to the reduced Planck constant, \hbar, divided by 2\pi, when actually it’s \hbar times 2\pi. So while Singh read Homer’s prediction as 123 GeV, surprisingly close to the actual Higgs mass of 125 GeV found in 2012, in fact Homer predicted the somewhat more embarrassing value of 775 GeV.
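If you want to check the numbers yourself, the blackboard formula is usually quoted as M(H^0) = \pi (1/137)^8 \sqrt{hc/G}. Here's a quick Python sketch (SI constants, result converted to GeV) comparing the correct Planck constant, h = 2\pi\hbar, against the apparent mix-up, \hbar/2\pi:

```python
import math

# Physical constants, SI units
hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # Newton's constant, m^3 kg^-1 s^-2
J_PER_GEV = 1.602176634e-10  # joules per GeV

def blackboard_mass(planck_constant):
    """Evaluate M = pi * (1/137)^8 * sqrt(planck_constant * c / G), in GeV."""
    mass_kg = math.pi * (1 / 137)**8 * math.sqrt(planck_constant * c / G)
    return mass_kg * c**2 / J_PER_GEV

# With the actual Planck constant h = 2*pi*hbar: Homer's real prediction
print(blackboard_mass(2 * math.pi * hbar))    # ~775 GeV
# With hbar/(2*pi), the apparent mix-up: the value Singh reported
print(blackboard_mass(hbar / (2 * math.pi)))  # ~123 GeV
```

Note that swapping \hbar/2\pi for h is a mistake of (2\pi)^2, but because the Planck constant sits under a square root, it shows up in the final answer as a single factor of 2\pi \approx 6.3: exactly the "factor of six" above.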

D’Oh!

That was boring. Let’s ask a more interesting question.

Did Gordon Kane predict the mass of the Higgs boson?

I’ve talked before about how it seems impossible that string theory will ever make any testable predictions. The issue boils down to one of too many possibilities: string theory predicts different consequences for different ways that its six (or seven for M theory) extra dimensions can be curled up. Since there is an absurdly vast number of ways this can be done, anything you might want to predict (say, the mass of the electron) has an absurd number of possible values.

Gordon Kane and collaborators get around this problem by tackling a different one. Instead of trying to use string theory to predict things we already know, like the mass of the electron, they assume these things are already true. That is, they assume we live in a world with electrons that have the mass they really have, and quarks that have the mass they really have, and so on. They assume that we live in a world that obeys all of the discoveries we’ve already made, and a few we hope to make. And, they assume that this world is a consequence of string (or rather M) theory.

From that combination of assumptions, they then figure out the consequences for things that aren’t yet known. And in a 2011 paper, they predicted the Higgs mass would be between 105 and 129 GeV.

I have a lot of sympathy for this approach, because it’s essentially the same thing that non-string-theorists do. When a particle physicist wants to predict what will come out of the LHC, they don’t try to get it from first principles: they assume the world works as we have discovered, make a few mild extra assumptions, and see what new consequences come out that we haven’t observed yet. If those particle physicists can be said to make predictions from supersymmetry, or (shudder) technicolor, then Gordon Kane is certainly making predictions from string theory.

So why haven’t you heard of him? Even if you have, why, if this guy successfully predicted the mass of the Higgs boson, are people still saying that you can’t make predictions with string theory?

Trouble is, making predictions is tricky.

Part of the problem is timing. Gordon Kane’s paper went online in December of 2011. The Higgs mass was announced in July 2012, so you might think Kane got a six month head-start. But when something is announced isn’t the same as when it’s discovered. For a big experiment like the Large Hadron Collider, there’s a long road between the first time something gets noticed and the point where everyone is certain enough that they’re ready to announce it to the world. Rumors fly, and it’s not clear that Kane and his co-authors wouldn’t have heard them.

Assumptions are the other issue. Remember when I said, a couple paragraphs up, that Kane’s group assumed “that we live in a world that obeys all of the discoveries we’ve already made, and a few we hope to make”? That last part is what makes things tricky. There were a few extra assumptions Kane made, beyond those needed to reproduce the world we know. For many people, some of these extra assumptions are suspicious. They worry that the assumptions might have been chosen, not just because they made sense, but because they happened to give the right (rumored) mass of the Higgs.

If you want to predict something in physics, it’s not just a matter of getting in ahead of the announcement with the right number. For a clear prediction, you need to be early enough that the experiments haven’t yet even seen hints of what you’re looking for. Even then, you need your theory to be suitably generic, so that it’s clear that your prediction is really the result of the math and not of your choices. You can trade off aspects of this: more accuracy for a less generic theory, better timing for looser predictions. Get the formula right, and the world will laud you for your prediction. Wrong, and you’re Homer Simpson. Somewhere in between, though, and you end up in that tricky, tricky grey area.

Like Gordon Kane.

Pics or It Didn’t Happen

I got a tumblr recently.

One thing I’ve noticed is that tumblr is a very visual medium. While some people can get away with massive text-dumps, they’re usually part of specialized communities. The content that’s most popular with a wide audience is, almost always, images. And that’s especially true for science-related content.

This isn’t limited to tumblr either. Most of my most successful posts have images. Most successful science posts in general involve images. Think of the most interesting science you’ve seen on the internet: chances are, it was something visual that made it memorable.

The problem is, I’m a theoretical physicist. I can’t show you pictures of nebulae in colorized glory, or images showing the behavior of individual atoms. I work with words, equations, and, when I’m lucky, diagrams.

Diagrams tend to work best, when they’re an option. I have no doubt that part of the Amplituhedron’s popularity with the press owes to Andy Gilmore’s beautiful illustration, as printed in Quanta Magazine’s piece:

Gotta get me an artist.

The problem is, the nicer one of these illustrations is, the less it actually means. For most people, the above is just a pretty picture. Sometimes it’s possible to do something more accurate, like a 3d model of one of string theory’s six-dimensional Calabi-Yau manifolds:

What, you expected a six-dimensional intrusion into our world *not* to look like Yog-Sothoth?

A lot of the time, though, we don’t even have a diagram!

In those sorts of situations, it’s tempting to show an equation. After all, equations are the real deal, the stuff we theorists are actually manipulating.

Unless you’ve got an especially obvious equation, though, there’s basically only one thing the general public will get out of it. Either the equation is surprisingly simple,

Isn’t it cute?

Or it’s unreasonably complicated,

Why yes, this is one equation that covers seventeen pages. You’re lucky I didn’t post the eight-hundred page one.

This is great for first impressions, but it’s not very repeatable. Show people one giant equation, and they’ll be impressed. Show them two, and they won’t have any idea what the difference is supposed to be.

If you’re not showing diagrams or equations, what else can you show?

The final option is, essentially, to draw a cartoon. Forget about showing what’s “really going on”, physically or mathematically. That’s what the article is for. For an image, just pick something cute and memorable that references the topic.

When I did an article for Ars Technica back in 2013, I didn’t have any diagrams to show, or any interesting equations. Their artist, undeterred, came up with a cute picture of sushi with an N=4 on it.

That sort of thing really helps! It doesn’t tell you anything technical, it doesn’t explain what’s going on…but it does mean that every time I think of the article, that image pops into my head. And in a world where nothing lasts without a picture to document it, that’s a job well done.

Explanations of Phenomena Are All Alike; Every Unexplained Phenomenon Is Unexplained in Its Own Way

Vladimir Kazakov began his talk at ICTP-SAIFR this week with a variant of Tolstoy’s famous opening to the novel Anna Karenina: “Happy families are all alike; every unhappy family is unhappy in its own way.” Kazakov flipped the order: “Un-solvable models are each un-solvable in their own way; solvable models are all alike.”

In talking about solvable and un-solvable models, Kazakov was referring to a concept called integrability, the idea that in certain quantum field theories it’s possible to avoid the messy approximations of perturbation theory and instead jump straight to the answer. Kazakov was observing that these integrable systems seem to have a deep kinship: the same basic methods appear to work to understand all of them.

I’d like to generalize Kazakov’s point, and talk about a broader trend in physics.

Much has been made over the years of the “unreasonable effectiveness of mathematics in the natural sciences”, a phrase from physicist Eugene Wigner’s famous essay of that name. There’s a feeling among some people that mathematics is much better at explaining physical phenomena than one would expect, that the world appears to be “made of math” and that it didn’t have to be.

On the surface, this is a reasonable claim. Certain mathematical ideas, group theory for example, seem to pop up again and again in physics, sometimes in wildly different contexts. The history of fundamental physics has tended to see steady progress over the years, from clunkier mathematical concepts to more and more elegant ones.

Some physicists tend to be dismissive of this. Lee Smolin in particular seems to be under the impression that mathematics is just particularly good at providing useful approximations. This perspective links to his definition of mathematics as “the study of systems of evoked relationships inspired by observations of nature,” a definition to which Peter Woit vehemently objects. Woit argues what I think any mathematician would when presented with a statement like Smolin’s: that mathematics is much more than just a useful tool for approximating observations, and that contrary to physicists’ vanity most of mathematics goes on without any explicit interest in observing the natural world.

While it’s generally rude for physicists to propose definitions for mathematics, I’m going to do so anyway. I think the following definition is one mathematicians would be more comfortable with, though it may be overly broad: Mathematics is the study of simple rules with complex consequences.

We live in a complex world. The breadth of the periodic table, the vast diversity of life, the tangled webs of galaxies across the sky, these are things that display both vast variety and a sense of order. They are, in a rather direct way, the complex consequences of rules that are at heart very very simple.

Part of the wonder of modern mathematics is how interconnected it has become. Many sub-fields, once distinct, have discovered over the years that they are really studying different aspects of the same phenomena. That’s why when you see a proof of a three-hundred-year-old mathematical conjecture, it uses terms that seem to have nothing to do with the original problem. It’s why Woit, in an essay on this topic, quotes Edward Frenkel’s description of a particular recent program as a blueprint for a “Grand Unified Theory of Mathematics”. Increasingly, complex patterns are being shown to be not only consequences of simple rules, but consequences of the same simple rules.

Mathematics itself is “unreasonably effective”. That’s why, when faced with a complex world, we shouldn’t be surprised when the same simple rules pop up again and again to explain it. That’s what explaining something is: breaking down something complex into the simple rules that give rise to it. And as mathematics progresses, it becomes more and more clear that a few closely related types of simple rules lie behind any given complex phenomenon. While each unexplained fact about the universe may seem unexplained in its own way, as things are explained bit by bit they show just how alike they really are.

Valentine’s Day Physics Poem 2015

In the third installment of an ongoing tradition (wow, this blog is old enough to have traditions!), I present 2015’s Valentine’s Day Physics Poem. Like the others, I wrote this one a long time ago. I’ve polished it up a bit since.

 

Perturbation Theory

 

When you’ve been in a system a long time, your state tends to settle

Time-energy uncertainty

That unrigorous interloper

Means the longer you wait, the more fixed you are

And I’ve been stuck

In a comfy eigenstate

Since what I might as well call t=0.

 

Yesterday though,

Out of the ether

Like an electric field

New potential entered my Hamiltonian.

 

And my state was perturbed.

 

Just a small, delicate perturbation

And an infinite series scrolls out

Waves from waves from waves

It’s a new system now

With new, unrealized energy

And I might as well

Call yesterday

t=0.

 

Our old friend

Time-energy uncertainty

Tells me not to change,

Not to worry.

Soon, probability thins

The Hamiltonian pulls us back

And we all return

Closer and closer

To a fixed, settled, normal state.

 

This freedom

This uncertainty

This perturbation

Is limited by Planck’s constant

Is vanishingly small.

 

Yet rigor

        And happiness

                Demand I include it.

All Is Dust

Joke stolen from some fellow PI postdocs.

The BICEP2 and Planck experiment teams have released a joint analysis of their data, confirming what many had already suspected: that the evidence for primordial gravitational waves found by BICEP2 can be fully explained by interstellar dust.

For those who haven’t been following the story, BICEP2 is a telescope in Antarctica. Last March, they told the press they had found evidence of primordial gravitational waves, ripples in space-time caused by the exponential expansion of the universe shortly after the Big Bang. Soon after, though, doubts were raised. It appeared that the BICEP2 team hadn’t taken proper account of interstellar dust, and in particular had mis-used some data they scraped from a presentation by the larger Planck experiment. After Planck released the correct version of their dust data, BICEP2’s claims looked even more evidently premature.

Now, the Planck team has exhaustively gone over their data and BICEP2’s, and done a full analysis. The result is a pretty thorough statement: everything BICEP2 observed can be explained by interstellar dust.

A few news outlets have been describing this as “ruling out inflation” or “ruling out gravitational waves”, both of which are misunderstandings. What Planck has ruled out is inflation (and gravitational waves caused by inflation) powerful enough to have been observed by BICEP2.

To an extent, this was something Planck had already predicted before BICEP2 made their announcement. BICEP2 announced a value for a parameter r, called the tensor-to-scalar ratio, of 0.2. This parameter r is a way to measure the strength of the gravitational waves (if you want to know what gravitational waves have to do with tensors, this post might help), and thus indirectly the strength of inflation in the early universe.

Trouble is, Planck had already released results arguing that r had to be below 0.11! So a lot of people were already rather skeptical.

With the new evidence, Planck’s bound is relaxed slightly. They now argue that r should be below 0.13, so BICEP2’s evidence was enough to introduce some fuzziness into their measurements when everything was analyzed together.

I’ve complained before about the bad aspects of BICEP2’s announcement, how releasing their data prematurely hurt the public’s trust in science and revealed the nasty side of competition for funding on massive projects. In this post, I’d like to talk a little about the positive side of the publicity around BICEP2.

Lots of theorists care about physics at very very high energies. The scale of string theory, or the Planck mass (no direct connection to the experiment, just the energy where one expects quantum gravity to be relevant), or the energy at which the fundamental forces might unify, are all much higher than any energy we can explore with a particle collider like the LHC. If you had gone out before BICEP2’s announcement and asked physicists whether we would ever see direct evidence for physics at these kinds of scales, they would have given you a resounding no. Maybe we could see indirect evidence, but any direct consequences would be essentially invisible.

All that changed with BICEP2. Their announcement of an r of 0.2 corresponds to very strong inflation, inflation of higher energy than the Planck mass!

Suddenly, there was hope that, even if we could never see such high-energy physics in a collider, we could see it out in the cosmos. This falls into a wider trend. Physicists have increasingly begun to look to the stars as the LHC continues to show nothing new. But the possibility that the cosmos could give us data that not only meets LHC energies, but surpasses them so dramatically, is something that very few people had realized.

The thing is, that hope is still alive and kicking. The new bound, restricting r to less than 0.13, still allows enormously powerful inflation. (If you’d like to work out the math yourself, equation (14) here relates the scale of inflation \Delta \phi to the Planck mass M_{\textrm{Pl}} and the parameter r.)
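As a rough stand-in for the equation in that paper, a common form of the Lyth bound relates the field excursion during inflation to r, \Delta \phi / M_{\textrm{Pl}} \gtrsim \sqrt{r/0.01}, up to order-one factors. A short Python sketch makes the point:

```python
import math

def lyth_excursion(r):
    """Field excursion in units of the Planck mass, from a common form of the
    Lyth bound: Delta-phi / M_Pl ~ sqrt(r / 0.01), up to order-one factors."""
    return math.sqrt(r / 0.01)

# BICEP2's announced value, Planck's new bound, and next-decade sensitivity
for r in (0.2, 0.13, 0.001):
    print(f"r = {r}: Delta-phi ~ {lyth_excursion(r):.2f} M_Pl")
```

Even the tightened bound of r below 0.13 still allows a field excursion bigger than the Planck mass, while the r of roughly 0.001 that future experiments could reach probes well into the sub-Planckian range.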

This isn’t just a “it hasn’t been ruled out yet” claim either. Cosmologists tell me that new experiments coming online in the next decade will have much more precision, and much better ability to take account of dust. These experiments should be sensitive to an r as low as 0.001!

With that kind of sensitivity, and the new mindset that BICEP2 introduced, we have a real chance of seeing evidence of Planck-scale physics within the next ten or twenty years. We just have to wait and see if the stars are right…

The Cycle of Exploration

Science is often described as a journey of exploration. You might imagine scientists carefully planning an expedition, gathering their equipment, then venturing out into the wilds of Nature, traveling as far as they can before returning with tales of the wonders they discovered.

Is it capybaras? Please let it be capybaras.

This misses an important part of the story, though. In science, exploration isn’t just about discovering the true nature of Nature, as important as that is. It’s also about laying the groundwork for future exploration.

Picture our explorers, traveling out into the wilderness with no idea what’s in store. With only a rough idea of the challenges they might face, they must pack for every possibility: warm clothing for mountains, sunscreen for the desert, canoes to cross rivers, cameras in case they encounter capybaras. Since they can only carry so much, they can only travel so far before they run out of supplies.

Once they return, though, the explorers can assess what they did and didn’t need. Maybe they found a jungle, full of capybaras. The next time they travel they’ll make sure to bring canoes and cameras, but they can skip the warm coats. That frees up room for supplies that are actually useful, and in the end it lets them travel farther.

Science is a lot like this. The more we know, the better questions we can ask, and the further we can explore. It’s true not just for experiments, but for theoretical work as well. Here’s a slide from a talk I’m preparing, about how this works in my sub-field of Amplitudeology.

Unfortunately not a capybara.

In theoretical physics, you often start out doing a calculation using the most general methods you have available. Once you’ve done it, you understand a bit more about your results: in particular, you can start figuring out which parts of the general method are actually unnecessary. By paring things down, you can figure out a new method, one that’s more efficient and allows for more complicated calculations. Doing those calculations then reveals new patterns, letting you propose even newer methods and do even more complicated calculations.

It’s the circle of exploration, and it really does move us all, motivating everything we do. With each discovery, we can go further, learn more, than the last attempt, keeping science churning long into the future.

Living in a Broken World: Supersymmetry We Can Test

I’ve talked before about supersymmetry. Supersymmetry relates particles with different spins, linking spin 1 force-carrying particles like photons and gluons to spin 1/2 particles similar to electrons, and spin 1/2 particles in turn to spin 0 “scalar” particles, the same general type as the Higgs. I emphasized there that, if two particles are related by supersymmetry, they will have some important traits in common: the same mass and the same interactions.

That’s true for the theories I like to work with. In particular, it’s true for N=4 super Yang-Mills. Adding supersymmetry allows us to tinker with neater, cleaner theories, gaining mastery over rice before we start experimenting with the more intricate “sushi” of theories of the real world.

However, it should be pretty clear that we don’t live in a world with this sort of supersymmetry. A quick look at the Standard Model indicates that no two known particles interact in precisely the same way. When people try to test supersymmetry in the real world, they’re not looking for this sort of thing. Rather, they’re looking for broken supersymmetry.

In the past, I’ve described broken supersymmetry as like a broken mirror: the two sides are no longer the same, but you can still predict one side’s behavior from the other. When supersymmetry is broken, related particles still have the same interactions. Now, though, they can have different masses.

The simplest version of supersymmetry, N=1, gives one partner to each particle. Since no two Standard Model particles can be each other’s partners, if we have broken N=1 supersymmetry in the real world then we need a new particle for each existing one…and each one of those particles has a potentially unknown, different mass. And if that sounds rather complicated…

Baroque enough to make Rubens happy.

That, right there, is the Minimal Supersymmetric Standard Model, the simplest thing you can propose if you want a world with broken supersymmetry. If you look carefully, you’ll notice that it’s actually a bit more complicated than just one partner for each known particle: there are a few extra Higgs fields as well!

If we’re hoping to explain anything in a simpler way, we seem to have royally screwed up. Luckily, though, the situation is not quite as ridiculous as it appears. Let’s go back to the mirror analogy.

If you look into a broken mirror, you can still have a pretty good idea of what you’ll see…but in order to do so, you have to know how the mirror is broken.

Similarly, supersymmetry can be broken in different ways, by different supersymmetry-breaking mechanisms.

The general idea is to start with a theory in which supersymmetry is precisely true, and all supersymmetric partners have the same mass. Then, consider some Higgs-like field. Like the Higgs, it can take some constant value throughout all of space, forming a background like the color of a piece of construction paper. While the rules that govern this field would respect supersymmetry, any specific value it takes wouldn’t. Instead, it would be biased: the spin 0, Higgs-like field could take on a constant value, but its spin 1/2 supersymmetric partner couldn’t. (If you want to know why, read my post on the Higgs linked above.)

Once that field takes on a specific value, supersymmetry is broken. That breaking then has to be communicated to the rest of the theory, via interactions between different particles. There are several different ways this can work: perhaps the interactions come from gravity, or are the same strength as gravity. Maybe instead they come from a new fundamental force, similar to the strong nuclear force but harder to discover. They could even come as byproducts of the breaking of other symmetries.

Each one of these options has different consequences, and leads to different predictions for the masses of undiscovered partner particles. They tend to have different numbers of extra parameters (for example, if gravity-based interactions are involved there are four new parameters, and an extra sign, that must be fixed). None of them have an entire standard model-worth of new parameters…but all of them have at least a few extra.

(Brief aside: I’ve been talking about the Minimal Supersymmetric Standard Model, but these days people have largely given up on finding evidence for it, and are exploring even more complicated setups like the Next-to-Minimal Supersymmetric Standard Model.)

If we’re introducing extra parameters without explaining existing ones, what’s the point of supersymmetry?

Last week, I talked about the problem of fine-tuning. I explained that when physicists are worried about fine-tuning, what we’re really worried about is whether the sorts of ultimate (low number of parameters) theories that we expect to hold could give rise to the apparently fine-tuned world we live in. In that post, I was a little misleading about supersymmetry’s role in that problem.

The goal of introducing (broken) supersymmetry is to solve a particular set of fine-tuning problems, mostly one specific one involving the Higgs. This doesn’t mean that supersymmetry is the sort of “ultimate” theory we’re looking for, rather supersymmetry is one of the few ways we know to bridge the gap between “ultimate” theories and a fine-tuned real world.

To explain it in terms of the language of the last post, it’s hard to find one of these “ultimate” theories that gives rise to a fine-tuned world. What’s quite a bit easier, though, is finding one of these “ultimate” theories that gives rise to a supersymmetric world, which in turn gives rise to a fine-tuned real world.

In practice, these are the sorts of theories that get tested. Very rarely are people able to propose testable versions of the more “ultimate” theories. Instead, one generally finds intermediate theories, theories that can potentially come from “ultimate” theories, and builds general versions of those that can be tested.

These intermediate theories come in multiple levels. Some physicists look for the most general version, theories like the Minimal Supersymmetric Standard Model with a whole host of new parameters. Others look for more specific versions, choices of supersymmetry-breaking mechanisms. Still others try to tie it further up, getting close to candidate “ultimate” theories like M theory (though in practice they generally make a few choices that put them somewhere in between).

The hope is that with a lot of people covering different angles, we’ll be able to make the best use of any new evidence that comes in. If “something” is out there, there are still a lot of choices for what that something could be, and it’s the job of physicists to try to understand whatever ends up being found.

Not bad for working in a broken world, huh?

The Real Problem with Fine-Tuning

You’ve probably heard it said that the universe is fine-tuned.

The Standard Model, our current best understanding of the rules that govern particle physics, is full of lots of fiddly adjustable parameters. The masses of fundamental particles and the strengths of the fundamental forces aren’t the sort of thing we can predict from first principles: we need to go out, do experiments, and find out what they are. And you’ve probably heard it argued that, if these fiddly parameters were even a little different from what they are, life as we know it could not exist.

That’s fine-tuning…or at least, that’s what many people mean when they talk about fine-tuning. It’s not exactly what physicists mean though. The thing is, almost nobody who studies particle physics thinks the parameters of the Standard Model are the full story. In fact, any theory with adjustable parameters probably isn’t the full story.

It all goes back to a point I made a while back: nature abhors a constant. The whole purpose of physics is to explain the natural world, and we have a long history of taking things that look arbitrary and linking them together, showing that reality has fewer parameters than we had thought. This is something physics is very good at. (To indulge in a little extremely amateurish philosophy, it seems to me that this is simply an inherent part of how we understand the world: if we encounter a parameter, we will eventually come up with an explanation for it.)

Moreover, at this point we have a rough idea of what this sort of explanation should look like. We have experience playing with theories that don’t have any adjustable parameters, or that only have a few: M theory is an example, but there are also more traditional quantum field theories that fill this role with no mention of string theory. From our exploration of these theories, we know that they can serve as the kind of explanation we need: in a world governed by one of these theories, people unaware of the full theory would observe what would look at first glance like a world with many fiddly adjustable parameters, parameters that would eventually turn out to be consequences of the broader theory.

So for a physicist, fine-tuning is not about those fiddly parameters themselves. Rather, it’s about the theory that predicts them. Because we have experience playing with these sorts of theories, we know roughly the sorts of worlds they create. What we know is that, while sometimes they give rise to worlds that appear fine-tuned, they tend to only do so in particular ways. Setups that give rise to fine-tuning have consequences: supersymmetry, for example, can give rise to an apparently fine-tuned universe but has to have “partner” particles that show up in powerful enough colliders. In general, a theory that gives rise to apparent fine-tuning will have some detectable consequences.

That’s where physicists start to get worried. So far, we haven’t seen any of these detectable consequences, and it’s getting to the point where we could have, had they been the sort many people expected.

Physicists are worried about fine-tuning, but not because it makes the universe “unlikely”. They’re worried because the more finely-tuned our universe appears, the harder it is to find an explanation for it in terms of the sorts of theories we’re used to working with, and the less likely it becomes that someone will discover a good explanation any time soon. We’re quite confident that there should be some explanation, hundreds of years of scientific progress strongly suggest that to be the case. But the nature of that explanation is becoming increasingly opaque.

My Travels, and Someone Else’s

I arrived in São Paulo, Brazil a few days ago. I’m going to be here for two months as part of a partnership between Perimeter and the International Centre for Theoretical Physics – South American Institute for Fundamental Research. More specifically, I’m here as part of a program on Integrability, a set of tricks that can, in limited cases, let physicists bypass the messy approximations we often have to use.

I’m still getting my metaphorical feet under me here, so I haven’t had time to think of a proper blog post. However, if you’re interested in hearing about the travels of physicists in general, a friend of mine from Stony Brook is going to the South Pole to work on the IceCube neutrino detection experiment, and has been writing a blog about it.

Why You Should Be Skeptical about Faster-than-Light Neutrinos

While I do love science, I don’t always love IFL Science. They can be good at drumming up enthusiasm, but they can also be ridiculously gullible. Case in point: last week, IFL Science ran a piece on a recent paper purporting to give evidence for faster-than-light particles.

Faster than light! Sounds cool, right? Here’s why you should be skeptical:

If a science article looks dubious, you should check out the source. In this case, IFL Science links to an article on the preprint server arXiv.

arXiv is a freely accessible website where physicists and mathematicians post their articles. The site has multiple categories, corresponding to different fields. It’s got categories for essentially any type of physics you’d care to include, with the option to cross-list if you think people from multiple areas might find your work interesting.

So which category is this paper in? Particle physics? Astrophysics?

General Physics, actually.

General Physics is arXiv’s catch-all category. Some of it really is general, and can’t be put into any more specific place. But most of it, including this, falls into another category: things arXiv’s moderators think are fishy.

arXiv isn’t a journal. If you follow some basic criteria, it won’t reject your articles. Instead, dubious articles are put into General Physics, to signify that they don’t seem to belong with the other scholarship in the established categories. General Physics is a grab-bag of weird ideas and crackpot theories, a mix of fringe physicists and overenthusiastic amateurs. There are probably legitimate papers in there too…but for every paper in there, you can be sure that some experienced researcher found it suspicious enough to send into exile.

Even if you don’t trust the moderators of arXiv, there are other reasons to be wary of faster-than-light particles.

According to Einstein’s theory of relativity, massless particles travel at the speed of light, while massive particles always travel slower. To travel faster than the speed of light, you need to have a very unusual situation: a particle whose mass is an imaginary number.
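To see where the imaginary mass comes from, here’s the standard special-relativity bookkeeping (textbook material, not anything specific to the paper in question):

```latex
% Relativistic energy-momentum relation, and the speed of a particle:
E^2 = p^2 c^2 + m^2 c^4 , \qquad v = \frac{p c^2}{E} .
% Going faster than light, v > c, requires pc > E. Squaring both sides:
p^2 c^2 > E^2 = p^2 c^2 + m^2 c^4 \;\Longrightarrow\; m^2 c^4 < 0 .
```

So a speed above c isn’t a matter of pushing harder: it’s only consistent at all if m² is negative, which means m itself must be an imaginary number.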

Particles like that are called tachyons, and they’re a staple of science fiction. While there was a time when they were a serious subject of physics speculation, nowadays the general view is that tachyons are a sign we’re making bad assumptions.

Assuming that someone is a Republic serial villain is a good example.

Why is that? It has to do with the nature of mass.

In quantum field theory, the particles we observe arise as ripples in quantum fields that extend across space and time. The harder it is to make a field ripple, the higher the mass of the corresponding particle.

A tachyon has imaginary mass. This means that it isn’t hard to make the field ripple at all. In fact, exactly the opposite happens: it’s easier to ripple than to stay still! Any ripple, no matter how small, will keep growing until it’s not just a ripple, but a new default state for the field. Only when it becomes hard to change again will the changes stop. If it’s hard to change, though, then the particle has a normal, non-imaginary mass, and is no longer a tachyon!

Thus, the modern understanding is that if a theory has tachyons in it, it’s because we’re assuming that one of the quantum fields has the wrong default state. Switching to the correct default gets rid of the tachyons.
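As a sketch of how that works, take the textbook example of a scalar field with a tachyonic (negative) mass-squared term, the same mechanism at work in the Higgs potential:

```latex
% A scalar potential with a tachyonic mass term:
V(\phi) = \tfrac{1}{2}\,\mu^2 \phi^2 + \tfrac{1}{4}\,\lambda\, \phi^4 ,
\qquad \mu^2 < 0 , \quad \lambda > 0 .
% The "default" phi = 0 is unstable; the true minima sit at
\phi_0 = \pm\sqrt{-\mu^2 / \lambda} .
% The curvature (mass-squared of ripples) at a true minimum is
V''(\phi_0) = \mu^2 + 3\lambda\,\phi_0^2 = -2\mu^2 > 0 .
```

Ripples around the correct default have a perfectly ordinary, positive mass-squared: the tachyon was an artifact of expanding around the wrong vacuum.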

There are deeper problems with the idea proposed in this paper. Normally, the only types of fields that can have tachyons are scalars: fields defined by a single number at each point, a bit like a temperature. The particles this article describes aren’t scalars, though; they’re fermions, the type of particle that includes everyday matter like electrons. Particles like that can’t be tachyons at all without breaking some fairly important laws of physics. (For a technical explanation of why, Lubos Motl’s reply to the post here is pretty good.)

Of course, this paper’s author knows all this. He’s well aware that he’s suggesting bending some fairly fundamental laws, and he seems to think there’s room for it. But that, really, is the issue here: there’s room for it. The paper isn’t, as IFL Science seems to believe, six pieces of evidence for faster-than-light particles. It’s six measurements that, if you twist them around and squint and pick exactly the right model, have room for faster-than-light particles. And that’s…probably not worth an article.