Why You Should Be Skeptical about Faster-than-Light Neutrinos

While I do love science, I don’t always love IFL Science. They can be good at drumming up enthusiasm, but they can also be ridiculously gullible. Case in point: last week, IFL Science ran a piece on a recent paper purporting to give evidence for faster-than-light particles.

Faster than light! Sounds cool, right? Here’s why you should be skeptical:

If a science article looks dubious, you should check out the source. In this case, IFL Science links to an article on the preprint server arXiv.

arXiv is a freely accessible website where physicists and mathematicians post their articles. The site has multiple categories, corresponding to different fields. It’s got categories for essentially any type of physics you’d care to include, with the option to cross-list if you think people from multiple areas might find your work interesting.

So which category is this paper in? Particle physics? Astrophysics?

General Physics, actually.

General Physics is arXiv’s catch-all category. Some of what ends up there really is general, and can’t be put in any more specific place. But most of it, this paper included, lands there for a different reason: arXiv’s moderators thought it looked fishy.

arXiv isn’t a journal. If you follow some basic criteria, it won’t reject your articles. Instead, dubious articles are put into General Physics, to signify that they don’t seem to belong with the other scholarship in the established categories. General Physics is a grab-bag of weird ideas and crackpot theories, a mix of fringe physicists and overenthusiastic amateurs. There probably are legitimate papers in there too…but for every paper in there, you can be sure that some experienced researcher found it suspicious enough to send into exile.

Even if you don’t trust the moderators of arXiv, there are other reasons to be wary of faster-than-light particles.

According to Einstein’s theory of relativity, massless particles travel at the speed of light, while massive particles always travel slower. To travel faster than the speed of light, you need to have a very unusual situation: a particle whose mass is an imaginary number.
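You can see where the imaginary number comes from in the standard relativistic formula for a particle’s energy:

```latex
E = \frac{m c^2}{\sqrt{1 - v^2/c^2}}
```

For v < c the square root is an ordinary real number. For v > c it becomes imaginary, so the only way for the energy E to stay real is for the mass m to be imaginary as well.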

Particles like that are called tachyons, and they’re a staple of science fiction. While there was a time when they were a serious subject of physics speculation, nowadays the general view is that tachyons are a sign we’re making bad assumptions.


Why is that? It has to do with the nature of mass.

In quantum field theory, what we observe as particles arise as ripples in quantum fields, extending across space and time. The harder it is to make the field ripple, the higher the particle’s mass.

A tachyon has imaginary mass. This means that it isn’t hard to make the field ripple at all. In fact, exactly the opposite happens: it’s easier to ripple than to stay still! Any ripple, no matter how small, will keep growing until it’s not just a ripple, but a new default state for the field. Only when it becomes hard to change again will the changes stop. If it’s hard to change, though, then the particle has a normal, non-imaginary mass, and is no longer a tachyon!

Thus, the modern understanding is that if a theory has tachyons in it, it’s because we’re assuming that one of the quantum fields has the wrong default state. Switching to the correct default gets rid of the tachyons.
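In equations, the “default state” is the minimum of the field’s potential energy. The simplest potential with a tachyon looks like:

```latex
V(\phi) = \frac{1}{2} m^2 \phi^2 + \frac{\lambda}{4} \phi^4 , \qquad m^2 < 0
```

The mass-squared is the curvature of V at the field’s default value. Around φ = 0 that curvature is negative, which is the imaginary-mass tachyon. But the true minima sit at φ² = −m²/λ, and the curvature there is −2m² > 0: expand around the correct default, and the particle has an ordinary, real mass.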

There are deeper problems with the idea proposed in this paper. Normally, the only types of fields that can have tachyons are scalars, fields that can be defined by a single number at each point, sort of like a temperature. The particles this article is describing aren’t scalars, though, they’re fermions, the type of particle that includes everyday matter like electrons. Those sorts of particles can’t be tachyons at all without breaking some fairly important laws of physics. (For a technical explanation of why this is, Lubos Motl’s reply to the post here is pretty good.)

Of course, this paper’s author knows all this. He’s well aware that he’s suggesting bending some fairly fundamental laws, and he seems to think there’s room for it. But that, really, is the issue here: there’s room for it. The paper isn’t, as IFL Science seems to believe, six pieces of evidence for faster-than-light particles. It’s six measurements that, if you twist them around and squint and pick exactly the right model, have room for faster-than-light particles. And that’s…probably not worth an article.

Merry Newtonmas!

Yesterday, people around the globe celebrated the birth of someone whose new perspective and radical ideas changed history, perhaps more than anyone else’s.

I’m referring, of course, to Isaac Newton.

Ho ho ho!

Born on December 25, 1642, Newton is justly famed as one of history’s greatest scientists. By relating gravity on Earth to the force that holds the planets in orbit, Newton arguably created physics as we know it.

However, as with many prominent scientists, Newton’s greatness lay not so much in what he discovered as in how he discovered it. Others had already had similar ideas about gravity. Robert Hooke in particular had written to Newton mentioning a law much like the one Newton eventually wrote down, leading Hooke to accuse Newton of plagiarism.

Newton’s great accomplishment was not merely proposing his law of gravitation, but justifying it, in a way that no-one had ever done before. When others (Hooke for example) had proposed similar laws, they were looking for a law that perfectly described the motion of the planets. Kepler had already proposed ellipse-shaped orbits, but it was clear by Newton and Hooke’s time that such orbits did not fully describe the motion of the planets. Hooke and others hoped that if some sufficiently skilled mathematician started with the correct laws, they could predict the planets’ motions with complete accuracy.

The genius of Newton was in attacking this problem from a different direction. In particular, Newton showed that his law of gravitation really does produce those (not-quite-right) ellipses…provided there is only one planet.

With multiple planets, things become much more complicated. Even just two planets orbiting a single star is so difficult a problem that no general, exact solution can be written down in closed form.

Sensibly, Newton didn’t try to write down an exact solution. Instead, he figured out an approximation: since the Sun is much bigger than the planets, he could simplify the problem and arrive at a partial solution. While he couldn’t perfectly predict the motions of the planets, he could do better than calling their orbits “approximately” ellipses: he had a prediction for how far from ellipses they should be.

That step was Newton’s great contribution. That insight, that science was able not just to provide exact answers to simpler problems but to guess how far those answers might be off, was something no-one else had really thought about before. It led to error analysis in experiments, and perturbation methods in theory. More generally, it led to the idea that scientists have to be responsible, not just for getting things “almost right”, but for explaining how their results are still wrong.
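Here’s a toy version of Newton’s strategy in a few lines of code (a sketch with made-up units, a fixed Sun, and a hypothetical “Jupiter” riding a circular track — nothing like his actual calculation): solve the one-planet problem, then measure how much a second body pushes the answer away from a perfect circle.

```python
import math

def accel(x, y, t, jupiter_mass):
    # Gravity from the Sun (mass 1, fixed at the origin), with G = 1.
    r3 = (x * x + y * y) ** 1.5
    ax, ay = -x / r3, -y / r3
    if jupiter_mass > 0:
        # A perturber on a fixed circular track of radius 5
        # (angular velocity from Kepler's third law: w = r**-1.5).
        w = 5.0 ** -1.5
        jx, jy = 5 * math.cos(w * t), 5 * math.sin(w * t)
        dx, dy = jx - x, jy - y
        d3 = (dx * dx + dy * dy) ** 1.5
        ax += jupiter_mass * dx / d3
        ay += jupiter_mass * dy / d3
    return ax, ay

def orbit_radius_deviation(jupiter_mass, steps=20000, dt=2 * math.pi / 2000):
    # Planet starts on a circular orbit of radius 1; leapfrog integration.
    x, y, vx, vy = 1.0, 0.0, 0.0, 1.0
    ax, ay = accel(x, y, 0.0, jupiter_mass)
    worst = 0.0
    for i in range(steps):
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
        x += dt * vx;        y += dt * vy
        ax, ay = accel(x, y, (i + 1) * dt, jupiter_mass)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
        worst = max(worst, abs(math.hypot(x, y) - 1.0))
    return worst

print(orbit_radius_deviation(0.0))    # unperturbed: only tiny numerical error
print(orbit_radius_deviation(0.001))  # with a perturber: a small, real deviation
```

The unperturbed run deviates only by numerical error; turn on the perturber and the deviation grows substantially — and, crucially, you can estimate in advance roughly how big it should be from the perturber’s mass.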

So this holiday season, let’s give thanks to the man whose ideas created science as we know it. Merry Newtonmas everyone!

Sorry Science Fiction, Quantum Gravity Doesn’t Do What You Think It Does

I saw Interstellar this week. There’s been a lot of buzz among physicists about it, owing in part to the involvement of black hole expert Kip Thorne in the film’s development. I’d just like to comment on one aspect of the film that bugged me, a problem that shows up pretty frequently in science fiction.

In the film, Michael Caine plays a theoretical physicist working for NASA. His dream is to save humanity from an Earth plagued by a blight that is killing off the world’s food supply. To do this, he plans to build giant anti-gravity spaceships capable of taking as many people as possible away from the dying Earth to find a new planet capable of supporting human life. And in order to do that, apparently, he needs a theory of quantum gravity.

The thing is, quantum gravity has nothing to do with making giant anti-gravity spaceships.

Michael Caine lied to us?

This mistake isn’t unique to Interstellar. Lots of science fiction works assume that once we understand quantum gravity then everything else will follow: faster than light travel, wormholes, anti-gravity…pretty much every sci-fi staple.

It’s not just present in science fiction, either. Plenty of science popularizers like to mention all of the marvelous technology that’s going to come out of quantum gravity, including people who really should know better. A good example comes from a recent piece by quantum gravity researcher Sabine Hossenfelder:

But especially in high energy physics and quantum gravity, progress has basically stalled since the development of the standard model in the mid 70s. […] it is a frustrating situation and this makes you wonder if not there are other reasons for lack of progress, reasons that we can do something about. Especially in a time when we really need a game changer, some breakthrough technology, clean energy, that warp drive, a transporter!

None of these are things we’re likely to get from quantum gravity, and the reason is rather basic. It boils down to one central issue: if we can’t control the classical physics, we can’t control the quantum physics.

When science fiction authors speculate about the benefits of quantum gravity, they’re thinking about the benefits of quantum mechanics. Understanding the quantum world has allowed some of the greatest breakthroughs of the 20th century, from miniaturizing circuits to developing novel materials.

The assumption writers make is that the same will be true for quantum gravity: understand it, and gravity technology will flow. But this assumption forgets that quantum mechanics was so successful because it let us understand things we were already working with.

In order to miniaturize circuits, you have to know how to build a circuit in the first place. Only then, when you try to make the circuit smaller and don’t understand why it stops working, does quantum mechanics step in to tell you what you’re missing. Quantum mechanics helps us develop new materials because it helps us understand how existing materials work.

We don’t have any gravity circuits to shrink down, or gravity materials to understand. When gravity limits our current technology, it does so on a macro level (such as the effect of the Earth’s gravity on GPS satellites) not on a quantum level. If there isn’t a way to build anti-gravity technology using classical physics, there probably isn’t a way using quantum physics.

Scientists and popularizers generally argue that we can’t know what the future will bring. This is true, up to a point. When Maxwell wrote down equations to unify electricity and magnetism, he could not have imagined the wealth of technology we have today. And often, technologies come from unexpected places. The spinoff technologies of the space race are the most popular example; another is that CERN (the facility that houses the Large Hadron Collider) was instrumental in developing the world wide web.

While it’s great to emphasize the open-ended promise of scientific advances (especially on grant applications!), in this context it’s misleading because it erases the very real progress people are making on these issues without quantum gravity.

Want to invest in clean energy? There are a huge number of scientists working on it, with projects ranging from creating materials that can split water using solar energy to nuclear fusion. Quantum gravity is just about the last science likely to give us clean energy, and I’m including the social sciences in that assessment.

How about a warp drive?

Indeed, how about one?

That’s not obviously related to quantum gravity either. There has actually been some research into warp drives, but it’s based on a solution to Einstein’s equations without quantum mechanics. It’s not clear whether quantum gravity has anything meaningful to say about them. There are points to be made, but from what I’ve been able to gather they have more to do with how other quantum systems interact with gravity than with the quantum properties of gravity itself. The same seems to apply to the difficulties involved in wormholes, another sci-fi concept that comes straight out of Einstein’s theory.

As for teleportation, that’s an entirely different field, and it probably doesn’t work how you think it does.

So what is quantum gravity actually good for?

Quantum gravity becomes relevant when gravity becomes very strong, in places where Einstein’s theory would predict infinitely dense singularities. That means the inside of black holes, and the Big Bang. Quantum gravity smooths out these singularities, which means it can tell you about the universe’s beginnings (by smoothing out the big bang and showing what could cause it), or about its distant future (for example, puzzles in the long-term evolution of black holes).

These are important questions! They tell us about where we come from and where we’re going: in short, about our ultimate place in the universe. Almost every religion in history has tried to answer these questions. They’re very important to us as a species, even if they don’t directly impact our daily lives.

What they are not, however, is a source of technology.

So please, science fiction, use some other field for your plot-technology. There are plenty of scientific advances to choose from, people who are really working on cutting-edge futuristic stuff. They don’t need to wait on a theory of quantum gravity to get their work done. Neither do you.

Where do you get all those mathematical toys?

I’m at a conference at Caltech this week, so it’s going to be a shorter post than usual.

The conference is on something called the Positive Grassmannian, a precursor to Nima Arkani-Hamed’s much-hyped Amplituhedron. Both are variants of a central idea: take complicated calculations in physics and express them in terms of clean, well-defined mathematical objects.

Because of this, this conference is attended not just by physicists, but by mathematicians as well, and it’s been interesting watching how the two groups interact.

From a physics perspective, mathematicians are great because they give us so many useful tools! Many significant advances in my field happened because a physicist talked to a mathematician and learned that a problem that had stymied the physics world had already been solved in the math community.

This tends to lead to certain expectations among physicists. If a mathematician gives a talk at a physics conference, we expect them to present something we can use. Our ideal math talk is like when Q presents the gadgets at the beginning of a Bond movie: a ton of new toys with just enough explanation for us to use them to save the day in the second act.

Pictured: Mathematicians, through Physicist eyes

You may see the beginning of a problem here, once you realize that physicists are the James Bond in this analogy.

Physicists like to see themselves as the protagonists of their own stories. That’s true of every field, though, to some degree or another. And it’s certainly true of mathematicians.

Mathematicians don’t go to physics conferences just to be someone’s supporting cast. They do it because physics problems are interesting to them: by hearing what physicists are working on they hope to get inspiration for new mathematical structures, concepts jury-rigged together by physicists that represent corners that mathematics hasn’t yet explored. Their goal is to take home an idea that they can turn into something productive, gaining glory among their fellow mathematicians. And if that sounds familiar…

Pictured: Physicists, through Mathematician eyes

While it’s amusing to watch the different expectations go head-to-head, the best collaborations between physicists and mathematicians are those where both sides respect that the other is the protagonist of their own story. Allow for give-and-take, paying attention not just to what you find interesting but to what the other person does, without assuming a tired old movie script, and it’s possible to make great progress.

Of course, that’s true of life in general as well.

The Three Things Everyone Gets Wrong about the Big Bang

Ah, the Big Bang, our most science-y of creation myths. Everyone knows the story of how the universe and all its physical laws emerged from nothing in a massive explosion, growing from a singularity to the size of a breadbox until, over billions of years, it became the size it is today.


A hot dense state, if you know what I mean.

…actually, almost nothing in that paragraph is true. There are a lot of myths about the Big Bang, born from physicists giving sloppy explanations. Here are three things most people get wrong about the Big Bang:

1. A Massive Explosion:

When you picture the big bang, don’t you imagine that something went, well, bang?

In movies and TV shows, a time traveler visiting the big bang sees only an empty void. Suddenly, an explosion lights up the darkness, shooting out stars and galaxies until it has created the entire universe.

Astute readers might find this suspicious: if the entire universe was created by the big bang, then where does the “darkness” come from? What does the universe explode into?

The problem here is that, despite the name, the big bang was not actually an explosion.

In picturing the universe as an explosion, you’re imagining the universe as having finite size. But it’s quite likely that the universe is infinite. Even if it is finite, it’s finite like the surface of the Earth: as Magellan’s expedition (and others) demonstrated, you can’t sail off the “edge” of the Earth no matter how far you go: eventually, you’ll just end up where you started. If the universe is truly finite, the same is true of it.

Rather than an explosion in one place, the big bang was an explosion everywhere at once. Every point in space was “exploding” at the same time. Each point was moving farther apart from every other point, and the whole universe was, as the song goes, hot and dense.
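You can check that this kind of expansion has no center with a few lines of arithmetic — a one-dimensional toy universe, nothing more:

```python
# A toy "expanding universe": points on a line, stretched by a scale factor.
# No matter which point you call home, every other point recedes from it,
# faster the farther away it is -- the expansion has no center.

def stretch(points, a):
    """Scale every coordinate by the factor a."""
    return [a * x for x in points]

points = [-2.0, -1.0, 0.0, 1.0, 3.0]
later = stretch(points, 2.0)  # the toy universe doubles in size

for home in range(len(points)):
    for i in range(len(points)):
        sep_before = points[i] - points[home]
        sep_after = later[i] - later[home]
        # Every separation doubles, whichever point we measure from.
        assert abs(sep_after - 2.0 * sep_before) < 1e-12
```

Whichever point you stand on, everything else recedes from you, and twice-as-distant points recede twice as fast: Hubble’s law, with no special center anywhere.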

So what do physicists mean when they say that the universe at some specific time was the size of a breadbox, or a grapefruit?

It’s just sloppy language. When these physicists say “the universe”, what they mean is just the part of the universe we can see today, the Hubble Volume. It is that (enormously vast) space that, once upon a time, was merely the size of a grapefruit. But it was still adjacent to infinitely many other grapefruits of space, each one also experiencing the big bang.

2. It began with a Singularity:

This one isn’t so much definitely wrong as probably wrong.

If the universe obeys Einstein’s Theory of General Relativity perfectly, then we can make an educated guess about how it began. By tracking back the expansion of the universe to its earliest stages, we can infer that the universe was once as small as it can get: a single, zero-dimensional point, or a singularity. The laws of general relativity work the same backwards and forwards in time, so just as we could see a star collapsing and know that it is destined to form a black hole, we can see the universe’s expansion and know that if we traced it back it must have come from a single point.

This is all well and good, but there’s a problem with how it begins: “If the universe obeys Einstein’s Theory of General Relativity perfectly”.

In this situation, general relativity predicts an infinitely small, infinitely dense point. As I’ve talked about before, in physics an infinite result is almost never correct. When we encounter infinity, almost always it means we’re ignoring something about the nature of the universe.

In this case, we’re ignoring Quantum Mechanics. Quantum Mechanics naturally makes physics somewhat “fuzzy”: the Uncertainty Principle means that a quantum state can never be exactly in one specific place.
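The relation in question is Heisenberg’s familiar inequality:

```latex
\Delta x \, \Delta p \ge \frac{\hbar}{2}
```

Squeeze a state into a smaller and smaller region (shrinking Δx) and its spread in momentum Δp has to grow without bound, which is why a perfectly pointlike, zero-size universe looks suspect once quantum mechanics is taken into account.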

Combining quantum mechanics and general relativity is famously tricky, and the difficulty boils down to getting rid of pesky infinite results. However, several approaches exist to solving this problem, the most prominent of them being String Theory.

If you ask someone to list string theory’s successes, one thing you’ll always hear mentioned is string theory’s ability to understand black holes. In general relativity, black holes are singularities: infinitely small, and infinitely dense. In string theory, black holes are made up of combinations of fundamental objects: strings and membranes, curled up tight, but crucially not infinitely small. String theory smooths out singularities and tamps down infinities, and the same story applies to the infinity of the big bang.

String theory isn’t alone in this, though. Less popular approaches to quantum gravity, like Loop Quantum Gravity, also tend to “fuzz” out singularities. Whichever approach you favor, it’s pretty clear at this point that the big bang didn’t really begin with a true singularity, just a very compressed universe.

3. It created the laws of physics:

Physicists will occasionally say that the big bang determined the laws of physics. Fans of Anthropic Reasoning in particular will talk about different big bangs in different places in a vast multi-verse, each producing different physical laws.

I’ve met several people who were very confused by this. If the big bang created the laws of physics, then what laws governed the big bang? Don’t you need physics to get a big bang in the first place?

The problem here is that “laws of physics” doesn’t have a precise definition. Physicists use it to mean different things.

In one (important) sense, each fundamental particle is its own law of physics. Each one represents something that is true across all of space and time, a fact about the universe that we can test and confirm.

However, these aren’t the most fundamental laws possible. In string theory, the particles that exist in our four dimensions (three space dimensions, and one of time) change depending on how six “extra” dimensions are curled up. Even in ordinary particle physics, the value of the Higgs field determines the mass of the particles in our universe, including things that might feel “fundamental” like the difference between electromagnetism and the weak nuclear force. If the Higgs field had a different value (as it may have early in the life of the universe), these laws of physics would have been different. These sorts of laws can be truly said to have been created by the big bang.
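For the Higgs example, the standard tree-level relation is simple: each fermion’s mass is its coupling to the Higgs times the Higgs field’s value,

```latex
m_f = \frac{y_f \, v}{\sqrt{2}}
```

where v ≈ 246 GeV is the field’s value today and y_f is a separate coupling for each fermion f. Change v, and every one of those masses changes with it.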

The real fundamental laws, though, don’t change. Relativity is here to stay, no matter what particles exist in the universe. So is quantum mechanics. The big bang didn’t create those laws, it was a natural consequence of them. Rather than springing physics into existence from nothing, the big bang came out of the most fundamental laws of physics, then proceeded to fix the more contingent ones.

In fact, the big bang might not have even been the beginning of time! As I mentioned earlier in this article, most approaches to quantum gravity make singularities “fuzzy”. One thing these “fuzzy” singularities can do is “bounce”, going from a collapsing universe to an expanding universe. In Cyclic Models of the universe, the big bang was just the latest in a cycle of collapses and expansions, extending back into the distant past. Other approaches, like Eternal Inflation, instead think of the big bang as just a local event: our part of the universe happened to be dense enough to form a big bang, while other regions were expanding even more rapidly.

So if you picture the big bang, don’t just imagine an explosion. Imagine the entire universe expanding at once, changing and settling and cooling until it became the universe as we know it today, starting from a world of tangled strings or possibly an entirely different previous universe.

Sounds a bit more interesting to visit in your TARDIS, no?

What Can Replace Space-Time?

Nima Arkani-Hamed is famous for believing that space-time is doomed, that as physicists we will have to abandon the concepts of space and time if we want to find the ultimate theory of the universe. He’s joked that this is what motivates him to get up in the morning. He tends to bring it up often in talks, both for physicists and for the general public.

The latter especially tend to be baffled by this idea. I’ve heard a lot of questions like “if space-time is doomed, what could replace it?”

In the past, Nima and I both tended to answer this question with a shrug. (Though a more elaborate shrug in his case.) This is the honest answer: we don’t know what replaces space-time, we’re still looking for a good solution. Nima’s Amplituhedron may eventually provide an answer, but it’s still not clear what that answer will look like. I’ve recently realized, though, that this way of responding to the question misses its real thrust.

When people ask me “what could replace space-time?” they’re not asking “what will replace space-time?” Rather, they’re asking “what could possibly replace space-time?” It’s not that they want to know the answer before we’ve found it, it’s that they don’t understand how any reasonable answer could possibly exist.

I don’t think this concern has been addressed much by physicists, and it’s a pity, because it’s not very hard to answer. You don’t even need advanced physics. All you need is some fairly old philosophy. Specifically we’ll use concepts from metaphysics, the branch of philosophy that deals with categories of being.

Think about your day yesterday. Maybe you had breakfast at home, drove to work, had a meeting, then went home and watched TV.

Each of those steps can be thought of as an event. Each event is something that happened that we want to pay attention to. You having breakfast was an event, as was you arriving at work.

These events are connected by relations. Here, each relation specifies the connection between two events. There might be a relation of cause-and-effect, for example, between you arriving at work late and meeting with your boss later in the day.

Space and time, then, can be seen as additional types of relations. Your breakfast is related to you arriving at work: it is before it in time, and some distance from it in space. Before and after, distant in one direction or another, these are all relations between the two events.

Using these relations, we can infer other relations between the events. For example, if we know the distance relating your breakfast and arriving at work, we can make a decent guess at another relation, the difference in amount of gas in your car.

This way of viewing the world, events connected by relations, is already quite common in physics. With Einstein’s theory of relativity, it’s hard to say exactly when or where an event happened, but the overall relationship between two events (distance in space and time taken together) can be thought of much more precisely. As I’ve mentioned before, the curved space-time necessary for Einstein’s theory of gravity can be thought of equally well as a change in the way you measure distances between two points.

So if space and time are relations between events, what would it mean for space-time to be doomed?

The key thing to realize here is that space and time are very specific relations between events, with very specific properties. Some of those properties are what cause problems for quantum gravity, problems which prompt people to suggest that space-time is doomed.

One of those properties is the fact that, when you multiply two distances together, it doesn’t matter which order you do it in. This probably sounds obvious, because you’re used to multiplying normal numbers, for which this is always true anyway. But even slightly more complicated mathematical objects, like matrices, don’t always obey this rule. If distances were this sort of mathematical object, then multiplying them in different orders could give slightly different results. If the difference were small enough, we wouldn’t be able to tell that it was happening in everyday life: distance would have given way to some more complicated concept, but it would still act like distance for us.
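Here’s what that toy model looks like concretely (purely illustrative — these matrices are made up for the example, not taken from any real theory): two “distances” that are almost ordinary numbers, whose products differ by a tiny, order-dependent correction.

```python
# Ordinary numbers commute: 3 * 5 == 5 * 3. Matrices need not.

def matmul(A, B):
    """Multiply two 2x2 matrices stored as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

eps = 1e-6  # size of the non-commuting correction

# Two "distances": an ordinary number times the identity matrix,
# plus a tiny piece that doesn't commute with the other one.
X = [[3.0, eps], [0.0, 3.0]]
Y = [[5.0, 0.0], [eps, 5.0]]

XY = matmul(X, Y)
YX = matmul(Y, X)

print(XY[0][0], YX[0][0])   # both products are almost the ordinary 15...
print(XY[0][0] - YX[0][0])  # ...but differ by a correction of order eps**2
```

At everyday scales (eps tiny) the two orders agree to twelve decimal places, so the “distances” still act like ordinary distances; only at very small scales would the difference show up.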

That specific idea isn’t generally suggested as a solution to the problems of space and time, but it’s a useful toy model that physicists have used to solve other problems.

It’s the general principle I want to get across: if you want to replace space and time, you need a relation between events. That relation should behave like space and time on the scales we’re used to, but it can be different on very small scales (Big Bang, inside of Black Holes) and on very large scales (long-term fate of the universe).

Space-time is doomed, and we don’t know yet what’s going to replace it. But whatever it is, whatever form it takes, we do know one thing: it’s going to be a relation between events.

Research or Conference? Can’t it be both?

“If you’re there for two months, for sure you’ll be doing research.”

I wanted to be snarky. I wanted to point out that, as a theoretical physicist, I do research wherever I go. I wanted to say that I even did research on the drive over. (This may not have been true, I think I mostly thought about Magic the Gathering cards.)

More than any of those, though, I wanted to get my travel visa. So instead I said,

“That’s fair.”

“Mmhmm, that’s fair.” Looking down at the invitation letter, she triumphantly pointed to the name of the inviting institution: “South American Institute for Fundamental Research.”

A bit of background: I’m going to Brazil this winter. Partly, this is because winter in Canada is not especially desirable, but it’s also because Sao Paulo’s International Center for Theoretical Physics is running a Program on Integrability, the arcane set of techniques that seeks to bypass the approximate perturbations we often use in particle physics and find full, exact results.

What do I mean by a Program? It’s not the sort of scientific program I’ve talked about before, though the ideas are related. When an institute holds a Program, they’re declaring a theme. For a certain length of time (generally from a few months to a whole semester), there will be a large number of talks at the institute focused on some particular scientific theme. The institute invites people from all over the world who work on that theme. Those people are there to give and attend talks, but they’re also there to share ideas with each other, to network and collaborate and do research.

This is where things get tricky. See, Brazil has multiple types of visas. A Tourist Visa can be used, among other things, for attending a scientific conference. On the other hand, someone coming to Brazil to do research uses Visa 1.

A Program is essentially a long conference…but it’s also an opportunity to do research. So are most short conferences, though! In theoretical physics we have workshops, short conferences explicitly focused on collaboration and research, but even if a conference isn’t a workshop you can bet that we’ll be doing some research there, for sure. We don’t need labs, and some of us don’t even need computers, research can happen whenever the inspiration strikes. The distinction between conferences and research, from our perspective, is an arbitrary one.

In physics, we like to cut through this sort of ambiguity by looking at what’s really important. I wanted to figure out what about research makes the Brazilian government use a different visa for it, whether it was about motivating people to enter the country for specific reasons or tracking certain sorts of activities. I wanted to understand that, because it would let me figure out whether my own research fell under those reasons, and thus figure out objectively which type of visa I ought to have.

I wanted to ask about all of this…but more than any of that, I wanted to get my travel visa. So I applied for the visa they told me to, and left.

Misleading Headlines and Tacky Physics, Oh My!

It’s been making the rounds on the blogosphere (despite having come out three months ago). It’s probably shown up on your Facebook feed. It’s the news that (apparently) one of the biggest discoveries of recent years may have been premature. It’s….

The Huffington Post writing a misleading headline to drum up clicks!

The article linked above is titled “Scientists Raise Doubts About Higgs Boson Discovery, Say It Could Be Another Particle”. And while that is indeed technically all true, it’s more than a little misleading.

When the various teams at the Large Hadron Collider announced their discovery of the Higgs, they didn’t say it was exactly the Higgs predicted by the Standard Model. In fact, it probably shouldn’t be: most of the options for extending the Standard Model, like supersymmetry, predict a Higgs boson with slightly different properties. Until the Higgs is measured more precisely, these slightly different versions won’t be ruled out.

Of course, “not ruled out” is not exactly newsworthy, which is the main problem with this article. The Huffington Post quotes a paper that argues, not that there is new evidence for an alternative to the Higgs, but simply that one particular alternative that the authors like hasn’t been ruled out yet.

Also, it’s probably the tackiest alternative out there.

The theory in question is called Technicolor, and if you’re imagining a certain coat then you may have an idea of how tacky we’re talking.

Any Higgs will do…

To describe technicolor, let’s take a brief aside and talk about the colors of quarks.

Rather than having one type of charge running from plus to minus like electromagnetism, the strong nuclear force has three types of charge, called red, green, and blue. Quarks are charged under the strong force, and can be red, green, or blue, while the antimatter partners of quarks carry the equivalent of negative charges: anti-red, anti-green, and anti-blue. The strong force binds quarks together into protons and neutrons. The carriers of the strong force are themselves charged under it, which means that not only does the force bind quarks together, it also binds itself together, so that it only acts at very, very short range.

In combination, these two facts have one rather surprising consequence. A proton contains three quarks, but a proton’s mass is over a hundred times the total mass of those three quarks. The same is true of neutrons.

The reason is that most of the mass isn’t coming from the quarks; it’s coming from the energy of the strong force. Mass, contrary to what you might think, isn’t fundamental “stuff”. It’s just a handy way of talking about energy that isn’t due to something we can easily see. Particles have energy because they move, but they also have energy due to internal interactions, as well as interactions with other fields like the Higgs field. While a lone quark’s mass is due to its interaction with the Higgs field, the quarks inside a proton are also interacting with each other, gaining enormous amounts of energy from the strong force trapped within. That energy, largely invisible from the outside, contributes most of what we see as the mass of the proton.
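If you want to see the “over a hundred times” figure with rough numbers plugged in (these are approximate quark and proton masses in MeV, meant as illustration rather than precision values):

```python
# Rough check of where the proton's mass comes from.
# Approximate masses in MeV/c^2 (illustrative, not precision values).
m_up = 2.2        # up quark
m_down = 4.7      # down quark
m_proton = 938.3  # proton

quark_total = 2 * m_up + m_down  # a proton is two ups and one down
ratio = m_proton / quark_total
binding_fraction = 1 - quark_total / m_proton

print(f"quark masses sum to about {quark_total:.1f} MeV")
print(f"the proton is roughly {ratio:.0f} times heavier")
print(f"so ~{binding_fraction:.0%} of its mass is strong-force energy")
```

The exact percentages shift depending on which quark-mass convention you use, but the conclusion doesn’t: almost none of the proton’s mass comes from the Higgs-given masses of its quarks.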

Technicolor asks the following: what if it’s not just protons and neutrons? What if the mass of everything, quarks and electrons and the W and Z bosons, was due not truly to the Higgs, but to another force, like the strong force but even stronger? The Higgs we think we saw at the LHC would not be fundamental, but merely a composite, made up of two “techni-quarks” with “technicolor” charges. [Edited to remove confusion with Preon Theory]

It’s…an idea. But it’s never been a very popular one.

Part of the problem is that the simpler versions of technicolor have been ruled out, so theorists are having to invoke increasingly baroque models to try to make it work. But that, to some extent, is also true of supersymmetry.

A bigger problem is that technicolor is just kind of…tacky.

Technicolor doesn’t say anything deep about the way the universe works. It doesn’t propose new [types of] symmetries, and it doesn’t say anything about what happens at the very highest energies. It’s not really tied in to any of the other lines of speculation in physics, and it doesn’t lead to much discussion between researchers. It doesn’t require an end, a fundamental lowest level with truly fundamental particles. You could potentially keep adding new levels of technicolor, new things made up of other things made up of other things, ad infinitum.

And the fleas that bite ’em, presumably.

[Note: to clarify, technicolor theories don’t actually keep going like this, their extra particles don’t require another layer of technicolor to gain their masses. That would be an actual problem with the concept itself, not a reason it’s tacky. It’s tacky because, in a world where most physicists feel like we’ve really gotten down to the fundamental particles, adding new composite objects seems baroque and unnecessary, like adding epicycles. Fleas upon fleas as it were.]

In a word, it’s not sexy.

Does that mean it’s wrong? No, of course not. As the paper linked by Huffington Post points out, technicolor hasn’t been ruled out yet.

Does that mean I think people shouldn’t study it? Again, no. If you really find technicolor meaningful and interesting, go for it! Maybe you’ll be the kick it needs to prove itself!

But good grief, until you manage that, please don’t spread your tacky, un-sexy theory all over Facebook. A theory like technicolor should get press when it’s got a good reason, and “we haven’t been ruled out yet” is never, ever, a good reason.

 

[Edit: Esben on Facebook is more well-informed about technicolor than I am, and pointed out some issues with this post. Some of them are due to me conflating technicolor with another old and tacky theory, while some were places where my description was misleading. Corrections in bold.]

Physical Truths, Lost to the Ages

For all you tumblr-ers out there (tumblr-ists? tumblr-dwellers?), 4 gravitons is now on tumblr. It’s mostly going to be links to my blog posts, with the occasional re-blog of someone else’s work if something catches my eye.

Nima Arkani-Hamed gave a public lecture at Perimeter yesterday, which I encourage you to watch if you have time, once it’s up on the Perimeter site. He also gave a technical talk earlier in the day, where he finished up by making the following (intentionally) provocative statement:

There is no direct evidence of what happened during the Big Bang that could have survived till today.

He clarified that he doesn’t just mean “evidence we can currently detect”. Rather, there’s a limit on what we can know, even with the most precise equipment possible. The details of what happened at the Big Bang (the sorts of precise details that would tell you, for example, whether it is best described by String Theory or some other picture) would get diluted as the universe expands, until today they would be so subtle and so rare that they would fall below the level we could even in principle detect. We simply don’t have enough information available, no matter how good our technology gets, to detect them in a statistically significant way.

If this talk had happened last week, I could have used this in my spooky Halloween post. This is exactly the sort of thing that keeps physicists up at night: the idea that, fundamentally, there may be things we can never truly know about the universe, truths lost to the ages.

It’s not quite as dire as it sounds, though. To explain why, let me mention another great physics piece, Tom Stoppard’s Arcadia.

Despite appearances, this is in fact a great work of physics popularization.

Arcadia is a play about entropy. The play depicts two time periods in the same English country house: the early 19th century and the present day. In the present day a pair of scholars, Hannah and Bernard, argue about the events of the 19th century, when the house was occupied by a mathematically precocious girl named Thomasina and her tutor Septimus. Thomasina makes early discoveries about fractals and (to some extent) chaos theory, while Septimus gradually falls in love with her. In the present, the two scholars gradually get closer to the truth, going from a false theory that one of the guests at the house was killed by Lord Byron, to speculation that Septimus was the one to discover fractals, to finally getting a reasonably accurate idea of how the events of the story unfolded. Still, they never know everything, and the play emphasizes that certain details (documents burned in a fire, the true feelings of some of the people) will be forever lost to the ages.

The key point here is that, even with incomplete information, even without the ability to fully test their hypotheses and get all the details, the scholars can still make progress. They can propose accounts of what happened, accounts that have implications they can test, that might be proven wrong or right by future discoveries. Their accounts will also have implications they can’t test: lost letters, feelings never written down. But the better their account, the more it will explain, and the longer it will agree with anything new they manage to turn up.

That’s the way out of the problem Nima posed. We can’t know the truth of what happened at the Big Bang directly. But if we have a theory of physics that describes everything we can test, it’s likely to also make a prediction for what happened in the Big Bang. In science, most of the time you don’t have direct evidence. Rather, you have a successful theory, one that has succeeded under scrutiny many times in many contexts, enough that you trust it even when it goes out of the area you’re comfortable testing. That’s why physicists can make statements about what it’s like on the inside of a black hole, and it’s why it’s still good science to think about the Big Bang even if we can’t gather direct evidence about the details of how it took place.

All that said, Nima is well aware of this, and the problem still makes him uncomfortable. It makes me uncomfortable too. Saying that something is completely outside of our ability to measure, especially something as fundamental and important as the Big Bang, is not something we physicists can generally be content with. Time will tell whether there’s a way around the problem.

Why I Can’t Explain Ghosts: Or, a Review of a Popular Physics Piece

Since today is Halloween, I really wanted to write a post talking about the spookiest particles in physics, ghosts.

And their superpartners, ghost riders.

The problem is, in order to explain ghosts I’d have to explain something called gauge symmetry. And gauge symmetry is quite possibly the hardest topic in modern physics to explain to a general audience.

Deep down, gauge symmetry is the idea that irrelevant extra parts of how we represent things in physics should stay irrelevant. While that sounds obvious, it’s far from obvious how you can go from that to predicting new particles like the Higgs boson.

Explaining this is tough! Tough enough that I haven’t thought of a good way to do it yet.

Which is why I was fairly stoked when a fellow postdoc pointed out a recent popular physics article by Juan Maldacena, explaining gauge symmetry.

Juan Maldacena is a Big Deal. He’s the guy who figured out the AdS/CFT correspondence, showing that string theory (in a particular hyperbola-shaped space called AdS) and everybody’s favorite N=4 super Yang-Mills theory are secretly the same, a discovery which led to a Big Blue Dot on Paperscape. So naturally, I was excited to see what he had to say.

Big Blue Dot pictured here.

The core analogy he makes is with currencies in different countries. Just like gauge symmetry, currencies aren’t measuring anything “real”: they’re arbitrary conventions put in place because we don’t have a good way of just buying things based on pure “value”. However, also like gauge symmetry, they can have real-life consequences, as different currency exchange rates can lead to currency speculation, letting some people make money and others lose money. In Maldacena’s analogy the Higgs field works like a precious metal, making differences in exchange rates manifest as different prices of precious metals in different countries.

It’s a solid analogy, and one that is quite close to the real mathematics of the problem (as the paper’s Appendix goes into detail to show). However, I have some reservations, both about the paper as a whole and about the core analogy.

In general, Maldacena doesn’t do a very good job of writing something publicly accessible. There’s a lot of stilted, academic language, and a lot of use of “we” to do things other than lead the reader through a thought experiment. There’s also a sprinkling of terms that I don’t think the average person will understand; for example, I doubt the average college student knows flux as anything other than a zany card game.

Regarding the analogy itself, I think Maldacena has fallen into the common physicist trap of making an analogy that explains things really well…if you already know the math.

This is a problem I see pretty frequently. I keep picking on this article, and I apologize for doing so, but it’s got a great example of this when it describes supersymmetry as involving “a whole new class of number that can be thought of as the square roots of zero”. That’s a really great analogy…if you’re a student learning about the math behind supersymmetry. If you’re not, it doesn’t tell you anything about what supersymmetry does, or how it works, or why anyone might study it. It relates something unfamiliar to something unfamiliar.
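(For the curious, here is the math that quote is gesturing at, in my own gloss rather than the article’s: supersymmetry is built on Grassmann numbers, quantities that anticommute with each other and therefore square to zero.)

```latex
% Grassmann numbers \theta anticommute, so each one squares to zero:
\theta_1 \theta_2 = -\,\theta_2 \theta_1
\quad\Longrightarrow\quad
\theta^2 = -\,\theta^2 = 0
```

That’s why one can, somewhat whimsically, call them “square roots of zero”; but without that context, the phrase conveys essentially nothing.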

I’m worried that Maldacena is doing that in this paper. His setup is mathematically rigorous, but doesn’t say much about the why of things: why do physicists use something like this economic model to understand these forces? How does this lead to what we observe around us in the real world? What’s actually going on, physically? What do particles have to do with dimensionless constants? (If you’re curious about that last one, I like to think I have a good explanation here.)

It’s not that Maldacena ignores these questions; he definitely puts effort into answering them. The problem is that his analogy itself doesn’t really address them. They’re the trickiest part, the part that people need help picturing and framing, the part that would benefit the most from a good analogy. Instead, the core imagery of the piece is wasted on details that don’t really do much for a non-expert.

Maybe I’m wrong about this, and I welcome comments from non-physicists. Do you feel like Maldacena’s account gives you a satisfying idea of what gauge symmetry is?