Author Archives: 4gravitons

The Real Problem with Fine-Tuning

You’ve probably heard it said that the universe is fine-tuned.

The Standard Model, our current best understanding of the rules that govern particle physics, is full of fiddly adjustable parameters. The masses of fundamental particles and the strengths of the fundamental forces aren’t the sort of thing we can predict from first principles: we need to go out, do experiments, and find out what they are. And you’ve probably heard it argued that, if these fiddly parameters were even a little different from what they are, life as we know it could not exist.

That’s fine-tuning…or at least, that’s what many people mean when they talk about fine-tuning. It’s not exactly what physicists mean though. The thing is, almost nobody who studies particle physics thinks the parameters of the Standard Model are the full story. In fact, any theory with adjustable parameters probably isn’t the full story.

It all goes back to a point I made a while back: nature abhors a constant. The whole purpose of physics is to explain the natural world, and we have a long history of taking things that look arbitrary and linking them together, showing that reality has fewer parameters than we had thought. This is something physics is very good at. (To indulge in a little extremely amateurish philosophy, it seems to me that this is simply an inherent part of how we understand the world: if we encounter a parameter, we will eventually come up with an explanation for it.)

Moreover, at this point we have a rough idea of what this sort of explanation should look like. We have experience playing with theories that don’t have any adjustable parameters, or that only have a few: M theory is an example, but there are also more traditional quantum field theories that fill this role with no mention of string theory. From our exploration of these theories, we know that they can serve as the kind of explanation we need: in a world governed by one of these theories, people unaware of the full theory would observe what would look at first glance like a world with many fiddly adjustable parameters, parameters that would eventually turn out to be consequences of the broader theory.

So for a physicist, fine-tuning is not about those fiddly parameters themselves. Rather, it’s about the theory that predicts them. Because we have experience playing with these sorts of theories, we know roughly the sorts of worlds they create. What we know is that, while sometimes they give rise to worlds that appear fine-tuned, they tend to only do so in particular ways. Setups that give rise to fine-tuning have consequences: supersymmetry, for example, can give rise to an apparently fine-tuned universe but has to have “partner” particles that show up in powerful enough colliders. In general, a theory that gives rise to apparent fine-tuning will have some detectable consequences.

That’s where physicists start to get worried. So far, we haven’t seen any of these detectable consequences, and it’s getting to the point where we could have, had they been the sort many people expected.

Physicists are worried about fine-tuning, but not because it makes the universe “unlikely”. They’re worried because the more finely-tuned our universe appears, the harder it is to find an explanation for it in terms of the sorts of theories we’re used to working with, and the less likely it becomes that someone will discover a good explanation any time soon. We’re quite confident that there should be some explanation; hundreds of years of scientific progress strongly suggest as much. But the nature of that explanation is becoming increasingly opaque.

My Travels, and Someone Else’s

I arrived in São Paulo, Brazil a few days ago. I’m going to be here for two months as part of a partnership between Perimeter and the International Centre for Theoretical Physics – South American Institute for Fundamental Research. More specifically, I’m here as part of a program on Integrability, a set of tricks that can, in limited cases, let physicists bypass the messy approximations we often have to use.

I’m still getting my metaphorical feet under me here, so I haven’t had time to think of a proper blog post. However, if you’re interested in hearing about the travels of physicists in general, a friend of mine from Stony Brook is going to the South Pole to work on the IceCube neutrino detection experiment, and has been writing a blog about it.

Why You Should Be Skeptical about Faster-than-Light Neutrinos

While I do love science, I don’t always love IFL Science. They can be good at drumming up enthusiasm, but they can also be ridiculously gullible. Case in point: last week, IFL Science ran a piece on a recent paper purporting to give evidence for faster-than-light particles.

Faster than light! Sounds cool, right? Here’s why you should be skeptical:

If a science article looks dubious, you should check out the source. In this case, IFL Science links to an article on the preprint server arXiv.

arXiv is a freely accessible website where physicists and mathematicians post their articles. The site has multiple categories, corresponding to different fields. It’s got categories for essentially any type of physics you’d care to include, with the option to cross-list if you think people from multiple areas might find your work interesting.

So which category is this paper in? Particle physics? Astrophysics?

General Physics, actually.

General Physics is arXiv’s catch-all category. Some of it really is general, and can’t be put into any more specific place. But most of it, including this, falls into another category: things arXiv’s moderators think are fishy.

arXiv isn’t a journal. If you follow some basic criteria, it won’t reject your articles. Instead, dubious articles are put into General Physics, to signify that they don’t seem to belong with the other scholarship in the established categories. General Physics is a grab-bag of weird ideas and crackpot theories, a mix of fringe physicists and overenthusiastic amateurs. There probably are legitimate papers in there too…but for every paper in there, you can guarantee that some experienced researcher found it suspicious enough to send into exile.

Even if you don’t trust the moderators of arXiv, there are other reasons to be wary of faster-than-light particles.

According to Einstein’s theory of relativity, massless particles travel at the speed of light, while massive particles always travel slower. To travel faster than the speed of light, you need to have a very unusual situation: a particle whose mass is an imaginary number.
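The imaginary-mass requirement is a standard special-relativity fact; here is the textbook sketch of where it comes from:

```latex
% Relativistic energy of a particle of mass m moving at speed v:
E = \frac{m c^2}{\sqrt{1 - v^2/c^2}}
% For v > c the quantity under the square root goes negative, so the
% denominator becomes imaginary:
v > c \;\Rightarrow\; 1 - \frac{v^2}{c^2} < 0
% The only way for the energy E to remain a real, measurable quantity
% is for the mass m to be imaginary as well.
```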

Particles like that are called tachyons, and they’re a staple of science fiction. While there was a time when they were a serious subject of physics speculation, nowadays the general view is that tachyons are a sign we’re making bad assumptions.

Assuming that someone is a Republic serial villain is a good example.

Why is that? It has to do with the nature of mass.

In quantum field theory, what we observe as particles arises from ripples in quantum fields that extend across space and time. The harder it is to make the field ripple, the higher the particle’s mass.

A tachyon has imaginary mass. This means that it isn’t hard to make the field ripple at all. In fact, exactly the opposite happens: it’s easier to ripple than to stay still! Any ripple, no matter how small, will keep growing until it’s not just a ripple, but a new default state for the field. Only when it becomes hard to change again will the changes stop. If it’s hard to change, though, then the particle has a normal, non-imaginary mass, and is no longer a tachyon!

Thus, the modern understanding is that if a theory has tachyons in it, it’s because we’re assuming that one of the quantum fields has the wrong default state. Switching to the correct default gets rid of the tachyons.
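The “wrong default state” story can be made concrete with the standard textbook potential for a scalar field φ (the same shape that appears in the Higgs mechanism). This is an illustrative sketch of the general mechanism, not anything from the paper under discussion:

```latex
% Potential energy of a scalar field \phi with a tachyonic mass term:
V(\phi) = \tfrac{1}{2} m^2 \phi^2 + \tfrac{\lambda}{4} \phi^4,
\qquad m^2 < 0
% \phi = 0 is a local maximum: any ripple grows. The true minima
% (the correct default states) sit at
\phi_0 = \pm\sqrt{-m^2/\lambda}
% Expanding around the correct default, \phi = \phi_0 + \sigma,
% the ripple \sigma has mass squared
m_\sigma^2 = V''(\phi_0) = m^2 + 3\lambda\phi_0^2 = -2 m^2 > 0
% which is positive: switch to the right default state, and the
% tachyon is gone.
```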

There are deeper problems with the idea proposed in this paper. Normally, the only types of fields that can have tachyons are scalars, fields that can be defined by a single number at each point, sort of like a temperature. The particles this article is describing aren’t scalars, though, they’re fermions, the type of particle that includes everyday matter like electrons. Those sorts of particles can’t be tachyons at all without breaking some fairly important laws of physics. (For a technical explanation of why this is, Lubos Motl’s reply to the post here is pretty good.)

Of course, this paper’s author knows all this. He’s well aware that he’s suggesting bending some fairly fundamental laws, and he seems to think there’s room for it. But that, really, is the issue here: there’s room for it. The paper isn’t, as IFL Science seems to believe, six pieces of evidence for faster-than-light particles. It’s six measurements that, if you twist them around and squint and pick exactly the right model, have room for faster-than-light particles. And that’s…probably not worth an article.

Merry Newtonmas!

Yesterday, people around the globe celebrated the birth of someone whose new perspective and radical ideas changed history, perhaps more than any other.

I’m referring, of course, to Isaac Newton.

Ho ho ho!

Born on December 25, 1642, Newton is justly famed as one of history’s greatest scientists. By relating gravity on Earth to the force that holds the planets in orbit, Newton arguably created physics as we know it.

However, like many prominent scientists, Newton was great not so much for what he discovered as for how he discovered it. Others had already had similar ideas about gravity. Robert Hooke in particular had written to Newton mentioning a law much like the one Newton eventually wrote down, leading Hooke to accuse Newton of plagiarism.

Newton’s great accomplishment was not merely proposing his law of gravitation, but justifying it, in a way that no-one had ever done before. When others (Hooke for example) had proposed similar laws, they were looking for a law that perfectly described the motion of the planets. Kepler had already proposed ellipse-shaped orbits, but it was clear by Newton and Hooke’s time that such orbits did not fully describe the motion of the planets. Hooke and others hoped that if some sufficiently skilled mathematician started with the correct laws, they could predict the planets’ motions with complete accuracy.

The genius of Newton was in attacking this problem from a different direction. In particular, Newton showed that his law of gravitation does result in exact ellipses…provided that there is only one planet.

With multiple planets, things become much more complicated. Even just two planets orbiting a single star is so difficult a problem that it’s impossible to write down an exact solution.

Sensibly, Newton didn’t try to write down an exact solution. Instead, he figured out an approximation: since the Sun is much bigger than the planets, he could simplify the problem and arrive at a partial solution. While he couldn’t perfectly predict the motions of the planets, he knew more than just that they were “approximately” ellipses: he had a prediction for how different from ellipses they should be.
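To get a feel for why the Sun-dominated approximation works, here is a rough back-of-the-envelope calculation. The masses and distances are approximate round numbers, and Jupiter (the largest planetary perturber) is taken at roughly its closest approach to Earth:

```python
# Compare the Sun's gravitational pull on Earth with Jupiter's.
# Newton's law gives F = G*m1*m2/r^2, so in the ratio of the two
# forces both G and Earth's mass cancel out.

M_SUN = 1.989e30        # kg
M_JUPITER = 1.898e27    # kg
R_EARTH_SUN = 1.496e11      # m (1 AU)
R_EARTH_JUPITER = 6.3e11    # m (~4.2 AU, near closest approach)

ratio = (M_JUPITER / M_SUN) * (R_EARTH_SUN / R_EARTH_JUPITER) ** 2
print(f"Jupiter's pull on Earth is about {ratio:.0e} of the Sun's")
```

With these numbers the ratio comes out around 5 × 10⁻⁵: planet–planet forces are tiny corrections to the Sun’s pull, which is exactly what makes treating the orbits as ellipses-plus-small-deviations a sensible strategy.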

That step was Newton’s great contribution. That insight, that science was able not just to provide exact answers to simpler problems but to guess how far those answers might be off, was something no-one else had really thought about before. It led to error analysis in experiments, and perturbation methods in theory. More generally, it led to the idea that scientists have to be responsible, not just for getting things “almost right”, but for explaining how their results are still wrong.

So this holiday season, let’s give thanks to the man whose ideas created science as we know it. Merry Newtonmas everyone!

Sorry Science Fiction, Quantum Gravity Doesn’t Do What You Think It Does

I saw Interstellar this week. There’s been a lot of buzz among physicists about it, owing in part to the involvement of black hole expert Kip Thorne in the film’s development. I’d just like to comment on one aspect of the film that bugged me, a problem that shows up pretty frequently in science fiction.

In the film, Michael Caine plays a theoretical physicist working for NASA. His dream is to save humanity from an Earth plagued by a blight that is killing off the world’s food supply. To do this, he plans to build giant anti-gravity spaceships capable of taking as many people as possible away from the dying Earth to find a new planet capable of supporting human life. And in order to do that, apparently, he needs a theory of quantum gravity.

The thing is, quantum gravity has nothing to do with making giant anti-gravity spaceships.

Michael Caine lied to us?

This mistake isn’t unique to Interstellar. Lots of science fiction works assume that once we understand quantum gravity then everything else will follow: faster than light travel, wormholes, anti-gravity…pretty much every sci-fi staple.

It’s not just present in science fiction, either. Plenty of science popularizers like to mention all of the marvelous technology that’s going to come out of quantum gravity, including people who really should know better. A good example comes from a recent piece by quantum gravity researcher Sabine Hossenfelder:

But especially in high energy physics and quantum gravity, progress has basically stalled since the development of the standard model in the mid 70s. […] it is a frustrating situation and this makes you wonder if not there are other reasons for lack of progress, reasons that we can do something about. Especially in a time when we really need a game changer, some breakthrough technology, clean energy, that warp drive, a transporter!

None of these are things we’re likely to get from quantum gravity, and the reason is rather basic. It boils down to one central issue: if we can’t control the classical physics, we can’t control the quantum physics.

When science fiction authors speculate about the benefits of quantum gravity, they’re thinking about the benefits of quantum mechanics. Understanding the quantum world has allowed some of the greatest breakthroughs of the 20th century, from miniaturizing circuits to developing novel materials.

The assumption writers make is that the same will be true for quantum gravity: understand it, and gravity technology will flow. But this assumption forgets that quantum mechanics was so successful because it let us understand things we were already working with.

In order to miniaturize circuits, you have to know how to build a circuit in the first place. Only then, when you try to make the circuit smaller and don’t understand why it stops working, does quantum mechanics step in to tell you what you’re missing. Quantum mechanics helps us develop new materials because it helps us understand how existing materials work.

We don’t have any gravity circuits to shrink down, or gravity materials to understand. When gravity limits our current technology, it does so on a macro level (such as the effect of the Earth’s gravity on GPS satellites) not on a quantum level. If there isn’t a way to build anti-gravity technology using classical physics, there probably isn’t a way using quantum physics.

Scientists and popularizers generally argue that we can’t know what the future will bring. This is true, up to a point. When Maxwell wrote down equations to unify electricity and magnetism he could not have imagined the wealth of technology we have today. And often, technologies come from unexpected places. The spinoff technologies of the space race are the most popular example, another is that CERN (the facility that houses the Large Hadron Collider) was instrumental in developing the world wide web.

While it’s great to emphasize the open-ended promise of scientific advances (especially on grant applications!), in this context it’s misleading because it erases the very real progress people are making on these issues without quantum gravity.

Want to invest in clean energy? There are a huge number of scientists working on it, with projects ranging from creating materials that can split water using solar energy to nuclear fusion. Quantum gravity is just about the last science likely to give us clean energy, and I’m including the social sciences in that assessment.

How about a warp drive?

Indeed, how about one?

That’s not obviously related to quantum gravity either. There has actually been some research into warp drives, but they’re based on a solution to Einstein’s equations without quantum mechanics. It’s not clear whether quantum gravity has something meaningful to say about them…while there are points to be made, from what I’ve been able to gather they’re more related to talking about how other quantum systems interact with gravity than the quantum properties of gravity itself. The same seems to apply to the difficulties involved in wormholes, another sci-fi concept that comes straight out of Einstein’s theory.

As for teleportation, that’s an entirely different field, and it probably doesn’t work how you think it does.

So what is quantum gravity actually good for?

Quantum gravity becomes relevant when gravity becomes very strong, places where Einstein’s theory would predict infinitely dense singularities. That means the inside of black holes, and the Big Bang. Quantum gravity smooths out these singularities, which means it can tell you about the universe’s beginnings (by smoothing out the big bang and showing what could cause it), or its long-term future (for example, problems with the long-term evolution of black holes).

These are important questions! They tell us about where we come from and where we’re going: in short, about our ultimate place in the universe. Almost every religion in history has tried to answer these questions. They’re very important to us as a species, even if they don’t directly impact our daily lives.

What they are not, however, is a source of technology.

So please, science fiction, use some other field for your plot-technology. There are plenty of scientific advances to choose from, people who are really working on cutting-edge futuristic stuff. They don’t need to wait on a theory of quantum gravity to get their work done. Neither do you.

Where do you get all those mathematical toys?

I’m at a conference at Caltech this week, so it’s going to be a shorter post than usual.

The conference is on something called the Positive Grassmannian, a precursor to Nima Arkani-Hamed’s much-hyped Amplituhedron. Both are variants of a central idea: take complicated calculations in physics and express them in terms of clean, well-defined mathematical objects.

Because of this, this conference is attended not just by physicists, but by mathematicians as well, and it’s been interesting watching how the two groups interact.

From a physics perspective, mathematicians are great because they give us so many useful tools! Many significant advances in my field happened because a physicist talked to a mathematician and learned that a problem that had stymied the physics world had already been solved in the math community.

This tends to lead to certain expectations among physicists. If a mathematician gives a talk at a physics conference, we expect them to present something we can use. Our ideal math talk is like when Q presents the gadgets at the beginning of a Bond movie: a ton of new toys with just enough explanation for us to use them to save the day in the second act.

Pictured: Mathematicians, through Physicist eyes

You may see the beginning of a problem here, once you realize that physicists are the James Bond in this analogy.

Physicists like to see themselves as the protagonists of their own stories. That’s true of every field, though, to some degree or another. And it’s certainly true of mathematicians.

Mathematicians don’t go to physics conferences just to be someone’s supporting cast. They do it because physics problems are interesting to them: by hearing what physicists are working on they hope to get inspiration for new mathematical structures, concepts jury-rigged together by physicists that represent corners that mathematics hasn’t yet explored. Their goal is to take home an idea that they can turn into something productive, gaining glory among their fellow mathematicians. And if that sounds familiar…

Pictured: Physicists, through Mathematician eyes

While it’s amusing to watch the different expectations go head-to-head, the best collaborations between physicists and mathematicians are those where both sides respect that the other is the protagonist of their own story. Allow for give-and-take, paying attention not just to what you find interesting but to what the other person does, without assuming a tired old movie script, and it’s possible to make great progress.

Of course, that’s true of life in general as well.

The Three Things Everyone Gets Wrong about the Big Bang

Ah, the Big Bang, our most science-y of creation myths. Everyone knows the story of how the universe and all its physical laws emerged from nothing in a massive explosion, growing from a singularity to the size of a breadbox until, over billions of years, it became the size it is today.


A hot dense state, if you know what I mean.

…actually, almost nothing in that paragraph is true. There are a lot of myths about the Big Bang, born from physicists giving sloppy explanations. Here are three things most people get wrong about the Big Bang:

1. A Massive Explosion:

When you picture the big bang, don’t you imagine that something went, well, bang?

In movies and TV shows, a time traveler visiting the big bang sees only an empty void. Suddenly, an explosion lights up the darkness, shooting out stars and galaxies until it has created the entire universe.

Astute readers might find this suspicious: if the entire universe was created by the big bang, then where does the “darkness” come from? What does the universe explode into?

The problem here is that, despite the name, the big bang was not actually an explosion.

In picturing the universe as an explosion, you’re imagining the universe as having finite size. But it’s quite likely that the universe is infinite. Even if it is finite, it’s finite like the surface of the Earth: as Magellan’s expedition demonstrated, you can’t get to the “edge” of the Earth no matter how far you go: eventually, you’ll just end up where you started. If the universe is truly finite, the same is true of it.

Rather than an explosion in one place, the big bang was an explosion everywhere at once. Every point in space was “exploding” at the same time. Each point was moving farther apart from every other point, and the whole universe was, as the song goes, hot and dense.

So what do physicists mean when they say that the universe at some specific time was the size of a breadbox, or a grapefruit?

It’s just sloppy language. When these physicists say “the universe”, what they mean is just the part of the universe we can see today, the Hubble Volume. It is that (enormously vast) space that, once upon a time, was merely the size of a grapefruit. But it was still adjacent to infinitely many other grapefruits of space, each one also experiencing the big bang.

2. It began with a Singularity:

This one isn’t so much definitely wrong as probably wrong.

If the universe obeys Einstein’s Theory of General Relativity perfectly, then we can make an educated guess about how it began. By tracking back the expansion of the universe to its earliest stages, we can infer that the universe was once as small as it can get: a single, zero-dimensional point, or a singularity. The laws of general relativity work the same backwards and forwards in time, so just as we could see a star collapsing and know that it is destined to form a black hole, we can see the universe’s expansion and know that if we traced it back it must have come from a single point.

This is all well and good, but there’s a problem with how it begins: “If the universe obeys Einstein’s Theory of General Relativity perfectly”.

In this situation, general relativity predicts an infinitely small, infinitely dense point. As I’ve talked about before, in physics an infinite result is almost never correct. When we encounter infinity, almost always it means we’re ignoring something about the nature of the universe.

In this case, we’re ignoring Quantum Mechanics. Quantum Mechanics naturally makes physics somewhat “fuzzy”: the Uncertainty Principle means that a quantum state can never be exactly in one specific place.
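The fuzziness has a precise statement, the Heisenberg uncertainty relation:

```latex
% Heisenberg uncertainty principle for position x and momentum p:
\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}
% Squeezing a state into a true point (\Delta x \to 0) would force the
% momentum uncertainty \Delta p to diverge, so quantum mechanics
% resists the perfectly sharp singular point general relativity predicts.
```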

Combining quantum mechanics and general relativity is famously tricky, and the difficulty boils down to getting rid of pesky infinite results. However, several approaches exist to solving this problem, the most prominent of them being String Theory.

If you ask someone to list string theory’s successes, one thing you’ll always hear mentioned is string theory’s ability to understand black holes. In general relativity, black holes are singularities: infinitely small, and infinitely dense. In string theory, black holes are made up of combinations of fundamental objects: strings and membranes, curled up tight, but crucially not infinitely small. String theory smooths out singularities and tamps down infinities, and the same story applies to the infinity of the big bang.

String theory isn’t alone in this, though. Less popular approaches to quantum gravity, like Loop Quantum Gravity, also tend to “fuzz” out singularities. Whichever approach you favor, it’s pretty clear at this point that the big bang didn’t really begin with a true singularity, just a very compressed universe.

3. It created the laws of physics:

Physicists will occasionally say that the big bang determined the laws of physics. Fans of Anthropic Reasoning in particular will talk about different big bangs in different places in a vast multiverse, each producing different physical laws.

I’ve met several people who were very confused by this. If the big bang created the laws of physics, then what laws governed the big bang? Don’t you need physics to get a big bang in the first place?

The problem here is that “laws of physics” doesn’t have a precise definition. Physicists use it to mean different things.

In one (important) sense, each fundamental particle is its own law of physics. Each one represents something that is true across all of space and time, a fact about the universe that we can test and confirm.

However, these aren’t the most fundamental laws possible. In string theory, the particles that exist in our four dimensions (three space dimensions, and one of time) change depending on how six “extra” dimensions are curled up. Even in ordinary particle physics, the value of the Higgs field determines the mass of the particles in our universe, including things that might feel “fundamental” like the difference between electromagnetism and the weak nuclear force. If the Higgs field had a different value (as it may have early in the life of the universe), these laws of physics would have been different. These sorts of laws can be truly said to have been created by the big bang.

The real fundamental laws, though, don’t change. Relativity is here to stay, no matter what particles exist in the universe. So is quantum mechanics. The big bang didn’t create those laws; it was a natural consequence of them. Rather than springing physics into existence from nothing, the big bang came out of the most fundamental laws of physics, then proceeded to fix the more contingent ones.

In fact, the big bang might not have even been the beginning of time! As I mentioned earlier in this article, most approaches to quantum gravity make singularities “fuzzy”. One thing these “fuzzy” singularities can do is “bounce”, going from a collapsing universe to an expanding universe. In Cyclic Models of the universe, the big bang was just the latest in a cycle of collapses and expansions, extending back into the distant past. Other approaches, like Eternal Inflation, instead think of the big bang as just a local event: our part of the universe happened to be dense enough to form a big bang, while other regions were expanding even more rapidly.

So if you picture the big bang, don’t just imagine an explosion. Imagine the entire universe expanding at once, changing and settling and cooling until it became the universe as we know it today, starting from a world of tangled strings or possibly an entirely different previous universe.

Sounds a bit more interesting to visit in your TARDIS, no?

What Can Replace Space-Time?

Nima Arkani-Hamed is famous for believing that space-time is doomed, that as physicists we will have to abandon the concepts of space and time if we want to find the ultimate theory of the universe. He’s joked that this is what motivates him to get up in the morning. He tends to bring it up often in talks, both for physicists and for the general public.

The latter especially tend to be baffled by this idea. I’ve heard a lot of questions like “if space-time is doomed, what could replace it?”

In the past, Nima and I both tended to answer this question with a shrug. (Though a more elaborate shrug in his case.) This is the honest answer: we don’t know what replaces space-time, we’re still looking for a good solution. Nima’s Amplituhedron may eventually provide an answer, but it’s still not clear what that answer will look like. I’ve recently realized, though, that this way of responding to the question misses its real thrust.

When people ask me “what could replace space-time?” they’re not asking “what will replace space-time?” Rather, they’re asking “what could possibly replace space-time?” It’s not that they want to know the answer before we’ve found it, it’s that they don’t understand how any reasonable answer could possibly exist.

I don’t think this concern has been addressed much by physicists, and it’s a pity, because it’s not very hard to answer. You don’t even need advanced physics. All you need is some fairly old philosophy. Specifically we’ll use concepts from metaphysics, the branch of philosophy that deals with categories of being.

Think about your day yesterday. Maybe you had breakfast at home, drove to work, had a meeting, then went home and watched TV.

Each of those steps can be thought of as an event. Each event is something that happened that we want to pay attention to. You having breakfast was an event, as was you arriving at work.

These events are connected by relations. Here, each relation specifies the connection between two events. There might be a relation of cause-and-effect, for example, between you arriving at work late and meeting with your boss later in the day.

Space and time, then, can be seen as additional types of relations. Your breakfast is related to you arriving at work: it is before it in time, and some distance from it in space. Before and after, distant in one direction or another, these are all relations between the two events.

Using these relations, we can infer other relations between the events. For example, if we know the distance relating your breakfast and your arrival at work, we can make a decent guess at another relation: how much gas your car used along the way.

This way of viewing the world, events connected by relations, is already quite common in physics. With Einstein’s theory of relativity, it’s hard to say exactly when or where an event happened, but the overall relationship between two events (distance in space and time taken together) can be thought of much more precisely. As I’ve mentioned before, the curved space-time necessary for Einstein’s theory of gravity can be thought of equally well as a change in the way you measure distances between two points.
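“Distance in space and time taken together” is the standard lesson of special relativity: observers can disagree about the time elapsed and the distance covered between two events separately, but they all agree on one combination, the spacetime interval:

```latex
% Spacetime interval between two events separated by \Delta t in time
% and (\Delta x, \Delta y, \Delta z) in space:
\Delta s^2 = -c^2 \Delta t^2 + \Delta x^2 + \Delta y^2 + \Delta z^2
% \Delta t and \Delta x each depend on the observer; \Delta s^2 does not.
% The interval is the observer-independent relation between the events.
```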

So if space and time are relations between events, what would it mean for space-time to be doomed?

The key thing to realize here is that space and time are very specific relations between events, with very specific properties. Some of those properties are what cause problems for quantum gravity, problems which prompt people to suggest that space-time is doomed.

One of those properties is the fact that, when you multiply two distances together, it doesn’t matter which order you do it in. This probably sounds obvious, because you’re used to multiplying normal numbers, for which this is always true anyway. But even slightly more complicated mathematical objects, like matrices, don’t always obey this rule. If distances were this sort of mathematical object, then multiplying them in different orders could give slightly different results. If the difference were small enough, we wouldn’t be able to tell that it was happening in everyday life: distance would have given way to some more complicated concept, but it would still act like distance for us.
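To see how order can matter for mathematical objects only slightly more complicated than numbers, here’s a small Python illustration using 2×2 matrices (a toy example, not a proposed model of space-time):

```python
def matmul(a, b):
    # Multiply two 2x2 matrices (lists of lists)
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 1], [0, 0]]
B = [[0, 0], [1, 0]]

print(matmul(A, B))  # [[1, 0], [0, 0]]
print(matmul(B, A))  # [[0, 0], [0, 1]] -- a different answer!
```

Ordinary numbers always satisfy `x * y == y * x`; matrices need not. If distances behaved like matrices with only a tiny non-commuting part, the difference between the two orders could be far too small to notice in everyday life.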

That specific idea isn’t generally suggested as a solution to the problems of space and time, but it’s a useful toy model that physicists have used to solve other problems.

It’s the general principle I want to get across: if you want to replace space and time, you need a relation between events. That relation should behave like space and time on the scales we’re used to, but it can be different on very small scales (the Big Bang, the insides of black holes) and on very large scales (the long-term fate of the universe).

Space-time is doomed, and we don’t know yet what’s going to replace it. But whatever it is, whatever form it takes, we do know one thing: it’s going to be a relation between events.

Research or Conference? Can’t it be both?

“If you’re there for two months, for sure you’ll be doing research.”

I wanted to be snarky. I wanted to point out that, as a theoretical physicist, I do research wherever I go. I wanted to say that I even did research on the drive over. (This may not have been true; I think I mostly thought about Magic: The Gathering cards.)

More than any of those, though, I wanted to get my travel visa. So instead I said,

“That’s fair.”

“Mmhmm, that’s fair.” Looking down at the invitation letter, she triumphantly pointed to the name of the inviting institution: “South American Institute for Fundamental Research.”

A bit of background: I’m going to Brazil this winter. Partly, this is because winter in Canada is not especially desirable, but it’s also because São Paulo’s International Center for Theoretical Physics is running a Program on Integrability, the arcane set of techniques that seeks to bypass the perturbative approximations we often use in particle physics and find full, exact results.

What do I mean by a Program? It’s not the sort of scientific program I’ve talked about before, though the ideas are related. When an institute holds a Program, they’re declaring a theme. For a certain length of time (generally from a few months to a whole semester), there will be a large number of talks at the institute focused on some particular scientific theme. The institute invites people from all over the world who work on that theme. Those people are there to give and attend talks, but they’re also there to share ideas with each other, to network and collaborate and do research.

This is where things get tricky. See, Brazil has multiple types of visas. A Tourist Visa can be used, among other things, for attending a scientific conference. On the other hand, someone coming to Brazil to do research uses Visa 1.

A Program is essentially a long conference…but it’s also an opportunity to do research. So are most short conferences, though! In theoretical physics we have workshops, short conferences explicitly focused on collaboration and research, but even if a conference isn’t a workshop you can bet that we’ll be doing some research there, for sure. We don’t need labs, and some of us don’t even need computers: research can happen whenever inspiration strikes. The distinction between conferences and research, from our perspective, is an arbitrary one.

In physics, we like to cut through this sort of ambiguity by looking at what’s really important. I wanted to figure out what about research makes the Brazilian government use a different visa for it, whether it was about motivating people to enter the country for specific reasons or tracking certain sorts of activities. I wanted to understand that, because it would let me figure out whether my own research fell under those reasons, and thus figure out objectively which type of visa I ought to have.

I wanted to ask about all of this…but more than any of that, I wanted to get my travel visa. So I applied for the visa they told me to, and left.

Misleading Headlines and Tacky Physics, Oh My!

It’s been making the rounds on the blogosphere (despite having come out three months ago). It’s probably shown up on your Facebook feed. It’s the news that (apparently) one of the biggest discoveries of recent years may have been premature. It’s….

The Huffington Post writing a misleading headline to drum up clicks!

The article linked above is titled “Scientists Raise Doubts About Higgs Boson Discovery, Say It Could Be Another Particle”. And while that is indeed technically all true, it’s more than a little misleading.

When the various teams at the Large Hadron Collider announced their discovery of the Higgs, they didn’t say it was exactly the Higgs predicted by the Standard Model. In fact, it probably shouldn’t be: most of the options for extending the Standard Model, like supersymmetry, predict a Higgs boson with slightly different properties. Until the Higgs is measured more precisely, these slightly different versions won’t be ruled out.

Of course, “not ruled out” is not exactly newsworthy, which is the main problem with this article. The Huffington Post quotes a paper that argues, not that there is new evidence for an alternative to the Higgs, but simply that one particular alternative that the authors like hasn’t been ruled out yet.

Also, it’s probably the tackiest alternative out there.

The theory in question is called Technicolor, and if you’re imagining a certain coat then you may have an idea of how tacky we’re talking.

Any Higgs will do…

To describe technicolor, let’s take a brief aside and talk about the colors of quarks.

Rather than having one type of charge going from plus to minus like Electromagnetism, the Strong Nuclear Force has three types of charge, called red, green, and blue. Quarks are charged under the strong force, and can be red, green, or blue, while the antimatter partners of quarks have the equivalent of negative charges, anti-red, anti-green, and anti-blue. The strong force binds quarks together into protons and neutrons. The strong force’s carrier particles, gluons, are themselves charged under the strong force, which means that not only does the force bind quarks together, it also binds itself together, so that it only acts at very, very short range.

In combination, these two facts have one rather surprising consequence. A proton contains three quarks, but a proton’s mass is over a hundred times the total mass of those three quarks. The same is true of neutrons.

The reason is that most of the mass isn’t coming from the quarks, it’s coming from the strength of the strong force. Mass, contrary to what you might think, isn’t fundamental “stuff”. It’s just a handy way of talking about energy that isn’t due to something we can easily see. Particles have energy because they move, but they also have energy due to internal interactions, as well as interactions with other fields like the Higgs field. While a lone quark’s mass is due to its interaction with the Higgs field, the quarks inside a proton are also interacting with each other, gaining enormous amounts of energy from the strong force trapped within. That energy, largely invisible from an outside view, contributes most of what we see as the mass of the proton.
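The numbers make the point vividly. Here’s a back-of-the-envelope check in Python, using rough Particle Data Group-style values (the light quark masses are scheme-dependent, so treat these as ballpark figures):

```python
# Approximate masses in MeV/c^2 (ballpark values; light quark
# masses are scheme-dependent, so these are rough)
m_up, m_down = 2.2, 4.7
m_proton = 938.3

quark_total = 2 * m_up + m_down   # proton = up + up + down
print(quark_total)                # ~9.1 MeV
print(m_proton / quark_total)     # ~100: the quarks are a tiny fraction
```

Roughly 99% of the proton’s mass comes not from the Higgs-given masses of its quarks, but from the energy of the strong force binding them.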

Technicolor asks the following: what if it’s not just protons and neutrons? What if the mass of everything, quarks and electrons and the W and Z bosons, was due not truly to the Higgs, but to another force, like the strong force but even stronger? The Higgs we think we saw at the LHC would not be fundamental, but merely a composite, made up of two “techni-quarks” with “technicolor” charges. [Edited to remove confusion with Preon Theory]

It’s…an idea. But it’s never been a very popular one.

Part of the problem is that the simpler versions of technicolor have been ruled out, so theorists are having to invoke increasingly baroque models to try to make it work. But that, to some extent, is also true of supersymmetry.

A bigger problem is that technicolor is just kind of…tacky.

Technicolor doesn’t say anything deep about the way the universe works. It doesn’t propose new [types of] symmetries, and it doesn’t say anything about what happens at the very highest energies. It’s not really tied in to any of the other lines of speculation in physics, it doesn’t lead to a lot of discussion between researchers. It doesn’t require an end, a fundamental lowest level with truly fundamental particles. You could potentially keep adding new levels of technicolor, new things made up of other things made up of other things, ad infinitum.

And the fleas that bite ’em, presumably.

[Note: to clarify, technicolor theories don’t actually keep going like this, their extra particles don’t require another layer of technicolor to gain their masses. That would be an actual problem with the concept itself, not a reason it’s tacky. It’s tacky because, in a world where most physicists feel like we’ve really gotten down to the fundamental particles, adding new composite objects seems baroque and unnecessary, like adding epicycles. Fleas upon fleas as it were.]

In a word, it’s not sexy.

Does that mean it’s wrong? No, of course not. As the paper linked by Huffington Post points out, technicolor hasn’t been ruled out yet.

Does that mean I think people shouldn’t study it? Again, no. If you really find technicolor meaningful and interesting, go for it! Maybe you’ll give it the kick it needs to prove itself!

But good grief, until you manage that, please don’t spread your tacky, un-sexy theory all over Facebook. A theory like technicolor should get press when it’s got a good reason, and “we haven’t been ruled out yet” is never, ever, a good reason.


[Edit: Esben on Facebook is more well-informed about technicolor than I am, and pointed out some issues with this post. Some of them are due to me conflating technicolor with another old and tacky theory, while some were places where my description was misleading. Corrections in bold.]