Entropy is Ignorance

(My last post had a poll in it! If you haven’t responded yet, please do.)

Earlier this month, philosopher Richard Dawid ran a workshop entitled “Why Trust a Theory? Reconsidering Scientific Methodology in Light of Modern Physics” to discuss his idea of “non-empirical theory confirmation” for string theory, inflation, and the multiverse. The talks haven’t been posted online yet, so I’m stuck reading coverage, mostly these posts by skeptical philosopher Massimo Pigliucci. I find the overall concept annoying, and may rant about it later. For now, though, I’d like to talk about a talk given on the second day by philosopher Chris Wüthrich about black hole entropy.

Black holes, of course, are the entire-stars-collapsed-to-a-point-from-which-no-light-can-escape objects that everyone knows and loves. Entropy is often thought of as the scientific term for chaos and disorder, the universe’s long slide towards dissolution. In reality, it’s a bit more complicated than that.

[Image: the eight-pointed Chaos symbol]

For one, you need to take Elric into account…

Can black holes be disordered? Naively, that doesn’t seem possible. How can a single point be disorderly?

Thought about in a bit more detail, the conclusion seems even stronger. Via something called the “No Hair Theorem”, it’s possible to prove that black holes can be described completely with just three numbers: their mass, their charge, and how fast they are spinning. With just three numbers, how can there be room for chaos?

On the other hand, you may have heard of the Second Law of Thermodynamics. The Second Law states that entropy always increases. Absent external support, things will always slide towards disorder eventually.

If you combine this with black holes, then this seems to have weird implications. In particular, what happens when something disordered falls into a black hole? Does the disorder just “go away”? Doesn’t that violate the Second Law?

This line of reasoning has led to the idea that black holes have entropy after all. It led Bekenstein to calculate the entropy of a black hole based on how much information is “hidden” inside, and Hawking to find that black holes in a quantum world should radiate as if they had a temperature consistent with that entropy. One of the biggest successes of string theory is an explanation for this entropy. In string theory, black holes aren’t perfect points: they have structure, arrangements of strings and higher dimensional membranes, and this structure can be disordered in a way that seems to give the right entropy.

Note that none of this has been tested experimentally. Hawking radiation, if it exists, is very faint: not the sort of thing we could detect with a telescope. Wüthrich is worried that Bekenstein’s original calculation of black hole entropy might have been on the wrong track, which would undermine one of string theory’s most well-known accomplishments.

I don’t know Wüthrich’s full argument, since the talks haven’t been posted online yet. All I know is Pigliucci’s summary. From that summary, it looks like Wüthrich’s primary worry is about two different definitions of entropy.

See, when I described entropy as “disorder”, I was being a bit vague. There are actually two different definitions of entropy. The older one, Gibbs entropy, grows with the number of states of a system. What does that have to do with disorder?

Think about two different substances: a gas, and a crystal. Both are made out of atoms, but the patterns involved are different. In the gas, atoms are free to move, while in the crystal they’re (comparatively) fixed in place.

[Image: NASA diagram of the solid, liquid, and gas phases of matter]

Blurrily so in this case

There are many different ways the atoms of a gas can be arranged and still be a gas, but far fewer ways in which they can be a crystal, so a gas has more entropy than a crystal. Intuitively, the gas is more disordered.
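For the formula-inclined, here’s the standard textbook way of writing this (nothing specific to black holes yet; W counts the arrangements, k_B is Boltzmann’s constant):

```latex
% Boltzmann's form, for W equally likely arrangements:
S = k_B \ln W
% Gibbs's generalization, for arrangements with probabilities p_i:
S = -k_B \sum_i p_i \ln p_i
```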

When Bekenstein calculated the entropy of a black hole he didn’t use Gibbs entropy, though. Instead, he used Shannon entropy, a concept from information theory. Shannon entropy measures the amount of information in a message, with a formula very similar to that of Gibbs entropy: the more different ways you can arrange something, the more information you can use it to send. Bekenstein used this formula to calculate the amount of information that gets hidden from us when something falls into a black hole.
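To see just how similar the two formulas are, here’s a quick numerical sketch in Python. The “gas” and “crystal” below are toy stand-ins of my own invention, just lists of equally likely arrangements, not real physical systems:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def gibbs_entropy(probs, k_B=1.380649e-23):
    """Gibbs entropy in J/K: S = -k_B * sum(p * ln(p)).
    The same formula as Shannon's, up to the log base and a constant."""
    return -k_B * sum(p * math.log(p) for p in probs if p > 0)

gas = [1 / 1000] * 1000   # many equally likely arrangements
crystal = [1 / 10] * 10   # far fewer arrangements

print(shannon_entropy(gas), shannon_entropy(crystal))  # ~9.97 vs ~3.32 bits
print(gibbs_entropy(gas) / gibbs_entropy(crystal))     # same ratio: 3.0
```

The two definitions differ only by the base of the logarithm and an overall constant.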

Wüthrich’s worry here (again, as far as Pigliucci describes) is that Shannon entropy is a very different concept from Gibbs entropy. Shannon entropy measures information, while Gibbs entropy is something “physical”. So by using one to predict the other, are predictions about black hole entropy just confused?

It may well be he has a deeper argument for this, one that wasn’t covered in the summary. But if this is accurate, Wüthrich is missing something fundamental. Shannon entropy and Gibbs entropy aren’t two different concepts. Rather, they’re both ways of describing a core idea: entropy is a measure of ignorance.

A gas has more entropy than a crystal because it can be arranged in a larger number of different ways. But let’s not talk about a gas. Let’s talk about a specific arrangement of atoms: one is flying up, one to the left, one to the right, and so on. Space them apart, but be very specific about how they are arranged. This arrangement could well be a gas, but now it’s a specific gas. And because we’re being this specific, there are now many fewer states the gas can be in, so this (specific) gas has less entropy!

Now of course, this is a very silly way to describe a gas. In general, we don’t know what every single atom of a gas is doing; that’s why we call it a gas in the first place. But it’s that lack of knowledge that we call entropy. Entropy isn’t just something out there in the world, it’s a feature of our descriptions…but one that, nonetheless, has important physical consequences. The Second Law still holds: the world goes from lower entropy to higher entropy. And while that may seem strange, it’s actually quite logical: the things we describe in vaguer terms should become more common than the things we describe in specific terms; after all, there are many more of them!
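You can even count this directly. In the toy model below (my own illustration), each of 100 atoms sits in either the left or right half of a box. The vague description “about half on each side” matches astronomically more arrangements than the specific description “all on the left”:

```python
from math import comb

N = 100  # a toy gas: 100 atoms, each in the left or right half of a box

# Arrangements matching "all atoms on the left": exactly one.
all_left = comb(N, 0)

# Arrangements matching "half on each side": choose which 50 go right.
half_half = comb(N, N // 2)

print(all_left)   # 1
print(half_half)  # about 1e29
```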

Entropy isn’t the only thing like this. In the past, I’ve bemoaned the difficulty of describing the concept of gauge symmetry. Gauge symmetry is in some ways just part of our descriptions: we prefer to describe fundamental forces in a particular way, and that description has redundant parameters. We have to make those redundant parameters “go away” somehow, and that leads to non-existent particles called “ghosts”. However, gauge symmetry also has physical consequences: it was how people first knew that there had to be a Higgs boson, long before it was discovered. And while it might seem weird to think that a redundancy could imply something as physical as the Higgs, the success of the concept of entropy should make this much less surprising. Much of what we do in physics is reasoning about different descriptions, different ways of dividing up the world, and then figuring out the consequences of those descriptions. Entropy is ignorance…and if our ignorance obeys laws, if it’s describable mathematically, then it’s as physical as anything else.

Visiting the Blog? Here’s a Poll!

A few of my recent posts talked about how important it is to know your audience when communicating science. As it turns out, I don’t actually know much about who reads this blog. WordPress tells me which countries you come from (mostly the US, but with large contingents from several other countries, and views from 122 countries last year), and in some cases what links you clicked to get here (lots of search engines, facebook, reddit, twitter, various other people’s blogs). What it doesn’t tell me, though, is what your background is.

That’s what this poll is for. Readers, I’d like you to tell me how much physics background you have. Did you only run into it in high school (if at all), or did you see some college physics too? How many of you are actually physicists? How many of you are mathematicians? (From my observations, even mathematicians with no physics experience favor very different explanations from other people with no physics experience.)

I try to make this blog accessible to as many people as I can, but I do wonder how much of my audience needs that accessibility. So whether you’re just stopping by to read a post linked on reddit, or you’re a long-time reader, vote in the poll and let me know where you stand. And if you’ve got more to say or the poll doesn’t capture some subtlety, feel free to respond in the comments!

The “Lies to Children” Model of Science Communication, and The “Amplitudes Are Weird” Model of Amplitudes

Let me tell you a secret.

Scattering amplitudes in N=4 super Yang-Mills don’t actually make sense.

Scattering amplitudes calculate the probability that particles “scatter”: coming in from far away, interacting in some fashion, and producing new particles that travel far away in turn. N=4 super Yang-Mills is my favorite theory to work with: a highly symmetric version of the theory that describes the strong nuclear force. In particular, N=4 super Yang-Mills has conformal symmetry: if you re-scale everything larger or smaller, you should end up with the same predictions.

You might already see the contradiction here: scattering amplitudes talk about particles coming in from very far away…but due to conformal symmetry, “far away” doesn’t mean anything, since we can always re-scale it until it’s not far away anymore!

So when I say that I study scattering amplitudes in N=4 super Yang-Mills, am I lying?

Well…yes. But it’s a useful type of lie.

There’s a concept in science writing called “lies to children”, first popularized in a fantasy novel.

[Image: the cover of The Science of Discworld]

This one.

When you explain science to the public, it’s almost always impossible to explain everything accurately. So much background is needed to really understand most of modern science that conveying even a fraction of it would bore the average audience to tears. Instead, you need to simplify, to skip steps, and even (to be honest) to lie.

The important thing to realize here is that “lies to children” aren’t meant to mislead. Rather, they’re chosen in such a way that they give roughly the right impression, even as they leave important details out. When they told you in school that energy is always conserved, that was a lie: energy conservation is a consequence of a symmetry in time, and when that symmetry is broken energy doesn’t have to be conserved. But “energy is conserved” is a useful enough rule that it lets you understand most of everyday life.
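For the curious, here’s the one-line textbook version of that claim (standard classical mechanics, nothing specific to this post): if the laws don’t change with time, energy is conserved.

```latex
% If the Lagrangian L(q, \dot{q}) has no explicit time dependence,
% define the energy:
E = \dot{q}\,\frac{\partial L}{\partial \dot{q}} - L
% Differentiating, and using the Euler-Lagrange equation
% d/dt (\partial L / \partial \dot{q}) = \partial L / \partial q,
% every term cancels:
\frac{dE}{dt} = \ddot{q}\,\frac{\partial L}{\partial \dot{q}}
  + \dot{q}\,\frac{d}{dt}\frac{\partial L}{\partial \dot{q}}
  - \frac{\partial L}{\partial q}\,\dot{q}
  - \frac{\partial L}{\partial \dot{q}}\,\ddot{q} = 0
```

If L did depend explicitly on time, the same calculation would leave over a term −∂L/∂t, and energy would no longer be conserved.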

In this case, the “lie” that we’re calculating scattering amplitudes is fairly close to the truth. We’re using the same methods that people use to calculate scattering amplitudes in theories where they do make sense, like QCD. For a while, people thought these scattering amplitudes would have to be zero, since anything else “wouldn’t make sense”…but in practice, we found they were remarkably similar to scattering amplitudes in other theories. Now, we have more rigorous definitions for what we’re calculating that avoid this problem, involving objects called polygonal Wilson loops.

This illustrates another principle, one that hasn’t (yet) been popularized by a fantasy novel. I’d like to call it the “amplitudes are weird” principle. Time and again we amplitudes-folks will do a calculation that doesn’t really make sense, find unexpected structure, and go back to figure out what that structure actually means. It’s been one of the defining traits of the field, and we’ve got a pretty good track record with it.

A couple of weeks back, Lance Dixon gave an interview for the SLAC website, talking about his work on quantum gravity. This was immediately jumped on by Peter Woit and Lubos Motl as ammo for the long-simmering string wars. To one extent or another, both tried to read scientific arguments into the piece. This is in general a mistake: it is in the nature of a popularization piece to contain some volume of lies-to-children, and reading a piece aimed at a less technical audience can be just as confusing as reading one aimed at a more technical audience.

In the remainder of this post, I’ll try to explain what Lance was talking about in a slightly higher-level way. There will still be lies-to-children involved; this is a popularization blog, after all. But I should be able to clear up a few misunderstandings. Lubos probably still won’t agree with the resulting argument, but it isn’t the self-evidently wrong one he seems to think it is.

Lance Dixon has done a lot of work on quantum gravity. Those of you who’ve read my old posts might remember that quantum gravity is not so difficult in principle: general relativity naturally leads you to particles called gravitons, which can be treated just like other particles. The catch is that the theory that you get by doing this fails to be predictive: one reason why is that you get an infinite number of erroneous infinite results, which have to be papered over with an infinite number of arbitrary constants.

Working with these non-predictive theories, however, can still yield interesting results. In the article, Lance mentions the work of Bern, Carrasco, and Johansson. BCJ (as they are abbreviated) have found that calculating a gravity amplitude often just amounts to calculating a (much easier to find) Yang-Mills amplitude, and then squaring the right parts. This was originally found in the context of string theory by another three-letter group, Kawai, Lewellen, and Tye (or KLT). In string theory, it’s particularly easy to see how this works, as it’s a basic feature of how string theory represents gravity. However, the string theory relations don’t tell the whole story: in particular, they only show that this squaring procedure makes sense on a classical level. Once quantum corrections come in, there’s no known reason why this squaring trick should continue to work in non-string theories, and yet so far it has. It would be great if we had a good argument why this trick should continue to work, a proof based on string theory or otherwise: for one, it would allow us to be much more confident that our hard work trying to apply this trick will pay off! But at the moment, this falls solidly under the “amplitudes are weird” principle.
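To give a flavor of what “squaring” means, here’s the simplest KLT relation, for four particles at tree level. Conventions for signs and factors of i vary from paper to paper, so treat the prefactor as illustrative:

```latex
% Four-point, tree-level KLT relation (one common convention):
M_4(1,2,3,4) = -i\, s_{12}\, A_4(1,2,3,4)\, \tilde{A}_4(1,2,4,3)
% M_4 is the gravity amplitude, A_4 and \tilde{A}_4 are color-ordered
% Yang-Mills amplitudes, and s_{12} = (p_1 + p_2)^2.
```

Notice that the gravity amplitude is literally built out of two copies of Yang-Mills, with legs 3 and 4 swapped in one copy.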

Using this trick, BCJ and collaborators (frequently including Lance Dixon) have been calculating amplitudes in N=8 supergravity, a highly symmetric version of those naive, non-predictive gravity theories. For this particular theory, the theory you “square” for the above trick is N=4 super Yang-Mills. N=4 super Yang-Mills is special for a number of reasons, but one is that the sorts of infinite results that lose you predictive power in most other quantum field theories never come up. Remarkably, the same appears to be true of N=8 supergravity. We’re still not sure; the relevant calculation is still a bit beyond what we’re capable of. But in example after example, N=8 supergravity seems to be behaving similarly to N=4 super Yang-Mills, and not like people would have predicted from its gravitational nature. Once again, amplitudes are weird, in a way that string theory helped us discover but by no means conclusively predicted.

If N=8 supergravity doesn’t lose predictive power in this way, does that mean it could describe our world?

In a word, no. I’m not claiming that, and Lance isn’t claiming that. N=8 supergravity simply doesn’t have the right sorts of freedom to give you something like the real world, no matter how you twist it. You need a broader toolset (string theory generally) to get something realistic. The reason why we’re interested in N=8 supergravity is not because it’s a candidate for a real-world theory of quantum gravity. Rather, it’s because it tells us something about where the sorts of dangerous infinities that appear in quantum gravity theories really come from.

That’s what’s going on in the more recent paper that Lance mentioned. There, they’re not working with a supersymmetric theory, but with the naive theory you’d get from just trying to do quantum gravity based on Einstein’s equations. What they found was that the infinity you get is in a certain sense arbitrary. You can’t get rid of it, but you can shift it around (infinity times some adjustable constant 😉 ) by changing the theory in ways that aren’t physically meaningful. What this suggests is that, in a sense that hadn’t been previously appreciated, the infinite results naive gravity theories give you are arbitrary.

The inevitable question, though, is why would anyone muck around with this sort of thing when they could just use string theory? String theory never has any of these extra infinities, that’s one of its most important selling points. If we already have a perfectly good theory of quantum gravity, why mess with wrong ones?

Here, Lance’s answer dips into lies-to-children territory. In particular, Lance brings up the landscape problem: the fact that there are 10^500 configurations of string theory that might loosely resemble our world, and no clear way to sift through them to make predictions about the one we actually live in.

This is a real problem, but I wouldn’t think of it as the primary motivation here. Rather, it’s a story people have already heard, one that gestures at the feeling behind a broader issue: that string theory feels excessive.

[Image: Princess Diana’s wedding dress]

Why does this have a Wikipedia article?

Think of string theory like an enormous piece of fabric, and quantum gravity like a dress. You can definitely wrap that fabric around, pin it in the right places, and get a dress. You can in fact get any number of dresses, elaborate trains and frilly togas and all sorts of things. You have to do something with the extra material, though, find some tricky but not impossible stitching that keeps it out of the way, and you have a fair number of choices of how to do this.

From this perspective, naive quantum gravity theories are things that don’t qualify as dresses at all, scarves and socks and so forth. You can try stretching them, but it’s going to be pretty obvious you’re not really wearing a dress.

What we amplitudes-folks are looking for is more like a pencil skirt. We’re trying to figure out the minimal theory that covers the divergences, the minimal dress that preserves modesty. It would be a dress that fits the form underneath it, so we need to understand that form: the infinities that quantum gravity “wants” to give rise to, and what it takes to cancel them out. A pencil skirt is still inconvenient (it’s hard to sit down in one, for example), something that can be solved by adding extra material that allows it to bend more. Similarly, fixing these infinities is unlikely to be the full story; there are things called non-perturbative effects that probably won’t be cured. But finding the minimal pencil skirt is still going to tell us something that just pinning a vast stretch of fabric wouldn’t.

This is where “amplitudes are weird” comes in in full force. We’ve observed, repeatedly, that amplitudes in gravity theories have unexpected properties, traits that still aren’t straightforwardly explicable from the perspective of string theory. In our line of work, that’s usually a sign that we’re on the right track. If you’re a fan of the amplituhedron, the project here is along very similar lines: both are taking the results of plodding, not especially deep loop-by-loop calculations, observing novel simplifications, and asking the inevitable question: what does this mean?

That far-term perspective, looking off into the distance at possible insights about space and time, isn’t my style. (It isn’t usually Lance’s either.) But for the times that you want to tell that kind of story…well, this isn’t that outlandish of a story to tell. And unless your primary concern is whether a piece gives succor to the Woits of the world, it shouldn’t be an objectionable one.

Knowing Too Little, Knowing Too Much

(Commenter nueww has asked me to comment on the flurry of blog posts around an interview with Lance Dixon that recently went up on the SLAC website. I’m not going to comment on it until I have a chance to talk with Lance, beyond saying that this is a remarkable amount of attention paid to a fairly workaday organizational puff piece.)

I’ve been in Oregon this week, giving talks at Oregon State and at the University of Oregon. After my talk at Brown in front of some of the world’s top experts in my subfield, I’ve had to adapt quite a bit for these talks. Oregon State doesn’t have any particle theorists at all, while at the University of Oregon I gave a seminar for their Institute of Theoretical Science, which contains a mix of researchers ranging from particle theorists to theoretical chemists.

Guess which talk was harder to give?

If you guessed the UofO talk, you’re right. At Oregon State, I had a pretty good idea of everyone’s background. I knew these were people who would be pretty familiar with quantum mechanics, but probably wouldn’t have heard of Feynman diagrams. From that, I could build a strategy, and end up giving a pretty good talk.

At the University of Oregon, if I aimed for the particle physicists in the audience, I’d lose the chemists. So I should aim for the chemists, right?

That has its problems too. I’ve talked about some of them: the risk that the experts in your audience feel talked down to, or that you don’t cover the more important parts of your work. But there’s another problem, one that I noticed when I tried to prepare this talk: knowing too little can lead to misunderstandings, but so can knowing too much.

What would happen if I geared the talk completely to the chemists? Well, I’d end up being very vague about key details of what I did. And for the chemists, that would be fine: they’d get a flavor of what I do, and they’d understand not to read any more into it. People are pretty good at putting something in the “I don’t understand this completely” box, as long as it’s reasonably clearly labeled.

That vagueness, though, would be a disaster for the physicists in the audience. It’s not just that they wouldn’t get the full story: unless I was very careful, they’d end up actively misled. The same vague descriptions that the chemists would accept as “flavor”, the physicists would actively try to read for meaning. And with the relevant technical terms replaced with terms the chemists would recognize, they would end up with an understanding that would be actively wrong.

In the end, I ended up giving a talk mostly geared to the physicists, but with some background and vagueness to give the chemists some value. I don’t feel like I did as good of a job as I would like, and neither group really got as much out of the talk as I wanted them to. It’s tricky talking for a mixed audience, and it’s something I’m still learning how to do.

Pi in the Sky Science Journalism

You’ve probably seen it somewhere on your facebook feed, likely shared by a particularly wide-eyed friend: pi found hidden in the hydrogen atom!

[Images: three news headlines touting pi’s discovery in the hydrogen atom]

From the headlines, this sounds like some sort of kabbalistic nonsense, like finding the golden ratio in random pictures.

Read the actual articles, and the story is a bit more reasonable. The last two I linked above seem to be decent takes on it; they’re just saddled with ridiculous headlines. As usual, I blame the editors. This time, they’ve obscured an interesting point about the link between physics and mathematics.

So what does “pi found hidden in the hydrogen atom” actually mean?

It doesn’t mean that there’s some deep importance to the number pi in nature, beyond its relevance in mathematics in general. The reason that pi is showing up here isn’t especially deep.

It isn’t trivial either, though. I’ve seen a few people whose first response to this article was “of course they found pi in the hydrogen atom, hydrogen atoms are spherical!” That’s not what’s going on here. The connection isn’t about the shape of the hydrogen atom, it’s about one particular technique for estimating its energy.

Carl Hagen is a physicist at the University of Rochester. While teaching a quantum mechanics class, he had his students apply a well-known approximation technique, called the variational principle, to the hydrogen atom. The nice thing about the hydrogen atom is that it’s one of the few atoms simple enough that it’s possible to find its energy levels exactly. The exact calculation can then be compared to the approximation.

What Hagen noticed was that this approximation was surprisingly good, especially for high-energy states where it wasn’t expected to be. In the end, working with Rochester math professor Tamar Friedmann, he figured out that the variational principle was making use of a particular identity involving a type of mathematical function, called Gamma functions, that is quite common in physics. Using those Gamma functions, the two researchers were able to re-derive what turned out to be a 17th century formula for pi, giving rise to a much cleaner proof of that formula than had been known previously.
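The 17th century formula in question is the Wallis product, from 1655. It converges slowly, but it’s easy to check numerically; here’s a quick Python sketch:

```python
import math

def wallis_pi(n_terms):
    """Approximate pi via the Wallis product (1655):
    pi/2 = product over n of (2n / (2n - 1)) * (2n / (2n + 1))."""
    product = 1.0
    for n in range(1, n_terms + 1):
        product *= (2 * n / (2 * n - 1)) * (2 * n / (2 * n + 1))
    return 2 * product

for terms in (10, 1_000, 100_000):
    print(terms, wallis_pi(terms), math.pi)
```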

So pi isn’t appearing here because “the hydrogen atom is a sphere”. It’s appearing because pi appears all over the place in physics, and because in general, the same sorts of structures appear again and again in mathematics.

Pi’s appearance in the hydrogen atom is thus not, in itself, very special. What is a little bit special is the fact that, using the hydrogen atom, these folks were able to find a cleaner proof of an old formula for pi, one that mathematicians hadn’t found before.

That, if anything, is the interesting part of this news story, but it’s also part of a broader trend, one in which physicists provide “physics proofs” for mathematical results. One of the more famous accomplishments of string theory is a class of “physics proofs” of this sort, using a principle called mirror symmetry.

The existence of “physics proofs” doesn’t mean that mathematics is secretly constrained by the physical world. Rather, they’re a result of the fact that physicists are interested in different aspects of mathematics, and in general are a bit more reckless in using approximations that haven’t been mathematically vetted. A physicist can sometimes prove something in just a few lines that mathematicians would take many pages to prove, but usually they do this by invoking a structure that would take much longer for a mathematician to define. As physicists, we’re building on the shoulders of other physicists, using concepts that mathematicians usually don’t have much reason to bother with. That’s why it’s always interesting when we find something like the Amplituhedron, a clean mathematical concept hidden inside what would naively seem like a very messy construction. It’s also why “physics proofs” like this can happen: we’re dealing with things that mathematicians don’t naturally consider.

So please, ignore the pi-in-the-sky headlines. Some physicists found a trick, some mathematicians found it interesting, the hydrogen atom was (quite tangentially) involved…and no nonsense needs to be present.

Map Your Dead Ends

I’m at Brown this week, where I’ve been chatting with Mark Spradlin and Anastasia Volovich, two of the founding figures of my particular branch of amplitudeology. Back in 2010 they figured out how to turn this seventeen-page two-loop amplitude:

[Image: the first page of a seventeen-page equation]

Why yes, this is one equation that covers seventeen pages. You’re lucky I didn’t post the eight-hundred page one.

into a formula that takes up just two lines:

[Image: the two-line formula]

This got everyone very excited; it inspired some of my collaborators to do work that would eventually give rise to the Hexagon Functions, my main research project for the past few years.

Unfortunately, when we tried to push this to higher loops, we didn’t get the sort of nice, clean-looking formulas that the Brown team did. Each “loop” is an additional layer of complexity, a series of approximations that get closer to the exact result. And so far, our answers look more like that first image than the second: hundreds of pages with no clear simplifications in sight.
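If you haven’t met perturbation theory before, a loose analogy: each loop order is like keeping one more term of a Taylor series, a correction suppressed by another power of the coupling. The Python below approximates exp(g) order by order; it’s only an analogy of my own, not an actual amplitude calculation:

```python
import math

g = 0.3  # a smallish "coupling"

def truncated_series(g, orders):
    """Sum the Taylor series for exp(g) up to a given order, the way a
    perturbative calculation adds contributions loop by loop."""
    return sum(g**n / math.factorial(n) for n in range(orders + 1))

for orders in range(5):
    approx = truncated_series(g, orders)
    print(orders, approx, abs(approx - math.exp(g)))  # error shrinks each order
```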

At the time, people wondered whether some simple formula might be enough. As it turns out, you can write down a formula similar to the one found by Spradlin and Volovich, generalized to a higher number of loops. It’s clean, it’s symmetric, it makes sense…and it’s not the right answer.

That happens in science a lot more often than science fans might expect. When you hear about this sort of thing in the news, it always works: someone suggests a nice, simple answer, and it turns out to be correct, and everyone goes home happy. But for every nice simple guess that works, there are dozens that don’t: promising ideas that just lead to dead ends.

One of the postdocs here at Brown worked on this “wrong” formula, and while chatting with him here he asked a very interesting question: why is it wrong? Sure, we know that it’s wrong, we can check that it’s wrong…but what, specifically, is missing? Is it “part” of the right answer in some sense, with some predictable corrections?

As it turns out, this is a very interesting question! We’ve been looking into it, and the “wrong” answer has some interesting relationships with some of our Hexagon Functions. It may have been a “dead end”, but it still could turn out to be a useful one.

A good physics advisor will tell their students to document their work. This doesn’t just mean taking notes: most theoretical physicists will maintain files, in standard journal article format, with partial results. One reason to do this is that, if things work out, you’ll have some of your paper already written. But if something doesn’t work out, you’ll end up with a pdf on your hard drive carefully explaining an idea that didn’t quite work. Physicists often end up with dozens of these files squirreled away on their computers. Put together, they’re a map: a map of dead ends.

There’s a handy thing about having a map: it lets you retrace your steps. Any one of these paths may lead nowhere, but each one will contain some substantive work. And years later, often enough, you end up needing some of it: some piece of the calculation, some old idea. You follow the map, dig it up…and build it into something new.

Using Effective Language

Physicists like to use silly names for things, but sometimes it’s best to just use an everyday word. It can trigger useful intuitions, and it makes remembering concepts easier. What gets confusing, though, is when the everyday word you use has a meaning that’s not quite the same as the colloquial one.

“Realism” is a pretty classic example, where Bell’s elegant use of the term in quantum mechanics doesn’t quite match its common usage, leading to inevitable confusion whenever it’s brought up. “Theory” is such a useful word that multiple branches of science use it…with different meanings! In both cases, the naive meaning of the word is the basis of how it gets used scientifically…just not the full story.

There are two things to be wary of here. First, those of us who communicate science must be sure to point out when a word we use doesn’t match its everyday meaning, to guide readers’ intuitions away from first impressions to understand how the term is used in our field. Second, as a reader, you need to be on the look-out for hidden technical terms, especially when you’re reading technical work.

I remember making a particularly silly mistake along these lines. It was early on in grad school, back when I knew almost nothing about quantum field theory. One of our classes was a seminar, structured so that each student would give a talk on some topic that could be understood by the whole group. Unfortunately, some grad students with deeper backgrounds in theoretical physics hadn’t quite gotten the memo.

It was a particular phrase that set me off: “This theory isn’t an effective theory”.

My immediate response was to raise my hand. “What’s wrong with it? What about this theory makes it ineffective?”

The presenter boggled for a moment before responding. “Well, it’s complete up to high energies…it has no ultraviolet divergences…”

“Then shouldn’t that make it even more effective?”

After a bit more of this back-and-forth, we finally cleared things up. As it turns out, “effective field theory” is a technical term! An “effective field theory” is only “effectively” true, describing physics at low energies but not at high energies. As you can see, the word “effective” here is definitely pulling its weight, helping to make the concept understandable…but if you don’t recognize it as a technical term and interpret it literally, you’re going to leave everyone confused!

Over time, I’ve gotten better at identifying when something is a technical term. It really is a skill you can learn: there are different tones people use when speaking, different cadences when writing, a sense of uneasiness that can clue you in to a word being used in something other than its literal sense. Without that skill, you end up worried about mathematicians’ motives for their evil schemes. With it, you’re one step closer to what may be the most important skill in science: the ability to recognize something you don’t know yet.

What’s so Spooky about Action at a Distance?

With Halloween coming up, it’s time once again to talk about the spooky side of physics. And what could be spookier than action at a distance?

Ok, maybe not an obvious contender for spookiest concept of the year. But physicists have struggled with action at a distance for centuries, and there are deep reasons why.

It all dates back to Newton. In Newton’s time, all of nature was expected to be mechanical. One object pushes another, which pushes another in turn, eventually explaining everything that ever happens. And while people knew by that point that the planets were not circling around on literal crystal spheres, it was still hoped that their motion could be explained mechanically. The favored explanations of the time were vortices, whirlpools of celestial fluid that drove the planets around the Sun.

Newton changed all that. Not only did he set down a law of gravitation that didn’t use a fluid, he showed that no fluid could possibly replicate the planets’ motions. And while he remained agnostic about gravity’s cause, plenty of his contemporaries accused him of advocating “action at a distance”. People like Leibniz thought that a gravitational force without a mechanical cause would be superstitious nonsense, a betrayal of science’s understanding of the world in terms of matter.

For a while, Newton’s ideas won out. More and more, physicists became comfortable with explanations involving a force stretching out across empty space, using them for electricity and magnetism as these became more thoroughly understood.

Eventually, though, the tide began to shift back. Electricity and magnetism were explained, not in terms of action at a distance, but in terms of a field that filled the intervening space. In time, gravity was too.

The difference may sound purely semantic, but it means more than you might think. These fields were restricted in an important way: when the field changed, it changed at one point, and the changes spread at a speed limited by the speed of light. A theory composed of such fields has a property called locality, the property that all interactions are fundamentally local, that is, they happen at one specific place and time.
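To make that concrete, here’s the textbook statement for electromagnetism (Gaussian units, Lorenz gauge): the potential here and now depends on the current only at earlier times, delayed by the light-travel distance.

```latex
% Retarded potential in Lorenz gauge (Gaussian units):
A^{\mu}(t, \vec{x}) = \frac{1}{c} \int d^3x'\,
  \frac{J^{\mu}\!\left(t - |\vec{x} - \vec{x}'|/c,\ \vec{x}'\right)}
       {|\vec{x} - \vec{x}'|}
% The current J enters only at the earlier, "retarded" time:
% no influence arrives faster than light.
```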

Nowadays, we think of locality as one of the most fundamental principles in physics, on par with symmetry in space and time. And the reason why is that true action at a distance is quite a spooky concept.

Much of horror boils down to fear of the unknown. From what might lurk in the dark to the depths of the ocean, we fear that which we cannot know. And true action at a distance would mean that our knowledge might forever be incomplete. As long as everything is mediated by some field that changes at the speed of light, we can limit our search for causes. We can know that any change must be caused by something only a limited distance away, something we can potentially observe and understand. By contrast, true action at a distance would mean that forces from potentially anywhere in the universe could alter events here on Earth. We might never know the ultimate causes of what we observe; they might be stuck forever out of reach.

Some of you might be wondering, what about quantum mechanics? The phrase “spooky action at a distance” became famous because Einstein used it as an accusation against quantum entanglement, after all.

The key thing about quantum mechanics is that, as J. S. Bell showed, you can’t have locality…unless you throw out another property, called realism. Realism is the idea that quantum states have definite values for measurements before those measurements are taken. And while that sounds important, most people find getting rid of it much less scary than getting rid of locality. In a non-realistic world, at least we can still predict probabilities, even if we can’t observe certainties. In a non-local world, there might be aspects of physics that we just can’t learn. And that’s spooky.

When to Look under the Bed

Last week, Sabine Hossenfelder blogged about a rather interesting experiment, designed to test the quantum properties of gravity. Normally, quantum gravity is essentially unobservable: quantum effects are typically only relevant for very small systems, where gravity is extremely weak. However, there has been a lot of progress in putting larger and larger systems into interesting quantum states, and a team of experimentalists has recently proposed a setup that takes advantage of this. The experiment wouldn’t have enough detail to, for example, distinguish between rival models of quantum gravity, but it would provide evidence as to whether or not gravity is quantum at all.

Lubos Motl, meanwhile, argues that such an experiment is utterly pointless, because there is no possible way that gravity could not be quantum. I won’t blame you if you don’t read his argument since it’s written in his trademark…aggressive…style, but the gist is that it’s really hard to make sense of the idea that there are non-quantum things in an otherwise quantum world. It causes all sorts of issues with pretty much every interpretation of quantum mechanics, and throws the differences between those interpretations into particularly harsh and obvious light. From this perspective, checking to see if gravity might not actually be quantum (an idea called semi-classical gravity) is a bit like checking for a monster under the bed.

You might find semi-classical gravity!

In general, I share Motl’s reservations about semi-classical gravity. As I mentioned back when journalists were touting the BICEP2 results as evidence of quantum gravity, the idea that gravity could not be quantum doesn’t really make much sense. (Incidentally, Hossenfelder makes a similar point in her post.)

All that said, sometimes in science it’s absolutely worth looking under the bed.

Take another unlikely possibility, that of cell phone radiation causing cancer. Things that cause cancer do it by messing with the molecular bonds in DNA. In order to mess with molecular bonds, you need high-frequency light. That’s how UV light from the sun can cause skin cancer. Cell phones emit microwaves, which are very low-frequency light. It’s what allows them to be useful inside of buildings, where normal light wouldn’t reach. It also means it’s impossible for them to cause cancer.
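You can put rough numbers on this (the frequencies below are typical values I’m supplying for illustration): a photon’s energy is E = hf, breaking a molecular bond takes a few electron-volts, and a microwave photon falls short by about five orders of magnitude.

```python
h = 4.135667696e-15  # Planck's constant, in eV * s

def photon_energy_eV(frequency_hz):
    """Photon energy E = h * f, in electron-volts."""
    return h * frequency_hz

uv = photon_energy_eV(1.0e15)        # ~300 nm ultraviolet: ~4 eV
microwave = photon_energy_eV(2.4e9)  # a typical cell phone band: ~1e-5 eV

print(uv, microwave)
# A typical molecular bond takes a few eV to break: UV photons qualify,
# microwave photons miss by a factor of about 400,000.
```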

Nevertheless, if nobody had ever studied whether cell phones cause cancer, it would probably be worth at least one study. If that study came back positive, it would say something interesting, either about the study’s design or about other possible causes of cancer. If negative, the topic could be put to bed more convincingly. As it happens, those studies have been done, and overall confirm the expectations we have from basic science.

Another important point here is that experimentalists and theorists have different priorities, due to their different specializations. Theorists are interested in confirmation for particular theories: they want not just an unknown particle, but a gluino, and not just a gluino, but the gluino predicted by their particular model of supersymmetry. By contrast, experimentalists typically aren’t very interested in proving or disproving one theory or another. Rather, they look for general signals that indicate broad classes of new physics. For example, experimentalists might use the LHC to look for a leptoquark, a particle that allows quarks and leptons to interact, without caring what theory might produce them. Experimentalists are also very interested in improving their techniques. As with theorists, a lot of interesting work in the field involves pushing the current state of the art as far as it will go.

So, when should we look under the bed?

Well, if nobody has ever looked under this particular bed before, and if seeing something strange under this bed would at least be informative, and if looking under the bed serves as a proving ground for the latest in bed-spelunking technology, then yes, we should absolutely look under this bed.

Just don’t expect to see any monsters.

Is Everything Really Astonishingly Simple?

Neil Turok gave a talk last week, entitled The Astonishing Simplicity of Everything. In it, he argued that our current understanding of physics is really quite astonishingly simple, and that recent discoveries seem to be confirming this simplicity.

For the right sort of person, this can be a very uplifting message. The audience was spellbound. But a few of my friends were pretty thoroughly annoyed, so I thought I’d dedicate a post to explaining why.

Neil’s talk built up to showing this graphic, one of the masterpieces of Perimeter’s publications department:

[Image: a single equation combining gravity, quantum mechanics, and the Standard Model, each term labeled with a famous physicist’s name]

Looked at in this way, the laws of physics look astonishingly simple. One equation, a few terms, each handily labeled with a famous name of some (occasionally a little hazy) relevance to the symbol in question.

In a sense, the world really is that simple. There are only a few kinds of laws that govern the universe, and the concepts behind them are really, deep down, very simple concepts. Neil adroitly explained some of the concepts behind quantum mechanics in his talk (here represented by the Schrodinger, Feynman, and Planck parts of the equation), and I have a certain fondness for the Maxwell-Yang-Mills part. The other parts represent different kinds of particles, and different ways they can interact.

While there are only a few different kinds of laws, though, that doesn’t mean the existing laws are simple. That nice, elegant equation hides 25 arbitrary parameters, tucked into the Maxwell-Yang-Mills, Dirac, Kobayashi-Maskawa, and Higgs parts. It also omits the cosmological constant, which fuels the expansion of the universe. And there are problems if you try to claim that the gravity part, for example, is complete.

When Neil mentions recent discoveries, he’s referring to the LHC not seeing new supersymmetric particles, and to telescopes not seeing any unusual features in the cosmic microwave background. The theories being tested, supersymmetry and inflation, are in many ways more complicated than the Standard Model, adding new parameters without getting rid of old ones. But I think it’s a mistake to say that if these theories are ruled out, the world is astonishingly simple. These theories are attempts to explain unlikely features of the old parameters, or unlikely features of the universe we observe. Without them, we’ve still got those unlikely, awkward, complicated bits.

Of course, Neil doesn’t think the Standard Model is all there is either, and while he’s not a fan of inflation, he does have proposals he’s worked on that explain the same observations, proposals that are also beyond the current picture. More broadly, he’s not suggesting here that the universe is just what we’ve figured out so far and no more. Rather, he’s suggesting that new proposals ought to build on the astonishing simplicity of the universe, instead of adding complexity, that we need to go back to the conceptual drawing board rather than correcting the universe with more gears and wheels.

On the one hand, that’s Perimeter’s mission statement in a nutshell. Perimeter’s independent nature means that folks here can focus on deeper conceptual modifications to the laws of physics, rather than playing with the sorts of gears and wheels that people already know how to work with.

On the other hand, a lack of new evidence doesn’t do anyone any favors. It doesn’t show the way for supersymmetry, but it doesn’t point to any of the “deep conceptual” approaches either. And so for some people, Neil’s glee at the lack of new evidence feels less like admiration for the simplicity of the cosmos and more like that one guy in a group project who sits back chuckling while everyone else fails. You can perhaps understand why some people felt resentful.