Monthly Archives: October 2015

What’s so Spooky about Action at a Distance?

With Halloween coming up, it’s time once again to talk about the spooky side of physics. And what could be spookier than action at a distance?


Ok, maybe not an obvious contender for spookiest concept of the year. But physicists have struggled with action at a distance for centuries, and there are deep reasons why.

It all dates back to Newton. In Newton’s time, all of nature was expected to be mechanical. One object pushes another, which pushes another in turn, eventually explaining everything that ever happens. And while people knew by that point that the planets were not circling around on literal crystal spheres, it was still hoped that their motion could be explained mechanically. The favored explanations of the time were vortices, whirlpools of celestial fluid that drove the planets around the Sun.

Newton changed all that. Not only did he set down a law of gravitation that didn’t use a fluid, he showed that no fluid could possibly replicate the planets’ motions. And while he remained agnostic about gravity’s cause, plenty of his contemporaries accused him of advocating “action at a distance”. People like Leibniz thought that a gravitational force without a mechanical cause would be superstitious nonsense, a betrayal of science’s understanding of the world in terms of matter.

For a while, Newton’s ideas won out. More and more, physicists became comfortable with explanations involving a force stretching out across empty space, using them for electricity and magnetism as these became more thoroughly understood.

Eventually, though, the tide began to shift back. Electricity and magnetism were explained, not in terms of action at a distance, but in terms of a field that filled the intervening space. In time, gravity was too.

The difference may sound purely semantic, but it means more than you might think. These fields were restricted in an important way: when the field changed, it changed at one point, and the changes spread at a speed limited by the speed of light. A theory composed of such fields has a property called locality, the property that all interactions are fundamentally local, that is, they happen at one specific place and time.
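If you want that in concrete terms, here’s a minimal sketch (my own toy check, nothing official): locality just says a cause has to fit inside the light cone, so anything farther away than light could have traveled is off the hook.

```python
# Toy check of locality: could an event at distance d have influenced us
# within elapsed time t? Only if a light-speed signal could cover the gap.
C = 299_792_458.0  # speed of light, meters per second

def could_have_caused(distance_m: float, elapsed_s: float) -> bool:
    """True if something distance_m away could affect us within elapsed_s seconds."""
    return distance_m <= C * elapsed_s

# Example: something ten light-minutes away, only five minutes ago?
print(could_have_caused(10 * 60 * C, 5 * 60))  # False: outside our past light cone
```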

Nowadays, we think of locality as one of the most fundamental principles in physics, on par with symmetry in space and time. And the reason why is that true action at a distance is quite a spooky concept.

Much of horror boils down to fear of the unknown. From what might lurk in the dark to the depths of the ocean, we fear that which we cannot know. And true action at a distance would mean that our knowledge might forever be incomplete. As long as everything is mediated by some field that changes at the speed of light, we can limit our search for causes. We can know that any change must be caused by something only a limited distance away, something we can potentially observe and understand. By contrast, true action at a distance would mean that forces from potentially anywhere in the universe could alter events here on Earth. We might never know the ultimate causes of what we observe; they might be stuck forever out of reach.

Some of you might be wondering, what about quantum mechanics? The phrase “spooky action at a distance” is famous because Einstein used it as an accusation against quantum entanglement, after all.

The key thing about quantum mechanics is that, as J. S. Bell showed, you can’t have locality…unless you throw out another property, called realism. Realism is the idea that quantum states have definite values for measurements before those measurements are taken. And while that sounds important, most people find getting rid of it much less scary than getting rid of locality. In a non-realistic world, at least we can still predict probabilities, even if we can’t observe certainties. In a non-local world, there might be aspects of physics that we just can’t learn. And that’s spooky.
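For the curious, here’s a toy version of Bell’s point (my own illustration, using the standard CHSH combination and the textbook singlet-state correlation): the quantum prediction exceeds the bound that any local, realistic theory has to satisfy.

```python
# CHSH illustration: for an entangled singlet state, quantum mechanics predicts
# a correlation E(a, b) = -cos(a - b) between detectors set at angles a and b.
# Local realism requires |E(a,b) - E(a,b') + E(a',b) + E(a',b')| <= 2.
import math

def E(a: float, b: float) -> float:
    return -math.cos(a - b)

# Detector angles that maximize the quantum value.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S)         # about 2.83, i.e. 2 * sqrt(2)
print(S <= 2.0)  # False: the local-realistic bound is violated
```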

When to Look under the Bed

Last week, Sabine Hossenfelder blogged about a rather interesting experiment, designed to test the quantum properties of gravity. Normally, quantum gravity is essentially unobservable: quantum effects are typically only relevant for very small systems, where gravity is extremely weak. However, there has been a lot of progress in putting larger and larger systems into interesting quantum states, and a team of experimentalists has recently proposed a setup that takes advantage of this progress. The experiment wouldn’t have enough detail to, for example, distinguish between rival models of quantum gravity, but it would provide evidence as to whether or not gravity is quantum at all.

Lubos Motl, meanwhile, argues that such an experiment is utterly pointless, because there is no possible way that gravity could not be quantum. I won’t blame you if you don’t read his argument since it’s written in his trademark…aggressive…style, but the gist is that it’s really hard to make sense of the idea that there are non-quantum things in an otherwise quantum world. It causes all sorts of issues with pretty much every interpretation of quantum mechanics, and throws the differences between those interpretations into particularly harsh and obvious light. From this perspective, checking to see if gravity might not actually be quantum (an idea called semi-classical gravity) is a bit like checking for a monster under the bed.

You might find semi-classical gravity!

In general, I share Motl’s reservations about semi-classical gravity. As I mentioned back when journalists were touting the BICEP2 results as evidence of quantum gravity, the idea that gravity could not be quantum doesn’t really make much sense. (Incidentally, Hossenfelder makes a similar point in her post.)

All that said, sometimes in science it’s absolutely worth looking under the bed.

Take another unlikely possibility, that of cell phone radiation causing cancer. Radiation that causes cancer does so by messing with the molecular bonds in DNA. In order to mess with molecular bonds, you need high-frequency light. That’s how UV light from the sun can cause skin cancer. Cell phones emit microwaves, which are very low-frequency light. That’s what allows them to be useful inside of buildings, where normal light wouldn’t reach. It also means it’s impossible for them to cause cancer.
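If you want numbers, here’s a quick back-of-the-envelope check (my own figures, just using E = hf): a microwave photon carries nowhere near the few electron-volts it takes to break a molecular bond, while an ultraviolet photon carries plenty.

```python
# Compare single-photon energies to the ~few eV needed to break a bond in DNA.
H = 6.626e-34    # Planck's constant, joule-seconds
EV = 1.602e-19   # joules per electron-volt

def photon_energy_ev(frequency_hz: float) -> float:
    return H * frequency_hz / EV

print(photon_energy_ev(2.4e9))   # cell-phone microwave: about 1e-5 eV
print(photon_energy_ev(1.0e15))  # ultraviolet light: about 4 eV
# Microwaves fall short by roughly five orders of magnitude; UV does not.
```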

Nevertheless, if nobody had ever studied whether cell phones cause cancer, it would probably be worth at least one study. If that study came back positive, it would say something interesting, either about the study’s design or about other possible causes of cancer. If negative, the topic could be put to bed more convincingly. As it happens, those studies have been done, and overall confirm the expectations we have from basic science.

Another important point here is that experimentalists and theorists have different priorities, due to their different specializations. Theorists are interested in confirmation for particular theories: they want not just an unknown particle, but a gluino, and not just a gluino, but the gluino predicted by their particular model of supersymmetry. By contrast, experimentalists typically aren’t very interested in proving or disproving one theory or another. Rather, they look for general signals that indicate broad classes of new physics. For example, experimentalists might use the LHC to look for a leptoquark, a particle that allows quarks and leptons to interact, without caring what theory might produce it. Experimentalists are also very interested in improving their techniques. Much as for theorists, a lot of the interesting work in the field involves pushing the current state of the art as far as it will go.

So, when should we look under the bed?

Well, if nobody has ever looked under this particular bed before, and if seeing something strange under this bed would at least be informative, and if looking under the bed serves as a proving ground for the latest in bed-spelunking technology, then yes, we should absolutely look under this bed.

Just don’t expect to see any monsters.

Is Everything Really Astonishingly Simple?

Neil Turok gave a talk last week, entitled The Astonishing Simplicity of Everything. In it, he argued that our current understanding of physics is really quite astonishingly simple, and that recent discoveries seem to be confirming this simplicity.

For the right sort of person, this can be a very uplifting message. The audience was spellbound. But a few of my friends were pretty thoroughly annoyed, so I thought I’d dedicate a post to explaining why.

Neil’s talk built up to showing a graphic, one of the masterpieces of Perimeter’s publications department.

Looked at in this way, the laws of physics look astonishingly simple. One equation, a few terms, each handily labeled with a famous name of some (occasionally a little hazy) relevance to the symbol in question.

In a sense, the world really is that simple. There are only a few kinds of laws that govern the universe, and the concepts behind them are really, deep down, very simple concepts. Neil adroitly explained some of the concepts behind quantum mechanics in his talk (here represented by the Schrodinger, Feynman, and Planck parts of the equation), and I have a certain fondness for the Maxwell-Yang-Mills part. The other parts represent different kinds of particles, and different ways they can interact.
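The graphic itself isn’t reproduced here, but the equation it dresses up is, roughly, a path integral over all the known fields, something like the expression below (my own rough rendering, not the actual graphic): the exponential-of-an-action business is the quantum-mechanical part (Schrodinger, Feynman, Planck), R is gravity, F is Maxwell-Yang-Mills, the ψ terms are Dirac, the Yukawa matrix y is Kobayashi-Maskawa, and φ is the Higgs.

```latex
Z = \int \mathcal{D}\phi \;
    \exp\!\left\{ \frac{i}{\hbar} \int d^4x \, \sqrt{-g}
    \left[ \frac{R}{16\pi G}
         - \frac{1}{4} F_{\mu\nu} F^{\mu\nu}
         + i \bar{\psi} \gamma^{\mu} D_{\mu} \psi
         + \left( \bar{\psi}_i \, y_{ij} \, \psi_j \, \phi + \text{h.c.} \right)
         + |D_{\mu} \phi|^2 - V(\phi) \right] \right\}
```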

While there are only a few different kinds of laws, though, that doesn’t mean the existing laws are simple. That nice, elegant equation hides 25 arbitrary parameters in its Maxwell-Yang-Mills, Dirac, Kobayashi-Maskawa, and Higgs parts. It also omits the cosmological constant, which fuels the expansion of the universe. And there are problems if you try to claim that the gravity part, for example, is complete.
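To be concrete, here’s one common way of counting those parameters (my own bookkeeping; conventions differ, and some people also include the QCD theta angle for a total of 26):

```python
# Free parameters of the Standard Model, counted with massive neutrinos.
parameters = {
    "quark masses": 6,
    "charged lepton masses": 3,
    "neutrino masses": 3,
    "gauge couplings": 3,
    "Higgs mass and vacuum value": 2,
    "quark mixing (CKM) angles and phase": 4,
    "neutrino mixing (PMNS) angles and phase": 4,
}
print(sum(parameters.values()))  # 25
```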

When Neil mentions recent discoveries, he’s referring to the LHC not seeing new supersymmetric particles, and to telescopes not seeing any unusual features in the cosmic microwave background. The theories that were being tested, supersymmetry and inflation, are in many ways more complicated than the Standard Model, adding new parameters without getting rid of old ones. But I think it’s a mistake to say that if these theories are ruled out, the world is astonishingly simple. These theories are attempts to explain unlikely features of the old parameters, or unlikely features of the universe we observe. Without them, we’ve still got those unlikely, awkward, complicated bits.

Of course, Neil doesn’t think the Standard Model is all there is either, and while he’s not a fan of inflation, he does have proposals he’s worked on that explain the same observations, proposals that are also beyond the current picture. More broadly, he’s not suggesting here that the universe is just what we’ve figured out so far and no more. Rather, he’s suggesting that new proposals ought to build on the astonishing simplicity of the universe, instead of adding complexity, that we need to go back to the conceptual drawing board rather than correcting the universe with more gears and wheels.

On the one hand, that’s Perimeter’s mission statement in a nutshell. Perimeter’s independent nature means that folks here can focus on deeper conceptual modifications to the laws of physics, rather than playing with the sorts of gears and wheels that people already know how to work with.

On the other hand, a lack of new evidence doesn’t do anyone any favors. It doesn’t show the way for supersymmetry, but it doesn’t point to any of the “deep conceptual” approaches either. And so for some people, Neil’s glee at the lack of new evidence feels less like admiration for the simplicity of the cosmos and more like that one guy in a group project who sits back chuckling while everyone else fails. You can perhaps understand why some people felt resentful.

Hooray for Neutrinos!

Congratulations to Takaaki Kajita and Arthur McDonald, winners of this year’s Nobel Prize in Physics, as well as to the Super-Kamiokande and SNO teams that made their work possible.


Unlike last year’s Nobel, this is one I’ve been anticipating for quite some time. Kajita and McDonald discovered that neutrinos have mass, and that discovery remains our best hint that there is something out there beyond the Standard Model.

But I’m getting a bit ahead of myself.

Neutrinos are the lightest of the fundamental particles, and for a long time they were thought to be completely massless. Their name means “little neutral one”, and it’s probably the last time physicists used “-ino” to mean “little”. Neutrinos are “neutral” because they have no electrical charge. They also don’t interact with the strong nuclear force. Only the weak nuclear force has any effect on them. (Well, gravity does too, but very weakly.)

This makes it very difficult to detect neutrinos: you have to catch them interacting via the weak force, which is, well, weak. Originally, that meant they had to be inferred by their absence: missing energy in nuclear reactions carried away by “something”. Now, they can be detected, but it requires massive tanks of fluid, carefully watched for the telltale light of the rare interactions between neutrinos and ordinary matter. You wouldn’t notice if billions of neutrinos passed through you every second, like an unstoppable army of ghosts. And in fact, that’s exactly what happens!

Visualization of neutrinos from a popular documentary

In the 1960s, scientists began to use these giant tanks of fluid to detect neutrinos coming from the sun. An enormous amount of effort goes into understanding the sun, and these days our models of it are pretty accurate, so it came as quite a shock when researchers observed only half the neutrinos they expected. It wasn’t until the work of Super-Kamiokande in 1998, and SNO in 2001, that we knew the reason why.

As it turns out, neutrinos oscillate. Neutrinos are produced in what are called flavor states, which match up with the different types of leptons. There are electron-neutrinos, muon-neutrinos, and tau-neutrinos.

Radioactive processes usually produce electron-neutrinos, so those are the type that the sun produces. But on their way from the sun to the earth, these neutrinos “oscillate”: they switch between electron-neutrinos and the other types! The older detectors, focused only on electron-neutrinos, couldn’t see this. SNO’s big advantage was that it could detect the other types of neutrinos as well, and tell the difference between them, which allowed it to see that the “missing” neutrinos were really just turning into other flavors! Meanwhile, Super-Kamiokande measured neutrinos coming not from the sun, but from cosmic rays reacting with the upper atmosphere. Some of these neutrinos came from the sky above the detector, while others traveled all the way through the earth below it, from the atmosphere on the other side. By observing “missing” neutrinos coming from below but not from above, Super-Kamiokande confirmed that it wasn’t the sun’s fault that we were missing solar neutrinos; neutrinos just oscillate!

What does this oscillation have to do with neutrinos having mass, though?

Here things get a bit trickier. I’ve laid some of the groundwork in older posts. I’ve told you to think about mass as “energy we haven’t met yet”, as the energy something has when it’s left alone, all by itself. I’ve also mentioned that conservation laws come from symmetries of nature, and that energy conservation is a result of symmetry in time.

This should make it a little more plausible when I say that when something has a specific mass, it doesn’t change. It can decay into other particles, or interact with other forces, but left alone, by itself, it won’t turn into something else. To be more specific, it doesn’t oscillate. A state with a fixed mass is symmetric in time.
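In equations (this is just standard quantum mechanics, nothing neutrino-specific): a state of definite mass, and thus definite energy, only picks up an overall phase as time passes, so nothing you can measure about it on its own ever changes.

```latex
|\nu_i(t)\rangle = e^{-i E_i t / \hbar} \, |\nu_i\rangle
\qquad \Rightarrow \qquad
\left| \langle \nu_i | \nu_i(t) \rangle \right|^2 = 1
```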

The only way neutrinos can oscillate between flavor states, then, is if one flavor state is actually a combination (in quantum terms, a superposition) of different masses. The components with different masses move at different speeds, so at any point along their path you can be more or less likely to see certain masses of neutrinos. As the mix of masses changes, the flavor state changes, so neutrinos end up oscillating from electron-neutrino, to muon-neutrino, to tau-neutrino.
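If you want to play with this, the standard two-flavor approximation boils the whole story down to one formula; here’s a sketch in code (the numbers I plug in are just illustrative, not the actual experimental values):

```python
# Two-flavor neutrino oscillation: the probability depends on the mass-squared
# difference (in eV^2), the mixing angle, and the distance traveled (km) per
# unit energy (GeV).
import math

def oscillation_probability(L_km: float, E_GeV: float,
                            delta_m2_eV2: float, theta: float) -> float:
    """Probability of starting as one flavor and being detected as the other."""
    return math.sin(2 * theta) ** 2 * math.sin(1.27 * delta_m2_eV2 * L_km / E_GeV) ** 2

# A 1 GeV atmospheric-style neutrino, assuming near-maximal mixing:
for L in (10.0, 13000.0):  # roughly "from above" vs. "through the Earth"
    print(L, oscillation_probability(L, 1.0, 2.5e-3, math.pi / 4))
```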

So because of neutrino oscillation, neutrinos have to have mass. But this presented a problem. Most fundamental particles get their mass from interacting with the Higgs field. But, as it turns out, neutrinos can’t interact with the Higgs field. This has to do with the fact that neutrinos are “chiral”, and only come in a “left-handed” orientation. Only if they had both types of “handedness” could they get their mass from the Higgs.

As-is, they have to get their mass another way, and that way has yet to be definitively shown. Whatever it ends up being, it will be beyond the current Standard Model. Maybe there actually are right-handed neutrinos, but they’re too massive, or interact too weakly, for them to have been discovered. Maybe neutrinos are Majorana particles, getting mass in a novel way that hasn’t been seen yet in the Standard Model.
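Schematically (my own shorthand, glossing over details like how to keep everything gauge invariant), those two options look like this:

```latex
\text{Dirac:}\qquad \mathcal{L}_D \sim y \, \bar{\nu}_L H \, \nu_R + \text{h.c.}
\qquad \text{(needs a right-handed partner } \nu_R\text{)}

\text{Majorana:}\quad \mathcal{L}_M \sim \tfrac{1}{2} m \, \overline{\nu_L^{\,c}} \, \nu_L + \text{h.c.}
\quad \text{(uses only the left-handed neutrino)}
```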

Whatever we discover, neutrinos are currently our best evidence that something lies beyond the Standard Model. Naturalness may have philosophical problems, dark matter may be explained away by modified gravity…but if neutrinos have mass, there’s something we still have yet to discover. And that definitely seems worthy of a Nobel to me!

Hexagon Functions III: Now with More Symmetry

I’ve got a new paper up this week.

It’s a continuation of my previous work, understanding collisions involving six particles in my favorite theory, N=4 super Yang-Mills.

This time, we’re pushing up the complexity, going from three “loops” to four. In the past, I could have impressed you with the number of pages the formulas I’m calculating take up (eight hundred pages for the three-loop formula from that first Hexagon Functions paper). Now, though, I don’t have that number: putting my four-loop formula into a pdf-making program just crashes the program. Instead, I’ll have to impress you with file sizes: 2.6 MB for the three-loop formula, 96 MB for the four-loop one.

Calculating such a formula sounds like a pretty big task, and it was, the first time. But things got a lot simpler after a chat I had at Amplitudes.

We calculate these things using an ansatz, a guess for what the final answer should look like. The more vague our guess, the more parameters we need to fix, and the more work we have in general. If we can guess more precisely, we can start with fewer parameters and things are a lot easier.

Often, more precise guesses come from understanding the symmetries of the problem. If we can know that the final answer must be the same after making some change, we can rule out a lot of possibilities.
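As a toy example of the strategy (nothing like the real hexagon-function calculation, just the general idea): write down an ansatz with unknown coefficients, let a symmetry requirement kill most of them, and fix whatever survives by other means.

```python
# Toy ansatz: f(x) = c0 + c1*x + c2*x**2 + c3*x**3, four unknown parameters.
# Demanding the symmetry f(x) = f(-x) forces c1 = c3 = 0, so only two remain.
# Suppose other constraints tell us f(0) = 1 and f(2) = 9; that fixes the rest.
import numpy as np

A = np.array([[1.0, 0.0],   # c0 + 0 * c2 = f(0)
              [1.0, 4.0]])  # c0 + 4 * c2 = f(2)
b = np.array([1.0, 9.0])
c0, c2 = np.linalg.solve(A, b)
print(c0, c2)  # 1.0 and 2.0, i.e. f(x) = 1 + 2*x**2
```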

Sometimes, these symmetries are known features of the answer, things that someone proved had to be correct. Other times, though, they’re just observations, things that have been true in the past and might be true again.

We started out using an observation from three loops. That got us pretty far, but we still had a lot of work to do: 808 parameters, to be fixed by other means. Fixing them took months of work, and throughout we hoped that there was some deeper reason behind the symmetries we observed.

Finally, at Amplitudes, I ran into fellow amplitudeologist Simon Caron-Huot and asked him if he knew the source of our observed symmetry. In just a few days he was able to link it to supersymmetry, giving us justification for our jury-rigged trick. Better yet, we figured out that his explanation went further than any of us expected. In the end, rather than 808 parameters we only really needed to consider 34.

Thirty-four options to consider. Thirty-four possible contributions to a ~100 MB file. That might not sound like a big deal, but compared to eight hundred and eight it’s a huge deal. More symmetry means easier calculations, meaning we can go further. At this point going to the next step in complexity, to five loops rather than four, might be well within reach.