
GUTs vs ToEs: What Are We Unifying Here?

“Grand Unified Theory” and “Theory of Everything” may sound like meaningless grandiose titles, but they mean very different things.

In particular, Grand Unified Theory, or GUT, is a technical term, referring to a specific way to unify three of the fundamental interactions: electromagnetism, the weak force, and the strong force.

[Image: anatomy of the small intestine]

In contrast, guts unify the two fundamental intestines.

Those three forces are called Yang-Mills forces, and they can all be described in the same basic way. In particular, each has a strength (the coupling constant) and a mathematical structure that determines how it interacts with itself, called a group.

The core idea of a GUT, then, is pretty simple: to unite the three Yang-Mills forces, they need to have the same strength (the same coupling constant) and be part of the same group.

But wait! (You say, still annoyed at the pun in the above caption.) These forces don’t have the same strength at all! One of them’s strong, one of them’s weak, and one of them is electromagnetic!

As it turns out, this isn’t as much of a problem as it seems. While the three Yang-Mills forces seem to have very different strengths on an everyday scale, that’s not true at very high energies. Let’s steal a plot from Sweden’s Royal Institute of Technology:

[Plot: the running of the coupling constants with energy]

Why Sweden? Why not!

What’s going on in this plot?

Here, each \alpha represents the strength of a fundamental force. As the force gets stronger, \alpha gets bigger (and so \alpha^{-1} gets smaller). The variable on the x-axis is the energy scale. The grey lines represent a world without supersymmetry, while the black lines show the world in a supersymmetric model.

So based on this plot, it looks like the strengths of the fundamental forces change based on the energy scale. That’s true, but if you find that confusing there’s another, mathematically equivalent way to think about it.

You can think about each force as having some sort of ultimate strength, the strength it would have if the world weren’t quantum. Without quantum mechanics, each force would interact with particles in only the simplest of ways, corresponding to the simplest diagram here.

However, our world is quantum mechanical. Because of that, when we try to measure the strength of a force, we’re not really measuring its “ultimate strength”. Rather, we’re measuring it alongside a whole mess of other interactions, corresponding to the other diagrams in that post. These extra contributions mean that what looks like the strength of the force gets stronger or weaker depending on the energy of the particles involved.

(I’m sweeping several things under the rug here, including a few infinities and electroweak unification. But if you just want a general understanding of what’s going on, this should be a good starting point.)

If you look at the plot, you’ll see the forces meet up somewhere around 10^16 GeV. They miss each other for the faint, non-supersymmetric lines, but they meet fairly cleanly for the supersymmetric ones.
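If you want to see roughly how a plot like this is made, here’s a numerical sketch of one-loop running. The starting values and beta coefficients are standard textbook numbers (with GUT-normalized hypercharge), not read off the plot above, so treat this as an illustration rather than a precision fit; in particular, real analyses only switch on the supersymmetric beta functions above the superpartner mass scale, while here we run them from M_Z for simplicity.

```python
import math

# One-loop running: alpha_i^{-1}(mu) = alpha_i^{-1}(M_Z) - (b_i / 2*pi) * ln(mu / M_Z)
M_Z = 91.19  # GeV

# Rough inverse couplings at M_Z (GUT-normalized hypercharge):
alpha_inv_MZ = {1: 59.0, 2: 29.6, 3: 8.5}

# One-loop beta coefficients:
b_SM   = {1: 41/10, 2: -19/6, 3: -7}  # Standard Model (the faint lines)
b_MSSM = {1: 33/5,  2: 1,     3: -3}  # supersymmetric model (the dark lines)

def alpha_inv(i, mu, b):
    """Inverse coupling of force i at energy scale mu (GeV)."""
    return alpha_inv_MZ[i] - b[i] / (2 * math.pi) * math.log(mu / M_Z)

# Near 2 * 10^16 GeV the supersymmetric lines nearly meet
# (all three land within about 0.1 of each other), while
# the Standard Model lines miss each other by several units.
mu = 2e16
for i in (1, 2, 3):
    print(i, round(alpha_inv(i, mu, b_MSSM), 2), round(alpha_inv(i, mu, b_SM), 2))
```

Plotting these three functions of mu for both sets of beta coefficients reproduces the qualitative picture in the figure: three lines converging to a point in the supersymmetric case, and narrowly missing in the Standard Model.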

So (at least if supersymmetry is true), making the Yang-Mills forces have the same strength is not so hard. Putting them in the same mathematical group is where things get trickier. This is because any group that contains the groups of the fundamental forces will be “bigger” than just the sum of those forces: it will contain “extra forces” that we haven’t observed yet, and these forces can do unexpected things.
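To give the classic example (one worth naming, though nothing above commits to it): the Standard Model’s three Yang-Mills forces correspond to three separate groups, and the original Georgi-Glashow GUT packs all three into the single group SU(5),

```latex
\underbrace{SU(3)}_{\text{strong}} \times \underbrace{SU(2)}_{\text{weak}} \times \underbrace{U(1)}_{\text{hypercharge}} \;\subset\; SU(5)
```

Counting generators shows where the “extra forces” come from: SU(5) has 24, while the Standard Model groups supply only 8 + 3 + 1 = 12, leaving 12 new force-carriers (the so-called X and Y bosons) that are precisely what lets protons decay in these models.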

In particular, the “extra forces” predicted by GUTs usually make protons unstable. As far as we can tell, protons are very long-lasting: if protons decayed too fast, we wouldn’t have stars. So if protons decay, they must do it only very rarely, detectable only with very precise experiments. These experiments are powerful enough to rule out most of the simplest GUTs. The more complicated GUTs still haven’t been ruled out, but the experimental pressure has been enough to cool interest in GUTs as a research topic.

What about Theories of Everything, or ToEs?

While GUT is a technical term, ToE is very much not. Instead, it’s a phrase that journalists have latched onto because it sounds cool. As such, it doesn’t really have a clear definition. Usually it means uniting gravity with the other fundamental forces, but occasionally people use it to refer to a theory that also unifies the various Standard Model particles into some sort of “final theory”.

Gravity is very different from the other fundamental forces, different enough that it’s kind of silly to group them as “fundamental forces” in the first place. Thus, while GUT models are the kind of thing one can cook up and tinker with, any ToE has to be based on some novel insight, one that lets you express gravity and Yang-Mills forces as part of the same structure.

So far, string theory is the only such insight we have access to. This isn’t just me being arrogant: while there are other attempts at theories of quantum gravity, aside from some rather dubious claims none of them even attempt to unify gravity with the other forces.

This doesn’t mean that string theory is necessarily right. But it does mean that if you want a different “theory of everything”, telling physicists to go out and find a new one isn’t going to be very productive. “Find a theory of everything” is a hope, not a research program, especially if you want people to throw out the one structure we have that even looks like it can do the job.

Four Gravitons and Some Wildly Irresponsible Amplitudes Predictions

My post on the “physics of decimals” a couple of weeks back caught physics blogger Luboš Motl’s attention, with predictable results. Mostly, this led to a rather unproductive debate about semantics, but he did bring up one thing that I think deserves some further clarification.

In my post, I asked you to imagine asking a genie for the full consequences of quantum field theory. Short of genie-based magic, is this the sort of thing I think it’s at all possible to know?

[Image: Robin Williams as the Genie from Aladdin]

A Candle of Invocation? Sure, why not.

In a word, no.

The world is messy, not the sort of thing that tends to be described by neat exact solutions. That’s why we use approximations, and it’s why physicists can’t just step in and solve biology or psychology by deriving everything from first principles.

That said, the nice thing about approximations is that there’s often room for improvement. Sometimes this is quantitative, literally pushing to the next order of decimals, while sometimes it’s qualitative, viewing problems from a new perspective and attacking them from a new approach.

I’d like to give you some idea of the sorts of improvements I think are possible. I’ll focus on scattering amplitudes, since they’re my field. In order to be precise, I’ll be using technical terms here without much explanation; if you’re curious about something specific go ahead and ask in the comments. Finally, there are no implied time-scales here: I’ll be rating things based on whether I think they’re likely to eventually be understood, not on how long it will take us to get there.

Let’s begin with the most likely category:

Probably going to happen:

Mathematicians characterize the set of n-point cluster polylogarithms whose collinear limits are well-defined (n-1)-point cluster polylogarithms.

The seven-loop N=8 supergravity integrand is found, and the coefficient of its potential divergence is evaluated.

The dual Amplituhedron is found.

A general procedure is described for re-summing the L-loop coefficient of the Pentagon OPE for any L into a polylogarithmic form, at least at six points.

We figure out what the heck is up with the MHV-NMHV relation we found here.

Likely to happen, but there may be unforeseen complications:

N=8 supergravity is found to be finite at seven loops.

A symbol bootstrap becomes workable for QCD amplitudes at two or three loops, perhaps involving Landau singularities.

Something like a symbol bootstrap becomes workable for elliptic integrals, though it may only pass a “physicist” level of rigor.

Analogues to all of the work up to the actual Amplituhedron itself are performed for non-planar N=4 super Yang-Mills.

Quite possible, but I’m likely overoptimistic:

The space of n-point cluster polylogarithms whose collinear limits are well-defined (n-1)-point cluster polylogarithms that also obey the first entry condition and some number of final entry conditions turns out to be well-constrained enough that some all-loop all-point statements can be made, at least for MHV.

The enhanced cancellations observed in supergravity theories are understood, and used to provide a strong argument that N=8 supergravity is perturbatively finite.

All-multiplicity analytic QCD results at two loops, for at least the simpler helicity configurations.

The volume of the dual Amplituhedron is characterized by mathematicians and the connection to cluster polylogarithms is fully explored.

A non-planar Amplituhedron is found.

Less likely, but if all of the above happens I would not be all that surprised:

A way is found to double-copy the non-planar Amplituhedron to get an N=8 supergravity Amplituhedron.

The enhanced cancellations in N=8 supergravity turn out to be something “deep”: perhaps they are derivable from string theory, or provide a novel constraint on quantum gravity theories.

Various all-loop statements about the polylogarithms present in N=4 are used to make more restricted all-loop statements about QCD.

The Pentagon OPE is re-summed for finite coupling, if not into known functions then into a form that admits good numerics and various analytic manipulations. Alternatively, the sorts of functions that the Pentagon OPE can sum to are characterized and a bootstrap procedure becomes viable for them.

Irresponsible speculations, suited to public talks or grant applications:

The N=8 Amplituhedron leads to some sort of reformulation of space-time in a way that solves various quantum gravity paradoxes.

The sorts of mathematical objects found in the finite-coupling resummation of the Pentagon OPE lead to a revival of the original analytic S-matrix program, now with an actual chance to succeed.

Extremely unlikely:

Analytic all-loop QCD results.

Magical genie land:

Analytic finite coupling QCD results.

In Defense of Lord Kelvin, Michelson, and the Physics of Decimals

William Thomson, Lord Kelvin, was a towering genius of 19th century physics. He is often quoted as saying,

There is nothing new to be discovered in physics now. All that remains is more and more precise measurement.

[Photograph of Lord Kelvin]

Certainly sounds like something I would say!

As it happens, he never actually said this. It’s a paraphrase of a quote from Albert Michelson, of the Michelson-Morley Experiment:

While it is never safe to affirm that the future of Physical Science has no marvels in store even more astonishing than those of the past, it seems probable that most of the grand underlying principles have been firmly established and that further advances are to be sought chiefly in the rigorous application of these principles to all the phenomena which come under our notice. It is here that the science of measurement shows its importance — where quantitative work is more to be desired than qualitative work. An eminent physicist remarked that the future truths of physical science are to be looked for in the sixth place of decimals.

[Photograph of Albert Abraham Michelson]

Now that’s more like it!

In hindsight, this quote looks pretty silly. When Michelson said that “it seems probable that most of the grand underlying principles have been firmly established” he was leaving out special relativity, general relativity, and quantum mechanics. From our perspective, the grandest underlying principles had yet to be discovered!

And yet, I think we should give Michelson some slack.

Someone asked me on twitter recently what I would choose if given the opportunity to unravel one of the secrets of the universe. At the time, I went for the wishing-for-more-wishes answer: I’d ask for a procedure to discover all of the other secrets.

I was cheating, to some extent. But I do think that the biggest and most important mystery isn’t black holes or the big bang, isn’t asking what will replace space-time or what determines the constants in the Standard Model. The most critical, most important question in physics, rather, is to find the consequences of the principles we actually know!

We know our world is described fairly well by quantum field theory. We’ve tested it, not just to the sixth decimal place, but to the tenth. And while we suspect it’s not the full story, it should still describe the vast majority of our everyday world.

If we knew not just the underlying principles, but the full consequences of quantum field theory, we’d understand almost everything we care about. But we don’t. Instead, we’re forced to calculate with approximations. When those approximations break down, we fall back on experiment, trying to propose models that describe the data without precisely explaining it. This is true even for something as “simple” as the distribution of quarks inside a proton. Once you start trying to describe materials, or chemistry or biology, all bets are off.

This is what the vast majority of physics is about. Even more, it’s what the vast majority of science is about. And that’s true even back to Michelson’s day. Quantum mechanics and relativity were revelations…but there are still large corners of physics in which neither matters very much, and even larger parts of the more nebulous “physical science”.

New fundamental principles get a lot of press, but you shouldn’t discount the physics of “the sixth place of decimals”. Most of the big mysteries don’t ask us to challenge our fundamental paradigm: rather, they’re challenges to calculate or measure better, to get more precision out of rules we already know. If a genie gave me the solution to any of physics’ mysteries I’d choose to understand the full consequences of quantum field theory, or even of the physics of Michelson’s day, long before I’d look for the answer to a trendy question like quantum gravity.

Things You Don’t Know about the Power of the Dark Side

Last Wednesday, Katherine Freese gave a Public Lecture at Perimeter on the topic of Dark Matter and Dark Energy. The talk should be on Perimeter’s YouTube page by the time this post is up.

Answering twitter questions during the talk made me realize that there’s a lot the average person finds confusing about Dark Matter and Dark Energy. Freese addressed much of this pretty well in her talk, but I felt like there was room for improvement. Rather than try to tackle it myself, I decided to interview an expert on the Dark Side of the universe.

[Image: Darth Vader]

Twitter doesn’t know the power of the dark side!

Lord Vader, some people have a hard time distinguishing Dark Matter and Dark Energy. What do you have to say to them?

Fools! Light side astronomers call “dark” that which they cannot observe and cannot understand. “Fear” and “anger” are different heights of emotion, but to the Jedi they are only the path to the Dark Side. Dark Energy and Dark Matter are much the same: both distinct, both essential to the universe, and both “dark” to the telescopes of the light.

Let’s start with Dark Matter. Is it really matter?

You ask an empty question. “Matter” has been defined in many ways. When we on the Dark Side refer to Dark Matter, we merely mean to state that it behaves much like the matter you know: it is drawn to and fro by gravity, sloshing about.

It is distinct from your ordinary matter in that two of the forces of nature, the strong nuclear force and electromagnetism, do not concern it. Ordinary matter is bound together in the nuclei of atoms by the strong force, or woven into atoms and molecules by electromagnetism. This makes it subject to all manner of messy collisions.

Dark Matter, in contrast, is pure, partaking neither of nuclear nor chemical reactions. It passes through each of us with no notice. Only the weak nuclear force and gravity affect it. The latter has brought it slowly into clumps and threads through the universe, each one a vast nest for groupings of stars. Truly, Dark Matter surrounds us, penetrates us, and binds the galaxy together.

Could Dark Matter be something we’re more familiar with, like neutrinos or black holes? What about a modification of gravity?

Many wondered as much, when the study of the Dark Side was young. They were wrong.

The matter you are accustomed to composes merely a twentieth of the universe, while Dark Matter is more than a quarter. There is simply not enough of these minor contributions, neutrinos and black holes, to account for the vast darkness that surrounds the galaxy, and with each astronomer’s investigation we grow more assured.

As for modifying gravity, do you seek to modify a fundamental Force?

If so, you should be wary. Forces, by their nature, are accompanied by particles, and gravity is no exception. Take care that your tinkering does not result in a new sort of particle. If so, you may be unknowingly walking the path of the Dark Side, for your modification may be just another form of Dark Matter.

What sort of things could Dark Matter be? Can Dark Matter decay into ordinary matter? Could there be anti-Dark Matter?

As of yet, your scientists are still baffled by the nature of Dark Matter. Still, there are limits. Since only rare events could produce it from ordinary matter, the universe’s supply of Dark Matter must be ancient, dating back to the dawn of the cosmos. In that case, it must decay only slowly, if at all. Similarly, if Dark Matter had antimatter forms then its interactions must be so weak that it has not simply annihilated with its antimatter half across the universe. So while either is possible, it may be simpler for your theorists if Dark Matter did not decay, and was its own antimatter counterpart. On the other hand, if Dark Matter did undergo such reactions, your kind may one day be able to detect it.

Of course, as a master of the Dark Side I know the true nature of Dark Matter. However, I could only impart it to a loyal apprentice…

Yeah, I think I’ll pass on that. They say you can only get a job in academia when someone dies, but unlike the Sith they don’t mean it literally.

Let’s move on to Dark Energy. What can you tell us about it?

Dark “Energy”, like Dark Matter, is named for what people on your Earth cannot comprehend. Nothing, not even Dark Energy, is “made of energy”. Dark Energy is “energy” merely because it behaves unlike matter.

Matter, even Dark Matter, is drawn together by the force of gravity. Under its yoke, the universe would slow down in its expansion and eventually collapse into a crunch, like the throat of an incompetent officer.

However, the universe is not collapsing, but accelerating, galaxies torn away from each other by a force that must compose more than two thirds of the universe. It is rather like the Yuuzhan Vong, a mysterious force from outside the galaxy that scouts persistently under- or over-estimate.

Umm, I’m pretty sure the Yuuzhan Vong don’t exist anymore, since Disney got rid of the Expanded Universe.

That perfidious Mouse!

Well folks, Vader is now on a rampage of revenge in the Disney offices, so I guess we’ll have to end the interview. Tune in next week, and until then, may the Force be with you!

Entropy is Ignorance

(My last post had a poll in it! If you haven’t responded yet, please do.)

Earlier this month, philosopher Richard Dawid ran a workshop entitled “Why Trust a Theory? Reconsidering Scientific Methodology in Light of Modern Physics” to discuss his idea of “non-empirical theory confirmation” for string theory, inflation, and the multiverse. They haven’t published the talks online yet, so I’m stuck reading coverage, mostly these posts by skeptical philosopher Massimo Pigliucci. I find the overall concept annoying, and may rant about it later. For now though, I’d like to talk about a talk from the second day by philosopher Chris Wüthrich about black hole entropy.

Black holes, of course, are the entire-stars-collapsed-to-a-point-from-which-no-light-can-escape objects that everyone knows and loves. Entropy is often thought of as the scientific term for chaos and disorder, the universe’s long slide towards dissolution. In reality, it’s a bit more complicated than that.

[Image: the eight-pointed Chaos Star]

For one, you need to take Elric into account…

Can black holes be disordered? Naively, that doesn’t seem possible. How can a single point be disorderly?

Thought about in a bit more detail, the conclusion seems even stronger. Via something called the “No Hair Theorem”, it’s possible to prove that black holes can be described completely with just three numbers: their mass, their charge, and how fast they are spinning. With just three numbers, how can there be room for chaos?

On the other hand, you may have heard of the Second Law of Thermodynamics. The Second Law states that entropy always increases. Absent external support, things will always slide towards disorder eventually.

If you combine this with black holes, then this seems to have weird implications. In particular, what happens when something disordered falls into a black hole? Does the disorder just “go away”? Doesn’t that violate the Second Law?

This line of reasoning has led to the idea that black holes have entropy after all. It led Bekenstein to calculate the entropy of a black hole based on how much information is “hidden” inside, and Hawking to find that black holes in a quantum world should radiate as if they had a temperature consistent with that entropy. One of the biggest successes of string theory is an explanation for this entropy. In string theory, black holes aren’t perfect points: they have structure, arrangements of strings and higher dimensional membranes, and this structure can be disordered in a way that seems to give the right entropy.
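For reference (the formula itself isn’t spelled out above, but it’s the standard result): the entropy Bekenstein and Hawking arrived at is proportional to the area of the black hole’s event horizon, not its volume,

```latex
S_{\text{BH}} \;=\; \frac{k_B\, c^3\, A}{4\, G\, \hbar}
```

That area-scaling is part of what makes the result so striking, and it is this number that string theory’s counting of string and membrane arrangements manages to reproduce.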

Note that none of this has been tested experimentally. Hawking radiation, if it exists, is very faint: not the sort of thing we could detect with a telescope. Wüthrich is worried that Bekenstein’s original calculation of black hole entropy might have been on the wrong track, which would undermine one of string theory’s most well-known accomplishments.

I don’t know Wüthrich’s full argument, since the talks haven’t been posted online yet. All I know is Pigliucci’s summary. From that summary, it looks like Wüthrich’s primary worry is about two different definitions of entropy.

See, when I described entropy as “disorder”, I was being a bit vague. There are actually two different definitions of entropy. The older one, Gibbs entropy, grows with the number of states of a system. What does that have to do with disorder?

Think about two different substances: a gas, and a crystal. Both are made out of atoms, but the patterns involved are different. In the gas, atoms are free to move, while in the crystal they’re (comparatively) fixed in place.

[Image: the phases of matter: solid, liquid, and gas]

Blurrily so in this case

There are many different ways the atoms of a gas can be arranged and still be a gas, but fewer in which they can be a crystal, so a gas has more entropy than a crystal. Intuitively, the gas is more disordered.

When Bekenstein calculated the entropy of a black hole he didn’t use Gibbs entropy, though. Instead, he used Shannon entropy, a concept from information theory. Shannon entropy measures the amount of information in a message, with a formula very similar to that of Gibbs entropy: the more different ways you can arrange something, the more information you can use it to send. Bekenstein used this formula to calculate the amount of information that gets hidden from us when something falls into a black hole.
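The two formulas really are the same up to units and a choice of logarithm. As a small illustration (the state counts here are made up, chosen only to make the point): a vague description compatible with many equally likely arrangements has high entropy, a more constrained one has less, and a completely specified state has none.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum_i p_i * log2(p_i).
    Gibbs entropy is the same sum with natural log, times Boltzmann's constant."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

# "Gas": our description leaves 1024 microstates equally likely.
gas = [1 / 1024] * 1024
# "Crystal": only 16 arrangements are compatible with the description.
crystal = [1 / 16] * 16
# A *specific* arrangement: we know exactly which state it is.
specific = [1.0]

print(shannon_entropy(gas))       # -> 10.0 bits
print(shannon_entropy(crystal))   # -> 4.0 bits
print(shannon_entropy(specific))  # -> 0.0 bits: full knowledge, zero entropy
```

The same counting works whether you read the probabilities as “arrangements of atoms” (Gibbs) or “possible messages” (Shannon), which is why Bekenstein could borrow the information-theoretic formula in the first place.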

Wüthrich’s worry here (again, as far as Pigliucci describes) is that Shannon entropy is a very different concept from Gibbs entropy. Shannon entropy measures information, while Gibbs entropy is something “physical”. So by using one to predict the other, are predictions about black hole entropy just confused?

It may well be he has a deeper argument for this, one that wasn’t covered in the summary. But if this is accurate, Wüthrich is missing something fundamental. Shannon entropy and Gibbs entropy aren’t two different concepts. Rather, they’re both ways of describing a core idea: entropy is a measure of ignorance.

A gas has more entropy than a crystal because it can be arranged in a larger number of different ways. But let’s not talk about a gas. Let’s talk about a specific arrangement of atoms: one is flying up, one to the left, one to the right, and so on. Space them apart, but be very specific about how they are arranged. This arrangement could well be a gas, but now it’s a specific gas. And because we’re this specific, there are now many fewer states the gas can be in, so this (specific) gas has less entropy!

Now of course, this is a very silly way to describe a gas. In general, we don’t know what every single atom of a gas is doing; that’s why we call it a gas in the first place. But it’s that lack of knowledge that we call entropy. Entropy isn’t just something out there in the world, it’s a feature of our descriptions…but one that, nonetheless, has important physical consequences. The Second Law still holds: the world goes from lower entropy to higher entropy. And while that may seem strange, it’s actually quite logical: the states we describe in vaguer terms should become more common than the ones we describe in specific terms, because there are many more of them!

Entropy isn’t the only thing like this. In the past, I’ve bemoaned the difficulty of describing the concept of gauge symmetry. Gauge symmetry is in some ways just part of our descriptions: we prefer to describe fundamental forces in a particular way, and that description has redundant parameters. We have to make those redundant parameters “go away” somehow, and that leads to non-existent particles called “ghosts”. However, gauge symmetry also has physical consequences: it was how people first knew that there had to be a Higgs boson, long before it was discovered.

And while it might seem weird to think that a redundancy could imply something as physical as the Higgs, the success of the concept of entropy should make this much less surprising. Much of what we do in physics is reasoning about different descriptions, different ways of dividing up the world, and then figuring out the consequences of those descriptions. Entropy is ignorance…and if our ignorance obeys laws, if it’s describable mathematically, then it’s as physical as anything else.

The “Lies to Children” Model of Science Communication, and The “Amplitudes Are Weird” Model of Amplitudes

Let me tell you a secret.

Scattering amplitudes in N=4 super Yang-Mills don’t actually make sense.

Scattering amplitudes calculate the probability that particles “scatter”: coming in from far away, interacting in some fashion, and producing new particles that travel far away in turn. N=4 super Yang-Mills is my favorite theory to work with: a highly symmetric version of the theory that describes the strong nuclear force. In particular, N=4 super Yang-Mills has conformal symmetry: if you re-scale everything larger or smaller, you should end up with the same predictions.

You might already see the contradiction here: scattering amplitudes talk about particles coming in from very far away…but due to conformal symmetry, “far away” doesn’t mean anything, since we can always re-scale it until it’s not far away anymore!

So when I say that I study scattering amplitudes in N=4 super Yang-Mills, am I lying?

Well…yes. But it’s a useful type of lie.

There’s a concept in science writing called “lies to children”, first popularized in a fantasy novel.

[Image: the cover of The Science of Discworld]

This one.

When you explain science to the public, it’s almost always impossible to explain everything accurately. So much background is needed to really understand most of modern science that conveying even a fraction of it would bore the average audience to tears. Instead, you need to simplify, to skip steps, and even (to be honest) to lie.

The important thing to realize here is that “lies to children” aren’t meant to mislead. Rather, they’re chosen in such a way that they give roughly the right impression, even as they leave important details out. When they told you in school that energy is always conserved, that was a lie: energy is a consequence of a symmetry in time, and when that symmetry is broken energy doesn’t have to be conserved. But “energy is conserved” is a useful enough rule, one that lets you understand most of everyday life.

In this case, the “lie” that we’re calculating scattering amplitudes is fairly close to the truth. We’re using the same methods that people use to calculate scattering amplitudes in theories where they do make sense, like QCD. For a while, people thought these scattering amplitudes would have to be zero, since anything else “wouldn’t make sense”…but in practice, we found they were remarkably similar to scattering amplitudes in other theories. Now, we have more rigorous definitions for what we’re calculating that avoid this problem, involving objects called polygonal Wilson loops.

This illustrates another principle, one that hasn’t (yet) been popularized by a fantasy novel. I’d like to call it the “amplitudes are weird” principle. Time and again we amplitudes-folks will do a calculation that doesn’t really make sense, find unexpected structure, and go back to figure out what that structure actually means. It’s been one of the defining traits of the field, and we’ve got a pretty good track record with it.

A couple of weeks back, Lance Dixon gave an interview for the SLAC website, talking about his work on quantum gravity. This was immediately jumped on by Peter Woit and Lubos Motl as ammo for the long-simmering string wars. To one extent or another, both tried to read scientific arguments into the piece. This is in general a mistake: it is in the nature of a popularization piece to contain some volume of lies-to-children, and reading a piece aimed at a lower audience can be just as confusing as reading one aimed at a higher audience.

In the remainder of this post, I’ll try to explain what Lance was talking about in a slightly higher-level way. There will still be lies-to-children involved, this is a popularization blog after all. But I should be able to clear up a few misunderstandings. Lubos probably still won’t agree with the resulting argument, but it isn’t the self-evidently wrong one he seems to think it is.

Lance Dixon has done a lot of work on quantum gravity. Those of you who’ve read my old posts might remember that quantum gravity is not so difficult in principle: general relativity naturally leads you to particles called gravitons, which can be treated just like other particles. The catch is that the theory that you get by doing this fails to be predictive: one reason why is that you get an infinite number of erroneous infinite results, which have to be papered over with an infinite number of arbitrary constants.

Working with these non-predictive theories, however, can still yield interesting results. In the article, Lance mentions the work of Bern, Carrasco, and Johansson. BCJ (as they are abbreviated) have found that calculating a gravity amplitude often just amounts to calculating a (much easier to find) Yang-Mills amplitude, and then squaring the right parts. This was originally found in the context of string theory by another three-letter group, Kawai, Lewellen, and Tye (or KLT). In string theory, it’s particularly easy to see how this works, as it’s a basic feature of how string theory represents gravity. However, the string theory relations don’t tell the whole story: in particular, they only show that this squaring procedure makes sense on a classical level. Once quantum corrections come in, there’s no known reason why this squaring trick should continue to work in non-string theories, and yet so far it has. It would be great if we had a good argument why this trick should continue to work, a proof based on string theory or otherwise: for one, it would allow us to be much more confident that our hard work trying to apply this trick will pay off! But at the moment, this falls solidly under the “amplitudes are weird” principle.
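At its simplest, the squaring trick looks like this: the four-point tree-level KLT relation (in one common convention; normalization and sign factors vary between references) writes the gravity amplitude as a product of two Yang-Mills amplitudes with permuted arguments,

```latex
M_4^{\text{tree}}(1,2,3,4) \;=\; -\,i\, s_{12}\, A_4^{\text{tree}}(1,2,3,4)\, \tilde{A}_4^{\text{tree}}(1,2,4,3),
\qquad s_{12} = (p_1 + p_2)^2
```

The BCJ form of the double copy generalizes this loop by loop: arrange the Yang-Mills amplitude so that its kinematic numerators obey the same algebraic relations as its color factors, then replace color with a second copy of kinematics. It’s that replacement step whose validity at the quantum level remains, strictly speaking, a conjecture.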

Using this trick, BCJ and collaborators (frequently including Lance Dixon) have been calculating amplitudes in N=8 supergravity, a highly symmetric version of those naive, non-predictive gravity theories. For this particular theory, the theory you “square” for the above trick is N=4 super Yang-Mills. N=4 super Yang-Mills is special for a number of reasons, but one is that the sorts of infinite results that lose you predictive power in most other quantum field theories never come up. Remarkably, the same appears to be true of N=8 supergravity. We’re still not sure; the relevant calculation is still a bit beyond what we’re capable of. But in example after example, N=8 supergravity seems to be behaving similarly to N=4 super Yang-Mills, and not like people would have predicted from its gravitational nature. Once again, amplitudes are weird, in a way that string theory helped us discover but by no means conclusively predicted.

If N=8 supergravity doesn’t lose predictive power in this way, does that mean it could describe our world?

In a word, no. I’m not claiming that, and Lance isn’t claiming that. N=8 supergravity simply doesn’t have the right sorts of freedom to give you something like the real world, no matter how you twist it. You need a broader toolset (string theory generally) to get something realistic. The reason why we’re interested in N=8 supergravity is not because it’s a candidate for a real-world theory of quantum gravity. Rather, it’s because it tells us something about where the sorts of dangerous infinities that appear in quantum gravity theories really come from.

That’s what’s going on in the more recent paper that Lance mentioned. There, they’re not working with a supersymmetric theory, but with the naive theory you’d get from just trying to do quantum gravity based on Einstein’s equations. What they found was that the infinity you get is in a certain sense arbitrary. You can’t get rid of it, but you can shift it around (infinity times some adjustable constant 😉 ) by changing the theory in ways that aren’t physically meaningful. What this suggests is that the infinite results naive gravity theories give you are arbitrary in a way that hadn’t previously been appreciated.

The inevitable question, though, is why would anyone muck around with this sort of thing when they could just use string theory? String theory never has any of these extra infinities, that’s one of its most important selling points. If we already have a perfectly good theory of quantum gravity, why mess with wrong ones?

Here, Lance’s answer dips into lies-to-children territory. In particular, Lance brings up the landscape problem: the fact that there are 10^500 configurations of string theory that might loosely resemble our world, and no clear way to sift through them to make predictions about the one we actually live in.

This is a real problem, but I wouldn’t think of it as the primary motivation here. Rather, it’s a story people have already heard, one that gestures at the broader issue: that string theory feels excessive.

princess_diana_wedding_dress

Why does this have a Wikipedia article?

Think of string theory like an enormous piece of fabric, and quantum gravity like a dress. You can definitely wrap that fabric around, pin it in the right places, and get a dress. You can in fact get any number of dresses, elaborate trains and frilly togas and all sorts of things. You have to do something with the extra material, though: find some tricky but not impossible stitching that keeps it out of the way. And you have a fair number of choices of how to do this.

From this perspective, naive quantum gravity theories are things that don’t qualify as dresses at all, scarves and socks and so forth. You can try stretching them, but it’s going to be pretty obvious you’re not really wearing a dress.

What we amplitudes-folks are looking for is more like a pencil skirt. We’re trying to figure out the minimal theory that covers the divergences, the minimal dress that preserves modesty. It would be a dress that fits the form underneath it, so we need to understand that form: the infinities that quantum gravity “wants” to give rise to, and what it takes to cancel them out. A pencil skirt is still inconvenient (it’s hard to sit down in one, for example), something that can be solved by adding extra material that allows it to bend more. Similarly, fixing these infinities is unlikely to be the full story; there are things called non-perturbative effects that probably won’t be cured. But finding the minimal pencil skirt is still going to tell us something that just pinning a vast stretch of fabric wouldn’t.

This is where “amplitudes are weird” comes in in full force. We’ve observed, repeatedly, that amplitudes in gravity theories have unexpected properties, traits that still aren’t straightforwardly explicable from the perspective of string theory. In our line of work, that’s usually a sign that we’re on the right track. If you’re a fan of the amplituhedron, the project here is along very similar lines: both are taking the results of plodding, not especially deep loop-by-loop calculations, observing novel simplifications, and asking the inevitable question: what does this mean?

That far-term perspective, looking off into the distance at possible insights about space and time, isn’t my style. (It isn’t usually Lance’s either.) But for the times that you want to tell that kind of story…well, this isn’t that outlandish of a story to tell. And unless your primary concern is whether a piece gives succor to the Woits of the world, it shouldn’t be an objectionable one.

Using Effective Language

Physicists like to use silly names for things, but sometimes it’s best to just use an everyday word. It can trigger useful intuitions, and it makes remembering concepts easier. What gets confusing, though, is when the everyday word you use has a meaning that’s not quite the same as the colloquial one.

“Realism” is a pretty classic example, where Bell’s elegant use of the term in quantum mechanics doesn’t quite match its common usage, leading to inevitable confusion whenever it’s brought up. “Theory” is such a useful word that multiple branches of science use it…with different meanings! In both cases, the naive meaning of the word is the basis of how it gets used scientifically…just not the full story.

There are two things to be wary of here. First, those of us who communicate science must be sure to point out when a word we use doesn’t match its everyday meaning, to guide readers’ intuitions away from first impressions to understand how the term is used in our field. Second, as a reader, you need to be on the look-out for hidden technical terms, especially when you’re reading technical work.

I remember making a particularly silly mistake along these lines. It was early on in grad school, back when I knew almost nothing about quantum field theory. One of our classes was a seminar, structured so that each student would give a talk on some topic that could be understood by the whole group. Unfortunately, some grad students with deeper backgrounds in theoretical physics hadn’t quite gotten the memo.

It was a particular phrase that set me off: “This theory isn’t an effective theory”.

My immediate response was to raise my hand. “What’s wrong with it? What about this theory makes it ineffective?”

The presenter boggled for a moment before responding. “Well, it’s complete up to high energies…it has no ultraviolet divergences…”

“Then shouldn’t that make it even more effective?”

After a bit more of this back-and-forth, we finally cleared things up. As it turns out, “effective field theory” is a technical term! An “effective field theory” is only “effectively” true, describing physics at low energies but not at high energies. As you can see, the word “effective” here is definitely pulling its weight, helping to make the concept understandable…but if you don’t recognize it as a technical term and interpret it literally, you’re going to leave everyone confused!
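For those who like equations, an effective field theory is usually organized as an expansion in some high energy scale \Lambda. Sketching the standard form (the operators \mathcal{O}_i and coefficients c_i here are schematic placeholders):

```latex
% Schematic effective Lagrangian, valid for energies E << Lambda:
\mathcal{L}_{\text{eff}} \;=\; \mathcal{L}_{\text{renormalizable}}
\;+\; \sum_{d > 4} \sum_i \frac{c_i^{(d)}}{\Lambda^{d-4}}\, \mathcal{O}_i^{(d)}
```

The extra terms are suppressed by powers of \Lambda, so their effects fade away at low energies: the theory is “effectively” true below \Lambda, and breaks down above it.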

Over time, I’ve gotten better at identifying when something is a technical term. It really is a skill you can learn: there are different tones people use when speaking, different cadences when writing, a sense of uneasiness that can clue you in to a word being used in something other than its literal sense. Without that skill, you end up worried about mathematicians’ motives for their evil schemes. With it, you’re one step closer to what may be the most important skill in science: the ability to recognize something you don’t know yet.

What’s so Spooky about Action at a Distance?

With Halloween coming up, it’s time once again to talk about the spooky side of physics. And what could be spookier than action at a distance?

Pictured here.

Ok, maybe not an obvious contender for spookiest concept of the year. But physicists have struggled with action at a distance for centuries, and there are deep reasons why.

It all dates back to Newton. In Newton’s time, all of nature was expected to be mechanical. One object pushes another, which pushes another in turn, eventually explaining everything that ever happens. And while people knew by that point that the planets were not circling around on literal crystal spheres, it was still hoped that their motion could be explained mechanically. The favored explanations of the time were vortices, whirlpools of celestial fluid that drove the planets around the Sun.

Newton changed all that. Not only did he set down a law of gravitation that didn’t use a fluid, he showed that no fluid could possibly replicate the planets’ motions. And while he remained agnostic about gravity’s cause, plenty of his contemporaries accused him of advocating “action at a distance”. People like Leibniz thought that a gravitational force without a mechanical cause would be superstitious nonsense, a betrayal of science’s understanding of the world in terms of matter.

For a while, Newton’s ideas won out. More and more, physicists became comfortable with explanations involving a force stretching out across empty space, using them for electricity and magnetism as these became more thoroughly understood.

Eventually, though, the tide began to shift back. Electricity and magnetism were explained, not in terms of action at a distance, but in terms of a field that filled the intervening space. In time, gravity was too.

The difference may sound purely semantic, but it means more than you might think. These fields were restricted in an important way: when the field changed, it changed at one point, and the changes spread at a speed limited by the speed of light. A theory composed of such fields has a property called locality, the property that all interactions are fundamentally local, that is, they happen at one specific place and time.
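The mathematical expression of this is that fields obey wave equations, whose disturbances propagate no faster than light. For a free field, schematically:

```latex
% A relativistic wave equation: disturbances travel at speed c, no faster.
\left( \frac{1}{c^2} \frac{\partial^2}{\partial t^2} - \nabla^2 \right) \phi(t, \vec{x}) = 0
```

A change in \phi at one point can only affect points inside its light cone; that’s locality in equation form.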

Nowadays, we think of locality as one of the most fundamental principles in physics, on par with symmetry in space and time. And the reason why is that true action at a distance is quite a spooky concept.

Much of horror boils down to fear of the unknown. From what might lurk in the dark to the depths of the ocean, we fear that which we cannot know. And true action at a distance would mean that our knowledge might forever be incomplete. As long as everything is mediated by some field that changes at the speed of light, we can limit our search for causes. We can know that any change must be caused by something only a limited distance away, something we can potentially observe and understand. By contrast, true action at a distance would mean that forces from potentially anywhere in the universe could alter events here on Earth. We might never know the ultimate causes of what we observe; they might be stuck forever out of reach.

Some of you might be wondering, what about quantum mechanics? The phrase “spooky action at a distance” was famous because Einstein used it as an accusation against quantum entanglement, after all.

The key thing about quantum mechanics is that, as J. S. Bell showed, you can’t have locality…unless you throw out another property, called realism. Realism is the idea that quantum states have definite values for measurements before those measurements are taken. And while that sounds important, most people find getting rid of it much less scary than getting rid of locality. In a non-realistic world, at least we can still predict probabilities, even if we can’t observe certainties. In a non-local world, there might be aspects of physics that we just can’t learn. And that’s spooky.
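For the curious, Bell’s argument can be made quantitative. In the CHSH version, one combines correlations E(a,b) between measurements along directions a, b on the two entangled particles:

```latex
% CHSH combination of measurement correlations:
S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
% Local realism implies |S| <= 2, while quantum mechanics allows
% values up to 2*sqrt(2), which is what experiments observe.
```

Experiments consistently violate the local-realist bound, which is why one of locality or realism has to go.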

Hexagon Functions III: Now with More Symmetry

I’ve got a new paper up this week.

It’s a continuation of my previous work, understanding collisions involving six particles in my favorite theory, N=4 super Yang-Mills.

This time, we’re pushing up the complexity, going from three “loops” to four. In the past, I could have impressed you with the number of pages the formulas I’m calculating take up (eight hundred pages for the three-loop formula from that first Hexagon Functions paper). Now, though, I don’t have that number: putting my four-loop formula into a pdf-making program just crashes it. Instead, I’ll have to impress you with file sizes: 2.6 MB for the three-loop formula, 96 MB for the four-loop one.

Calculating such a formula sounds like a pretty big task, and it was, the first time. But things got a lot simpler after a chat I had at Amplitudes.

We calculate these things using an ansatz, a guess for what the final answer should look like. The more vague our guess, the more parameters we need to fix, and the more work we have in general. If we can guess more precisely, we can start with fewer parameters and things are a lot easier.

Often, more precise guesses come from understanding the symmetries of the problem. If we can know that the final answer must be the same after making some change, we can rule out a lot of possibilities.
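As a toy illustration (nothing like the real hexagon-function calculation, which uses far fancier basis functions), here’s how imposing a symmetry on a linear ansatz cuts down the number of free parameters. The polynomial basis and the f(x) = f(-x) symmetry below are invented purely for the example:

```python
import numpy as np

# Toy ansatz: f(x) = sum_i c_i * x**i for i = 0..5, so 6 unknown parameters.
n_params = 6

# Hypothetical symmetry: f(x) = f(-x) for all x.
# For a polynomial basis this forces every odd coefficient to vanish.
# Encode each such condition as a row of a linear system M @ c = 0.
constraints = []
for i in range(n_params):
    if i % 2 == 1:  # odd powers flip sign under x -> -x
        row = np.zeros(n_params)
        row[i] = 1.0
        constraints.append(row)
M = np.array(constraints)

# The surviving free parameters live in the null space of M:
# count them as (number of parameters) - (rank of the constraint matrix).
_, singular_values, _ = np.linalg.svd(M)
rank = int(np.sum(singular_values > 1e-10))
free_params = n_params - rank
print(free_params)  # 6 parameters minus 3 constraints leaves 3
```

In the real calculation the “constraints” come from known physics (limits, symmetries, supersymmetric identities), but the bookkeeping is the same spirit: every symmetry you can justify shrinks the linear system you have to solve.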

Sometimes, these symmetries are known features of the answer, things that someone proved had to be correct. Other times, though, they’re just observations, things that have been true in the past and might be true again.

We started out using an observation from three loops. That got us pretty far, but we still had a lot of work to do: 808 parameters, to be fixed by other means. Fixing them took months of work, and throughout we hoped that there was some deeper reason behind the symmetries we observed.

Finally, at Amplitudes, I ran into fellow amplitudeologist Simon Caron-Huot and asked him if he knew the source of our observed symmetry. In just a few days he was able to link it to supersymmetry, giving us justification for our jury-rigged trick. Better yet, we figured out that his explanation went further than any of us expected. In the end, rather than 808 parameters we only really needed to consider 34.

Thirty-four options to consider. Thirty-four possible contributions to a ~100 MB file. That might not sound like a big deal, but compared to eight hundred and eight it’s a huge deal. More symmetry means easier calculations, meaning we can go further. At this point going to the next step in complexity, to five loops rather than four, might be well within reach.

Journalists Are Terrible at Quasiparticles

TerribleQuasiparticleHeadline

No, they haven’t, and no, that’s not what they found, and no, that doesn’t make sense.

Quantum field theory is how we understand particle physics. Each fundamental particle comes from a quantum field, a law of nature in its own right extending across space and time. That’s why it’s so momentous when we detect a fundamental particle, like the Higgs, for the first time, why it’s not just like discovering a new species of plant.

That’s not the only thing quantum field theory is used for, though. Quantum field theory is also enormously important in condensed matter and solid state physics, the study of properties of materials.

When studying materials, you generally don’t want to start with fundamental particles. Instead, you usually want to think about collective properties, ways the whole material can move and change. If you want to understand the quantum properties of these changes, you end up describing them the same way particle physicists talk about fundamental fields: you use quantum field theory.

In particle physics, particles come from vibrations in fields. In condensed matter, your fields are general properties of the material, but they can also vibrate, and these vibrations give rise to quasiparticles.

Probably the simplest examples of quasiparticles are the “holes” in semiconductors. Semiconductors are materials used to make transistors. They can be “doped” with extra slots for electrons. Electrons in the semiconductor will move around from slot to slot. When an electron moves, though, you can just as easily think about it as a “hole”, an empty slot, that “moved” backwards. As it turns out, thinking about electrons and holes independently makes understanding semiconductors a lot easier, and the same applies to other types of quasiparticles in other materials.

Unfortunately, the article I linked above is pretty impressively terrible, and communicates precisely none of that.

The problem starts in the headline:

Scientists have finally discovered massless particles, and they could revolutionise electronics

Scientists have finally discovered massless particles, eh? So we haven’t seen any massless particles before? You can’t think of even one?

After 85 years of searching, researchers have confirmed the existence of a massless particle called the Weyl fermion for the first time ever. With the unique ability to behave as both matter and anti-matter inside a crystal, this strange particle can create electrons that have no mass.

Ah, so it’s a massless fermion, I see. Well indeed, there are no known fundamental massless fermions, not since we discovered neutrinos have mass anyway. The statement that these things “create electrons” of any sort is utter nonsense, however, let alone that they create electrons that themselves have no mass.

Electrons are the backbone of today’s electronics, and while they carry charge pretty well, they also have the tendency to bounce into each other and scatter, losing energy and producing heat. But back in 1929, a German physicist called Hermann Weyl theorised that a massless fermion must exist, that could carry charge far more efficiently than regular electrons.

Ok, no. Just no.

The problem here is that this particular journalist doesn’t understand the difference between pure theory and phenomenology. Weyl didn’t theorize that a massless fermion “must exist”, nor did he say anything about their ability to carry charge. Weyl described, mathematically, how a massless fermion could behave. Weyl fermions aren’t some proposed new fundamental particle, like the Higgs boson: they’re a general type of particle. For a while, people thought that neutrinos were Weyl fermions, before it was discovered that they had mass. What we’re seeing here isn’t some ultimate experimental vindication of Weyl, it’s just an old mathematical structure that’s been duplicated in a new material.
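Concretely, what Weyl wrote down is usually presented as a Hamiltonian. Sketching it in modern notation:

```latex
% Weyl Hamiltonian for a massless spin-1/2 particle of fixed chirality:
H = \pm\, v\, \vec{\sigma} \cdot \vec{p},
\qquad E = \pm\, v\, |\vec{p}|
```

Here \vec{\sigma} are the Pauli matrices, and the energy is linear in momentum with no mass term; that linear dispersion is what “massless” means. In the quasiparticle context, v is an effective velocity set by the material, not the speed of light.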

What’s particularly cool about the discovery is that the researchers found the Weyl fermion in a synthetic crystal in the lab, unlike most other particle discoveries, such as the famous Higgs boson, which are only observed in the aftermath of particle collisions. This means that the research is easily reproducible, and scientists will be able to immediately begin figuring out how to use the Weyl fermion in electronics.

Arrgh!

Fundamental particles from particle physics, like the Higgs boson, and quasiparticles, like this particular Weyl fermion, are completely different things! Comparing them like this, as if this is some new efficient trick that could have been used to discover the Higgs, just needlessly confuses people.

Weyl fermions are what’s known as quasiparticles, which means they can only exist in a solid such as a crystal, and not as standalone particles. But further research will help scientists work out just how useful they could be. “The physics of the Weyl fermion are so strange, there could be many things that arise from this particle that we’re just not capable of imagining now,” said Hasan.

In the very last paragraph, the author finally mentions quasiparticles. There’s no mention of the fact that they’re more like waves in the material than like fundamental particles, though. This description makes it sound like they’re just particles that happen to chill inside crystals, like they’re agoraphobic or something.

What the scientists involved here actually discovered is probably quite interesting. They’ve discovered a new sort of ripple in the material they studied. The ripple can carry charge, and because it can behave like a massless particle it can carry charge much faster than electrons can. (To get a basic idea as to how this works, think about waves in the ocean. You can have a wave that goes much faster than the ocean’s current. As the wave travels, no actual water molecules travel from one side to the other. Instead, it is the motion that travels, the energy pushing the wave up and down being transferred along.)

There’s no reason to compare this to particle physics, to make it sound like another Higgs boson. This sort of thing dilutes the excitement of actual particle discoveries, perpetuating the misconception of particles as just more species to find and catalog. Furthermore, it’s just completely unnecessary: condensed matter is a very exciting field, one that the majority of physicists work on. It doesn’t need to ride on the coat-tails of particle physics rhetoric in order to capture people’s attention. I’ve seen journalists do this kind of thing before, comparing new quasiparticles and composite particles with fundamental particles like the Higgs, and every time I cringe. Don’t you have any respect for the subject you’re writing about?