Tag Archives: theoretical physics

Amplitudes Megapost

If you met me on a plane and asked me what I do, I’d probably lead with something like this:

“I come up with mathematical tricks to make particle physics calculations easier.”

People like me, who research these tricks, are sometimes known as Amplitudeologists. We study scattering amplitudes, mathematical formulas used to calculate the probabilities of different things happening when sub-atomic particles collide.

Why do we want to make calculations easier? Because particle physics is hard!

More specifically, calculations in particle physics can be hard for three broad reasons: lots of loops, lots of legs, or more complicated theories.

Loops measure precision. They’re called loops because more complicated Feynman diagrams contain “loops” of particles, while the simplest, with no loops at all, are called “trees”. The more loops you include, the more precise your calculation becomes, but it also becomes more complicated.

Legs are the number of particles involved. If two particles collide and bounce off each other, then there are a total of four legs: two from the incoming particles, two from the outgoing ones. Calculations with more legs are almost always more complicated than calculations with fewer.

Most of the time, our end-goal is to calculate things that are relevant to the real world. Usually, this means QCD, or Quantum Chromodynamics, the theory of quarks and gluons. QCD is very complicated, though. Often, we work to hone our techniques on simpler theories first. N=4 super Yang-Mills has been called the simplest quantum field theory, particularly the further simplified, planar version. If you want a basic overview of it, check out the Handy Handbooks tab at the top of my blog. Often, progress in amplitudeology involves adapting tricks from planar N=4 super Yang-Mills to more complicated, and more realistic, theories.

I should point out that our goal in amplitudeology isn’t always to do more complicated calculations. Sometimes, it’s about doing a calculation we already know how to do, but in a way that’s more insightful. This lets us learn more about the theories we’re studying, as well as gaining insights about larger problems like the nature of space and time.

So what sorts of tricks do we use to do all this? Well, there are a few broad categories…

Generalized Unitarity

The prizewinning idea that started it all, generalized unitarity came out of the collaboration of Zvi Bern, Lance Dixon, and David Kosower, starting in the 90’s. The core of the idea is difficult to describe in a quick sentence, but it essentially boils down to noticing that, rather than thinking about every single multi-loop Feynman diagram independently, you can think of loop diagrams as what you get when you sew trees together.

This is a very powerful idea. These days, pretty much everyone who studies amplitudeology learns it, and it’s proven pivotal for a wide array of applications.

In planar N=4 super Yang-Mills it’s one of the techniques that can go to exceptionally high loop order, to six or seven loops. If you drop the “planar” condition, it’s still quite powerful. If you do things right, as Zvi Bern, John Joseph Carrasco, and Henrik Johansson found, you can get results in N=8 supergravity “for free”. This raises what has ended up being one of the big questions of our sub-field: does N=8 supergravity behave like most attempts at theories of quantum gravity, with pesky infinite results that we don’t know how to deal with, or does it behave like N=4 super Yang-Mills, which has no pesky infinities at all? Answering this question requires a dizzying seven-loop calculation, the mystique of which got me into the field in the first place. Unfortunately, despite diligent efforts, Bern and collaborators have been stuck at four loops for quite some time. In the meantime they’ve been extending things in all the other amplitudes-directions: more legs, more complicated theories (in this case, supergravity with less supersymmetry), and more insight. Recently, it looks like they may have found a way around this hurdle, so the mystery at seven loops may not be so far away after all.

Generalized Unitarity is also one of the most powerful amplitudes tricks for real-world theories, in particular QCD. In this case, its main virtue is in legs, not loops, going up to seven particles at one loop for practical, LHC-relevant calculations. There’s also a major effort to push this to two loops, with some success.

BCFW Recursion

If generalized unitarity was the trick that got experimentalists to sit up and take notice, BCFW is the one that got the attention of the pure theorists. In the mid-2000s Ruth Britto, Freddy Cachazo, and Bo Feng (later joined by theoretical physics superstar Ed Witten) figured out a way to build up tree amplitudes to any number of legs recursively for any theory, starting with three particles and working their way up. Their method was both fairly efficient and extremely insightful, and it’s another trick that’s made its way into every amplitudeologist’s arsenal. Further developments led to a recursive procedure that could work up to any number of loops in planar N=4 super Yang-Mills, which while not especially efficient did lead to…

The Positive Grassmannian, and the Amplituhedron

The work of Nima Arkani-Hamed, Jacob Bourjaily, Freddy Cachazo, Alexander Goncharov, Alexander Postnikov, and Jaroslav Trnka on the Positive Grassmannian (and more recently the Amplituhedron) has pushed the “more insight” direction impressively far. The Amplituhedron in particular captured the public’s imagination, as well as that of mathematicians, by packaging the all-loop amplitude into a particularly clean, mathematically meaningful form. Now they’re working on pushing this deep understanding to non-planar N=4 super Yang-Mills.

Integration Tricks

Generalized unitarity and the Amplituhedron have one thing in common: neither gives the full result. Calculating scattering amplitudes traditionally is a two-step process: first, add up all possible Feynman diagrams, then add up (integrate) all possible momenta. Generalized unitarity and the Amplituhedron let you skip the diagrams, but in both cases you still need to integrate. There’s a whole lore of integration techniques, from breaking things up into a basis of known “master” integrals (an example paper on this theme here), to attacking the integrations numerically via a process known as sector decomposition (one of the better programs that does this here). Higher-loop integrations are typically quite tough, even with these techniques.

Polylogarithms

These integrals will usually result in a class of mathematical functions called polylogarithms, a particular family of transcendental functions. Understanding these functions has led to an enormous amount of progress (and I’m not just saying that because it’s what I work on 😉 ).

It all started when Alexander Goncharov, Mark Spradlin, Cristian Vergu, and Anastasia Volovich figured out how to write a laboriously calculated seventeen-page two-loop six-particle amplitude in just two lines. To do this, they used mathematical properties of polylogarithms that were previously largely unknown to physicists. Their success inspired Lance Dixon, James Drummond, and Johannes Henn to use these methods to guess the correct answer at three loops, work that was completed with my involvement.

Since then, both groups have made a lot of progress. In general, Spradlin, Volovich, and collaborators have been pushing things farther in terms of legs and insight, while Dixon and collaborators have made progress at higher loops. So far we’ve gotten to four loops (here, plus unpublished work), while the others have proposals for any number of particles at two loops and substantial progress for seven particles at three loops.

All of this is still for planar N=4 super Yang-Mills. Using these tricks for more complicated theories is trickier. However, while you usually can’t just guess the answer like you can for N=4, a good understanding of the properties of polylogarithms can still take you quite far.

Integrability

Why did the polylogarithm folks start with six particles? Wouldn’t four or five have been easier?

As it turns out, four and five particle amplitudes are indeed easier, so much so that for planar N=4 super Yang-Mills they’re known up to any loop order. And while a number of elements went into that result, one that really filled in the details was integrability.

Integrability is tough to describe in a short sentence, but essentially it involves describing highly symmetric systems all in one go, without having to use the step-by-step approximations of perturbation theory. For our purposes, this means bypassing the loop-by-loop perspective altogether.

Integrability is a substantial field in its own right, probably bigger than amplitudeology. There’s a lot going on, and only some of it touches on amplitudes-related topics. When it does, though, it’s quite impressive, with the flagship example being the work of Benjamin Basso, Amit Sever, and Pedro Vieira. They are able to compute amplitudes in planar N=4 super Yang-Mills at any and all loop orders, making an approximation based on the particle momenta instead. These days, they’re working on making their method more complete and robust, while building up understanding of other structures that might eventually allow them to say something about the non-planar case.

CHY and the Ambitwistor String

Ed Witten’s involvement in BCFW didn’t come completely out of left field. He had shown interest in N=4 super Yang-Mills earlier, with the invention of the twistor string. The twistor string calculates tree amplitudes in N=4 super Yang-Mills as the result of a string-theory-like framework. The advantage of such a framework is that, while normal quantum field theory involves large numbers of different diagrams, string theory only has one diagram “shape” for each loop.

This advantage has been thrust back into the spotlight recently via the work of Freddy Cachazo, Song He, and Ellis Ye Yuan. Their CHY formula works not just for N=4 super Yang-Mills, but for a wide (and growing) variety of other theories, allowing them to examine those theories’ properties in a particularly powerful way. Meanwhile, Lionel Mason and David Skinner have given the CHY formula a more solid theoretical grounding in the form of their ambitwistor string, which they have recently been able to generalize to a loop-level proposal.

Amplitudeology is a large and growing field, and there are definitely important people I haven’t mentioned. Some, like Henriette Elvang and Yu-tin Huang, have been involved with many different things over the years, so there wasn’t a clear place to put them. Others are part of the European community, where there’s a lot of work on string theory amplitudes and on pushing the boundaries of polylogarithms. Still others were left out simply because I ran out of room. I’ve only covered a small part of the field here, but I hope that small part gives you an idea of the richness of the whole.

Pentaquarks!

Earlier this week, the LHCb experiment at the Large Hadron Collider announced that, after painstakingly analyzing the data from earlier runs, they have decisive evidence of a previously unobserved particle: the pentaquark.

What’s a pentaquark? In simple terms, it’s five quarks stuck together. Stick two up quarks and a down quark together, and you get a proton. Stick a quark and an antiquark together, and you get a meson of some sort. Stick five together, and you get a pentaquark.

(In this case, if you’re curious: two up quarks, one down quark, one charm quark and one anti-charm quark.)

Artist’s Conception

Crucially, this means pentaquarks are not fundamental particles. Fundamental particles aren’t like species, but composite particles like pentaquarks are: they’re examples of a dizzying variety of combinations of an already-known set of basic building blocks.

So why is this discovery exciting? If we already knew that quarks existed, and we already knew the forces between them, shouldn’t we already know all about pentaquarks?

Well, not really. People definitely expected pentaquarks to exist; they were predicted fifty years ago. But their exact properties, or how likely they were to show up? Largely unknown.

Quantum field theory is hard, and this is especially true of QCD, the theory of quarks and gluons. We know the basic rules, but calculating their large-scale consequences, which composite particles we’re going to detect and which we won’t, is still largely out of our reach. We have to supplement first-principles calculations with experimental data, to take bits and pieces and approximations until we get something reasonably sensible.

This is an important point in general, not just for pentaquarks. Often, people get very excited about the idea of a “theory of everything”. At best, such a theory would tell us the fundamental rules that govern the universe. The thing is, we already know many of these rules, even if we don’t yet know all of them. What we can’t do, in general, is predict their full consequences. Most of physics, most of science in general, is about investigating these consequences, coming up with models for things we can’t dream of calculating from first principles, and it really does start as early as “what composite particles can you make out of quarks?”

Pentaquarks have been a long time coming, long enough that models were occasionally proposed to explain why they didn’t exist. There are still other exotic states of quarks and gluons out there, like glueballs, that have been predicted but not yet observed. It’s going to take time, effort, and data before we fully understand composite particles, even though we know the rules of QCD.

Got Branes on the Brain?

You’ve probably heard it said that string theory contains two types of strings: open, and closed. Closed strings are closed loops, like rubber bands. They give rise to gravity, and in superstring theories to supergravity. Open strings have loose ends, like a rubber band cut in half. They give us Yang-Mills forces, and super Yang-Mills for superstrings.

String theory has more than just strings, though. It also has branes.

Branes, short for membranes, are objects like strings but in other dimensions. The simplest to imagine is a two-dimensional membrane, like a sheet of paper. A three-dimensional membrane would fill all of 3D space, like an infinite cube of jello. Higher dimensional membranes also exist, up to string theory’s limit of nine spatial dimensions.

But you can keep imagining them as sheets of paper if you’d like.

So where did these branes come from? Why doesn’t string theory just have strings?

You might think we’re just trying to be as general as possible, including every possible dimension of object. Strangely enough, this isn’t actually what’s going on! As it turns out, branes can be in lower dimensions too: there are zero-dimensional branes that behave like particles, and one-dimensional branes that are similar to, but crucially not the same thing as, the strings we started out with! If we were just trying to get an object for every dimension we wouldn’t need one-dimensional branes, we’d already have strings!

(By the way, there are also “-1” dimensional branes, but that’s a somewhat more advanced topic.)

Instead, branes come from some strange properties of open strings.

I told you that the ends of open strings are “loose”, but that’s just loose language on my part. Mathematically, there are two options: the ends can be free to wander, or they can be fixed in place. If they’re free, they can move wherever they like with no resistance. If they’re fixed, any attempt to move them will just set them vibrating.
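For the mathematically inclined, these two options are the standard Neumann and Dirichlet boundary conditions on the string’s position \(X^\mu(\tau, \sigma)\), imposed at each endpoint of the string (this is textbook notation, not anything specific to this post):

```latex
\text{free (Neumann):}\quad \left.\partial_\sigma X^\mu\right|_{\text{endpoint}} = 0
\qquad\qquad
\text{fixed (Dirichlet):}\quad \left.X^\mu\right|_{\text{endpoint}} = c^\mu \;\;(\text{constant})
```

The Neumann condition says no momentum flows off the free end; the Dirichlet condition pins the end at a fixed position \(c^\mu\), so any tug just makes the string vibrate.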

The thing is, you choose between these two options not just once, but once per dimension. You could have the end of the string free to move in two dimensions, but fixed in a third, as if a magnet were sticking it to some sort of 2D surface…like a brane.

Brane-worlds are dangerous places to live.

In mathematics, the fixed directions of the string’s endpoints are said to obey Dirichlet boundary conditions, which is why this type of brane is called a Dirichlet brane, or D-brane. In general, D-branes are things strings can end on. That’s why you can have D1-branes that, despite their string-like shape, are different from actual strings: rather, they’re things strings can end on.

You might wonder whether we really need these things. Sure, they’re allowed mathematically, but is that really a good enough reason?

As it turns out, D-branes are not merely allowed in string theory, they are required, due to something called T-duality. I’ve talked about dualities before: they’re relationships between different theories that secretly compute the same thing. T-duality was one of the first-discovered dualities in string theory, and it involves relationships between strings wrapped around circular dimensions.

If a dimension is circular, then closed strings can either move around the circle, or wrap around it instead. As it turns out, a string moving around a small circle has the same energy as a string wrapped around a big circle, where here “small” and “big” are comparisons to the length of the string. It’s not just the energy, though: for every physical quantity, the two descriptions (big circle with strings traveling along it, small circle with strings wrapped around it) give the same answer: the two theories are dual.
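Schematically, for readers who like formulas: the mass of a closed-string state on a circle of radius \(R\) depends on \(n\), the number of units of momentum it carries around the circle, and \(w\), the number of times it winds around, with \(\alpha'\) setting the string’s length scale (I’m suppressing numerical factors and oscillator details):

```latex
M^2 \sim \left(\frac{n}{R}\right)^2 + \left(\frac{w\,R}{\alpha'}\right)^2 + (\text{oscillator terms})
```

Swapping \(n \leftrightarrow w\) while replacing \(R \to \alpha'/R\) leaves \(M^2\) unchanged: that exchange of “moving” and “wrapping” is the T-duality described above.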

If it works with closed strings, what about open strings?

Here something weird happens: if you perform the T-duality operation (switch between the small circle and the big one), then the ends of open strings switch from being free to being fixed! This means that even if we start out with no D-branes at all, our theory was equivalent to one with D-branes all along! No matter what we do, we can’t write down a theory that doesn’t have D-branes!

As it turns out, we could have seen this coming even without string theory, just by looking at (super)gravity.

Long before people saw astrophysical evidence for black holes, before they even figured out that stars could collapse, they worked out the black hole solution in general relativity. Without knowing anything about the sort of matter that could form a black hole, they could nevertheless calculate what space-time would look like around one.

In ten dimensional supergravity, you can do these same sorts of calculations. Instead of getting black holes, though, you get black branes. Rather than showing what space-time looks like around a high-mass point, they showed what it would look like around a higher dimensional, membrane-shaped object. And miraculously, they corresponded exactly to the D-branes that are supposed to be part of string theory!

So if we want string theory, or even supergravity, we’re stuck with D-branes. It’s a good thing we are, too, because D-branes are very useful. In the past, I’ve talked about how most of the fundamental forces of nature have multiple types of charge. One way for string theory to reproduce these multiple types of charge is with D-branes. If each open string is connected to two D-branes, it can behave like a gluon, carrying a pair of charges. Since each end of the string is stuck to its respective brane, the charge corresponding to each brane must be conserved, just like charges in the real world.

D-branes aren’t one of the original assumptions of string theory, but they’re a large part of what makes string theory tick. M theory, string theory’s big brother, doesn’t have strings at all: just two- and five-dimensional branes. So be grateful for branes: they make the world a much more interesting place.

Science Never Forgets

I’ll just be doing a short post this week, I’ve been busy at a workshop on Flux Tubes here at Perimeter.

If you’ve ever heard someone tell the history of string theory, you’ve probably heard that it was first proposed not as a quantum theory of gravity, but as a way to describe the strong nuclear force. Colliders of the time had discovered particles, called mesons, that seemed to have a key role in the strong nuclear force that held protons and neutrons together. These mesons had an unusual property: the faster they spun, the higher their mass, following a very simple and regular pattern known as a Regge trajectory. Researchers found that they could predict this kind of behavior if, rather than particles, these mesons were short lengths of “string”, and with this discovery they invented string theory.
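The pattern in question can be written down compactly. In the usual notation, with \(J\) the spin, \(M\) the mass, and \(\alpha(0)\) and \(\alpha'\) constants fit to the data, a Regge trajectory is a straight line in the spin versus mass-squared plane:

```latex
J = \alpha(0) + \alpha'\, M^2
```

A constant slope \(\alpha'\) shared by a whole family of mesons is exactly the kind of regularity a spinning, vibrating string naturally produces.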

As it turned out, these early researchers were wrong. Mesons are not lengths of string; rather, each is a quark paired with an antiquark. The discovery of quarks explained how the strong force acted on protons and neutrons, each made of three quarks, and it also explained why mesons acted a bit like strings: in each meson, the quark and antiquark are linked by a flux tube, a roughly cylindrical area filled with the gluons that carry the strong nuclear force. So rather than strings, mesons turned out to be more like bolas.

Leonin sold separately.

If you’ve heard this story before, you probably think it’s ancient history. We know about quarks and gluons now, and string theory has moved on to bigger and better things. You might be surprised to hear that at this week’s workshop, several presenters have been talking about modeling flux tubes between quarks in terms of string theory!

The thing is, science never forgets a good idea. String theory was superseded by quarks in describing the strong force, but it was only proposed in the first place because it matched the data fairly well. Now, with string theory-inspired techniques, people are calculating the first corrections to the string-like behavior of these flux tubes, comparing them with simulations of quarks and gluons, and finding surprisingly good agreement!

Science isn’t a linear story, where the past falls away to the shiny new theories of the future. It’s a marketplace. Some ideas are traded more widely, some less…but if a product works, even only sometimes, chances are someone out there will have a reason to buy it.

String Theorists Who Don’t Touch Strings

This week I’ve been busy, attending a workshop here at Perimeter on Superstring Perturbation Theory.

Superstrings are the supersymmetric strings that string theorists use to describe fundamental particles, while perturbation theory is the trick, common in almost every area of physics, of solving a problem by a series of increasingly precise approximations.

Based on that description, you’d think that superstring perturbation theory would be a central topic in string theory research. You wouldn’t expect it to be the sort of thing only a few people at the top of the field dabble in. You definitely wouldn’t expect one of the speakers at the workshop to mention that this might be the first conference on superstring perturbation theory he’s been to since the 1980s.

String perturbation theory is an important subject, but it’s not one many string theorists use. And the reason why is that, oddly enough, very few string theorists actually use strings.

Looking at arXiv as I’m writing this, I can see only one paper in the theoretical physics section that directly uses strings. Most of them use something else: either older concepts like black holes, quantum field theory, and supergravity, or newer ones like D-branes. If you talked to the people who wrote those papers, though, most of them would describe themselves as string theorists.

The reason for the disconnect is that string theory as a field is much more than just the study of strings. String theory is a ten-dimensional universe (or eleven with M theory), where different ways of twisting up some of the dimensions result in different apparent physics in the remaining ones. It’s got strings, but also higher-dimensional membranes (and in the eleven dimensions of M theory it only has membranes, not strings). It’s the recipe for a long list of exotic quantum field theories, and a list of possible relations between them. It’s a new way to look at geometry, to think about the intersection of the nature of space and the dynamics of what inhabits it.

If string theory were really just about strings, it likely wouldn’t have grown any bigger than its quantum gravity rivals, like Loop Quantum Gravity. String theory grew because it inspired research directions that went far afield, and far beyond its conceptual core.

That’s part of why most string theorists will be baffled if you insist that string theory needs proof, or that it’s not the right approach to quantum gravity. For most string theorists, it doesn’t matter whether we live in a stringy world, whether gravity might eventually be described by another model. For most string theorists, string theory is a tool, one that opened up fields of inquiry that don’t have much to do with predicting the output of the LHC or describing the early universe. Or, in many cases, actually using strings.

Only the Boring Kind of Parallel Universes

PARALLEL UNIVERSES AT THE LHC??

No. No. Bad journalist. See what happens when you…

Mir Faizal, one of the three-strong team of physicists behind the experiment, said: “Just as many parallel sheets of paper, which are two dimensional objects [breadth and length] can exist in a third dimension [height], parallel universes can also exist in higher dimensions.”

Bad physicist, bad! No biscuit for you!

Not nice at all!

For the technically-minded, Sabine Hossenfelder goes into thorough detail about what went wrong here. Not only do parallel universes have nothing to do with what Mir Faizal and collaborators have been studying, but the actual paper they’re hyping here is apparently riddled with holes.

BLACK holes! …no, actually, just logic holes.

But why did parallel universes even come up? If they have nothing to do with Faizal’s work, why did he mention them? Do parallel universes ever come up in real physics at all?

The answer to this last question is yes. There are real, viable ideas in physics that involve parallel universes. The universes involved, however, are usually boring ones.

The ideas are generally referred to as brane-world theories. If you’ve heard of string theory, you’ve probably heard that it proposes that the world is made of tiny strings. That’s all well and good, but it’s not the whole story. String theory has other sorts of objects in it too: higher dimensional generalizations of strings called membranes, branes for short. In fact, M theory, the theory of which every string theory is some low-energy limit, has no strings at all, just branes.

When these branes are one-dimensional, they’re strings. When they’re two-dimensional, they’re what you would normally picture as a membrane, a vibrating sheet, potentially infinite in size. When they’re three-dimensional, they fill three-dimensional space, again potentially up to infinity.

Filling three dimensional space, out to infinity…well that sure sounds a whole lot like what we’d normally call a universe.

In brane-world constructions, what we call our universe is precisely this sort of three-dimensional brane. It then lives in a higher-dimensional space, where its position in this space influences things like the strength of gravity, or the speed at which the universe expands.

Sometimes (not all the time!) these sorts of constructions include other branes, besides the one that contains our universe. These other branes behave in a similar way, and can have very important effects on our universe. They, if anything, are the parallel universes of theoretical physics.

It’s important to point out, though, that these aren’t the sort of sci-fi parallel universes you might imagine! You aren’t going to find a world where everyone has a goatee, or even a world with an empty earth full of teleporting apes.

Pratchett reference!

That’s because, in order for these extra branes to do useful physical work, they generally have to be very different from our world. They’re worlds where gravity is very strong, or worlds with dramatically different densities of energy and matter. In the end, this means they’re not even the sort of universes that produce interesting aliens, or where we could send an astronaut, or really anything that lends itself well to (non-mathematical) imagination. From a sci-fi perspective, they’re as boring as can be.

Faizal’s idea, though, doesn’t even involve the boring kind of parallel universe!

His idea involves extra dimensions, specifically what physicists refer to as “large” extra dimensions, in contrast with the small extra dimensions of string theory. Large extra dimensions can explain the weakness of gravity, and theories that use them often predict that it’s much easier to create microscopic black holes than it otherwise would be. So far, these models haven’t had much luck at the LHC, and while I get the impression that they haven’t been completely ruled out, they aren’t very popular anymore.

The thing is, extra dimensions don’t mean parallel universes.

In fiction, the two get used interchangeably a lot. People go to “another dimension”, vaguely described as traveling along another dimension of space, and find themselves in a strange new world. In reality, though, there’s no reason to think that traveling along an extra dimension would put you in any sort of “strange new world”. The whole reason that our world is limited to three dimensions is that it’s “bound” to something: a brane, in the string theory picture. If there’s not another brane to bind things to, traveling in an extra dimension won’t put you in a new universe, it will just put you in an empty space where none of the types of matter you’re made of even exist.

It’s really tempting, when talking to laypeople, to fall back on stories. If you mention parallel universes, their faces light up with the idea that this is something they get, if only from imaginary examples. It gives you that same sense of accomplishment as if you had actually taught them something real. But you haven’t. It’s wrong, and Mir Faizal shouldn’t have stooped to doing it.

What Can Pi Do for You?

Tomorrow is Pi Day!

And what a Pi Day! 3/14/15 (if you’re in the US, Belize, Micronesia, some parts of Canada, the Philippines, or Swahili-speaking Kenya), best celebrated at 9:26:53, if you’re up by then. Grab a slice of pie, or cake if you really must, and enjoy!

If you don’t have some of your own, download this one!

Pi is great not just because it’s fun to recite digits and eat pastries, but because it serves a very important role in physics. That’s because, often, pi is one of the most “natural” ways to get larger numbers.

Suppose you’re starting with some sort of “natural” theory. Here I don’t mean natural in the technical sense. Instead, I want you to imagine a theory that has very few free parameters, a theory that is almost entirely fixed by mathematics.

Many physicists hope that the world is ultimately described by this sort of theory, but it’s hard to see in the world we live in. There are so many different numbers, from the tiny mass of the electron to the much larger mass of the top quark, that would all have to come from a simple, overarching theory. Often, it’s easier to get these numbers when they’re made out of factors of pi.

Why is pi easy to get?

In general, pi shows up a lot in physics and mathematics, and its appearance can be mysterious to the uninitiated, as this joke related by Eugene Wigner in an essay I mentioned a few weeks ago demonstrates:

THERE IS A story about two friends, who were classmates in high school, talking about their jobs. One of them became a statistician and was working on population trends. He showed a reprint to his former classmate. The reprint started, as usual, with the Gaussian distribution and the statistician explained to his former classmate the meaning of the symbols for the actual population, for the average population, and so on. His classmate was a bit incredulous and was not quite sure whether the statistician was pulling his leg. “How can you know that?” was his query. “And what is this symbol here?” “Oh,” said the statistician, “this is pi.” “What is that?” “The ratio of the circumference of the circle to its diameter.” “Well, now you are pushing your joke too far,” said the classmate, “surely the population has nothing to do with the circumference of the circle.”
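For reference, the pi the statistician pointed to sits in the normalization of the Gaussian distribution, which for mean \(\mu\) and standard deviation \(\sigma\) reads:

```latex
f(x) = \frac{1}{\sigma\sqrt{2\pi}}\; e^{-\frac{(x-\mu)^2}{2\sigma^2}}
```

The \(\sqrt{2\pi}\) is exactly what’s needed for the total probability to come out to one, so the circumference of the circle really is hiding in the population statistics.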

While it may sound silly, in a sense the population really is connected to the circumference of the circle. That’s because pi isn’t just about circles, pi is about volumes.
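The pi in the Gaussian distribution comes from its normalization: the integral of e^(-x²) over the whole real line is exactly the square root of pi. Here's a quick Python sketch (my own illustration, not from the original post) that checks this numerically with a simple midpoint sum:

```python
import math

# Integrate exp(-x^2) with a midpoint Riemann sum. The integrand decays
# so fast that the interval [-10, 10] captures essentially everything.
def gaussian_integral(a=-10.0, b=10.0, n=200_000):
    h = (b - a) / n
    return sum(math.exp(-(a + (i + 0.5) * h) ** 2) for i in range(n)) * h

print(gaussian_integral())   # numerically matches...
print(math.sqrt(math.pi))    # ...the square root of pi
```

No circle in sight, and yet pi emerges: the standard trick for proving this squares the integral and switches to polar coordinates, where a circle really does appear.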

Take a bit to check out that link. Not just the area of a circle, but the volume of a sphere, and that of all sorts of higher-dimensional ball-shaped things, involves pi. It’s not just spheres, either: pi appears in the volumes of many higher-dimensional shapes.
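To see the pattern concretely, here's a short Python sketch (mine, not the post's) using the standard formula for the volume of a unit n-dimensional ball, V_n = pi^(n/2) / Γ(n/2 + 1):

```python
import math

# Volume of the unit ball in n dimensions: pi^(n/2) / Gamma(n/2 + 1).
# Powers of pi pile up as the dimension grows.
def unit_ball_volume(n):
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

for n in range(1, 6):
    print(n, unit_ball_volume(n))
# n=1 gives 2 (a line segment), n=2 gives pi (a disc), n=3 gives 4*pi/3.
```

Every even dimension adds another whole factor of pi, which is one reason higher-dimensional geometry is such a natural factory for pi-laden numbers.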

Why does this matter for physics? Because you don’t need a literal shape to get a volume. Most of the time, there aren’t literal circles and spheres giving you factors of pi…but there are abstract spaces, and they contain circles and spheres. Electric and magnetic fields might not be shaped like circles, but the mathematics that describes them can still make good use of a circular space.

That’s why, when I describe the mathematical formulas I work with, formulas that often produce factors of pi, mathematicians will often ask if they’re the volume of some particular mathematical space. It’s why Nima Arkani-Hamed is trying to understand related formulas by thinking of them as the volume of some new sort of geometrical object.

All this is not to say you should go and plug factors of pi together until you get the physical constants you want. Throw in enough factors of pi and enough other numbers and you can match current observations, sure…but you could also match anything else in the same way. Instead, it’s better to think of pi as an assistant: waiting in the wings, ready to translate a pure mathematical theory into the complicated mess of the real world.

So have a Happy Pi Day, everyone, and be grateful to our favorite transcendental number. The universe would be a much more boring place without it.

Pics or It Didn’t Happen

I got a tumblr recently.

One thing I’ve noticed is that tumblr is a very visual medium. While some people can get away with massive text-dumps, they’re usually part of specialized communities. The content that’s most popular with a wide audience is, almost always, images. And that’s especially true for science-related content.

This isn’t limited to tumblr either. My most successful posts tend to have images. Most successful science posts in general involve images. Think of the most interesting science you’ve seen on the internet: chances are, it was something visual that made it memorable.

The problem is, I’m a theoretical physicist. I can’t show you pictures of nebulae in colorized glory, or images showing the behavior of individual atoms. I work with words, equations, and, when I’m lucky, diagrams.

Diagrams tend to work best, when they’re an option. I have no doubt that part of the Amplituhedron‘s popularity with the press owes to Andy Gilmore’s beautiful illustration, as printed in Quanta Magazine’s piece:

Gotta get me an artist.

The problem is, the nicer one of these illustrations is, the less it actually means. For most people, the above is just a pretty picture. Sometimes it’s possible to do something more accurate, like a 3d model of one of string theory’s six-dimensional Calabi-Yau manifolds:

What, you expected a six-dimensional intrusion into our world *not* to look like Yog-Sothoth?

A lot of the time, though, we don’t even have a diagram!

In those sorts of situations, it’s tempting to show an equation. After all, equations are the real deal, the stuff we theorists are actually manipulating.

Unless you’ve got an especially obvious equation, though, there are basically only two things the general public will get out of it. Either the equation is surprisingly simple,

Isn’t it cute?

Or it’s unreasonably complicated,

Why yes, this is one equation that covers seventeen pages. You’re lucky I didn’t post the eight-hundred page one.

This is great for first impressions, but it’s not very repeatable. Show people one giant equation, and they’ll be impressed. Show them two, and they won’t have any idea what the difference is supposed to be.

If you’re not showing diagrams or equations, what else can you show?

The final option is, essentially, to draw a cartoon. Forget about showing what’s “really going on”, physically or mathematically. That’s what the article is for. For an image, just pick something cute and memorable that references the topic.

When I did an article for Ars Technica back in 2013, I didn’t have any diagrams to show, or any interesting equations. Their artist, undeterred, came up with a cute picture of sushi with an N=4 on it.

That sort of thing really helps! It doesn’t tell you anything technical, it doesn’t explain what’s going on…but it does mean that every time I think of the article, that image pops into my head. And in a world where nothing lasts without a picture to document it, that’s a job well done.

Explanations of Phenomena Are All Alike; Every Unexplained Phenomenon Is Unexplained in Its Own Way

Vladimir Kazakov began his talk at ICTP-SAIFR this week with a variant of Tolstoy’s famous opening to the novel Anna Karenina: “Happy families are all alike; every unhappy family is unhappy in its own way.” Kazakov flipped the order: “Un-solvable models are each un-solvable in their own way; solvable models are all alike.”

In talking about solvable and un-solvable models, Kazakov was referring to a concept called integrability, the idea that in certain quantum field theories it’s possible to avoid the messy approximations of perturbation theory and instead jump straight to the answer. Kazakov was observing that these integrable systems seem to have a deep kinship: the same basic methods appear to work to understand all of them.

I’d like to generalize Kazakov’s point, and talk about a broader trend in physics.

Much has been made over the years of the “unreasonable effectiveness of mathematics in the natural sciences”, most notably in physicist Eugene Wigner’s famous essay, The Unreasonable Effectiveness of Mathematics in the Natural Sciences. There’s a feeling among some people that mathematics is much better at explaining physical phenomena than one would expect, that the world appears to be “made of math” and that it didn’t have to be.

On the surface, this is a reasonable claim. Certain mathematical ideas, group theory for example, seem to pop up again and again in physics, sometimes in wildly different contexts. The history of fundamental physics has tended to see steady progress over the years, from clunkier mathematical concepts to more and more elegant ones.

Some physicists tend to be dismissive of this. Lee Smolin in particular seems to be under the impression that mathematics is just particularly good at providing useful approximations. This perspective links to his definition of mathematics as “the study of systems of evoked relationships inspired by observations of nature,” a definition to which Peter Woit vehemently objects. Woit argues what I think any mathematician would when presented with a statement like Smolin’s: that mathematics is much more than just a useful tool for approximating observations, and that contrary to physicists’ vanity most of mathematics goes on without any explicit interest in observing the natural world.

While it’s generally rude for physicists to propose definitions for mathematics, I’m going to do so anyway. I think the following definition is one mathematicians would be more comfortable with, though it may be overly broad: Mathematics is the study of simple rules with complex consequences.

We live in a complex world. The breadth of the periodic table, the vast diversity of life, the tangled webs of galaxies across the sky, these are things that display both vast variety and a sense of order. They are, in a rather direct way, the complex consequences of rules that are at heart very, very simple.

Part of the wonder of modern mathematics is how interconnected it has become. Many sub-fields, once distinct, have discovered over the years that they are really studying different aspects of the same phenomena. That’s why when you see a proof of a three-hundred-year-old mathematical conjecture, it uses terms that seem to have nothing to do with the original problem. It’s why Woit, in an essay on this topic, quotes Edward Frenkel’s description of a particular recent program as a blueprint for a “Grand Unified Theory of Mathematics”. Increasingly, complex patterns are being shown to be not only consequences of simple rules, but consequences of the same simple rules.

Mathematics itself is “unreasonably effective”. That’s why, when faced with a complex world, we shouldn’t be surprised when the same simple rules pop up again and again to explain it. That’s what explaining something is: breaking down something complex into the simple rules that give rise to it. And as mathematics progresses, it becomes more and more clear that a few closely related types of simple rules lie behind any complex phenomena. While each unexplained fact about the universe may seem unexplained in its own way, as things are explained bit by bit they show just how alike they really are.

Where do you get all those mathematical toys?

I’m at a conference at Caltech this week, so it’s going to be a shorter post than usual.

The conference is on something called the Positive Grassmannian, a precursor to Nima Arkani-Hamed’s much-hyped Amplituhedron. Both are variants of a central idea: take complicated calculations in physics and express them in terms of clean, well-defined mathematical objects.

Because of this, this conference is attended not just by physicists, but by mathematicians as well, and it’s been interesting watching how the two groups interact.

From a physics perspective, mathematicians are great because they give us so many useful tools! Many significant advances in my field happened because a physicist talked to a mathematician and learned that a problem that had stymied the physics world had already been solved in the math community.

This tends to lead to certain expectations among physicists. If a mathematician gives a talk at a physics conference, we expect them to present something we can use. Our ideal math talk is like when Q presents the gadgets at the beginning of a Bond movie: a ton of new toys with just enough explanation for us to use them to save the day in the second act.

Pictured: Mathematicians, through Physicist eyes

You may see the beginning of a problem here, once you realize that physicists are the James Bond in this analogy.

Physicists like to see themselves as the protagonists of their own stories. That’s true of every field, though, to some degree or another. And it’s certainly true of mathematicians.

Mathematicians don’t go to physics conferences just to be someone’s supporting cast. They do it because physics problems are interesting to them: by hearing what physicists are working on they hope to get inspiration for new mathematical structures, concepts jury-rigged together by physicists that represent corners that mathematics hasn’t yet explored. Their goal is to take home an idea that they can turn into something productive, gaining glory among their fellow mathematicians. And if that sounds familiar…

Pictured: Physicists, through Mathematician eyes

While it’s amusing to watch the different expectations go head-to-head, the best collaborations between physicists and mathematicians are those where both sides respect that the other is the protagonist of their own story. Allow for give-and-take, paying attention not just to what you find interesting but to what the other person does, without assuming a tired old movie script, and it’s possible to make great progress.

Of course, that’s true of life in general as well.