Category Archives: String Theory

String Theorists Who Don’t Touch Strings

This week I’ve been busy, attending a workshop here at Perimeter on Superstring Perturbation Theory.

Superstrings are the supersymmetric strings that string theorists use to describe fundamental particles, while perturbation theory is the trick, common in almost every area of physics, of solving a problem by a series of increasingly precise approximations.

Based on that description, you’d think that superstring perturbation theory would be a central topic in string theory research. You wouldn’t expect it to be the sort of thing only a few people at the top of the field dabble in. You definitely wouldn’t expect one of the speakers at the workshop to mention that this might be the first conference on superstring perturbation theory he’s been to since the 1980’s.

String perturbation theory is an important subject, but it’s not one many string theorists use. The reason, oddly enough, is that very few string theorists actually use strings.

Looking at arXiv as I’m writing this, I can see only one paper in the theoretical physics section that directly uses strings. Most of them use something else: either older concepts like black holes, quantum field theory, and supergravity, or newer ones like D-branes. If you talked to the people who wrote those papers, though, most of them would describe themselves as string theorists.

The reason for the disconnect is that string theory as a field is much more than just the study of strings. String theory is a ten-dimensional universe (or eleven with M theory), where different ways of twisting up some of the dimensions result in different apparent physics in the remaining ones. It’s got strings, but also higher-dimensional membranes (and in the eleven dimensions of M theory it only has membranes, not strings). It’s the recipe for a long list of exotic quantum field theories, and a list of possible relations between them. It’s a new way to look at geometry, to think about the intersection of the nature of space and the dynamics of what inhabits it.

If string theory were really just about strings, it likely wouldn’t have grown any bigger than its quantum gravity rivals, like Loop Quantum Gravity. String theory grew because it inspired research directions that went far afield, and far beyond its conceptual core.

That’s part of why most string theorists will be baffled if you insist that string theory needs proof, or that it’s not the right approach to quantum gravity. For most string theorists, it doesn’t matter whether we live in a stringy world, whether gravity might eventually be described by another model. For most string theorists, string theory is a tool, one that opened up fields of inquiry that don’t have much to do with predicting the output of the LHC or describing the early universe. Or, in many cases, actually using strings.

Am I a String Theorist?

Perimeter, like most institutes of theoretical physics, divides its researchers into semi-informal groups. At Perimeter, these are:

  • Condensed Matter
  • Cosmology
  • Mathematical Physics
  • Particle Physics
  • Quantum Fields and Strings
  • Quantum Foundations
  • Quantum Gravity
  • Quantum Information
  • Strong Gravity

I’m in the Quantum Fields and Strings group, which many people seem to refer to simply as the String Theory group. So for the past week or so, I’ve been introducing myself as a String Theorist. As I briefly mention in my Who Am I? post, this isn’t completely accurate.

Am I a String Theorist?

The theories that I study do derive from string theory. They were first framed by string theorists, and research into them is still deeply intertwined with string theory research. I’ve definitely had occasion to compare my results to those of string theorists, or to bring in calculations by string theorists to advance my work.

And if you’re the kind of person who views the world as a competition between string theory and its rivals (like Loop Quantum Gravity) then I suppose I’m on the string theory “side”. I’m optimistic, at least, that the reason string theory research is so much more common than any other approach to quantum gravity is simply that string theory provides many more interesting and viable projects for researchers.

On the other hand, though, there’s the basic fact that the theories I work with are not, themselves, string theories. They’re quantum field theories, part of the broader framework that unites quantum mechanics and special relativity. The theories I work with are often reasonably close to the well-tested theories of the real world, close enough that the calculations are more “particle physics” than they are “string theory”.

Of course, all of that could change. One of the great things about string theory is the way it connects lots of different interesting quantum field theories together. There’s a “string”, the “GKP string”, involved in the work of Basso, Sever, and Vieira, work that I will probably get involved with here at Perimeter. The (2,0) theory is a quantum field theory, but it’s much closer to string theory than to particle physics, so if I get more involved with the (2,0) theory, would that make me a string theorist?

The fact is, these days string theory is so ubiquitous that the question “Am I a String Theorist?” doesn’t actually mean anything. String theory is there, lurking in the background, able to get involved at any time even if it’s not directly involved at present. Theoretical physicists don’t fall into neat categories.

I am a String Theorist. Also, I am not.

How (Not) to Sum the Natural Numbers: Zeta Function Regularization

1+2+3+4+5+6+\ldots=-\frac{1}{12}

If you follow Numberphile on YouTube or Bad Astronomy on Slate you’ve already seen this counter-intuitive sum written out. Similarly, if you follow those people or Scientopia’s Good Math, Bad Math, you’re aware that the way that sum was presented by Numberphile in that video was seriously flawed.

There is a real sense in which adding up all of the natural numbers (1, 2, 3…) really does give you minus one twelfth, despite all the reasons this should be impossible. However, there is also a real sense in which it does not, and cannot, do any such thing. To explain this, I’m going to introduce two concepts: complex analysis and regularization.

This discussion is not going to be mathematically rigorous, but it should give an authentic and accurate view of where these results come from. If you’re interested in the full mathematical details, a later discussion by Numberphile should help, and the mathematically confident should read Terence Tao’s treatment from back in 2010.

With that said, let’s talk about sums! Well, one sum in particular:

\frac{1}{1^s}+\frac{1}{2^s}+\frac{1}{3^s}+\frac{1}{4^s}+\frac{1}{5^s}+\frac{1}{6^s}+\ldots = \zeta(s)

If s is greater than one, then each term in this infinite sum gets smaller and smaller fast enough that you can add them all up and get a number. That number is referred to as \zeta(s), the Riemann Zeta Function.
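To make the convergence concrete, here’s a quick numerical sketch (my own illustration, in Python, not part of the original discussion) of the partial sums for s = 2, where the answer is Euler’s famous ζ(2) = π²/6 ≈ 1.6449:

```python
import math

def zeta_partial_sum(s, terms):
    """Add up the first `terms` terms of 1/1^s + 1/2^s + 1/3^s + ..."""
    return sum(1 / n**s for n in range(1, terms + 1))

# For s = 2, the partial sums settle down to zeta(2) = pi^2/6 = 1.6449...
for terms in (10, 1000, 100000):
    print(terms, zeta_partial_sum(2, terms))

# For s = 1/2, less than one, the partial sums never settle down:
print(zeta_partial_sum(0.5, 100000) - zeta_partial_sum(0.5, 1000))  # still growing
```

For s = 2 the error shrinks like one over the number of terms, while for s = 1/2 adding more terms just keeps pushing the total upward.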

So what if s is smaller than one?

The infinite sum that I described doesn’t converge for s less than one. Add it up in any reasonable way, and it just approaches infinity. Put another way, the sum is not properly defined. But despite this, \zeta(s) is not infinite for s less than one!

Now, you might object: we only defined the Riemann Zeta Function for s greater than one. How do we know anything at all about it for s less than one?

That is where complex analysis comes in. Complex analysis sounds like a made-up term for something unreasonably complicated, but it’s quite a bit more approachable when you know what it means. Analysis is the type of mathematics that deals with functions, infinite series, and the basis of calculus. It’s often contrasted with Algebra, which usually considers mathematical concepts that are discrete rather than smooth (this definition is a huge simplification, but it’s not very relevant to this post). Complex means that complex analysis deals with functions, not of everyday real numbers, but of complex numbers, or numbers with an imaginary part.

So what does complex analysis say about the Riemann Zeta Function?

One of the most impressive results of complex analysis is the discovery that if a function of a complex number is sufficiently smooth (the technical term is analytic) then it is very highly constrained. In particular, if you know how the function behaves over an area (technical term: open set), then you know how it behaves everywhere else!

If you’re expecting me to explain why this is true, you’ll be disappointed. This is serious mathematics, and serious mathematics isn’t the sort of thing you can give the derivation for in a few lines. It takes as much effort and knowledge to replicate a mathematical result as it does to replicate many lab results in science.

What I can tell you is that this sort of approach crops up in many places, and is part of a general theme. There is a lot you can tell about a mathematical function just by looking at its behavior in some limited area, because mathematics is often much more constrained than it appears. It’s the same sort of principle behind the work I’ve been doing recently.

In the case of the Riemann Zeta Function, we have a definition for s greater than one. As it turns out, this definition still works if s is a complex number, as long as the real part of s is greater than one. Using this information, namely the value of the Riemann Zeta Function on a large area (half of the complex plane), complex analysis tells us its value for every other number. In particular, it tells us this:

\zeta(-1)= -\frac{1}{12}

If the Riemann Zeta Function is consistently defined for every complex number, then it must have this value when s is minus one.
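You can even check this numerically. The sketch below (my own, in Python) uses a globally convergent series for the zeta function due to Hasse, which agrees with the sum definition when the real part of s is greater than one, but keeps making sense everywhere else:

```python
from math import comb

def zeta_hasse(s, n_max=60):
    """Hasse's globally convergent series for the Riemann zeta function,
    valid for any s != 1 (real s here, for simplicity)."""
    total = 0.0
    for n in range(n_max):
        inner = sum((-1)**k * comb(n, k) * (k + 1)**(-s) for k in range(n + 1))
        total += inner / 2**(n + 1)
    return total / (1 - 2**(1 - s))

print(zeta_hasse(-1))  # close to -1/12 = -0.08333...
print(zeta_hasse(2))   # close to pi^2/6, matching the convergent sum definition
```

At s = -1 the series even terminates: every term past the second vanishes, leaving exactly (1/2 - 1/4)/(1 - 4) = -1/12.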

If we still trusted the sum definition for this value of s, we could plug in -1 and get

 1+2+3+4+5+6+\ldots=-\frac{1}{12}

Does that make this statement true? Sort of. It all boils down to a concept from physics called regularization.

In physics, we know that in general there is no such thing as infinity. With a few exceptions, nothing in nature should be infinite, and finite evidence (without mathematical trickery) should never lead us to an infinite conclusion.

Despite this, occasionally calculations in physics will give infinite results. Almost always, this is evidence that we are doing something wrong: we are not thinking hard enough about what’s really going on, or there is something we don’t know or aren’t taking into account.

Doing physics research isn’t like taking a physics class: sometimes, nobody knows how to do the problem correctly! In many cases where we find infinities, we don’t know enough about “what’s really going on” to correct them. That’s where regularization comes in handy.

Regularization is the process by which an infinite result is replaced with a finite result (made “regular”), in a way so that it keeps the same properties. These finite results can then be used to do calculations and make predictions, and so long as the final predictions are regularization independent (that is, the same if you had done a different regularization trick instead) then they are legitimate.

In string theory, one way to compute the required dimensions of space and time ends up giving you an infinite sum, a sum that goes 1+2+3+4+5+…. In context, this result is obviously wrong, so we regularize it. In particular, we say that what we’re really calculating is the Riemann Zeta Function, which we happen to be evaluating at -1. Then we replace 1+2+3+4+5+… with -1/12.

Now remember when I said that getting infinities is a sign that you’re doing something wrong? These days, we have a more rigorous way to do this same calculation in string theory, one that never forces us to take an infinite sum. As expected, it gives the same result as the old method, showing that the old calculation was indeed regularization independent.
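You can see a small-scale version of regularization independence with the sum itself. The sketch below (my own, in Python, in the spirit of the Terence Tao treatment linked earlier) uses a different regularization, a smooth cutoff: damp the n-th term by exp(-(n/N)²) so the sum is finite, then subtract the piece that diverges as the cutoff N is removed. The finite leftover matches the zeta-function answer of -1/12:

```python
import math

def damped_sum(N):
    """1 + 2 + 3 + ..., with each term n damped by exp(-(n/N)^2)."""
    return sum(n * math.exp(-(n / N)**2) for n in range(1, 12 * N))

# The damped sum is finite for any N, but blows up like N^2/2 as the
# cutoff is removed. Subtracting that divergent piece leaves a finite part:
for N in (10, 100, 1000):
    print(N, damped_sum(N) - N**2 / 2)  # approaches -1/12 = -0.08333...
```

Two very different tricks, one finite answer: that agreement is what “regularization independent” means in practice.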

Sometimes we don’t have a better way of doing the calculation, and that’s when regularization techniques come in most handy. A particular family of tricks called renormalization is quite important, and I’ll almost certainly discuss it in a future post.

So can you really add up all the natural numbers and get -1/12? No. But if a calculation tells you to add up all the natural numbers, and it’s obvious that the result can’t be infinite, then it may secretly be asking you to calculate the Riemann Zeta Function at -1. And that, as we know from complex analysis, is indeed -1/12.

What does Copernicus have to say about String Theory?

Putting aside some highly controversial exceptions, string theory has made no testable predictions. Conceivably, a world governed by string theory and a world governed by conventional particle physics would be indistinguishable by every test we could perform today. Furthermore, it’s not even possible to say that string theory predicts the same things with fewer fudge-factors, as string theory descriptions of our world seem to have dramatically more free parameters than conventional ones.

Critics of string theory point to this as a reason why string theory should be excluded from science, sent off to the chilly arctic wasteland of the math department. (No offense to mathematicians, I’m sure your department is actually quite warm and toasty.) What these critics are missing is an important feature of the scientific process: before scientists are able to make predictions, they propose explanations.

To explain what I mean by that, let’s go back to the beginning of the 16th century.

At the time, the authority on astronomy was still Ptolemy’s Syntaxis Mathematica, a book so renowned that it is better known by the Arabic-derived superlative Almagest, “the greatest”. Ptolemy modeled the motions of the planets and stars as a series of interlocking crystal spheres with the Earth at the center, and did it so well that, by the 16th century, only minor improvements on the model had been made.

This is much trickier than it sounds, because even in Ptolemy’s day astronomers could tell that the planets did not move in simple circles around the Earth. There were major distortions from circular motion, the most dramatic being the phenomenon of retrograde motion.

If the planets really were moving in simple circles around the Earth, you would expect them to keep moving in the same direction. However, ancient astronomers saw that sometimes, some of the planets moved backwards. The planet would slow down, turn around, go backwards a bit, then come to a stop and turn again.

Thus sparking the invention of the spirograph.

In order to take this into account, Ptolemy introduced epicycles, extra circles of motion for the planets. The center of the epicycle would move along the planet’s primary circle, or deferent, and the planet would travel around the epicycle, like so:

French Wikipedia had a better picture.

These epicycles weren’t just for retrograde motion, though. They allowed Ptolemy to model all sorts of irregularities in the planets’ motions. Any deviation from a circle could conceivably be plotted out by adding another epicycle (though Ptolemy had other methods to model this sort of thing, among them something called an equant). Enter Copernicus.

Enter Copernicus’s hair.

Copernicus didn’t like Ptolemy’s model. He didn’t like equants, and what’s more, he didn’t like the idea that the Earth was the center of the universe. Like Plato, he preferred the idea that the center of the universe was a divine fire, a source of heat and light like the Sun. He decided to put together a model of the planets with the Sun in the center. And what he found, when he did, was an explanation for retrograde motion.

In Copernicus’s model, the planets always go in one direction around the Sun, never turning back. However, some of the planets are faster than the Earth, and some are slower. If a planet is slower than the Earth, then when the Earth passes it, the planet will look like it is going backwards, due to the Earth’s speed. This is tricky to visualize, but the idea is that Mars starts out ahead of Earth in its orbit, then falls behind as Earth overtakes it, making it appear to move backwards.
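For those who would rather compute than squint at orbits, here’s a rough sketch in Python (my own toy model, with circular coplanar orbits instead of the real elliptical ones) tracking the direction from Earth to Mars on the sky. Near opposition, when the faster Earth overtakes Mars, that direction briefly swings backwards:

```python
import math

# Toy model: circular, coplanar orbits. Radii in AU, periods in years.
EARTH_R, EARTH_T = 1.0, 1.0
MARS_R, MARS_T = 1.52, 1.88

def position(radius, period, t):
    """Position of a planet on a circular orbit at time t (in years)."""
    angle = 2 * math.pi * t / period
    return radius * math.cos(angle), radius * math.sin(angle)

def apparent_longitude(t):
    """Direction from Earth to Mars, as an angle on the sky."""
    ex, ey = position(EARTH_R, EARTH_T, t)
    mx, my = position(MARS_R, MARS_T, t)
    return math.atan2(my - ey, mx - ex)

# Both planets start lined up at t = 0 (opposition). Around then, the
# faster-moving Earth overtakes Mars, so Mars appears to slide backwards:
before, after = apparent_longitude(-0.02), apparent_longitude(0.02)
print(before > after)  # True: the apparent angle is decreasing (retrograde)

# Far from opposition, the apparent motion is forwards (prograde) again:
print(apparent_longitude(0.9) < apparent_longitude(0.94))  # True: increasing
```

No planet in this model ever reverses direction around the Sun; the backwards motion is purely an artifact of the moving viewpoint, which is exactly Copernicus’s explanation.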

Despite this simplification, Copernicus still needed epicycles. The planets’ motions simply aren’t perfect circles, even around the Sun. After getting rid of the equants from Ptolemy’s theory, Copernicus’s model ended up having just as many epicycles as Ptolemy’s!

Copernicus’s model wasn’t any better at making predictions (in fact, due to some technical lapses in its presentation, it was even a little bit worse). It didn’t have fewer “fudge factors”, as it had about the same number of epicycles. If you lived in the 16th century, you would have been completely justified in believing that the Earth was the center of the universe, and not the Sun. Copernicus had failed to establish his model as scientific truth.

However, Copernicus had still done something Ptolemy didn’t: he had explained retrograde motion. Retrograde motion was a unique, qualitative phenomenon, and while Ptolemy could include it in his math, only Copernicus gave you a reason why it happened.

That’s not enough to become the reigning scientific truth, but it’s a damn good reason to pay attention. It was justification for astronomers to dedicate years of their lives to improving the model, to working with it and trying to get unique predictions out of it. It was enough that, over half a century later, Kepler could take it and turn it into a theory that did make better predictions than Ptolemy’s, that did have fewer fudge-factors.

String theory as a model of the universe doesn’t make novel predictions, and it doesn’t have fewer fudge factors. What it does is explain: the spectra of particles in terms of the shapes of space and time, the existence of gravity and light in terms of closed and open strings, the temperature of black holes in terms of what’s going on inside them (this last really ought to be the subject of its own post; it’s one of the big triumphs of string theory). You don’t need to accept it as scientific truth. Like Copernicus’s model in his day, we don’t have the evidence for that yet. But you should understand that, as a powerful explanation, the idea of string theory as a model of the universe is worth spending time on.

Of course, string theory is useful for many things that aren’t modeling the universe. But that’s the subject of another post.

What are Vacua? (A Point about the String Landscape)

A couple weeks back, there was a bit of a scuffle between Matt Strassler and Peter Woit on the subject of predictions in string theory (or more properly, the question of whether any predictions can be made at all). As a result, Strassler has begun a series on the subject of quantum field theory, string theory, and predictions.

Strassler hasn’t gotten to the topic of string vacua yet, but he’s probably going to cover the subject in a future post. While his take on the subject is likely to be more expansive and precise than mine, I think my perspective on the problem might still be of interest.

Let’s start with the basics: one of the problems often cited with string theory is the landscape problem, the idea that string theory has a metaphorical landscape of around 10^500 vacua.

What are vacua?

Vacua is the plural of vacuum.

Ok, and?

A vacuum is empty space.

That’s what you thought, right? That’s the normal meaning of vacuum. But if a vacuum is empty, how can there be more than one of them, let alone 10^500?

“Empty” is subjective.

Now we’re getting somewhere. The problem with defining a concept like “empty space” in string theory or field theory is that it’s unclear what precisely it should be empty of. Naively, such a space should be empty of “stuff”, or “matter”, but our naive notions of “matter” don’t apply to field theory or string theory. In fact, there is plenty of “stuff” that can be present in “empty” space.

Think about two pieces of construction paper. One is white, the other is yellow. Which is empty? Neither has anything drawn on it, so even though they are different colors, both are empty.

“Empty space” doesn’t come in multiple colors like construction paper, but there are equivalent parameters that can vary. In quantum field theory, one option is for scalar fields to take different values. In string theory, different dimensions can be curled up in different ways (as an aside, when string theory leads to a quantum field theory often these different curling-up shapes correspond to different values for scalar fields, so the two ideas are related).

So if space can have “stuff” in it and still count as empty, are there any limits on what can be in it?

As it turns out, there is a quite straightforward limit. But to explain it, I need to talk a bit about why physicists care about vacua in the first place.

Why do physicists care about vacua?

In physics, there is a standard modus operandi for solving problems. If you’ve taken even a high school physics course, you’ve probably encountered it in some form. It’s not the only way to solve problems, but it’s one of the easiest. The idea, broadly, is the following:

First get the initial conditions, and then use the laws of physics to see what happens next.

In high school physics, this is how almost every problem works: your teacher tells you what the situation is, and you use what you know to figure out what happens next.

In quantum field theory, things are a bit more subtle, but there is a strong resemblance. You start with a default state, and then find the perturbations, or small changes, around that state.

In high school, your teacher told you what the initial conditions were. In quantum field theory, you need another source for the “default state”. Sometimes, you get that from observations of the real world. Sometimes, though, you want to make a prediction that goes beyond what your observations tell you. In that case, one trick often proves useful:

To find the default state, find which state is stable.

If your system starts out in a state that is unstable, it will change. It will keep changing until eventually it changes into a stable state, where it will stop changing. So if you’re looking for a default state, that state should be one in which the system is stable, where it won’t change.

(I’m oversimplifying things a bit here to make them easier to understand. In particular, I’m making it sound like these things change over time, which is a bit of a tricky subject when talking about different “default” states for the whole of space and time. There’s also a cool story connected to this about why tachyons don’t exist, which I’d love to go into for another post.)

Since we know that the “default” state has to be stable, if there is only one stable state, we’ve found the default!

Because of this, we can lay down a somewhat better definition:

A vacuum is a stable state.

There’s more to the definition than this, but this should be enough to give you the feel for what’s going on. If we want to know the “default” state of the world, the state which everything else is just a small perturbation on top of, we need to find a vacuum. If there is only one plausible vacuum, then our work is done.

When there are many plausible vacua, though, we have a problem. When there are 10^500 vacua, we have a huge problem.

That, in essence, is why many people despair of string theory ever making any testable predictions. String theory has around 10^500 plausible vacua (for a given, technical, meaning of plausible).
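A toy example may help make “stable state” concrete. The sketch below (my own illustration: a single scalar field with a made-up double-well potential, nothing like the real string landscape) finds stable states by rolling downhill from different starting points. Even this tiny model has two vacua; the string landscape is the same problem with 10^500 of them:

```python
def potential(phi):
    """A made-up double-well potential with two minima, at phi = -1 and +1."""
    return (phi**2 - 1)**2

def slope(phi):
    """Derivative of the potential: which way is downhill."""
    return 4 * phi * (phi**2 - 1)

def settle(phi, step=0.01, iterations=10000):
    """Roll downhill until the field stops changing: find a stable state."""
    for _ in range(iterations):
        phi -= step * slope(phi)
    return round(phi, 6)

# Different starting points settle into different vacua:
print(settle(0.3))   # 1.0  -> the vacuum at +1
print(settle(-2.0))  # -1.0 -> the vacuum at -1
print(settle(0.0))   # 0.0  -> balanced on the unstable hilltop, not a vacuum
```

Nothing in the potential itself picks out one minimum over the other: which vacuum you end up in depends on where you start, and that, in miniature, is the problem of choosing among vacua.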

It’s important to remember a few things here.

First, the reason we care about vacuum states is because we want a “default” to make predictions around. That is, in a sense, a technical problem, in that it is an artifact of our method. It’s a result of the fact that we are choosing a default state and perturbing around it, rather than proving things that don’t depend on our choice of default state. That said, this isn’t as useful an insight as it might appear, and as it turns out there is generally very little that can be predicted without choosing a vacuum.

Second, the reason that the large number of vacua is a problem is that if there were only one vacuum, we would know which state was the default state for our world. Instead, we need some other method to pick, out of the many possible vacua, which one to use to make predictions. That is, in a sense, a philosophical problem, in that it asks what is ostensibly a philosophical question: what is the basic, default state of the universe?

This happens to be a slightly more useful insight than the first one, and it leads to a number of different approaches. The most intuitive solution is to just shrug and say that we will see which vacuum we’re in by observing the world around us. That’s a little glib, since many different vacua could lead to very similar observations. A better tactic might be to make predictions on general grounds, by seeing what the world we can already observe implies about which vacua are possible, but this is also quite controversial. And there are some people who try another approach, attempting to pick a vacuum not based on observations, but rather on statistics, choosing a vacuum that appears to be “typical” in some sense, or that satisfies anthropic constraints.

All of these, again, are controversial, and I make no commentary here about which approaches are viable and which aren’t. It’s a complicated situation, and there are a fair number of people working on it. Perhaps, in the end, string theory will be ruled un-testable. Perhaps the relevant solution is right under people’s noses. We just don’t know.

Brown, Blue, and Birds

I gave a talk at Brown this week, so this post may be shorter than usual. On the topic of Brown I don’t have much original to say: the people were friendly, the buildings were brownish-colored, and bringing a car there was definitely a bad idea. Don’t park at Brown. Not even then.

There’s a quote from Werner Heisenberg that has been making the rounds of the internet. It comes out of a 1976 article by Felix Bloch where he describes taking a walk with Heisenberg, when the discussion turned to the subject of space and time:

I had just read Weyl’s book Space, Time and Matter, and under its influence was proud to declare that space was simply the field of linear operations.

“Nonsense,” said Heisenberg, “space is blue and birds fly through it.”

Heisenberg’s point is that sometimes in physics you need to ask what your abstractions are really describing. You need to make sure that you haven’t stretched your definitions too badly away from their original inspiration.

When people first hear that string theory requires extra dimensions (ten, or eleven for M theory), many wonder if this point applies. In mathematics, it’s well known that a problem can be described in many dimensions more than the physical dimensions of space. There’s a lovely example in the book Flatterland (a sequel to Flatland, a book which any math-y person should read at least once) of the dimensions of a bike. The bike’s motion through space gives three dimensions: up/down, backward/forward, and left/right. However, the bike can move in other ways: its gears can each be in a different position, as can its handlebars, as can the wheels…in the end, a bike can be envisioned as having many more “dimensions” than our normal three-dimensional space, each corresponding to some internal position.

Is string theory like this? No.

The first hint of the answer comes from something called F theory. String theory is part of something larger called M theory, and since M theory has eleven dimensions this is usually the number of dimensions given. But F theory contains string theory in a certain sense as well, and F theory has twelve dimensions.

So why don’t string theorists say that the world has twelve dimensions?

As it turns out, the extra dimension added by F theory isn’t “really” a dimension. It’s much more like the mathematical dimensions of a bike’s gears and wheels than it is like the other eleven dimensions of M theory.

What’s the difference? What, according to a string theorist, is the definition of a dimension of space?

It’s simple: Space is “blue” (or colorless, I suppose). Birds (and particles, and strings, and membranes) fly in it.

We’re using the same age-old distinction that Heisenberg was, in a way. What is space? Space is just a place where things can move, in the same way they move in our usual three dimensions. Space is where you have momentum, where that momentum can change your position. Space is where forces act, the set of directions in which something can be pulled or pushed in a symmetric way. Space can’t be reduced, at least not without a lot of tricks: a bird flying isn’t just another description of a lizard crawling, not in the way a bicycle’s gears moving can be thought of as turning through our normal three dimensions without any extra ones. And while F theory doesn’t fit this criterion, M theory really does. The membranes of M theory fly around in eleven dimensional space-time, just like a bird moves through three space and one time dimensions.

Space for a string theorist isn’t any crazier or more abstract than it is for you. It’s just a place where things can move.

Duality: Find out what it means to me

There’s a cute site out there called Why String Theory. Started by Oxford and the Royal Society, Why String Theory contains lots of concise and well-illustrated explanations of string theory, and it even wades into some of the more complex topics like AdS/CFT and string dualities in general. Their explanation of dualities is a nice introduction to why dualities matter in string theory, but I don’t think it does a very good job of explaining what a duality actually is or how one works. As your fearless host, I’m confident that I can do better.

Why String Theory defines dualities as when “different mathematical theories describe the same physics.” How does that work, though? In what sense are the theories different, if they describe the same thing? And if they describe the same thing, why do we need both of them?

The classic “face or vase” illusion.

You’ve probably seen the above image before, or one much like it. Look at it one way, and you see a goblet. Another, and you see two faces.

Now imagine that instead of a flat image, these are 3D objects, models you have in your house. You’ve got a goblet, and a pair of clay faces. You’re still pretty sure they fit together like they do in the image, though. Maybe they said they fit together on the packaging, maybe you stuck them together and it didn’t look like there were any gaps. Whatever the reason, you’re confident enough about this that you’re willing to assume it’s true.

Now suppose you want to figure out how long the noses on the faces are. In case you’ve never measured a human nose, I can let you know that it’s tricky. You could put a ruler along the nose, but it would be diagonal rather than straight, so you wouldn’t get an accurate measurement. Even putting the ruler beneath the nose doesn’t work for rounded noses like these.

That said, measuring the goblet is easy. You can run measuring tape around the neck of the goblet to find the circumference, and then calculate the diameter. And if you measure the goblet in this way, you also know how long the faces’ noses are.

You could go further, and build up a list of things you can measure on one object that tell you about the other one. The necks match up to the base of the goblet, the foreheads to the mouth, etc. It would be like a dictionary, translating between two languages: the language of measurements of the faces, and the language of measurements of the goblet.

That sort of “dictionary” is the essence of duality. When two theories have a duality (are dual to each other), you can make a “dictionary” to translate measurements in one theory to measurements in the other. That doesn’t mean, however, that the theories are obviously connected: like 3D models of the faces and the goblet, without looking at the particular “silhouette” the duality defines, the two views can be radically different. Rather than physical objects, dualities relate mathematical “objects”, so rather than physical obstructions like the solidity of noses we have to deal with mathematical ones: situations where one quantity or another is easier or harder to calculate depending on how the math is set up. For example, many dualities relate things that require calculations at very high loops to things that can be calculated with fewer loops (for an explanation of loops, check out this post).
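For a concrete (and heavily simplified) version of such a dictionary, here’s a sketch in Python of T-duality, a duality relating a string on a circle of radius R to a string on a circle of radius 1/R (in units where the string scale α′ is set to one, and ignoring the oscillator contributions to the mass). The dictionary swaps momentum modes with winding modes, and the measurable spectrum comes out the same:

```python
def mass_squared(n, w, R):
    """Simplified mass formula for a string on a circle of radius R:
    momentum mode n, winding mode w, alpha' = 1, oscillators ignored."""
    return (n / R)**2 + (w * R)**2

R = 2.7  # any radius will do
modes = range(-3, 4)

# The dictionary: radius R with modes (n, w) maps to radius 1/R with (w, n),
# so the full lists of possible masses on the two sides are identical.
spectrum_R = sorted(mass_squared(n, w, R) for n in modes for w in modes)
spectrum_dual = sorted(mass_squared(n, w, 1 / R) for n in modes for w in modes)

print(all(abs(a - b) < 1e-9 for a, b in zip(spectrum_R, spectrum_dual)))  # True
```

The real statement involves the full string spectrum, oscillators included, but the momentum/winding swap above is the core of the dictionary: every measurement on one circle has a translation on the other.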

As Why String Theory points out, one of the most prominent dualities is called AdS/CFT, and it relates N=4 super Yang-Mills (a Conformal Field Theory, or CFT) to string theory in something called Anti-de Sitter (AdS) space (tricky to describe, but essentially a world in which space is warped like a hyperbola). Another duality relates N=4 super Yang-Mills Feynman diagrams with n particles coming in from outside to diagrams with an n-sided shape and particles randomly coming in from the edges of the shape (these latter diagrams are called Wilson loops). In general N=4 super Yang-Mills is involved in many, many dualities, which is a big part of why it’s so dang cool.